WorldWideScience

Sample records for regulatory code modeling

  1. Development of thermal hydraulic models for the reliable regulatory auditing code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, B. D.; Song, C. H.; Lee, Y. J.; Kwon, T. S.; Lee, S. W. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2004-02-15

    The objective of this project is to develop thermal hydraulic models for use in improving the reliability of the regulatory auditing codes. The current year falls under the second step of the 3-year project, and the main research focused on the development of a downcomer boiling model. During the current year, the bubble stream model for the downcomer has been developed and installed in the auditing code. A model sensitivity analysis has been performed for the APR1400 LBLOCA scenario using the modified code. A preliminary calculation has been performed for the experimental test facility using the FLUENT and MARS codes. The facility for the air bubble experiment has been installed. The thermal hydraulic phenomena for the VHTR and supercritical reactor have been identified for future application and model development.

  2. Development of thermal hydraulic models for the reliable regulatory auditing code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, B. D.; Song, C. H.; Lee, Y. J.; Kwon, T. S. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2003-04-15

    The objective of this project is to develop thermal hydraulic models for use in improving the reliability of the regulatory auditing codes. The current year falls under the first step of the 3-year project, and the main research focused on identifying candidate thermal hydraulic models for improvement and on developing prototypical models. During the current year, the verification calculations submitted for the APR 1400 design certification have been reviewed, the experimental data from the MIDAS DVI experiment facility at KAERI have been analyzed and evaluated, candidate thermal hydraulic models for improvement have been identified, prototypical models for the improved thermal hydraulic models have been developed, items for experiment in connection with the model development have been identified, and a preliminary design of the experiment has been carried out.

  3. Development of thermal hydraulic models for the reliable regulatory auditing code

    International Nuclear Information System (INIS)

    Chung, B. D.; Song, C. H.; Lee, Y. J.; Kwon, T. S.

    2003-04-01

    The objective of this project is to develop thermal hydraulic models for use in improving the reliability of the regulatory auditing codes. The current year falls under the first step of the 3-year project, and the main research focused on identifying candidate thermal hydraulic models for improvement and on developing prototypical models. During the current year, the verification calculations submitted for the APR 1400 design certification have been reviewed, the experimental data from the MIDAS DVI experiment facility at KAERI have been analyzed and evaluated, candidate thermal hydraulic models for improvement have been identified, prototypical models for the improved thermal hydraulic models have been developed, items for experiment in connection with the model development have been identified, and a preliminary design of the experiment has been carried out.

  4. Development of Off-take Model, Subcooled Boiling Model, and Radiation Heat Transfer Input Model into the MARS Code for a Regulatory Auditing of CANDU Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, C.; Rhee, B. W.; Chung, B. D. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Ahn, S. H.; Kim, M. W. [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2009-05-15

    Korea currently has four operating units of the CANDU-6 type reactor at Wolsong. However, the safety assessment system for CANDU reactors has not been fully established due to a lack of self-reliant technology. Although the CATHENA code had been introduced from AECL, it is undesirable to use a vendor's code for regulatory auditing analysis. In Korea, the MARS code has been developed for decades and is being considered by KINS as a thermal hydraulic regulatory auditing tool for nuclear power plants. Before this decision, KINS (Korea Institute of Nuclear Safety) had developed the RELAP5/MOD3/CANDU code for CANDU safety analyses by modifying the models of the existing PWR auditing tool, RELAP5/MOD3. The main purpose of this study is to transplant the CANDU models of the RELAP5/MOD3/CANDU code to the MARS code, including quality assurance of the developed models.

  5. Development of Off-take Model, Subcooled Boiling Model, and Radiation Heat Transfer Input Model into the MARS Code for a Regulatory Auditing of CANDU Reactors

    International Nuclear Information System (INIS)

    Yoon, C.; Rhee, B. W.; Chung, B. D.; Ahn, S. H.; Kim, M. W.

    2009-01-01

    Korea currently has four operating units of the CANDU-6 type reactor at Wolsong. However, the safety assessment system for CANDU reactors has not been fully established due to a lack of self-reliant technology. Although the CATHENA code had been introduced from AECL, it is undesirable to use a vendor's code for regulatory auditing analysis. In Korea, the MARS code has been developed for decades and is being considered by KINS as a thermal hydraulic regulatory auditing tool for nuclear power plants. Before this decision, KINS (Korea Institute of Nuclear Safety) had developed the RELAP5/MOD3/CANDU code for CANDU safety analyses by modifying the models of the existing PWR auditing tool, RELAP5/MOD3. The main purpose of this study is to transplant the CANDU models of the RELAP5/MOD3/CANDU code to the MARS code, including quality assurance of the developed models.

  6. Implementation of Wolsong Pump Model, Pressure Tube Deformation Model and Off-take Model into MARS Code for Regulatory Auditing of CANDU Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, C.; Rhee, B. W.; Chung, B. D. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Cho, Y. J.; Kim, M. W. [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2008-05-15

    Korea currently has four operating units of the CANDU-6 type reactor at Wolsong. However, the safety assessment system for CANDU reactors has not been fully established due to a lack of self-reliant technology. Although the CATHENA code had been introduced from AECL, it is undesirable to use a vendor's code for regulatory auditing analysis. In Korea, the MARS code has been developed for decades and is being considered by KINS as a thermal hydraulic regulatory auditing tool for nuclear power plants. Before this decision, KINS (Korea Institute of Nuclear Safety) had developed the RELAP5/MOD3/CANDU code for CANDU safety analyses by modifying the models of the existing PWR auditing tool, RELAP5/MOD3. The main purpose of this study is to transplant the CANDU models of the RELAP5/MOD3/CANDU code to the MARS code, including quality assurance of the developed models. This first part of the research series presents the implementation and verification of the Wolsong pump model, the pressure tube deformation model, and the off-take model for arbitrary-angled branch pipes.

  7. Implementation of Wolsong Pump Model, Pressure Tube Deformation Model and Off-take Model into MARS Code for Regulatory Auditing of CANDU Reactors

    International Nuclear Information System (INIS)

    Yoon, C.; Rhee, B. W.; Chung, B. D.; Cho, Y. J.; Kim, M. W.

    2008-01-01

    Korea currently has four operating units of the CANDU-6 type reactor at Wolsong. However, the safety assessment system for CANDU reactors has not been fully established due to a lack of self-reliant technology. Although the CATHENA code had been introduced from AECL, it is undesirable to use a vendor's code for regulatory auditing analysis. In Korea, the MARS code has been developed for decades and is being considered by KINS as a thermal hydraulic regulatory auditing tool for nuclear power plants. Before this decision, KINS (Korea Institute of Nuclear Safety) had developed the RELAP5/MOD3/CANDU code for CANDU safety analyses by modifying the models of the existing PWR auditing tool, RELAP5/MOD3. The main purpose of this study is to transplant the CANDU models of the RELAP5/MOD3/CANDU code to the MARS code, including quality assurance of the developed models. This first part of the research series presents the implementation and verification of the Wolsong pump model, the pressure tube deformation model, and the off-take model for arbitrary-angled branch pipes.

  8. Deciphering the genetic regulatory code using an inverse error control coding framework.

    Energy Technology Data Exchange (ETDEWEB)

    Rintoul, Mark Daniel; May, Elebeoba Eni; Brown, William Michael; Johnston, Anna Marie; Watson, Jean-Paul

    2005-03-01

    We have found that developing a computational framework for reconstructing error control codes for engineered data and ultimately for deciphering genetic regulatory coding sequences is a challenging and uncharted area that will require advances in computational technology for exact solutions. Although exact solutions are desired, computational approaches that yield plausible solutions would be considered sufficient as a proof of concept to the feasibility of reverse engineering error control codes and the possibility of developing a quantitative model for understanding and engineering genetic regulation. Such evidence would help move the idea of reconstructing error control codes for engineered and biological systems from the high risk high payoff realm into the highly probable high payoff domain. Additionally this work will impact biological sensor development and the ability to model and ultimately develop defense mechanisms against bioagents that can be engineered to cause catastrophic damage. Understanding how biological organisms are able to communicate their genetic message efficiently in the presence of noise can improve our current communication protocols, a continuing research interest. Towards this end, project goals include: (1) Develop parameter estimation methods for n for block codes and for n, k, and m for convolutional codes. Use methods to determine error control (EC) code parameters for gene regulatory sequence. (2) Develop an evolutionary computing computational framework for near-optimal solutions to the algebraic code reconstruction problem. Method will be tested on engineered and biological sequences.
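
    As an illustration of the (n, k) block-code concept whose parameters the project aims to estimate, the sketch below encodes a 4-bit message and corrects a single-bit error with the classical (7,4) Hamming code. This is standard textbook material, not code from the report, and Python is used purely for illustration.

      # Illustrative sketch (not from the report): a (7,4) Hamming block code,
      # the kind of (n, k) error control code whose parameters the project
      # aims to estimate from observed sequences.

      # Generator and parity-check matrices of the (7,4) Hamming code (systematic form).
      G = [[1,0,0,0,1,1,0],
           [0,1,0,0,1,0,1],
           [0,0,1,0,0,1,1],
           [0,0,0,1,1,1,1]]
      H = [[1,1,0,1,1,0,0],
           [1,0,1,1,0,1,0],
           [0,1,1,1,0,0,1]]

      def encode(bits4):
          """Map 4 message bits to a 7-bit codeword (mod-2 matrix product)."""
          return [sum(b * g for b, g in zip(bits4, col)) % 2 for col in zip(*G)]

      def syndrome(word7):
          """3-bit syndrome; nonzero means a detectable error."""
          return [sum(b * h for b, h in zip(word7, row)) % 2 for row in H]

      def correct(word7):
          """Correct a single-bit error by matching the syndrome to a column of H."""
          s = syndrome(word7)
          if any(s):
              for i, col in enumerate(zip(*H)):
                  if list(col) == s:
                      word7 = word7[:]   # copy before flipping the erroneous bit
                      word7[i] ^= 1
                      break
          return word7

      if __name__ == "__main__":
          msg = [1, 0, 1, 1]
          cw = encode(msg)
          received = cw[:]
          received[2] ^= 1                  # inject a single-bit error
          print(correct(received) == cw)    # True: the (7,4) code corrects it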

  9. Regulatory context and evolutions - Public Health Code

    International Nuclear Information System (INIS)

    Rodde, S.

    2009-01-01

    After having recalled that numerous laws, decrees and orders were published between 2001 and 2007 as a result of the transposition of EURATOM directives defining standards for population and worker health protection against the dangers of ionizing radiation, the author reviews the regulatory evolutions which occurred in 2008 and 2009, and those currently in progress. They concern the protection of people exposed to radon, the field of radiotherapy, and authorizations issued in application of the French public health code. Some decisions are about to be finalized. They concern activities subject to declaration, the modalities for extending the lifetime of sealed sources, and a list of equipment categories whose handling requires a certificate of competence.

  10. Codes and standards and other guidance cited in regulatory documents

    International Nuclear Information System (INIS)

    Nickolaus, J.R.; Bohlander, K.L.

    1996-08-01

    As part of the U.S. Nuclear Regulatory Commission (NRC) Standard Review Plan Update and Development Program (SRP-UDP), Pacific Northwest National Laboratory developed a listing of industry consensus codes and standards and other government and industry guidance referred to in regulatory documents. The SRP-UDP has been completed and the SRP-Maintenance Program (SRP-MP) is now maintaining this listing. Besides updating previous information, Revision 3 adds approximately 80 citations. This listing identifies the version of the code or standard cited in the regulatory document, the regulatory document, and the current version of the code or standard. It also provides a summary characterization of the nature of the citation. This listing was developed from electronic searches of the Code of Federal Regulations and the NRC's Bulletins, Information Notices, Circulars, Enforcement Manual, Generic Letters, Inspection Manual, Policy Statements, Regulatory Guides, Standard Technical Specifications and the Standard Review Plan (NUREG-0800)

  11. Codes and standards and other guidance cited in regulatory documents

    Energy Technology Data Exchange (ETDEWEB)

    Nickolaus, J.R.; Bohlander, K.L.

    1996-08-01

    As part of the U.S. Nuclear Regulatory Commission (NRC) Standard Review Plan Update and Development Program (SRP-UDP), Pacific Northwest National Laboratory developed a listing of industry consensus codes and standards and other government and industry guidance referred to in regulatory documents. The SRP-UDP has been completed and the SRP-Maintenance Program (SRP-MP) is now maintaining this listing. Besides updating previous information, Revision 3 adds approximately 80 citations. This listing identifies the version of the code or standard cited in the regulatory document, the regulatory document, and the current version of the code or standard. It also provides a summary characterization of the nature of the citation. This listing was developed from electronic searches of the Code of Federal Regulations and the NRC's Bulletins, Information Notices, Circulars, Enforcement Manual, Generic Letters, Inspection Manual, Policy Statements, Regulatory Guides, Standard Technical Specifications and the Standard Review Plan (NUREG-0800).

  12. Regulatory requirements to the thermal-hydraulic and thermal-mechanical computer codes

    International Nuclear Information System (INIS)

    Vitkova, M.; Kalchev, B.; Stefanova, S.

    2006-01-01

    The paper presents an overview of the regulatory requirements for the thermal-hydraulic and thermal-mechanical computer codes which are used for safety assessment of the fuel design and the fuel utilization. Some requirements for model development, verification and validation of the codes, and analysis of code uncertainties are also defined. Questions concerning Quality Assurance during development and implementation of the codes, as well as preparation of a detailed verification and validation plan, are briefly discussed.

  13. Regulatory Non-Coding RNAs in Pluripotent Stem Cells

    Directory of Open Access Journals (Sweden)

    Alessandro Rosa

    2013-07-01

    The greater part of our genome encodes RNA transcripts that are never translated into proteins. These include families of RNA molecules with a regulatory function, which can be arbitrarily subdivided into short (less than 200 nucleotides) and long non-coding RNAs (ncRNAs). MicroRNAs, which act post-transcriptionally to repress the function of target mRNAs, belong to the first group. Included in the second group are multi-exonic and polyadenylated long ncRNAs (lncRNAs), localized either in the nucleus, where they can associate with chromatin remodeling complexes to regulate transcription, or in the cytoplasm, acting as post-transcriptional regulators. Pluripotent stem cells, such as embryonic stem cells (ESCs) or induced pluripotent stem cells (iPSCs), represent useful systems for modeling normal development and human diseases, as well as promising tools for regenerative medicine. To fully explore their potential, however, a deep understanding of the molecular basis of stemness is crucial. In recent years, increasing evidence of the importance of regulation by ncRNAs in pluripotent cells has accumulated. In this review, we discuss recent findings pointing to multiple roles played by regulatory ncRNAs in ESCs and iPSCs, where they act in concert with signaling pathways, transcriptional regulatory circuitries and epigenetic factors to modulate the balance between pluripotency and differentiation.

  14. Development of Regulatory Audit Core Safety Code : COREDAX

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Chae Yong; Jo, Jong Chull; Roh, Byung Hwan [Korea Institute of Nuclear Safety, Taejon (Korea, Republic of); Lee, Jae Jun; Cho, Nam Zin [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    2005-07-01

    Korea Institute of Nuclear Safety (KINS) has developed a core neutronics simulator, the COREDAX code, for verifying the core safety of the SMART-P reactor, with technical support from Korea Advanced Institute of Science and Technology (KAIST). The COREDAX code would be used for regulatory audit calculations of 3-dimensional core neutronics. The COREDAX code solves the steady-state and time-dependent multi-group neutron diffusion equation in hexagonal geometry as well as rectangular geometry by the analytic function expansion nodal (AFEN) method. The AFEN method was developed at KAIST, and its excellent accuracy has been verified internationally. The COREDAX code was originally programmed based on the AFEN method. The accuracy of the code was excellent for hexagonal 2-dimensional problems, but there was a need for improvement for hexagonal-z 3-dimensional problems. Hence, several solution routines of the AFEN method were improved, resulting in the advanced AFEN method, on which the COREDAX code is now based. The initial version of the COREDAX code completes a basic framework, performing eigenvalue calculations and kinetics calculations with thermal-hydraulic feedback, for audit calculations of steady-state core design and reactivity-induced accidents of the SMART-P reactor. This study describes the COREDAX code for hexagonal geometry.
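
    For readers unfamiliar with the underlying mathematics, the sketch below illustrates the kind of eigenvalue (k-effective) problem such a core simulator solves, using a one-group, one-dimensional finite-difference discretization and power iteration. It is not the AFEN method and is not taken from COREDAX; the cross sections and slab dimensions are hypothetical.

      # Illustrative sketch only: a one-group, 1-D finite-difference diffusion
      # eigenvalue (k-effective) solve by power iteration. COREDAX itself uses
      # the analytic function expansion nodal (AFEN) method in hexagonal
      # geometry; this toy problem just shows the class of problem being solved.
      import numpy as np

      D, sig_a, nu_sig_f = 1.0, 0.02, 0.025    # cm, 1/cm, 1/cm (made-up data)
      L, n = 100.0, 200                        # slab width (cm), mesh cells
      h = L / n

      # Assemble -D d2/dx2 + sig_a with zero-flux boundaries (tridiagonal matrix A).
      main = np.full(n, 2.0 * D / h**2 + sig_a)
      off = np.full(n - 1, -D / h**2)
      A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

      phi = np.ones(n)
      k = 1.0
      for _ in range(500):                     # outer (power) iteration on A phi = (1/k) F phi
          source = nu_sig_f * phi
          phi_new = np.linalg.solve(A, source / k)
          k_new = k * np.sum(nu_sig_f * phi_new) / np.sum(nu_sig_f * phi)
          if abs(k_new - k) < 1e-8:
              k, phi = k_new, phi_new
              break
          k, phi = k_new, phi_new / np.max(phi_new)   # normalize flux each iteration

      print(f"k-effective ~ {k:.5f}")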

  15. A regulatory code for neuron-specific odor receptor expression.

    Directory of Open Access Journals (Sweden)

    Anandasankar Ray

    2008-05-01

    Olfactory receptor neurons (ORNs) must select, from a large repertoire, which odor receptors to express. In Drosophila, most ORNs express one of 60 Or genes, and most Or genes are expressed in a single ORN class in a process that produces a stereotyped receptor-to-neuron map. The construction of this map poses a problem of receptor gene regulation that is remarkable in its dimension and about which little is known. By using a phylogenetic approach and the genome sequences of 12 Drosophila species, we systematically identified regulatory elements that are evolutionarily conserved and specific for individual Or genes of the maxillary palp. Genetic analysis of these elements supports a model in which each receptor gene contains a zip code, consisting of elements that act positively to promote expression in a subset of ORN classes, and elements that restrict expression to a single ORN class. We identified a transcription factor, Scalloped, that mediates repression. Some elements are used in other chemosensory organs, and some are conserved upstream of axon-guidance genes. Surprisingly, the odor response spectra and organization of maxillary palp ORNs have been extremely well-conserved for tens of millions of years, even though the amino acid sequences of the receptors are not highly conserved. These results, taken together, define the logic by which individual ORNs in the maxillary palp select which odor receptors to express.

  16. DNA watermarks in non-coding regulatory sequences

    Directory of Open Access Journals (Sweden)

    Pyka Martin

    2009-07-01

    Background: DNA watermarks can be applied to identify the unauthorized use of genetically modified organisms. It has been shown that coding regions can be used to encrypt information into living organisms by using the DNA-Crypt algorithm. Yet, if the sequence of interest presents a non-coding DNA sequence, either the function of a resulting functional RNA molecule or a regulatory sequence, such as a promoter, could be affected. For our studies we used the small cytoplasmic RNA 1 in yeast and the lac promoter region of Escherichia coli. Findings: The lac promoter was deactivated by the integrated watermark. In addition, the RNA molecules displayed altered configurations after introducing a watermark, but surprisingly were functionally intact, which has been verified by analyzing the growth characteristics of both wild type and watermarked scR1 transformed yeast cells. In a third approach we introduced a second overlapping watermark into the lac promoter, which did not affect the promoter activity. Conclusion: Even though the watermarked RNA and one of the watermarked promoters did not show any significant differences compared to the wild type RNA and wild type promoter region, respectively, it cannot be generalized that other RNA molecules or regulatory sequences behave accordingly. Therefore, we do not recommend integrating watermark sequences into regulatory regions.

  17. Studies on DANESS Code Modeling

    International Nuclear Information System (INIS)

    Jeong, Chang Joon

    2009-09-01

    A DANESS code modeling study has been performed. The DANESS code is widely used in dynamic fuel cycle analysis. Korea Atomic Energy Research Institute (KAERI) has used the DANESS code for the Korean national nuclear fuel cycle scenario analysis. In this report, the important models, such as the Energy-demand Scenario Model, the New Reactor Capacity Decision Model, the Reactor and Fuel Cycle Facility History Model, and the Fuel Cycle Model, are investigated. In addition, some models in the interface module are refined and inserted for the Korean nuclear fuel cycle model. Some application studies have also been performed for GNEP cases and for US fast reactor scenarios with various conversion ratios.

  18. Regulatory Endorsement Activities for ASME Nuclear Codes and Standards

    International Nuclear Information System (INIS)

    West, Raymond A.

    2006-01-01

    The ASME Board on Nuclear Codes and Standards (BNCS) has formed a Task Group on Regulatory Endorsement (TG-RE) that is currently in discussions with the United States Nuclear Regulatory Commission (NRC) to look at suggestions and recommendations that can be used to help with the endorsement of new and revised ASME Nuclear Codes and Standards (NC and S). With the coming of new reactors in the USA in the very near future, we need to look at both the regulations and all the ASME NC and S to determine where we need to make changes to support these new plants. At the same time, it is important that we maintain our operating plants while addressing the ageing management needs of our existing reactors. This is going to take new thinking, time, resources, and money. For all this to take place, the regulations and requirements that we use must be clear, concise, and necessary for safety, and to that end both the NRC and ASME are working together to make this happen. Because of the influence that the USA has in the world in dealing with these issues, this paper is written to inform the international nuclear engineering community about the issues and what actions are being addressed under this effort. (author)

  19. The Non-Coding Regulatory RNA Revolution in Archaea

    Directory of Open Access Journals (Sweden)

    Diego Rivera Gelsinger

    2018-03-01

    Small non-coding RNAs (sRNAs) are ubiquitously found in the three domains of life, playing large-scale roles in gene regulation, transposable element silencing and defense against foreign elements. While a substantial body of experimental work has been done to uncover the function of sRNAs in Bacteria and Eukarya, the functional roles of sRNAs in Archaea are still poorly understood. Recently, high throughput studies using RNA-sequencing revealed that sRNAs are broadly expressed in the Archaea, comprising thousands of transcripts within the transcriptome during non-challenged and stressed conditions. Antisense sRNAs, which overlap a portion of a gene on the opposite strand (cis-acting), are the most abundantly expressed non-coding RNAs, and they can be classified based on their binding patterns to mRNAs (3′ untranslated region (UTR), 5′ UTR, or CDS binding). These antisense sRNAs target many genes and pathways, suggesting extensive roles in gene regulation. Intergenic sRNAs are less abundantly expressed and their targets are difficult to find because of a lack of complete overlap between sRNAs and target mRNAs (trans-acting). While many sRNAs have been validated experimentally, a regulatory role has only been reported for very few of them. Further work is needed to elucidate sRNA-RNA binding mechanisms, the molecular determinants of sRNA-mediated regulation, whether protein components are involved and how sRNAs integrate with complex regulatory networks.

  20. Cheetah: Starspot modeling code

    Science.gov (United States)

    Walkowicz, Lucianne; Thomas, Michael; Finkestein, Adam

    2014-12-01

    Cheetah models starspots in photometric data (lightcurves) by calculating the modulation of a light curve due to starspots. The main parameters of the program are the linear and quadratic limb darkening coefficients, stellar inclination, spot locations and sizes, and the intensity ratio of the spots to the stellar photosphere. Cheetah uses uniform spot contrast and the minimum number of spots needed to produce a good fit and ignores bright regions for the sake of simplicity.
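
    As a rough illustration of the quantities Cheetah fits (limb darkening coefficients, inclination, spot location, size, and contrast), the sketch below computes the rotational modulation from a single small dark spot under a quadratic limb darkening law. It is a simplified stand-in, not Cheetah's algorithm, and every parameter value is hypothetical.

      # Hedged sketch (not Cheetah's actual implementation): flux modulation
      # from one small dark starspot on a rotating star, using quadratic limb
      # darkening and the small-spot approximation (deficit = projected spot
      # area times the local limb-darkened intensity). Parameters are made up.
      import numpy as np

      def spot_lightcurve(t, period=5.0, inc_deg=70.0, lat_deg=30.0, lon0_deg=0.0,
                          r_spot=0.1, contrast=0.7, u1=0.4, u2=0.2):
          """Relative flux versus time t (same units as period).

          r_spot   : spot radius in units of the stellar radius
          contrast : spot-to-photosphere intensity ratio (1 = no spot signal)
          u1, u2   : quadratic limb darkening coefficients
          """
          inc, lat = np.radians(inc_deg), np.radians(lat_deg)
          lon = np.radians(lon0_deg) + 2.0 * np.pi * t / period

          # mu = cosine of angle between the spot's surface normal and the line of sight
          mu = np.sin(inc) * np.cos(lat) * np.cos(lon) + np.cos(inc) * np.sin(lat)
          visible = mu > 0.0

          # Quadratic limb darkening, normalized by the unspotted disk-integrated flux.
          limb = 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2
          disk_integral = np.pi * (1.0 - u1 / 3.0 - u2 / 6.0)

          deficit = np.where(visible,
                             (1.0 - contrast) * np.pi * r_spot**2 * mu * limb / disk_integral,
                             0.0)
          return 1.0 - deficit

      if __name__ == "__main__":
          t = np.linspace(0.0, 10.0, 500)
          flux = spot_lightcurve(t)
          print(flux.min(), flux.max())   # dip depth of order (1 - contrast) * r_spot**2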

  1. The MESORAD dose assessment model: Computer code

    International Nuclear Information System (INIS)

    Ramsdell, J.V.; Athey, G.F.; Bander, T.J.; Scherpelz, R.I.

    1988-10-01

    MESORAD is a dose equivalent model for emergency response applications that is designed to be run on minicomputers. It has been developed by the Pacific Northwest Laboratory for use as part of the Intermediate Dose Assessment System in the US Nuclear Regulatory Commission Operations Center in Washington, DC, and the Emergency Management System in the US Department of Energy Unified Dose Assessment Center in Richland, Washington. This volume describes the MESORAD computer code and contains a listing of the code. The technical basis for MESORAD is described in the first volume of this report (Scherpelz et al. 1986). A third volume of the documentation is planned; that volume will contain utility programs and input and output files that can be used to check the implementation of MESORAD. 18 figs., 4 tabs

  2. Pervasive hitchhiking at coding and regulatory sites in humans.

    Directory of Open Access Journals (Sweden)

    James J Cai

    2009-01-01

    Much effort and interest have focused on assessing the importance of natural selection, particularly positive natural selection, in shaping the human genome. Although scans for positive selection have identified candidate loci that may be associated with positive selection in humans, such scans do not indicate whether adaptation is frequent in general in humans. Studies based on the reasoning of the McDonald-Kreitman test, which, in principle, can be used to evaluate the extent of positive selection, suggested that adaptation is detectable in the human genome but that it is less common than in Drosophila or Escherichia coli. Both positive and purifying natural selection at functional sites should affect levels and patterns of polymorphism at linked nonfunctional sites. Here, we search for these effects by analyzing patterns of neutral polymorphism in humans in relation to the rates of recombination, functional density, and functional divergence with chimpanzees. We find that the levels of neutral polymorphism are lower in the regions of lower recombination and in the regions of higher functional density or divergence. These correlations persist after controlling for the variation in GC content, density of simple repeats, selective constraint, mutation rate, and depth of sequencing coverage. We argue that these results are most plausibly explained by the effects of natural selection at functional sites -- either recurrent selective sweeps or background selection -- on the levels of linked neutral polymorphism. Natural selection at both coding and regulatory sites appears to affect linked neutral polymorphism, reducing neutral polymorphism by 6% genome-wide and by 11% in the gene-rich half of the human genome. These findings suggest that the effects of natural selection at linked sites cannot be ignored in the study of neutral human polymorphism.

  3. Evolutionary analysis reveals regulatory and functional landscape of coding and non-coding RNA editing.

    Science.gov (United States)

    Zhang, Rui; Deng, Patricia; Jacobson, Dionna; Li, Jin Billy

    2017-02-01

    Adenosine-to-inosine RNA editing diversifies the transcriptome and promotes functional diversity, particularly in the brain. A plethora of editing sites has recently been identified; however, how they are selected and regulated, and which are functionally important, are largely unknown. Here we show the cis-regulation and stepwise selection of RNA editing during Drosophila evolution and pinpoint a large number of functional editing sites. We found that the establishment of editing and variation in editing levels across Drosophila species are largely explained and predicted by cis-regulatory elements. Furthermore, editing events that arose early in the species tree tend to be more highly edited in clusters and enriched in slowly-evolved neuronal genes, suggesting that the main role of RNA editing is the fine-tuning of neurological functions. While nonsynonymous editing events have long been recognized as playing a functional role, a large fraction of 3'UTR editing sites is also evolutionarily constrained, highly edited, and thus likely functional. We find that these 3'UTR editing events can alter mRNA stability and affect miRNA binding, highlighting the functional roles of noncoding RNA editing. Our work, through evolutionary analyses of RNA editing in Drosophila, uncovers novel insights into the regulation of RNA editing as well as its functions in both coding and non-coding regions.

  4. Codes and standards and other guidance cited in regulatory documents. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Ankrum, A.; Nickolaus, J.; Vinther, R.; Maguire-Moffitt, N.; Hammer, J.; Sherfey, L.; Warner, R. [Pacific Northwest Lab., Richland, WA (United States)

    1994-08-01

    As part of the US Nuclear Regulatory Commission (NRC) Standard Review Plan Update and Development Program, Pacific Northwest Laboratory developed a listing of industry consensus codes and standards and other government and industry guidance referred to in regulatory documents. In addition to updating previous information, Revision 1 adds citations from the NRC Inspection Manual and the Improved Standard Technical Specifications. This listing identifies the version of the code or standard cited in the regulatory document, the regulatory document, and the current version of the code or standard. It also provides a summary characterization of the nature of the citation. This listing was developed from electronic searches of the Code of Federal Regulations and the NRC's Bulletins, Information Notices, Circulars, Generic Letters, Policy Statements, Regulatory Guides, and the Standard Review Plan (NUREG-0800).

  5. Codes and standards and other guidance cited in regulatory documents. Revision 1

    International Nuclear Information System (INIS)

    Ankrum, A.; Nickolaus, J.; Vinther, R.; Maguire-Moffitt, N.; Hammer, J.; Sherfey, L.; Warner, R.

    1994-08-01

    As part of the US Nuclear Regulatory Commission (NRC) Standard Review Plan Update and Development Program, Pacific Northwest Laboratory developed a listing of industry consensus codes and standards and other government and industry guidance referred to in regulatory documents. In addition to updating previous information, Revision 1 adds citations from the NRC Inspection Manual and the Improved Standard Technical Specifications. This listing identifies the version of the code or standard cited in the regulatory document, the regulatory document, and the current version of the code or standard. It also provides a summary characterization of the nature of the citation. This listing was developed from electronic searches of the Code of Federal Regulations and the NRC's Bulletins, Information Notices, Circulars, Generic Letters, Policy Statements, Regulatory Guides, and the Standard Review Plan (NUREG-0800)

  6. Pump Component Model in SPACE Code

    International Nuclear Information System (INIS)

    Kim, Byoung Jae; Kim, Kyoung Doo

    2010-08-01

    This technical report describes the pump component model in the SPACE code. A literature survey was made of pump models in existing system codes. The models embedded in the SPACE code were examined to check for conflicts with intellectual property rights. Design specifications, computer coding implementation, and test results are included in this report.

  7. Development Perspective of Regulatory Audit Code System for SFR Nuclear Safety Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Moo Hoon; Lee, Gil Soo; Shin, An Dong; Suh, Nam Duk [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2012-05-15

    A sodium-cooled fast reactor (SFR) in Korea is based on the KALIMER-600 concept developed by KAERI. Based on the 'Long-term R and D Plan for Future Reactor Systems', which was approved by the Korea Atomic Energy Commission in 2008, the KAERI designer is scheduled to apply for design certification of the prototype SFR in 2017. In order to establish the regulatory infrastructure for the licensing of a prototype SFR, KINS has developed regulatory requirements for the demonstration SFR since 2010, and is scheduled to develop regulatory audit code systems for the core, fuel, and systems, among others, beginning in 2012. In this study, the domestic code systems used for core design and safety evaluation of PWRs and the nuclear physics and code systems for SFRs were briefly reviewed, and the development perspective of a regulatory audit code system for SFR nuclear safety evaluation was derived.

  8. User's manual for a process model code

    International Nuclear Information System (INIS)

    Kern, E.A.; Martinez, D.P.

    1981-03-01

    The MODEL code has been developed for computer modeling of materials processing facilities associated with the nuclear fuel cycle. However, it can also be used in other modeling applications. This report provides sufficient information for a potential user to apply the code to specific process modeling problems. Several examples that demonstrate most of the capabilities of the code are provided

  9. Impacts of Model Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Athalye, Rahul A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sivaraman, Deepak [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elliott, Douglas B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Bing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bartlett, Rosemarie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-10-31

    The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states whose codes are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.

  10. Short-lived non-coding transcripts (SLiTs): Clues to regulatory long non-coding RNA.

    Science.gov (United States)

    Tani, Hidenori

    2017-03-22

    Whole transcriptome analyses have revealed a large number of novel long non-coding RNAs (lncRNAs). Although the importance of lncRNAs has been documented in previous reports, the biological and physiological functions of lncRNAs remain largely unknown. The role of lncRNAs remains an elusive problem. Here, I propose a clue to the identification of regulatory lncRNAs. The key point is RNA half-life. RNAs with a long half-life (t1/2 > 4 h) contain a significant proportion of ncRNAs, as well as mRNAs involved in housekeeping functions, whereas RNAs with a short half-life (t1/2 < 4 h) contain known regulatory ncRNAs and regulatory mRNAs. This novel class of ncRNAs with a short half-life can be categorized as Short-Lived non-coding Transcripts (SLiTs). I consider that SLiTs are likely to be rich in functionally uncharacterized regulatory RNAs. This review describes recent progress in research into SLiTs.
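
    A minimal sketch of the half-life classification described above, assuming a transcription-shutoff time course and simple exponential decay. The time points and abundances are invented, and the 4 h cutoff follows the abstract; this is not a workflow taken from the paper.

      # Hedged sketch: estimate an RNA half-life from a decay time course by a
      # log-linear exponential fit, then classify against the 4 h threshold.
      import numpy as np

      hours = np.array([0.0, 1.0, 2.0, 4.0])
      abundance = np.array([1.00, 0.62, 0.40, 0.16])   # relative to t = 0 (hypothetical)

      # abundance ~ exp(-k * t)  =>  t1/2 = ln(2) / k
      k = -np.polyfit(hours, np.log(abundance), 1)[0]
      half_life = np.log(2.0) / k

      label = "SLiT-like (short-lived)" if half_life < 4.0 else "long-lived"
      print(f"t1/2 ~ {half_life:.2f} h -> {label}")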

  11. Current approaches to gene regulatory network modelling

    Directory of Open Access Journals (Sweden)

    Brazma Alvis

    2007-09-01

    Many different approaches have been developed to model and simulate gene regulatory networks. We propose the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we will describe some examples for each of these categories. We will study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding the network dynamics, we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called the Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.
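
    To make the "network control logic / dynamic model" categories concrete, here is a minimal synchronous Boolean network simulation. The three genes and their logic rules are invented for illustration and are not taken from the article.

      # Illustrative sketch of a discrete (Boolean) gene regulatory network:
      # all genes update synchronously from the previous state until the
      # trajectory revisits a state (steady state or cycle). Hypothetical genes.
      from itertools import count

      rules = {
          "A": lambda s: not s["C"],          # A is repressed by C
          "B": lambda s: s["A"],              # B is activated by A
          "C": lambda s: s["A"] and s["B"],   # C needs both A and B
      }

      def step(state):
          """Synchronous update: every gene reads the previous state."""
          return {gene: bool(rule(state)) for gene, rule in rules.items()}

      state = {"A": True, "B": False, "C": False}
      seen = {}
      for t in count():
          key = tuple(sorted(state.items()))
          if key in seen:
              print(f"cycle of length {t - seen[key]} reached at step {t}")
              break
          seen[key] = t
          print(t, state)
          state = step(state)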

  12. Development of multipurpose regulatory PSA model

    International Nuclear Information System (INIS)

    Lee, Chang Ju; Sung, Key Yong; Kim, Hho Jung; Yang, Joon Eon; Ha, Jae Joo

    2004-01-01

    Generally, risk information for nuclear facilities comes from the results of probabilistic safety assessment (PSA). PSA is a systematic tool for ensuring the safety of nuclear facilities, since it is based on a thorough and consistent application of probability models. In particular, PSA has been widely utilized for risk-informed regulation (RIR), including various licensee-initiated risk-informed applications (RIA). In any regulatory decision, the main goal is to make a sound safety decision based on technically defensible information. Also, due to increased public requests for a safety guarantee, the regulator should provide visible means of safety. The use of PSA by the regulator can answer this need. Therefore, in order to study the applicability of risk information for regulatory safety management, it is a demanding task to prepare a well-established regulatory PSA model and tool. In 2002, KINS and KAERI formed a working group under a research cooperation to develop the regulatory PSA model, the so-called MPAS model. MPAS stands for multipurpose probabilistic analysis of safety. For instance, one role of the MPAS model is to give risk insights in the preparation of various regulatory programs. Another role of this model is to provide independent risk information to the regulator during regulatory decision-making, without depending on the licensee's information.
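
    As a minimal illustration of the probability models at the core of a PSA (not the MPAS model itself), the sketch below evaluates a toy fault tree with one OR gate and one AND gate. The component failure probabilities are invented.

      # Hedged sketch: top-event probability of a toy fault tree, assuming
      # independent basic events. Real PSA models are vastly larger.
      def or_gate(*p):
          """Probability that at least one independent input event occurs."""
          q = 1.0
          for pi in p:
              q *= (1.0 - pi)
          return 1.0 - q

      def and_gate(*p):
          """Probability that all independent input events occur."""
          q = 1.0
          for pi in p:
              q *= pi
          return q

      p_pump = 1.0e-3                       # hypothetical pump failure probability
      p_power = and_gate(5.0e-2, 5.0e-2)    # both redundant power trains fail
      print(f"top event probability ~ {or_gate(p_pump, p_power):.2e}")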

  13. Fatigue modelling according to the JCSS Probabilistic model code

    NARCIS (Netherlands)

    Vrouwenvelder, A.C.W.M.

    2007-01-01

    The Joint Committee on Structural Safety is working on a Model Code for full probabilistic design. The code consists of three major parts: Basis of Design, Load Models, and Models for Material and Structural Properties. The code is intended as the operational counterpart of codes like ISO,

  14. GIS-assisted spatial analysis for urban regulatory detailed planning: designer's dimension in the Chinese code system

    Science.gov (United States)

    Yu, Yang; Zeng, Zheng

    2009-10-01

    By discussing the causes behind the high amendment ratio in the implementation of urban regulatory detailed plans in China, despite their law-ensured status, the study aims to reconcile the conflict between the legal authority of regulatory detailed planning and the insufficient scientific support for its decision-making and compilation. It does so by introducing spatial analysis based on GIS technology and 3D modeling into the process, thus presenting a more scientific and flexible approach to regulatory detailed planning in China. The study first points out that the current compilation process of the urban regulatory detailed plan in China mainly employs an empirical approach, which leaves it constantly subject to amendments; the study then discusses the need for and current utilization of GIS in the Chinese system and proposes the framework of a GIS-assisted 3D spatial analysis process from the designer's perspective, which can be regarded as an alternating process between descriptive codes and physical design in the compilation of regulatory detailed planning. With a case study of the processes and results from the application of the framework, the paper concludes that the proposed framework can be an effective instrument, providing more rationality, flexibility and thus more efficiency to the compilation and decision-making process of the urban regulatory detailed plan in China.

  15. The origins and evolutionary history of human non-coding RNA regulatory networks.

    Science.gov (United States)

    Sherafatian, Masih; Mowla, Seyed Javad

    2017-04-01

    The evolutionary history and origin of the regulatory function of animal non-coding RNAs are not well understood. The lack of conservation of long non-coding RNAs and the small sizes of microRNAs have been major obstacles to their phylogenetic analysis. In this study, we tried to shed more light on the evolution of ncRNA regulatory networks by changing our phylogenetic strategy to focus on the evolutionary pattern of their protein-coding targets. We used available target databases of miRNAs and lncRNAs to find their protein-coding targets in human. We were able to recognize evolutionary hallmarks of ncRNA targets by phylostratigraphic analysis. We found the conventional 3'-UTR and lesser-known 5'-UTR targets of miRNAs to be enriched at three consecutive phylostrata. Firstly, in the eukaryota phylostratum, corresponding to the emergence of miRNAs, our study revealed that miRNA targets function primarily in cell cycle processes. Moreover, the same overrepresentation of the targets observed in the next two consecutive phylostrata, opisthokonta and eumetazoa, corresponded to the expansion periods of miRNAs in animal evolution. Coding sequence targets of miRNAs showed a delayed rise at the opisthokonta phylostratum, compared to the 3' and 5' UTR targets of miRNAs. The lncRNA regulatory network was the latest to evolve, at eumetazoa.

  16. Fusion safety codes International modeling with MELCOR and ATHENA- INTRA

    CERN Document Server

    Marshall, T; Topilski, L; Merrill, B

    2002-01-01

    For a number of years, the world fusion safety community has been involved in benchmarking its safety analysis codes against experimental data to support regulatory approval of a next step fusion device. This paper discusses the benchmarking of two prominent fusion safety thermal-hydraulic computer codes. The MELCOR code was developed in the US for fission severe accident safety analyses and has been modified for fusion safety analyses. The ATHENA code is a multifluid version of the US-developed RELAP5 code that is also widely used for fusion safety analyses. The ENEA Fusion Division uses ATHENA in conjunction with the INTRA code for its safety analyses. The INTRA code was developed in Germany and predicts containment building pressures, temperatures and fluid flow. ENEA employs the French-developed ISAS system to couple ATHENA and INTRA. This paper provides a brief introduction to the MELCOR and ATHENA-INTRA codes and presents their modeling results for the following breaches of a water cooling line into the...

  17. Coding with partially hidden Markov models

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Rissanen, J.

    1995-01-01

    Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM) combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general 2-part coding scheme for given model order but unknown parameters based on PHMM is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. Proof of convergence of this reestimation is given. The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt...
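
    The sketch below illustrates the two-part coding idea in a much simpler setting than the PHMM: the total code length for a binary string is the cost of describing the fitted first-order Markov parameters plus the cost of the data under those parameters. The example string is arbitrary, and the parameter-cost term is the usual (k/2) log2 n MDL approximation, not anything taken from the paper.

      # Hedged sketch of two-part coding with a first-order Markov model
      # (a much simpler model class than the PHMM discussed in the abstract).
      import math
      from collections import Counter

      seq = "0011010111001101011100110101"

      # Part 1: parameter cost, ~ (k/2) * log2(n) bits for k free parameters.
      n, k = len(seq), 2                      # two free transition probabilities
      param_bits = 0.5 * k * math.log2(n)

      # Part 2: data cost under the fitted first-order Markov model.
      trans = Counter(zip(seq, seq[1:]))
      data_bits = -math.log2(0.5)             # first symbol coded with p = 1/2
      for (a, b), cnt in trans.items():
          total_a = sum(c for (x, _), c in trans.items() if x == a)
          data_bits += -cnt * math.log2(cnt / total_a)

      print(f"two-part code length ~ {param_bits + data_bits:.1f} bits for {n} symbols")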

  18. Plutonium explosive dispersal modeling using the MACCS2 computer code

    International Nuclear Information System (INIS)

    Steele, C.M.; Wald, T.L.; Chanin, D.I.

    1998-01-01

    The purpose of this paper is to derive the necessary parameters to establish a defensible methodology for explosive dispersal modeling of respirable plutonium using Gaussian methods. A particular code, MACCS2, has been chosen for this modeling effort due to its application of sophisticated meteorological statistical sampling in accordance with the philosophy of Nuclear Regulatory Commission (NRC) Regulatory Guide 1.145, "Atmospheric Dispersion Models for Potential Accident Consequence Assessments at Nuclear Power Plants". A second advantage supporting the selection of the MACCS2 code for modeling purposes is that meteorological data sets are readily available at most Department of Energy (DOE) and NRC sites. This particular MACCS2 modeling effort focuses on the calculation of respirable doses and not ground deposition. Once the necessary parameters for the MACCS2 modeling are developed and presented, the model is benchmarked against empirical test data from the Double Tracks shot of project Roller Coaster (Shreve 1965) and applied to a hypothetical plutonium explosive dispersal scenario. Further modeling with the MACCS2 code is performed to determine a defensible method of treating the effects of building structure interaction on the respirable fraction distribution as a function of height. These results are related to the Clean Slate 2 and Clean Slate 3 bunkered shots of Project Roller Coaster. Lastly, a method is presented to determine the peak 99.5% sector doses on an irregular site boundary in the manner specified in NRC Regulatory Guide 1.145 (1983). Parametric analyses are performed on the major analytic assumptions in the MACCS2 model to define the potential errors that are possible in using this methodology.
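
    For orientation, the sketch below evaluates the basic Gaussian plume relation that underlies this class of atmospheric dispersion modeling; MACCS2 itself adds weather-sequence sampling, dose conversion and much more. The release rate, wind speed and dispersion parameters are hypothetical placeholders, not values from the paper.

      # Hedged sketch of the standard Gaussian plume formula with ground
      # reflection. In practice sigma_y and sigma_z depend on downwind
      # distance and atmospheric stability; fixed values are assumed here.
      import math

      def gaussian_plume_conc(Q, u, y, z, H, sigma_y, sigma_z):
          """Concentration at crosswind offset y and height z for a continuous
          point release of strength Q (per second) at effective height H,
          carried by wind speed u, including a ground-reflection term."""
          lateral = math.exp(-0.5 * (y / sigma_y) ** 2)
          vertical = (math.exp(-0.5 * ((z - H) / sigma_z) ** 2)
                      + math.exp(-0.5 * ((z + H) / sigma_z) ** 2))
          return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

      # Hypothetical example: ground-level centerline value about 1 km downwind.
      Q = 1.0e-3                       # source strength (made-up units per second)
      u = 4.0                          # wind speed, m/s
      sigma_y, sigma_z = 80.0, 40.0    # assumed dispersion parameters at ~1 km
      print(gaussian_plume_conc(Q, u, 0.0, 0.0, 10.0, sigma_y, sigma_z))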

  19. Plutonium explosive dispersal modeling using the MACCS2 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Steele, C.M.; Wald, T.L.; Chanin, D.I.

    1998-11-01

    The purpose of this paper is to derive the necessary parameters to establish a defensible methodology for explosive dispersal modeling of respirable plutonium using Gaussian methods. A particular code, MACCS2, has been chosen for this modeling effort due to its application of sophisticated meteorological statistical sampling in accordance with the philosophy of Nuclear Regulatory Commission (NRC) Regulatory Guide 1.145, "Atmospheric Dispersion Models for Potential Accident Consequence Assessments at Nuclear Power Plants". A second advantage supporting the selection of the MACCS2 code for modeling purposes is that meteorological data sets are readily available at most Department of Energy (DOE) and NRC sites. This particular MACCS2 modeling effort focuses on the calculation of respirable doses and not ground deposition. Once the necessary parameters for the MACCS2 modeling are developed and presented, the model is benchmarked against empirical test data from the Double Tracks shot of project Roller Coaster (Shreve 1965) and applied to a hypothetical plutonium explosive dispersal scenario. Further modeling with the MACCS2 code is performed to determine a defensible method of treating the effects of building structure interaction on the respirable fraction distribution as a function of height. These results are related to the Clean Slate 2 and Clean Slate 3 bunkered shots of Project Roller Coaster. Lastly, a method is presented to determine the peak 99.5% sector doses on an irregular site boundary in the manner specified in NRC Regulatory Guide 1.145 (1983). Parametric analyses are performed on the major analytic assumptions in the MACCS2 model to define the potential errors that are possible in using this methodology.

  20. Genetic coding and gene expression - new Quadruplet genetic coding model

    Science.gov (United States)

    Shankar Singh, Rama

    2012-07-01

    The successful demonstration of the human genome project has opened the door not only to developing personalized medicine and cures for genetic diseases, but it may also answer the complex and difficult question of the origin of life. It may lead to making the 21st century a century of Biological Sciences as well. Based on the central dogma of Biology, genetic codons, in conjunction with tRNA, play a key role in translating the RNA bases into a sequence of amino acids, leading to a synthesized protein. This is the most critical step in synthesizing the right protein needed for personalized medicine and curing genetic diseases. So far, only triplet codons, involving three bases of RNA transcribed from DNA bases, have been used. Since this approach has several inconsistencies and limitations, even the promise of personalized medicine has not been realized. The new Quadruplet genetic coding model proposed and developed here involves all four RNA bases, which in conjunction with tRNA will synthesize the right protein. The transcription and translation process used will be the same, but the Quadruplet codons will help overcome most of the inconsistencies and limitations of the triplet codes. Details of this new Quadruplet genetic coding model and its subsequent potential applications, including relevance to the origin of life, will be presented.
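
    For context, the sketch below shows the standard triplet decoding step the abstract refers to, using a small excerpt of the standard genetic code table; the proposed quadruplet scheme is not implemented here.

      # Illustrative sketch of conventional triplet codon translation.
      # Only a few entries of the standard genetic code table are included.
      CODON_TABLE = {
          "AUG": "Met", "UUU": "Phe", "GCU": "Ala", "AAA": "Lys",
          "GGC": "Gly", "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
      }

      def translate(rna):
          """Read the RNA three bases at a time until a stop or unknown codon."""
          protein = []
          for i in range(0, len(rna) - 2, 3):
              aa = CODON_TABLE.get(rna[i:i + 3])
              if aa is None or aa == "STOP":
                  break
              protein.append(aa)
          return "-".join(protein)

      print(translate("AUGUUUGCUAAAGGCUAA"))   # Met-Phe-Ala-Lys-Gly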

  1. Transmutation Fuel Performance Code Thermal Model Verification

    Energy Technology Data Exchange (ETDEWEB)

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to model the performance of the nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the effort to verify the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the methodology of the verification, the code input, and the calculation results.
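
    The sketch below illustrates the verification idea in miniature: a small finite-difference conduction solution for a heat-generating cylinder (a crude stand-in for a fuel pellet) is checked against the closed-form profile T(r) = T_surf + q'''(R^2 - r^2)/(4k). It is a generic textbook check, not the FRAPCON or ABAQUS model, and the material data are purely illustrative.

      # Hedged verification sketch: numerical vs. analytic temperature profile
      # for a solid cylinder with uniform volumetric heat generation.
      import numpy as np

      R, k, qvol, T_surf = 0.005, 3.0, 3.0e8, 600.0   # m, W/m-K, W/m^3, K (assumed)
      n = 200
      r = np.linspace(0.0, R, n)
      dr = r[1] - r[0]

      # Finite-difference form of (1/r) d/dr (k r dT/dr) + q''' = 0.
      A = np.zeros((n, n))
      b = np.full(n, -qvol * dr**2 / k)
      for i in range(1, n - 1):
          rm, rp = 0.5 * (r[i - 1] + r[i]), 0.5 * (r[i] + r[i + 1])
          A[i, i - 1] = rm / r[i]
          A[i, i + 1] = rp / r[i]
          A[i, i] = -(rm + rp) / r[i]
      A[0, 0], A[0, 1], b[0] = -1.0, 1.0, -qvol * dr**2 / (4.0 * k)  # symmetry at r = 0
      A[-1, -1], b[-1] = 1.0, T_surf                                  # fixed surface temperature
      T_num = np.linalg.solve(A, b)

      T_exact = T_surf + qvol * (R**2 - r**2) / (4.0 * k)
      print("max |error| =", np.max(np.abs(T_num - T_exact)), "K")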

  2. Improvement of MARS code reflood model

    International Nuclear Information System (INIS)

    Hwang, Moonkyu; Chung, Bub-Dong

    2011-01-01

    A specifically designed heat transfer model for the reflood process, which normally occurs at low flow and low pressure, was originally incorporated in the MARS code. The model is essentially identical to that of the RELAP5/MOD3.3 code. The model, however, is known to have under-estimated the peak cladding temperature (PCT), with an earlier turn-over. In this study, the original MARS code reflood model is improved. Based on extensive sensitivity studies for both the hydraulic and wall heat transfer models, it is found that the dispersed flow film boiling (DFFB) wall heat transfer is the most influential process determining the PCT, whereas the interfacial drag model most affects the quenching time through the liquid carryover phenomenon. The model proposed by Bajorek and Young is incorporated for the DFFB wall heat transfer. Both space grid and droplet enhancement models are incorporated. Inverted annular film boiling (IAFB) is modeled by using the original PSI model of the code. The flow transition between DFFB and IAFB is modeled using the TRACE code interpolation. A gas velocity threshold is also added to limit the top-down quenching effect. Assessment calculations are performed with the original and modified MARS codes for the FLECHT-SEASET test and the RBHT test. Improvements are observed in terms of the PCT and quenching time predictions in the FLECHT-SEASET assessment. In the case of the RBHT assessment, the improvement over the original MARS code is found to be marginal. A space grid effect, however, is clearly seen with the modified version of the MARS code. (author)

  3. How effective is the revised regulatory code for alcohol advertising in Australia?

    Science.gov (United States)

    Jones, Sandra C; Hall, Danika; Munro, Geoffrey

    2008-01-01

    Australia, like several other countries, has a self-regulatory approach to advertising. However, in recent years the effectiveness of the regulatory system has been questioned, and there have been increasing public calls for an overhaul of the system. Following a formal review in 2003, the Ministerial Council on Drug Strategy proposed a revised Alcoholic Beverages Advertising Code (ABAC), which came into operation in 2004. The purpose of the present study was to examine the effectiveness of this revised system. From May 2004 until March 2005 television and magazine advertising campaigns were monitored for alcohol products. Over this period 14 complaints against alcohol advertisements were lodged with the self-regulatory board, and the authors recruited an independent expert panel to assess the advertisements and complaints. In eight of the 14 cases a majority of the judges perceived the advertisement to be in breach of the code, and in no cases did a majority perceive no breach. Conversely, however, none of the complaints were upheld by the Advertising Standards Board (ASB) and only one by the ABAC Panel. The results of this study suggest that the decisions made by the ASB in relation to complaints against alcohol advertisements are not in harmony with the judgement of independent experts, and that the ASB may not be performing an adequate job of representing community standards or protecting the community from offensive or inappropriate advertisements. Further, it appears that the revisions to the ABAC code, and associated processes, have not reduced the problems associated with alcohol advertising in Australia.

  4. Computational challenges in modeling gene regulatory events.

    Science.gov (United States)

    Pataskar, Abhijeet; Tiwari, Vijay K

    2016-10-19

    Cellular transcriptional programs driven by genetic and epigenetic mechanisms could be better understood by integrating "omics" data and subsequently modeling the gene-regulatory events. Toward this end, computational biology should keep pace with evolving experimental procedures and data availability. This article gives an exemplified account of the current computational challenges in molecular biology.

  5. Steam condensation modelling in aerosol codes

    International Nuclear Information System (INIS)

    Dunbar, I.H.

    1986-01-01

    The principal subject of this study is the modelling of the condensation of steam onto, and the evaporation of water from, aerosol particles. These processes introduce a new type of term into the equation for the development of the aerosol particle size distribution. This new term confronts the code developer with three major problems: the physical modelling of the condensation/evaporation process, the discretisation of the new term, and the separate accounting for the masses of the water and of the other components. This study has considered four codes which model the condensation of steam into and its evaporation from aerosol particles: AEROSYM-M (UK), AEROSOLS/B1 (France), NAUA (Federal Republic of Germany) and CONTAIN (USA). The modelling in the codes has been addressed under three headings: the physical modelling of condensation, the mathematics of the discretisation of the equations, and the methods for modelling the separate behaviour of different chemical components of the aerosol. The codes are least advanced in the area of solute effect modelling. At present only AEROSOLS/B1 includes the effect. The effect is greater for more concentrated solutions. Codes without the effect will be more in error (underestimating the total airborne mass) the less condensation they predict. Data are needed on the water vapour pressure above concentrated solutions of the substances of interest (especially CsOH and CsI) if the extent to which aerosols retain water under superheated conditions is to be modelled. 15 refs
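
    As a rough illustration of the solute effect discussed above, the following sketch evaluates the equilibrium saturation ratio over a solution droplet using the classical Kelvin (curvature) and Raoult (solute) terms. The constants, temperature and solute amounts are illustrative assumptions, not values taken from any of the codes reviewed.

        import numpy as np

        # physical constants (SI), approximate values for water
        R     = 8.314      # J/(mol K)
        SIGMA = 0.072      # N/m, surface tension
        M_W   = 0.018      # kg/mol, molar mass
        RHO_W = 1000.0     # kg/m3, density

        def equilibrium_saturation(d, n_solute, i_vant_hoff=2.0, T=320.0):
            """Equilibrium saturation ratio over a solution droplet of diameter d [m]
            containing n_solute moles of dissolved material (e.g. CsOH or CsI).
            The Kelvin (curvature) term raises, and the Raoult (solute) term lowers,
            the equilibrium vapour pressure."""
            n_water = (np.pi / 6.0) * d**3 * RHO_W / M_W        # moles of water in the drop
            a_w = n_water / (n_water + i_vant_hoff * n_solute)  # Raoult water activity
            kelvin = np.exp(4.0 * SIGMA * M_W / (R * T * RHO_W * d))
            return a_w * kelvin

        # a small, concentrated droplet can be in equilibrium well below saturation
        for d in (0.1e-6, 0.5e-6, 2.0e-6):
            print(d, equilibrium_saturation(d, n_solute=1.0e-16))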

  6. HLA-E regulatory and coding region variability and haplotypes in a Brazilian population sample.

    Science.gov (United States)

    Ramalho, Jaqueline; Veiga-Castelli, Luciana C; Donadi, Eduardo A; Mendes-Junior, Celso T; Castelli, Erick C

    2017-11-01

    The HLA-E gene is characterized by low but wide expression in different tissues. HLA-E is considered a conserved gene, being one of the least polymorphic class I HLA genes. The HLA-E molecule interacts with Natural Killer cell receptors and T lymphocyte receptors, and might activate or inhibit immune responses depending on the peptide associated with HLA-E and on which receptors HLA-E interacts with. Variable sites within the HLA-E regulatory and coding segments may influence the gene function by modifying its expression pattern or encoded molecule, thus influencing its interaction with receptors and the peptide. Here we propose an approach to evaluate the gene structure, haplotype pattern and the complete HLA-E variability, including regulatory (promoter and 3'UTR) and coding segments (with introns), by using massively parallel sequencing. We investigated the variability of 420 samples from a very admixed population such as Brazilians by using this approach. Considering a segment of about 7 kb, 63 variable sites were detected, arranged into 75 extended haplotypes. We detected 37 different promoter sequences (but few frequent ones), 27 different coding sequences (15 representing new HLA-E alleles) and 12 haplotypes at the 3'UTR segment, two of them presenting a summed frequency of 90%. Despite the number of coding alleles, they encode mainly two different full-length molecules, known as E*01:01 and E*01:03, which correspond to about 90% of all alleles. In addition, differently from what has been previously observed for other non-classical HLA genes, the relationship among the HLA-E promoter, coding and 3'UTR haplotypes is not straightforward, because the same promoter and 3'UTR haplotypes were many times associated with different HLA-E coding haplotypes. These data reinforce the presence of only two main full-length HLA-E molecules encoded by the many HLA-E alleles detected in our population sample. In addition, this data does indicate that the distal HLA-E promoter is by

  7. Modeling Dynamic Regulatory Processes in Stroke

    Science.gov (United States)

    McDermott, Jason E.; Jarman, Kenneth; Taylor, Ronald; Lancaster, Mary; Shankaran, Harish; Vartanian, Keri B.; Stevens, Susan L.; Stenzel-Poore, Mary P.; Sanfilippo, Antonio

    2012-01-01

    The ability to examine the behavior of biological systems in silico has the potential to greatly accelerate the pace of discovery in diseases, such as stroke, where in vivo analysis is time intensive and costly. In this paper we describe an approach for in silico examination of responses of the blood transcriptome to neuroprotective agents and subsequent stroke through the development of dynamic models of the regulatory processes observed in the experimental gene expression data. First, we identified functional gene clusters from these data. Next, we derived ordinary differential equations (ODEs) from the data relating these functional clusters to each other in terms of their regulatory influence on one another. Dynamic models were developed by coupling these ODEs into a model that simulates the expression of regulated functional clusters. By changing the magnitude of gene expression in the initial input state it was possible to assess the behavior of the networks through time under varying conditions since the dynamic model only requires an initial starting state, and does not require measurement of regulatory influences at each time point in order to make accurate predictions. We discuss the implications of our models on neuroprotection in stroke, explore the limitations of the approach, and report that an optimized dynamic model can provide accurate predictions of overall system behavior under several different neuroprotective paradigms. PMID:23071432
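
    A minimal sketch of the kind of cluster-level ODE model described above is given below, assuming a small linear influence matrix among three hypothetical functional clusters; the matrix entries, initial state and time span are invented for illustration and are not the coefficients derived from the stroke data.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Hypothetical 3-cluster influence matrix W: W[i, j] is the regulatory
        # influence of cluster j on cluster i (positive = activation, negative =
        # repression); values are illustrative, not fitted to the stroke data.
        W = np.array([[-0.5,  0.8,  0.0],
                      [ 0.0, -0.3,  0.6],
                      [-0.4,  0.0, -0.2]])

        def dxdt(t, x):
            """Linear ODE model: each cluster's expression evolves under the
            summed regulatory influence of the other clusters."""
            return W @ x

        # changing only the initial state simulates a different treatment paradigm
        x0 = np.array([1.0, 0.2, 0.0])
        sol = solve_ivp(dxdt, (0.0, 24.0), x0, t_eval=np.linspace(0, 24, 25))
        print(sol.y[:, -1])   # cluster expression levels at 24 h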

  8. Economic aspects and models for building codes

    DEFF Research Database (Denmark)

    Bonke, Jens; Pedersen, Dan Ove; Johnsen, Kjeld

    It is the purpose of this bulletin to present an economic model for estimating the consequences of new or changed building codes. The object is to allow comparative analysis in order to improve the basis for decisions in this field. The model is applied in a case study.

  9. Modeling report of DYMOND code (DUPIC version)

    International Nuclear Information System (INIS)

    Park, Joo Hwan; Yacout, Abdellatif M.

    2003-04-01

    The DYMOND code employs the ITHINK dynamic modeling platform to assess 100-year dynamic evolution scenarios for postulated global nuclear energy parks. DYMOND was originally developed by ANL (Argonne National Laboratory) to perform fuel cycle analyses of LWR once-through and mixed LWR-FBR fleets. Because extensive application of the code has been requested, the first version of DYMOND has been modified to handle the DUPIC, MSR and RTF fuel cycles. The code is composed of three parts - the source language platform, input supply and output - although these parts are not clearly separated. This report describes all of the equations modeled in the modified code (referred to as the DYMOND-DUPIC version). It is divided into five parts. Part A covers the reactor history model, which includes the amounts of requested fuel and spent fuel. Part B describes the fuel cycle model, covering fuel flow from the beginning to the end of the fuel cycle. Part C covers the reprocessing model, which includes the recovery of burned uranium, plutonium, minor actinides and fission products as well as the amount of spent fuel in storage and disposal. Part D covers the other fuel cycle models, which consider the thorium fuel cycle for the MSR and RTF reactors. Part E covers the economics model, which gives all cost information, such as uranium mining cost, reactor operating cost, fuel cost, etc.

  10. Modeling report of DYMOND code (DUPIC version)

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hwan [KAERI, Taejon (Korea, Republic of); Yacout, Abdellatif M [Argonne National Laboratory, Ilinois (United States)

    2003-04-01

    The DYMOND code employs the ITHINK dynamic modeling platform to assess 100-year dynamic evolution scenarios for postulated global nuclear energy parks. DYMOND was originally developed by ANL (Argonne National Laboratory) to perform fuel cycle analyses of LWR once-through and mixed LWR-FBR fleets. Because extensive application of the code has been requested, the first version of DYMOND has been modified to handle the DUPIC, MSR and RTF fuel cycles. The code is composed of three parts - the source language platform, input supply and output - although these parts are not clearly separated. This report describes all of the equations modeled in the modified code (referred to as the DYMOND-DUPIC version). It is divided into five parts. Part A covers the reactor history model, which includes the amounts of requested fuel and spent fuel. Part B describes the fuel cycle model, covering fuel flow from the beginning to the end of the fuel cycle. Part C covers the reprocessing model, which includes the recovery of burned uranium, plutonium, minor actinides and fission products as well as the amount of spent fuel in storage and disposal. Part D covers the other fuel cycle models, which consider the thorium fuel cycle for the MSR and RTF reactors. Part E covers the economics model, which gives all cost information, such as uranium mining cost, reactor operating cost, fuel cost, etc.
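
    The stock-and-flow bookkeeping that a systems-dynamics fuel cycle code of this kind performs can be sketched very simply. The loop below tracks mined fuel, spent fuel in storage and reprocessed material over a 100-year horizon; all rates and the recovery fraction are illustrative placeholders, not DYMOND parameters.

        # Minimal stock-and-flow sketch in the spirit of an ITHINK/DYMOND model:
        # the fleet demands fresh fuel, discharges spent fuel, and reprocessing
        # returns a recovered fraction. All rates are illustrative placeholders.
        YEARS             = 100
        fresh_demand      = 20.0   # t/yr of fresh fuel required by the fleet
        discharge_rate    = 18.0   # t/yr of spent fuel discharged
        recovery_fraction = 0.8    # fraction of discharged fuel recovered by reprocessing

        mined, spent_storage, recovered = 0.0, 0.0, 0.0
        for year in range(YEARS):
            recycled       = min(recovered, fresh_demand)   # use recovered material first
            mined         += fresh_demand - recycled
            recovered     -= recycled
            spent_storage += discharge_rate * (1.0 - recovery_fraction)
            recovered     += discharge_rate * recovery_fraction

        print(f"mined over {YEARS} yr: {mined:.0f} t, spent fuel in storage: {spent_storage:.0f} t")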

  11. High burnup models in computer code fair

    Energy Technology Data Exchange (ETDEWEB)

    Dutta, B K; Swami Prasad, P; Kushwaha, H S; Mahajan, S C; Kakodar, A [Bhabha Atomic Research Centre, Bombay (India)

    1997-08-01

    An advanced fuel analysis code, FAIR, has been developed for analyzing the behavior of fuel rods of water-cooled reactors under severe power transients and high burnups. The code is capable of analyzing fuel pins with both collapsible clad, as in PHWRs, and free-standing clad, as in LWRs. The main emphasis in the development of this code is on evaluating fuel performance at extended burnups and on modelling fuel rods for advanced fuel cycles. For this purpose, a number of suitable models have been incorporated in FAIR. For modelling the fission gas release, three different models are implemented, namely a physically based mechanistic model, the standard ANS 5.4 model and the Halden model. Similarly, the pellet thermal conductivity can be modelled by the MATPRO equation, the SIMFUEL relation or the Halden equation. The flux distribution across the pellet is modelled by using the RADAR model. For modelling pellet clad mechanical interaction (PCMI)/stress corrosion cracking (SCC) induced failure of the sheath, the necessary routines are provided in FAIR. The validation of the code FAIR is based on the analysis of fuel rods of the EPRI project 'Light water reactor fuel rod modelling code evaluation' and also on the analytical simulation of the threshold power ramp criteria of fuel rods of pressurized heavy water reactors. In the present work, a study is carried out by analysing three CRP-FUMEX rods to show the effect of various combinations of fission gas release models and pellet conductivity models on the fuel analysis parameters. The satisfactory performance of FAIR may be concluded from these case studies. (author). 12 refs, 5 figs.

  12. High burnup models in computer code fair

    International Nuclear Information System (INIS)

    Dutta, B.K.; Swami Prasad, P.; Kushwaha, H.S.; Mahajan, S.C.; Kakodar, A.

    1997-01-01

    An advanced fuel analysis code, FAIR, has been developed for analyzing the behavior of fuel rods of water-cooled reactors under severe power transients and high burnups. The code is capable of analyzing fuel pins with both collapsible clad, as in PHWRs, and free-standing clad, as in LWRs. The main emphasis in the development of this code is on evaluating fuel performance at extended burnups and on modelling fuel rods for advanced fuel cycles. For this purpose, a number of suitable models have been incorporated in FAIR. For modelling the fission gas release, three different models are implemented, namely a physically based mechanistic model, the standard ANS 5.4 model and the Halden model. Similarly, the pellet thermal conductivity can be modelled by the MATPRO equation, the SIMFUEL relation or the Halden equation. The flux distribution across the pellet is modelled by using the RADAR model. For modelling pellet clad mechanical interaction (PCMI)/stress corrosion cracking (SCC) induced failure of the sheath, the necessary routines are provided in FAIR. The validation of the code FAIR is based on the analysis of fuel rods of the EPRI project 'Light water reactor fuel rod modelling code evaluation' and also on the analytical simulation of the threshold power ramp criteria of fuel rods of pressurized heavy water reactors. In the present work, a study is carried out by analysing three CRP-FUMEX rods to show the effect of various combinations of fission gas release models and pellet conductivity models on the fuel analysis parameters. The satisfactory performance of FAIR may be concluded from these case studies. (author). 12 refs, 5 figs

  13. Models and applications of the UEDGE code

    International Nuclear Information System (INIS)

    Rensink, M.E.; Knoll, D.A.; Porter, G.D.; Rognlien, T.D.; Smith, G.R.; Wising, F.

    1996-09-01

    The transport of particles and energy from the core of a tokamak to nearby material surfaces is an important problem for understanding present experiments and for designing reactor-grade devices. A number of fluid transport codes have been developed to model the plasma in the edge and scrape-off layer (SOL) regions. This report will focus on recent model improvements and illustrative results from the UEDGE code. Some geometric and mesh considerations are introduced, followed by a general description of the plasma and neutral fluid models. A few comments on computational issues are given and then two important applications are illustrated concerning benchmarking and the ITER radiative divertor. Finally, we report on some recent work to improve the models in UEDGE by coupling to a Monte Carlo neutrals code and by utilizing an adaptive grid

  14. Characteristics of a third generation regulatory models

    International Nuclear Information System (INIS)

    Kallos, G.; Pilinis, C.; Kassomenos, P.; Hatzakis, G.

    1992-01-01

    A new class of air pollution dispersion models will have to be used for regulatory purposes in the future, and they should have the following capabilities: adequate spatial and temporal characterization of the PBL, wind and turbulence fields; enhancement and regionalization of existing air quality models to yield a model applicable on both urban and regional scales; adequate characterization of the chemical transformation of gaseous pollutants in the atmosphere; adequate description of the dispersion and the physical changes of any toxic material released (i.e. evaporation and condensation); good description of the chemistry in aqueous phases, such as fog and cloud chemistry; and accurate prediction of the removal processes of gas and particulate matter (i.e. wet and dry deposition). Some of these model requirements are already satisfied in some research-oriented models. In general, there is no unique model that satisfies all of the above, and even if it existed, the database necessary for such application and testing does not. The two major areas of future development in regulatory modeling are the development of a complete meteorological and air quality model and the development of the database capabilities for the routine application of the model in the urban and rural areas of interest. With the computer power available in the near future, this kind of model will be usable even for emergency cases. (AB) (20 refs.)

  15. The nuclear reaction model code MEDICUS

    International Nuclear Information System (INIS)

    Ibishia, A.I.

    2008-01-01

    The new computer code MEDICUS has been used to calculate cross sections of nuclear reactions. The code, implemented in the MATLAB 6.5, Mathematica 5, and Fortran 95 programming languages, can be run in graphical and command-line mode. A Graphical User Interface (GUI) has been built that allows the user to perform calculations and to plot results just by mouse clicking. The MS Windows XP and Red Hat Linux platforms are supported. MEDICUS is a modern nuclear reaction code that can compute charged-particle-, photon-, and neutron-induced reactions in the energy range from thresholds to about 200 MeV. The calculation of the cross sections of nuclear reactions is done in the framework of the Exact Many-Body Nuclear Cluster Model (EMBNCM), Direct Nuclear Reactions, Pre-equilibrium Reactions, the Optical Model, DWBA, and the Exciton Model with Cluster Emission. The code can also be used for the calculation of the nuclear cluster structure of nuclei. We have calculated nuclear cluster models for some nuclei such as 177Lu, 90Y, and 27Al. It has been found that the nucleus 27Al can be represented through two different nuclear cluster models: 25Mg + d and 24Na + 3He. Cross sections as a function of energy for the reaction 27Al(3He,x)22Na, established as a production method of 22Na, are calculated by the code MEDICUS. Theoretical calculations of cross sections are in good agreement with experimental results. Reaction mechanisms are taken into account. (author)

  16. Sparsity in Model Gene Regulatory Networks

    International Nuclear Information System (INIS)

    Zagorski, M.

    2011-01-01

    We propose a gene regulatory network model which incorporates the microscopic interactions between genes and transcription factors. In particular, a gene's expression level is determined by deterministic synchronous dynamics with contributions from excitatory interactions. We study the structure of networks that have a particular 'function' and are subject to natural selection pressure. The question of network robustness against point mutations is addressed, and we conclude that only a small part of the connections, defined as 'essential' for the cell's existence, is fragile. Additionally, the obtained networks are sparse, with narrow in-degree and broad out-degree distributions, properties well known from experimental studies of biological regulatory networks. Furthermore, during the sampling procedure we observe that significantly different genotypes can emerge under mutation-selection balance. All the preceding features hold for model parameters which lie in the experimentally relevant range. (author)
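
    The deterministic synchronous dynamics described above can be sketched with a simple threshold network: a sparse interaction matrix, a synchronous update rule, and a binary expression state. The network size, sparsity and random weights below are illustrative assumptions, not the parameters studied in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 16                                   # number of genes

        # Sparse interaction matrix J: J[i, j] is the effect of transcription
        # factor j on gene i; most entries are zero (sparsity), values illustrative.
        J = rng.normal(size=(N, N)) * (rng.random((N, N)) < 0.15)

        def step(state):
            """Deterministic synchronous update: a gene is expressed (1) when the
            summed input from currently active genes is positive."""
            return (J @ state > 0).astype(int)

        state = rng.integers(0, 2, size=N)       # random initial expression pattern
        for _ in range(20):                      # iterate toward a fixed point or cycle
            state = step(state)
        print(state)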

  17. RCS modeling with the TSAR FDTD code

    Energy Technology Data Exchange (ETDEWEB)

    Pennock, S.T.; Ray, S.L.

    1992-03-01

    The TSAR electromagnetic modeling system consists of a family of related codes that have been designed to work together to provide users with a practical way to set up, run, and interpret the results from complex 3-D finite-difference time-domain (FDTD) electromagnetic simulations. The software has been in development at the Lawrence Livermore National Laboratory (LLNL) and at other sites since 1987. Active internal use of the codes began in 1988, with limited external distribution and use beginning in 1991. TSAR was originally developed to analyze high-power microwave and EMP coupling problems. However, the general-purpose nature of the tools has enabled us to use the codes to solve a broader class of electromagnetic applications and has motivated the addition of new features. In particular, a family of near-to-far-field transformation routines has been added to the codes, enabling TSAR to be used for radar cross-section and antenna analysis problems.
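
    The leapfrog update at the heart of any FDTD code can be shown in one dimension. The sketch below advances electric and magnetic fields in normalised units with a Gaussian soft source; it is a generic textbook scheme, not TSAR code.

        import numpy as np

        # Minimal 1-D FDTD (Yee) update in free space with a soft source; a toy
        # illustration of the leapfrog scheme used by 3-D FDTD codes such as TSAR.
        nx, nt = 400, 800
        ez = np.zeros(nx)          # electric field
        hy = np.zeros(nx - 1)      # magnetic field (staggered half a cell)

        for n in range(nt):
            hy += np.diff(ez)                       # H update (normalised units)
            ez[1:-1] += np.diff(hy)                 # E update; fixed ends act as crude walls
            ez[50] += np.exp(-((n - 60) / 15.0)**2) # Gaussian soft source

        print(ez.max())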

  18. MELCOR code modeling for APR1400

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Young; Park, S. Y.; Kim, D. H.; Ahn, K. I.; Song, Y. M.; Kim, S. D.; Park, J. H

    2001-11-01

    The severe accident phenomena of nuclear power plants have large uncertainties. For the retention of containment integrity and the improvement of nuclear reactor safety against severe accidents, it is essential to understand severe accident phenomena and to be able to assess the accident progression accurately using computer codes. Furthermore, it is important to attain a capability for developing techniques and assessment tools for advanced nuclear reactor designs as well as for severe accident prevention and mitigation. The objective of this report is to establish the technical bases for an application of the MELCOR code to the Korean Next Generation Reactor (APR1400) by modeling the plant and analyzing the plant steady state. This report presents the data and the input preparation for the MELCOR code as well as steady-state assessment results obtained using the MELCOR code.

  19. ITER Dynamic Tritium Inventory Modeling Code

    International Nuclear Information System (INIS)

    Cristescu, Ioana-R.; Doerr, L.; Busigin, A.; Murdoch, D.

    2005-01-01

    A tool for tritium inventory evaluation within each sub-system of the fuel cycle of ITER is vital with respect to both the licensing of ITER and its operation. It is very likely that measurements of total tritium inventories may not be possible for all sub-systems; however, tritium accounting may be achieved by modeling the hold-up within each sub-system and by validating these models in real time against the monitored flows and tritium streams between the systems. To get reliable results, accurate dynamic modeling of the tritium content in each sub-system is necessary. In order to optimize the configuration and operation of the ITER fuel cycle, a dynamic fuel cycle model was developed progressively in the decade up to 2000-2001. As the designs of some sub-systems of the fuel cycle (i.e. vacuum pumping, Neutral Beam Injectors (NBI)) have substantially progressed in the meantime, a new code has been developed under a different platform to incorporate these modifications. The new code takes over the models and algorithms for some subsystems, such as the Isotope Separation System (ISS); where simplified models had previously been considered, more detailed ones have been introduced, as for the Water Detritiation System (WDS). To reflect all these changes, the new code developed within the EU participating team was named TRIMO (Tritium Inventory Modeling), to emphasize the use of the code for assessing the tritium inventory within ITER

  20. Hydrogen recycle modeling in transport codes

    International Nuclear Information System (INIS)

    Howe, H.C.

    1979-01-01

    The hydrogen recycling models now used in Tokamak transport codes are reviewed and the method by which realistic recycling models are being added is discussed. Present models use arbitrary recycle coefficients and therefore do not model the actual recycling processes at the wall. A model for the hydrogen concentration in the wall serves two purposes: (1) it allows a better understanding of the density behavior in present gas puff, pellet, and neutral beam heating experiments; and (2) it allows one to extrapolate to long pulse devices such as EBT, ISX-C and reactors where the walls are observed or expected to saturate. Several wall models are presently being studied for inclusion in transport codes

  1. Computer simulations of a generic truck cask in a regulatory fire using the Container Analysis Fire Environment (CAFE) code

    International Nuclear Information System (INIS)

    Ju, H.; Greiner, M.; Suo-Anttila, A.

    2002-01-01

    The Container Analysis Fire Environment (CAFE) computer code is designed to accurately predict convection and radiation heat transfer to a thermally massive object engulfed in a large pool fire. It is well suited for design and risk analyses of spent nuclear fuel transport systems. CAFE employs computational fluid dynamics and several fire and radiation models. These models must be benchmarked using experimental results. In this paper, a set of wind velocity conditions is determined which allows CAFE to accurately reproduce recent heat transfer measurements for a thick-walled calorimeter in a ST-1 regulatory pool fire. CAFE is then used to predict the response of an intact (thin-walled) generic legal-weight truck cask. The maximum temperatures reached by internal components are within safe limits. A simple 800 deg. C grey-radiation fire model gives maximum component temperatures that are somewhat below those predicted by CAFE. (author)

  2. Chemistry models in the Victoria code

    International Nuclear Information System (INIS)

    Grimley, A.J. III

    1988-01-01

    The VICTORIA computer code consists of the fission product release and chemistry models for the MELPROG severe accident analysis code. The chemistry models in VICTORIA are used to treat multi-phase interactions in four separate physical regions: fuel grains, gap/open porosity/clad, coolant/aerosols, and structure surfaces. The physical and chemical environment of each region is very different from the others, and different models are required for each. The common thread in the modelling is the use of a chemical equilibrium assumption. The validity of this assumption, along with a description of the various physical constraints applicable to each region, will be discussed. The models that result from the assumptions and constraints will be presented along with samples of calculations in each region

  3. Direct containment heating models in the CONTAIN code

    International Nuclear Information System (INIS)

    Washington, K.E.; Williams, D.C.

    1995-08-01

    The potential exists in a nuclear reactor core melt severe accident for molten core debris to be dispersed under high pressure into the containment building. If this occurs, the set of phenomena that result in the transfer of energy to the containment atmosphere and its surroundings is referred to as direct containment heating (DCH). Because of the potential for DCH to lead to early containment failure, the U.S. Nuclear Regulatory Commission (USNRC) has sponsored an extensive research program consisting of experimental, analytical, and risk integration components. An important element of the analytical research has been the development and assessment of direct containment heating models in the CONTAIN code. This report documents the DCH models in the CONTAIN code. DCH models in CONTAIN for representing debris transport, trapping, chemical reactions, and heat transfer from debris to the containment atmosphere and surroundings are described. The descriptions include the governing equations and input instructions in CONTAIN unique to performing DCH calculations. Modifications made to the combustion models in CONTAIN for representing the combustion of DCH-produced and pre-existing hydrogen under DCH conditions are also described. Input table options for representing the discharge of debris from the RPV and the entrainment phase of the DCH process are also described. A sample calculation is presented to demonstrate the functionality of the models. The results show that reasonable behavior is obtained when the models are used to predict the sixth Zion geometry integral effects test at 1/10th scale

  4. Direct containment heating models in the CONTAIN code

    Energy Technology Data Exchange (ETDEWEB)

    Washington, K.E.; Williams, D.C.

    1995-08-01

    The potential exists in a nuclear reactor core melt severe accident for molten core debris to be dispersed under high pressure into the containment building. If this occurs, the set of phenomena that result in the transfer of energy to the containment atmosphere and its surroundings is referred to as direct containment heating (DCH). Because of the potential for DCH to lead to early containment failure, the U.S. Nuclear Regulatory Commission (USNRC) has sponsored an extensive research program consisting of experimental, analytical, and risk integration components. An important element of the analytical research has been the development and assessment of direct containment heating models in the CONTAIN code. This report documents the DCH models in the CONTAIN code. DCH models in CONTAIN for representing debris transport, trapping, chemical reactions, and heat transfer from debris to the containment atmosphere and surroundings are described. The descriptions include the governing equations and input instructions in CONTAIN unique to performing DCH calculations. Modifications made to the combustion models in CONTAIN for representing the combustion of DCH-produced and pre-existing hydrogen under DCH conditions are also described. Input table options for representing the discharge of debris from the RPV and the entrainment phase of the DCH process are also described. A sample calculation is presented to demonstrate the functionality of the models. The results show that reasonable behavior is obtained when the models are used to predict the sixth Zion geometry integral effects test at 1/10th scale.

  5. Modeling peripheral olfactory coding in Drosophila larvae.

    Directory of Open Access Journals (Sweden)

    Derek J Hoare

    The Drosophila larva possesses just 21 unique and identifiable pairs of olfactory sensory neurons (OSNs), enabling investigation of the contribution of individual OSN classes to the peripheral olfactory code. We combined electrophysiological and computational modeling to explore the nature of the peripheral olfactory code in situ. We recorded firing responses of 19/21 OSNs to a panel of 19 odors. This was achieved by creating larvae expressing just one functioning class of odorant receptor, and hence OSN. Odor response profiles of each OSN class were highly specific and unique. However, many OSN-odor pairs yielded variable responses, some of which were statistically indistinguishable from background activity. We used these electrophysiological data, incorporating both responses and spontaneous firing activity, to develop a Bayesian decoding model of olfactory processing. The model was able to accurately predict odor identity from raw OSN responses; prediction accuracy ranged from 12%-77% (mean for all odors 45.2%) but was always significantly above chance (5.6%). However, there was no correlation between prediction accuracy for a given odor and the strength of responses of wild-type larvae to the same odor in a behavioral assay. We also used the model to predict the ability of the code to discriminate between pairs of odors. Some of these predictions were supported in a behavioral discrimination (masking) assay but others were not. We conclude that our model of the peripheral code represents basic features of odor detection and discrimination, yielding insights into the information available to higher processing structures in the brain.
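
    The decoding idea described above can be sketched as a naive Bayes classifier over OSN spike counts. The snippet below assumes independent Poisson firing and a flat prior over odors, with made-up response rates standing in for the measured OSN-odor profiles.

        import numpy as np

        rng = np.random.default_rng(1)
        n_osn, n_odors = 19, 19

        # rates[k, i]: mean firing rate of OSN i in response to odor k (made-up
        # values standing in for the measured response profiles).
        rates = rng.uniform(1.0, 30.0, size=(n_odors, n_osn))

        def decode(counts):
            """Maximum a posteriori odor identity from a vector of OSN spike counts,
            assuming independent Poisson firing and a flat prior over odors."""
            log_post = (counts * np.log(rates) - rates).sum(axis=1)
            return int(np.argmax(log_post))

        # simulate a single presentation of odor 7 and decode it
        true_odor = 7
        counts = rng.poisson(rates[true_odor])
        print("decoded odor:", decode(counts), "true odor:", true_odor)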

  6. Final Technical Report for GO17004 Regulatory Logic: Codes and Standards for the Hydrogen Economy

    Energy Technology Data Exchange (ETDEWEB)

    Nakarado, Gary L. [Regulatory Logic LLC, Golden, CO (United States)

    2017-02-22

    The objectives of this project are to: develop a robust supporting research and development program to provide critical hydrogen behavior data and a detailed understanding of hydrogen combustion and safety across a range of scenarios, needed to establish setback distances in building codes and minimize the overall data gaps in code development; support and facilitate the completion of technical specifications by the International Organization for Standardization (ISO) for gaseous hydrogen refueling (TS 20012) and standards for on-board liquid (ISO 13985) and gaseous or gaseous blend (ISO 15869) hydrogen storage by 2007; support and facilitate the effort, led by the NFPA, to complete the draft Hydrogen Technologies Code (NFPA 2) by 2008; with experimental data and input from Technology Validation Program element activities, support and facilitate the completion of standards for bulk hydrogen storage (e.g., NFPA 55) by 2008; facilitate the adoption of the most recently available model codes (e.g., from the International Code Council [ICC]) in key regions; complete preliminary research and development on hydrogen release scenarios to support the establishment of setback distances in building codes and provide a sound basis for model code development and adoption; support and facilitate the development of Global Technical Regulations (GTRs) by 2010 for hydrogen vehicle systems under the United Nations Economic Commission for Europe, World Forum for Harmonization of Vehicle Regulations and Working Party on Pollution and Energy Program (ECE-WP29/GRPE); and support and facilitate the completion by 2012 of necessary codes and standards needed for the early commercialization and market entry of hydrogen energy technologies.

  7. Sodium pool fire model for CONACS code

    International Nuclear Information System (INIS)

    Yung, S.C.

    1982-01-01

    The modeling of sodium pool fires constitutes an important ingredient in conducting LMFBR accident analysis. Such modeling capability has recently come under scrutiny at Westinghouse Hanford Company (WHC) within the context of developing CONACS, the Containment Analysis Code System. One of the efforts in the CONACS program is to model various combustion processes anticipated to occur during postulated accident paths. This effort includes the selection or modification of an existing model and development of a new model if it clearly contributes to the program purpose. As part of this effort, a new sodium pool fire model has been developed that is directed at removing some of the deficiencies in the existing models, such as SOFIRE-II and FEUNA

  8. Towards Product Lining Model-Driven Development Code Generators

    OpenAIRE

    Roth, Alexander; Rumpe, Bernhard

    2015-01-01

    A code generator systematically transforms compact models into detailed code. Today, code generation is regarded as an integral part of model-driven development (MDD). Despite its relevance, the development of code generators is an inherently complex task and common methodologies and architectures are lacking. Additionally, reuse and extension of existing code generators exist only for individual parts. A systematic development and reuse based on a code generator product line is still in its infancy...

  9. Regulatory Models and the Environment: Practice, Pitfalls, and Prospects

    Energy Technology Data Exchange (ETDEWEB)

    Holmes, K. John; Graham, Judith A.; McKone, Thomas; Whipple, Chris

    2008-06-01

    Computational models support environmental regulatory activities by providing the regulator an ability to evaluate available knowledge, assess alternative regulations, and provide a framework to assess compliance. But all models face inherent uncertainties, because human and natural systems are always more complex and heterogeneous than can be captured in a model. Here we provide a summary discussion of the activities, findings, and recommendations of the National Research Council's Committee on Regulatory Environmental Models, a committee funded by the US Environmental Protection Agency to provide guidance on the use of computational models in the regulatory process. Modeling is a difficult enterprise even outside of the potentially adversarial regulatory environment. The demands grow when the regulatory requirements for accountability, transparency, public accessibility, and technical rigor are added to the challenges. Moreover, models cannot be validated (declared true) but instead should be evaluated with regard to their suitability as tools to address a specific question. The committee concluded that these characteristics make evaluation of a regulatory model more complex than simply comparing measurement data with model results. Evaluation also must balance the need for a model to be accurate with the need for a model to be reproducible, transparent, and useful for the regulatory decision at hand. Meeting these needs requires model evaluation to be applied over the "life cycle" of a regulatory model with an approach that includes different forms of peer review, uncertainty analysis, and extrapolation methods than for non-regulatory models.

  10. Mainstreaming solar : Stretching the regulatory regime through business model innovation

    NARCIS (Netherlands)

    Huijben, J.C.C.M.; Verbong, G.P.J.; Podoynitsyna, K.S.

    This paper explores how the regulatory regime for Solar PV, defined as a combination of niche shielding and mainstream regulations, affects niche business models, using the Dutch and Flemish regulatory regimes as examples. The regulatory regime does not influence all components of the business

  11. Rapid installation of numerical models in multiple parent codes

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, R.M.; Wong, M.K.

    1996-10-01

    A set of 'model interface guidelines', called MIG, is offered as a means to more rapidly install numerical models (such as stress-strain laws) into any parent code (hydrocode, finite element code, etc.) without having to modify the model subroutines. The model developer (who creates the model package in compliance with the guidelines) specifies the model's input and storage requirements in a standardized way. For portability, database management (such as saving user inputs and field variables) is handled by the parent code. To date, MIG has proved viable in beta installations of several diverse models in vectorized and parallel codes written in different computer languages. A MIG-compliant model can be installed in different codes without modifying the model's subroutines. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, potentially reducing the cost of installing and sharing models.
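
    The division of labour described above (the model declares its inputs and state; the parent code owns the storage and drives the calls) can be sketched as follows. The class layout, field names and update signature are invented for illustration and do not correspond to the actual MIG specification.

        # Sketch of a MIG-like contract: the model declares its inputs and state
        # storage; the parent code owns the storage and drives the calls. Names and
        # fields are invented for illustration, not the actual MIG interface.

        class ElasticModel:
            """A trivial 1-D elastic stress-strain 'material model'."""

            # standardized declarations read by the parent code
            inputs  = {"youngs_modulus": 200.0e9}          # user input with default
            storage = ["stress"]                           # per-cell state variables

            def update(self, params, state, strain_increment):
                # the parent code passes in its own storage; the model never allocates
                state["stress"] += params["youngs_modulus"] * strain_increment
                return state

        # a "parent code" (hydrocode, FE code, ...) driving the model
        model  = ElasticModel()
        params = dict(model.inputs)                        # parent manages user input
        state  = {name: 0.0 for name in model.storage}     # parent manages field storage
        for d_eps in (1.0e-4, 2.0e-4, -0.5e-4):
            state = model.update(params, state, d_eps)
        print(state)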

  12. Properties of Sequence Conservation in Upstream Regulatory and Protein Coding Sequences among Paralogs in Arabidopsis thaliana

    Science.gov (United States)

    Richardson, Dale N.; Wiehe, Thomas

    Whole genome duplication (WGD) has catalyzed the formation of new species, genes with novel functions, altered expression patterns, complexified signaling pathways and has provided organisms a level of genetic robustness. We studied the long-term evolution and interrelationships of 5’ upstream regulatory sequences (URSs), protein coding sequences (CDSs) and expression correlations (EC) of duplicated gene pairs in Arabidopsis. Three distinct methods revealed significant evolutionary conservation between paralogous URSs and were highly correlated with microarray-based expression correlation of the respective gene pairs. Positional information on exact matches between sequences unveiled the contribution of micro-chromosomal rearrangements on expression divergence. A three-way rank analysis of URS similarity, CDS divergence and EC uncovered specific gene functional biases. Transcription factor activity was associated with gene pairs exhibiting conserved URSs and divergent CDSs, whereas a broad array of metabolic enzymes was found to be associated with gene pairs showing diverged URSs but conserved CDSs.

  13. MEMOPS: data modelling and automatic code generation.

    Science.gov (United States)

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.

  14. Hydrological model in STEALTH 2-D code

    International Nuclear Information System (INIS)

    Hart, R.; Hofmann, R.

    1979-10-01

    Porous media fluid flow logic has been added to the two-dimensional version of the STEALTH explicit finite-difference code. It is a first-order hydrological model based upon Darcy's Law. Anisotropic permeability can be prescribed through x and y directional permeabilities. The fluid flow equations are formulated for either two-dimensional translation symmetry or two-dimensional axial symmetry. The addition of the hydrological model to STEALTH is a first step toward analyzing a physical system's response to the coupling of thermal, mechanical, and fluid flow phenomena
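
    A first-order Darcy-flow model of this kind can be sketched as an explicit finite-difference update of the pressure field with directional permeabilities. The grid, material properties and boundary condition below are illustrative assumptions, not STEALTH input.

        import numpy as np

        # Explicit finite-difference sketch of transient Darcy flow (pressure
        # diffusion) with anisotropic permeability; values are illustrative.
        nx, ny  = 40, 40
        dx = dy = 1.0
        kx, ky  = 1.0e-12, 2.0e-13      # m^2, directional permeabilities
        mu, S   = 1.0e-3, 1.0e-9        # Pa s viscosity, 1/Pa storage coefficient
        ax, ay  = kx / (mu * S), ky / (mu * S)
        dt      = 0.2 * min(dx**2 / ax, dy**2 / ay)   # within the explicit stability limit

        p = np.zeros((nx, ny))
        p[0, :] = 1.0e6                  # fixed high-pressure boundary on one edge

        for _ in range(500):
            lap_x = (np.roll(p, -1, 0) - 2 * p + np.roll(p, 1, 0)) / dx**2
            lap_y = (np.roll(p, -1, 1) - 2 * p + np.roll(p, 1, 1)) / dy**2
            p[1:-1, 1:-1] += dt * (ax * lap_x + ay * lap_y)[1:-1, 1:-1]
            p[0, :] = 1.0e6              # re-impose the boundary condition

        print(p[nx // 2, ny // 2])       # pressure diffusing into the domain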

  15. Tokamak Simulation Code modeling of NSTX

    International Nuclear Information System (INIS)

    Jardin, S.C.; Kaye, S.; Menard, J.; Kessel, C.; Glasser, A.H.

    2000-01-01

    The Tokamak Simulation Code [TSC] is widely used for the design of new axisymmetric toroidal experiments. In particular, TSC was used extensively in the design of the National Spherical Torus eXperiment [NSTX]. The authors have now benchmarked TSC with initial NSTX results and find excellent agreement for plasma and vessel currents and magnetic flux loops when the experimental coil currents are used in the simulations. TSC has also been coupled with a ballooning stability code and with DCON to provide stability predictions for NSTX operation. TSC has also been used to model initial CHI experiments where a large poloidal voltage is applied to the NSTX vacuum vessel, causing a force-free current to appear in the plasma. This is a phenomenon that is similar to the plasma halo current that sometimes develops during a plasma disruption

  16. Comprehensive Reconstruction and Visualization of Non-Coding Regulatory Networks in Human

    Science.gov (United States)

    Bonnici, Vincenzo; Russo, Francesco; Bombieri, Nicola; Pulvirenti, Alfredo; Giugno, Rosalba

    2014-01-01

    Research attention has increasingly focused on understanding the functional roles of non-coding RNAs (ncRNAs). Many studies have demonstrated their deregulation in cancer and other human disorders. ncRNAs are also present in extracellular human body fluids such as serum and plasma, giving them great potential as non-invasive biomarkers. However, non-coding RNAs have been discovered relatively recently, and a comprehensive database including all of them is still missing. Reconstructing and visualizing the network of ncRNA interactions are important steps towards understanding their regulatory mechanisms in complex systems. This work presents ncRNA-DB, a NoSQL database that integrates ncRNA data and interactions from a large number of well-established on-line repositories. The interactions involve RNA, DNA, proteins, and diseases. ncRNA-DB is available at http://ncrnadb.scienze.univr.it/ncrnadb/. It is equipped with three interfaces: web based, command-line, and a Cytoscape app called ncINetView. By accessing only one resource, users can search for ncRNAs and their interactions, build a network annotated with all known ncRNAs and associated diseases, and use all visual and mining features available in Cytoscape. PMID:25540777

  17. Comprehensive reconstruction and visualization of non-coding regulatory networks in human.

    Science.gov (United States)

    Bonnici, Vincenzo; Russo, Francesco; Bombieri, Nicola; Pulvirenti, Alfredo; Giugno, Rosalba

    2014-01-01

    Research attention has increasingly focused on understanding the functional roles of non-coding RNAs (ncRNAs). Many studies have demonstrated their deregulation in cancer and other human disorders. ncRNAs are also present in extracellular human body fluids such as serum and plasma, giving them great potential as non-invasive biomarkers. However, non-coding RNAs have been discovered relatively recently, and a comprehensive database including all of them is still missing. Reconstructing and visualizing the network of ncRNA interactions are important steps towards understanding their regulatory mechanisms in complex systems. This work presents ncRNA-DB, a NoSQL database that integrates ncRNA data and interactions from a large number of well-established on-line repositories. The interactions involve RNA, DNA, proteins, and diseases. ncRNA-DB is available at http://ncrnadb.scienze.univr.it/ncrnadb/. It is equipped with three interfaces: web based, command-line, and a Cytoscape app called ncINetView. By accessing only one resource, users can search for ncRNAs and their interactions, build a network annotated with all known ncRNAs and associated diseases, and use all visual and mining features available in Cytoscape.

  18. Hitch code capabilities for modeling AVT chemistry

    International Nuclear Information System (INIS)

    Leibovitz, J.

    1985-01-01

    Several types of corrosion have damaged alloy 600 tubing in the secondary side of steam generators. The types of corrosion include wastage, denting, intergranular attack, stress corrosion, erosion-corrosion, etc. The environments which cause attack may originate from leaks of cooling water into the condensate, etc. When the contaminated feedwater is pumped into the generator, the impurities may concentrate first 200- to 400-fold in the bulk water, depending on the blowdown, and then further to saturation and dryness in heated tube support plate crevices. Characterization of local solution chemistries is the first step in predicting and correcting the type of corrosion that can occur. The pH is of particular importance because it is a major factor governing the rate of corrosion reactions. The pH of a solution at high temperature is not the same as at ambient temperature, since ionic dissociation constants, solubility and solubility products, activity coefficients, etc., all change with temperature. Because the high-temperature chemistry of such solutions is not readily characterized experimentally, modeling techniques were developed under EPRI sponsorship to calculate the high-temperature chemistry of the relevant solutions. In many cases, the effects of cooling water impurities on steam generator water chemistry with all-volatile treatment (AVT), upon concentration by boiling, and in particular the resulting acid or base concentration, can be calculated by a simple code, the HITCH code, which is very easy to use. The scope and applicability of the HITCH code are summarized

  19. Top flooding modeling with MAAP4 code

    International Nuclear Information System (INIS)

    Brunet-Thibault, E.; Marguet, S.

    2006-01-01

    An engineering top flooding model was developed in MAAP4.04d.4, the severe accident code used at EDF, to simulate the thermal-hydraulic phenomena that should take place if emergency core cooling (ECC) water were injected into the hot leg during quenching. In the framework of the ISTC (International Science and Technology Centre), a top flooding test was proposed in the PARAMETER facility (Podolsk, Russia). The MAAP calculation of the PARAMETER top flooding test is presented in this paper. A comparison between top and bottom flooding was made on the bundle test geometry. According to this study, top flooding appears to cool the upper plenum internals quickly and effectively. (author)

  20. 24 CFR 200.925c - Model codes.

    Science.gov (United States)

    2010-04-01

    ... below. (1) Model Building Codes—(i) The BOCA National Building Code, 1993 Edition, The BOCA National..., Administration, for the Building, Plumbing and Mechanical Codes and the references to fire retardant treated wood... number 2 (Chapter 7) of the Building Code, but including the Appendices of the Code. Available from...

  1. Containment Modelling with the ASTEC Code

    International Nuclear Information System (INIS)

    Sadek, Sinisa; Grgic, Davor

    2014-01-01

    ASTEC is an integral computer code jointly developed by Institut de Radioprotection et de Surete Nucleaire (IRSN, France) and Gesellschaft fur Anlagen-und Reaktorsicherheit (GRS, Germany) to assess the nuclear power plant behaviour during a severe accident (SA). It consists of 13 coupled modules which compute various SA phenomena in primary and secondary circuits of the nuclear power plants (NPP), and in the containment. The ASTEC code was used to model and to simulate NPP behaviour during a postulated station blackout accident in the NPP Krsko, a two-loop pressurized water reactor (PWR) plant. The primary system of the plant was modelled with 110 thermal hydraulic (TH) volumes, 113 junctions and 128 heat structures. The secondary system was modelled with 76 TH volumes, 77 junctions and 87 heat structures. The containment was modelled with 10 TH volumes by taking into account containment representation as a set of distinctive compartments, connected with 23 junctions. A total of 79 heat structures were used to simulate outer containment walls and internal steel and concrete structures. Prior to the transient calculation, a steady state analysis was performed. In order to achieve correct plant initial conditions, the operation of regulation systems was modelled. Parameters which were subjected to regulation were the pressurizer pressure, the pressurizer narrow range level and steam mass flow rates in the steam lines. The accident analysis was focused on containment behaviour, however the complete integral NPP analysis was carried out in order to provide correct boundary conditions for the containment calculation. During the accident, the containment integrity was challenged by release of reactor system coolant through degraded coolant pump seals and, later in the accident following release of the corium out of the reactor pressure vessel, by the molten corium concrete interaction and direct containment heating mechanisms. Impact of those processes on relevant

  2. Simple mathematical models of gene regulatory dynamics

    CERN Document Server

    Mackey, Michael C; Tyran-Kamińska, Marta; Zeron, Eduardo S

    2016-01-01

    This is a short and self-contained introduction to the field of mathematical modeling of gene-networks in bacteria. As an entry point to the field, we focus on the analysis of simple gene-network dynamics. The notes commence with an introduction to the deterministic modeling of gene-networks, with extensive reference to applicable results coming from dynamical systems theory. The second part of the notes treats extensively several approaches to the study of gene-network dynamics in the presence of noise—either arising from low numbers of molecules involved, or due to noise external to the regulatory process. The third and final part of the notes gives a detailed treatment of three well studied and concrete examples of gene-network dynamics by considering the lactose operon, the tryptophan operon, and the lysis-lysogeny switch. The notes contain an index for easy location of particular topics as well as an extensive bibliography of the current literature. The target audience of these notes are mainly graduat...

  3. A graph model for opportunistic network coding

    KAUST Repository

    Sorour, Sameh

    2015-08-12

    © 2015 IEEE. Recent advancements in graph-based analysis and solutions of instantly decodable network coding (IDNC) trigger the interest to extend them to more complicated opportunistic network coding (ONC) scenarios, with limited increase in complexity. In this paper, we design a simple IDNC-like graph model for a specific subclass of ONC, by introducing a more generalized definition of its vertices and the notion of vertex aggregation in order to represent the storage of non-instantly-decodable packets in ONC. Based on this representation, we determine the set of pairwise vertex adjacency conditions that can populate this graph with edges so as to guarantee decodability or aggregation for the vertices of each clique in this graph. We then develop the algorithmic procedures that can be applied on the designed graph model to optimize any performance metric for this ONC subclass. A case study on reducing the completion time shows that the proposed framework improves on the performance of IDNC and gets very close to the optimal performance.
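
    The vertex and adjacency definitions discussed above can be illustrated for plain IDNC (without the vertex-aggregation extension) with a toy example: a vertex is a (receiver, wanted packet) pair, and two vertices are joined when the corresponding packets can be XOR-coded so that both receivers decode instantly. The receivers, packets and side information below are made up for illustration.

        from itertools import combinations

        # Toy IDNC-style graph: vertex = (receiver, wanted packet); two vertices are
        # adjacent when the packets can be coded together and both receivers still
        # decode instantly. The 'has'/'wants' sets are invented example data.
        has   = {"r1": {"p1", "p3"}, "r2": {"p2"}, "r3": {"p1", "p2"}}
        wants = {"r1": {"p2"}, "r2": {"p1", "p3"}, "r3": {"p3"}}

        vertices = [(r, p) for r, ps in wants.items() for p in ps]

        def adjacent(v, w):
            (r1, p1), (r2, p2) = v, w
            if r1 == r2:
                return False                      # one receiver decodes one packet per slot
            # same packet, or each receiver already holds the other's packet
            return p1 == p2 or (p2 in has[r1] and p1 in has[r2])

        edges = [(v, w) for v, w in combinations(vertices, 2) if adjacent(v, w)]
        print(vertices)
        print(edges)                              # cliques of this graph are coding opportunities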

  4. Conservation of concrete structures according to fib Model Code 2010

    NARCIS (Netherlands)

    Matthews, S.; Bigaj-Van Vliet, A.; Ueda, T.

    2013-01-01

    Conservation of concrete structures forms an essential part of the fib Model Code for Concrete Structures 2010 (fib Model Code 2010). In particular, Chapter 9 of fib Model Code 2010 addresses issues concerning conservation strategies and tactics, conservation management, condition surveys, condition

  5. Gap Conductance model Validation in the TASS/SMR-S code using MARS code

    International Nuclear Information System (INIS)

    Ahn, Sang Jun; Yang, Soo Hyung; Chung, Young Jong; Lee, Won Jae

    2010-01-01

    The Korea Atomic Energy Research Institute (KAERI) has been developing the TASS/SMR-S (Transient and Setpoint Simulation/Small and Medium Reactor) code, a thermal hydraulic code for the safety analysis of the advanced integral reactor. Appropriate work to validate the applicability of the thermal hydraulic models within the code is required. Among these models, the gap conductance model, which describes the thermal conductance of the gap between the fuel and the cladding, was validated through comparison with the MARS code. The validation of the gap conductance model was performed by evaluating the variation of the gap temperature and gap width as they changed with the power fraction. In this paper, a brief description of the gap conductance model in the TASS/SMR-S code is presented. In addition, calculated results to validate the gap conductance model are demonstrated by comparing with the results of the MARS code for the test case
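
    The quantity being validated above, the gap conductance, can be sketched with its dominant conductive term: the gas conductivity divided by the gap width plus a temperature-jump distance, so that the conductance rises as the gap closes with power. The helium conductivity correlation and the jump distance below are illustrative placeholders, not the TASS/SMR-S or MARS models.

        # Minimal sketch of a fuel-clad gap conductance evaluation: the conductive
        # term dominates for an open gap and grows as the gap closes. The gas
        # conductivity correlation and jump distance are illustrative placeholders.
        def helium_conductivity(T):
            return 3.36e-3 * T**0.668          # W/(m K), an assumed helium correlation form

        def gap_conductance(gap_width, T_gas, jump_distance=5.0e-6):
            """Conductive component h_gap = k_gas / (gap + temperature-jump distance)."""
            return helium_conductivity(T_gas) / (gap_width + jump_distance)

        for gap in (80e-6, 40e-6, 10e-6):      # gap closes as power (and expansion) rises
            print(f"gap={gap*1e6:4.0f} um  h_gap={gap_conductance(gap, 600.0):8.0f} W/m2K")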

  6. Systematic review regulatory principles of non-coding RNAs in cardiovascular diseases.

    Science.gov (United States)

    Li, Yongsheng; Huo, Caiqin; Pan, Tao; Li, Lili; Jin, Xiyun; Lin, Xiaoyu; Chen, Juan; Zhang, Jinwen; Guo, Zheng; Xu, Juan; Li, Xia

    2017-08-16

    Cardiovascular diseases (CVDs) continue to be a major cause of morbidity and mortality, and non-coding RNAs (ncRNAs) play critical roles in CVDs. With the recent emergence of high-throughput technologies, including small RNA sequencing, investigations of CVDs have been transformed from candidate-based studies into genome-wide undertakings, and a number of ncRNAs in CVDs were discovered in various studies. A comprehensive review of these ncRNAs would be highly valuable for researchers to get a complete picture of the ncRNAs in CVD. To address these knowledge gaps and clinical needs, in this review, we first discussed dysregulated ncRNAs and their critical roles in cardiovascular development and related diseases. Moreover, we reviewed >28 561 published papers and documented the ncRNA-CVD association benchmarking data sets to summarize the principles of ncRNA regulation in CVDs. This data set included 13 249 curated relationships between 9503 ncRNAs and 139 CVDs in 12 species. Based on this comprehensive resource, we summarized the regulatory principles of dysregulated ncRNAs in CVDs, including the complex associations between ncRNA and CVDs, tissue specificity and ncRNA synergistic regulation. The highlighted principles are that CVD microRNAs (miRNAs) are highly expressed in heart tissue and that they play central roles in miRNA-miRNA functional synergistic network. In addition, CVD-related miRNAs are close to one another in the functional network, indicating the modular characteristic features of CVD miRNAs. We believe that the regulatory principles summarized here will further contribute to our understanding of ncRNA function and dysregulation mechanisms in CVDs. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. A graph model for opportunistic network coding

    KAUST Repository

    Sorour, Sameh; Aboutoraby, Neda; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2015-01-01

    © 2015 IEEE. Recent advancements in graph-based analysis and solutions of instantly decodable network coding (IDNC) trigger the interest to extend them to more complicated opportunistic network coding (ONC) scenarios, with limited increase

  8. Code Differentiation for Hydrodynamic Model Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Henninger, R.J.; Maudlin, P.J.

    1999-06-27

    Use of a hydrodynamics code for experimental data fitting purposes (an optimization problem) requires information about how a computed result changes when the model parameters change. These so-called sensitivities provide the gradient that determines the search direction for modifying the parameters to find an optimal result. Here, the authors apply code-based automatic differentiation (AD) techniques in the forward and adjoint modes to two problems with 12 parameters to obtain these gradients and compare the computational efficiency and accuracy of the various methods. They fit the pressure trace from a one-dimensional flyer-plate experiment and examine the accuracy for a two-dimensional jet-formation problem. For the flyer-plate experiment, the adjoint mode requires similar or less computer time than the forward methods. Additional parameters will not change the adjoint mode run time appreciably, which is a distinct advantage for this method. Obtaining "accurate" sensitivities for the jet problem parameters remains problematic.
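    The sketch below illustrates, in miniature, what code-based forward-mode AD does: a dual-number class propagates a parameter derivative through a calculation so the sensitivity of the computed result is available alongside its value. It is a toy stand-in, not the authors' hydrodynamics code or their adjoint implementation; the model_pressure function and its numbers are invented for illustration.

    ```python
    # Toy forward-mode automatic differentiation via dual numbers: propagates the
    # derivative of a computed result with respect to one model parameter through
    # the calculation, mirroring (in miniature) how code-based AD supplies the
    # gradient for an optimization search.  Not the authors' hydrocode or adjoint.

    class Dual:
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
        __rmul__ = __mul__

    def model_pressure(stiffness, strain):
        # Stand-in "hydro" response: quadratic in strain, linear in the parameter.
        return stiffness * strain * strain

    p = model_pressure(Dual(2.0, 1.0), 0.3)   # seed d/d(stiffness) = 1
    print(p.val, p.der)                       # value and sensitivity dP/dK = 0.09
    ```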

  9. PHITS code improvements by Regulatory Standard and Research Department Secretariat of Nuclear Regulation Authority

    International Nuclear Information System (INIS)

    Goko, Shinji

    2017-01-01

    As for the safety analysis to be carried out when a nuclear power company applies for installation permission for a facility or equipment, a business license, design approval, etc., the Regulatory Standard and Research Department Secretariat of the Nuclear Regulation Authority continuously conducts safety research on the introduction and improvement of various technologies in order to evaluate the adequacy of such safety analyses. In the field of shielding analysis for nuclear fuel transport packages, this group improved PHITS to make it applicable to this field and has been promoting the improvement as a tool for regulatory use since FY2013. This paper introduces the history and progress of this safety research. PHITS 2.88, the latest version as of November 2016, was equipped with an automatic generation function for variance reduction parameters [T-WWG] and other features, making it a tool with many functions effective for practical application to nuclear power regulation. In addition, this group conducted a verification analysis of nuclear fuel packages, which showed good agreement with analyses by MCNP, a code used extensively worldwide with an abundant record of applications. It also shows relatively good agreement with measured values when differences between analysis and measurement conditions are considered. (A.O.)

  10. Understanding Epistatic Interactions between Genes Targeted by Non-coding Regulatory Elements in Complex Diseases

    Directory of Open Access Journals (Sweden)

    Min Kyung Sung

    2014-12-01

    Full Text Available Genome-wide association studies have proven the highly polygenic architecture of complex diseases or traits; therefore, single-locus-based methods are usually unable to detect all involved loci, especially when individual loci exert small effects. Moreover, the majority of associated single-nucleotide polymorphisms reside in non-coding regions, making it difficult to understand their phenotypic contribution. In this work, we studied epistatic interactions associated with three common diseases using Korea Association Resource (KARE) data: type 2 diabetes mellitus (DM), hypertension (HT), and coronary artery disease (CAD). We showed that epistatic single-nucleotide polymorphisms (SNPs) were enriched in enhancers, as well as in DNase I footprints (the Encyclopedia of DNA Elements [ENCODE] Project Consortium 2012), which suggested that disruption of the regulatory regions where transcription factors bind may be involved in the disease mechanism. Accordingly, to identify the genes affected by the SNPs, we employed whole-genome multiple-cell-type enhancer data that were discovered using DNase I profiles and Cap Analysis Gene Expression (CAGE). Assigned genes were significantly enriched in known disease-associated gene sets, which were explored based on the literature, suggesting that this approach is useful for detecting relevant affected genes. In our knowledge-based epistatic network, the three diseases share many associated genes and are also closely related with each other through many epistatic interactions. These findings elucidate the genetic basis of the close relationship between DM, HT, and CAD.

  11. ER@CEBAF: Modeling code developments

    Energy Technology Data Exchange (ETDEWEB)

    Meot, F. [Brookhaven National Lab. (BNL), Upton, NY (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Roblin, Y. [Brookhaven National Lab. (BNL), Upton, NY (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)

    2016-04-13

    A proposal for a multiple-pass, high-energy, energy-recovery experiment using CEBAF is under preparation in the frame of a JLab-BNL collaboration. In view of beam dynamics investigations regarding this project, in addition to the existing model in use in Elegant, a version of CEBAF has been developed in the stepwise ray-tracing code Zgoubi. Beyond the ER experiment, it is also planned to use the latter for the study of polarization transport in the presence of synchrotron radiation, down to the Hall D line where a 12 GeV polarized beam can be delivered. This Note briefly reports on the preliminary steps, and preliminary outcomes, based on an Elegant-to-Zgoubi translation.

  12. Improved choked flow model for MARS code

    International Nuclear Information System (INIS)

    Chung, Moon Sun; Lee, Won Jae; Ha, Kwi Seok; Hwang, Moon Kyu

    2002-01-01

    Choked flow calculation is improved by using a new sound speed criterion for bubbly flow, derived from a characteristic analysis of the hyperbolic two-fluid model. This model is based on the notion of surface tension for the interfacial pressure jump terms in the momentum equations. The real eigenvalues obtained as the closed-form solution of the characteristic polynomial represent the sound speed in the bubbly flow regime, which agrees well with the existing experimental data. The present sound speed gives a more reasonable result in the extreme case than Nguyen's did. The choked flow criterion derived from the present sound speed is implemented in the MARS code and assessed against the Marviken choked flow tests. The assessment results, obtained without any adjustment by discharge coefficients, demonstrate more accurate predictions of the choked flow rate in the bubbly flow regime than the earlier choked flow calculations. By calculating a typical PWR SBLOCA problem, we confirm that the present model can reproduce reasonable transients of an integral reactor system
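    For readers who want a feel for why the bubbly-flow sound speed matters here, the sketch below evaluates a homogeneous-mixture (Wood-type) sound speed and shows how sharply a small void fraction depresses it. This is not the two-fluid characteristic solution derived in the paper, only a standard illustrative formula with assumed air/water-like properties.

    ```python
    # Illustrative homogeneous-mixture (Wood-type) sound speed for an air/water-like
    # bubbly mixture -- not the two-fluid characteristic solution of the paper, just
    # a reminder of how sharply a small void fraction lowers the mixture sound speed
    # that enters a choked-flow criterion.
    import math

    def mixture_sound_speed(alpha, rho_g=1.2, rho_l=1000.0, a_g=340.0, a_l=1500.0):
        rho_m = alpha * rho_g + (1.0 - alpha) * rho_l
        # Wood's formula: 1/(rho_m a_m^2) = alpha/(rho_g a_g^2) + (1-alpha)/(rho_l a_l^2)
        compressibility = alpha / (rho_g * a_g**2) + (1.0 - alpha) / (rho_l * a_l**2)
        return 1.0 / math.sqrt(rho_m * compressibility)

    for alpha in (0.0, 0.01, 0.1, 0.5):
        print(f"alpha={alpha:4.2f}  a_m={mixture_sound_speed(alpha):7.1f} m/s")
    ```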

  13. PetriCode: A Tool for Template-Based Code Generation from CPN Models

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge

    2014-01-01

    Code generation is an important part of model driven methodologies. In this paper, we present PetriCode, a software tool for generating protocol software from a subclass of Coloured Petri Nets (CPNs). The CPN subclass is comprised of hierarchical CPN models describing a protocol system at different...

  14. Cracking the regulatory code of biosynthetic gene clusters as a strategy for natural product discovery.

    Science.gov (United States)

    Rigali, Sébastien; Anderssen, Sinaeda; Naômé, Aymeric; van Wezel, Gilles P

    2018-01-05

    The World Health Organization (WHO) describes antibiotic resistance as "one of the biggest threats to global health, food security, and development today", as the number of multi- and pan-resistant bacteria is rising dangerously. Acquired resistance phenomena also impair antifungal, antiviral and anti-cancer drug therapies, while herbicide resistance in weeds threatens the crop industry. On the positive side, it is likely that the chemical space of natural products goes far beyond what has currently been discovered. This idea is fueled by genome sequencing of microorganisms, which unveiled numerous so-called cryptic biosynthetic gene clusters (BGCs), many of which are transcriptionally silent under laboratory culture conditions, and by the fact that most bacteria cannot yet be cultivated in the laboratory. However, brute force antibiotic discovery does not yield the same results as it did in the past, and researchers have had to develop creative strategies in order to unravel the hidden potential of microorganisms such as Streptomyces and other antibiotic-producing microorganisms. Identifying the cis elements and their corresponding transcription factor(s) involved in the control of BGCs through bioinformatic approaches is a promising strategy. Theoretically, we are a few 'clicks' away from unveiling the culturing conditions or genetic changes needed to activate the production of cryptic metabolites or increase the production yield of known compounds to make them economically viable. In this opinion article, we describe and illustrate the idea behind 'cracking' the regulatory code for natural product discovery, by presenting a series of proofs of concept, and discuss what still should be achieved to increase the rate of success of this strategy. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. 40 CFR 194.23 - Models and computer codes.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  16. Noise Residual Learning for Noise Modeling in Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Forchhammer, Søren

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The noise model is one of the inherently difficult challenges in DVC. This paper considers Transform Domain Wyner-Ziv (TDWZ) coding and proposes...

  17. Review on the seismic safety of JRR-3 according to the revised regulatory code on seismic design for nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, Tetsuya; Araki, Masaaki; Ohba, Toshinobu; Torii, Yoshiya [Japan Atomic Energy Agency, Tokai, Ibaraki (Japan); Takeuchi, Masaki [Nuclear Safety Commission (Japan)

    2012-03-15

    JRR-3 (Japan Research Reactor No. 3), with a thermal power of 20 MW, is a light-water-moderated and -cooled, swimming pool type research reactor. JRR-3 has been operated without major trouble. This paper presents a review of the seismic safety of JRR-3 according to the revised regulatory code on seismic design for nuclear reactors. In addition, some topics concerning damage to JRR-3 due to the Great East Japan Earthquake are presented. (author)

  18. Fuel analysis code FAIR and its high burnup modelling capabilities

    International Nuclear Information System (INIS)

    Prasad, P.S.; Dutta, B.K.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1995-01-01

    A computer code FAIR has been developed for analysing performance of water cooled reactor fuel pins. It is capable of analysing high burnup fuels. This code has recently been used for analysing ten high burnup fuel rods irradiated at Halden reactor. In the present paper, the code FAIR and its various high burnup models are described. The performance of code FAIR in analysing high burnup fuels and its other applications are highlighted. (author). 21 refs., 12 figs

  19. The analysis of thermal-hydraulic models in MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, M H; Hur, C; Kim, D K; Cho, H J [POhang Univ., of Science and TECHnology, Pohang (Korea, Republic of)

    1996-07-15

    The objective of the present work is to verify the prediction and analysis capability of the MELCOR code for the progression of severe accidents in light water reactors, and also to evaluate the appropriateness of the thermal-hydraulic models used in the MELCOR code. To achieve this objective, the results of experiments are compared with calculations performed with the MELCOR code. In particular, the comparison between the CORA-13 experiment and the MELCOR code calculation was performed.

  20. Tardos fingerprinting codes in the combined digit model

    NARCIS (Netherlands)

    Skoric, B.; Katzenbeisser, S.; Schaathun, H.G.; Celik, M.U.

    2009-01-01

    We introduce a new attack model for collusion-secure codes, called the combined digit model, which represents signal processing attacks against the underlying watermarking level better than existing models. In this paper, we analyze the performance of two variants of the Tardos code and show that

  1. A deeper look into transcription regulatory code by preferred pair distance templates for transcription factor binding sites

    KAUST Repository

    Kulakovskiy, Ivan V.

    2011-08-18

    Motivation: Modern experimental methods provide substantial information on protein-DNA recognition. Studying arrangements of transcription factor binding sites (TFBSs) of interacting transcription factors (TFs) advances understanding of the transcription regulatory code. Results: We constructed binding motifs for TFs forming a complex with HIF-1α at the erythropoietin 3'-enhancer. Corresponding TFBSs were predicted in the segments around transcription start sites (TSSs) of all human genes. Using the genome-wide set of regulatory regions, we observed several strongly preferred distances between hypoxia-responsive element (HRE) and binding sites of a particular cofactor protein. The set of preferred distances was called a preferred pair distance template (PPDT). PPDT dramatically depended on the TF and orientation of its binding sites relative to HRE. PPDT evaluated from the genome-wide set of regulatory sequences was used to detect significant PPDT-consistent binding site pairs in regulatory regions of hypoxia-responsive genes. We believe PPDT can help to reveal the layout of eukaryotic regulatory segments. © The Author 2011. Published by Oxford University Press. All rights reserved.
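    A minimal sketch of the distance-tallying step behind such a preferred pair distance template is given below: assuming site coordinates have already been predicted for each regulatory segment (the inputs here are hypothetical toy data), it simply counts how often each HRE-to-cofactor spacing occurs, so that peaks in the histogram mark the preferred distances.

    ```python
    # Minimal sketch of the distance-tallying step behind a preferred pair distance
    # template (PPDT): given predicted HRE positions and cofactor-site positions in
    # each regulatory segment (hypothetical inputs), count how often each pairwise
    # spacing occurs genome-wide; peaks in the histogram are the "preferred" distances.
    from collections import Counter

    def pair_distance_counts(segments, max_dist=200):
        counts = Counter()
        for hre_sites, cofactor_sites in segments:   # per-promoter site coordinates
            for h in hre_sites:
                for c in cofactor_sites:
                    d = c - h                        # signed spacing, cofactor minus HRE
                    if abs(d) <= max_dist:
                        counts[d] += 1
        return counts

    segments = [([120], [135, 260]), ([80], [95]), ([300], [315, 40])]  # toy data
    print(pair_distance_counts(segments).most_common(3))                # e.g. 15 bp preferred
    ```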

  2. Auto-Regulatory RNA Editing Fine-Tunes mRNA Re-Coding and Complex Behaviour in Drosophila

    Science.gov (United States)

    Savva, Yiannis A.; Jepson, James E.C; Sahin, Asli; Sugden, Arthur U.; Dorsky, Jacquelyn S.; Alpert, Lauren; Lawrence, Charles; Reenan, Robert A.

    2014-01-01

    Auto-regulatory feedback loops are a common molecular strategy used to optimize protein function. In Drosophila many mRNAs involved in neuro-transmission are re-coded at the RNA level by the RNA editing enzyme dADAR, leading to the incorporation of amino acids that are not directly encoded by the genome. dADAR also re-codes its own transcript, but the consequences of this auto-regulation in vivo are unclear. Here we show that hard-wiring or abolishing endogenous dADAR auto-regulation dramatically remodels the landscape of re-coding events in a site-specific manner. These molecular phenotypes correlate with altered localization of dADAR within the nuclear compartment. Furthermore, auto-editing exhibits sexually dimorphic patterns of spatial regulation and can be modified by abiotic environmental factors. Finally, we demonstrate that modifying dAdar auto-editing affects adaptive complex behaviors. Our results reveal the in vivo relevance of auto-regulatory control over post-transcriptional mRNA re-coding events in fine-tuning brain function and organismal behavior. PMID:22531175

  3. Interfacial and Wall Transport Models for SPACE-CAP Code

    International Nuclear Information System (INIS)

    Hong, Soon Joon; Choo, Yeon Joon; Han, Tae Young; Hwang, Su Hyun; Lee, Byung Chul; Choi, Hoon; Ha, Sang Jun

    2009-01-01

    The development project for the domestic design code was launched to be used for the safety and performance analysis of pressurized light water reactors. The CAP (Containment Analysis Package) code has also been developed for containment safety and performance analysis, side by side with SPACE. The CAP code treats three fields (gas, continuous liquid, and dispersed drop) for the assessment of containment-specific phenomena, and is featured by its multidimensional assessment capabilities. The thermal hydraulics solver has already been developed and is now being tested for stability and soundness. As a next step, the interfacial and wall transport models were set up. In order to develop the best model and correlation package for the CAP code, various models currently used in major containment analysis codes, namely GOTHIC, CONTAIN2.0, and CONTEMPT-LT, have been reviewed. The origins of the selected models used in these codes have also been examined to confirm that the models do not conflict with any proprietary rights. In addition, a literature survey of recent studies has been performed in order to incorporate better models into the CAP code. The models and correlations of SPACE were also reviewed. The CAP models and correlations are composed of interfacial heat/mass and momentum transport models, and wall heat/mass and momentum transport models. This paper discusses those transport models in the CAP code

  4. Interfacial and Wall Transport Models for SPACE-CAP Code

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Soon Joon; Choo, Yeon Joon; Han, Tae Young; Hwang, Su Hyun; Lee, Byung Chul [FNC Tech., Seoul (Korea, Republic of); Choi, Hoon; Ha, Sang Jun [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)

    2009-10-15

    The development project for the domestic design code was launched to be used for the safety and performance analysis of pressurized light water reactors. The CAP (Containment Analysis Package) code has also been developed for containment safety and performance analysis, side by side with SPACE. The CAP code treats three fields (gas, continuous liquid, and dispersed drop) for the assessment of containment-specific phenomena, and is featured by its multidimensional assessment capabilities. The thermal hydraulics solver has already been developed and is now being tested for stability and soundness. As a next step, the interfacial and wall transport models were set up. In order to develop the best model and correlation package for the CAP code, various models currently used in major containment analysis codes, namely GOTHIC, CONTAIN2.0, and CONTEMPT-LT, have been reviewed. The origins of the selected models used in these codes have also been examined to confirm that the models do not conflict with any proprietary rights. In addition, a literature survey of recent studies has been performed in order to incorporate better models into the CAP code. The models and correlations of SPACE were also reviewed. The CAP models and correlations are composed of interfacial heat/mass and momentum transport models, and wall heat/mass and momentum transport models. This paper discusses those transport models in the CAP code.

  5. Measuring and Modeling the U.S. Regulatory Ecosystem

    Science.gov (United States)

    Bommarito, Michael J., II; Katz, Daniel Martin

    2017-09-01

    Over the last 23 years, the U.S. Securities and Exchange Commission has required over 34,000 companies to file over 165,000 annual reports. These reports, the so-called "Form 10-Ks," contain a characterization of a company's financial performance and its risks, including the regulatory environment in which a company operates. In this paper, we analyze over 4.5 million references to U.S. Federal Acts and Agencies contained within these reports to measure the regulatory ecosystem, in which companies are organisms inhabiting a regulatory environment. While individuals across the political, economic, and academic world frequently refer to trends in this regulatory ecosystem, far less attention has been paid to supporting such claims with large-scale, longitudinal data. In this paper, in addition to positing a model of regulatory ecosystems, we document an increase in the regulatory energy per filing, i.e., a warming "temperature." We also find that the diversity of the regulatory ecosystem has been increasing over the past two decades. These findings support the claim that regulatory activity and complexity are increasing, and this framework contributes an important step towards improving academic and policy discussions around legal complexity and regulation.
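    A small sketch of the kind of measurement described is shown below: counting references to federal Acts in filing text (the per-filing "regulatory energy") and a Shannon-type diversity index over which Acts are cited. The reference list, example text and exact definitions are placeholders assumed for illustration, not the authors' data or metrics.

    ```python
    # Sketch of the kind of measurement the paper describes: count references to
    # federal Acts/Agencies in 10-K texts ("regulatory energy per filing") and a
    # Shannon-type diversity index over which Acts are cited.  The tiny reference
    # list and example text are hypothetical stand-ins, not the authors' data.
    import math, re
    from collections import Counter

    REFERENCES = ["Securities Exchange Act", "Clean Air Act", "Sarbanes-Oxley Act"]

    def count_references(filing_text):
        return Counter(r for r in REFERENCES
                       for _ in re.finditer(re.escape(r), filing_text))

    def shannon_diversity(counts):
        total = sum(counts.values())
        return -sum((n / total) * math.log(n / total) for n in counts.values())

    filing = "Compliance with the Clean Air Act and the Securities Exchange Act ... Clean Air Act"
    c = count_references(filing)
    print(sum(c.values()), shannon_diversity(c))   # energy per filing, diversity
    ```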

  6. MARS code manual volume I: code structure, system models, and solution methods

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu; Yoon, Churl

    2010-02-01

    Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF code. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-Of-State (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This theory manual provides a complete overview of the code structure and major functions of MARS, including the code architecture, hydrodynamic model, heat structures, trip/control system and point reactor kinetics model. Therefore, this report should be very useful for code users. The overall structure of the manual is modeled on that of the RELAP5 manual, and as such the layout is very similar to that of RELAP5. This similarity to the RELAP5 input is intentional, as the input scheme allows minimal modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible.

  7. Improving the quality of clinical coding: a comprehensive audit model

    Directory of Open Access Journals (Sweden)

    Hamid Moghaddasi

    2014-04-01

    Full Text Available Introduction: The review of medical records with the aim of assessing the quality of codes has long been conducted in different countries. Auditing medical coding, as an instructive approach, could help to review the quality of codes objectively using defined attributes, and this in turn would lead to improvement of the quality of codes. Method: The current study aimed to present a model for auditing the quality of clinical codes. The audit model was formed after reviewing other audit models, considering their strengths and weaknesses. A clear definition was presented for each quality attribute, and more detailed criteria were then set for assessing the quality of codes. Results: The audit tool, based on the quality attributes of legibility, relevancy, completeness, accuracy, definition and timeliness, led to the development of an audit model for assessing the quality of medical coding. The Delphi technique was then used to confirm the validity of the model. Conclusion: The inclusive audit model designed here could provide a reliable and valid basis for assessing the quality of codes, considering more quality attributes and their clear definitions. The inter-observer check suggested in the auditing method is of particular importance for ensuring the reliability of coding.

  8. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models

  9. Model uncertainty from a regulatory point of view

    International Nuclear Information System (INIS)

    Abramson, L.R.

    1994-01-01

    This paper discusses model uncertainty in the larger context of knowledge and random uncertainty. It explores some regulatory implications of model uncertainty and argues that, from a regulator's perspective, a conservative approach must be taken. As a consequence of this perspective, averaging over model results is ruled out

  10. Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.

    Science.gov (United States)

    Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic

    2017-03-01

    Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street Press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of 0.79 and 0.70, respectively. For fine-grained coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.
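    The sketch below reproduces only the baseline side of the comparison, under stated assumptions: an L1-penalized ("lasso") logistic regression over bag-of-words features, scored with ROC AUC via scikit-learn. The L-LDA model itself is not shown, and the tiny corpus and session-level labels are invented placeholders rather than the Alexander Street data.

    ```python
    # Sketch of the baseline comparison only (the L-LDA model is not shown): an
    # L1-penalized ("lasso") logistic regression over bag-of-words features from
    # session transcripts, scored with ROC AUC.  The tiny corpus and labels are
    # placeholders, not the Alexander Street data.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    sessions = ["i feel anxious about work", "we discussed medication dosage",
                "sleep has been difficult lately", "adjusting the dosage helped"]
    has_medication_code = [0, 1, 0, 1]            # hypothetical session-level code

    X = CountVectorizer().fit_transform(sessions)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, has_medication_code)
    scores = clf.predict_proba(X)[:, 1]
    print(roc_auc_score(has_medication_code, scores))   # in-sample AUC of the baseline
    ```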

  11. Benchmark Simulation for the Development of the Regulatory Audit Subchannel Analysis Code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, G. H.; Song, C.; Woo, S. W. [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2012-05-15

    For the safe and reliable operation of a reactor, it is important to predict accurately the flow and temperature distributions in the thermal-hydraulic design of a reactor core. A subchannel approach can give reasonable flow and temperature distributions with a short computing time. The Korea Institute of Nuclear Safety (KINS) is presently reviewing the new subchannel code, THALES, which will substitute for both the THINC-IV and TORC codes. To assess the prediction performance of THALES, KINS is developing a subchannel analysis code for the independent audit calculation. The code is based on the workstation version of COBRA-IV-I. The main objective of the present study is to assess the performance of the COBRA-IV-I code by comparing the simulation results with experimental ones for the sample problems.

  12. WWER radial reflector modeling by diffusion codes

    International Nuclear Information System (INIS)

    Petkov, P. T.; Mittag, S.

    2005-01-01

    The two commonly used approaches to describe the WWER radial reflectors in diffusion codes, by albedo on the core-reflector boundary and by a ring of diffusive assembly-size nodes, are discussed. The advantages and disadvantages of the first approach are presented first, then Koebke's equivalence theory is outlined and its implementation for the WWER radial reflectors is discussed. Results for the WWER-1000 reactor are presented. Then the boundary conditions on the outer reflector boundary are discussed. The possibility to divide the library into fuel assembly and reflector parts and to generate each library by a separate code package is discussed. Finally, the homogenization errors for rodded assemblies are presented and discussed (Author)

  13. The GNASH preequilibrium-statistical nuclear model code

    International Nuclear Information System (INIS)

    Arthur, E. D.

    1988-01-01

    The following report is based on materials presented in a series of lectures at the International Center for Theoretical Physics, Trieste, which were designed to describe the GNASH preequilibrium statistical model code and its use. An overview of the code is provided, with emphasis upon the code's calculational capabilities and the theoretical models that have been implemented in it. Two sample problems are discussed: the first deals with neutron reactions on 58Ni; the second illustrates the fission model capabilities implemented in the code and involves n + 235U reactions. Finally, a description is provided of current theoretical model and code development underway. Examples of calculated results using these new capabilities are also given. 19 refs., 17 figs., 3 tabs

  14. COMPBRN III: a computer code for modeling compartment fires

    International Nuclear Information System (INIS)

    Ho, V.; Siu, N.; Apostolakis, G.; Flanagan, G.F.

    1986-07-01

    The computer code COMPBRN III deterministically models the behavior of compartment fires. This code is an improvement of the original COMPBRN codes. It employs a different air entrainment model and numerical scheme to estimate the properties of the ceiling hot gas layer. Moreover, COMPBRN III incorporates a number of improvements in shape factor calculations and error checking, which distinguish it from the COMPBRN II code. This report presents the ceiling hot gas layer model employed by COMPBRN III as well as several other modifications. Information necessary to run COMPBRN III, including descriptions of the required input and resulting output, is also presented. Simulation of experiments and a sample problem are included to demonstrate the usage of the code. 37 figs., 46 refs

  15. Sequence-based model of gap gene regulatory network.

    Science.gov (United States)

    Kozlov, Konstantin; Gursky, Vitaly; Kulakovskiy, Ivan; Samsonova, Maria

    2014-01-01

    The detailed analysis of transcriptional regulation is crucially important for understanding biological processes. The gap gene network in Drosophila attracts large interest among researchers studying mechanisms of transcriptional regulation. It implements the most upstream regulatory layer of the segmentation gene network. The knowledge of molecular mechanisms involved in gap gene regulation is far less complete than that of the genetics of the system. Mathematical modeling goes beyond insights gained by genetic and molecular approaches. It allows us to reconstruct wild-type gene expression patterns in silico, infer the underlying regulatory mechanism and prove its sufficiency. We developed a new model that provides a dynamical description of gap gene regulatory systems, using detailed DNA-based information, as well as spatial transcription factor concentration data at varying time points. We showed that this model correctly reproduces gap gene expression patterns in wild type embryos and is able to predict gap expression patterns in Kr mutants and four reporter constructs. We used a four-fold cross-validation test and fitting to a random dataset to validate the model and prove its sufficiency in data description. The identifiability analysis showed that most model parameters are well identifiable. We reconstructed the gap gene network topology and studied the impact of individual transcription factor binding sites on the model output. We measured this impact by calculating the site regulatory weight as a normalized difference between the residual sum of squares error for the set of all annotated sites and for the set with the site of interest excluded. The reconstructed topology of the gap gene network is in agreement with previous modeling results and data from literature. We showed that 1) the regulatory weights of transcription factor binding sites show very weak correlation with their PWM score; 2) sites with low regulatory weight are important for the model output; 3
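    The site regulatory weight defined verbally above can be read numerically as the normalized change in the residual sum of squares when one site is excluded; a one-line sketch follows. The exact normalization used in the paper may differ, and the RSS values are made-up placeholders.

    ```python
    # Numerical reading of the site "regulatory weight" described above: the
    # normalized change in residual sum of squares when one binding site is removed
    # from the model.  The exact normalization used in the paper may differ; RSS
    # values below are made-up placeholders.

    def regulatory_weight(rss_all_sites, rss_without_site):
        return (rss_without_site - rss_all_sites) / rss_all_sites

    print(regulatory_weight(rss_all_sites=12.4, rss_without_site=15.1))  # ~0.22
    ```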

  16. Phylum-Level Conservation of Regulatory Information in Nematodes despite Extensive Non-coding Sequence Divergence

    Science.gov (United States)

    Gordon, Kacy L.; Arthur, Robert K.; Ruvinsky, Ilya

    2015-01-01

    Gene regulatory information guides development and shapes the course of evolution. To test conservation of gene regulation within the phylum Nematoda, we compared the functions of putative cis-regulatory sequences of four sets of orthologs (unc-47, unc-25, mec-3 and elt-2) from distantly-related nematode species. These species, Caenorhabditis elegans, its congeneric C. briggsae, and three parasitic species Meloidogyne hapla, Brugia malayi, and Trichinella spiralis, represent four of the five major clades in the phylum Nematoda. Despite the great phylogenetic distances sampled and the extensive sequence divergence of nematode genomes, all but one of the regulatory elements we tested are able to drive at least a subset of the expected gene expression patterns. We show that functionally conserved cis-regulatory elements have no more extended sequence similarity to their C. elegans orthologs than would be expected by chance, but they do harbor motifs that are important for proper expression of the C. elegans genes. These motifs are too short to be distinguished from the background level of sequence similarity, and while identical in sequence they are not conserved in orientation or position. Functional tests reveal that some of these motifs contribute to proper expression. Our results suggest that conserved regulatory circuitry can persist despite considerable turnover within cis elements. PMID:26020930

  17. Phylum-Level Conservation of Regulatory Information in Nematodes despite Extensive Non-coding Sequence Divergence.

    Directory of Open Access Journals (Sweden)

    Kacy L Gordon

    2015-05-01

    Full Text Available Gene regulatory information guides development and shapes the course of evolution. To test conservation of gene regulation within the phylum Nematoda, we compared the functions of putative cis-regulatory sequences of four sets of orthologs (unc-47, unc-25, mec-3 and elt-2) from distantly-related nematode species. These species, Caenorhabditis elegans, its congeneric C. briggsae, and three parasitic species Meloidogyne hapla, Brugia malayi, and Trichinella spiralis, represent four of the five major clades in the phylum Nematoda. Despite the great phylogenetic distances sampled and the extensive sequence divergence of nematode genomes, all but one of the regulatory elements we tested are able to drive at least a subset of the expected gene expression patterns. We show that functionally conserved cis-regulatory elements have no more extended sequence similarity to their C. elegans orthologs than would be expected by chance, but they do harbor motifs that are important for proper expression of the C. elegans genes. These motifs are too short to be distinguished from the background level of sequence similarity, and while identical in sequence they are not conserved in orientation or position. Functional tests reveal that some of these motifs contribute to proper expression. Our results suggest that conserved regulatory circuitry can persist despite considerable turnover within cis elements.

  18. Review of SFR Design Safety using Preliminary Regulatory PSA Model

    International Nuclear Information System (INIS)

    Na, Hyun Ju; Lee, Yong Suk; Shin, Andong; Suh, Nam Duk

    2013-01-01

    The major objective of this research is to develop a risk model for regulatory verification of the SFR design and, thereby, make sure that the SFR design is adequate from a risk perspective. In this paper, the development and quantification results of a preliminary regulatory PSA model of the SFR are discussed. It was confirmed that the importance of the PDRC and ADRC dampers is significant, as stated in the results of the KAERI PSA model. However, the importance can change significantly depending on the assumed CCCG and CCF factor of the PDRC and ADRC dampers. The SFR (sodium-cooled fast reactor), a Gen-IV nuclear energy system, is designed to accord with the concepts of stability, sustainability and proliferation resistance. KALIMER-600, which is under development in Korea, includes passive safety systems (e.g. passive reactor shutdown, passive residual heat removal, etc.) as well as active safety systems. Risk analysis from a regulatory perspective is needed to support the regulatory body in its safety and licensing review of the SFR (KALIMER-600). Safety issues should be identified in the early design phase in order to prevent the unexpected cost increase and delay of the SFR licensing schedule that may be caused otherwise
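    To make the sensitivity to the CCF assumption concrete, the sketch below applies a plain beta-factor common-cause-failure model to a redundant pair of dampers. This is not the KINS or KAERI PSA model; the failure probability and beta values are illustrative assumptions.

    ```python
    # Minimal beta-factor common-cause-failure illustration (not the KINS or KAERI
    # PSA model): for a redundant pair of dampers that must both fail to lose the
    # function, the assumed CCF fraction quickly dominates the independent term,
    # which is why the importance result is so sensitive to it.  Numbers are made up.

    def pair_failure_prob(q_total, beta):
        q_ind = (1.0 - beta) * q_total       # independent part of each damper's failure prob
        q_ccf = beta * q_total               # common-cause part (fails both at once)
        return q_ind**2 + q_ccf

    for beta in (0.0, 0.05, 0.10):
        print(f"beta={beta:4.2f}  P(both dampers fail)={pair_failure_prob(1e-3, beta):.2e}")
    ```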

  19. Human population doses: Comparative analysis of CREAM code results with current computer codes of the Nuclear Regulatory Authority; Dosis en la poblacion: comparacion de los resultados del codigo CREAM con resultados de modelos vigentes en la ARN

    Energy Technology Data Exchange (ETDEWEB)

    Alonso Jimenez, Maria Teresa; Curti, Adriana [Autoridad Regulatoria Nuclear, Buenos Aires (Argentina)]. E-mail: mtalonso@sede.arn.gov.ar; acurti@sede.arn.gov.ar

    2001-07-01

    The Nuclear Regulatory Authority is performing an analysis with PC CREAM, developed at the NRPB, with a view to updating the computer programs and models used for calculating the transfer of radionuclides through the environment. To verify the CREAM dose assessments for local scenarios, this paper presents a comparison of population doses assessed with the computer codes currently in use and with CREAM, for unitary releases of the main radionuclides in nuclear power plant discharges. The results of atmospheric dispersion processes and the transfer of radionuclides through the environment for local scenarios are analysed. The programs used are PLUME for atmospheric dispersion, FARMLAND for the transfer of radionuclides into foodstuffs following atmospheric deposition in the terrestrial environment, and ASSESSOR for individual and collective dose assessments. This paper presents the general assumptions made for the dose assessments. The results show some differences between doses due to differences in the models, in the complexity level of the same models, or in parameters. (author)

  20. RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1

    International Nuclear Information System (INIS)

    1995-08-01

    The RELAP5 code has been developed for best estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients, such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes

  1. GASFLOW computer code (physical models and input data)

    International Nuclear Information System (INIS)

    Muehlbauer, Petr

    2007-11-01

    The GASFLOW computer code was developed jointly by the Los Alamos National Laboratory, USA, and Forschungszentrum Karlsruhe, Germany. The code is primarily intended for calculations of the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and in other facilities. The physical models and the input data are described, and a commented simple calculation is presented

  2. Fuel behavior modeling using the MARS computer code

    International Nuclear Information System (INIS)

    Faya, S.C.S.; Faya, A.J.G.

    1983-01-01

    The fuel behaviour modeling code MARS was evaluated against experimental data. Two cases were selected: an early commercial PWR rod (Maine Yankee rod) and an experimental rod from the Canadian BWR program (Canadian rod). The MARS predictions are compared with experimental data and with predictions made by other fuel modeling codes. Improvements are suggested for some fuel behaviour models. MARS results are satisfactory based on the data available. (Author) [pt

  3. MIDAS/PK code development using point kinetics model

    International Nuclear Information System (INIS)

    Song, Y. M.; Park, S. H.

    1999-01-01

    In this study, a MIDAS/PK code has been developed for analyzing ATWS (Anticipated Transients Without Scram) events, which can be severe accident initiating events. MIDAS is an integrated computer code based on the MELCOR code, developed by the Korea Atomic Energy Research Institute to support a severe accident risk reduction strategy. In the meantime, the Chexal-Layman correlation in the current MELCOR, which was developed under BWR conditions, appears to be inappropriate for a PWR. To provide ATWS analysis capability to the MIDAS code, a point kinetics module, PKINETIC, was first developed as a stand-alone code whose reference model was selected from the current accident analysis codes. In the next step, the MIDAS/PK code was developed by coupling PKINETIC with the MIDAS code and inter-connecting several thermal hydraulic parameters between the two codes. Since the major concern in the ATWS analysis is the primary peak pressure during the first few minutes of the accident, the peak pressures from the PKINETIC module and MIDAS/PK are compared with RETRAN calculations, showing good agreement between them. The MIDAS/PK code is considered valuable for analyzing the plant response during an ATWS deterministically, especially for the early domestic Westinghouse plants which rely on operator procedures instead of an AMSAC (ATWS Mitigating System Actuation Circuitry) against ATWS. This ATWS analysis capability is also important from the viewpoint of accident management and mitigation
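    For orientation, the sketch below integrates the one-delayed-group point kinetics equations that a module of this kind solves, for a small step reactivity insertion. It is not the PKINETIC source; the kinetics parameters and reactivity are assumed, generic values.

    ```python
    # One-delayed-group point kinetics sketch of the kind of equations a module
    # such as PKINETIC solves (this is not the PKINETIC source): explicit Euler
    # integration of dn/dt = ((rho - beta)/Lambda) n + lambda C,
    # dC/dt = (beta/Lambda) n - lambda C, for a small step reactivity insertion.

    beta, lam, Lambda = 0.0065, 0.08, 2.0e-5   # delayed fraction, decay const, gen. time
    rho = 0.001                                # inserted reactivity (illustrative)
    n, C = 1.0, beta / (Lambda * lam)          # start from steady state
    dt = 1.0e-5
    for step in range(int(1.0 / dt)):          # integrate 1 s
        dn = ((rho - beta) / Lambda) * n + lam * C
        dC = (beta / Lambda) * n - lam * C
        n, C = n + dt * dn, C + dt * dC
    print(f"relative power after 1 s: {n:.3f}")
    ```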

  4. Description of codes and models to be used in risk assessment

    International Nuclear Information System (INIS)

    1991-09-01

    Human health and environmental risk assessments will be performed as part of the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) remedial investigation/feasibility study (RI/FS) activities at the Hanford Site. Analytical and computer-encoded numerical models are commonly used during both the remedial investigation (RI) and feasibility study (FS) to predict or estimate the concentration of contaminants at the point of exposure to humans and/or the environment. This document has been prepared to identify the computer codes that will be used in support of RI/FS human health and environmental risk assessments at the Hanford Site. In addition to the CERCLA RI/FS process, it is recommended that these computer codes be used when fate and transport analysis is required for other activities. Additional computer codes may be used for other purposes (e.g., design of tracer tests, location of observation wells, etc.). This document provides guidance for unit managers in charge of RI/FS activities. Use of the same computer codes for all analytical activities at the Hanford Site will promote consistency, reduce the effort required to develop, validate, and implement models to simulate Hanford Site conditions, and expedite regulatory review. The discussion provides a description of how models will likely be developed and utilized at the Hanford Site. It is intended to summarize previous environment-related modeling at the Hanford Site and provide background for future model development. The modeling capabilities that are desirable for the Hanford Site and the codes that were evaluated are described. The recommendations include the codes proposed to support future risk assessment modeling at the Hanford Site, and provide the rationale for the codes selected. 27 refs., 3 figs., 1 tab

  5. Coupling a Basin Modeling and a Seismic Code using MOAB

    KAUST Repository

    Yan, Mi; Jordan, Kirk; Kaushik, Dinesh; Perrone, Michael; Sachdeva, Vipin; Tautges, Timothy J.; Magerlein, John

    2012-01-01

    We report on a demonstration of loose multiphysics coupling between a basin modeling code and a seismic code running on a large parallel machine. Multiphysics coupling, which is one critical capability for a high performance computing (HPC) framework, was implemented using the MOAB open-source mesh and field database. MOAB provides for code coupling by storing mesh data and input and output field data for the coupled analysis codes and interpolating the field values between different meshes used by the coupled codes. We found it straightforward to use MOAB to couple the PBSM basin modeling code and the FWI3D seismic code on an IBM Blue Gene/P system. We describe how the coupling was implemented and present benchmarking results for up to 8 racks of Blue Gene/P with 8192 nodes and MPI processes. The coupling code is fast compared to the analysis codes and it scales well up to at least 8192 nodes, indicating that a mesh and field database is an efficient way to implement loose multiphysics coupling for large parallel machines.
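    A generic sketch of the loose-coupling pattern described above is given below: one code advances, its field is interpolated onto the other code's mesh, and the exchange repeats. It deliberately does not reproduce the MOAB C++ API; numpy interpolation stands in for the mesh-and-field database, and the two 'codes' are trivial placeholders.

    ```python
    # Generic sketch of the loose-coupling pattern described above -- store a field
    # from one code, interpolate it onto the other code's mesh, advance, repeat.
    # This does not reproduce the MOAB C++ API; numpy interpolation stands in for
    # the mesh-and-field database, and the two "codes" are trivial placeholders.
    import numpy as np

    basin_mesh   = np.linspace(0.0, 10.0, 11)    # coarse mesh of the basin code
    seismic_mesh = np.linspace(0.0, 10.0, 101)   # finer mesh of the seismic code
    porosity = np.full(basin_mesh.shape, 0.3)    # field owned by the basin code

    for exchange in range(3):                    # coupling iterations
        porosity += 0.01                         # stand-in for a basin-code time step
        porosity_on_seismic = np.interp(seismic_mesh, basin_mesh, porosity)
        velocity = 1500.0 + 2000.0 * porosity_on_seismic   # stand-in seismic update
    print(velocity[:3])
    ```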

  6. Coupling a Basin Modeling and a Seismic Code using MOAB

    KAUST Repository

    Yan, Mi

    2012-06-02

    We report on a demonstration of loose multiphysics coupling between a basin modeling code and a seismic code running on a large parallel machine. Multiphysics coupling, which is one critical capability for a high performance computing (HPC) framework, was implemented using the MOAB open-source mesh and field database. MOAB provides for code coupling by storing mesh data and input and output field data for the coupled analysis codes and interpolating the field values between different meshes used by the coupled codes. We found it straightforward to use MOAB to couple the PBSM basin modeling code and the FWI3D seismic code on an IBM Blue Gene/P system. We describe how the coupling was implemented and present benchmarking results for up to 8 racks of Blue Gene/P with 8192 nodes and MPI processes. The coupling code is fast compared to the analysis codes and it scales well up to at least 8192 nodes, indicating that a mesh and field database is an efficient way to implement loose multiphysics coupling for large parallel machines.

  7. ChIPBase v2.0: decoding transcriptional regulatory networks of non-coding RNAs and protein-coding genes from ChIP-seq data.

    Science.gov (United States)

    Zhou, Ke-Ren; Liu, Shun; Sun, Wen-Ju; Zheng, Ling-Ling; Zhou, Hui; Yang, Jian-Hua; Qu, Liang-Hu

    2017-01-04

    The abnormal transcriptional regulation of non-coding RNAs (ncRNAs) and protein-coding genes (PCGs) contributes to various biological processes and is linked with human diseases, but the underlying mechanisms remain elusive. In this study, we developed ChIPBase v2.0 (http://rna.sysu.edu.cn/chipbase/) to explore the transcriptional regulatory networks of ncRNAs and PCGs. ChIPBase v2.0 has been expanded with ∼10 200 curated ChIP-seq datasets, which represents about a 20-fold expansion compared to the previously released version. We identified thousands of binding motif matrices and their binding sites from ChIP-seq data of DNA-binding proteins and predicted millions of transcriptional regulatory relationships between transcription factors (TFs) and genes. We constructed a 'Regulator' module to predict hundreds of TFs and histone modifications that were involved in or affected the transcription of ncRNAs and PCGs. Moreover, we built a web-based tool, Co-Expression, to explore the co-expression patterns between DNA-binding proteins and various types of genes by integrating the gene expression profiles of ∼10 000 tumor samples and ∼9100 normal tissues and cell lines. ChIPBase also provides a ChIP-Function tool and a genome browser to predict the functions of diverse genes and visualize various ChIP-seq data. This study will greatly expand our understanding of the transcriptional regulation of ncRNAs and PCGs. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. Code Help: Can This Unique State Regulatory Intervention Improve Emergency Department Crowding?

    Science.gov (United States)

    Michael, Sean S; Broach, John P; Kotkowski, Kevin A; Brush, D Eric; Volturo, Gregory A; Reznek, Martin A

    2018-05-01

    Emergency department (ED) crowding adversely affects multiple facets of high-quality care. The Commonwealth of Massachusetts mandates specific, hospital action plans to reduce ED boarding via a mechanism termed "Code Help." Because implementation appears inconsistent even when hospital conditions should have triggered its activation, we hypothesized that compliance with the Code Help policy would be associated with reduction in ED boarding time and total ED length of stay (LOS) for admitted patients, compared to patients seen when the Code Help policy was not followed. This was a retrospective analysis of data collected from electronic, patient-care, timestamp events and from a prospective Code Help registry for consecutive adult patients admitted from the ED at a single academic center during a 15-month period. For each patient, we determined whether the concurrent hospital status complied with the Code Help policy or violated it at the time of admission decision. We then compared ED boarding time and overall ED LOS for patients cared for during periods of Code Help policy compliance and during periods of Code Help policy violation, both with reference to patients cared for during normal operations. Of 89,587 adult patients who presented to the ED during the study period, 24,017 (26.8%) were admitted to an acute care or critical care bed. Boarding time ranged from zero to 67 hours 30 minutes (median 4 hours 31 minutes). Total ED LOS for admitted patients ranged from 11 minutes to 85 hours 25 minutes (median nine hours). Patients admitted during periods of Code Help policy violation experienced significantly longer boarding times (median 20 minutes longer) and total ED LOS (median 46 minutes longer), compared to patients admitted under normal operations. However, patients admitted during Code Help policy compliance did not experience a significant increase in either metric, compared to normal operations. In this single-center experience, implementation of the

  9. EM modeling for GPIR using 3D FDTD modeling codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, S.D.

    1994-10-01

    An analysis of the one-, two-, and three-dimensional electrical characteristics of structural cement and concrete is presented. This work connects experimental efforts in characterizing cement and concrete in the frequency and time domains with the Finite Difference Time Domain (FDTD) modeling efforts for these substances. These efforts include electromagnetic (EM) modeling of simple lossless homogeneous materials with aggregate and targets, and the modeling of dispersive and lossy materials with aggregate and complex target geometries for Ground Penetrating Imaging Radar (GPIR). Two- and three-dimensional FDTD codes (developed at LLNL) were used for the modeling efforts. The purpose of the experimental and modeling efforts is to gain knowledge about the electrical properties of concrete typically used in the construction industry for bridges and other load-bearing structures. The goal is to optimize the performance of a high-sample-rate impulse radar and data acquisition system and to design an antenna system to match the characteristics of this material. Results show agreement to within 2 dB of the amplitudes of the experimental and modeled data, while the frequency peaks correlate to within 10%, the differences being due to the unknown exact nature of the aggregate placement.
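    As a reminder of what the underlying scheme looks like, the sketch below runs a minimal 1-D Yee/FDTD update loop with a lossy dielectric half-space standing in for concrete. It is a toy illustration only, not the LLNL 2-D/3-D codes; the permittivity, loss term, grid size and source are assumed values.

    ```python
    # Minimal 1-D Yee/FDTD update loop with a lossy dielectric slab standing in for
    # concrete -- a toy illustration of the scheme, not the LLNL 2-D/3-D codes.
    import numpy as np

    nz, nt = 400, 600
    eps_r = np.ones(nz);  eps_r[200:] = 6.0        # "concrete" half-space (assumed value)
    loss  = np.zeros(nz); loss[200:] = 0.01        # simple loss term (assumed)
    ez = np.zeros(nz); hy = np.zeros(nz)

    for n in range(nt):
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])                      # H update (Courant no. 0.5)
        ez[1:-1] = (1 - loss[1:-1]) * ez[1:-1] + \
                   0.5 / eps_r[1:-1] * (hy[1:-1] - hy[:-2])      # lossy E update
        ez[50] += np.exp(-((n - 60) / 20.0) ** 2)                # impulsive source
    print(ez[180], ez[220])    # field just before and inside the slab
    ```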

  10. Cavitation Modeling in Euler and Navier-Stokes Codes

    Science.gov (United States)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Many previous researchers have modeled sheet cavitation by means of a constant pressure solution in the cavity region coupled with a velocity potential formulation for the outer flow. The present paper discusses the issues involved in extending these cavitation models to Euler or Navier-Stokes codes. The approach taken is to start from a velocity potential model to ensure our results are compatible with those of previous researchers and available experimental data, and then to implement this model in both Euler and Navier-Stokes codes. The model is then augmented in the Navier-Stokes code by the inclusion of the energy equation which allows the effect of subcooling in the vicinity of the cavity interface to be modeled to take into account the experimentally observed reduction in cavity pressures that occurs in cryogenic fluids such as liquid hydrogen. Although our goal is to assess the practicality of implementing these cavitation models in existing three-dimensional, turbomachinery codes, the emphasis in the present paper will center on two-dimensional computations, most specifically isolated airfoils and cascades. Comparisons between velocity potential, Euler and Navier-Stokes implementations indicate they all produce consistent predictions. Comparisons with experimental results also indicate that the predictions are qualitatively correct and give a reasonable first estimate of sheet cavitation effects in both cryogenic and non-cryogenic fluids. The impact on CPU time and the code modifications required suggests that these models are appropriate for incorporation in current generation turbomachinery codes.
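    A small worked evaluation of the closure mentioned above is given below: the cavitation number is computed from free-stream conditions, and cells whose pressure would fall below the vapor pressure are assigned the constant cavity pressure p_v. Fluid properties are illustrative assumptions (roughly cold water), not values from the paper.

    ```python
    # Small worked evaluation of the constant-pressure cavity closure mentioned
    # above: cells where the local pressure would fall below the vapor pressure are
    # assigned p = p_v, and the cavitation number sigma characterizes the case.
    # Fluid properties are illustrative assumptions (roughly cold water).
    p_inf, p_v, rho, U = 101325.0, 2300.0, 998.0, 20.0
    sigma = (p_inf - p_v) / (0.5 * rho * U * U)
    print(f"cavitation number sigma = {sigma:.3f}")

    local_p = [120000.0, 40000.0, 1500.0, -5000.0]          # sample cell pressures (Pa)
    clipped = [max(p, p_v) for p in local_p]                # constant-pressure cavity region
    print(clipped)
    ```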

  11. Modeling stochasticity and robustness in gene regulatory networks.

    Science.gov (United States)

    Garg, Abhishek; Mohanram, Kartik; Di Cara, Alessandro; De Micheli, Giovanni; Xenarios, Ioannis

    2009-06-15

    Understanding gene regulation in biological processes and modeling the robustness of underlying regulatory networks is an important problem that is currently being addressed by computational systems biologists. Lately, there has been a renewed interest in Boolean modeling techniques for gene regulatory networks (GRNs). However, due to their deterministic nature, it is often difficult to identify whether these modeling approaches are robust to the addition of stochastic noise that is widespread in gene regulatory processes. Stochasticity in Boolean models of GRNs has been addressed relatively sparingly in the past, mainly by flipping the expression of genes between different expression levels with a predefined probability. This stochasticity in nodes (SIN) model leads to over representation of noise in GRNs and hence non-correspondence with biological observations. In this article, we introduce the stochasticity in functions (SIF) model for simulating stochasticity in Boolean models of GRNs. By providing biological motivation behind the use of the SIF model and applying it to the T-helper and T-cell activation networks, we show that the SIF model provides more biologically robust results than the existing SIN model of stochasticity in GRNs. Algorithms are made available under our Boolean modeling toolbox, GenYsis. The software binaries can be downloaded from http://si2.epfl.ch/~garg/genysis.html.
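    To make the contrast concrete, the sketch below runs a tiny synchronous Boolean network with SIN-style noise, i.e., each node's state is flipped with a fixed probability after every update. The three-gene network and flip probability are invented for illustration and are not the T-helper or T-cell activation networks, and the SIF alternative proposed in the paper is not reproduced here.

    ```python
    # Tiny synchronous Boolean network with SIN-style noise (flip a node's state
    # with a fixed probability after each update), to make the contrast with the
    # proposed SIF model concrete.  The three-gene network and flip probability
    # are illustrative, not the T-helper network from the paper.
    import random

    def update(state):
        a, b, c = state
        return (not c, a and not c, b)       # toy regulatory functions

    def step_with_sin_noise(state, p_flip=0.01):
        nxt = list(update(state))
        for i in range(len(nxt)):            # SIN: perturb each node independently
            if random.random() < p_flip:
                nxt[i] = not nxt[i]
        return tuple(nxt)

    random.seed(0)
    s = (True, False, False)
    for _ in range(10):
        s = step_with_sin_noise(s)
    print(s)
    ```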

  12. Quality Improvement of MARS Code and Establishment of Code Coupling

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Jeong, Jae Jun; Kim, Kyung Doo

    2010-04-01

    The improvement of MARS code quality and its coupling with the regulatory auditing code have been accomplished for the establishment of a self-reliant, technology-based regulatory auditing system. A unified auditing system code was also realized by implementing the CANDU-specific models and correlations. As part of the quality assurance activities, various QA reports were published through the code assessments. The code manuals were updated, and a new manual describing the new models and correlations was published. The code coupling methods were verified through exercises of plant application. Education and training seminars and technology transfer were performed for the code users. The developed MARS-KS is utilized as a reliable auditing tool for resolving safety issues and for other regulatory calculations. The code can also be utilized as a base technology for GEN IV reactor applications.

  13. RELAP5/MOD3 code coupling model

    International Nuclear Information System (INIS)

    Martin, R.P.; Johnsen, G.W.

    1994-01-01

    A new capability has been incorporated into RELAP5/MOD3 that enables the coupling of RELAP5/MOD3 to other computer codes. The new capability has been designed to support analysis of the new advanced reactor concepts. Its user features rely solely on new RELAP5 "styled" input and the Parallel Virtual Machine (PVM) software, which facilitates process management and distributed communication of multiprocess problems. RELAP5/MOD3 manages the input processing, communication instruction, process synchronization, and its own send and receive data processing. The flexible capability requires that an explicit coupling be established, which updates boundary conditions at discrete time intervals. Two test cases are presented that demonstrate the functionality, applicability, and issues involving use of this capability.
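
    The explicit coupling scheme described above can be pictured as two solvers that each advance over an exchange interval and then swap boundary conditions. The sketch below is only a schematic of that idea with placeholder physics; it does not use PVM, and the solver functions and variables are hypothetical stand-ins, not RELAP5.

    ```python
    # Schematic of explicit code coupling: boundary conditions are exchanged at
    # discrete time intervals.  Both "codes" are toy stand-ins with made-up physics.
    from dataclasses import dataclass

    @dataclass
    class Boundary:
        pressure: float      # Pa
        temperature: float   # K

    def advance_system_code(bc: Boundary, dt: float) -> Boundary:
        # placeholder physics for the system-level code
        return Boundary(bc.pressure * 0.999, bc.temperature + 0.1 * dt)

    def advance_component_code(bc: Boundary, dt: float) -> Boundary:
        # placeholder physics for the coupled component code
        return Boundary(bc.pressure + 50.0 * dt, bc.temperature * 1.0005)

    def couple(t_end: float, dt_exchange: float) -> None:
        bc_a = Boundary(15.0e6, 560.0)   # illustrative initial conditions
        bc_b = Boundary(15.0e6, 558.0)
        t = 0.0
        while t < t_end:
            # each code advances independently over the exchange interval ...
            new_a = advance_system_code(bc_b, dt_exchange)
            new_b = advance_component_code(bc_a, dt_exchange)
            # ... then boundary conditions are swapped (explicit coupling)
            bc_a, bc_b = new_a, new_b
            t += dt_exchange
        print(bc_a, bc_b)

    couple(t_end=10.0, dt_exchange=0.5)
    ```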

  14. Models in environmental regulatory decision making

    National Research Council Canada - National Science Library

    Committee on Models in the Regulatory Decision Process, National Research Council

    2007-01-01

    .... Models help EPA explain environmental phenomena in settings where direct observations are limited or unavailable, and anticipate the effects of agency policies on the environment, human health and the economy...

  15. Partitioning of genetic variation between regulatory and coding gene segments: the predominance of software variation in genes encoding introvert proteins.

    Science.gov (United States)

    Mitchison, A

    1997-01-01

    In considering genetic variation in eukaryotes, a fundamental distinction can be made between variation in regulatory (software) and coding (hardware) gene segments. For quantitative traits the bulk of variation, particularly that near the population mean, appears to reside in regulatory segments. The main exceptions to this rule concern proteins which handle extrinsic substances, here termed extrovert proteins. The immune system includes an unusually large proportion of this exceptional category, but even so its chief source of variation may well be polymorphism in regulatory gene segments. The main evidence for this view emerges from genome scanning for quantitative trait loci (QTL), which in the case of the immune system points to a major contribution of pro-inflammatory cytokine genes. Further support comes from sequencing of major histocompatibility complex (Mhc) class II promoters, where a high level of polymorphism has been detected. These Mhc promoters appear to act, in part at least, by gating the back-signal from T cells into antigen-presenting cells. Both these forms of polymorphism are likely to be sustained by the need for flexibility in the immune response. Future work on promoter polymorphism is likely to benefit from the input from genome informatics.

  16. Transcriptomic Analysis of Long Non-Coding RNAs and Coding Genes Uncovers a Complex Regulatory Network That Is Involved in Maize Seed Development

    Directory of Open Access Journals (Sweden)

    Ming Zhu

    2017-10-01

    Long non-coding RNAs (lncRNAs) have been reported to be involved in the development of the maize plant. However, few studies have focused on maize seed development. Here, we identified 753 lncRNA candidates in the maize genome from six seed samples. Similar to the mRNAs, lncRNAs showed tissue- and developmental-stage-specific and differential expression, indicating their putative role in seed development. Increasing evidence shows that crosstalk among RNAs mediated by shared microRNAs (miRNAs) represents a novel layer of gene regulation, which plays important roles in plant development. Functional roles and regulatory mechanisms of lncRNAs as competing endogenous RNAs (ceRNAs) in plants, particularly in maize seed development, are unclear. We combined analyses of 17 consistently altered lncRNAs, 840 mRNAs and known miRNAs to investigate, genome-wide, potential lncRNA-mediated ceRNAs based on the “ceRNA hypothesis”. The results uncovered seven novel lncRNAs as potential functional ceRNAs. Functional analyses of their competitive coding-gene partners, based on Gene Ontology (GO) and KEGG biological pathways, demonstrated that the combined effects of multiple ceRNAs can have major impacts on general developmental and metabolic processes in the maize seed. These findings provide a useful platform for uncovering novel mechanisms of maize seed development and may provide opportunities for the functional characterization of individual lncRNAs in future studies.

  17. Case studies in Gaussian process modelling of computer codes

    International Nuclear Information System (INIS)

    Kennedy, Marc C.; Anderson, Clive W.; Conti, Stefano; O'Hagan, Anthony

    2006-01-01

    In this paper we present a number of recent applications in which an emulator of a computer code is created using a Gaussian process model. Tools are then applied to the emulator to perform sensitivity analysis and uncertainty analysis. Sensitivity analysis is used both as an aid to model improvement and as a guide to how much the output uncertainty might be reduced by learning about specific inputs. Uncertainty analysis allows us to reflect output uncertainty due to unknown input parameters, when the finished code is used for prediction. The computer codes themselves are currently being developed within the UK Centre for Terrestrial Carbon Dynamics
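
    The emulation workflow sketched in this abstract (fit a Gaussian process to a modest number of code runs, then use the cheap emulator for uncertainty propagation) can be illustrated in a few lines. The sketch below is a generic example with an invented stand-in "simulator", not the terrestrial carbon code or the authors' tools.

    ```python
    # Minimal emulator sketch: fit a Gaussian process to a handful of code runs,
    # then propagate input uncertainty through the emulator instead of the
    # expensive code.  The "simulator" is a placeholder function.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def simulator(x):                       # stand-in for an expensive code run
        return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

    rng = np.random.default_rng(1)
    X_train = rng.uniform(0, 1, size=(30, 2))        # design points (code runs)
    y_train = simulator(X_train)

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                                  normalize_y=True).fit(X_train, y_train)

    # Uncertainty analysis: sample uncertain inputs, push them through the emulator
    X_unc = rng.normal(loc=0.5, scale=0.1, size=(10_000, 2))
    mean_pred, sd_pred = gp.predict(X_unc, return_std=True)
    print("output mean %.3f, spread from inputs %.3f, mean emulator sd %.3f"
          % (mean_pred.mean(), mean_pred.std(), sd_pred.mean()))
    ```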

  18. Models in environmental regulatory decision making

    National Research Council Canada - National Science Library

    Committee on Models in the Regulatory Decision Process; National Research Council; Division on Earth and Life Studies; National Research Council

    2007-01-01

    .... The centerpiece of the book's recommended vision is a life-cycle approach to model evaluation which includes peer review, corroboration of results, and other activities. This will enhance the agency's ability to respond to requirements from a 2001 law on information quality and improve policy development and implementation.

  19. Task-based dermal exposure models for regulatory risk assessment

    NARCIS (Netherlands)

    Warren, N.D.; Marquart, H.; Christopher, Y.; Laitinen, J.; Hemmen, J.J. van

    2006-01-01

    The regulatory risk assessment of chemicals requires the estimation of occupational dermal exposure. Until recently, the models used were either based on limited data or were specific to a particular class of chemical or application. The EU project RISKOFDERM has gathered a considerable number of

  20. ATHENA code manual. Volume 1. Code structure, system models, and solution methods

    International Nuclear Information System (INIS)

    Carlson, K.E.; Roth, P.A.; Ransom, V.H.

    1986-09-01

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code has been developed to perform transient simulation of the thermal hydraulic systems which may be found in fusion reactors, space reactors, and other advanced systems. A generic modeling approach is utilized which permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of a complete facility. Several working fluids are available to be used in one or more interacting loops. Different loops may have different fluids with thermal connections between loops. The modeling theory and associated numerical schemes are documented in Volume I in order to acquaint the user with the modeling base and thus aid effective use of the code. The second volume contains detailed instructions for input data preparation

  1. ADVANCED ELECTRIC AND MAGNETIC MATERIAL MODELS FOR FDTD ELECTROMAGNETIC CODES

    Energy Technology Data Exchange (ETDEWEB)

    Poole, B R; Nelson, S D; Langdon, S

    2005-05-05

    The modeling of dielectric and magnetic materials in the time domain is required for pulse power applications, pulsed induction accelerators, and advanced transmission lines. For example, most induction accelerator modules require the use of magnetic materials to provide adequate Volt-sec during the acceleration pulse. These models require hysteresis and saturation to simulate the saturation wavefront in a multipulse environment. In high voltage transmission line applications such as shock or soliton lines, the dielectric is operating in a highly nonlinear regime, which requires nonlinear models. Simple 1-D models are developed for fast parameterization of transmission line structures. In the case of nonlinear dielectrics, a simple analytic model describing the permittivity in terms of electric field is used in a 3-D finite difference time domain code (FDTD). In the case of magnetic materials, both rate-independent and rate-dependent Hodgdon magnetic material models have been implemented into 3-D FDTD codes and 1-D codes.
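
    As a concrete illustration of the nonlinear-dielectric idea, the sketch below runs a one-dimensional FDTD update in which the permittivity depends on the local electric field. It is a minimal sketch with assumed material coefficients, not the LLNL implementation and not the Hodgdon magnetic model.

    ```python
    # One-dimensional FDTD sketch with a field-dependent permittivity
    # eps(E) = eps0 * (eps_r - a*E^2), loosely in the spirit of the "simple
    # analytic model describing the permittivity in terms of electric field".
    # All material coefficients and the source are illustrative.
    import numpy as np

    c0, eps0, mu0 = 3.0e8, 8.854e-12, 4e-7 * np.pi
    nx, dx = 400, 1e-3
    dt = 0.5 * dx / c0                      # Courant-stable time step

    E = np.zeros(nx)
    H = np.zeros(nx - 1)
    eps_r, a = 4.0, 1e-3                    # hypothetical nonlinear dielectric

    for n in range(1000):
        H += dt / (mu0 * dx) * np.diff(E)               # update H from curl E
        eps = eps0 * (eps_r - a * E[1:-1] ** 2)         # field-dependent permittivity
        E[1:-1] += dt / (eps * dx) * np.diff(H)         # update E from curl H
        E[50] += np.exp(-((n - 60) / 20.0) ** 2)        # soft Gaussian source

    print("peak |E| =", np.abs(E).max())
    ```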

  2. ADVANCED ELECTRIC AND MAGNETIC MATERIAL MODELS FOR FDTD ELECTROMAGNETIC CODES

    International Nuclear Information System (INIS)

    Poole, B R; Nelson, S D; Langdon, S

    2005-01-01

    The modeling of dielectric and magnetic materials in the time domain is required for pulse power applications, pulsed induction accelerators, and advanced transmission lines. For example, most induction accelerator modules require the use of magnetic materials to provide adequate Volt-sec during the acceleration pulse. These models require hysteresis and saturation to simulate the saturation wavefront in a multipulse environment. In high voltage transmission line applications such as shock or soliton lines, the dielectric is operating in a highly nonlinear regime, which requires nonlinear models. Simple 1-D models are developed for fast parameterization of transmission line structures. In the case of nonlinear dielectrics, a simple analytic model describing the permittivity in terms of electric field is used in a 3-D finite difference time domain code (FDTD). In the case of magnetic materials, both rate-independent and rate-dependent Hodgdon magnetic material models have been implemented into 3-D FDTD codes and 1-D codes.

  3. Development of the Level 1 PSA Model for PGSFR Regulatory

    International Nuclear Information System (INIS)

    Na, Hyun Ju; Lee, Yong Suk; Shin, Andong; Suh, Nam Duk

    2014-01-01

    SFR (sodium-cooled fast reactor) is a Gen-IV nuclear energy system, which is designed for stability, sustainability and proliferation resistance. KALIMER-600 and PGSFR (Prototype Gen-IV SFR) are under development in Korea with enhanced passive safety concepts, e.g., passive reactor shutdown, passive residual heat removal, etc. Risk analysis from a regulatory perspective is necessary for the regulatory body to support the safety and licensing review of SFR. Safety issues should be identified in the early design phase in order to prevent unexpected cost increases and delays to the PGSFR licensing schedule. In this respect, a preliminary PSA model of KALIMER-600 had been developed for regulatory purposes. In this study, the development of the Level 1 PSA model is presented. The important impact factors in the risk analysis for the PGSFR, such as Core Damage Frequency (CDF), have been identified and the related safety insights have been derived. The Level 1 PSA model for PGSFR regulation is developed and the risk analysis is conducted. Regarding CDF, LOISF frequency, the uncertainty parameter for passive system CCF, loss of the 125V DC control center bus, and damper CCF are identified as the important factors. Sensitivity analyses show that the CDF would change (be lowered) depending on their values.

  4. Code Generation for Protocols from CPN models Annotated with Pragmatics

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael; Kindler, Ekkart

    ... modelling languages, MDE further has the advantage that models are amenable to model checking, which allows key behavioural properties of the software design to be verified. The combination of formally verified models and automated code generation contributes to a high degree of assurance that the resulting software implementation satisfies the properties verified for the model. Coloured Petri Nets (CPNs) have been widely used to model and verify protocol software, but limited work exists on using CPN models of protocol software as a basis for automated code generation. In this report, we present an approach for generating protocol software from a restricted class of CPN models. The class of CPN models considered aims at being descriptive in that the models are intended to be helpful in understanding and conveying the operation of the protocol. At the same time, a descriptive model is close to a verifiable version...

  5. LMFBR models for the ORIGEN2 computer code

    International Nuclear Information System (INIS)

    Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.

    1981-10-01

    Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-238U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.

  6. CMCpy: Genetic Code-Message Coevolution Models in Python

    Science.gov (United States)

    Becich, Peter J.; Stark, Brian P.; Bhat, Harish S.; Ardell, David H.

    2013-01-01

    Code-message coevolution (CMC) models represent coevolution of a genetic code and a population of protein-coding genes (“messages”). Formally, CMC models are sets of quasispecies coupled together for fitness through a shared genetic code. Although CMC models display plausible explanations for the origin of multiple genetic code traits by natural selection, useful modern implementations of CMC models are not currently available. To meet this need we present CMCpy, an object-oriented Python API and command-line executable front-end that can reproduce all published results of CMC models. CMCpy implements multiple solvers for leading eigenpairs of quasispecies models. We also present novel analytical results that extend and generalize applications of perturbation theory to quasispecies models and pioneer the application of a homotopy method for quasispecies with non-unique maximally fit genotypes. Our results therefore facilitate the computational and analytical study of a variety of evolutionary systems. CMCpy is free open-source software available from http://pypi.python.org/pypi/CMCpy/. PMID:23532367
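
    One small ingredient mentioned above, solving for the leading eigenpair of a quasispecies model, can be illustrated with simple power iteration. The matrix below is an invented toy example, not a CMCpy data set, and this is not the CMCpy solver interface.

    ```python
    # Toy sketch: leading eigenpair of a quasispecies matrix W = Q @ diag(f),
    # where f holds genotype fitnesses and Q[i, j] is the probability that
    # genotype j mutates into genotype i.  Values are illustrative only.
    import numpy as np

    f = np.array([1.0, 0.6, 0.4])                   # genotype fitnesses
    Q = np.array([[0.90, 0.05, 0.05],
                  [0.05, 0.90, 0.05],
                  [0.05, 0.05, 0.90]])              # columns sum to 1
    W = Q @ np.diag(f)

    x = np.ones(3) / 3.0                            # power iteration
    for _ in range(200):
        x = W @ x
        x /= x.sum()
    lam = (W @ x).sum() / x.sum()                   # mean fitness = leading eigenvalue
    print("equilibrium genotype frequencies:", np.round(x, 3),
          "mean fitness:", round(lam, 3))
    ```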

  7. WDEC: A Code for Modeling White Dwarf Structure and Pulsations

    Science.gov (United States)

    Bischoff-Kim, Agnès; Montgomery, Michael H.

    2018-05-01

    The White Dwarf Evolution Code (WDEC), written in Fortran, makes models of white dwarf stars. It is fast, versatile, and includes the latest physics. The code evolves hot (∼100,000 K) input models down to a chosen effective temperature by relaxing the models to be solutions of the equations of stellar structure. The code can also be used to obtain g-mode oscillation modes for the models. WDEC has a long history going back to the late 1960s. Over the years, it has been updated and re-packaged for modern computer architectures and has specifically been used in computationally intensive asteroseismic fitting. Generations of white dwarf astronomers and dozens of publications have made use of the WDEC, although the last true instrument paper is the original one, published in 1975. This paper discusses the history of the code, necessary to understand why it works the way it does, details the physics and features in the code today, and points the reader to where to find the code and a user guide.

  8. "Coding" and "Decoding": hypothesis for the regulatory mechanism involved in heparan sulfate biosynthesis.

    Science.gov (United States)

    Zhang, Xu; Wang, Fengshan; Sheng, Juzheng

    2016-06-16

    Heparan sulfate (HS) is widely distributed in mammalian tissues in the form of HS proteoglycans, which play essential roles in various physiological and pathological processes. In contrast to the template-guided processes involved in the synthesis of DNA and proteins, HS biosynthesis is not believed to involve a template. However, it appears that the final structure of HS chains is strictly regulated. Herein, we report a research-based hypothesis that two major steps, namely "coding" and "decoding" steps, are involved in the biosynthesis of HS, which strictly regulate its chemical structure and biological activity. The "coding" process in this context is based on the distribution of sulfate moieties on the amino groups of the glucosamine residues in the HS chains. The sulfation of these amine groups is catalyzed by N-deacetylase/N-sulfotransferase, which has four isozymes. The composition and distribution of sulfate groups and iduronic acid residues on the glycan chains of HS are determined by several other modification enzymes, which can recognize these coding sequences (i.e., the "decoding" process). The degree and pattern of the sulfation and epimerization in the HS chains determine the extent of their interactions with several different protein factors, which further influences their biological activity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Marketing of breast-milk substitutes in Zambia: evaluation of compliance to the international regulatory code.

    Science.gov (United States)

    Funduluka, P; Bosomprah, S; Chilengi, R; Mugode, R H; Bwembya, P A; Mudenda, B

    2018-03-01

    We sought to assess the level of non-compliance with the International Code of Marketing of Breast-milk Substitutes (BMS) and/or Statutory Instrument (SI) Number 48 of 2006 of the Laws of Zambia in two suburbs, Kalingalinga and Chelstone, in Zambia. This was a cross-sectional survey. Shop owners (80), health workers (8) and mothers (214) were interviewed. BMS labels and advertisements (62) were observed. The primary outcome was mean non-compliance, defined as the number of article violations divided by the total 'obtainable' violations. The score ranges from 0 to 1, with 0 representing no violations in any article and 1 representing violations in all articles. A total of 62 BMS were assessed. The mean non-compliance score by manufacturers, in terms of violations in the labelling of BMS, was 0.33 (SD = 0.28; 95% CI: 0.26, 0.40). These violations were mainly due to labels containing pictures or graphics representing an infant. Eighty shops were also assessed, with the mean non-compliance score in respect of violations in tie-in sales, special displays, and contact with mothers at the shop estimated as 0.14 (SD = 0.14; 95% CI: 0.11, 0.18). Non-compliance with the Code and/or the local SI is high 10 years after domestication of the Code.

  10. Improvement of a combustion model in MELCOR code

    International Nuclear Information System (INIS)

    Ogino, Masao; Hashimoto, Takashi

    1999-01-01

    NUPEC has been improving the hydrogen combustion model in the MELCOR code for severe accident analysis. In the proposed combustion model, the flame velocity in a node was predicted using five different flame front shapes: fireball, prism, bubble, spherical jet, and plane jet. For validation of the proposed model, the results of the Battelle multi-compartment hydrogen combustion test were used. The test cases selected for the study were Hx-6, 13, 14, 20 and Ix-2, which had two, three or four compartments under homogeneous hydrogen concentrations of 5 to 10 vol%. On the whole, the proposed model could predict the combustion behavior in multi-compartment containment geometry well. The MELCOR code, incorporating the present combustion model, can simulate combustion behavior during a severe accident with acceptable computing time and a reasonable degree of accuracy. The applicability study of the improved MELCOR code to actual reactor plants will be continued. (author)

  11. NRC model simulations in support of the hydrologic code intercomparison study (HYDROCOIN): Level 1-code verification

    International Nuclear Information System (INIS)

    1988-03-01

    HYDROCOIN is an international study for examining ground-water flow modeling strategies and their influence on safety assessments of geologic repositories for nuclear waste. This report summarizes only the combined NRC project teams' simulation efforts on the computer code bench-marking problems. The codes used to simulate these seven problems were SWIFT II, FEMWATER, UNSAT2M, USGS-3D, and TOUGH. In general, linear problems involving scalars such as hydraulic head were accurately simulated by both finite-difference and finite-element solution algorithms. Both types of codes produced accurate results even for complex geometries such as intersecting fractures. Difficulties were encountered in solving problems that involved nonlinear effects such as density-driven flow and unsaturated flow. In order to fully evaluate the accuracy of these codes, post-processing of results using particle tracking algorithms and calculating fluxes were examined. This proved very valuable by uncovering disagreements among code results even though the hydraulic-head solutions had been in agreement. 9 refs., 111 figs., 6 tabs

  12. JPEG2000 COMPRESSION CODING USING HUMAN VISUAL SYSTEM MODEL

    Institute of Scientific and Technical Information of China (English)

    Xiao Jiang; Wu Chengke

    2005-01-01

    In order to apply the Human Visual System (HVS) model to the JPEG2000 standard, several implementation alternatives are discussed and a new scheme of visual optimization is introduced that modifies the slope of the rate-distortion curve. The novelty is that the method of visual weighting does not lift the coefficients in the wavelet domain, but is instead implemented through code stream organization. It retains all the features of Embedded Block Coding with Optimized Truncation (EBCOT), such as resolution progressiveness, good robustness against error bit spread, and compatibility with lossless compression. Performing better than other methods, it keeps the standard codestream and decompression time the shortest and offers VIsual Progressive (VIP) coding.

  13. The ELOCA fuel modelling code: past, present and future

    International Nuclear Information System (INIS)

    Williams, A.F.

    2005-01-01

    ELOCA is the Industry Standard Toolset (IST) computer code for modelling CANDU fuel under the transient coolant conditions typical of an accident scenario. Since its original inception in the early 1970's, the code has undergone continual development and improvement. The code now embodies much of the knowledge and experience of fuel behaviour gained by the Canadian nuclear industry over this period. ELOCA has proven to be a valuable tool for the safety analyst, and continues to be used extensively to support the licensing cases of CANDU reactors. This paper provides a brief and much simplified view of this development history, its current status, and plans for future development. (author)

  14. Hamming Code Based Watermarking Scheme for 3D Model Verification

    Directory of Open Access Journals (Sweden)

    Jen-Tse Wang

    2014-01-01

    Due to the explosive growth of the Internet and the maturing of 3D hardware techniques, protecting 3D objects has become a more and more important issue. In this paper, a public Hamming-code-based fragile watermarking technique is proposed for 3D object verification. An adaptive watermark is generated from each cover model by using the Hamming code technique. A simple least significant bit (LSB) substitution technique is employed for watermark embedding. In the extraction stage, the Hamming-code-based watermark can be verified by using Hamming code checking, without embedding any verification information. Experimental results show that 100% of the vertices of the cover model can be watermarked, extracted, and verified. They also show that the proposed method can improve security and achieve low distortion of the stego object.
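
    The error-checking building block named above is standard Hamming coding; the sketch below shows a Hamming(7,4) encode/check pair of the kind such a scheme could rely on. The mesh traversal and LSB embedding into vertex coordinates are omitted, and the paper's exact watermark construction is not reproduced here.

    ```python
    # Hamming(7,4) sketch: encode 4 data bits into a 7-bit codeword, then verify
    # and correct via the syndrome -- no extra verification data is stored.
    import numpy as np

    def hamming74_encode(d):                 # d = 4 data bits
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return np.array([p1, p2, d1, p3, d2, d3, d4])   # codeword positions 1..7

    def hamming74_check(c):
        """Return (corrected codeword, error position or 0 if clean)."""
        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])
        s = H @ c % 2
        pos = int(s[0] + 2 * s[1] + 4 * s[2])           # 1-based error position
        if pos:
            c = c.copy()
            c[pos - 1] ^= 1                              # correct single-bit error
        return c, pos

    cw = hamming74_encode([1, 0, 1, 1])
    tampered = cw.copy()
    tampered[5] ^= 1                                     # simulate a modified LSB
    fixed, where = hamming74_check(tampered)
    print("tampered bit at position", where, "recovered:", np.array_equal(fixed, cw))
    ```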

  15. Modeling Guidelines for Code Generation in the Railway Signaling Context

    Science.gov (United States)

    Ferrari, Alessio; Bacherini, Stefano; Fantechi, Alessandro; Zingoni, Niccolo

    2009-01-01

    Modeling guidelines constitute one of the fundamental cornerstones of Model Based Development. Their relevance is essential when dealing with code generation in the safety-critical domain. This article presents the experience of a railway signaling systems manufacturer on this issue. The introduction of Model-Based Development (MBD) and code generation in the industrial safety-critical sector created a crucial paradigm shift in the development process of dependable systems. While traditional software development focuses on the code, with MBD practices the focus shifts to model abstractions. The change has fundamental implications for safety-critical systems, which still need to guarantee a high degree of confidence also at code level. Usage of the Simulink/Stateflow platform for modeling, which is a de facto standard in control software development, does not by itself ensure production of high-quality dependable code. This issue has been addressed by companies through the definition of modeling rules imposing restrictions on the usage of design tool components, in order to enable production of qualified code. The MAAB Control Algorithm Modeling Guidelines (MathWorks Automotive Advisory Board) [3] are a well-established set of publicly available rules for modeling with Simulink/Stateflow. This set of recommendations has been developed by a group of OEMs and suppliers in the automotive sector with the objective of enforcing and easing the usage of the MathWorks tools within the automotive industry. The guidelines were published in 2001 and afterwards revised in 2007 in order to integrate some additional rules developed by the Japanese division of MAAB [5]. The scope of the current edition of the guidelines ranges from model maintainability and readability to code generation issues. The rules are conceived as a reference baseline and therefore they need to be tailored to comply with the characteristics of each industrial context. Customization of these

  16. Implications of duplicated cis-regulatory elements in the evolution of metazoans: the DDI model or how simplicity begets novelty.

    Science.gov (United States)

    Jiménez-Delgado, Senda; Pascual-Anaya, Juan; Garcia-Fernàndez, Jordi

    2009-07-01

    The discovery that most regulatory genes were conserved among animals from distant phyla challenged the idea that gene duplication and divergence of homologous coding sequences were the basis for major morphological changes in metazoan evolution. In recent years, however, interest in the roles, conservation and changes of non-coding sequences has grown in parallel with genome sequencing projects. Presently, many independent studies are highlighting the importance that subtle changes in cis-regulatory regions had in the evolution of morphology through the Animal Kingdom. Here we show and discuss some of these studies, and underscore the future of cis-Evo-Devo research. We also explore how gene duplication, which includes duplication of regulatory regions, may have been critical for the spatial or temporal co-option of new regulatory networks, causing the deployment of new transcriptome scenarios, and how these induced morphological changes were critical for the evolution of new forms. Forty years after Susumu Ohno's famous sentence 'natural selection merely modifies, while redundancy creates', we suggest the alternative: 'natural selection modifies, while redundancy of cis-regulatory elements innovates', and propose the Duplication-Degeneration-Innovation model to explain the increased evolvability of duplicated cis-regulatory regions. Paradoxically, making regulation simpler by subfunctionalization paved the way for future complexity or, in other words, 'to make it simple to make it complex'.

  17. MARS CODE MANUAL VOLUME V: Models and Correlations

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Bae, Sung Won; Lee, Seung Wook; Yoon, Churl; Hwang, Moon Kyu; Kim, Kyung Doo; Jeong, Jae Jun

    2010-02-01

    The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art, realistic thermal hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF codes. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-Of-State (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This models and correlations manual provides a complete list of detailed information on the thermal-hydraulic models used in MARS, so this report should be very useful for code users. The overall structure of the manual is modeled on that of the RELAP5 manual, and as such the layout is very similar to that of RELAP5. This similitude to RELAP5 input is intentional, as this input scheme will allow minimum modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible.

  18. The drift flux model in the ASSERT subchannel code

    International Nuclear Information System (INIS)

    Carver, M.B.; Judd, R.A.; Kiteley, J.C.; Tahir, A.

    1987-01-01

    The ASSERT subchannel code has been developed specifically to model flow and phase distributions within CANDU fuel bundles. ASSERT uses a drift-flux model that permits the phases to have unequal velocities, and can thus model phase separation tendencies that may occur in horizontal flow. The basic principles of ASSERT are outlined, and computed results are compared against data from various experiments for validation purposes. The paper concludes with an example of the use of the code to predict critical heat flux in CANDU geometries

  19. Fuel rod modelling during transients: The TOUTATIS code

    International Nuclear Information System (INIS)

    Bentejac, F.; Bourreau, S.; Brochard, J.; Hourdequin, N.; Lansiart, S.

    2001-01-01

    The TOUTATIS code is devoted to the simulation of local PCI phenomena, in conjunction with the METEOR code for the global behaviour of the fuel rod. More specifically, the TOUTATIS objective is to evaluate the mechanical constraints on the cladding during a power transient, thus predicting its behaviour in terms of stress corrosion cracking. Based upon the finite element computation code CASTEM 2000, TOUTATIS is a set of modules written in a macro language. The aim of this paper is to present both code modules: the axisymmetric two-dimensional module, modeling a single-block pellet; and the three-dimensional module, modeling a radially fragmented pellet. Having described the boundary conditions and the algorithms used, the application is illustrated by: a short presentation of the performance of the two-dimensional axisymmetric modeling, as well as its limits; and the enhancement due to the three-dimensional modeling, displayed through sensitivity studies on the geometry, in this case the pellet height/diameter ratio. Finally, we show the ease of development inherent to the CASTEM 2000 system by depicting the process of a modeling enhancement: adding the possibility of axial (horizontal) cracking of the pellet. As a conclusion, the future improvements planned for the code are described. (author)

  20. Neural model of gene regulatory network: a survey on supportive meta-heuristics.

    Science.gov (United States)

    Biswas, Surama; Acharyya, Sriyankar

    2016-06-01

    A gene regulatory network (GRN) is produced as a result of regulatory interactions between different genes through their coded proteins in a cellular context. Having immense importance in disease detection and drug discovery, GRNs have been modelled through various mathematical and computational schemes and reported in survey articles. Neural and neuro-fuzzy models have been a focus of attraction in bioinformatics, and the predominant use of meta-heuristic algorithms in training neural models has proved its worth. Considering these facts, this paper is organized to survey neural modelling schemes for GRNs and the efficacy of meta-heuristic algorithms for parameter learning (i.e., the connection weights) within the model. This survey paper covers two different structure-related approaches to inferring GRNs: the global-structure approach and the substructure approach. It also describes two neural modelling schemes, namely artificial neural network/recurrent neural network based modelling and neuro-fuzzy modelling. The meta-heuristic algorithms applied so far to learn the structure and parameters of neurally modelled GRNs are reviewed here.

  1. A system-level model for the microbial regulatory genome.

    Science.gov (United States)

    Brooks, Aaron N; Reiss, David J; Allard, Antoine; Wu, Wei-Ju; Salvanha, Diego M; Plaisier, Christopher L; Chandrasekaran, Sriram; Pan, Min; Kaur, Amardeep; Baliga, Nitin S

    2014-07-15

    Microbes can tailor transcriptional responses to diverse environmental challenges despite having streamlined genomes and a limited number of regulators. Here, we present data-driven models that capture the dynamic interplay of the environment and genome-encoded regulatory programs of two types of prokaryotes: Escherichia coli (a bacterium) and Halobacterium salinarum (an archaeon). The models reveal how the genome-wide distributions of cis-acting gene regulatory elements and the conditional influences of transcription factors at each of those elements encode programs for eliciting a wide array of environment-specific responses. We demonstrate how these programs partition transcriptional regulation of genes within regulons and operons to re-organize gene-gene functional associations in each environment. The models capture fitness-relevant co-regulation by different transcriptional control mechanisms acting across the entire genome, to define a generalized, system-level organizing principle for prokaryotic gene regulatory networks that goes well beyond existing paradigms of gene regulation. An online resource (http://egrin2.systemsbiology.net) has been developed to facilitate multiscale exploration of conditional gene regulation in the two prokaryotes. © 2014 The Authors. Published under the terms of the CC BY 4.0 license.

  2. Universal Regularizers For Robust Sparse Coding and Modeling

    OpenAIRE

    Ramirez, Ignacio; Sapiro, Guillermo

    2010-01-01

    Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical in the success of such models. Based on a codelength minimization interpretation of sparse coding, and using tools from universal coding...

  3. Model-Driven Engineering of Machine Executable Code

    Science.gov (United States)

    Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira

    Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs doing static analyses. Further, we report on important lessons learned on the benefits and drawbacks while using the following technologies: using the Scala programming language as target of code generation, using XML-Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint like tool. Finally, we report on the use of Prolog for writing model transformations.

  4. An improved thermal model for the computer code NAIAD

    International Nuclear Information System (INIS)

    Rainbow, M.T.

    1982-12-01

    An improved thermal model, based on the concept of heat slabs, has been incorporated as an option into the thermal hydraulic computer code NAIAD. The heat slabs are one-dimensional thermal conduction models with temperature-independent thermal properties, which may be internal and/or external to the fluid. Thermal energy may be added to or removed from the fluid via heat slabs, and passed across the external boundary of external heat slabs at a rate which is a linear function of the external surface temperatures. The code input for the new option has been restructured to simplify data preparation. A full description of current input requirements is presented.
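
    A heat slab of this kind can be approximated with a few lines of explicit finite-difference conduction. The sketch below only illustrates the stated assumptions (constant properties, a linear heat-transfer condition at the external face); the material values and boundary coefficients are invented, and this is not NAIAD's numerical scheme.

    ```python
    # Minimal 1-D "heat slab" sketch: explicit finite-difference conduction with
    # temperature-independent properties, a fluid-side convective face and an
    # external face losing heat linearly with surface temperature.
    import numpy as np

    k, rho, cp = 16.0, 8000.0, 500.0          # W/m-K, kg/m3, J/kg-K  (constant)
    alpha = k / (rho * cp)
    L, n = 0.01, 21
    dx = L / (n - 1)
    dt = 0.4 * dx ** 2 / alpha                # below the explicit stability limit

    T = np.full(n, 300.0)                     # initial slab temperature, K
    T_fluid, h_fluid = 600.0, 5000.0          # internal (fluid-side) convection
    T_env, h_env = 300.0, 10.0                # external boundary, linear heat loss

    for _ in range(20000):
        Tn = T.copy()
        T[1:-1] = Tn[1:-1] + alpha * dt / dx ** 2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
        # boundary faces: conduction balanced against a linear (convective) flux
        T[0] = Tn[0] + 2 * alpha * dt / dx ** 2 * (Tn[1] - Tn[0]) \
               + 2 * dt * h_fluid / (rho * cp * dx) * (T_fluid - Tn[0])
        T[-1] = Tn[-1] + 2 * alpha * dt / dx ** 2 * (Tn[-2] - Tn[-1]) \
                + 2 * dt * h_env / (rho * cp * dx) * (T_env - Tn[-1])

    print("surface temperatures: %.1f K (fluid side), %.1f K (external)" % (T[0], T[-1]))
    ```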

  5. Properties of non-coding DNA and identification of putative cis-regulatory elements in Theileria parva

    Directory of Open Access Journals (Sweden)

    Guo Xiang

    2008-12-01

    Background: Parasites in the genus Theileria cause lymphoproliferative diseases in cattle, resulting in enormous socio-economic losses. The availability of the genome sequences and annotation for T. parva and T. annulata has facilitated the study of parasite biology and their relationship with host cell transformation and tropism. However, the mechanism of transcriptional regulation in this genus, which may be key to understanding fundamental aspects of its parasitology, remains poorly understood. In this study, we analyze the evolution of non-coding sequences in the Theileria genome and identify conserved sequence elements that may be involved in gene regulation of these parasitic species. Results: Intergenic regions and introns in Theileria are short, and their length distributions are considerably right-skewed. Intergenic regions flanked by genes in 5'-5' orientation tend to be longer and slightly more AT-rich than those flanked by two stop codons; intergenic regions flanked by genes in 3'-5' orientation have intermediate values of length and AT composition. Intron position is negatively correlated with intron length, and positively correlated with GC content. Using stringent criteria, we identified a set of high-quality orthologous non-coding sequences between T. parva and T. annulata, and determined the distribution of selective constraints across regions, which are shown to be higher close to translation start sites. A positive correlation between constraint and length in both intergenic regions and introns suggests a tight control over length expansion of non-coding regions. Genome-wide searches for functional elements revealed several conserved motifs in intergenic regions of Theileria genomes. Two such motifs are preferentially located within the first 60 base pairs upstream of transcription start sites in T. parva, are preferentially associated with specific protein functional categories, and have significant similarity to know

  6. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    Directory of Open Access Journals (Sweden)

    W. Bastiaan Kleijn

    2005-06-01

    Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.

  7. Selection on Coding and Regulatory Variation Maintains Individuality in Major Urinary Protein Scent Marks in Wild Mice.

    Directory of Open Access Journals (Sweden)

    Michael J Sheehan

    2016-03-01

    Recognition of individuals by scent is widespread across animal taxa. Though animals can often discriminate chemical blends based on many compounds, recent work shows that specific protein pheromones are necessary and sufficient for individual recognition via scent marks in mice. The genetic nature of individuality in scent marks (e.g., coding versus regulatory variation) and the evolutionary processes that maintain diversity are poorly understood. The individual signatures in scent marks of house mice are the protein products of a group of highly similar paralogs in the major urinary protein (Mup) gene family. Using the offspring of wild-caught mice, we examine individuality in the major urinary protein (MUP) scent marks at the DNA, RNA and protein levels. We show that individuality arises through a combination of variation at amino acid coding sites and differential transcription of central Mup genes across individuals, and we identify eSNPs in promoters. There is no evidence of post-transcriptional processes influencing phenotypic diversity, as transcripts accurately predict the relative abundance of proteins in urine samples. The match between transcripts and urine samples taken six months earlier also emphasizes that the proportional relationships across central MUP isoforms in urine are stable. Balancing selection maintains coding variants at moderate frequencies, though pheromone diversity appears limited by interactions with vomeronasal receptors. We find that differential transcription of the central Mup paralogs within and between individuals significantly increases the individuality of pheromone blends. Balancing selection on gene regulation allows for increased individuality via combinatorial diversity in a limited number of pheromones.

  8. Data model description for the DESCARTES and CIDER codes

    International Nuclear Information System (INIS)

    Miley, T.B.; Ouderkirk, S.J.; Nichols, W.E.; Eslinger, P.W.

    1993-01-01

    The primary objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation dose that individuals could have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. One of the major objectives of the HEDR Project is to develop several computer codes to model the airborne releases, transport and environmental accumulation of radionuclides resulting from Hanford operations from 1944 through 1972. In July 1992, the HEDR Project Manager determined that the computer codes being developed (DESCARTES, calculation of environmental accumulation from airborne releases, and CIDER, dose calculations from environmental accumulation) were not sufficient to create accurate models. A team of HEDR staff members developed a plan to ensure that the computer codes would meet HEDR Project goals. The plan consists of five tasks: (1) code requirements definition, (2) scoping studies, (3) design specifications, (4) benchmarking, and (5) data modeling. This report defines the data requirements for the DESCARTES and CIDER codes.

  9. COCOA code for creating mock observations of star cluster models

    Science.gov (United States)

    Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Dalessandro, Emanuele

    2018-04-01

    We introduce and present results from the COCOA (Cluster simulatiOn Comparison with ObservAtions) code that has been developed to create idealized mock photometric observations using results from numerical simulations of star cluster evolution. COCOA is able to present the output of realistic numerical simulations of star clusters carried out using Monte Carlo or N-body codes in a way that is useful for direct comparison with photometric observations. In this paper, we describe the COCOA code and demonstrate its different applications by utilizing globular cluster (GC) models simulated with the MOCCA (MOnte Carlo Cluster simulAtor) code. COCOA is used to synthetically observe these different GC models with optical telescopes, perform point spread function photometry, and subsequently produce observed colour-magnitude diagrams. We also use COCOA to compare the results from synthetic observations of a cluster model that has the same age and metallicity as the Galactic GC NGC 2808 with observations of the same cluster carried out with a 2.2 m optical telescope. We find that COCOA can effectively simulate realistic observations and recover photometric data. COCOA has numerous scientific applications that may be helpful for both theoreticians and observers who work on star clusters. Plans for further improving and developing the code are also discussed in this paper.
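
    The core of any mock-observation pipeline is the step from a simulation snapshot to a synthetic image. The sketch below illustrates that step generically (project stars onto a pixel grid, convolve with a seeing PSF, add noise); the cluster, fluxes and instrument parameters are randomly generated stand-ins, and the code is not part of COCOA.

    ```python
    # Generic mock-observation sketch (not the COCOA pipeline): deposit stellar
    # fluxes on a pixel grid, blur with a Gaussian PSF, add sky/readout noise.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(42)
    n_stars, npix, arcsec_per_pix = 5000, 512, 0.2

    # fake projected positions (arcsec) and luminosities from a "snapshot"
    r = rng.exponential(scale=10.0, size=n_stars)
    theta = rng.uniform(0, 2 * np.pi, n_stars)
    x = npix / 2 + r * np.cos(theta) / arcsec_per_pix
    y = npix / 2 + r * np.sin(theta) / arcsec_per_pix
    flux = rng.pareto(a=2.0, size=n_stars) + 1.0

    image = np.zeros((npix, npix))
    ix, iy = x.astype(int), y.astype(int)
    ok = (ix >= 0) & (ix < npix) & (iy >= 0) & (iy < npix)
    np.add.at(image, (iy[ok], ix[ok]), flux[ok])      # deposit stellar fluxes

    psf_fwhm_pix = 1.0 / arcsec_per_pix               # assumed 1" seeing
    image = gaussian_filter(image, sigma=psf_fwhm_pix / 2.355)
    image += rng.normal(0.0, 0.05, image.shape)       # sky/readout noise

    print("mock frame: %dx%d px, peak flux %.1f" % (npix, npix, image.max()))
    ```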

  10. Code-code comparisons of DIVIMP's 'onion-skin model' and the EDGE2D fluid code

    International Nuclear Information System (INIS)

    Stangeby, P.C.; Elder, J.D.; Horton, L.D.; Simonini, R.; Taroni, A.; Matthews, O.F.; Monk, R.D.

    1997-01-01

    In onion-skin modelling (O-SM) of the edge plasma, the cross-field power and particle flows are treated very simply, e.g. as spatially uniform. The validity of O-S modelling requires demonstration that such approximations can still result in reasonable solutions for the edge plasma. This is demonstrated here by comparison of O-SM with full 2D fluid edge solutions generated by the EDGE2D code. The target boundary conditions for the O-SM are taken from the EDGE2D output and the complete O-SM solutions are then compared with the EDGE2D ones. Agreement is generally within 20% for n_e, T_e, T_i and the parallel particle flux density Γ for the medium and high recycling JET cases examined, and somewhat less good for a strongly detached CMOD example. (orig.)

  11. Advanced Electric and Magnetic Material Models for FDTD Electromagnetic Codes

    CERN Document Server

    Poole, Brian R; Nelson, Scott D

    2005-01-01

    The modeling of dielectric and magnetic materials in the time domain is required for pulse power applications, pulsed induction accelerators, and advanced transmission lines. For example, most induction accelerator modules require the use of magnetic materials to provide adequate Volt-sec during the acceleration pulse. These models require hysteresis and saturation to simulate the saturation wavefront in a multipulse environment. In high voltage transmission line applications such as shock or soliton lines the dielectric is operating in a highly nonlinear regime, which requires nonlinear models. Simple 1-D models are developed for fast parameterization of transmission line structures. In the case of nonlinear dielectrics, a simple analytic model describing the permittivity in terms of electric field is used in a 3-D finite difference time domain code (FDTD). In the case of magnetic materials, both rate independent and rate dependent Hodgdon magnetic material models have been implemented into 3-D FDTD codes an...

  12. Radiation transport phenomena and modeling - part A: Codes

    International Nuclear Information System (INIS)

    Lorence, L.J.

    1997-01-01

    The need to understand how particle radiation (high-energy photons and electrons) from a variety of sources affects materials and electronics has motivated the development of sophisticated computer codes that describe how radiation with energies from 1.0 keV to 100.0 GeV propagates through matter. Predicting radiation transport is the necessary first step in predicting radiation effects. The radiation transport codes that are described here are general-purpose codes capable of analyzing a variety of radiation environments, including those produced by nuclear weapons (x-rays, gamma rays, and neutrons), by sources in space (electrons and ions) and by accelerators (x-rays, gamma rays, and electrons). Applications of these codes include the study of radiation effects on electronics, nuclear medicine (imaging and cancer treatment), and industrial processes (food disinfestation, waste sterilization, manufacturing). The primary focus will be on coupled electron-photon transport codes, with some brief discussion of proton transport. These codes model a radiation cascade in which electrons produce photons and vice versa. This coupling between particles of different types is important for radiation effects. For instance, in an x-ray environment, electrons are produced that drive the response in electronics. In an electron environment, dose due to bremsstrahlung photons can be significant once the source electrons have been stopped.

  13. Three-dimensional modeling with finite element codes

    Energy Technology Data Exchange (ETDEWEB)

    Druce, R.L.

    1986-01-17

    This paper describes work done to model magnetostatic field problems in three dimensions. Finite element codes, available at LLNL, and pre- and post-processors were used in the solution of the mathematical model, the output from which agreed well with the experimentally obtained data. The geometry used in this work was a cylinder with ports in the periphery and no current sources in the space modeled. 6 refs., 8 figs.

  14. Code Development for Control Design Applications: Phase I: Structural Modeling

    International Nuclear Information System (INIS)

    Bir, G. S.; Robinson, M.

    1998-01-01

    The design of integrated controls for a complex system like a wind turbine relies on a system model in an explicit format, e.g., state-space format. Current wind turbine codes focus on turbine simulation and not on system characterization, which is desired for controls design as well as applications like operating turbine model analysis, optimal design, and aeroelastic stability analysis. This paper reviews structural modeling that comprises three major steps: formation of component equations, assembly into system equations, and linearization

  15. Model checking optimal finite-horizon control for probabilistic gene regulatory networks.

    Science.gov (United States)

    Wei, Ou; Guo, Zonghao; Niu, Yun; Liao, Wenyuan

    2017-12-14

    Probabilistic Boolean networks (PBNs) have been proposed for analyzing external control in gene regulatory networks with incorporation of uncertainty. A context-sensitive PBN with perturbation (CS-PBNp), extending a PBN with context-sensitivity to reflect the inherent biological stability and with random perturbations to express the impact of external stimuli, is considered to be more suitable for modeling small biological systems intervened by conditions from the outside. In this paper, we apply probabilistic model checking, a formal verification technique, to optimal control for a CS-PBNp that minimizes the expected cost over a finite control horizon. We first describe a procedure for modeling a CS-PBNp using the language provided by the widely used probabilistic model checker PRISM. We then analyze the reward-based temporal properties and the computation in probabilistic model checking; based on the analysis, we provide a method to formulate the optimal control problem as minimum reachability reward properties. Furthermore, we incorporate control and state cost information into the PRISM code of a CS-PBNp such that automated model checking of a minimum reachability reward property on the code gives the solution to the optimal control problem. We conduct experiments on two examples, an apoptosis network and a WNT5A network. Preliminary experimental results show the feasibility and effectiveness of our approach. The approach based on probabilistic model checking for optimal control avoids explicit computation of the large-size state transition relations associated with PBNs. It enables a natural depiction of the dynamics of gene regulatory networks, and provides a canonical form to formulate optimal control problems using temporal properties that can be automatically solved by leveraging the analysis power of underlying model checking engines. This work will be helpful for further utilization of the advances in formal verification techniques in systems biology.
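
    The optimization underneath the PRISM formulation is a finite-horizon expected-cost minimization. The sketch below shows that optimization directly as backward dynamic programming on a tiny PBN-style Markov decision process; the transition matrices, costs and two-action control set are invented for illustration, and the PRISM encoding itself is not reproduced.

    ```python
    # Hedged illustration of finite-horizon optimal control on a toy PBN-style
    # Markov decision process: at each step pick the action minimizing the
    # expected accumulated cost.  All numbers are made up.
    import numpy as np

    n_states, horizon = 4, 5
    # P[u] = state-transition matrix under control action u (0 = none, 1 = intervene)
    P = {0: np.array([[0.7, 0.2, 0.1, 0.0],
                      [0.1, 0.6, 0.2, 0.1],
                      [0.0, 0.2, 0.5, 0.3],
                      [0.0, 0.0, 0.1, 0.9]]),
         1: np.array([[0.9, 0.1, 0.0, 0.0],
                      [0.4, 0.5, 0.1, 0.0],
                      [0.2, 0.4, 0.3, 0.1],
                      [0.1, 0.3, 0.3, 0.3]])}
    state_cost = np.array([0.0, 1.0, 2.0, 5.0])   # penalize undesirable states
    control_cost = {0: 0.0, 1: 0.5}

    V = state_cost.copy()                          # terminal cost
    policy = np.zeros((horizon, n_states), dtype=int)
    for t in reversed(range(horizon)):             # backward induction
        Q = np.stack([state_cost + control_cost[u] + P[u] @ V for u in (0, 1)])
        policy[t] = Q.argmin(axis=0)
        V = Q.min(axis=0)

    print("optimal first-step action per state:", policy[0],
          "expected cost:", np.round(V, 2))
    ```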

  16. Verification and Validation of Heat Transfer Model of AGREE Code

    Energy Technology Data Exchange (ETDEWEB)

    Tak, N. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Seker, V.; Drzewiecki, T. J.; Downar, T. J. [Department of Nuclear Engineering and Radiological Sciences, Univ. of Michigan, Michigan (United States); Kelly, J. M. [US Nuclear Regulatory Commission, Washington (United States)

    2013-05-15

    The AGREE code was originally developed as a multi-physics simulation code to perform design and safety analysis of Pebble Bed Reactors (PBR). Currently, additional capability for the analysis of Prismatic Modular Reactor (PMR) cores is in progress. The newly implemented fluid model for a PMR core is based on a subchannel approach, which has been widely used in the analyses of light water reactor (LWR) cores. A hexagonal fuel (or graphite) block is discretized into triangular prism nodes having effective conductivities. Then, a meso-scale heat transfer model is applied to the unit cell geometry of a prismatic fuel block. Both unit cell geometries of multi-hole and pin-in-hole types of prismatic fuel blocks are considered in AGREE. The main objective of this work is to verify and validate the heat transfer model newly implemented for a PMR core in the AGREE code. The measured data from the HENDEL experiment were used for the validation of the heat transfer model for a pin-in-hole fuel block. However, the HENDEL tests were limited to steady-state conditions of pin-in-hole fuel blocks, and no experimental data are available regarding heat transfer in multi-hole fuel blocks. Therefore, numerical benchmarks using conceptual problems are considered to verify the heat transfer model of AGREE for multi-hole fuel blocks as well as transient conditions. The CORONA and GAMMA+ codes were used to compare the numerical results. In this work, the verification and validation study was performed for the heat transfer model of the AGREE code using the HENDEL experiment and numerical benchmarks of selected conceptual problems. The results of the present work show that the heat transfer model of AGREE is accurate and reliable for prismatic fuel blocks. Further validation of AGREE is in progress for a whole-reactor problem using HTTR safety test data such as control rod withdrawal tests and loss-of-forced-convection tests.

  17. Collection of regulatory texts relative to radiation protection. Part 1: laws and decrees (Extracts of the Public Health Code and of the Labour Code dealing with the protection of population, patients and workers against the hazards of ionizing radiations

    International Nuclear Information System (INIS)

    Rivas, Robert; Feries, Jean; Marzorati, Frank; Chevalier, Celine; Lachaume, Jean-Luc

    2013-01-01

    This first part contains legal and regulatory texts extracted from the Public Health Code related to general health protection and to health products (medical devices), from the Social Security Code, and from the Labour Code related to individual work relationships, to health and safety at work, to work places, to work equipment and means of protection, to the prevention of some exposure risks and of risks related to some activities. This document is an update of the previous version of January 25, 2011.

  18. Performance Theories for Sentence Coding: Some Quantitative Models

    Science.gov (United States)

    Aaronson, Doris; And Others

    1977-01-01

    This study deals with the patterns of word-by-word reading times over a sentence when the subject must code the linguistic information sufficiently for immediate verbatim recall. A class of quantitative models is considered that would account for reading times at phrase breaks. (Author/RM)

  19. Modeling of PHWR fuel elements using FUDA code

    International Nuclear Information System (INIS)

    Tripathi, Rahul Mani; Soni, Rakesh; Prasad, P.N.; Pandarinathan, P.R.

    2008-01-01

    The computer code FUDA (Fuel Design Analysis) is used to model PHWR fuel bundle operation history and to carry out fuel element thermo-mechanical analysis. The radial temperature profile across the fuel and sheath, fission gas release, internal gas pressure, and sheath stresses and strains during the life of the fuel bundle are estimated.

  20. 28 CFR 36.608 - Guidance concerning model codes.

    Science.gov (United States)

    2010-07-01

    ... Section 36.608 Judicial Administration DEPARTMENT OF JUSTICE NONDISCRIMINATION ON THE BASIS OF DISABILITY BY PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES Certification of State Laws or Local Building... private entity responsible for developing a model code, the Assistant Attorney General may review the...

  1. Code Shift: Grid Specifications and Dynamic Wind Turbine Models

    DEFF Research Database (Denmark)

    Ackermann, Thomas; Ellis, Abraham; Fortmann, Jens

    2013-01-01

    Grid codes (GCs) and dynamic wind turbine (WT) models are key tools to allow increasing renewable energy penetration without challenging security of supply. In this article, the state of the art and the further development of both tools are discussed, focusing on the European and North American e...

  2. EMPIRE-II statistical model code for nuclear reaction calculations

    Energy Technology Data Exchange (ETDEWEB)

    Herman, M [International Atomic Energy Agency, Vienna (Austria)

    2001-12-15

    EMPIRE II is a nuclear reaction code, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be any nucleon or Heavy Ion. The energy range starts just above the resonance region, in the case of a neutron projectile, and extends up to a few hundred MeV for Heavy Ion induced reactions. The code accounts for the major nuclear reaction mechanisms, such as the optical model (SCATB), Multistep Direct (ORION + TRISTAN), NVWY Multistep Compound, and the full featured Hauser-Feshbach model. Heavy Ion fusion cross sections can be calculated within the simplified coupled channels approach (CCFUS). A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers (BARFIT), moments of inertia (MOMFIT), and gamma-ray strength functions. Effects of the dynamic deformation of a fast rotating nucleus can be taken into account in the calculations. The results can be converted into the ENDF-VI format using the accompanying code EMPEND. The package contains the full EXFOR library of experimental data. Relevant EXFOR entries are automatically retrieved during the calculations. Plots comparing experimental results with the calculated ones can be produced using the X4TOC4 and PLOTC4 codes linked to the rest of the system through bash-shell (UNIX) scripts. A graphical user interface written in Tcl/Tk is provided. (author)

  3. Preliminary Development of Regulatory PSA Models for SFR

    International Nuclear Information System (INIS)

    Choi, Yong Won; Shin, Andong; Bae, Moohoon; Suh, Namduk; Lee, Yong Suk

    2013-01-01

    Well developed PRA methodology exists for LWRs (Light Water Reactors) and PHWRs (Pressurized Heavy Water Reactors). Since KAERI is developing a prototype SFR targeted to apply for a license by 2017, KINS needs PRA models to assess the safety of this prototype reactor. The purpose of this study is to develop regulatory PSA models for the independent verification of SFR safety. Since the design of the prototype SFR is not yet mature, we have tried to develop preliminary models based on the design data of KAERI's previous SFR design. In this study, the preliminary initiating events of a level 1 internal event PSA for the SFR were selected through reviews of existing PRA models (LWR, PRISM, ASTRID and KALIMER-600). Then, the event tree for each selected initiating event was developed. The regulatory PRA models of the SFR developed here are preliminary in the sense that the prototype SFR design is not yet mature or available. Still, they might be utilized in the forthcoming licensing review for assessing the risk of safety issues and the configuration control of the design.

  4. Synchronous versus asynchronous modeling of gene regulatory networks.

    Science.gov (United States)

    Garg, Abhishek; Di Cara, Alessandro; Xenarios, Ioannis; Mendoza, Luis; De Micheli, Giovanni

    2008-09-01

    In silico modeling of gene regulatory networks has gained some momentum recently due to increased interest in analyzing the dynamics of biological systems. This has been further facilitated by the increasing availability of experimental data on gene-gene, protein-protein and gene-protein interactions. The two dynamical properties that are often experimentally testable are perturbations and stable steady states. Although a lot of work has been done on the identification of steady states, not much work has been reported on in silico modeling of cellular differentiation processes. In this manuscript, we provide algorithms based on reduced ordered binary decision diagrams (ROBDDs) for Boolean modeling of gene regulatory networks. Algorithms for synchronous and asynchronous transition models have been proposed and their corresponding computational properties have been analyzed. These algorithms allow users to compute cyclic attractors of large networks that are currently not feasible using existing software. Hereby we provide a framework to analyze the effect of multiple gene perturbation protocols, and their effect on cell differentiation processes. These algorithms were validated on the T-helper model showing the correct steady state identification and Th1-Th2 cellular differentiation process. The software binaries for Windows and Linux platforms can be downloaded from http://si2.epfl.ch/~garg/genysis.html.
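    To make the synchronous/asynchronous distinction concrete, the sketch below enumerates transitions for a tiny hand-made three-gene Boolean network (not the T-helper model, and without the ROBDD machinery of the paper): the synchronous semantics yields one deterministic successor per state, while the asynchronous semantics yields up to one successor per gene; the update rules are illustrative assumptions.

      import itertools

      # Tiny illustrative Boolean network: three genes with update rules
      # given as functions over the current state tuple.
      RULES = [
          lambda s: s[2],               # x0' = x2
          lambda s: s[0] and not s[2],  # x1' = x0 AND NOT x2
          lambda s: not s[1],           # x2' = NOT x1
      ]
      STATES = list(itertools.product([False, True], repeat=len(RULES)))

      def sync_successor(state):
          """Synchronous semantics: all genes update simultaneously."""
          return tuple(rule(state) for rule in RULES)

      def async_successors(state):
          """Asynchronous semantics: exactly one gene updates per transition."""
          succs = set()
          for i, rule in enumerate(RULES):
              nxt = list(state)
              nxt[i] = rule(state)
              succs.add(tuple(nxt))
          return succs

      def sync_attractor(state):
          """Follow the deterministic synchronous trajectory until a cycle repeats."""
          seen = {}
          while state not in seen:
              seen[state] = len(seen)
              state = sync_successor(state)
          start = seen[state]
          return [s for s, idx in sorted(seen.items(), key=lambda kv: kv[1]) if idx >= start]

      if __name__ == "__main__":
          for s in STATES:
              print("from", s, "sync attractor:", sync_attractor(s),
                    "async successors:", sorted(async_successors(s)))

    For realistic networks this explicit enumeration does not scale, which is exactly why the paper encodes the transition relations symbolically with ROBDDs.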

  5. Uncovering Transcriptional Regulatory Networks by Sparse Bayesian Factor Model

    Directory of Open Access Journals (Sweden)

    Qi Yuan(Alan

    2010-01-01

    The problem of uncovering transcriptional regulation by transcription factors (TFs) based on microarray data is considered. A novel Bayesian sparse correlated rectified factor model (BSCRFM) is proposed that models the unknown TF protein-level activity, the correlated regulation between TFs, and the sparse nature of TF-regulated genes. The model admits prior knowledge from existing databases regarding TF-regulated target genes through a sparse prior, and by means of a developed Gibbs sampling algorithm, a context-specific transcriptional regulatory network specific to the experimental condition of the microarray data can be obtained. The proposed model and the Gibbs sampling algorithm were evaluated on simulated systems, and the results demonstrated the validity and effectiveness of the proposed approach. The proposed model was then applied to breast cancer microarray data of patients with Estrogen Receptor positive status and Estrogen Receptor negative status, respectively.

  6. Recent improvements of the TNG statistical model code

    International Nuclear Information System (INIS)

    Shibata, K.; Fu, C.Y.

    1986-08-01

    The applicability of the nuclear model code TNG to cross-section evaluations has been extended. The new TNG is capable of using variable bins for outgoing particle energies. Moreover, three additional quantities can now be calculated: capture gamma-ray spectrum, the precompound mode of the (n,γ) reaction, and fission cross section. In this report, the new features of the code are described together with some sample calculations and a brief explanation of the input data. 15 refs., 6 figs., 2 tabs

  7. Modeling RERTR experimental fuel plates using the PLATE code

    International Nuclear Information System (INIS)

    Hayes, S.L.; Meyer, M.K.; Hofman, G.L.; Snelgrove, J.L.; Brazener, R.A.

    2003-01-01

    Modeling results using the PLATE dispersion fuel performance code are presented for the U-Mo/Al experimental fuel plates from the RERTR-1, -2, -3 and -5 irradiation tests. Agreement of the calculations with experimental data obtained in post-irradiation examinations of these fuels, where available, is shown to be good. Use of the code to perform a series of parametric evaluations highlights the sensitivity of U-Mo dispersion fuel performance to fabrication variables, especially fuel particle shape and size distributions. (author)

  8. Review of the chronic exposure pathways models in MACCS [MELCOR Accident Consequence Code System] and several other well-known probabilistic risk assessment models

    International Nuclear Information System (INIS)

    Tveten, U.

    1990-06-01

    The purpose of this report is to document the results of the work performed by the author in connection with the following task, performed for the US Nuclear Regulatory Commission (USNRC), Office of Nuclear Regulatory Research, Division of Systems Research: MACCS Chronic Exposure Pathway Models: Review the chronic exposure pathway models implemented in the MELCOR Accident Consequence Code System (MACCS) and compare those models to the chronic exposure pathway models implemented in similar codes developed in countries that are members of the OECD. The chronic exposures concerned are via: the terrestrial food pathways, the water pathways, the long-term groundshine pathway, and the inhalation of resuspended radionuclides pathway. The USNRC has indicated during discussions of the task that the major effort should be spent on the terrestrial food pathways. There is one chapter for each of the categories of chronic exposure pathways listed above.

  9. Thermohydraulic modeling of nuclear thermal rockets: The KLAXON code

    International Nuclear Information System (INIS)

    Hall, M.L.; Rider, W.J.; Cappiello, M.W.

    1992-01-01

    The hydrogen flow from the storage tanks, through the reactor core, and out the nozzle of a Nuclear Thermal Rocket is an integral design consideration. To provide an analysis and design tool for this phenomenon, the KLAXON code is being developed. A shock-capturing numerical methodology is used to model the gas flow (the Harten, Lax, and van Leer method, as implemented by Einfeldt). Preliminary results of modeling the flow through the reactor core and nozzle are given in this paper

  10. Automatic modeling for the monte carlo transport TRIPOLI code

    International Nuclear Information System (INIS)

    Zhang Junjun; Zeng Qin; Wu Yican; Wang Guozhong; FDS Team

    2010-01-01

    TRIPOLI, developed by CEA, France, is a Monte Carlo particle transport simulation code. It has been widely applied to nuclear physics, shielding design and the evaluation of nuclear safety. However, it is time-consuming and error-prone to describe the TRIPOLI input file manually. This paper implements a bi-directional conversion between the CAD model and the TRIPOLI model. Its feasibility and efficiency have been demonstrated by several benchmarking examples. (authors)

  11. Steam generator and circulator model for the HELAP code

    International Nuclear Information System (INIS)

    Ludewig, H.

    1975-07-01

    An outline is presented of the work carried out in the 1974 fiscal year on the GCFBR safety research project, consisting of the development of improved steam generator and circulator (steam turbine driven helium compressor) models which will eventually be inserted in the HELAP (1) code. Furthermore, a code was developed which will be used to generate steady state input for the primary and secondary sides of the steam generator. The following conclusions and suggestions for further work are made: (1) the steam-generator and circulator models are consistent with the volume and junction layout used in HELAP; (2) with minor changes these models, when incorporated in HELAP, could be used to simulate a direct-cycle plant; (3) an explicit control valve model still needs to be developed and would be very desirable for controlling the flow to the turbine during a transient (initially this flow will be controlled using the existing check valve model); (4) the friction factor in the laminar flow region is computed inaccurately, which might cause significant errors in loss-of-flow accidents; and (5) it is felt that HELAP will still use a large amount of computer time and will thus be limited to design basis accidents without scram or loss-of-flow transients with and without scram. Finally, it may also be used as a test bed for the development of prototype component models that would be incorporated in a more sophisticated system code developed specifically for GCFBRs.

  12. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Science.gov (United States)

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP), which uses the background modeled from the original input frames as the long-term reference, and the background difference prediction (BDP), which predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher-quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with only slightly higher encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.

  13. Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach

    International Nuclear Information System (INIS)

    2014-12-01

    In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise: · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population, · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included, · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps, · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details, · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and · an overview of the

  14. Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2014-12-15

    In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise: · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population, · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included, · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps, · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details, · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and · an overview of the

  15. An improved steam generator model for the SASSYS code

    International Nuclear Information System (INIS)

    Pizzica, P.A.

    1989-01-01

    A new steam generator model has been developed for the SASSYS computer code, which analyzes accident conditions in a liquid-metal-cooled fast reactor. It has been incorporated into the new SASSYS balance-of-plant model, but it can also function as a stand-alone model. The model provides a full solution of the steady-state condition before the transient calculation begins for given sodium and water flow rates, inlet and outlet sodium temperatures, and inlet enthalpy and region lengths on the water side

  16. Dual coding: a cognitive model for psychoanalytic research.

    Science.gov (United States)

    Bucci, W

    1985-01-01

    Four theories of mental representation derived from current experimental work in cognitive psychology have been discussed in relation to psychoanalytic theory. These are: verbal mediation theory, in which language determines or mediates thought; perceptual dominance theory, in which imagistic structures are dominant; common code or propositional models, in which all information, perceptual or linguistic, is represented in an abstract, amodal code; and dual coding, in which nonverbal and verbal information are each encoded, in symbolic form, in separate systems specialized for such representation, and connected by a complex system of referential relations. The weight of current empirical evidence supports the dual code theory. However, psychoanalysis has implicitly accepted a mixed model: perceptual dominance theory applying to unconscious representation, and verbal mediation characterizing mature conscious waking thought. The characterization of psychoanalysis, by Schafer, Spence, and others, as a domain in which reality is constructed rather than discovered, reflects the application of this incomplete mixed model. The representations of experience in the patient's mind are seen as without structure of their own, needing to be organized by words, and thus vulnerable to distortion or dissolution by the language of the analyst or the patient himself. In these terms, hypothesis testing becomes a meaningless pursuit; the propositions of the theory are no longer falsifiable; the analyst is always more or less "right." This paper suggests that the integrated dual code formulation provides a more coherent theoretical framework for psychoanalysis than the mixed model, with important implications for theory and technique. In terms of dual coding, the problem is not that the nonverbal representations are vulnerable to distortion by words, but that the words that pass back and forth between analyst and patient will not affect the nonverbal schemata at all. Using the dual code

  17. Phenomenological optical potentials and optical model computer codes

    International Nuclear Information System (INIS)

    Prince, A.

    1980-01-01

    An introduction to the Optical Model is presented. Starting with the purpose and nature of the physical problems to be analyzed, a general formulation and the various phenomenological methods of solution are discussed. This includes the calculation of observables based on assumed potentials, both local and non-local, and their forms, e.g. Woods-Saxon, folded model, etc. Also discussed are the various calculational methods and model codes employed to describe nuclear reactions in the spherical and deformed regions (e.g. coupled-channel analysis). An examination of the numerical solutions and minimization techniques associated with the various codes is briefly touched upon. Several computer programs are described for carrying out the calculations. The preparation of input (formats and options), the determination of model parameters, and the analysis of output are described. The class is given a series of problems to carry out using the available computer. Interpretation and evaluation of the samples include the effect of varying parameters, and comparison of calculations with the experimental data. Also included is an intercomparison of the results from the various model codes, along with their advantages and limitations. (author)

  18. Improvement of blow down model for LEAP code

    International Nuclear Information System (INIS)

    Itooka, Satoshi; Fujimata, Kazuhiro

    2003-03-01

    At the Japan Nuclear Cycle Development Institute, an improved analysis method for overheating tube rupture was studied for sodium-water reaction accidents in the steam generator of a fast breeder reactor, and the heat transfer condition in the tube was evaluated based on studies of critical heat flux (CHF) and post-CHF heat transfer equations in Light Water Reactors. In this study, the blow down model of the LEAP code was improved taking into consideration the above-mentioned evaluation of the heat transfer condition. The improvements to the LEAP code were the following: the addition of critical heat flux (CHF) by the formula of Katto and the formula of Tong; the addition of post-CHF heat transfer equations by the formula of Condie-Bengston IV and the formula of Groeneveld 5.9; the extension of the physical properties of water and steam up to the critical conditions of water; the expansion of the total number of sections and the improvement of the input form; and the addition of a function to control the valve setting by the PID control model. Calculations and verification were performed with the improved LEAP code in order to confirm the code functions. (author)

  19. Analysis of deterministic cyclic gene regulatory network models with delays

    CERN Document Server

    Ahsen, Mehmet Eren; Niculescu, Silviu-Iulian

    2015-01-01

    This brief examines a deterministic, ODE-based model for gene regulatory networks (GRN) that incorporates nonlinearities and time-delayed feedback. An introductory chapter provides some insights into molecular biology and GRNs. The mathematical tools necessary for studying the GRN model are then reviewed, in particular Hill functions and Schwarzian derivatives. One chapter is devoted to the analysis of GRNs under negative feedback with time delays and a special case of a homogenous GRN is considered. Asymptotic stability analysis of GRNs under positive feedback is then considered in a separate chapter, in which conditions leading to bi-stability are derived. Graduate and advanced undergraduate students and researchers in control engineering, applied mathematics, systems biology and synthetic biology will find this brief to be a clear and concise introduction to the modeling and analysis of GRNs.
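    As a small, self-contained companion to this topic, the Python sketch below integrates a cyclic three-gene GRN with delayed Hill-type repression by explicit Euler stepping over a history buffer; the gene count, Hill coefficient, delay and other parameter values are illustrative assumptions rather than values from the brief.

      # Illustrative cyclic GRN under delayed negative feedback (repressilator-like).
      N_GENES = 3
      BETA, GAMMA, K, HILL_N = 2.0, 1.0, 1.0, 2.0   # production, decay, threshold, Hill coefficient
      TAU = 1.5                                     # feedback delay (time units, assumed)
      DT, T_END = 0.01, 50.0

      def hill_repression(x):
          """Decreasing Hill function modelling repression by the upstream gene."""
          return 1.0 / (1.0 + (x / K) ** HILL_N)

      def simulate():
          steps = int(T_END / DT)
          delay_steps = int(TAU / DT)
          # History buffer: each gene keeps its full trajectory; the initial
          # history is constant (an assumption of the sketch).
          hist = [[0.1 * (i + 1)] * (delay_steps + 1) for i in range(N_GENES)]
          for _ in range(steps):
              current = [hist[i][-1] for i in range(N_GENES)]
              delayed = [hist[i][-1 - delay_steps] for i in range(N_GENES)]
              for i in range(N_GENES):
                  repressor = delayed[(i - 1) % N_GENES]   # gene i is repressed by gene i-1
                  dx = BETA * hill_repression(repressor) - GAMMA * current[i]
                  hist[i].append(current[i] + DT * dx)
          return hist

      if __name__ == "__main__":
          trajectories = simulate()
          print("final concentrations:", [round(t[-1], 3) for t in trajectories])

    Varying the delay TAU or the Hill coefficient in such a toy model is a quick way to see the transition between a stable equilibrium and sustained oscillations that the analytical conditions in the brief characterize rigorously.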

  20. The 2010 fib Model Code for Structural Concrete: A new approach to structural engineering

    NARCIS (Netherlands)

    Walraven, J.C.; Bigaj-Van Vliet, A.

    2011-01-01

    The fib Model Code is a recommendation for the design of reinforced and prestressed concrete which is intended to be a guiding document for future codes. Model Codes have been published before, in 1978 and 1990. The draft for fib Model Code 2010 was published in May 2010. The most important new

  1. Convective effects in a regulatory and proposed fire model

    International Nuclear Information System (INIS)

    Wix, S.D.; Hohnstreiter, G.F.

    1995-01-01

    Radiation is the dominant mode of heat transfer in large fires. However, convection can be as much as 10 to 20 percent of the total heat transfer to an object in a large fire. The current radioactive material transportation packaging regulations include convection as a mode of heat transfer in the accident condition scenario. The current International Atomic Energy Agency Safety Series 6 packaging regulation states "the convection coefficient shall be that value which the designer can justify if the package were exposed to the specified fire". The current Title 10, Code of Federal Regulations, Part 71 (10CFR71) packaging regulation states "when significant, convection heat input must be included on the basis of still, ambient air at 800 degrees C (1475 degrees F)". Two questions that can arise in an analyst's mind from an examination of the packaging regulations are whether convection is significant and whether convection should be included in the design analysis of a radioactive materials transportation container. The objective of this study is to examine the convective effects on an actual radioactive materials transportation package using a regulatory and a proposed thermal boundary condition.

  2. The WARP Code: Modeling High Intensity Ion Beams

    International Nuclear Information System (INIS)

    Grote, David P.; Friedman, Alex; Vay, Jean-Luc; Haber, Irving

    2005-01-01

    The Warp code, developed for heavy-ion driven inertial fusion energy studies, is used to model high intensity ion (and electron) beams. Significant capability has been incorporated in Warp, allowing nearly all sections of an accelerator to be modeled, beginning with the source. Warp has as its core an explicit, three-dimensional, particle-in-cell model. Alongside this is a rich set of tools for describing the applied fields of the accelerator lattice, and embedded conducting surfaces (which are captured at sub-grid resolution). Also incorporated are models with reduced dimensionality: an axisymmetric model and a transverse ''slice'' model. The code takes advantage of modern programming techniques, including object orientation, parallelism, and scripting (via Python). It is at the forefront in the use of the computational technique of adaptive mesh refinement, which has been particularly successful in the area of diode and injector modeling, both steady-state and time-dependent. In the presentation, some of the major aspects of Warp will be overviewed, especially those that could be useful in modeling ECR sources. Warp has been benchmarked against both theory and experiment. Recent results will be presented showing good agreement of Warp with experimental results from the STS500 injector test stand

  3. The WARP Code: Modeling High Intensity Ion Beams

    International Nuclear Information System (INIS)

    Grote, D P; Friedman, A; Vay, J L; Haber, I

    2004-01-01

    The Warp code, developed for heavy-ion driven inertial fusion energy studies, is used to model high intensity ion (and electron) beams. Significant capability has been incorporated in Warp, allowing nearly all sections of an accelerator to be modeled, beginning with the source. Warp has as its core an explicit, three-dimensional, particle-in-cell model. Alongside this is a rich set of tools for describing the applied fields of the accelerator lattice, and embedded conducting surfaces (which are captured at sub-grid resolution). Also incorporated are models with reduced dimensionality: an axisymmetric model and a transverse ''slice'' model. The code takes advantage of modern programming techniques, including object orientation, parallelism, and scripting (via Python). It is at the forefront in the use of the computational technique of adaptive mesh refinement, which has been particularly successful in the area of diode and injector modeling, both steady-state and time-dependent. In the presentation, some of the major aspects of Warp will be overviewed, especially those that could be useful in modeling ECR sources. Warp has been benchmarked against both theory and experiment. Recent results will be presented showing good agreement of Warp with experimental results from the STS500 injector test stand. Additional information can be found on the web page http://hif.lbl.gov/theory/WARP( ) summary.html

  4. Comparison of evolutionary algorithms in gene regulatory network model inference.

    LENUS (Irish Health Repository)

    2010-01-01

    ABSTRACT: BACKGROUND: The evolution of high throughput technologies that measure gene expression levels has created a data base for inferring GRNs (a process also known as reverse engineering of GRNs). However, the nature of these data has made this process very difficult. At the moment, several methods of discovering qualitative causal relationships between genes with high accuracy from microarray data exist, but large scale quantitative analysis on real biological datasets cannot be performed, to date, as existing approaches are not suitable for real microarray data which are noisy and insufficient. RESULTS: This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and offer a comprehensive comparison of approaches, under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and ability to reproduce biological behaviour, scalability and robustness to noise are assessed and compared. CONCLUSIONS: Presented is a comparison framework for assessment of evolutionary algorithms, used to infer gene regulatory networks. Promising methods are identified and a platform for development of appropriate model formalisms is established.

  5. Geochemical modelling of groundwater evolution using chemical equilibrium codes

    International Nuclear Information System (INIS)

    Pitkaenen, P.; Pirhonen, V.

    1991-01-01

    Geochemical equilibrium codes are a modern tool for studying the interaction between groundwater and solid phases. The most commonly used programs and application subjects are presented briefly in this article. The main emphasis is placed on the approach of using calculated results to evaluate groundwater evolution in a hydrogeological system. At present, kinetic as well as hydrologic constraints along a flow path are also taken into consideration in geochemical equilibrium modelling.

  6. Optical model calculations with the code ECIS95

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, B V [Departamento de Fisica, Instituto Tecnologico da Aeronautica, Centro Tecnico Aeroespacial (Brazil)

    2001-12-15

    The basic features of elastic and inelastic scattering within the framework of the spherical and deformed nuclear optical models are discussed. The calculation of cross sections, angular distributions and other scattering quantities using J. Raynal's code ECIS95 is described. The use of the ECIS method (Equations Couplees en Iterations Sequentielles) in coupled-channels and distorted-wave Born approximation calculations is also reviewed. (author)

  7. Predictive modelling of gene expression from transcriptional regulatory elements.

    Science.gov (United States)

    Budden, David M; Hurley, Daniel G; Crampin, Edmund J

    2015-07-01

    Predictive modelling of gene expression provides a powerful framework for exploring the regulatory logic underpinning transcriptional regulation. Recent studies have demonstrated the utility of such models in identifying dysregulation of gene and miRNA expression associated with abnormal patterns of transcription factor (TF) binding or nucleosomal histone modifications (HMs). Despite the growing popularity of such approaches, a comparative review of the various modelling algorithms and feature extraction methods is lacking. We define and compare three methods of quantifying pairwise gene-TF/HM interactions and discuss their suitability for integrating the heterogeneous chromatin immunoprecipitation (ChIP)-seq binding patterns exhibited by TFs and HMs. We then construct log-linear and ϵ-support vector regression models from various mouse embryonic stem cell (mESC) and human lymphoblastoid (GM12878) data sets, considering both ChIP-seq- and position weight matrix- (PWM)-derived in silico TF-binding. The two algorithms are evaluated both in terms of their modelling prediction accuracy and ability to identify the established regulatory roles of individual TFs and HMs. Our results demonstrate that TF-binding and HMs are highly predictive of gene expression as measured by mRNA transcript abundance, irrespective of algorithm or cell type selection and considering both ChIP-seq and PWM-derived TF-binding. As we encourage other researchers to explore and develop these results, our framework is implemented using open-source software and made available as a preconfigured bootable virtual environment. © The Author 2014. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
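    The sketch below illustrates the general workflow on synthetic stand-in data: per-gene regulatory signals (playing the role of ChIP-seq-derived TF-binding or HM features) are log-transformed and used to predict expression with both an ordinary least-squares ("log-linear") model and epsilon-SVR from scikit-learn; all data, feature counts and hyperparameters are assumptions for illustration, not those of the study.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in data: rows are genes, columns are per-gene regulatory
      # signals (e.g. summed ChIP-seq reads near the TSS); the response is a
      # log-scale measure of mRNA abundance.
      rng = np.random.default_rng(0)
      n_genes, n_marks = 500, 8
      X = rng.gamma(shape=2.0, scale=50.0, size=(n_genes, n_marks))
      true_w = rng.normal(size=n_marks)
      y = np.log1p(X) @ true_w + rng.normal(scale=0.5, size=n_genes)

      # "Log-linear" model: ordinary least squares on log-transformed signals.
      log_linear = LinearRegression()
      print("log-linear R^2:",
            cross_val_score(log_linear, np.log1p(X), y, cv=5, scoring="r2").mean())

      # epsilon-support vector regression on the same log-transformed features.
      svr = SVR(kernel="rbf", C=10.0, epsilon=0.1)
      print("epsilon-SVR R^2:",
            cross_val_score(svr, np.log1p(X), y, cv=5, scoring="r2").mean())

    On real ChIP-seq data the feature extraction step (how binding signal is summarized per gene) matters at least as much as the choice of regression algorithm, which is one of the comparisons the abstract highlights.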

  8. Modeling post-transcriptional regulation activity of small non-coding RNAs in Escherichia coli.

    Science.gov (United States)

    Wang, Rui-Sheng; Jin, Guangxu; Zhang, Xiang-Sun; Chen, Luonan

    2009-04-29

    Transcriptional regulation is a fundamental process in biological systems, where transcription factors (TFs) have been revealed to play crucial roles. In recent years, in addition to TFs, an increasing number of non-coding RNAs (ncRNAs) have been shown to mediate post-transcriptional processes and regulate many critical pathways in both prokaryotes and eukaryotes. On the other hand, with more and more high-throughput biological data becoming available, it is possible and imperative to quantitatively study gene regulation in a systematic and detailed manner. Most existing studies for inferring transcriptional regulatory interactions and the activity of TFs ignore the possible post-transcriptional effects of ncRNAs. In this work, we propose a novel framework to infer the activity of regulators including both TFs and ncRNAs by exploring the expression profiles of target genes and (post)transcriptional regulatory relationships. We model the integrated regulatory system by a set of biochemical reactions which lead to a log-bilinear problem. The inference process is achieved by an iterative algorithm, in which two linear programming models are efficiently solved. In contrast to available related studies, the effects of ncRNAs on transcription process are considered in this work, and thus more reasonable and accurate reconstruction can be expected. In addition, the approach is suitable for large-scale problems from the viewpoint of computation. Experiments on two synthesized data sets and a model system of Escherichia coli (E. coli) carbon source transition from glucose to acetate illustrate the effectiveness of our model and algorithm. Our results show that incorporating the post-transcriptional regulation of ncRNAs into system model can mine the hidden effects from the regulation activity of TFs in transcription processes and thus can uncover the biological mechanisms in gene regulation in a more accurate manner. The software for the algorithm in this paper is available

  9. International codes and model intercomparison for intermediate energy activation yields

    International Nuclear Information System (INIS)

    Rolf, M.; Nagel, P.

    1997-01-01

    The motivation for this intercomparison came from the data needs of accelerator-based waste transmutation, energy amplification and medical therapy. The aim of the exercise is to determine the degree of reliability of current nuclear reaction models and codes when calculating activation yields in the intermediate energy range up to 5000 MeV. Emphasis has been placed on a wide range of target elements (O, Al, Fe, Co, Zr and Au). The work is mainly based on the calculation of (p,xp yn) integral cross sections for incident protons. A qualitative description of some of the nuclear models and code options employed is given. The systematic graphical presentation of the results allows a quick quantitative measure of agreement or deviation. This code intercomparison highlights the fact that model calculations of activation yields may at best have uncertainties of a factor of two. The causes of such discrepancies are multi-factorial. Problems are encountered which are connected with the calculation of nuclear masses, binding energies, Q-values, shell effects, medium energy fission and Fermi break-up. (A.C.)

  10. Film grain noise modeling in advanced video coding

    Science.gov (United States)

    Oh, Byung Tae; Kuo, C.-C. Jay; Sun, Shijun; Lei, Shawmin

    2007-01-01

    A new technique for film grain noise extraction, modeling and synthesis is proposed and applied to the coding of high definition video in this work. The film grain noise is viewed as a part of artistic presentation by people in the movie industry. On one hand, since the film grain noise can boost the natural appearance of pictures in high definition video, it should be preserved in high-fidelity video processing systems. On the other hand, video coding with film grain noise is expensive. It is desirable to extract film grain noise from the input video as a pre-processing step at the encoder and re-synthesize the film grain noise and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain of the denoised video is higher while the quality of the final reconstructed video can still be well preserved. Following this idea, we present a method to remove film grain noise from image/video without distorting its original content. Besides, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates the film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results are shown to demonstrate the efficiency of the proposed scheme.
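    As a much-simplified, hedged sketch of the extract/model/resynthesize idea (not the scheme of the paper, which models the power spectral density and cross-channel correlation in detail), the code below separates grain from a stand-in frame with a Gaussian pre-filter, fits a toy two-parameter noise model, and synthesizes matched noise for re-insertion after decoding.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def extract_grain(frame, sigma=1.5):
          """Return (denoised frame, grain residual) using a Gaussian pre-filter."""
          denoised = gaussian_filter(frame, sigma=sigma)
          return denoised, frame - denoised

      def fit_grain_model(grain, corr_sigma=0.8):
          """Toy parametric model: residual std plus an assumed correlation radius."""
          return {"std": float(grain.std()), "corr_sigma": corr_sigma}

      def synthesize_grain(shape, model, rng):
          """Generate spatially correlated Gaussian noise matching the modelled std."""
          white = rng.normal(size=shape)
          correlated = gaussian_filter(white, sigma=model["corr_sigma"])
          correlated *= model["std"] / correlated.std()
          return correlated

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          frame = rng.normal(loc=128.0, scale=2.0, size=(64, 64))  # stand-in frame
          denoised, grain = extract_grain(frame)
          model = fit_grain_model(grain)
          regrained = denoised + synthesize_grain(frame.shape, model, rng)
          print("modelled grain std:", round(model["std"], 3),
                "regrained frame std:", round(regrained.std(), 3))

    In the framework described by the abstract, only the denoised frames and the small set of model parameters would be encoded, and the synthesis step would run at the decoder as post-processing.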

  11. Algebraic model checking for Boolean gene regulatory networks.

    Science.gov (United States)

    Tran, Quoc-Nam

    2011-01-01

    We present a computational method in which modular and Groebner basis (GB) computations in Boolean rings are used for solving problems in Boolean gene regulatory networks (BN). In contrast to other known algebraic approaches, the degree of intermediate polynomials during the calculation of Groebner bases using our method will never grow, resulting in a significant improvement in running time and memory space consumption. We also show how calculation in temporal logic for model checking can be done by means of our direct and efficient Groebner basis computation in Boolean rings. We present our experimental results in finding attractors and control strategies of Boolean networks to illustrate our theoretical arguments. The results are promising. Our algebraic approach is more efficient than the state-of-the-art model checker NuSMV on BNs. More importantly, our approach finds all solutions for the BN problems.

  12. Regulatory agencies and regulatory risk

    OpenAIRE

    Knieps, Günter; Weiß, Hans-Jörg

    2008-01-01

    The aim of this paper is to show that regulatory risk is due to the discretionary behaviour of regulatory agencies, caused by a too extensive regulatory mandate provided by the legislator. The normative point of reference and a behavioural model of regulatory agencies based on the positive theory of regulation are presented. Regulatory risk with regard to the future behaviour of regulatory agencies is modelled as the consequence of the ex ante uncertainty about the relative influence of inter...

  13. An evaluation model for the definition of regulatory requirements on spent fuel pool cooling systems

    International Nuclear Information System (INIS)

    Izquierdo, J.M.

    1979-01-01

    A calculation model is presented for establishing regulatory requirements in the SFPCS System. The major design factors, regulatory and design limits and key parameters are discussed. A regulatory position for internal use is proposed. Finally, associated problems and experience are presented. (author)

  14. Development of Parallel Code for the Alaska Tsunami Forecast Model

    Science.gov (United States)

    Bahng, B.; Knight, W. R.; Whitmore, P.

    2014-12-01

    The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communications between domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest resolution Digital Elevation Models (DEM) used by ATFM are 1/3 arc-seconds. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase the model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results with the long term aim of tsunami forecasts from source to high resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs; and, will make possible future inclusion of new physics such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.
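    The ATFM parallelization itself concerns domain-decomposed finite-difference solves of the shallow-water equations; as a far simpler illustration of the pre-computed-database workflow described above, the sketch below farms independent, purely hypothetical source scenarios across worker processes with Python's multiprocessing module.

      from multiprocessing import Pool

      # Hypothetical scenario descriptors: (source_id, magnitude). In a real
      # pre-computed approach each entry would drive a full propagation and
      # inundation run over the nested grids.
      SCENARIOS = [(i, 7.0 + 0.1 * (i % 20)) for i in range(200)]

      def run_scenario(scenario):
          """Stand-in for one model run; returns a toy peak amplitude in metres."""
          source_id, magnitude = scenario
          # Toy surrogate only: amplitude grows with magnitude (illustrative).
          peak_amplitude_m = 0.02 * 10 ** (0.5 * (magnitude - 7.0)) * (1 + source_id % 3)
          return source_id, round(peak_amplitude_m, 3)

      if __name__ == "__main__":
          with Pool(processes=8) as pool:
              database = dict(pool.map(run_scenario, SCENARIOS))
          print(len(database), "scenarios pre-computed; sample:",
                list(database.items())[:3])

    Scenario farming of this kind complements, rather than replaces, the in-code parallelization of a single high-resolution run that the abstract describes.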

  15. Channel modeling, signal processing and coding for perpendicular magnetic recording

    Science.gov (United States)

    Wu, Zheng

    With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by

  16. Task-based dermal exposure models for regulatory risk assessment.

    Science.gov (United States)

    Warren, Nicholas D; Marquart, Hans; Christopher, Yvette; Laitinen, Juha; VAN Hemmen, Joop J

    2006-07-01

    The regulatory risk assessment of chemicals requires the estimation of occupational dermal exposure. Until recently, the models used were either based on limited data or were specific to a particular class of chemical or application. The EU project RISKOFDERM has gathered a considerable number of new measurements of dermal exposure together with detailed contextual information. This article describes the development of a set of generic task-based models capable of predicting potential dermal exposure to both solids and liquids in a wide range of situations. To facilitate modelling of the wide variety of dermal exposure situations six separate models were made for groupings of exposure scenarios called Dermal Exposure Operation units (DEO units). These task-based groupings cluster exposure scenarios with regard to the expected routes of dermal exposure and the expected influence of exposure determinants. Within these groupings linear mixed effect models were used to estimate the influence of various exposure determinants and to estimate components of variance. The models predict median potential dermal exposure rates for the hands and the rest of the body from the values of relevant exposure determinants. These rates are expressed as mg or microl product per minute. Using these median potential dermal exposure rates and an accompanying geometric standard deviation allows a range of exposure percentiles to be calculated.
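    A minimal sketch of the kind of linear mixed-effects fit described here is given below, using synthetic stand-in data: fixed effects for two hypothetical exposure determinants and a random intercept per workplace, fitted with statsmodels; the variable names, effect sizes and the prediction step are assumptions for illustration only, not the RISKOFDERM models themselves.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Synthetic stand-in data set: log exposure rate (mg product per minute)
      # with two determinants and a workplace-level random effect.
      rng = np.random.default_rng(1)
      n = 300
      workplace = rng.integers(0, 20, size=n)
      use_rate = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # product use rate
      automation = rng.integers(0, 2, size=n)                 # 0 = manual, 1 = automated
      workplace_effect = rng.normal(scale=0.4, size=20)
      log_exposure = (1.0 + 0.8 * np.log(use_rate) - 0.6 * automation
                      + workplace_effect[workplace] + rng.normal(scale=0.5, size=n))
      df = pd.DataFrame({"log_exposure": log_exposure,
                         "log_use_rate": np.log(use_rate),
                         "automation": automation,
                         "workplace": workplace})

      # Linear mixed model: fixed effects for the determinants, random intercept
      # per workplace (a component-of-variance structure as in the article).
      model = smf.mixedlm("log_exposure ~ log_use_rate + automation",
                          df, groups=df["workplace"])
      fit = model.fit()
      print(fit.summary())

      # A median potential exposure rate for a given determinant profile follows
      # from exponentiating the fixed-effect prediction of the log rate.
      pred_log = fit.params["Intercept"] + fit.params["log_use_rate"] * np.log(2.0)
      print("predicted median exposure rate (manual task, use rate 2):",
            np.exp(pred_log))

    Reporting the median rate together with a geometric standard deviation, as the abstract describes, then allows any desired exposure percentile to be reconstructed from the lognormal assumption.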

  17. Modelling of LOCA Tests with the BISON Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, Richard L [Idaho National Laboratory; Pastore, Giovanni [Idaho National Laboratory; Novascone, Stephen Rhead [Idaho National Laboratory; Spencer, Benjamin Whiting [Idaho National Laboratory; Hales, Jason Dean [Idaho National Laboratory

    2016-05-01

    BISON is a modern finite-element based, multidimensional nuclear fuel performance code that is under development at Idaho National Laboratory (USA). Recent advances of BISON include the extension of the code to the analysis of LWR fuel rod behaviour during loss-of-coolant accidents (LOCAs). In this work, BISON models for the phenomena relevant to LWR cladding behaviour during LOCAs are described, followed by presentation of code results for the simulation of LOCA tests. Analysed experiments include separate effects tests of cladding ballooning and burst, as well as the Halden IFA-650.2 fuel rod test. Two-dimensional modelling of the experiments is performed, and calculations are compared to available experimental data. Comparisons include cladding burst pressure and temperature in separate effects tests, as well as the evolution of fuel rod inner pressure during ballooning and time to cladding burst. Furthermore, BISON three-dimensional simulations of separate effects tests are performed, which demonstrate the capability to reproduce the effect of azimuthal temperature variations in the cladding. The work has been carried out in the frame of the collaboration between Idaho National Laboratory and Halden Reactor Project, and the IAEA Coordinated Research Project FUMAC.

  18. The top-down reflooding model in the Cathare code

    International Nuclear Information System (INIS)

    Bartak, J.; Bestion, D.; Haapalehto, T.

    1993-01-01

    A top-down reflooding model was developed for the French best-estimate thermalhydraulic code CATHARE. The paper presents the current state of development of this model. Based on a literature survey and on compatibility considerations with respect to the existing CATHARE bottom reflooding package, a falling film top-down reflooding model was developed and implemented into CATHARE version 1.3E. Following a brief review of previous work, the paper describes the most important features of the model. The model was validated with the WINFRITH single tube top-down reflooding experiment and with the REWET - II simultaneous bottom and top-down reflooding experiment in rod bundle geometry. The results demonstrate the ability of the new package to describe the falling film rewetting phenomena and the main parametric trends both in a simple analytical experimental setup and in a much more complex rod bundle reflooding experiment. (authors). 9 figs., 28 refs

  19. Toward a Probabilistic Automata Model of Some Aspects of Code-Switching.

    Science.gov (United States)

    Dearholt, D. W.; Valdes-Fallis, G.

    1978-01-01

    The purpose of the model is to select either Spanish or English as the language to be used; its goals at this stage of development include modeling code-switching for lexical need, apparently random code-switching, dependency of code-switching upon sociolinguistic context, and code-switching within syntactic constraints. (EJS)

  20. A method for modeling co-occurrence propensity of clinical codes with application to ICD-10-PCS auto-coding.

    Science.gov (United States)

    Subotin, Michael; Davis, Anthony R

    2016-09-01

    Natural language processing methods for medical auto-coding, or automatic generation of medical billing codes from electronic health records, generally assign each code independently of the others. They may thus assign codes for closely related procedures or diagnoses to the same document, even when they do not tend to occur together in practice, simply because the right choice can be difficult to infer from the clinical narrative. We propose a method that injects awareness of the propensities for code co-occurrence into this process. First, a model is trained to estimate the conditional probability that one code is assigned by a human coder, given that another code is known to have been assigned to the same document. Then, at runtime, an iterative algorithm is used to apply this model to the output of an existing statistical auto-coder to modify the confidence scores of the codes. We tested this method in combination with a primary auto-coder for International Statistical Classification of Diseases-10 procedure codes, achieving a 12% relative improvement in F-score over the primary auto-coder baseline. The proposed method can be used, with appropriate features, in combination with any auto-coder that generates codes with different levels of confidence. The promising results obtained for International Statistical Classification of Diseases-10 procedure codes suggest that the proposed method may have wider applications in auto-coding. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
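    The sketch below is a generic, hedged illustration of the idea (not the authors' algorithm or trained model): starting from independent confidence scores of a hypothetical primary auto-coder, each code's score is iteratively blended with evidence from assumed pairwise conditional co-occurrence probabilities, so that codes that rarely co-occur with a confident code are suppressed and codes that commonly co-occur are boosted.

      # Illustrative re-scoring of auto-coder confidences using pairwise
      # co-occurrence propensities; code names, scores and probabilities are made up.

      # Primary auto-coder output: independent confidence per candidate code.
      base_scores = {"0DB60ZZ": 0.80, "0DB64Z3": 0.55, "3E0G76Z": 0.30}

      # Assumed estimates of P(code_a assigned | code_b assigned) from coded data.
      cooccur = {
          ("0DB64Z3", "0DB60ZZ"): 0.05,   # rarely assigned together
          ("3E0G76Z", "0DB60ZZ"): 0.60,   # commonly assigned together
      }

      def rescored(scores, n_iters=10, weight=0.5):
          """Iteratively blend each code's score with co-occurrence evidence."""
          current = dict(scores)
          for _ in range(n_iters):
              updated = {}
              for code, score in current.items():
                  evidence, total = 0.0, 0.0
                  for other, other_score in current.items():
                      if other == code:
                          continue
                      p = cooccur.get((code, other))
                      if p is not None:
                          evidence += other_score * p
                          total += other_score
                  if total > 0:
                      score = (1 - weight) * score + weight * (evidence / total)
                  updated[code] = score
              current = updated
          return current

      if __name__ == "__main__":
          print(rescored(base_scores))

    Running the toy example lowers the score of the code that rarely co-occurs with the most confident code and raises the score of the one that commonly does, which is the qualitative behaviour the abstract aims for.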

  1. Benchmarking of computer codes and approaches for modeling exposure scenarios

    International Nuclear Information System (INIS)

    Seitz, R.R.; Rittmann, P.D.; Wood, M.I.; Cook, J.R.

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided

  2. Modeling RIA scenarios with the FRAPTRAN and SCANAIR codes

    International Nuclear Information System (INIS)

    Sagrado Garcia, I. C.; Vallejo, I.; Herranz, L. E.

    2013-01-01

    The need of defining new RIA safety criteria has pointed out the importance of performing a rigorous assessment of the transient codes capabilities. The present work is a comparative exercise devoted to identify the origin of the key deviations found between the predictions of FRAPTRAN-1.4 and SCANAIR-7.1. To do so, the calculations submitted by CIEMAT to the OECD/NEA RIA benchmark have been exploited. This work shows that deviations in clad temperatures mainly come from the treatment of the oxide layer. The systematically higher deformations calculated by FRAPTRAN-1.4 in early failed tests are caused by the different gap closure estimation. Besides, the dissimilarities observed in the FGR predictions are inherent to the different modeling strategies adopted in each code.

  3. Modeling RIA scenarios with the FRAPTRAN and SCANAIR codes

    Energy Technology Data Exchange (ETDEWEB)

    Sagrado Garcia, I. C.; Vallejo, I.; Herranz, L. E.

    2013-07-01

    The need to define new RIA safety criteria has highlighted the importance of performing a rigorous assessment of transient code capabilities. The present work is a comparative exercise devoted to identifying the origin of the key deviations found between the predictions of FRAPTRAN-1.4 and SCANAIR-7.1. To do so, the calculations submitted by CIEMAT to the OECD/NEA RIA benchmark have been exploited. This work shows that deviations in clad temperatures mainly come from the treatment of the oxide layer. The systematically higher deformations calculated by FRAPTRAN-1.4 in early-failed tests are caused by the different gap closure estimation. Besides, the dissimilarities observed in the FGR predictions are inherent to the different modeling strategies adopted in each code.

  4. Simplified model for radioactive contaminant transport: the TRANSS code

    International Nuclear Information System (INIS)

    Simmons, C.S.; Kincaid, C.T.; Reisenauer, A.E.

    1986-09-01

    A simplified ground-water transport model called TRANSS was devised to estimate the rate of migration of a decaying radionuclide that is subject to sorption governed by a linear isotherm. Transport is modeled as a contaminant mass transmitted along a collection of streamlines constituting a streamtube, which connects a source release zone with an environmental arrival zone. The probability-weighted contaminant arrival distribution along each streamline is represented by an analytical solution of the one-dimensional advection-dispersion equation with constant velocity and dispersion coefficient. The appropriate effective constant velocity for each streamline is based on the exact travel time required to traverse a streamline with a known length. An assumption used in the model to facilitate the mathematical simplification is that transverse dispersion within a streamtube is negligible. Release of contaminant from a source is described in terms of a fraction-remaining curve provided as input information. However, an option included in the code is the calculation of a fraction-remaining curve based on four specialized release models: (1) constant release rate, (2) solubility-controlled release, (3) adsorption-controlled release, and (4) diffusion-controlled release from beneath an infiltration barrier. To apply the code, a user supplies only a certain minimal number of parameters: a probability-weighted list of travel times for streamlines, a local-scale dispersion coefficient, a sorption distribution coefficient, total initial radionuclide inventory, radioactive half-life, a release model choice, and size dimensions of the source. The code is intended to provide scoping estimates of contaminant transport and does not predict the evolution of a concentration distribution in a ground-water flow field. Moreover, the required travel times along streamlines must be obtained from a prior ground-water flow simulation
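
    The following sketch illustrates the streamtube idea in the spirit of the description above: a probability-weighted sum of one-dimensional advection-dispersion-decay solutions, one per streamline, each with a constant effective velocity derived from its travel time and a retardation factor for linear sorption. The function, parameter values, and streamline list are illustrative assumptions, not the TRANSS implementation.

```python
import numpy as np

# Sketch of a TRANSS-style streamtube estimate: sum probability-weighted 1-D
# advection-dispersion-decay solutions over streamlines whose travel times are
# known from a prior flow simulation. All parameter values are illustrative.

def arrival_concentration(t, travel_time, length, disp, lam, retard):
    """Analytical 1-D advection-dispersion solution for a unit instantaneous
    release, evaluated at the streamline exit (x = length), with first-order
    decay and linear-isotherm retardation."""
    v = length / travel_time / retard          # retarded effective velocity
    d = disp / retard                          # retarded dispersion coefficient
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    ok = t > 0
    out[ok] = (np.exp(-((length - v * t[ok]) ** 2) / (4.0 * d * t[ok]))
               / np.sqrt(4.0 * np.pi * d * t[ok])) * np.exp(-lam * t[ok])
    return out

times = np.linspace(1.0, 500.0, 500)                      # years
streamlines = [(0.2, 80.0), (0.5, 120.0), (0.3, 200.0)]   # (probability, travel time in years)
lam = np.log(2.0) / 30.0                                  # decay constant, 30-year half-life
breakthrough = sum(p * arrival_concentration(times, tt, length=100.0,
                                              disp=5.0, lam=lam, retard=2.0)
                   for p, tt in streamlines)
print("peak relative arrival at t =", times[np.argmax(breakthrough)], "years")
```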

  5. An ensemble model of QSAR tools for regulatory risk assessment.

    Science.gov (United States)

    Pradeep, Prachi; Povinelli, Richard J; White, Shannon; Merrill, Stephen J

    2016-01-01

    Quantitative structure activity relationships (QSARs) are theoretical models that relate a quantitative measure of chemical structure to a physical property or a biological effect. QSAR predictions can be used for chemical risk assessment for protection of human and environmental health, which makes them interesting to regulators, especially in the absence of experimental data. For compatibility with regulatory use, QSAR models should be transparent, reproducible and optimized to minimize the number of false negatives. In silico QSAR tools are gaining wide acceptance as a faster alternative to otherwise time-consuming clinical and animal testing methods. However, different QSAR tools often make conflicting predictions for a given chemical and may also vary in their predictive performance across different chemical datasets. In a regulatory context, conflicting predictions raise interpretation, validation and adequacy concerns. To address these concerns, ensemble learning techniques in the machine learning paradigm can be used to integrate predictions from multiple tools. By leveraging various underlying QSAR algorithms and training datasets, the resulting consensus prediction should yield better overall predictive ability. We present a novel ensemble QSAR model using Bayesian classification. The model includes a variable cut-off parameter that allows selection of the desired trade-off between model sensitivity and specificity. The predictive performance of the ensemble model is compared with four in silico tools (Toxtree, Lazar, OECD Toolbox, and Danish QSAR) to predict carcinogenicity for a dataset of air toxins (332 chemicals) and a subset of the gold carcinogenic potency database (480 chemicals). Leave-one-out cross validation results show that the ensemble model achieves the best trade-off between sensitivity and specificity (accuracy: 83.8 % and 80.4 %, and balanced accuracy: 80.6 % and 80.8 %) and highest inter-rater agreement [kappa ( κ ): 0
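
    As a hedged illustration of combining several tools' calls at a tunable cut-off, the sketch below uses a naive-Bayes-style log-odds combination of per-tool sensitivity and specificity. The combination rule, the performance numbers, and the cut-off handling are assumptions made for illustration; they are not the authors' Bayesian classification model.

```python
import math

# Minimal sketch of a consensus carcinogenicity call from several QSAR tools.
# The naive-Bayes-style combination and all numbers are illustrative assumptions.

def ensemble_predict(predictions, sensitivity, specificity, prior=0.5, cutoff=0.5):
    """predictions: dict tool -> 1 (positive) or 0 (negative).
    sensitivity/specificity: dict tool -> estimated performance on training data.
    Returns (posterior probability of a positive, binary call at the cut-off)."""
    log_odds = math.log(prior / (1.0 - prior))
    for tool, pred in predictions.items():
        se, sp = sensitivity[tool], specificity[tool]
        if pred == 1:
            log_odds += math.log(se / (1.0 - sp))   # likelihood ratio of a positive call
        else:
            log_odds += math.log((1.0 - se) / sp)   # likelihood ratio of a negative call
    p_pos = 1.0 / (1.0 + math.exp(-log_odds))
    return p_pos, int(p_pos >= cutoff)

tools = {"Toxtree": 1, "Lazar": 0, "OECD Toolbox": 1, "Danish QSAR": 1}
se = {t: 0.80 for t in tools}
sp = {t: 0.75 for t in tools}
# Lowering the cut-off trades specificity for sensitivity (fewer false negatives).
print(ensemble_predict(tools, se, sp, cutoff=0.3))
```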

  6. A critical flow model for the Cathena thermalhydraulic code

    International Nuclear Information System (INIS)

    Popov, N.K.; Hanna, B.N.

    1990-01-01

    The calculation of critical flow rate, e.g., of choked flow through a break, is required for simulating a loss of coolant transient in a reactor or reactor-like experimental facility. A model was developed to calculate the flow rate through the break for given geometrical parameters near the break and fluid parameters upstream of the break for ordinary water, as well as heavy water, with or without non-condensible gases. This model has been incorporated in the CATHENA one-dimensional, two-fluid thermalhydraulic code. In the CATHENA code a standard staggered-mesh, finite-difference representation is used to solve the thermalhydraulic equations. This model compares the fluid mixture velocity, calculated using the CATHENA momentum equations, with a critical velocity. When the mixture velocity is smaller than the critical velocity, the flow is assumed to be subcritical, and the model remains passive. When the fluid mixture velocity is higher than the critical velocity, the model sets the fluid mixture velocity equal to the critical velocity. In this paper the critical velocity at a link (momentum cell) is first estimated separately for single-phase liquid, two-phase, or single-phase gas flow condition at the upstream node (mass/energy cell). In all three regimes non-condensible gas can be present in the flow. For single-phase liquid flow, the critical velocity is estimated using a Bernoulli-type equation; the pressure at the link is estimated by the pressure undershoot method
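
    A minimal sketch of the choking logic described above is given below: a Bernoulli-type estimate of the critical velocity for single-phase liquid, and a clamp applied when the momentum-equation mixture velocity exceeds it. The function names, the throat-pressure choice, and the numbers are illustrative assumptions, not the CATHENA implementation.

```python
import math

# Illustrative choking check: an estimated critical velocity caps the link
# mixture velocity. The single-phase-liquid estimate uses a Bernoulli-type
# relation; the values below are placeholders, not CATHENA data.

def critical_velocity_liquid(p_upstream, p_throat, density):
    """Bernoulli-type estimate of the velocity reached when the pressure drops
    from the upstream node to the (lower) throat/undershoot pressure."""
    return math.sqrt(max(2.0 * (p_upstream - p_throat) / density, 0.0))

def apply_choking(mixture_velocity, v_critical):
    """If the momentum-equation velocity exceeds the critical value, the flow is
    choked and the link velocity is set to the critical velocity."""
    return min(mixture_velocity, v_critical)

v_crit = critical_velocity_liquid(p_upstream=7.0e6, p_throat=3.0e6, density=740.0)
print(apply_choking(mixture_velocity=150.0, v_critical=v_crit))  # choked -> v_crit
```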

  7. Fixed Points in Discrete Models for Regulatory Genetic Networks

    Directory of Open Access Journals (Sweden)

    Orozco Edusmildo

    2007-01-01

    It is desirable to have efficient mathematical methods to extract information about regulatory interactions between genes from repeated measurements of gene transcript concentrations. One piece of information of interest is when the dynamics reaches a steady state. In this paper we develop tools that enable the detection of steady states that are modeled by fixed points in discrete finite dynamical systems. We discuss two algebraic models, a univariate model and a multivariate model. We show that these two models are equivalent and that one can be converted to the other by means of a discrete Fourier transform. We give a new, more general definition of a linear finite dynamical system and we give a necessary and sufficient condition for such a system to be a fixed point system, that is, all cycles are of length one. We show how this result for generalized linear systems can be used to determine when certain nonlinear systems (monomial dynamical systems over finite fields) are fixed point systems. We also show how it is possible to determine in polynomial time when an ordinary linear system (defined over a finite field) is a fixed point system. We conclude with a necessary condition for a univariate finite dynamical system to be a fixed point system.
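
    The polynomial-time test for ordinary linear systems mentioned above can be sketched with a standard characterization: for x ↦ Ax over a finite field in dimension n, the map acts as a bijection on the eventual image im(A^n), so all cycles have length one exactly when A^(n+1) = A^n. The code below, with illustrative examples over F_2, assumes this characterization; it is not necessarily the authors' formulation.

```python
import numpy as np

# Sketch of a polynomial-time fixed-point-system test for a linear system
# x -> A x over the finite field F_p: the system is a fixed point system
# (all cycles of length one) exactly when A^(n+1) = A^n (mod p).

def is_fixed_point_system(A, p):
    A = np.array(A, dtype=np.int64) % p
    n = A.shape[0]
    power = np.eye(n, dtype=np.int64)
    for _ in range(n):                      # compute A^n (mod p)
        power = (power @ A) % p
    return np.array_equal((power @ A) % p, power)

# Over F_2: a projection is a fixed point system, a swap is not.
print(is_fixed_point_system([[1, 0], [0, 0]], p=2))   # True
print(is_fixed_point_system([[0, 1], [1, 0]], p=2))   # False (the swap has a 2-cycle)
```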

  8. MDEP Technical Report TR-CSWG-01. Technical Report: Regulatory Frameworks for the Use of Nuclear Pressure Boundary Codes and Standards in MDEP Countries

    International Nuclear Information System (INIS)

    2013-01-01

    The Codes and Standards Working Group (CSWG) is one of the issue-specific working groups undertaken by the MDEP members; its long term goal is harmonisation of regulatory and code requirements for design and construction of pressure-retaining components in order to improve the effectiveness and efficiency of the regulatory design reviews, increase quality of safety assessments, and to enable each regulator to become stronger in its ability to make safety decisions. The CSWG has interacted closely with the Standards Development Organisations (SDOs) and CORDEL in code comparison and code convergence. The Code Comparison Report STP-NU-051 has been issued by SDO members to identify the extent of similarities and differences amongst the pressure-boundary codes and standards used in various countries. Besides the differences in codes and standards, the way the codes and standards are applied to systems, structures and components also affects the design and construction of a nuclear power plant. Therefore, to accomplish the goal of potential harmonisation, it is also vital that the regulators learn about each other's procedures, processes, and regulations. To facilitate the learning process, the CSWG meets regularly to discuss issues relevant to licensing new reactors and using codes and standards in licensing safety reviews. The CSWG communicates very frequently with the SDOs to discuss similarities and differences among the various codes and how to proceed with potential harmonisation. It should be noted that the IAEA is invited to all of the issue-specific working groups within MDEP to ensure consistency with IAEA standards. The primary focus of this technical report is to consolidate information shared and accomplishments achieved by the member countries. This report seeks to document how each MDEP regulator utilises national or regional mechanical codes and standards in its safety reviews and licensing of new reactors. The preparation of this report

  9. A model surveillance program based on regulatory experience

    International Nuclear Information System (INIS)

    Conte, R.J.

    1980-01-01

    A model surveillance program is presented based on regulatory experience. The program consists of three phases: Program Delineation, Data Acquisition and Data Analysis. Each phase is described in terms of key quality assurance elements and some current philosophies in the United States Licensing Program. Other topics include the application of these ideas to test equipment used in the surveillance program and audits of the established program. Program Delineation discusses the establishment of administrative controls for organization and the description of responsibilities using the 'Program Coordinator' concept, with assistance from Data Acquisition and Analysis Teams. Ideas regarding frequency of surveillance testing are also presented. The Data Acquisition Phase discusses various methods for acquiring data including operator observations, test procedures, operator logs, and computer output, for trending equipment performance. The Data Analysis Phase discusses the process for drawing conclusions regarding component/equipment service life, proper application, and generic problems through the use of trend analysis and failure rate data. (orig.)

  10. Modeling of fission product release in integral codes

    International Nuclear Information System (INIS)

    Obaidurrahman, K.; Raman, Rupak K.; Gaikwad, Avinash J.

    2014-01-01

    The Great Tohoku earthquake and tsunami that struck the Fukushima-Daiichi nuclear power station on March 11, 2011 have intensified the need for detailed nuclear safety research, and with this objective all streams associated with severe accident phenomenology are being revisited thoroughly. The present paper covers an overview of the state-of-the-art FP release models being used, the important phenomena considered in semi-mechanistic models, and knowledge gaps in present FP release modeling. The capability of the FP release module ELSA of the ASTEC integral code to appropriately predict FP release under several diversified core-degradation conditions will also be demonstrated. The use of semi-mechanistic fission product release models at AERB for source-term estimation is also briefly described. (author)

  11. Improved Flow Modeling in Transient Reactor Safety Analysis Computer Codes

    International Nuclear Information System (INIS)

    Holowach, M.J.; Hochreiter, L.E.; Cheung, F.B.

    2002-01-01

    A method of accounting for fluid-to-fluid shear in between calculational cells over a wide range of flow conditions envisioned in reactor safety studies has been developed such that it may be easily implemented into a computer code such as COBRA-TF for more detailed subchannel analysis. At a given nodal height in the calculational model, equivalent hydraulic diameters are determined for each specific calculational cell using either laminar or turbulent velocity profiles. The velocity profile may be determined from a separate CFD (Computational Fluid Dynamics) analysis, experimental data, or existing semi-empirical relationships. The equivalent hydraulic diameter is then applied to the wall drag force calculation so as to determine the appropriate equivalent fluid-to-fluid shear caused by the wall for each cell based on the input velocity profile. This means of assigning the shear to a specific cell is independent of the actual wetted perimeter and flow area for the calculational cell. The use of this equivalent hydraulic diameter for each cell within a calculational subchannel results in a representative velocity profile which can further increase the accuracy and detail of heat transfer and fluid flow modeling within the subchannel when utilizing a thermal hydraulics systems analysis computer code such as COBRA-TF. Utilizing COBRA-TF with the flow modeling enhancement results in increased accuracy for a coarse-mesh model without the significantly greater computational and time requirements of a full-scale 3D (three-dimensional) transient CFD calculation. (authors)

  12. Dataset of coded handwriting features for use in statistical modelling

    Directory of Open Access Journals (Sweden)

    Anna Agius

    2018-02-01

    The data presented here are related to the article titled “Using handwriting to infer a writer's country of origin for forensic intelligence purposes” (Agius et al., 2017 [1]). This article reports original writer, spatial and construction characteristic data for thirty-seven English Australian writers (in this study, English writers were Australians who had learnt to write in New South Wales (NSW)) and thirty-seven Vietnamese writers. All of these characteristics were coded and recorded in Microsoft Excel 2013 (version 15.31). The construction characteristics coded were only extracted from seven characters, which were: ‘g’, ‘h’, ‘th’, ‘M’, ‘0’, ‘7’ and ‘9’. The coded format of the writer, spatial and construction characteristics is made available in this Data in Brief in order to allow others to perform statistical analyses and modelling to investigate whether there is a relationship between the handwriting features and the nationality of the writer, whether the two nationalities can be differentiated, and to employ mathematical techniques that are capable of characterising the extracted features from each participant.

  13. Auditory information coding by modeled cochlear nucleus neurons.

    Science.gov (United States)

    Wang, Huan; Isik, Michael; Borst, Alexander; Hemmert, Werner

    2011-06-01

    In this paper we use information theory to quantify the information in the output spike trains of modeled cochlear nucleus globular bushy cells (GBCs). GBCs are part of the sound localization pathway. They are known for their precise temporal processing, and they code amplitude modulations with high fidelity. Here we investigated the information transmission for a natural sound, a recorded vowel. We conclude that the maximum information transmission rate for a single neuron was close to 1,050 bits/s, which corresponds to a value of approximately 5.8 bits per spike. For quasi-periodic signals like voiced speech, the transmitted information saturated as word duration increased. In general, approximately 80% of the available information from the spike trains was transmitted within about 20 ms. Transmitted information for speech signals concentrated around formant frequency regions. The efficiency of neural coding was above 60% up to the highest temporal resolution we investigated (20 μs). The increase in transmitted information to that precision indicates that these neurons are able to code information with extremely high fidelity, which is required for sound localization. On the other hand, only 20% of the information was captured when the temporal resolution was reduced to 4 ms. As the temporal resolution of most speech recognition systems is limited to less than 10 ms, this massive information loss might be one of the reasons which are responsible for the lack of noise robustness of these systems.
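
    As a rough sketch of how transmitted information can be estimated from spike trains (in the spirit of a direct-method calculation), the code below bins repeated responses into binary words and subtracts the mean noise entropy from the total word entropy. The bin size, word length, and toy data are assumptions made for illustration; this is not the authors' analysis pipeline.

```python
import numpy as np
from collections import Counter

# Illustrative direct-method-style estimate: information per word equals the
# total word entropy minus the mean noise entropy across repeated trials.

def entropy(samples):
    counts = Counter(samples)
    total = sum(counts.values())
    p = np.array([c / total for c in counts.values()])
    return float(-(p * np.log2(p)).sum())

def transmitted_information(trials, word_len=4):
    """trials: array (n_trials, n_bins) of 0/1 spike counts per time bin."""
    n_trials, n_bins = trials.shape
    starts = range(0, n_bins - word_len + 1, word_len)
    words = lambda trial: [tuple(trial[s:s + word_len]) for s in starts]
    all_words = [w for trial in trials for w in words(trial)]
    h_total = entropy(all_words)
    # Noise entropy: variability of the word emitted at the same stimulus time.
    h_noise = np.mean([entropy([words(trial)[i] for trial in trials])
                       for i in range(len(list(starts)))])
    return h_total - h_noise          # bits per word

rng = np.random.default_rng(0)
pattern = np.tile(rng.random(40) < 0.3, (20, 1))               # stimulus-locked pattern
trials = (pattern & (rng.random((20, 40)) < 0.9)).astype(int)  # jittered repeats
print(f"{transmitted_information(trials):.2f} bits per word")
```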

  14. MMA, A Computer Code for Multi-Model Analysis

    Science.gov (United States)

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will
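
    The sketch below shows how posterior model probabilities can be formed from an information criterion and then used for model averaging, in the spirit of MMA's default methods. The AIC values and predictions are illustrative; AICc, BIC, or KIC values could be substituted in the same weighting formula.

```python
import numpy as np

# Criterion-based model weights (posterior model probabilities) and a
# model-averaged prediction. All numerical values are illustrative.

def criterion_weights(criteria):
    """criteria: array of AIC/AICc/BIC/KIC values, one per calibrated model.
    Returns model weights that sum to one."""
    c = np.asarray(criteria, dtype=float)
    delta = c - c.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

aic = [231.4, 233.0, 240.2]          # three alternative calibrated models
w = criterion_weights(aic)
print("model weights:", np.round(w, 3))
predictions = np.array([12.1, 11.4, 14.0])
print("model-averaged prediction:", float(w @ predictions))
```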

  15. Regulatory focus at work : the moderating role of regulatory focus in the job demands-resources model

    NARCIS (Netherlands)

    Brenninkmeijer, V.; Demerouti, E.; Blanc, Le P.M.; Emmerik, van I.J.H.

    2010-01-01

    Purpose – The purpose of this study is to examine the moderating role of regulatory focus in the job demands-resources model. Design/methodology/approach – A questionnaire survey was conducted among 146 teachers in secondary education. It was expected that detrimental effects of job demands (i.e.

  16. An improved steam generator model for the SASSYS code

    International Nuclear Information System (INIS)

    Pizzica, P.A.

    1989-01-01

    A new steam generator model has been developed for the SASSYS computer code, which analyzes accident conditions in a liquid metal cooled fast reactor. It has been incorporated into the new SASSYS balance-of-plant model but it can also function on a stand-alone basis. The steam generator can be used in a once-through mode, or a variant of the model can be used as a separate evaporator and a superheater with recirculation loop. The new model provides for an exact steady-state solution as well as the transient calculation. There was a need for a faster and more flexible model than the old steam generator model. The new model provides for more detail with its multi-node treatment as opposed to the previous model's one node per region approach. Numerical instability problems which were the result of cell-centered spatial differencing, fully explicit time differencing, and the moving boundary treatment of the boiling crisis point in the boiling region have been reduced. This leads to an increase in speed as larger time steps can now be taken. The new model is an improvement in many respects. 2 refs., 3 figs

  17. Modeling of the CTEx subcritical unit using MCNPX code

    International Nuclear Information System (INIS)

    Santos, Avelino; Silva, Ademir X. da; Rebello, Wilson F.; Cunha, Victor L. Lassance

    2011-01-01

    The present work aims at simulating the subcritical unit of the Army Technology Center (CTEx), namely the ARGUS pile (subcritical uranium-graphite arrangement), by using the computational code MCNPX. Once such modeling is finished, it could be used in k-effective calculations for systems using natural uranium as fuel, for instance. ARGUS is a subcritical assembly which uses reactor-grade graphite as moderator of fission neutrons and metallic uranium fuel rods with aluminum cladding. The pile is driven by an Am-Be spontaneous neutron source. In order to achieve a higher value for k-eff, a higher concentration of U-235 can be proposed, provided it safely remains below one. (author)

  18. Status of emergency spray modelling in the integral code ASTEC

    International Nuclear Information System (INIS)

    Plumecocq, W.; Passalacqua, R.

    2001-01-01

    Containment spray systems are emergency systems that would be used in very low probability events which may lead to severe accidents in Light Water Reactors. In most cases, the primary function of the spray would be to remove heat and condense steam in order to reduce pressure and temperature in the containment building. Spray would also wash out fission products (aerosols and gaseous species) from the containment atmosphere. The efficiency of the spray system in the containment depressurization as well as in the removal of aerosols, during a severe accident, depends on the evolution of the spray droplet size distribution with the height in the containment, due to kinetic and thermal relaxation, gravitational agglomeration and mass transfer with the gas. A model has been developed taking into account all of these phenomena. This model has been implemented in the ASTEC code with a validation of the droplets relaxation against the CARAIDAS experiment (IPSN). Applications of this modelling to a PWR 900, during a severe accident, with special emphasis on the effect of spray on containment hydrogen distribution have been performed in multi-compartment configuration with the ASTEC V0.3 code. (author)

  19. Ecological models for regulatory risk assessments of pesticides: Developing a strategy for the future.

    NARCIS (Netherlands)

    Thorbek, P.; Forbes, V.; Heimbach, F.; Hommen, U.; Thulke, H.H.; Brink, van den P.J.

    2010-01-01

    Ecological Models for Regulatory Risk Assessments of Pesticides: Developing a Strategy for the Future provides a coherent, science-based view on ecological modeling for regulatory risk assessments. It discusses the benefits of modeling in the context of registrations, identifies the obstacles that

  20. 49 CFR 41.120 - Acceptable model codes.

    Science.gov (United States)

    2010-10-01

    ... 1991 International Conference of Building Officials (ICBO) Uniform Building Code, published by the... Supplement to the Building Officials and Code Administrators International (BOCA) National Building Code, published by the Building Officials and Code Administrators, 4051 West Flossmoor Rd., Country Club Hills...

  1. Influential input parameters for reflood model of MARS code

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Deog Yeon; Bang, Young Seok [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2012-10-15

    Best Estimate (BE) calculations have been more broadly used in the nuclear industry and in regulation to reduce the significant conservatism in evaluating Loss of Coolant Accidents (LOCA). The reflood model has been identified as one of the problems in BE calculations. The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) program of the OECD/NEA is to make progress on the quantification of the uncertainty of the physical models in system thermal-hydraulic codes by considering experimental results, especially for reflood. It is important to establish a methodology to identify and select the parameters influential to the response of reflood phenomena following a Large Break LOCA. For this purpose, a reference calculation and a sensitivity analysis to select the dominant influential parameters for the FEBA experiment are performed.
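
    A simple one-at-a-time screening of candidate parameters can be sketched as below: perturb each input across its range and rank the parameters by the change in a response such as the peak cladding temperature. The response function, parameter names, and ranges are stand-in assumptions, not the MARS reflood models or the FEBA inputs.

```python
# Illustrative one-at-a-time screening of reflood-related input parameters.
# The surrogate response and the parameter ranges are placeholders only.

def peak_clad_temp(params):
    """Stand-in response: a made-up smooth function of three inputs."""
    htc, droplet_d, quench_v = params
    return 900.0 + 4.0e4 / htc + 2.0e5 * droplet_d - 150.0 * quench_v

nominal = {"wall_htc": 300.0, "droplet_diameter": 1.0e-3, "quench_velocity": 0.5}
ranges = {"wall_htc": (150.0, 600.0),
          "droplet_diameter": (0.5e-3, 2.0e-3),
          "quench_velocity": (0.2, 1.0)}

base = peak_clad_temp(list(nominal.values()))
sensitivity = {}
for i, (name, (lo, hi)) in enumerate(ranges.items()):
    for value in (lo, hi):
        perturbed = list(nominal.values())
        perturbed[i] = value
        delta = abs(peak_clad_temp(perturbed) - base)
        sensitivity[name] = max(sensitivity.get(name, 0.0), delta)

# Rank parameters by the largest response change they produce.
for name, dT in sorted(sensitivity.items(), key=lambda kv: -kv[1]):
    print(f"{name:18s} max |change in PCT| = {dT:6.1f} K")
```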

  2. Modeling the PUSPATI TRIGA Reactor using MCNP code

    International Nuclear Information System (INIS)

    Mohamad Hairie Rabir; Mark Dennis Usang; Naim Syauqi Hamzah; Julia Abdul Karim; Mohd Amin Sharifuldin Salleh

    2012-01-01

    The 1 MW TRIGA MARK II research reactor at the Malaysian Nuclear Agency achieved initial criticality on June 28, 1982. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes. This paper describes the reactor parameter calculations for the PUSPATI TRIGA REACTOR (RTP), focusing on the application of the developed 3D reactor model to criticality calculations, analysis of power and neutron flux distributions, and a depletion study of TRIGA fuel. The 3D continuous energy Monte Carlo code MCNP was used to develop a versatile and accurate full model of the TRIGA reactor. The model represents in detail all important components of the core and shielding with practically no physical approximation. (author)

  3. Establishing a regulatory value chain model: An innovative approach to strengthening medicines regulatory systems in resource-constrained settings.

    Science.gov (United States)

    Chahal, Harinder Singh; Kashfipour, Farrah; Susko, Matt; Feachem, Neelam Sekhri; Boyle, Colin

    2016-05-01

    Medicines Regulatory Authorities (MRAs) are an essential part of national health systems and are charged with protecting and promoting public health through regulation of medicines. However, MRAs in resource-constrained settings often struggle to provide effective oversight of market entry and use of health commodities. This paper proposes a regulatory value chain model (RVCM) that policymakers and regulators can use as a conceptual framework to guide investments aimed at strengthening regulatory systems. The RVCM incorporates nine core functions of MRAs into five modules: (i) clear guidelines and requirements; (ii) control of clinical trials; (iii) market authorization of medical products; (iv) pre-market quality control; and (v) post-market activities. Application of the RVCM allows national stakeholders to identify and prioritize investments according to where they can add the most value to the regulatory process. Depending on the economy, capacity, and needs of a country, some functions can be elevated to a regional or supranational level, while others can be maintained at the national level. In contrast to a "one size fits all" approach to regulation in which each country manages the full regulatory process at the national level, the RVCM encourages leveraging the expertise and capabilities of other MRAs where shared processes strengthen regulation. This value chain approach provides a framework for policymakers to maximize investment impact while striving to reach the goal of safe, affordable, and rapidly accessible medicines for all.

  4. Model comparisons of the reactive burn model SURF in three ASC codes

    Energy Technology Data Exchange (ETDEWEB)

    Whitley, Von Howard [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Stalsberg, Krista Lynn [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Reichelt, Benjamin Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shipley, Sarah Jayne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-12

    A study of the SURF reactive burn model was performed in FLAG, PAGOSA and XRAGE. In this study, three different shock-to-detonation transition experiments were modeled in each code. All three codes produced similar model results for all the experiments modeled and at all resolutions. Buildup-to-detonation time, particle velocities and resolution dependence of the models were notably similar between the codes. Given the current PBX 9502 equations of state and SURF calibrations, each code is equally capable of predicting the correct detonation time and distance when impacted by a 1D impactor at pressures ranging from 10-16 GPa, as long as the resolution of the mesh is not too coarse.

  5. C code generation applied to nonlinear model predictive control for an artificial pancreas

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Jørgensen, John Bagterp

    2017-01-01

    This paper presents a method to generate C code from MATLAB code applied to a nonlinear model predictive control (NMPC) algorithm. The C code generation uses the MATLAB Coder Toolbox. It can drastically reduce the time required for development compared to a manual porting of code from MATLAB to C...

  6. Kinetic models of gene expression including non-coding RNAs

    Energy Technology Data Exchange (ETDEWEB)

    Zhdanov, Vladimir P., E-mail: zhdanov@catalysis.r

    2011-03-15

    In cells, genes are transcribed into mRNAs, and the latter are translated into proteins. Due to the feedbacks between these processes, the kinetics of gene expression may be complex even in the simplest genetic networks. The corresponding models have already been reviewed in the literature. A new avenue in this field is related to the recognition that the conventional scenario of gene expression is fully applicable only to prokaryotes whose genomes consist of tightly packed protein-coding sequences. In eukaryotic cells, in contrast, such sequences are relatively rare, and the rest of the genome includes numerous transcript units representing non-coding RNAs (ncRNAs). During the past decade, it has become clear that such RNAs play a crucial role in gene expression and accordingly influence a multitude of cellular processes both in the normal state and during diseases. The numerous biological functions of ncRNAs are based primarily on their abilities to silence genes via pairing with a target mRNA and subsequently preventing its translation or facilitating degradation of the mRNA-ncRNA complex. Many other abilities of ncRNAs have been discovered as well. Our review is focused on the available kinetic models describing the mRNA, ncRNA and protein interplay. In particular, we systematically present the simplest models without kinetic feedbacks, models containing feedbacks and predicting bistability and oscillations in simple genetic networks, and models describing the effect of ncRNAs on complex genetic networks. Mathematically, the presentation is based primarily on temporal mean-field kinetic equations. The stochastic and spatio-temporal effects are also briefly discussed.
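
    A minimal mean-field example of the ncRNA silencing mechanism described above is sketched below: an ncRNA pairs with its target mRNA, the complex is removed, and protein output falls accordingly. The rate constants and the explicit Euler integration are illustrative choices, not taken from any specific model in the review.

```python
import numpy as np

# Minimal mean-field kinetic sketch of ncRNA-mediated silencing: an ncRNA pairs
# with its target mRNA and the complex is degraded, lowering protein output.
# All rate constants are illustrative placeholders.

def rhs(y, k_m=1.0, k_s=0.8, k_p=5.0, d_m=0.1, d_s=0.1, d_p=0.05, k_pair=0.5):
    m, s, p = y                                    # mRNA, ncRNA, protein
    pairing = k_pair * m * s                       # mRNA-ncRNA pairing and removal
    return np.array([k_m - d_m * m - pairing,
                     k_s - d_s * s - pairing,
                     k_p * m - d_p * p])

# Simple explicit Euler integration to an approximate steady state.
y = np.zeros(3)
dt = 0.01
for _ in range(200_000):
    y = y + dt * rhs(y)
print("steady state (mRNA, ncRNA, protein):", np.round(y, 3))
```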

  7. On boundary layer modelling using the ASTEC code

    International Nuclear Information System (INIS)

    Smith, B.L.

    1991-07-01

    The modelling of fluid boundary layers adjacent to non-slip, heated surfaces using the ASTEC code is described. The principal boundary layer characteristics are derived using simple dimensional arguments and these are developed into criteria for optimum placement of the computational mesh to achieve realistic simulation. In particular, the need for externally-imposed drag and heat transfer correlations as a function of the local mesh concentration is discussed in the context of both laminar and turbulent flow conditions. Special emphasis is placed in the latter case on the (k-ε) turbulence model, which is standard in the code. As far as possible, the analyses are pursued from first principles, so that no comprehensive knowledge of the history of the subject is required for the general ASTEC user to derive practical advice from the document. Some attention is paid to the use of heat transfer correlations for internal solid/fluid surfaces, whose treatment is not straightforward in ASTEC. It is shown that three formulations are possible to effect the heat transfer, called Explicit, Jacobian and Implicit. The particular advantages and disadvantages of each are discussed with regard to numerical stability and computational efficiency. (author) 18 figs., 1 tab., 39 refs
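
    As a practical illustration of mesh placement near a non-slip wall, the sketch below estimates the first-cell height needed to put the near-wall node at a target y+, using a flat-plate skin-friction correlation; a y+ of order 30 is a common choice for wall functions with the k-ε model. The correlation constant and flow values are generic assumptions, not ASTEC-specific guidance.

```python
import math

# Back-of-the-envelope first-cell height for a target y+ using a flat-plate
# skin-friction estimate. This is standard mesh-sizing practice; the constant
# and the flow values are illustrative, not an ASTEC formula.

def first_cell_height(u_inf, length, rho, mu, y_plus_target=30.0):
    re_x = rho * u_inf * length / mu
    cf = 0.026 / re_x ** (1.0 / 7.0)          # flat-plate skin-friction estimate
    tau_w = 0.5 * cf * rho * u_inf ** 2
    u_tau = math.sqrt(tau_w / rho)
    return y_plus_target * mu / (rho * u_tau)

# Example: water-like fluid, 2 m/s along a 1 m heated wall, wall-function range y+ ~ 30.
print(f"{first_cell_height(u_inf=2.0, length=1.0, rho=1000.0, mu=1.0e-3):.2e} m")
```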

  8. Physicochemical analog for modeling superimposed and coded memories

    Science.gov (United States)

    Ensanian, Minas

    1992-07-01

    The mammalian brain is distinguished by a life-time of memories being stored within the same general region of physicochemical space, and having two extraordinary features. First, memories to varying degrees are superimposed, as well as coded. Second, instantaneous recall of past events can often be affected by relatively simple, and seemingly unrelated sensory clues. For the purposes of attempting to mathematically model such complex behavior, and for gaining additional insights, it would be highly advantageous to be able to simulate or mimic similar behavior in a nonbiological entity where some analogical parameters of interest can reasonably be controlled. It has recently been discovered that in nonlinear accumulative metal fatigue, memories (related to mechanical deformation) can be superimposed and coded in the crystal lattice, and that memory, that is, the total number of stress cycles, can be recalled (determined) by scanning not the surfaces but the 'edges' of the objects. The new scanning technique known as electrotopography (ETG) now makes the state space modeling of metallic networks possible. The author provides an overview of the new field and outlines the areas that are of immediate interest to the science of artificial neural networks.

  9. 7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 12 2010-01-01 2010-01-01 false Voluntary National Model Building Codes E Exhibit E... National Model Building Codes The following documents address the health and safety aspects of buildings and related structures and are voluntary national model building codes as defined in § 1924.4(h)(2) of...

  10. Isotopic modelling using the ENIGMA-B fuel performance code

    International Nuclear Information System (INIS)

    Rossiter, G.D.; Cook, P.M.A.; Weston, R.

    2001-01-01

    A number of experimental programmes by BNFL and other MOX fabricators have now shown that the in-pile performance of MOX fuel is generally similar to that of conventional UO2 fuel. Models based on UO2 fuel experience form a good basis for a description of MOX fuel behaviour. However, an area where the performance of MOX fuel is sufficiently different from that of UO2 to warrant model changes is in the radial power and burnup profile. The differences in radial power and burnup profile arise from the presence of significant concentrations of plutonium in MOX fuel, at beginning of life, and their subsequent evolution with burnup. Amongst other effects, plutonium has a greater neutron absorption cross-section than uranium. This paper focuses on the development of a new model for the radial power and burnup profile within a UO2 or MOX fuel rod, in which the underlying fissile isotope concentration distributions are tracked during irradiation. The new model has been incorporated into the ENIGMA-B fuel performance code and has been extended to track the isotopic concentrations of the fission gases, xenon and krypton. The calculated distributions have been validated against results from rod puncture measurements and electron probe micro-analysis (EPMA) linescans, performed during the M501 post irradiation examination (PIE) programme. The predicted gas inventory of the fuel/clad gap is compared with the isotopic composition measured during rod puncture and the measured radial distributions of burnup (from neodymium measurements) and plutonium in the fuel are compared with the calculated distributions. It is shown that there is good agreement between the code predictions and the measurements. (author)

  11. A Positive Regulatory Loop between a Wnt-Regulated Non-coding RNA and ASCL2 Controls Intestinal Stem Cell Fate.

    Science.gov (United States)

    Giakountis, Antonis; Moulos, Panagiotis; Zarkou, Vasiliki; Oikonomou, Christina; Harokopos, Vaggelis; Hatzigeorgiou, Artemis G; Reczko, Martin; Hatzis, Pantelis

    2016-06-21

    The canonical Wnt pathway plays a central role in stem cell maintenance, differentiation, and proliferation in the intestinal epithelium. Constitutive, aberrant activity of the TCF4/β-catenin transcriptional complex is the primary transforming factor in colorectal cancer. We identify a nuclear long non-coding RNA, termed WiNTRLINC1, as a direct target of TCF4/β-catenin in colorectal cancer cells. WiNTRLINC1 positively regulates the expression of its genomic neighbor ASCL2, a transcription factor that controls intestinal stem cell fate. WiNTRLINC1 interacts with TCF4/β-catenin to mediate the juxtaposition of its promoter with the regulatory regions of ASCL2. ASCL2, in turn, regulates WiNTRLINC1 transcriptionally, closing a feedforward regulatory loop that controls stem cell-related gene expression. This regulatory circuitry is highly amplified in colorectal cancer and correlates with increased metastatic potential and decreased patient survival. Our results uncover the interplay between non-coding RNA-mediated regulation and Wnt signaling and point to the diagnostic and therapeutic potential of WiNTRLINC1. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  12. Impurity seeding in ASDEX upgrade tokamak modeled by COREDIV code

    Energy Technology Data Exchange (ETDEWEB)

    Galazka, K.; Ivanova-Stanik, I.; Czarnecka, A.; Zagoerski, R. [Institute of Plasma Physics and Laser Microfusion, Warsaw (Poland); Bernert, M.; Kallenbach, A. [Max-Planck-Institut fuer Plasmaphysik, Garching (Germany); Collaboration: ASDEX Upgrade Team

    2016-08-15

    The self-consistent COREDIV code is used to simulate discharges in a tokamak plasma, especially the influence of impurities during nitrogen and argon seeding on the key plasma parameters. The calculations are performed with and without taking into account the W prompt redeposition in the divertor area and are compared to the experimental results acquired on ASDEX Upgrade tokamak (shots 29254 and 29257). For both impurities the modeling shows a better agreement with the experiment in the case without prompt redeposition. It is attributed to higher average tungsten concentration, which on the other hand seriously exceeds the experimental value. By turning the prompt redeposition process on, the W concentration is lowered, what, in turn, results in underestimation of the radiative power losses. By analyzing the influence of the transport coefficients on the radiative power loss and average W concentration it is concluded that the way to compromise the opposing tendencies is to include the edge-localized mode flushing mechanism into the code, which dominates the experimental particle and energy balance. Also performing the calculations with both anomalous and neoclassical diffusion transport mechanisms included is suggested. (copyright 2016 The Authors. Contributions to Plasma Physics published by Wiley-VCH Verlag GmbH and Co. KGaA Weinheim. This)

  13. Modelling guidelines for core exit temperature simulations with system codes

    Energy Technology Data Exchange (ETDEWEB)

    Freixa, J., E-mail: jordi.freixa-terradas@upc.edu [Department of Physics and Nuclear Engineering, Technical University of Catalonia (UPC) (Spain); Paul Scherrer Institut (PSI), 5232 Villigen (Switzerland); Martínez-Quiroga, V., E-mail: victor.martinez@nortuen.com [Department of Physics and Nuclear Engineering, Technical University of Catalonia (UPC) (Spain); Zerkak, O., E-mail: omar.zerkak@psi.ch [Paul Scherrer Institut (PSI), 5232 Villigen (Switzerland); Reventós, F., E-mail: francesc.reventos@upc.edu [Department of Physics and Nuclear Engineering, Technical University of Catalonia (UPC) (Spain)

    2015-05-15

    Highlights: • Core exit temperature is used in PWRs as an indication of core heat up. • Modelling guidelines of CET response with system codes. • Modelling of heat transfer processes in the core and UP regions. - Abstract: Core exit temperature (CET) measurements play an important role in the sequence of actions under accidental conditions in pressurized water reactors (PWR). Given the difficulties in placing measurements in the core region, CET readings are used as criterion for the initiation of accident management (AM) procedures because they can indicate a core heat up scenario. However, the CET responses have some limitation in detecting inadequate core cooling and core uncovery simply because the measurement is not placed inside the core. Therefore, it is of main importance in the field of nuclear safety for PWR power plants to assess the capabilities of system codes for simulating the relation between the CET and the peak cladding temperature (PCT). The work presented in this paper intends to address this open question by making use of experimental work at integral test facilities (ITF) where experiments related to the evolution of the CET and the PCT during transient conditions have been carried out. In particular, simulations of two experiments performed at the ROSA/LSTF and PKL facilities are presented. The two experiments are part of a counterpart exercise between the OECD/NEA ROSA-2 and OECD/NEA PKL-2 projects. The simulations are used to derive guidelines in how to correctly reproduce the CET response during a core heat up scenario. Three aspects have been identified to be of main importance: (1) the need for a 3-dimensional representation of the core and Upper Plenum (UP) regions in order to model the heterogeneity of the power zones and axial areas, (2) the detailed representation of the active and passive heat structures, and (3) the use of simulated thermocouples instead of steam temperatures to represent the CET readings.

  14. CODE's new solar radiation pressure model for GNSS orbit determination

    Science.gov (United States)

    Arnold, D.; Meindl, M.; Beutler, G.; Dach, R.; Schaer, S.; Lutz, S.; Prange, L.; Sośnica, K.; Mervart, L.; Jäggi, A.

    2015-08-01

    The Empirical CODE Orbit Model (ECOM) of the Center for Orbit Determination in Europe (CODE), which was developed in the early 1990s, is widely used in the International GNSS Service (IGS) community. For a rather long time, spurious spectral lines are known to exist in geophysical parameters, in particular in the Earth Rotation Parameters (ERPs) and in the estimated geocenter coordinates, which could recently be attributed to the ECOM. These effects grew creepingly with the increasing influence of the GLONASS system in recent years in the CODE analysis, which is based on a rigorous combination of GPS and GLONASS since May 2003. In a first step we show that the problems associated with the ECOM are to the largest extent caused by the GLONASS, which was reaching full deployment by the end of 2011. GPS-only, GLONASS-only, and combined GPS/GLONASS solutions using the observations in the years 2009-2011 of a global network of 92 combined GPS/GLONASS receivers were analyzed for this purpose. In a second step we review direct solar radiation pressure (SRP) models for GNSS satellites. We demonstrate that only even-order short-period harmonic perturbations acting along the direction Sun-satellite occur for GPS and GLONASS satellites, and only odd-order perturbations acting along the direction perpendicular to both, the vector Sun-satellite and the spacecraft's solar panel axis. Based on this insight we assess in the third step the performance of four candidate orbit models for the future ECOM. The geocenter coordinates, the ERP differences w. r. t. the IERS 08 C04 series of ERPs, the misclosures for the midnight epochs of the daily orbital arcs, and scale parameters of Helmert transformations for station coordinates serve as quality criteria. The old and updated ECOM are validated in addition with satellite laser ranging (SLR) observations and by comparing the orbits to those of the IGS and other analysis centers. Based on all tests, we present a new extended ECOM which
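
    The empirical parameterization described above can be sketched as follows: the SRP acceleration is expanded in the Sun-oriented D/Y/B frame, with even-order periodic terms along the Sun-satellite direction and odd-order terms along the B axis, as the abstract indicates. The coefficient values and the exact set of retained terms below are illustrative assumptions, not the new extended ECOM itself.

```python
import math

# Hedged sketch of an ECOM-style SRP parameterization in the D/Y/B frame:
# a constant term per axis, twice-per-rev terms along D (even order) and
# once-per-rev terms along B (odd order). Coefficient values are illustrative.

def ecom_accel(du, D=(1.0e-7, 5.0e-9, 5.0e-9), Y0=1.0e-9, B=(2.0e-9, 1.0e-9, 1.0e-9)):
    """du: argument of latitude of the satellite relative to the Sun (rad).
    D = (D0, D2c, D2s): constant plus twice-per-rev terms along Sun-satellite.
    B = (B0, B1c, B1s): constant plus once-per-rev terms along the B axis.
    Returns (a_D, a_Y, a_B) in m/s^2 in the D/Y/B frame."""
    D0, D2c, D2s = D
    B0, B1c, B1s = B
    a_d = D0 + D2c * math.cos(2 * du) + D2s * math.sin(2 * du)
    a_y = Y0
    a_b = B0 + B1c * math.cos(du) + B1s * math.sin(du)
    return a_d, a_y, a_b

print(ecom_accel(du=math.radians(45.0)))
```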

  15. TASS/SMR Code Topical Report for SMART Plant, Vol. I: Code Structure, System Models, and Solution Methods

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Young Jong; Kim, Soo Hyoung; Kim, See Darl (and others)

    2008-10-15

    The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, which is an integral-type pressurized water reactor. It can be applied to the analysis of design basis accidents of the SMART plant, including non-LOCA events and loss of coolant accidents (LOCA). The TASS/SMR code can be applied to any plant regardless of the structural characteristics of a reactor since the code solves the same governing equations for both the primary and secondary system. The code has been developed to meet the requirements for a safety analysis code. This report describes the overall structure of TASS/SMR, input processing, and the procedures for steady-state and transient calculations. In addition, basic differential equations, finite difference equations, state relationships, and constitutive models are described in the report. First, the conservation equations, the discretization process for numerical analysis, and the search method for state relationships are described. Then, a core power model, heat transfer models, physical models for various components, and control and trip models are explained.

  16. A Realistic Model under which the Genetic Code is Optimal

    NARCIS (Netherlands)

    Buhrman, H.; van der Gulik, P.T.S.; Klau, G.W.; Schaffner, C.; Speijer, D.; Stougie, L.

    2013-01-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By
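
    The mean-square robustness measure described above can be computed by averaging the squared hydrophobicity differences over all single-nucleotide substitutions, as sketched below. The tiny codon table and hydrophobicity values are placeholders; a real calculation needs the full 64-codon table and a complete scale.

```python
from itertools import product

# Sketch of a mean-square error-robustness measure for a genetic code: average
# the squared hydrophobicity difference over all single-nucleotide substitutions.
# The tables below are illustrative fragments, not a full code or scale.

BASES = "UCAG"

def ms_error(codon_table, hydrophobicity):
    """codon_table: dict codon -> amino acid (stop codons omitted or mapped to None).
    hydrophobicity: dict amino acid -> scale value."""
    total, count = 0.0, 0
    for codon, aa in codon_table.items():
        if aa is None:
            continue
        for pos, base in product(range(3), BASES):
            mutant = codon[:pos] + base + codon[pos + 1:]
            aa2 = codon_table.get(mutant)
            if mutant == codon or aa2 is None:
                continue
            total += (hydrophobicity[aa] - hydrophobicity[aa2]) ** 2
            count += 1
    return total / count

# Placeholder fragment of a code (real use needs all 64 codons and a full scale).
toy_table = {"UUU": "Phe", "UUC": "Phe", "UUA": "Leu", "UUG": "Leu",
             "UCU": "Ser", "UAU": "Tyr", "UGU": "Cys"}
toy_scale = {"Phe": 2.8, "Leu": 3.8, "Ser": -0.8, "Tyr": -1.3, "Cys": 2.5}
print(f"MS error robustness (lower = more robust): {ms_error(toy_table, toy_scale):.2f}")
```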

  17. Simplified modeling and code usage in the PASC-3 code system by the introduction of a programming environment

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Oppe, J.; Oudshoorn, H.L.; Slobben, J.

    1991-06-01

    A brief description is given of the PASC-3 (Petten-AMPX-SCALE) Reactor Physics code system and associated UNIPASC work environment. The PASC-3 code system is used for criticality and reactor calculations and consists of a selection from the Oak Ridge National Laboratory AMPX-SCALE-3 code collection complemented with a number of additional codes and nuclear data bases. The original codes have been adapted to run under the UNIX operating system. The recommended nuclear data base is a complete 219 group cross section library derived from JEF-1 of which some benchmark results are presented. By the addition of the UNIPASC work environment the usage of the code system is greatly simplified. Complex chains of programs can easily be coupled together to form a single job. In addition, the model parameters can be represented by variables instead of literal values which enhances the readability and may improve the integrity of the code inputs. (author). 8 refs.; 6 figs.; 1 tab

  18. Watershed monitoring and modelling and USA regulatory compliance.

    Science.gov (United States)

    Turner, B G; Boner, M C

    2004-01-01

    The aim of the Columbus program was to implement a comprehensive watershed monitoring-network including water chemistry, aquatic biology and alternative sensors to establish water environment health and methods for determining future restoration progress and early warning for protection of drinking water supplies. The program was implemented to comply with USA regulatory requirements including Total Maximum Daily Load (TMDL) rules of the Clean Water Act (CWA) and Source Water Assessment and Protection (SWAP) rules under the Safe Drinking Water Act (SDWA). The USEPA Office of Research and Development and the Water Environment Research Foundation provided quality assurance oversight. The results obtained demonstrated that significant wet weather data is necessary to establish relationships between land use, water chemistry, aquatic biology and sensor data. These measurements and relationships formed the basis for calibrating the US EPA BASINS Model, prioritizing watershed health and determination of compliance with water quality standards. Conclusions specify priorities of cost-effective drainage system controls that attenuate stormwater flows and capture flushed pollutants. A network of permanent long-term real-time monitoring using combination of continuous sensor measurements, water column sampling and aquatic biology surveys and a regional organization is prescribed to protect drinking water supplies and measure progress towards water quality targets.

  19. A MODEL BUILDING CODE ARTICLE ON FALLOUT SHELTERS WITH RECOMMENDATIONS FOR INCLUSION OF REQUIREMENTS FOR FALLOUT SHELTER CONSTRUCTION IN FOUR NATIONAL MODEL BUILDING CODES.

    Science.gov (United States)

    American Inst. of Architects, Washington, DC.

    A model building code for fallout shelters was drawn up for inclusion in four national model building codes. Discussion is given of fallout shelters with respect to (1) nuclear radiation, (2) national policies, and (3) community planning. Fallout shelter requirements for shielding, space, ventilation, construction, and services such as electrical…

  20. Modeling Vortex Generators in a Navier-Stokes Code

    Science.gov (United States)

    Dudek, Julianne C.

    2011-01-01

    A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.

  1. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  2. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  3. Modelling RF sources using 2-D PIC codes

    International Nuclear Information System (INIS)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation

  4. Maximizing entropy of image models for 2-D constrained coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Danieli, Matteo; Burini, Nino

    2010-01-01

    This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite context models, which define stationary probability distributions on finite rectangles and thus allow for calculation of the entropy. We consider two binary constraints and revisit the hard square constraint given by forbidding neighboring 1s and provide novel results for the constraint that no uniform 2 × 2 squares contain all 0s or all 1s. The maximum values of the entropy for the constraints are estimated and binary PRF satisfying the constraint are characterized and optimized w.r.t. the entropy. The maximum binary PRF entropy is 0.839 bits/symbol for the no uniform squares constraint. The entropy...
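
    The second constraint is easy to state programmatically: a binary field is admissible if no 2 x 2 square is all 0s or all 1s. The helper below checks that condition on a small array; it is only a constraint checker, not the entropy estimation or the PRF optimization discussed in the abstract.

```python
import numpy as np

# Checks whether a binary field satisfies the "no uniform 2x2 squares" constraint.

def satisfies_no_uniform_squares(field):
    f = np.asarray(field, dtype=int)
    blocks = f[:-1, :-1] + f[1:, :-1] + f[:-1, 1:] + f[1:, 1:]   # sum of each 2x2 square
    return bool(np.all((blocks > 0) & (blocks < 4)))             # 0 or 4 means uniform

print(satisfies_no_uniform_squares([[0, 1, 0],
                                    [1, 0, 1],
                                    [0, 1, 0]]))      # True: every 2x2 square is mixed
print(satisfies_no_uniform_squares([[1, 1, 0],
                                    [1, 1, 0],
                                    [0, 0, 0]]))      # False: uniform squares present
```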

  5. Physical model of the nuclear fuel cycle simulation code SITON

    International Nuclear Information System (INIS)

    Brolly, Á.; Halász, M.; Szieberth, M.; Nagy, L.; Fehér, S.

    2017-01-01

    Finding answers to the main challenges of nuclear energy, like resource utilisation or waste minimisation, calls for transient fuel cycle modelling. This motivation led to the development of SITON v2.0, a dynamic, discrete facilities/discrete materials and discrete events fuel cycle simulation code. The physical model of the code includes the most important fuel cycle facilities. Facilities can be connected flexibly; their number is not limited. Material transfer between facilities is tracked by taking into account 52 nuclides. Composition of discharged fuel is determined using burnup tables, except for the 2400 MW thermal power design of the Gas-Cooled Fast Reactor (GFR2400). For the GFR2400 the FITXS method is used, which fits one-group microscopic cross-sections as polynomial functions of the fuel composition. This method is accurate and fast enough to be used in fuel cycle simulations. Operation of the fuel cycle, i.e. material requests and transfers, is described by discrete events. In advance of the simulation, reactors and plants formulate their requests as events; triggered requests are tracked. After that, the events are simulated, i.e. the requests are fulfilled and the composition of the material flow between facilities is calculated. To demonstrate the capabilities of SITON v2.0, a hypothetical transient fuel cycle is presented in which a 4-unit VVER-440 reactor park was replaced by one GFR2400 that recycled its own spent fuel. It is found that the GFR2400 can be started if the cooling time of its spent fuel is 2 years. However, if the cooling time is 5 years it needs an additional plutonium feed, which can be covered from the spent fuel of a Generation III light water reactor.
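
    The FITXS idea mentioned above, fitting one-group microscopic cross-sections as polynomial functions of the fuel composition, can be sketched with an ordinary least-squares fit. The composition variables, data and coefficients below are synthetic and purely illustrative.

      # Minimal sketch of the FITXS idea: fit a one-group cross-section as a
      # low-order polynomial of fuel composition. Synthetic data, hypothetical names.
      import numpy as np

      rng = np.random.default_rng(0)
      n_pu = rng.uniform(0.10, 0.25, 200)      # Pu fraction (illustrative)
      n_ma = rng.uniform(0.00, 0.05, 200)      # minor-actinide fraction (illustrative)

      # "True" one-group fission XS used only to generate training data [barn]
      sigma_f = 1.8 + 3.2 * n_pu + 0.9 * n_ma + 1.1 * n_pu**2 + rng.normal(0, 0.01, 200)

      # Design matrix for a quadratic polynomial in (n_pu, n_ma)
      X = np.column_stack([np.ones_like(n_pu), n_pu, n_ma,
                           n_pu**2, n_ma**2, n_pu * n_ma])
      coeff, *_ = np.linalg.lstsq(X, sigma_f, rcond=None)

      test = np.array([1.0, 0.18, 0.02, 0.18**2, 0.02**2, 0.18 * 0.02])
      print("fitted one-group XS at test composition:", test @ coeff)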

  6. Nuclear model codes available at the Nuclear Energy Agency Computer Program Library (NEA-CPL)

    International Nuclear Information System (INIS)

    Sartori, E.; Garcia Viedma, L. de

    1976-01-01

    This paper briefly outlines the objectives of the NEA-CPL and its activities in the field of Nuclear Model Computer Codes. A short description of the computer codes available from the CPL in this field is also presented. (author)

  7. Large-Signal Code TESLA: Improvements in the Implementation and in the Model

    National Research Council Canada - National Science Library

    Chernyavskiy, Igor A; Vlasov, Alexander N; Anderson, Jr., Thomas M; Cooke, Simon J; Levush, Baruch; Nguyen, Khanh T

    2006-01-01

    We describe the latest improvements made in the large-signal code TESLA, which include transformation of the code to a Fortran-90/95 version with dynamical memory allocation and extension of the model...

  8. MINIMARS interim report appendix halo model and computer code

    International Nuclear Information System (INIS)

    Santarius, J.F.; Barr, W.L.; Deng, B.Q.; Emmert, G.A.

    1985-01-01

    A tenuous, cool plasma called the halo shields the core plasma in a tandem mirror from neutral gas and impurities. The neutral particles are ionized and then pumped by the halo to the end tanks of the device, since flow of plasma along field lines is much faster than radial flow. Plasma reaching the end tank walls recombines, and the resulting neutral gas is vacuum pumped. The basic geometry of the MINIMARS halo is shown. For halo modeling purposes, the core plasma and cold gas regions may be treated as single radial zones leading to halo source and sink terms. The halo itself is divided into two major radial zones: the halo scraper and the halo dump. The halo scraper zone is defined by the radial distance required for the ion end plugging potential to drop to the central cell value, and thus have no effect on axial confinement; this distance is typically a sloshing plug ion Larmor diameter. The outer edge of the halo dump zone is defined by the last central cell flux tube to pass through the choke coil. This appendix summarizes the halo model that has been developed for MINIMARS and the methodology used in implementing that model as a computer code

  9. PCCS model development for SBWR using the CONTAIN code

    International Nuclear Information System (INIS)

    Tills, J.; Murata, K.K.; Washington, K.E.

    1994-01-01

    The General Electric Simplified Boiling Water Reactor (SBWR) employs a passive containment cooling system (PCCS) to maintain long-term containment gas pressure and temperature below design limits during accidents. This system consists of a steam supply line that connects the upper portion of the drywell with a vertical shell-and-tube single pass heat exchanger located in an open water pool outside of the containment safety envelope. The heat exchanger tube outlet is connected to a vent line that is submerged below the suppression pool surface but above the main suppression pool horizontal vents. Steam generated in the post-shutdown period flows into the heat exchanger tubes as the result of suction and/or a low pressure differential between the drywell and suppression chamber. Operation of the PCCS is complicated by the presence of noncondensables in the flow stream. Build-up of noncondensables in the exchanger and vent line for the periods when the vent is not cleared causes a reduction in the exchanger heat removal capacity. As flow to the exchanger is reduced due to the noncondensable gas build-up, the drywell pressure increases until the vent line is cleared and the noncondensables are purged into the suppression chamber, restoring the heat removal capability of the PCCS. This paper reports on progress made in modeling SBWR containment loads using the CONTAIN code. As a central part of this effort, a PCCS model development effort has recently been undertaken to implement an appropriate model in CONTAIN. The CONTAIN PCCS modeling approach is discussed and validated. A full SBWR containment input deck has also been developed for CONTAIN. The plant response to a postulated design basis accident (DBA) has been calculated with the CONTAIN PCCS model and plant deck, and the preliminary results are discussed

  10. Future time perspective, regulatory focus, and selection, optimization, and compensation: Testing a longitudinal model

    NARCIS (Netherlands)

    Baltes, B.B.; Wynne, K.; Sirabian, M.; Krenn, D.; Lange, A.H. de

    2014-01-01

    This study examines the behavioral processes through which future time perspective (FTP) and regulatory focus may influence coping behaviors in older workers. A three-wave longitudinal study was conducted to test a novel model, positing that FTP affects regulatory focus, which then influences the

  11. ETFOD: a point model physics code with arbitrary input

    International Nuclear Information System (INIS)

    Rothe, K.E.; Attenberger, S.E.

    1980-06-01

    ETFOD is a zero-dimensional code which solves a set of physics equations by minimization. The technique used is different from that normally used, in that the input is arbitrary. The user is supplied with a set of variables from which he specifies which variables are input (unchanging). The remaining variables become the output. Presently the code is being used for ETF reactor design studies. The code was written in a manner to allow easy modification of equations, variables, and physics calculations. The solution technique is presented along with hints for using the code

  12. Regulatory framework and business models for charging plug-in electric vehicles: Infrastructure, agents, and commercial relationships

    International Nuclear Information System (INIS)

    Gomez San Roman, Tomas; Momber, Ilan; Rivier Abbad, Michel; Sanchez Miralles, Alvaro

    2011-01-01

    Electric vehicles (EVs) present efficiency and environmental advantages over conventional transportation. It is expected that in the next decade this technology will progressively penetrate the market. The integration of plug-in electric vehicles in electric power systems poses new challenges in terms of regulation and business models. This paper proposes a conceptual regulatory framework for charging EVs. Two new electricity market agents, the EV charging manager and the EV aggregator, in charge of developing charging infrastructure and providing charging services are introduced. According to that, several charging modes such as EV home charging, public charging on streets, and dedicated charging stations are formulated. Involved market agents and their commercial relationships are analysed in detail. The paper elaborates the opportunities to formulate more sophisticated business models for vehicle-to-grid applications under which the storage capability of EV batteries is used for providing peak power or frequency regulation to support the power system operation. Finally penetration phase dependent policy and regulatory recommendations are given concerning time-of-use pricing, smart meter deployment, stable and simple regulation for reselling energy on private property, roll-out of public charging infrastructure as well as reviewing of grid codes and operational system procedures for interactions between network operators and vehicle aggregators. - Highlights: → A conceptual regulatory framework for charging EVs is proposed. → 2 new agents, EV charging point manager, EV aggregator and their functions are introduced. → Depending on private or public access of charging points, contractual relations change. → A classification of charging scenarios alludes implications on regulatory topics. → EV penetration phase dependent policy and regulatory recommendations are given.

  13. Regulatory framework and business models for charging plug-in electric vehicles: Infrastructure, agents, and commercial relationships

    Energy Technology Data Exchange (ETDEWEB)

    Gomez San Roman, Tomas [Instituto de Investigacion Tecnologica, Universidad Pontificia Comillas, Madrid (Spain); Momber, Ilan, E-mail: ilan.momber@iit.upcomillas.es [Instituto de Investigacion Tecnologica, Universidad Pontificia Comillas, Madrid (Spain); Rivier Abbad, Michel; Sanchez Miralles, Alvaro [Instituto de Investigacion Tecnologica, Universidad Pontificia Comillas, Madrid (Spain)

    2011-10-15

    Electric vehicles (EVs) present efficiency and environmental advantages over conventional transportation. It is expected that in the next decade this technology will progressively penetrate the market. The integration of plug-in electric vehicles in electric power systems poses new challenges in terms of regulation and business models. This paper proposes a conceptual regulatory framework for charging EVs. Two new electricity market agents, the EV charging manager and the EV aggregator, in charge of developing charging infrastructure and providing charging services are introduced. According to that, several charging modes such as EV home charging, public charging on streets, and dedicated charging stations are formulated. Involved market agents and their commercial relationships are analysed in detail. The paper elaborates the opportunities to formulate more sophisticated business models for vehicle-to-grid applications under which the storage capability of EV batteries is used for providing peak power or frequency regulation to support the power system operation. Finally penetration phase dependent policy and regulatory recommendations are given concerning time-of-use pricing, smart meter deployment, stable and simple regulation for reselling energy on private property, roll-out of public charging infrastructure as well as reviewing of grid codes and operational system procedures for interactions between network operators and vehicle aggregators. - Highlights: > A conceptual regulatory framework for charging EVs is proposed. > 2 new agents, EV charging point manager, EV aggregator and their functions are introduced. > Depending on private or public access of charging points, contractual relations change. > A classification of charging scenarios alludes implications on regulatory topics. > EV penetration phase dependent policy and regulatory recommendations are given.

  14. Modeling ion exchange in clinoptilolite using the EQ3/6 geochemical modeling code

    International Nuclear Information System (INIS)

    Viani, B.E.; Bruton, C.J.

    1992-06-01

    Assessing the suitability of Yucca Mtn., NV as a potential repository for high-level nuclear waste requires the means to simulate ion-exchange behavior of zeolites. Vanselow and Gapon convention cation-exchange models have been added to geochemical modeling codes EQ3NR/EQ6, allowing exchange to be modeled for up to three exchangers or a single exchanger with three independent sites. Solid-solution models that are numerically equivalent to the ion-exchange models were derived and also implemented in the code. The Gapon model is inconsistent with experimental adsorption isotherms of trace components in clinoptilolite. A one-site Vanselow model can describe adsorption of Cs or Sr on clinoptilolite, but a two-site Vanselow exchange model is necessary to describe K contents of natural clinoptilolites
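
    For a homovalent binary exchange such as Cs+ replacing Na+ on clinoptilolite, the Vanselow convention treats the mole fractions of the exchanger end-members as their activities. A sketch of the resulting selectivity expression is given below; the notation is generic and is not quoted from the paper.

      \mathrm{Cs^{+} + NaX \;\rightleftharpoons\; CsX + Na^{+}},
      \qquad
      K_{V} \;=\; \frac{x_{\mathrm{CsX}}\, a_{\mathrm{Na^{+}}}}{x_{\mathrm{NaX}}\, a_{\mathrm{Cs^{+}}}},
      \qquad
      x_{\mathrm{CsX}} \;=\; \frac{n_{\mathrm{CsX}}}{n_{\mathrm{CsX}} + n_{\mathrm{NaX}}}

    where a denotes aqueous activities, x mole fractions of the exchanger components, and n moles of exchange sites occupied by each cation.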

  15. Methodology Using MELCOR Code to Model Proposed Hazard Scenario

    Energy Technology Data Exchange (ETDEWEB)

    Gavin Hawkley

    2010-07-01

    This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of a leak path factor (LPF), the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the amount of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and upon other pathways from the building, such as doorways, both open and closed. The study shows how the multiple LPFs within the building can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a downwind receptor can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). The study also briefly addresses particle characteristics that affect atmospheric dispersion and compares this dispersion with the LPF methodology.
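
    The combinatory evaluation described above can be pictured as multiplying the leak path factors along each serial pathway and summing over parallel pathways. The sketch below is illustrative only; the pathway factors and source term are placeholders, not results of the MELCOR study.

      # Illustrative combination of leak path factors (LPFs): multiply the factors
      # along each serial pathway, then sum over parallel pathways.
      from math import prod

      pathways = {
          "room -> corridor -> open doorway": [0.5, 0.5],
          "room -> ventilation (filtered)":   [0.5, 1.0e-3],
      }

      total_lpf = sum(prod(factors) for factors in pathways.values())

      source_term_g = 1.0e-2   # respirable material made airborne indoors (illustrative)
      print("total LPF:", total_lpf)
      print("respirable release to environment [g]:", source_term_g * total_lpf)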

  16. Mapping regulatory models for medicinal cannabis: a matrix of options.

    Science.gov (United States)

    Belackova, Vendula; Shanahan, Marian; Ritter, Alison

    2017-05-30

    Objective The aim of the present study was to develop a framework for assessing regulatory options for medicinal cannabis in Australia. Methods International regulatory regimes for medicinal cannabis were reviewed with a qualitative policy analysis approach and key policy features were synthesised, leading to a conceptual framework that facilitates decision making across multiple dimensions. Results Two central organising dimensions of medicinal cannabis regulation were identified: cannabis supply and patient authorisation (including patient access). A number of the different supply options can be matched with a number of different patient authorisation options, leading to a matrix of possible regulatory regimes. Conclusions The regulatory options, as used internationally, involve different forms of cannabis (synthetic and plant-based pharmaceutical preparations or herbal cannabis) and the varying extent to which patient authorisation policies and procedures are stringently or more loosely defined. The optimal combination of supply and patient authorisation options in any jurisdiction that chooses to make medicinal cannabis accessible will depend on policy goals. What is known about the topic? Internationally, regulation of medicinal cannabis has developed idiosyncratically, depending on formulations that were made available and local context. There has been no attempt to date in the scientific literature to systematically document the variety of regulatory possibilities for medicinal cannabis. What does this paper add? This paper presents a new conceptual schema for considering options for the regulation of medicinal cannabis, across both supply and patient authorisation aspects. What are the implications for practitioners? The design of regulatory systems in Australia, whether for pharmaceutical or herbal products, is a vital issue for policy makers right now as federal and state and territory governments grapple with the complexities of medicinal cannabis

  17. XSOR codes users manual

    International Nuclear Information System (INIS)

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named "XSOR". The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore the phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms

  18. Lost opportunities: Modeling commercial building energy code adoption in the United States

    International Nuclear Information System (INIS)

    Nelson, Hal T.

    2012-01-01

    This paper models the adoption of commercial building energy codes in the US between 1977 and 2006. Energy code adoption typically results in an increase in aggregate social welfare by cost effectively reducing energy expenditures. Using a Cox proportional hazards model, I test if relative state funding, a new, objective, multivariate regression-derived measure of government capacity, as well as a vector of control variables commonly used in comparative state research, predict commercial building energy code adoption. The research shows little political influence over historical commercial building energy code adoption in the sample. Colder climates and higher electricity prices also do not predict more frequent code adoptions. I do find evidence of high government capacity states being 60 percent more likely than low capacity states to adopt commercial building energy codes in the following year. Wealthier states are also more likely to adopt commercial codes. Policy recommendations to increase building code adoption include increasing access to low cost capital for the private sector and providing noncompetitive block grants to the states from the federal government. - Highlights: ► Model the adoption of commercial building energy codes from 1977–2006 in the US. ► Little political influence over historical building energy code adoption. ► High capacity states are over 60 percent more likely than low capacity states to adopt codes. ► Wealthier states are more likely to adopt commercial codes. ► Access to capital and technical assistance is critical to increase code adoption.
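
    The Cox proportional hazards analysis described above can be reproduced in outline with standard survival-analysis tooling. The sketch below uses the lifelines package on a made-up state-level data frame; the column names and values are hypothetical stand-ins for the paper's covariates (government capacity, electricity price, income).

      # Sketch of a Cox proportional hazards model for code adoption, using the
      # lifelines package. The data frame and column names are hypothetical.
      import pandas as pd
      from lifelines import CoxPHFitter

      df = pd.DataFrame({
          "years_to_adoption": [5, 12, 30, 8, 22, 30, 15, 9],
          "adopted":           [1, 1, 0, 1, 1, 0, 1, 1],   # 1 = adopted, 0 = censored
          "gov_capacity":      [0.8, 0.6, 0.2, 0.9, 0.5, 0.3, 0.7, 0.85],
          "elec_price":        [9.1, 7.4, 6.8, 10.2, 8.0, 6.5, 7.9, 9.5],
          "income_pc":         [48, 41, 35, 52, 44, 37, 46, 50],
      })

      cph = CoxPHFitter()
      cph.fit(df, duration_col="years_to_adoption", event_col="adopted")
      cph.print_summary()   # hazard ratios indicate which covariates speed adoption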

  19. Coding conventions and principles for a National Land-Change Modeling Framework

    Science.gov (United States)

    Donato, David I.

    2017-07-14

    This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.

  20. A comparative study of covariance selection models for the inference of gene regulatory networks.

    Science.gov (United States)

    Stifanelli, Patrizia F; Creanza, Teresa M; Anglani, Roberto; Liuzzi, Vania C; Mukherjee, Sayan; Schena, Francesco P; Ancona, Nicola

    2013-10-01

    The inference, or 'reverse-engineering', of gene regulatory networks from expression data and the description of the complex dependency structures among genes are open issues in modern molecular biology. In this paper we compared three regularized methods of covariance selection for the inference of gene regulatory networks, developed to circumvent the problems arising when the number of observations n is smaller than the number of genes p. The examined approaches provided three alternative estimates of the inverse covariance matrix: (a) the 'PINV' method is based on the Moore-Penrose pseudoinverse, (b) the 'RCM' method performs correlation between regression residuals and (c) the 'ℓ(2C)' method maximizes a properly regularized log-likelihood function. Our extensive simulation studies showed that ℓ(2C) outperformed the other two methods, having the most predictive partial correlation estimates and the highest sensitivity to infer conditional dependencies between genes even when only a small number of observations was available. The application of this method for inferring gene networks of the isoprenoid biosynthesis pathways in Arabidopsis thaliana revealed a negative partial correlation coefficient between the two hubs in the two isoprenoid pathways and, more importantly, provided evidence of cross-talk between genes in the plastidial and the cytosolic pathways. When applied to gene expression data relative to a signature of the HRAS oncogene in human cell cultures, the method revealed 9 genes (p-value<0.0005) directly interacting with HRAS, sharing the same Ras-responsive binding site for the transcription factor RREB1. This result suggests that the transcriptional activation of these genes is mediated by a common transcription factor downstream of Ras signaling. Software implementing the methods in the form of Matlab scripts is available at: http://users.ba.cnr.it/issia/iesina18/CovSelModelsCodes.zip. Copyright © 2013 The Authors. Published by
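
    The 'PINV' estimate mentioned above can be illustrated by deriving partial correlations from the Moore-Penrose pseudoinverse of the sample covariance matrix, which remains defined even when p > n. The sketch below uses synthetic expression data; the variable names are illustrative.

      # Partial correlations from the Moore-Penrose pseudoinverse of the sample
      # covariance matrix (a 'PINV'-style estimate). Synthetic data: n samples,
      # p genes, with p > n.
      import numpy as np

      rng = np.random.default_rng(1)
      n, p = 20, 50
      X = rng.normal(size=(n, p))
      X[:, 1] += 0.8 * X[:, 0]          # plant one strong dependency (gene 0 -> gene 1)

      S = np.cov(X, rowvar=False)       # p x p sample covariance (rank-deficient)
      P = np.linalg.pinv(S)             # pseudoinverse plays the role of the precision matrix

      d = np.sqrt(np.diag(P))
      partial_corr = -P / np.outer(d, d)
      np.fill_diagonal(partial_corr, 1.0)

      print("partial corr(gene0, gene1):", round(partial_corr[0, 1], 3))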

  1. Three-field modeling for MARS 1-D code

    International Nuclear Information System (INIS)

    Hwang, Moonkyu; Lim, Ho-Gon; Jeong, Jae-Jun; Chung, Bub-Dong

    2006-01-01

    In this study, the three-field modeling of the two-phase mixture is developed. The finite difference equations for the three-field model are then devised. The solution scheme has been implemented into the MARS 1-D code. The three-field formulations adopted are similar to those of the MARS 3-D module, in the sense that the mass and momentum are treated separately for the entrained liquid and continuous liquid. As in the MARS 3-D module, the entrained liquid and continuous liquid are combined into one for the energy equation, assuming thermal equilibrium between the two. All the non-linear terms are linearized to arrange the finite difference equation set into a linear matrix form with respect to the unknown arguments. The problems chosen for the assessment of the newly added entrained field consist of basic conceptual tests. Among the tests are a gas-only test, a liquid-only test, a gas-only test with supplied entrained liquid, the Edwards pipe problem, and the GE level swell problem. The conceptual tests performed confirm the sound integrity of the three-field solver

  2. Cross-band noise model refinement for transform domain Wyner–Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2012-01-01

    The performance of TDWZ video coding trails that of conventional video coding solutions, mainly due to the quality of side information, inaccurate noise modeling and loss in the final coding step. The major goal of this paper is to enhance the accuracy of the noise modeling, which is one of the most important aspects influencing the coding performance of DVC. A TDWZ video decoder with a novel cross-band based adaptive noise model is proposed, and a noise residue refinement scheme is introduced to successively update the estimated noise residue for noise modeling after each bit-plane. Experimental results show that the proposed noise model and noise residue refinement scheme can improve the rate-distortion (RD) performance of TDWZ video coding significantly. The quality of the side information modeling is also evaluated by a measure of the ideal code length.

  3. Safety margin evaluation of pre-stressed concrete nuclear containment vessel model with BARC code ULCA

    International Nuclear Information System (INIS)

    Basha, S.M.; Patnaik, R.; Ramanujam, S.; Singh, R.K.; Kushwaha, H.S.; Venkat Raj, V.

    2002-01-01

    Full text: Ultimate load capacity assessment of nuclear containments has been a thrust research area for the Indian pressurised heavy water reactor (PHWR) power programme. For containment safety assessment of Indian PHWRs a finite element code ULCA was developed at BARC, Trombay. This code has been extensively benchmarked against experimental results and used for prediction of safety margins of Indian PHWRs. The present paper highlights the analysis results for the prestressed concrete containment vessel (PCCV) tested at Sandia National Labs, USA in a round robin analysis activity co-sponsored by the Nuclear Power Engineering Corporation (NUPEC), Japan and the U.S. Nuclear Regulatory Commission (NRC). Three levels of failure pressure predictions, namely the upper bound, the most probable and the lower bound (all with 90% confidence), were made as per the requirements of the round robin analysis activity. The most likely failure pressure is predicted to be in the range of 2.95 Pd to 3.15 Pd (Pd = design pressure of 0.39 MPa for the PCCV model) depending on the type of liners used in the construction of the PCCV model. The lower bound value of the ultimate pressure of 2.80 Pd and the upper bound of 3.45 Pd are also predicted from the analysis. These limiting values depend on the assumptions of the analysis for simulating the concrete-tendon interaction and the strain hardening characteristics of the steel members. The experimental test has recently been concluded at the Sandia laboratory and the peak pressure reached during the test is 3.3 Pd, which is enveloped by our upper bound prediction of 3.45 Pd and is close to the predicted most likely pressure of 3.15 Pd

  4. Lnc2Meth: a manually curated database of regulatory relationships between long non-coding RNAs and DNA methylation associated with human disease.

    Science.gov (United States)

    Zhi, Hui; Li, Xin; Wang, Peng; Gao, Yue; Gao, Baoqing; Zhou, Dianshuang; Zhang, Yan; Guo, Maoni; Yue, Ming; Shen, Weitao; Ning, Shangwei; Jin, Lianhong; Li, Xia

    2018-01-04

    Lnc2Meth (http://www.bio-bigdata.com/Lnc2Meth/), an interactive resource to identify regulatory relationships between human long non-coding RNAs (lncRNAs) and DNA methylation, is not only a manually curated collection and annotation of experimentally supported lncRNAs-DNA methylation associations but also a platform that effectively integrates tools for calculating and identifying the differentially methylated lncRNAs and protein-coding genes (PCGs) in diverse human diseases. The resource provides: (i) advanced search possibilities, e.g. retrieval of the database by searching the lncRNA symbol of interest, DNA methylation patterns, regulatory mechanisms and disease types; (ii) abundant computationally calculated DNA methylation array profiles for the lncRNAs and PCGs; (iii) the prognostic values for each hit transcript calculated from the patients' clinical data; (iv) a genome browser to display the DNA methylation landscape of the lncRNA transcripts for a specific type of disease; (v) tools to re-annotate probes to lncRNA loci and identify the differential methylation patterns for lncRNAs and PCGs with user-supplied external datasets; (vi) an R package (LncDM) to complete the differentially methylated lncRNA identification and visualization with local computers. Lnc2Meth provides a timely and valuable resource that can be applied to significantly expand our understanding of the regulatory relationships between lncRNAs and DNA methylation in various human diseases. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. On the development of LWR fuel analysis code (1). Analysis of the FEMAXI code and proposal of a new model

    International Nuclear Information System (INIS)

    Lemehov, Sergei; Suzuki, Motoe

    2000-01-01

    This report summarizes the review on the modeling features of FEMAXI code and proposal of a new theoretical equation model of clad creep on the basis of irradiation-induced microstructure change. It was pointed out that plutonium build-up in fuel matrix and non-uniform radial power profile at high burn-up affect significantly fuel behavior through the interconnected effects with such phenomena as clad irradiation-induced creep, fission gas release, fuel thermal conductivity degradation, rim porous band formation and associated fuel swelling. Therefore, these combined effects should be properly incorporated into the models of the FEMAXI code so that the code can carry out numerical analysis at the level of accuracy and elaboration that modern experimental data obtained in test reactors have. Also, the proposed new mechanistic clad creep model has a general formalism which allows the model to be flexibly applied for clad behavior analysis under normal operation conditions and power transients as well for Zr-based clad materials by the use of established out-of-pile mechanical properties. The model has been tested against experimental data, while further verification is needed with specific emphasis on power ramps and transients. (author)

  6. Sodium/water pool-deposit bed model of the CONACS code

    International Nuclear Information System (INIS)

    Peak, R.D.

    1983-01-01

    A new Pool-Bed model of the CONACS (Containment Analysis Code System) code represents a major advance over the pool models of other containment analysis codes (the NABE code of France, the CEDAN code of Japan and the CACECO and CONTAIN codes of the United States). This new model advances pool-bed modeling because of the number of significant materials and processes which are included with appropriate rigor. This CONACS pool-bed model maintains material balances for eight chemical species (C, H2O, Na, NaH, Na2O, Na2O2, Na2CO3 and NaOH) that collect in the stationary liquid pool on the floor and in the deposit bed on the elevated shelf of the standard CONACS analysis cell

  7. A Perceptual Model for Sinusoidal Audio Coding Based on Spectral Integration

    NARCIS (Netherlands)

    Van de Par, S.; Kohlrausch, A.; Heusdens, R.; Jensen, J.; Holdt Jensen, S.

    2005-01-01

    Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of

  8. A perceptual model for sinusoidal audio coding based on spectral integration

    NARCIS (Netherlands)

    Van de Par, S.; Kohlrauch, A.; Heusdens, R.; Jensen, J.; Jensen, S.H.

    2005-01-01

    Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of

  9. COCOA Code for Creating Mock Observations of Star Cluster Models

    OpenAIRE

    Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Dalessandro, Emanuele

    2017-01-01

    We introduce and present results from the COCOA (Cluster simulatiOn Comparison with ObservAtions) code that has been developed to create idealized mock photometric observations using results from numerical simulations of star cluster evolution. COCOA is able to present the output of realistic numerical simulations of star clusters carried out using Monte Carlo or N-body codes in a way that is useful for direct comparison with photometric observations. In this paper, we describe the C...

  10. Assessment of the Effects on PCT Evaluation of Enhanced Fuel Model Facilitated by Coupling the MARS Code with the FRAPTRAN Code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyong Chol; Lee, Young Jin; Han, Sam Hee [NSE Technology Inc., Daejeon (Korea, Republic of)

    2016-10-15

    The principal objectives of the two safety criteria, peak cladding temperature (PCT) and total oxidation limits, are to ensure that the fuel rod claddings remain sufficiently ductile so that they do not crack and fragment during a LOCA. Another important purpose of the PCT limit is to ensure that the fuel cladding does not enter the regime of runaway oxidation and uncontrollable heat-up. However, even when the PCT limit is satisfied, it is known that cladding failures may still occur in a certain percentage of the fuel rods during a LOCA. This is largely because a 100% fuel failure is assumed for the radiological consequence analysis in the US regulatory practices. In this study, we analyze the effects of cladding failure and other fuel model features on PCT during a LOCA using the MARS-FRAPTRAN coupled code. MARS code has been coupled with FRAPTRAN code to extend fuel modeling capability. The coupling allows feedback of FRAPTRAN results in real time. Because of the significant impact of fuel models on key safety parameters such as PCT, detailed and accurate fuel models should be employed when evaluating PCT in LOCA analysis. It is noteworthy that the ECCS evaluation models laid out in the Appendix K to 10CFR50 require a provision for predicting cladding swelling and rupture and require to assume that the inside of the cladding react with steam after the rupture. The metal-water reaction energy can have significantly large effect on the reflood PCT, especially when fuel failure occurs. Effects of applying an advanced fuel model on the PCT evaluation can be clearly seen when comparing the MARS and the FRAPTRAN results in both the one-way calculation and the feedback calculation. As long as MARS and FRAPTRAN are used respectively in the ranges where they have been validated, the coupled calculation results are expected to be valid and to reveal various aspects of phenomena which have not been discovered in previous uncoupled calculations by MARS or FRAPTRAN.

  11. MIG version 0.0 model interface guidelines: Rules to accelerate installation of numerical models into any compliant parent code

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, R.M.; Wong, M.K.

    1996-08-01

    A set of model interface guidelines, called MIG, is presented as a means by which any compliant numerical material model can be rapidly installed into any parent code without having to modify the model subroutines. Here, "model" usually means a material model such as one that computes stress as a function of strain, though the term may be extended to any numerical operation. "Parent code" means a hydrocode, finite element code, etc. which uses the model and enforces, say, the fundamental laws of motion and thermodynamics. MIG requires the model developer (who creates the model package) to specify model needs in a standardized but flexible way. MIG includes a dictionary of technical terms that allows developers and parent code architects to share a common vocabulary when specifying field variables. For portability, database management is the responsibility of the parent code. Input/output occurs via structured calling arguments. As much model information as possible (such as the lists of required inputs, as well as lists of precharacterized material data and special needs) is supplied by the model developer in an ASCII text file. Every MIG-compliant model also has three required subroutines to check data, to request extra field variables, and to perform model physics. To date, the MIG scheme has proven flexible in beta installations of a simple yield model, plus a more complicated viscodamage yield model, three electromechanical models, and a complicated anisotropic microcrack constitutive model. The MIG yield model has been successfully installed using identical subroutines in three vectorized parent codes and one parallel C++ code, all predicting comparable results. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, thereby reducing the cost of installing and sharing models in diverse new codes.
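
    The three required subroutines of a MIG-compliant model (check data, request extra field variables, perform model physics) can be pictured with the language-neutral sketch below. It is written in Python for brevity, although MIG itself targets Fortran-style hydrocodes; the class, routine names and the trivial elastic update are illustrative, not the MIG specification.

      # Conceptual sketch of the three routines every MIG-compliant model supplies:
      # check its input data, request extra field variables, and do the physics.
      class SimpleElasticModel:
          def check_data(self, props):
              """Validate user-supplied material properties."""
              assert props["shear_modulus"] > 0.0, "shear modulus must be positive"
              return props

          def request_extra_variables(self):
              """Tell the parent code which extra field variables to allocate."""
              return ["equivalent_plastic_strain"]

          def update_stress(self, stress, strain_increment, props, extra):
              """Model physics: here, a trivial 1-D elastic stress update."""
              return stress + 2.0 * props["shear_modulus"] * strain_increment

      # The parent code owns the database and drives the calls:
      model = SimpleElasticModel()
      props = model.check_data({"shear_modulus": 26.0e9})
      extra = {name: 0.0 for name in model.request_extra_variables()}
      print(model.update_stress(0.0, 1.0e-4, props, extra))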

  12. HOXA1 and TALE proteins display cross-regulatory interactions and form a combinatorial binding code on HOXA1 targets.

    Science.gov (United States)

    De Kumar, Bony; Parker, Hugo J; Paulson, Ariel; Parrish, Mark E; Pushel, Irina; Singh, Narendra Pratap; Zhang, Ying; Slaughter, Brian D; Unruh, Jay R; Florens, Laurence; Zeitlinger, Julia; Krumlauf, Robb

    2017-09-01

    Hoxa1 has diverse functional roles in differentiation and development. We identify and characterize properties of regions bound by HOXA1 on a genome-wide basis in differentiating mouse ES cells. HOXA1-bound regions are enriched for clusters of consensus binding motifs for HOX, PBX, and MEIS, and many display co-occupancy of PBX and MEIS. PBX and MEIS are members of the TALE family and genome-wide analysis of multiple TALE members (PBX, MEIS, TGIF, PREP1, and PREP2) shows that nearly all HOXA1 targets display occupancy of one or more TALE members. The combinatorial binding patterns of TALE proteins define distinct classes of HOXA1 targets, which may create functional diversity. Transgenic reporter assays in zebrafish confirm enhancer activities for many HOXA1-bound regions and the importance of HOX-PBX and TGIF motifs for their regulation. Proteomic analyses show that HOXA1 physically interacts on chromatin with PBX, MEIS, and PREP family members, but not with TGIF, suggesting that TGIF may have an independent input into HOXA1-bound regions. Therefore, TALE proteins appear to represent a wide repertoire of HOX cofactors, which may coregulate enhancers through distinct mechanisms. We also discover extensive auto- and cross-regulatory interactions among the Hoxa1 and TALE genes, indicating that the specificity of HOXA1 during development may be regulated though a complex cross-regulatory network of HOXA1 and TALE proteins. This study provides new insight into a regulatory network involving combinatorial interactions between HOXA1 and TALE proteins. © 2017 De Kumar et al.; Published by Cold Spring Harbor Laboratory Press.

  13. Mechanistic modelling of gaseous fission product behaviour in UO2 fuel by Rtop code

    International Nuclear Information System (INIS)

    Kanukova, V.D.; Khoruzhii, O.V.; Kourtchatov, S.Y.; Likhanskii, V.V.; Matveew, L.V.

    2002-01-01

    The current status of a mechanistic modelling by the RTOP code of the fission product behaviour in polycrystalline UO2 fuel is described. An outline of the code and implemented physical models is presented. The general approach to code validation is discussed. It is exemplified by the results of validation of the models of fuel oxidation and grain growth. The different models of intragranular and intergranular gas bubble behaviour have been tested and the sensitivity of the code in the framework of these models has been analysed. An analysis of available models of the resolution of grain face bubbles is also presented. The possibilities of the RTOP code are presented through the example of modelling behaviour of WWER fuel over the course of a comparative WWER-PWR experiment performed at Halden and by comparison with Yanagisawa experiments. (author)

  14. Regulatory odour model development: Survey of modelling tools and datasets with focus on building effects

    DEFF Research Database (Denmark)

    Olesen, H. R.; Løfstrøm, P.; Berkowicz, R.

    A project within the framework of a larger research programme, Action Plan for the Aquatic Environment III (VMP III), aims towards improving an atmospheric dispersion model (OML). The OML model is used for regulatory applications in Denmark, and it is the candidate model to be used also in future ... dispersion models for estimating local concentration levels in general. However, the report focuses on some particular issues, which are relevant for subsequent work on odour due to animal production. An issue of primary concern is the effect that buildings (stables) have on flow and dispersion. The handling of building effects is a complicated problem, and a major part of the report is devoted to the treatment of building effects in dispersion models

  15. Application of the thermal-hydraulic codes in VVER-440 steam generators modelling

    Energy Technology Data Exchange (ETDEWEB)

    Matejovic, P.; Vranca, L.; Vaclav, E. [Nuclear Power Plant Research Inst. VUJE (Slovakia)

    1995-12-31

    The performance of the CATHARE2 V1.3U and RELAP5/MOD3.0 codes applied to VVER-440 SG modelling during normal conditions and during a transient with secondary water lowering is described. A similar recirculation model was chosen for both codes. In the CATHARE calculation, no special measures were taken with the aim of artificially optimizing flow rate distribution coefficients for the junction between the SG riser and steam dome. Contrary to the RELAP code, the CATHARE code is able to predict reasonably the secondary swell level in nominal conditions. Both codes are able to model properly the natural phase separation on the SG water level. 6 refs.

  16. Application of the thermal-hydraulic codes in VVER-440 steam generators modelling

    Energy Technology Data Exchange (ETDEWEB)

    Matejovic, P; Vranca, L; Vaclav, E [Nuclear Power Plant Research Inst. VUJE (Slovakia)

    1996-12-31

    The performance of the CATHARE2 V1.3U and RELAP5/MOD3.0 codes applied to VVER-440 SG modelling during normal conditions and during a transient with secondary water lowering is described. A similar recirculation model was chosen for both codes. In the CATHARE calculation, no special measures were taken with the aim of artificially optimizing flow rate distribution coefficients for the junction between the SG riser and steam dome. Contrary to the RELAP code, the CATHARE code is able to predict reasonably the secondary swell level in nominal conditions. Both codes are able to model properly the natural phase separation on the SG water level. 6 refs.

  17. Progress in nuclear well logging modeling using deterministic transport codes

    International Nuclear Information System (INIS)

    Kodeli, I.; Aldama, D.L.; Maucec, M.; Trkov, A.

    2002-01-01

    Further studies in continuation of the work presented in 2001 in Portoroz were performed in order to study and improve the performance, precision and domain of application of the deterministic transport codes with respect to oil well logging analysis. These codes are in particular expected to complement the Monte Carlo solutions, since they can provide a detailed particle flux distribution in the whole geometry in a very reasonable CPU time. Real-time calculation can be envisaged. The performance of deterministic transport methods was compared to that of the Monte Carlo method. The IRTMBA generic benchmark was analysed using the codes MCNP-4C and DORT/TORT. Centric as well as eccentric casings were considered using a 14 MeV point neutron source and NaI scintillation detectors. Neutron and gamma spectra were compared at two detector positions. (author)

  18. Implementation of JAERI's reflood model into TRAC-PF1/MOD1 code

    International Nuclear Information System (INIS)

    Akimoto, Hajime; Ohnuki, Akira; Murao, Yoshio

    1993-02-01

    Selected physical models of REFLA code, that is a reflood analysis code developed at JAERI, were implemented into the TRAC-PF1/MOD1 code in order to improve the predictive capability of the TRAC-PF1/MOD1 code for the core thermal hydraulic behaviors during the reflood phase in a PWR LOCA. Through comparisons of physical models between both codes, (1) Murao-Iguchi void fraction correlation, (2) the drag coefficient correlation acting to drops, (3) the correlation for wall heat transfer coefficient in the film boiling regime, (4) the quench velocity correlation and (5) heat transfer correlations for the dispersed flow regime were selected from the REFLA code to be implemented into the TRAC-PF1/MOD1 code. A method for the transformation of the void fraction correlation to the equivalent interfacial friction model was developed and the effect of the transformation method on the stability of the solution was discussed. Through assessment calculation using data from CCTF (Cylindrical Core Test Facility) flat power test, it was confirmed that the predictive capability of the TRAC code for the core thermal hydraulic behaviors during the reflood can be improved by the implementation of selected physical models of the REFLA code. Several user guidelines for the modified TRAC code were proposed based on the sensitivity studies on fluid cell number in the hydraulic calculation and on node number and effect of axial heat conduction in the heat conduction calculation of fuel rod. (author)

  19. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments.

    Science.gov (United States)

    Santos, José; Monteagudo, Angel

    2011-02-21

    As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to guess the difficulty in finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel proposal of the use of evolutionary computing provides a new perspective in the open debate between the use of the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment and the model most used in the two approaches which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty to overcome the efficiency of the canonical genetic code. Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the fact that the best possible codes show the patterns of the
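
    The fitness used in such studies is essentially a mutational-robustness score: the mean squared change of some amino-acid property over all single-nucleotide substitutions permitted by the code model. The sketch below evaluates that score for a deliberately tiny, made-up code table (two-letter alphabet, four codons, invented property values) purely to keep the example short; the real studies use the 64-codon table and measured properties such as polar requirement.

      # Mutational-robustness fitness of a toy code table: mean squared change in
      # an amino-acid property over all single-position substitutions.
      ALPHABET = "AG"
      code = {"AA": "aa1", "AG": "aa1", "GA": "aa2", "GG": "aa3"}   # toy code table
      prop = {"aa1": 7.0, "aa2": 5.0, "aa3": 10.0}                  # toy property values

      def ms_change(code_table):
          diffs = []
          for codon, aa in code_table.items():
              for pos in range(len(codon)):
                  for base in ALPHABET:
                      if base == codon[pos]:
                          continue
                      mutant = codon[:pos] + base + codon[pos + 1:]
                      diffs.append((prop[code_table[mutant]] - prop[aa]) ** 2)
          return sum(diffs) / len(diffs)

      print("mean squared property change:", ms_change(code))
      # A genetic algorithm would search over alternative assignments (permutations
      # of amino acids over codon blocks) to minimise this value.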

  20. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    Directory of Open Access Journals (Sweden)

    Monteagudo Ángel

    2011-02-01

    Background: As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results: Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to guess the difficulty in finding such alternative codes, allowing to clearly situate the canonical code in the fitness landscape. This novel proposal of the use of evolutionary computing provides a new perspective in the open debate between the use of the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment and the model most used in the two approaches which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty to overcome the efficiency of the canonical genetic code. Conclusions: Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the

  1. TASS code topical report. V.1 TASS code technical manual

    International Nuclear Information System (INIS)

    Sim, Suk K.; Chang, W. P.; Kim, K. D.; Kim, H. C.; Yoon, H. Y.

    1997-02-01

    TASS 1.0 code has been developed at KAERI for the initial and reload non-LOCA safety analysis for the operating PWRs as well as the PWRs under construction in Korea. TASS code will replace various vendors' non-LOCA safety analysis codes currently used for the Westinghouse and ABB-CE type PWRs in Korea. This can be achieved through TASS code input modifications specific to each reactor type. The TASS code can be run interactively through keyboard operation. A semi-modular configuration used in developing the TASS code enables the user to easily implement new models. TASS code has been programmed using FORTRAN77, which makes it easy to install and port for different computer environments. The TASS code can be utilized for the steady state simulation as well as non-LOCA transient simulations such as power excursions, reactor coolant pump trips, load rejections, loss of feedwater, steam line breaks, steam generator tube ruptures, rod withdrawal and drop, and anticipated transients without scram (ATWS). The malfunctions of the control systems, components, operator actions and the transients caused by the malfunctions can be easily simulated using the TASS code. This technical report describes the TASS 1.0 code models, including the reactor thermal hydraulic, reactor core and control models. This TASS code technical manual has been prepared as a part of the TASS code manual, which includes the TASS code user's manual and TASS code validation report, and will be submitted to the regulatory body as a TASS code topical report for a licensing non-LOCA safety analysis for the Westinghouse and ABB-CE type PWRs operating and under construction in Korea. (author). 42 refs., 29 tabs., 32 figs

  2. Hybrid microscopic depletion model in nodal code DYN3D

    International Nuclear Information System (INIS)

    Bilodid, Y.; Kotlyar, D.; Shwageraus, E.; Fridman, E.; Kliem, S.

    2016-01-01

    Highlights: • A new hybrid method of accounting for spectral history effects is proposed. • Local concentrations of over 1000 nuclides are calculated using micro depletion. • The new method is implemented in nodal code DYN3D and verified. - Abstract: The paper presents a general hybrid method that combines the micro-depletion technique with correction of micro- and macro-diffusion parameters to account for the spectral history effects. The fuel in a core is subjected to time- and space-dependent operational conditions (e.g. coolant density), which cannot be predicted in advance. However, lattice codes assume some average conditions to generate cross sections (XS) for nodal diffusion codes such as DYN3D. Deviation of local operational history from average conditions leads to accumulation of errors in XS, which is referred to as spectral history effects. Various methods to account for the spectral history effects, such as spectral index, burnup-averaged operational parameters and micro-depletion, were implemented in some nodal codes. Recently, an alternative method, which characterizes fuel depletion state by burnup and 239Pu concentration (denoted as Pu-correction), was proposed, implemented in nodal code DYN3D and verified for a wide range of history effects. The method is computationally efficient; however, it has applicability limitations. The current study seeks to improve the accuracy and applicability range of the Pu-correction method. The proposed hybrid method combines the micro-depletion method with an XS characterization technique similar to the Pu-correction method. The method was implemented in DYN3D and verified on multiple test cases. The results obtained with DYN3D were compared to those obtained with the Monte Carlo code Serpent, which was also used to generate the XS. The observed differences are within the statistical uncertainties.
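
    The Pu-correction idea referred to above can be pictured as a first-order correction of the nodal cross-sections around the lattice-code reference history, with burnup and the local 239Pu concentration as state variables. The form below is a generic illustration, not the exact DYN3D formulation:

      \Sigma\!\left(B,\, N_{^{239}\mathrm{Pu}}\right) \;\approx\;
      \Sigma^{\mathrm{ref}}(B) \;+\;
      \left.\frac{\partial \Sigma}{\partial N_{^{239}\mathrm{Pu}}}\right|_{\mathrm{ref}}
      \left( N_{^{239}\mathrm{Pu}} - N_{^{239}\mathrm{Pu}}^{\mathrm{ref}}(B) \right)

    where B is burnup, N the local 239Pu number density, and the reference quantities come from the lattice-code depletion under nominal conditions.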

  3. Experimental data bases useful for quantification of model uncertainties in best estimate codes

    International Nuclear Information System (INIS)

    Wilson, G.E.; Katsma, K.R.; Jacobson, J.L.; Boodry, K.S.

    1988-01-01

    A data base is necessary for assessment of thermal hydraulic codes within the context of the new NRC ECCS Rule. Separate effect tests examine particular phenomena that may be used to develop and/or verify models and constitutive relationships in the code. Integral tests are used to demonstrate the capability of codes to model global characteristics and sequence of events for real or hypothetical transients. The nuclear industry has developed a large experimental data base of fundamental nuclear, thermal-hydraulic phenomena for code validation. Given a particular scenario, and recognizing the scenario's important phenomena, selected information from this data base may be used to demonstrate applicability of a particular code to simulate the scenario and to determine code model uncertainties. LBLOCA experimental data bases useful to this objective are identified in this paper. 2 tabs

  4. INTRA/Mod3.2. Manual and Code Description. Volume I - Physical Modelling

    International Nuclear Information System (INIS)

    Andersson, Jenny; Edlund, O.; Hermann, J.; Johansson, Lise-Lotte

    1999-01-01

    The INTRA Manual consists of two volumes. Volume I of the manual is a thorough description of the code INTRA, the Physical modelling of INTRA and the ruling numerical methods and volume II, the User's Manual is an input description. This document, the Physical modelling of INTRA, contains code characteristics, integration methods and applications

  5. INTRA/Mod3.2. Manual and Code Description. Volume I - Physical Modelling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Jenny; Edlund, O; Hermann, J; Johansson, Lise-Lotte

    1999-01-01

    The INTRA Manual consists of two volumes. Volume I is a thorough description of the code INTRA, the physical modelling of INTRA and the governing numerical methods; Volume II, the User's Manual, is an input description. This document, the Physical Modelling of INTRA, contains code characteristics, integration methods and applications

  6. Modelling of the RA-1 reactor using a Monte Carlo code

    International Nuclear Information System (INIS)

    Quinteiro, Guillermo F.; Calabrese, Carlos R.

    2000-01-01

    A model of the Argentine RA-1 reactor was developed for the first time using the MCNP Monte Carlo code. The model was validated against experimental neutron and gamma measurements at different energy ranges and locations. In addition, the resulting fluxes were compared with data obtained using a 3D diffusion code. (author)

  7. Modelling of fluid-solid interaction using two stand-alone codes

    CSIR Research Space (South Africa)

    Grobler, Jan H

    2010-01-01

    Full Text Available A method is proposed for the modelling of fluid-solid interaction in applications where fluid forces dominate. Data are transferred between two stand-alone codes: a dedicated computational fluid dynamics (CFD) code capable of free surface modelling...

  8. A New Algorithm for Identifying Cis-Regulatory Modules Based on Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Haitao Guo

    2017-01-01

    Full Text Available The discovery of cis-regulatory modules (CRMs) is the key to understanding mechanisms of transcription regulation. Since CRMs have specific regulatory structures that are the basis for the regulation of gene expression, how to model the regulatory structure of CRMs has a considerable impact on the performance of CRM identification. The paper proposes a CRM discovery algorithm called ComSPS. ComSPS builds a regulatory structure model of CRMs based on HMM by exploring the rules of CRM transcriptional grammar that governs the internal motif site arrangement of CRMs. We test ComSPS on three benchmark datasets and compare it with five existing methods. Experimental results show that ComSPS performs better than these methods.
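
    ComSPS itself is not reproduced here, but the core operation of scoring a candidate arrangement of motif sites against an HMM of CRM structure can be illustrated with a generic forward algorithm; the two states and the probability matrices below are illustrative placeholders, not parameters from the paper:

```python
import numpy as np

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Generic HMM forward algorithm: log P(observations | model).

    obs     : sequence of observation indices (e.g. motif-site labels)
    start_p : initial state probabilities, shape (S,)
    trans_p : state transition matrix, shape (S, S)
    emit_p  : emission probabilities, shape (S, O)
    """
    alpha = start_p * emit_p[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, o]   # forward recursion
    return np.log(alpha.sum())

# toy example: 2 hidden states ("inside CRM", "background"), 3 motif-site labels
start = np.array([0.5, 0.5])
trans = np.array([[0.9, 0.1], [0.2, 0.8]])
emit = np.array([[0.6, 0.3, 0.1], [0.1, 0.2, 0.7]])
print(forward_log_likelihood([0, 1, 0, 2], start, trans, emit))
```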

  9. A New Algorithm for Identifying Cis-Regulatory Modules Based on Hidden Markov Model

    Science.gov (United States)

    2017-01-01

    The discovery of cis-regulatory modules (CRMs) is the key to understanding mechanisms of transcription regulation. Since CRMs have specific regulatory structures that are the basis for the regulation of gene expression, how to model the regulatory structure of CRMs has a considerable impact on the performance of CRM identification. The paper proposes a CRM discovery algorithm called ComSPS. ComSPS builds a regulatory structure model of CRMs based on HMM by exploring the rules of CRM transcriptional grammar that governs the internal motif site arrangement of CRMs. We test ComSPS on three benchmark datasets and compare it with five existing methods. Experimental results show that ComSPS performs better than them. PMID:28497059

  10. Improving system modeling accuracy with Monte Carlo codes

    International Nuclear Information System (INIS)

    Johnson, A.S.

    1996-01-01

    The use of computer codes based on Monte Carlo methods to perform criticality calculations has become commonplace. Although results frequently published in the literature report calculated k_eff values to four decimal places, people who use the codes in their everyday work say that they only believe the first two decimal places of any result. The lack of confidence in the computed k_eff values may be due to the tendency of the reported standard deviation to underestimate errors associated with the Monte Carlo process. The standard deviation as reported by the codes is the standard deviation of the mean of the k_eff values for individual generations in the computer simulation, not the standard deviation of the computed k_eff value compared with the physical system. A more subtle problem with the standard deviation of the mean as reported by the codes is that all the k_eff values from the separate generations are not statistically independent, since the k_eff of a given generation is a function of the k_eff of the previous generation, which is ultimately based on the starting source. To produce a standard deviation that is more representative of the physical system, statistically independent values of k_eff are needed
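
    A minimal sketch of the point being made (generic NumPy, not the estimator of any particular criticality code): the naive standard deviation of the mean treats every generation as independent, whereas batching adjacent generations is one common way to dampen generation-to-generation correlation before estimating the uncertainty.

```python
import numpy as np

def keff_statistics(keff_per_generation, batch_size=10):
    """Contrast the naive standard deviation of the mean with a batched estimate."""
    k = np.asarray(keff_per_generation, dtype=float)

    # naive estimate: every generation treated as statistically independent
    naive_sd = k.std(ddof=1) / np.sqrt(k.size)

    # batched estimate: average adjacent generations first to reduce correlation
    n_batches = k.size // batch_size
    batch_means = k[:n_batches * batch_size].reshape(n_batches, batch_size).mean(axis=1)
    batched_sd = batch_means.std(ddof=1) / np.sqrt(n_batches)

    return k.mean(), naive_sd, batched_sd
```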

  11. The APS SASE FEL: modeling and code comparison

    International Nuclear Information System (INIS)

    Biedron, S. G.

    1999-01-01

    A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL

  12. Modeling of the YALINA booster facility by the Monte Carlo code MONK

    International Nuclear Information System (INIS)

    Talamo, A.; Gohar, Y.; Kondev, F.; Kiyavitskaya, H.; Serafimovich, I.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2007-01-01

    The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity, without any geometrical homogenization, using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, an extension of the MCNP code with burnup capability, because of its additional features for analyzing source-driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1.

  13. Challenges for modeling global gene regulatory networks during development: insights from Drosophila.

    Science.gov (United States)

    Wilczynski, Bartek; Furlong, Eileen E M

    2010-04-15

    Development is regulated by dynamic patterns of gene expression, which are orchestrated through the action of complex gene regulatory networks (GRNs). Substantial progress has been made in modeling transcriptional regulation in recent years, ranging from qualitative "coarse-grain" models operating at the gene level to very "fine-grain" quantitative models operating at the biophysical transcription factor-DNA level. Recent advances in genome-wide studies have revealed an enormous increase in the size and complexity of GRNs. Even relatively simple developmental processes can involve hundreds of regulatory molecules, with extensive interconnectivity and cooperative regulation. This leads to an explosion in the number of regulatory functions, effectively impeding Boolean-based qualitative modeling approaches. At the same time, the lack of information on the biophysical properties of the majority of transcription factors within a global network restricts quantitative approaches. In this review, we explore the current challenges in moving from modeling medium-scale, well-characterized networks to more poorly characterized global networks. We suggest integrating coarse- and fine-grain approaches to model gene regulatory networks in cis. We focus on two very well-studied examples from Drosophila, which likely represent typical developmental regulatory modules across metazoans. Copyright (c) 2009 Elsevier Inc. All rights reserved.

  14. Modular Modeling System (MMS) code: a versatile power plant analysis package

    International Nuclear Information System (INIS)

    Divakaruni, S.M.; Wong, F.K.L.

    1987-01-01

    The basic version of the Modular Modeling System (MMS-01), a power plant systems analysis computer code jointly developed by the Nuclear Power and the Coal Combustion Systems Divisions of the Electric Power Research Institute (EPRI), was released to the utility power industry in April 1983 at a code release workshop held in Charlotte, North Carolina. Since then, additional modules have been developed to analyze Pressurized Water Reactors (PWRs) and Boiling Water Reactors (BWRs) when the safety systems are activated. Also, a selected number of modules in the MMS-01 library have been modified to allow code users more flexibility in constructing plant-specific systems for analysis. These new PWR and BWR modules constitute the new MMS library, which includes the modifications to the MMS-01 library. A year-and-a-half-long extensive code qualification program of this new version of the MMS code at EPRI and the contractor sites, backed by further code testing in a user group environment, is culminating in the MMS-02 code release announcement seminar. At this seminar, the results of user group efforts and the code qualification program will be presented in a series of technical sessions. A total of forty-nine papers will be presented to describe the new code features and the code qualification efforts. For the sake of completeness, an overview of the code is presented, including the history of the code development, a description of the MMS code and its structure, utility engineers' involvement in the MMS-01 and MMS-02 validations, the enhancements made in the last 18 months to the code, and finally the perspective on the code's future in the fossil and nuclear industry

  15. The modeling of core melting and in-vessel corium relocation in the APRIL code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S.W.; Podowski, M.Z.; Lahey, R.T. [Rensselaer Polytechnic Institute, Troy, NY (United States)] [and others]

    1995-09-01

    This paper is concerned with the modeling of severe accident phenomena in boiling water reactors (BWR). New models of core melting and in-vessel corium debris relocation are presented, developed for implementation in the APRIL computer code. The results of model testing and validations are given, including comparisons against available experimental data and parametric/sensitivity studies. Also, the application of these models, as parts of the APRIL code, is presented to simulate accident progression in a typical BWR reactor.

  16. Development of CAP code for nuclear power plant containment: Lumped model

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Soon Joon, E-mail: sjhong90@fnctech.com [FNC Tech. Co. Ltd., Heungdeok 1 ro 13, Giheung-gu, Yongin-si, Gyeonggi-do 446-908 (Korea, Republic of); Choo, Yeon Joon; Hwang, Su Hyun; Lee, Byung Chul [FNC Tech. Co. Ltd., Heungdeok 1 ro 13, Giheung-gu, Yongin-si, Gyeonggi-do 446-908 (Korea, Republic of); Ha, Sang Jun [Central Research Institute, Korea Hydro & Nuclear Power Company, Ltd., 70, 1312-gil, Yuseong-daero, Yuseong-gu, Daejeon 305-343 (Korea, Republic of)

    2015-09-15

    Highlights: • State-of-art containment analysis code, CAP, has been developed. • CAP uses 3-field equations, water level oriented upwind scheme, local head model. • CAP has a function of linked calculation with reactor coolant system code. • CAP code assessments showed appropriate prediction capabilities. - Abstract: The CAP (nuclear Containment Analysis Package) code has been developed by the Korean nuclear community for the analysis of nuclear containment thermal hydraulic behaviors, including pressure and temperature trends and hydrogen concentration. The lumped model of the CAP code uses 2-phase, 3-field equations for fluid behaviors, and has appropriate constitutive equations, a 1-dimensional heat conductor model, component models, trip and control models, and special process models. CAP can run in a standalone mode or in a linked mode with a reactor coolant system analysis code. The linked mode enables a more realistic calculation of the containment response and is expected to be applicable to more complicated advanced plant design calculations. CAP code assessments were carried out through gradual approaches: conceptual problems, fundamental phenomena, component and principal phenomena, experimental validation, and finally comparison with other code calculations on the basis of important phenomena identification. The assessments showed appropriate prediction capabilities of CAP.

  17. Development of CAP code for nuclear power plant containment: Lumped model

    International Nuclear Information System (INIS)

    Hong, Soon Joon; Choo, Yeon Joon; Hwang, Su Hyun; Lee, Byung Chul; Ha, Sang Jun

    2015-01-01

    Highlights: • State-of-art containment analysis code, CAP, has been developed. • CAP uses 3-field equations, water level oriented upwind scheme, local head model. • CAP has a function of linked calculation with reactor coolant system code. • CAP code assessments showed appropriate prediction capabilities. - Abstract: The CAP (nuclear Containment Analysis Package) code has been developed by the Korean nuclear community for the analysis of nuclear containment thermal hydraulic behaviors, including pressure and temperature trends and hydrogen concentration. The lumped model of the CAP code uses 2-phase, 3-field equations for fluid behaviors, and has appropriate constitutive equations, a 1-dimensional heat conductor model, component models, trip and control models, and special process models. CAP can run in a standalone mode or in a linked mode with a reactor coolant system analysis code. The linked mode enables a more realistic calculation of the containment response and is expected to be applicable to more complicated advanced plant design calculations. CAP code assessments were carried out through gradual approaches: conceptual problems, fundamental phenomena, component and principal phenomena, experimental validation, and finally comparison with other code calculations on the basis of important phenomena identification. The assessments showed appropriate prediction capabilities of CAP

  18. Modelling of Cold Water Hammer with WAHA code

    International Nuclear Information System (INIS)

    Gale, J.; Tiselj, I.

    2003-01-01

    The Cold Water Hammer experiment described in the present paper is a simple facility where overpressure accelerates a column of liquid water into the steam bubble at the closed vertical end of the pipe. Severe water hammer with high pressure peak occurs when the vapor bubble condenses and the liquid column hits the closed end of the pipe. Experimental data of Forschungszentrum Rossendorf are being used to test the newly developed computer code WAHA and the computer code RELAP5. Results show that a small amount of noncondensable air in the steam bubble significantly affects the magnitude of the calculated pressure peak, while the wall friction and condensation rate only slightly affect the simulated phenomena. (author)

  19. A predictive transport modeling code for ICRF-heated tokamaks

    International Nuclear Information System (INIS)

    Phillips, C.K.; Hwang, D.Q.

    1992-02-01

    In this report, a detailed description of the physics included in the WHIST/RAZE package, as well as a few illustrative examples of the capabilities of the package, are presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5

  20. Modelling of the Rod Control System in the coupled code RELAP5-QUABOX/CUBBOX

    International Nuclear Information System (INIS)

    Bencik, V.; Feretic, D.; Grgic, D.

    1999-01-01

    There is general agreement that for many light water reactor transient calculations, it is necessary to use a multidimensional neutron kinetics model coupled to sophisticated thermal-hydraulic models in order to obtain satisfactory results. These calculations are needed for a variety of applications: licensing safety analyses, probabilistic risk assessment, operational support, and training. At FER, Zagreb, a coupling of the 3D neutronics code QUABOX/CUBBOX and the system code RELAP5 was performed. In the paper, the Rod Control System model in the RELAP5 part of the coupled code is presented. A first testing of the model was performed by calculating a reactor trip from full power for NPP Krsko. Results of the 3D neutronics calculation obtained with the coupled QUABOX/CUBBOX code were compared with a point kinetics calculation performed with the stand-alone RELAP5 code. (author)

  1. Improved virtual channel noise model for transform domain Wyner-Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2009-01-01

    Distributed video coding (DVC) has been proposed as a new video coding paradigm to deal with lossy source coding using side information to exploit the statistics at the decoder to reduce computational demands at the encoder. A virtual channel noise model is utilized at the decoder to estimate the noise distribution between the side information frame and the original frame. This is one of the most important aspects influencing the coding performance of DVC. Noise models with different granularity have been proposed. In this paper, an improved noise model for transform domain Wyner-Ziv video coding is proposed, which utilizes cross-band correlation to estimate the Laplacian parameters more accurately. Experimental results show that the proposed noise model can improve the rate-distortion (RD) performance.
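
    The cross-band refinement is not reproduced here, but the basic per-band estimation step it improves upon can be sketched as follows (a hedged illustration; the array layout and the parameterization of the Laplacian are assumptions, not the paper's code):

```python
import numpy as np

def laplacian_scale_per_band(residual_blocks):
    """Per-band maximum-likelihood estimate of the Laplacian noise parameters.

    residual_blocks: array of shape (n_blocks, n_bands) holding the difference
    between side-information and original transform coefficients. The cross-band
    correlation refinement described in the paper is not reproduced here.
    """
    residual_blocks = np.asarray(residual_blocks, dtype=float)
    b = np.abs(residual_blocks).mean(axis=0)    # ML estimate of the Laplacian scale per band
    alpha = np.where(b > 0, 1.0 / b, np.inf)    # rate parameter of f(x) = (alpha/2) exp(-alpha |x|)
    return b, alpha
```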

  2. Japanese encephalitis virus non-coding RNA inhibits activation of interferon by blocking nuclear translocation of interferon regulatory factor 3.

    Science.gov (United States)

    Chang, Ruey-Yi; Hsu, Ta-Wen; Chen, Yen-Lin; Liu, Shu-Fan; Tsai, Yi-Jer; Lin, Yun-Tong; Chen, Yi-Shiuan; Fan, Yi-Hsin

    2013-09-27

    Noncoding RNA (ncRNA) plays a critical role in modulating a broad range of diseases. All arthropod-borne flaviviruses produce short fragment ncRNA (sfRNA) collinear with highly conserved regions of the 3'-untranslated region (UTR) in the viral genome. We show that the molar ratio of sfRNA to genomic RNA in Japanese encephalitis virus (JEV) persistently infected cells is greater than that in acutely infected cells, indicating an sfRNA role in establishing persistent infection. Transfecting excess quantities of sfRNA into JEV-infected cells reduced interferon-β (IFN-β) promoter activity by 57% and IFN-β mRNA levels by 52%, compared to mock-transfected cells. Transfection of sfRNA into JEV-infected cells also reduced phosphorylation of interferon regulatory factor-3 (IRF-3), the IFN-β upstream regulator, and blocked roughly 30% of IRF-3 nuclear localization. Furthermore, JEV-infected sfRNA transfected cells produced 23% less IFN-β-stimulated apoptosis than mock-transfected groups did. Taken together, these results suggest that sfRNA plays a role against host-cell antiviral responses, prevents cells from undergoing apoptosis, and thus contributes to viral persistence. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Large-scale modeling of condition-specific gene regulatory networks by information integration and inference.

    Science.gov (United States)

    Ellwanger, Daniel Christian; Leonhardt, Jörn Florian; Mewes, Hans-Werner

    2014-12-01

    Understanding how regulatory networks globally coordinate the response of a cell to changing conditions, such as perturbations by shifting environments, is an elementary challenge in systems biology which has yet to be met. Genome-wide gene expression measurements are high dimensional, as they reflect the condition-specific interplay of thousands of cellular components. The integration of prior biological knowledge into the modeling process of systems-wide gene regulation enables the large-scale interpretation of gene expression signals in the context of known regulatory relations. We developed COGERE (http://mips.helmholtz-muenchen.de/cogere), a method for the inference of condition-specific gene regulatory networks in human and mouse. We integrated existing knowledge of regulatory interactions from multiple sources to a comprehensive model of prior information. COGERE infers condition-specific regulation by evaluating the mutual dependency between regulator (transcription factor or miRNA) and target gene expression using prior information. This dependency is scored by the non-parametric, nonlinear correlation coefficient η² (eta squared) that is derived by a two-way analysis of variance. We show that COGERE significantly outperforms alternative methods in predicting condition-specific gene regulatory networks on simulated data sets. Furthermore, by inferring the cancer-specific gene regulatory network from the NCI-60 expression study, we demonstrate the utility of COGERE to promote hypothesis-driven clinical research. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
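
    The η² score measures the fraction of target-gene expression variance explained by the regulator. The paper derives it from a two-way analysis of variance; the sketch below shows only the simpler one-way form, with quantile binning of the regulator expression as an added assumption:

```python
import numpy as np

def eta_squared(regulator_expr, target_expr, n_bins=5):
    """Simplified eta-squared: fraction of target-expression variance explained
    by binned regulator expression (one-way layout; the paper uses a two-way ANOVA)."""
    regulator_expr = np.asarray(regulator_expr, dtype=float)
    target_expr = np.asarray(target_expr, dtype=float)

    # group samples by quantile bins of the regulator's expression
    edges = np.quantile(regulator_expr, np.linspace(0.0, 1.0, n_bins + 1))
    groups = np.digitize(regulator_expr, edges[1:-1])

    grand_mean = target_expr.mean()
    ss_total = ((target_expr - grand_mean) ** 2).sum()
    ss_between = sum(
        target_expr[groups == g].size * (target_expr[groups == g].mean() - grand_mean) ** 2
        for g in np.unique(groups)
    )
    return ss_between / ss_total if ss_total > 0 else 0.0
```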

  4. A vectorized Monte Carlo code for modeling photon transport in SPECT

    International Nuclear Information System (INIS)

    Smith, M.F.; Floyd, C.E. Jr.; Jaszczak, R.J.

    1993-01-01

    A vectorized Monte Carlo computer code has been developed for modeling photon transport in single photon emission computed tomography (SPECT). The code models photon transport in a uniform attenuating region and photon detection by a gamma camera. It is adapted from a history-based Monte Carlo code in which photon history data are stored in scalar variables and photon histories are computed sequentially. The vectorized code is written in FORTRAN77 and uses an event-based algorithm in which photon history data are stored in arrays and photon history computations are performed within DO loops. The indices of the DO loops range over the number of photon histories, and these loops may take advantage of the vector processing unit of our Stellar GS1000 computer for pipelined computations. Without the use of the vector processor the event-based code is faster than the history-based code because of numerical optimization performed during conversion to the event-based algorithm. When only the detection of unscattered photons is modeled, the event-based code executes 5.1 times faster with the use of the vector processor than without; when the detection of scattered and unscattered photons is modeled the speed increase is a factor of 2.9. Vectorization is a valuable way to increase the performance of Monte Carlo code for modeling photon transport in SPECT
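
    The contrast between the two algorithms can be sketched with a single "event": in an event-based code, the distance to the next interaction is sampled for every live photon history at once from arrays, where a history-based code would perform the same exponential sampling inside a per-photon loop. The attenuation coefficient below is an illustrative assumption, and this is a sketch rather than the published code:

```python
import numpy as np

def sample_free_paths_event_based(n_photons, mu=0.15, rng=None):
    """Event-based step: sample the next interaction distance for all photon histories at once.

    mu is an assumed linear attenuation coefficient (1/cm) of the uniform medium.
    """
    rng = rng or np.random.default_rng()
    xi = 1.0 - rng.random(n_photons)   # uniform samples in (0, 1], one per photon history
    return -np.log(xi) / mu            # exponentially distributed path lengths, vectorized
```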

  5. Evaluation of the analysis models in the ASTRA nuclear design code system

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Nam Jin; Park, Chang Jea; Kim, Do Sam; Lee, Kyeong Taek; Kim, Jong Woon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    2000-11-15

    In the field of nuclear reactor design, the main practice has been the application of improved design code systems. During this process, a substantial base of experience and knowledge was accumulated in processing input data, nuclear fuel reload design, and the production and analysis of design data. However, less effort was devoted to the analysis of the methodology and to the development or improvement of those code systems. Recently, the Korea Nuclear Fuel Company (KNFC) developed the ASTRA (Advanced Static and Transient Reactor Analyzer) code system for the purpose of nuclear reactor design and analysis. In the code system, two-group constants are generated from the CASMO-3 code system. The objective of this research is to analyze the analysis models used in the ASTRA/CASMO-3 code system. This evaluation requires in-depth comprehension of the models, which is as important as the development of the code system itself. Currently, most of the code systems used in domestic nuclear power plants were imported, so it is very difficult to maintain them and respond to changing circumstances. Therefore, the evaluation of the analysis models in the ASTRA nuclear reactor design code system is very important.

  6. Nodal kinetics model upgrade in the Penn State coupled TRAC/NEM codes

    International Nuclear Information System (INIS)

    Beam, Tara M.; Ivanov, Kostadin N.; Baratta, Anthony J.; Finnemann, Herbert

    1999-01-01

    The Pennsylvania State University currently maintains and performs development and verification work for its own versions of the coupled three-dimensional kinetics/thermal-hydraulics codes TRAC-PF1/NEM and TRAC-BF1/NEM. The subject of this paper is nodal model enhancements in the above-mentioned codes. Because of the numerous validation studies that have been performed on almost every aspect of these codes, this upgrade is done without a major code rewrite. The upgrade consists of four steps. The first two steps are designed to improve the accuracy of the kinetics model, which is based on the nodal expansion method. The polynomial expansion solution of the 1D transverse-integrated diffusion equation is replaced with a solution that uses a semi-analytic expansion. Further, the standard parabolic polynomial representation of the transverse leakage in the above 1D equations is replaced with an improved approximation. The last two steps of the upgrade address code efficiency by improving the solution of the time-dependent NEM equations and implementing a multi-grid solver. These four improvements are implemented into the standalone NEM kinetics code. Verification of this code was accomplished based on the original verification studies. The results show that the new methods improve the accuracy and efficiency of the code. The verification of the upgraded NEM model in the TRAC-PF1/NEM and TRAC-BF1/NEM coupled codes is underway

  7. Identification of genes for small non-coding RNAs that belong to the regulon of the two-component regulatory system CiaRH in Streptococcus

    Directory of Open Access Journals (Sweden)

    Hakenbeck Regine

    2010-11-01

    Full Text Available Abstract Background Post-transcriptional regulation by small RNAs (sRNAs in bacteria is now recognized as a wide-spread regulatory mechanism modulating a variety of physiological responses including virulence. In Streptococcus pneumoniae, an important human pathogen, the first sRNAs to be described were found in the regulon of the CiaRH two-component regulatory system. Five of these sRNAs were detected and designated csRNAs for cia-dependent small RNAs. CiaRH pleiotropically affects β-lactam resistance, autolysis, virulence, and competence development by yet to be defined molecular mechanisms. Since CiaRH is highly conserved among streptococci, it is of interest to determine if csRNAs are also included in the CiaRH regulon in this group of organisms consisting of commensal as well as pathogenic species. Knowledge on the participation of csRNAs in CiaRH-dependent regulatory events will be the key to define the physiological role of this important control system. Results Genes for csRNAs were predicted in streptococcal genomes and data base entries other than S. pneumoniae by searching for CiaR-activated promoters located in intergenic regions that are followed by a transcriptional terminator. 61 different candidate genes were obtained specifying csRNAs ranging in size from 51 to 202 nt. Comparing these genes among each other revealed 40 different csRNA types. All streptococcal genomes harbored csRNA genes, their numbers varying between two and six. To validate these predictions, S. mitis, S. oralis, and S. sanguinis were subjected to csRNA-specific northern blot analysis. In addition, a csRNA gene from S. thermophilus plasmid pST0 introduced into S. pneumoniae was also tested. Each of the csRNAs was detected on these blots and showed the anticipated sizes. Thus, the method applied here is able to predict csRNAs with high precision. Conclusions The results of this study strongly suggest that genes for small non-coding RNAs, csRNAs, are part of

  8. Preliminary Consideration for the Development of Regulatory Level 2 PSA Model

    International Nuclear Information System (INIS)

    Lee, Chang-Ju

    2006-01-01

    In order to assess the validity of PSA (probabilistic safety assessment) results and to establish regulatory requirements for relevant safety issues, most regulators want to develop an independent and convenient risk assessment model, including the Level 2 PSA area. As this model and framework should be independent of the licensee's PSA model, its primary objective is to be applied directly to risk-informed regulatory affairs and to support such work. Accordingly, the regulator can take an objective view of the uncertainty of risk information produced by the licensee and maintain the capability and decision-making framework for overall risk assessment results. In addition, the regulatory model may be used to verify and validate the operational risk levels of all engineered safety features of nuclear power plants (NPPs). An issue for plant-specific application of safety goals was previously identified in the US NRC's risk-informed regulatory guidance development activities and discussed in many Commission papers, e.g. SECY-97-287, which identifies the goal for large early release frequency (LERF). LERF defines a containment performance criterion derived from the quantitative health objectives. The LERF was again chosen to assess risk significance in Regulatory Guide 1.174 (2002), which provides one measure of the performance of the containment barrier and represents a surrogate for early health effects

  9. Mutual information and the fidelity of response of gene regulatory models

    International Nuclear Information System (INIS)

    Tabbaa, Omar P; Jayaprakash, C

    2014-01-01

    We investigate cellular response to extracellular signals by using information theory techniques motivated by recent experiments. We present results for the steady state of the following gene regulatory models found in both prokaryotic and eukaryotic cells: a linear transcription-translation model and a positive or negative auto-regulatory model. We calculate both the information capacity and the mutual information exactly for simple models and approximately for the full model. We find that (1) small changes in mutual information can lead to potentially important changes in cellular response and (2) there are diminishing returns in the fidelity of response as the mutual information increases. We calculate the information capacity using Gillespie simulations of a model for the TNF-α/NF-κB network and find good agreement with the measured value for an experimental realization of this network. Our results provide a quantitative understanding of the differences in cellular response when comparing experimentally measured mutual information values of different gene regulatory models. Our calculations demonstrate that Gillespie simulations can be used to compute the mutual information of more complex gene regulatory models, providing a potentially useful tool in synthetic biology. (paper)
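
    Mutual information between a signal and a response can be estimated from paired samples (for example, drawn from Gillespie simulations at different signal levels) with a simple plug-in estimator over a discretized joint distribution; this generic sketch is not the calculation used in the paper:

```python
import numpy as np

def mutual_information(signal_samples, response_samples, bins=20):
    """Plug-in estimate of I(signal; response) in bits from paired samples."""
    joint, _, _ = np.histogram2d(signal_samples, response_samples, bins=bins)
    p_xy = joint / joint.sum()                   # empirical joint distribution
    p_x = p_xy.sum(axis=1, keepdims=True)        # marginal over the response
    p_y = p_xy.sum(axis=0, keepdims=True)        # marginal over the signal
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])).sum())
```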

  10. Mining Gene Regulatory Networks by Neural Modeling of Expression Time-Series.

    Science.gov (United States)

    Rubiolo, Mariano; Milone, Diego H; Stegmayer, Georgina

    2015-01-01

    Discovering gene regulatory networks from data is one of the most studied topics in recent years. Neural networks can be successfully used to infer an underlying gene network by modeling expression profiles as time series. This work proposes a novel method based on a pool of neural networks for obtaining a gene regulatory network from a gene expression dataset. They are used for modeling each possible interaction between pairs of genes in the dataset, and a set of mining rules is applied to accurately detect the underlying relations among genes. The results obtained on artificial and real datasets confirm the method's effectiveness for discovering regulatory networks from a proper modeling of the temporal dynamics of gene expression profiles.
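
    A hypothetical sketch of the pairwise idea (using scikit-learn as a stand-in; the architecture, the R²-based mining rule and the threshold are illustrative assumptions, not those of the paper): fit a small network per ordered gene pair to predict the target's next expression value from the putative regulator's current value, and keep the pairs whose fit exceeds a threshold.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def pairwise_interaction_scores(expr, hidden=(4,), threshold=0.7):
    """Score each ordered gene pair (i -> j) by how well a small neural network
    predicts gene j's next expression value from gene i's current value.

    expr: array of shape (n_timepoints, n_genes).
    Returns a boolean adjacency matrix of putative regulatory links.
    """
    n_genes = expr.shape[1]
    adj = np.zeros((n_genes, n_genes), dtype=bool)
    for i in range(n_genes):
        for j in range(n_genes):
            if i == j:
                continue
            x, y = expr[:-1, [i]], expr[1:, j]
            net = MLPRegressor(hidden_layer_sizes=hidden, max_iter=2000, random_state=0)
            score = net.fit(x, y).score(x, y)   # R^2 of the fitted pair model
            adj[i, j] = score > threshold       # simple mining rule (illustrative)
    return adj
```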

  11. The future of genome-scale modeling of yeast through integration of a transcriptional regulatory network

    DEFF Research Database (Denmark)

    Liu, Guodong; Marras, Antonio; Nielsen, Jens

    2014-01-01

    Metabolism is regulated at multiple levels in response to the changes of internal or external conditions. Transcriptional regulation plays an important role in regulating many metabolic reactions by altering the concentrations of metabolic enzymes. Thus, integration of the transcriptional regulatory information is necessary to improve the accuracy and predictive ability of metabolic models. Here we review the strategies for the reconstruction of a transcriptional regulatory network (TRN) for yeast and the integration of such a reconstruction into a flux balance analysis-based metabolic model. While many large-scale TRN reconstructions have been reported for yeast, these reconstructions still need to be improved regarding the functionality and dynamic property of the regulatory interactions. In addition, mathematical modeling approaches need to be further developed to efficiently integrate...

  12. Parallelization of simulation code for liquid-gas model of lattice-gas fluid

    International Nuclear Information System (INIS)

    Kawai, Wataru; Ebihara, Kenichi; Kume, Etsuo; Watanabe, Tadashi

    2000-03-01

    A simulation code for hydrodynamical phenomena which is based on the liquid-gas model of lattice-gas fluid is parallelized by using the MPI (Message Passing Interface) library. The parallelized code can be applied to larger simulations than the non-parallelized code. The calculation times of the parallelized code on VPP500 (vector-parallel supercomputer with distributed memory units), AP3000 (scalar-parallel server with distributed memory units), and a workstation cluster decreased in inverse proportion to the number of processors. (author)
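
    The report's code is not reproduced here, but the typical MPI pattern for a lattice simulation, strip decomposition with halo exchange between neighbouring ranks at each step, can be sketched with mpi4py (the lattice size, divisibility by the number of ranks, and the periodic neighbour layout are illustrative assumptions):

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# illustrative 1-D strip decomposition of an N x N lattice with one-row halos
N = 256
rows_local = N // size                    # assumes N is divisible by the number of ranks
lattice = np.zeros((rows_local + 2, N))   # +2 rows for the halo cells

up = (rank - 1) % size                    # periodic neighbours
down = (rank + 1) % size

# exchange halo rows with neighbouring ranks before each update step
comm.Sendrecv(sendbuf=lattice[1], dest=up, recvbuf=lattice[-1], source=down)
comm.Sendrecv(sendbuf=lattice[-2], dest=down, recvbuf=lattice[0], source=up)
```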

  13. Validation of the containment code Sirius: interpretation of an explosion experiment on a scale model

    International Nuclear Information System (INIS)

    Blanchet, Y.; Obry, P.; Louvet, J.; Deshayes, M.; Phalip, C.

    1979-01-01

    The explicit 2-D axisymmetric Lagrangian code SIRIUS, developed at the CEA/DRNR, Cadarache, deals with transient compressive flows in deformable primary tanks with more or less complex internal component geometries. This code has been subjected to a two-year intensive validation program on scale model experiments and a number of improvements have been incorporated. This paper presents a recent calculation of one of these experiments using the SIRIUS code, and the comparison with experimental results shows the encouraging possibilities of this Lagrangian code

  14. A Test of Two Alternative Cognitive Processing Models: Learning Styles and Dual Coding

    Science.gov (United States)

    Cuevas, Joshua; Dawson, Bryan L.

    2018-01-01

    This study tested two cognitive models, learning styles and dual coding, which make contradictory predictions about how learners process and retain visual and auditory information. Learning styles-based instructional practices are common in educational environments despite a questionable research base, while the use of dual coding is less…

  15. Coupling of 3D neutronics models with the system code ATHLET

    International Nuclear Information System (INIS)

    Langenbuch, S.; Velkov, K.

    1999-01-01

    The system code ATHLET for plant transient and accident analysis has been coupled with 3D neutronics models, like QUABOX/CUBBOX, for the realistic evaluation of some specific safety problems under discussion. The considerations for the coupling approach and its realization are discussed. The specific features of the coupled code system established are explained and experience from first applications is presented. (author)

  16. HADES. A computer code for fast neutron cross section from the Optical Model

    International Nuclear Information System (INIS)

    Guasp, J.; Navarro, C.

    1973-01-01

    A FORTRAN V computer code for UNIVAC 1108/6 using a local Optical Model with spin-orbit interaction is described. The code calculates fast neutron cross sections, angular distribution, and Legendre moments for heavy and intermediate spherical nuclei. It allows for the possibility of automatic variation of potential parameters for experimental data fitting. (Author) 55 refs

  17. Implementation of the critical points model in a SFM-FDTD code working in oblique incidence

    Energy Technology Data Exchange (ETDEWEB)

    Hamidi, M; Belkhir, A; Lamrous, O [Laboratoire de Physique et Chimie Quantique, Universite Mouloud Mammeri, Tizi-Ouzou (Algeria); Baida, F I, E-mail: omarlamrous@mail.ummto.dz [Departement d' Optique P.M. Duffieux, Institut FEMTO-ST UMR 6174 CNRS Universite de Franche-Comte, 25030 Besancon Cedex (France)

    2011-06-22

    We describe the implementation of the critical points model in a finite-difference-time-domain code working in oblique incidence and dealing with dispersive media through the split field method. Some tests are presented to validate our code in addition to an application devoted to plasmon resonance of a gold nanoparticles grating.

  18. Mathematical models and illustrative results for the RINGBEARER II monopole/dipole beam-propagation code

    International Nuclear Information System (INIS)

    Chambers, F.W.; Masamitsu, J.A.; Lee, E.P.

    1982-01-01

    RINGBEARER II is a linearized monopole/dipole particle simulation code for studying intense relativistic electron beam propagation in gas. In this report the mathematical models utilized for beam particle dynamics and pinch field computation are delineated. Difficulties encountered in code operations and some remedies are discussed. Sample output is presented detailing the diagnostics and the methods of display and analysis utilized

  19. FISIC - a full-wave code to model ion cyclotron resonance heating of tokamak plasmas

    International Nuclear Information System (INIS)

    Kruecken, T.

    1988-08-01

    We present a user manual for the FISIC code which solves the integrodifferential wave equation in the finite Larmor radius approximation in fully toroidal geometry to simulate ICRF heating experiments. The code models the electromagnetic wave field as well as antenna coupling and power deposition profiles in axisymmetric plasmas. (orig.)

  20. PEBBLES: A COMPUTER CODE FOR MODELING PACKING, FLOW AND RECIRCULATION OF PEBBLES IN A PEBBLE BED REACTOR

    Energy Technology Data Exchange (ETDEWEB)

    Joshua J. Cogliati; Abderrafi M. Ougouag

    2006-10-01

    A comprehensive, high fidelity model for pebble flow has been developed and embodied in the PEBBLES computer code. In this paper, a description of the physical artifacts included in the model is presented and some results from using the computer code for predicting the features of pebble flow and packing in a realistic pebble bed reactor design are shown. The sensitivity of models to various physical parameters is also discussed.

  1. Automatic modeling for the Monte Carlo transport code Geant4

    International Nuclear Information System (INIS)

    Nie Fanzhi; Hu Liqin; Wang Guozhong; Wang Dianxi; Wu Yican; Wang Dong; Long Pengcheng; FDS Team

    2015-01-01

    Geant4 is a widely used Monte Carlo transport simulation package. Its geometry models can be described in the Geometry Description Markup Language (GDML), but it is time-consuming and error-prone to describe the geometry models manually. This study implemented the conversion between computer-aided design (CAD) geometry models and GDML models. The method was studied based on the Multi-Physics Coupling Analysis Modeling Program (MCAM). The tests, including the FDS-II model, demonstrated its accuracy and feasibility. (authors)

  2. A combined N-body and hydrodynamic code for modeling disk galaxies

    International Nuclear Information System (INIS)

    Schroeder, M.C.

    1989-01-01

    A combined N-body and hydrodynamic computer code for the modeling of two-dimensional galaxies is described. The N-body portion of the code is used to calculate the motion of the particle component of a galaxy, while the hydrodynamics portion of the code is used to follow the motion and evolution of the fluid component. A complete description of the numerical methods used for each portion of the code is given. Additionally, the proof tests of the separate and combined portions of the code are presented and discussed. Finally, a discussion of the topics researched with the code and the results obtained is presented. These include: the measurement of stellar relaxation times in disk galaxy simulations; the effects of two-armed spiral perturbations on stable axisymmetric disks; the effects of the inclusion of an interstellar medium (ISM) on the stability of disk galaxies; and the effect of the inclusion of stellar evolution on disk galaxy simulations
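
    The particle (N-body) portion of such a code advances the stellar component with a time-symmetric integrator; a minimal two-dimensional sketch with direct-summation gravity, Plummer softening and a kick-drift-kick leapfrog step is shown below (the actual integrator and force solver of the code are not specified in the abstract, so this is purely illustrative):

```python
import numpy as np

def accelerations(pos, mass, soft=0.05):
    """Direct-summation 2-D gravitational accelerations with Plummer softening (G = 1)."""
    dx = pos[None, :, :] - pos[:, None, :]    # pairwise separation vectors r_j - r_i
    r2 = (dx ** 2).sum(-1) + soft ** 2
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)             # no self-force
    return (dx * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick leapfrog step for the stellar (particle) component."""
    vel += 0.5 * dt * accelerations(pos, mass)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos, mass)
    return pos, vel
```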

  3. Subgroup A: nuclear model codes report to the Sixteenth Meeting of the WPEC

    International Nuclear Information System (INIS)

    Talou, P.; Chadwick, M.B.; Dietrich, F.S.; Herman, M.; Kawano, T.; Konig, A.; Oblozinsky, P.

    2004-01-01

    The Subgroup A activities focus on the development of nuclear reaction models and codes, used in evaluation work for nuclear reactions from the unresolved energy region up to the pion production threshold, and for target nuclides from the low teens and heavier. Much of the effort is devoted by each participant to the continuing development of their own institutions' codes. Progress in this arena is reported in detail for each code in the present document. EMPIRE-II is publicly accessible. The release of the TALYS code has been announced for the ND2004 Conference in Santa Fe, NM, October 2004. McGNASH is still under development and is not expected to be released in the very near future. In addition, Subgroup A members have demonstrated a growing interest in working on common modeling and code capabilities, which would significantly reduce the amount of duplicated work, help manage the growing lines of existing codes efficiently, and render code inter-comparison much easier. A recent and important activity of Subgroup A has therefore been to develop the framework and the first bricks of the ModLib library, which consists of mostly independent pieces of code written in Fortran 90 (and above) to be used in existing and future nuclear reaction codes. Significant progress in the development of ModLib has been made during the past year. Several physics modules have been added to the library, and a few more have been planned in detail for the coming year.

  4. [Regulatory effect of Erbao granules on brain-gut peptide in juvenile animal model of anorexia].

    Science.gov (United States)

    Zhang, Y; Du, Y; Wang, S

    2000-10-01

    To study the regulatory effect of Erbao granules (EBG) on central and peripheral brain-gut peptide in juvenile animal model of anorexia. Juvenile rat model of anorexia was established by imitating the major cause of infantile anorexia and treated with EBG. The cholocystokinin-octapeptide (CCK-8) and beta-endorphin (beta-EP) concentration in hypothalamus, antrum pyloricum and peripheral blood were examined by radioimmunoassay. CCK-8 concentration in hypothalamus and plasma in the model rats increased (P anorexia model.

  5. Realistic edge field model code REFC for designing and study of isochronous cyclotron

    International Nuclear Information System (INIS)

    Ismail, M.

    1989-01-01

    The focussing properties and the requirements for isochronism in a cyclotron magnet configuration are well known in the hard edge field model. The fact that they quite often change considerably in a realistic field can be attributed mainly to the influence of the edge field. A solution to this problem requires a field model which allows a simple construction of equilibrium orbits and yields simple formulae. This can be achieved by using a fitted realistic edge field (Hudson et al 1975) in the region of the pole edge, and such a field model is therefore called a realistic edge field model. A code REFC based on the realistic edge field model has been developed to design the cyclotron sectors, and the code FIELDER has been used to study the beam properties. In this report the REFC code is described along with some relevant explanation of the FIELDER code. (author). 11 refs., 6 figs

  6. Multivariate Hawkes process models of the occurrence of regulatory elements

    DEFF Research Database (Denmark)

    Carstensen, L; Sandelin, A; Winther, Ole

    2010-01-01

    ... distribution of the occurrences of these TREs along the genome. RESULTS: We present a model of TRE occurrences known as the Hawkes process. We illustrate the use of this model by analyzing two different publicly available data sets. We are able to model, in detail, how the occurrence of one TRE is affected ... For each of the two data sets we provide two results: first, a qualitative description of the dependencies among the occurrences of the TREs, and second, quantitative results on the favored or avoided distances between the different TREs. CONCLUSIONS: The Hawkes process is a novel way of modeling the joint ...
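
    For orientation, the conditional intensity of a multivariate Hawkes process with exponential kernels has the form λ_i(t) = μ_i + Σ_j Σ_{t_k^j < t} α_ij exp(-β_ij (t - t_k^j)); a small evaluation sketch follows (illustrative parameter names, not the authors' estimation code):

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity of a multivariate Hawkes process at time t.

    events : list of 1-D arrays; events[j] holds the genomic positions of TRE type j
    mu     : baseline rates, shape (M,)
    alpha  : excitation matrix, shape (M, M)
    beta   : exponential decay rates, shape (M, M)
    """
    lam = np.asarray(mu, dtype=float).copy()
    for j, times_j in enumerate(events):
        past = times_j[times_j < t]                 # only earlier occurrences contribute
        for i in range(lam.size):
            lam[i] += alpha[i, j] * np.exp(-beta[i, j] * (t - past)).sum()
    return lam
```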

  7. The MELTSPREAD Code for Modeling of Ex-Vessel Core Debris Spreading Behavior, Code Manual – Version3-beta

    Energy Technology Data Exchange (ETDEWEB)

    Farmer, M. T. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-09-01

    MELTSPREAD3 is a transient one-dimensional computer code that has been developed to predict the gravity-driven flow and freezing behavior of molten reactor core materials (corium) in containment geometries. Predictions can be made for corium flowing across surfaces under either dry or wet cavity conditions. The spreading surfaces that can be selected are steel, concrete, a user-specified material (e.g., a ceramic), or an arbitrary combination thereof. The corium can have a wide range of compositions of reactor core materials that includes distinct oxide phases (predominantly Zr, and steel oxides) plus metallic phases (predominantly Zr and steel). The code requires input that describes the containment geometry, melt “pour” conditions, and cavity atmospheric conditions (i.e., pressure, temperature, and cavity flooding information). For cases in which the cavity contains a preexisting water layer at the time of RPV failure, melt jet breakup and particle bed formation can be calculated mechanistically given the time-dependent melt pour conditions (input data) as well as the heatup and boiloff of water in the melt impingement zone (calculated). For core debris impacting either the containment floor or previously spread material, the code calculates the transient hydrodynamics and heat transfer which determine the spreading and freezing behavior of the melt. The code predicts conditions at the end of the spreading stage, including melt relocation distance, depth and material composition profiles, substrate ablation profile, and wall heatup. Code output can be used as input to other models such as CORQUENCH that evaluate long term core-concrete interaction behavior following the transient spreading stage. MELTSPREAD3 was originally developed to investigate BWR Mark I liner vulnerability, but has been substantially upgraded and applied to other reactor designs (e.g., the EPR), and more recently to the plant accidents at Fukushima Daiichi. The most recent round of

  8. Improvement on reaction model for sodium-water reaction jet code and application analysis

    International Nuclear Information System (INIS)

    Itooka, Satoshi; Saito, Yoshinori; Okabe, Ayao; Fujimata, Kazuhiro; Murata, Shuuichi

    2000-03-01

    In selecting the reasonable DBL on the steam generator (SG), it is necessary to improve the analytical method for estimating the sodium temperature during failure propagation due to overheating. Improvement of the sodium-water reaction (SWR) jet code (LEAP-JET ver.1.30) and application analysis to the water injection tests for confirmation of code propriety were performed. For the improvement of the code, a gas-liquid interface area density model was introduced to develop a chemical reaction model with little dependence on calculation mesh size. Test calculations using the improved code (LEAP-JET ver.1.40) were carried out under the conditions of the SWAT-3·Run-19 test and an actual scale SG. It is confirmed that the calculated SWR jet behavior and the influence of the model on the analysis results are reasonable. For the application analysis to the water injection tests, water injection behavior and SWR jet behavior analyses for the new SWAT-1 (SWAT-1R) and SWAT-3 (SWAT-3R) tests were performed using the LEAP-BLOW code and the LEAP-JET code. In the application analysis with the LEAP-BLOW code, a parameter survey study was performed. As a result, the injection nozzle diameter condition needed to simulate the water leak rate was confirmed. In the application analysis with the LEAP-JET code, the temperature behavior of the SWR jet was investigated. (author)

  9. A generic method for automatic translation between input models for different versions of simulation codes

    International Nuclear Information System (INIS)

    Serfontein, Dawid E.; Mulder, Eben J.; Reitsma, Frederik

    2014-01-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, often are very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. Therefore a generic algorithm for producing such automatic translation codes may ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications

  10. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)

    2014-05-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, often are very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. Therefore a generic algorithm for producing such automatic translation codes may ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.
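
    The generic recipe described above, read each variable of the old input model, map it to the new format through an explicit dictionary, and write every name, value and meaning to a verification log, can be sketched as follows (the file format, variable names and mapping are invented for illustration; the actual VSOP formats differ):

```python
import json

# illustrative mapping from old-format variable names to new-format names and meanings
NAME_MAP = {
    "NCORE": ("n_core_channels", "number of core flow channels"),
    "POWTOT": ("total_power_mw", "total thermal power in MW"),
}

def translate(old_model: dict, log_path: str) -> dict:
    """Translate an old-format input model (as a dict) and write a verification log."""
    new_model, log_lines = {}, []
    for old_name, value in old_model.items():
        new_name, meaning = NAME_MAP.get(old_name, (old_name, "unmapped - copied verbatim"))
        new_model[new_name] = value
        log_lines.append(f"{old_name} -> {new_name} = {value!r}  # {meaning}")
    with open(log_path, "w") as log:
        log.write("\n".join(log_lines) + "\n")   # permanent record for verification
    return new_model

print(json.dumps(translate({"NCORE": 12, "POWTOT": 400.0}, "translation.log"), indent=2))
```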

  11. GEMSFITS: Code package for optimization of geochemical model parameters and inverse modeling

    International Nuclear Information System (INIS)

    Miron, George D.; Kulik, Dmitrii A.; Dmytrieva, Svitlana V.; Wagner, Thomas

    2015-01-01

    Highlights: • Tool for generating consistent parameters against various types of experiments. • Handles a large number of experimental data and parameters (is parallelized). • Has a graphical interface and can perform statistical analysis on the parameters. • Tested on fitting the standard state Gibbs free energies of aqueous Al species. • Example on fitting interaction parameters of mixing models and thermobarometry. - Abstract: GEMSFITS is a new code package for fitting internally consistent input parameters of GEM (Gibbs Energy Minimization) geochemical–thermodynamic models against various types of experimental or geochemical data, and for performing inverse modeling tasks. It consists of the gemsfit2 (parameter optimizer) and gfshell2 (graphical user interface) programs, both accessing a NoSQL database, all developed with flexibility, generality, efficiency, and user friendliness in mind. The parameter optimizer gemsfit2 includes the GEMS3K chemical speciation solver (http://gems.web.psi.ch/GEMS3K), which features a comprehensive suite of non-ideal activity and equation-of-state models of solution phases (aqueous electrolyte, gas and fluid mixtures, solid solutions, (ad)sorption). The gemsfit2 code uses the robust open-source NLopt library for parameter fitting, which provides a selection between several nonlinear optimization algorithms (global, local, gradient-based), and supports large-scale parallelization. The gemsfit2 code can also perform comprehensive statistical analysis of the fitted parameters (basic statistics, sensitivity, Monte Carlo confidence intervals), thus supporting the user with powerful tools for evaluating the quality of the fits and the physical significance of the model parameters. The gfshell2 code provides menu-driven setup of optimization options (data selection, properties to fit and their constraints, measured properties to compare with computed counterparts, and statistics). The practical utility, efficiency, and

  12. Development of seismic analysis model for HTGR core on commercial FEM code

    International Nuclear Information System (INIS)

    Tsuji, Nobumasa; Ohashi, Kazutaka

    2015-01-01

    The aftermath of the Great East Japan Earthquake prompts a severe revision of the design basis earthquake intensity. In the aseismic design of a block-type HTGR, securing the structural integrity of the core blocks and other structures made of graphite becomes more important. For the aseismic design of a block-type HTGR, it is necessary to predict the motion of core blocks that collide with adjacent blocks. Some seismic analysis codes were developed in the 1970s, but these are special purpose-built codes with poor interoperability with other structural analysis codes. We developed a vertical two-dimensional analytical model in a multi-purpose commercial FEM code, which takes into account the multiple impacts and friction between block interfaces and the rocking motion on contact with dowel pins of the HTGR core, by using contact elements. This model is verified by comparison with the experimental results of a 12-column vertical slice vibration test. (author)

  13. Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes

    Science.gov (United States)

    DeWitt, Kenneth; Garg, Vijay; Ameri, Ali

    2005-01-01

    The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate, and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include: application of the Glenn-HT code to specific configurations made available under the Turbine Based Combined Cycle (TBCC) and Ultra Efficient Engine Technology (UEET) projects, and validating the use of a multi-block code for the time-accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable the design of more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.

  14. Light water reactor fuel analysis code FEMAXI-7; model and structure

    International Nuclear Information System (INIS)

    Suzuki, Motoe; Udagawa, Yutaka; Saitou, Hiroaki

    2011-03-01

    The light water reactor fuel analysis code FEMAXI-7 has been developed for the purpose of analyzing fuel behavior in both normal conditions and anticipated transient conditions. This code is an advanced version produced by incorporating numerous functional improvements and extensions into the former version FEMAXI-6. In FEMAXI-7, many new models have been added and parameters have been clearly arranged. Also, to facilitate effective maintenance and accessibility of the code, modularization of subroutines and functions has been achieved, and descriptive comments on variables and physical quantities have been incorporated in the source code. With these advancements, the FEMAXI-7 code has been upgraded to a versatile analytical tool for high burnup fuel behavior analyses. This report describes in detail the design, basic theory and structure, models and numerical method, and improvements and extensions. (author)

  15. Light water reactor fuel analysis code FEMAXI-7. Model and structure

    International Nuclear Information System (INIS)

    Suzuki, Motoe; Udagawa, Yutaka; Nagase, Fumihisa; Saitou, Hiroaki

    2013-07-01

    The light water reactor fuel analysis code FEMAXI-7 has been developed for the purpose of analyzing fuel behavior in both normal conditions and anticipated transient conditions. This code is an advanced version produced by incorporating numerous functional improvements and extensions into the former version FEMAXI-6. In FEMAXI-7, many new models have been added and parameters have been clearly arranged. Also, to facilitate effective maintenance and accessibility of the code, modularization of subroutines and functions has been achieved, and descriptive comments on variables and physical quantities have been incorporated in the source code. With these advancements, the FEMAXI-7 code has been upgraded to a versatile analytical tool for high burnup fuel behavior analyses. This report describes in detail the design, basic theory and structure, models and numerical method of FEMAXI-7, and its improvements and extensions. (author)

  16. Non-coding RNAs in Mesenchymal Stem Cell-Derived Extracellular Vesicles: Deciphering Regulatory Roles in Stem Cell Potency, Inflammatory Resolve, and Tissue Regeneration

    Directory of Open Access Journals (Sweden)

    Farah Fatima

    2017-10-01

    Full Text Available Extracellular vesicles (EVs) are heterogeneous populations of nano- and micro-sized vesicles secreted by various cell types. There is mounting evidence that EVs have widespread roles in transporting proteins, lipids, and nucleic acids between cells and serve as mediators of intercellular communication. EVs secreted from stem cells could function as paracrine factors, and appear to mimic and recapitulate several features of their secreting cells. EV-mediated transport of regulatory RNAs provides a novel source of trans-regulation between cells. As such, stem cells have evolved unique forms of paracrine mechanisms for recapitulating their potencies with specialized functions by transporting non-coding RNAs (ncRNAs) via EVs. This includes the dissemination of stem cell-derived EV-ncRNAs and their regulatory effects elicited in differentiation, self-renewal, pluripotency, and the induction of reparative programs. Here, we summarize and discuss the therapeutic effects of mesenchymal stem cell-derived EV-ncRNAs in the induction of intrinsic regenerative programs elicited through regulating several mechanisms. Among them, the most noticeable is the EV-mediated enrichment of ncRNAs at injury sites, contributing to the regulation of matrix remodeling, epithelial-mesenchymal transitions, and the attraction of fibroblasts. Additionally, we emphasize EV-mediated transmission of anti-inflammatory RNAs from stem cells to the injury site, which potentially orchestrates the resolution of inflammatory responses and immune alleviation to better facilitate healing processes. Collectively, this knowledge indicates a high value and potential of EV-mediated RNA-based therapeutic approaches in regenerative medicine.

  17. Network modeling reveals prevalent negative regulatory relationships between signaling sectors in Arabidopsis immune signaling.

    Directory of Open Access Journals (Sweden)

    Masanao Sato

    Full Text Available Biological signaling processes may be mediated by complex networks in which network components and network sectors interact with each other in complex ways. Studies of complex networks benefit from approaches in which the roles of individual components are considered in the context of the network. The plant immune signaling network, which controls inducible responses to pathogen attack, is such a complex network. We studied the Arabidopsis immune signaling network upon challenge with a strain of the bacterial pathogen Pseudomonas syringae expressing the effector protein AvrRpt2 (Pto DC3000 AvrRpt2). This bacterial strain feeds multiple inputs into the signaling network, allowing many parts of the network to be activated at once. mRNA profiles for 571 immune response genes of 22 Arabidopsis immunity mutants and wild type were collected 6 hours after inoculation with Pto DC3000 AvrRpt2. The mRNA profiles were analyzed as detailed descriptions of changes in the network state resulting from the genetic perturbations. Regulatory relationships among the genes corresponding to the mutations were inferred by recursively applying a non-linear dimensionality reduction procedure to the mRNA profile data. The resulting static network model accurately predicted 23 of 25 regulatory relationships reported in the literature, suggesting that predictions of novel regulatory relationships are also accurate. The network model revealed two striking features: (i) the components of the network are highly interconnected; and (ii) negative regulatory relationships are common between signaling sectors. Complex regulatory relationships, including a novel negative regulatory relationship between the early microbe-associated molecular pattern-triggered signaling sectors and the salicylic acid sector, were further validated. We propose that prevalent negative regulatory relationships among the signaling sectors make the plant immune signaling network a "sector

  18. Context discovery using attenuated Bloom codes: model description and validation

    NARCIS (Netherlands)

    Liu, F.; Heijenk, Geert

    A novel approach to performing context discovery in ad-hoc networks based on the use of attenuated Bloom filters is proposed in this report. In order to investigate the performance of this approach, a model has been developed. This document describes the model and its validation. The model has been
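
    For readers unfamiliar with the data structure, the sketch below shows a minimal attenuated Bloom filter; the filter width, hash construction, and propagation rule are assumptions made for illustration and are not taken from the report.

```python
# Minimal attenuated Bloom filter: one Bloom filter per hop distance.
import hashlib

class AttenuatedBloomFilter:
    def __init__(self, depth=4, width=256, n_hashes=3):
        self.depth, self.width, self.n_hashes = depth, width, n_hashes
        self.layers = [0] * depth            # one bit vector (stored as int) per hop distance

    def _bits(self, item):
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.width

    def add(self, item, distance=0):
        """Register a context item reachable at the given hop distance."""
        for bit in self._bits(item):
            self.layers[distance] |= (1 << bit)

    def query(self, item):
        """Return the smallest advertised hop distance, or None if (probably) absent."""
        for d, layer in enumerate(self.layers):
            if all((layer >> bit) & 1 for bit in self._bits(item)):
                return d
        return None

    def merge_from_neighbour(self, other):
        """Incorporate a neighbour's filter, attenuated by one extra hop."""
        for d in range(self.depth - 1):
            self.layers[d + 1] |= other.layers[d]

abf = AttenuatedBloomFilter()
abf.add("printer-service")
print(abf.query("printer-service"), abf.query("unknown-service"))
```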

  19. Intercept Centering and Time Coding in Latent Difference Score Models

    Science.gov (United States)

    Grimm, Kevin J.

    2012-01-01

    Latent difference score (LDS) models combine benefits derived from autoregressive and latent growth curve models allowing for time-dependent influences and systematic change. The specification and descriptions of LDS models include an initial level of ability or trait plus an accumulation of changes. A limitation of this specification is that the…

  20. Atmospheric radiative transfer modeling: a summary of the AER codes

    Energy Technology Data Exchange (ETDEWEB)

    Clough, S.A. [Atmospheric and Environmental Research (AER) Inc., 131 Hartwell Avenue, Lexington, MA 02421-3126 (United States); Shephard, M.W. [Atmospheric and Environmental Research (AER) Inc., 131 Hartwell Avenue, Lexington, MA 02421-3126 (United States)]. E-mail: mshephar@aer.com; Mlawer, E.J. [Atmospheric and Environmental Research (AER) Inc., 131 Hartwell Avenue, Lexington, MA 02421-3126 (United States); Delamere, J.S. [Atmospheric and Environmental Research (AER) Inc., 131 Hartwell Avenue, Lexington, MA 02421-3126 (United States); Iacono, M.J. [Atmospheric and Environmental Research (AER) Inc., 131 Hartwell Avenue, Lexington, MA 02421-3126 (United States); Cady-Pereira, K. [Atmospheric and Environmental Research (AER) Inc., 131 Hartwell Avenue, Lexington, MA 02421-3126 (United States); Boukabara, S. [Atmospheric and Environmental Research (AER) Inc., 131 Hartwell Avenue, Lexington, MA 02421-3126 (United States); Brown, P.D. [Atmospheric and Environmental Research (AER) Inc., 131 Hartwell Avenue, Lexington, MA 02421-3126 (United States)

    2005-03-01

    The radiative transfer models developed at AER are being used extensively for a wide range of applications in the atmospheric sciences. This communication is intended to provide a coherent summary of the various radiative transfer models and associated databases publicly available from AER (http://www.rtweb.aer.com). Among the communities using the models are the remote sensing community (e.g. TES, IASI), the numerical weather prediction community (e.g. ECMWF, NCEP GFS, WRF, MM5), and the climate community (e.g. ECHAM5). Included in this communication is a description of the central features and recent updates for the following models: the line-by-line radiative transfer model (LBLRTM); the line file creation program (LNFL); the longwave and shortwave rapid radiative transfer models, RRTM_LW and RRTM_SW; the Monochromatic Radiative Transfer Model (MonoRTM); the MT_CKD Continuum; and the Kurucz Solar Source Function. LBLRTM and the associated line parameter database (e.g. HITRAN 2000 with 2001 updates) play a central role in the suite of models. The physics adopted for LBLRTM has been extensively analyzed in the context of closure experiments involving the evaluation of the model inputs (e.g. atmospheric state), spectral radiative measurements and the spectral model output. The rapid radiative transfer models are then developed and evaluated using the validated LBLRTM model.

  1. A model for quantity estimation for multi-coded team events

    African Journals Online (AJOL)

    Participation in multi-coded sports events often involves travel to international ... Medication use by Team south africa during the XXVIIIth olympiad: a model .... individual sports included in the programme (e.g. athletes involved in contact sports ...

  2. Documentation for grants equal to tax model: Volume 3, Source code

    International Nuclear Information System (INIS)

    Boryczka, M.K.

    1986-01-01

    The GETT model is capable of forecasting the amount of tax liability associated with all property owned and all activities undertaken by the US Department of Energy (DOE) in site characterization and repository development. The GETT program is a user-friendly, menu-driven model developed using dBASE III™, a relational data base management system. The data base for GETT consists primarily of eight separate dBASE III™ files corresponding to each of the eight taxes (real property, personal property, corporate income, franchise, sales, use, severance, and excise) levied by State and local jurisdictions on business property and activity. Additional smaller files help to control model inputs and reporting options. Volume 3 of the GETT model documentation is the source code. The code is arranged primarily by the eight tax types. Other code files include those for JURISDICTION, SIMULATION, VALIDATION, TAXES, CHANGES, REPORTS, GILOT, and GETT. The code has been verified through hand calculations

  3. Field-based tests of geochemical modeling codes: New Zealand hydrothermal systems

    International Nuclear Information System (INIS)

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1993-12-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions

  4. Field-based tests of geochemical modeling codes using New Zealand hydrothermal systems

    International Nuclear Information System (INIS)

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1994-06-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions

  5. A mathematical model, and code HADES, for migration of radionuclides from a shallow repository

    International Nuclear Information System (INIS)

    Fraser, J.L.; Jarvis, R.G.

    1985-06-01

    The mathematical model is one-dimensional and describes the migration of radionuclides, by diffusion and advection, through several consecutive layers that can represent vault materials and the surrounding ground. The solutions are evaluated by the computer code HADES
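
    The toy script below illustrates the type of calculation such a model performs (explicit finite differences for one-dimensional advection-diffusion with decay through two consecutive layers); it is not the HADES code, and all material properties are invented for the example.

```python
# Explicit finite-difference sketch of 1-D advection-diffusion with decay through two layers.
import numpy as np

nx, dx = 200, 0.01                       # 2 m domain split into 1 cm cells
x = (np.arange(nx) + 0.5) * dx
D = np.where(x < 1.0, 1e-3, 5e-4)        # layer-dependent diffusion coefficient [m^2/a]
v, lam = 1e-2, 1e-3                      # advection velocity [m/a], decay constant [1/a]
dt = 0.2 * dx**2 / D.max()               # time step within the explicit stability limit

c = np.zeros(nx)
c[0] = 1.0                               # unit concentration held at the repository side
for _ in range(20000):
    Dface = 0.5 * (D[:-1] + D[1:])                       # diffusivity at interior cell faces
    flux = -Dface * (c[1:] - c[:-1]) / dx + v * c[:-1]   # diffusive + upwind advective flux
    c[1:-1] += dt * (-(flux[1:] - flux[:-1]) / dx - lam * c[1:-1])
print("relative concentration 1 m into the barrier:", round(float(c[nx // 2]), 4))
```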

  6. Recent progress of an integrated implosion code and modeling of element physics

    International Nuclear Information System (INIS)

    Nagatomo, H.; Takabe, H.; Mima, K.; Ohnishi, N.; Sunahara, A.; Takeda, T.; Nishihara, K.; Nishiguchu, A.; Sawada, K.

    2001-01-01

    The physics of inertial fusion is based on a variety of elements such as compressible hydrodynamics, radiation transport, non-ideal equations of state, non-LTE atomic processes, and relativistic laser-plasma interaction. In addition, the implosion process is not stationary, so fluid dynamics, energy transport and instabilities must be solved simultaneously. In order to study such complex physics, an integrated implosion code including all physics important in the implosion process should be developed. The details of each physics element should be studied and the resulting numerical models installed in the integrated code so that the implosion can be simulated on available computers within realistic CPU time. This task can therefore be separated into two parts. One is to integrate all physics elements into a code, which is strongly tied to the development of the hydrodynamic equation solver. We have developed a 2-D integrated implosion code which solves mass, momentum, electron energy, ion energy, equations of state, laser ray-tracing, laser absorption, radiation, surface tracking and so on. Reasonable results have been obtained with this code in simulating Rayleigh-Taylor instability and cylindrical implosion. The other part is the development and verification of codes for each element of the physics. Progress has been made in developing a nonlocal electron transport code and 2- and 3-dimensional radiation hydrodynamics codes. (author)

  7. Linear-Time Non-Malleable Codes in the Bit-Wise Independent Tampering Model

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Döttling, Nico

    Non-malleable codes were introduced by Dziembowski et al. (ICS 2010) as coding schemes that protect a message against tampering attacks. Roughly speaking, a code is non-malleable if decoding an adversarially tampered encoding of a message m produces the original message m or a value m' (eventually abort) completely unrelated to m. It is known that non-malleability is possible only for restricted classes of tampering functions. Since their introduction, a long line of works has established feasibility results for non-malleable codes against different families of tampering functions. The present work builds on the non-malleable codes of Agrawal et al. (TCC 2015) and of Cheraghchi and Guruswami (TCC 2014) and improves the previous result in the bit-wise independent tampering model: it builds the first non-malleable codes with linear-time complexity and optimal rate (i.e. rate 1 - o(1)).

  8. Radiation heat transfer model for the SCDAP code

    International Nuclear Information System (INIS)

    Sohal, M.S.

    1984-01-01

    A radiation heat transfer model has been developed for severe fuel damage analysis which accounts for anisotropic effects of reflected radiation. The model simplifies the view factor calculation which results in significant savings in computational cost with little loss of accuracy. Radiation heat transfer rates calculated by the isotropic and anisotropic models compare reasonably well with those calculated by other models. The model is applied to an experimental nuclear rod bundle during a slow boiloff of the coolant liquid, a situation encountered during a loss of coolant accident with severe fuel damage. At lower temperatures and also lower temperature gradients in the core, the anisotropic effect was not found to be significant

  9. Regulatory T cell effects in antitumor laser immunotherapy: a mathematical model and analysis

    Science.gov (United States)

    Dawkins, Bryan A.; Laverty, Sean M.

    2016-03-01

    Regulatory T cells (Tregs) have tremendous influence on treatment outcomes in patients receiving immunotherapy for cancerous tumors. We present a mathematical model incorporating the primary cellular and molecular components of antitumor laser immunotherapy. We explicitly model developmental classes of dendritic cells (DCs), cytotoxic T cells (CTLs), primary and metastatic tumor cells, and tumor antigen. Regulatory T cells have been shown to kill antigen presenting cells, to influence dendritic cell maturation and migration, to kill activated killer CTLs in the tumor microenvironment, and to influence CTL proliferation. Because Tregs affect the explicitly modeled cells but we do not explicitly model Treg dynamics themselves, we use model parameters to analyze the effects of Treg immunosuppressive activity. We outline a systematic method for assigning clinical outcomes to model simulations and use this classification to associate simulated patient treatment outcomes with Treg activity.
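
    A deliberately simplified sketch of this modeling idea is given below: a single suppression parameter stands in for Treg activity in a two-equation tumor/CTL system, whereas the actual model tracks dendritic-cell classes, antigen, and metastatic tumor cells. All rates and initial conditions are invented for illustration.

```python
# Toy ODE sketch: Treg suppression folded into a single parameter s in [0, 1].
from scipy.integrate import solve_ivp

def rhs(t, y, s):
    T, E = y                                      # tumor cells, effector CTLs
    dT = 0.3 * T * (1 - T / 1e9) - 1e-7 * (1 - s) * E * T   # logistic growth minus killing
    dE = 1e-6 * (1 - s) * T - 0.1 * E             # antigen-driven recruitment minus decay
    return [dT, dE]

for s in (0.0, 0.5, 0.9):                         # increasing Treg suppression
    sol = solve_ivp(rhs, (0, 200), [1e6, 1e3], args=(s,), rtol=1e-8)
    print(f"suppression {s:.1f}: final tumor burden {sol.y[0, -1]:.3e}")
```

    The print-out simply orders the simulated outcomes by suppression level, mimicking the paper's idea of associating simulated treatment outcome with Treg activity.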

  10. Assessment of critical flow models of RELAP5-MOD2 and CATHARE codes

    International Nuclear Information System (INIS)

    Hao Laomi; Zhu Zhanchuan

    1992-01-01

    The critical flow tests for the long and short nozzles conducted on the SUPER MOBY-DICK facility were analyzed using the RELAP5-MOD2 and CATHARE 1.3 codes to assess the critical flow models of the two codes. The critical mass fluxes calculated for the two nozzles are given. The CATHARE code uses the thermodynamic nonequilibrium sound velocity of the two-phase fluid as the critical flow criterion, has better interphase transfer models, and calculates the critical flow velocities with a fully implicit solution. It can therefore calculate the critical flow rate well and can describe the effect of the nozzle geometry (L/D) on the critical flow rate.

  11. MIGFRAC - a code for modelling of radionuclide transport in fracture media

    International Nuclear Information System (INIS)

    Satyanarayana, S.V.M.; Mohankumar, N.; Sasidhar, P.

    2002-05-01

    Radionuclides migrate by diffusion from radioactive waste disposal facilities into fractures present in the host rock, and the transport is aided by circulating groundwater. To model the transport of radionuclides in the charnockite rock formations present at Kalpakkam, a numerical code, MIGFRAC, has been developed at the SHINE Group, IGCAR. The code has been subjected to rigorous tests, and the predicted build-up of radionuclide concentrations has been validated against a test case up to a distance of 100 meters along the fracture. The report discusses the model and code features, and presents results obtained up to a distance of 400 meters. (author)

  12. Modelling of blackout sequence at Atucha-1 using the MARCH3 code

    International Nuclear Information System (INIS)

    Baron, J.; Bastianelli, B.

    1997-01-01

    This paper presents the modelling of a complete blackout at the Atucha-1 NPP as a preliminary phase of a Level II probabilistic safety analysis. The MARCH3 code of the STCP (Source Term Code Package) is used, based on a plant model built in accordance with the particularities of the plant design. The analysis covers all the severe accident phases. The results allow the time sequence of events to be followed, and provide the basis for source term studies. (author). 6 refs., 2 figs

  13. RELAP5/MOD3 code manual. Volume 4, Models and correlations

    International Nuclear Information System (INIS)

    1995-08-01

    The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I presents modeling theory and associated numerical schemes; Volume II details instructions for code application and input data preparation; Volume III presents the results of developmental assessment cases that demonstrate and verify the models used in the code; Volume IV discusses in detail RELAP5 models and correlations; Volume V presents guidelines that have evolved over the past several years through the use of the RELAP5 code; Volume VI discusses the numerical scheme used in RELAP5; and Volume VII presents a collection of independent assessment calculations

  14. An object-oriented framework for magnetic-fusion modeling and analysis codes

    International Nuclear Information System (INIS)

    Cohen, R H; Yang, T Y Brian.

    1999-01-01

    The magnetic-fusion energy (MFE) program, like many other scientific and engineering activities, has a need to efficiently develop complex modeling codes that combine detailed models of components into an integrated model of a device, and it also has a rich supply of legacy code that could provide the component models. There is also growing recognition in many technical fields of the desirability of steerable software: computer programs whose functionality can be changed by the user as they run. This project had as its goals the development of two key pieces of infrastructure that are needed to combine existing code modules, written mainly in Fortran, into flexible, steerable, object-oriented integrated modeling codes for magnetic-fusion applications. These two pieces are (1) a set of tools to facilitate the interfacing of Fortran code with a steerable object-oriented framework (which we have chosen to base on Python, an object-oriented interpreted language), and (2) a skeleton for the integrated modeling code which defines the relationships between the modules. The first of these activities obviously has immediate applicability to a spectrum of projects; the second is more focused on the MFE application, but may be of value as an example for other applications.
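
    The toy skeleton below illustrates the second piece, a steerable framework in which component models share a common interface and can be reordered or swapped at run time; in the actual project the components would be wrapped Fortran routines (for example via f2py), whereas here they are trivial Python stand-ins with placeholder physics.

```python
# Toy "steerable" integrated-modeling skeleton; all component physics is a placeholder.
class Component:
    name = "base"
    def advance(self, state: dict, dt: float) -> None:
        raise NotImplementedError

class EquilibriumSolver(Component):
    name = "equilibrium"
    def advance(self, state, dt):
        state["psi"] = state.get("psi", 1.0) * (1.0 - 0.01 * dt)   # placeholder update

class TransportModel(Component):
    name = "transport"
    def advance(self, state, dt):
        state["T_e"] = state.get("T_e", 2.0) + 0.05 * dt           # placeholder update

class IntegratedModel:
    def __init__(self):
        self.components, self.state = [], {}
    def register(self, comp: Component):
        self.components.append(comp)
    def run(self, dt: float, steps: int):
        for _ in range(steps):
            for comp in self.components:       # the user can reorder or swap components
                comp.advance(self.state, dt)
        return self.state

model = IntegratedModel()
model.register(EquilibriumSolver())
model.register(TransportModel())
print(model.run(dt=0.1, steps=10))
```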

  15. In silico modeling of epigenetic-induced changes in photoreceptor cis-regulatory elements.

    Science.gov (United States)

    Hossain, Reafa A; Dunham, Nicholas R; Enke, Raymond A; Berndsen, Christopher E

    2018-01-01

    DNA methylation is a well-characterized epigenetic repressor of mRNA transcription in many plant and vertebrate systems. However, the mechanism of this repression is not fully understood. The process of transcription is controlled by proteins that regulate recruitment and activity of RNA polymerase by binding to specific cis-regulatory sequences. Cone-rod homeobox (CRX) is a well-characterized mammalian transcription factor that controls photoreceptor cell-specific gene expression. Although much is known about the functions and DNA binding specificity of CRX, little is known about how DNA methylation modulates CRX binding affinity to genomic cis-regulatory elements. We used bisulfite pyrosequencing of human ocular tissues to measure DNA methylation levels of the regulatory regions of RHO, PDE6B, PAX6, and LINE1 retrotransposon repeats. To describe the molecular mechanism of repression, we used molecular modeling to illustrate the effect of DNA methylation on human RHO regulatory sequences. In this study, we demonstrate an inverse correlation between DNA methylation in regulatory regions adjacent to the human RHO and PDE6B genes and their subsequent transcription in human ocular tissues. Docking of CRX to the DNA models shows that CRX interacts with the grooves of these sequences, suggesting changes in groove structure could regulate binding. Molecular dynamics simulations of the RHO promoter and enhancer regions show changes in the flexibility and groove width upon epigenetic modification. Models also demonstrate changes in the local dynamics of CRX binding sites within RHO regulatory sequences which may account for the repression of CRX-dependent transcription. Collectively, these data demonstrate epigenetic regulation of CRX binding sites in human retinal tissue and provide insight into the mechanism of this mode of epigenetic regulation to be tested in future experiments.

  16. Fission Product Transport Models Adopted in REFPAC Code for LOCA Conditions in PWR and WWER NPPS

    International Nuclear Information System (INIS)

    Strupczewski, A.

    2003-01-01

    The report presents assumptions and physical models used for calculations of fission product releases from nuclear reactors, their behavior inside the containment, and leakages to the environment after a large-break loss-of-coolant accident (LB LOCA). They are the basis of the code REFPAC (RElease of Fission Products under Accident Conditions), designed primarily to represent significant physical processes occurring after a LB LOCA. The code describes these processes using three different models. Model 1 corresponds to established US and Russian practice, Model 2 includes all conservative assumptions that are in agreement with the current state of the art, and Model 3 incorporates formulae and parameter values actually used in EU practice. (author)

  17. A predictive coding account of bistable perception - a model-based fMRI study.

    Directory of Open Access Journals (Sweden)

    Veith Weilnhammer

    2017-05-01

    Full Text Available In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together

  18. A predictive coding account of bistable perception - a model-based fMRI study.

    Science.gov (United States)

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together, our current work
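
    A toy numerical caricature of the proposed mechanism is sketched below: residual evidence for the suppressed percept accumulates as a prediction error until the percept switches, with a stabilization parameter damping the accumulation. The parameter values, noise model, and threshold are arbitrary and are not those fitted in the study.

```python
# Toy accumulation-to-threshold caricature of prediction-error-driven perceptual switches.
import numpy as np

rng = np.random.default_rng(0)
stabilization = 0.9          # analogue of the parameter governing perceptual stabilization
dt, T = 0.01, 60.0
percept, error = +1, 0.0     # current percept (+1 / -1) and accumulated prediction error
switch_times, t = [], 0.0

while t < T:
    # the constant ambiguous stimulus always supplies some evidence for the suppressed
    # percept; stabilization damps how quickly that prediction error accumulates
    error += dt * (1.0 - stabilization) + 0.05 * np.sqrt(dt) * rng.standard_normal()
    error = max(error, 0.0)
    if error > 1.0:                      # prediction error overcomes the current percept
        percept, error = -percept, 0.0
        switch_times.append(t)
    t += dt

intervals = np.diff(switch_times)
print(f"{len(switch_times)} perceptual switches, mean dominance {intervals.mean():.1f} s")
```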

  19. Application of the Self-Regulatory Model in Dealing with Encopresis.

    Science.gov (United States)

    Grimes, Lynn

    1983-01-01

    Behavioral techniques along with a self-regulation methodology were used successfully to decrease encopretic behaviors in a 9-year-old male. Kanfer's self-regulatory model appears to be generalizable to any child with the cognitive ability to understand that he or she has a problem and to make decisions about treatment. (Author/PN)

  20. A primer on thermodynamic-based models for deciphering transcriptional regulatory logic.

    Science.gov (United States)

    Dresch, Jacqueline M; Richards, Megan; Ay, Ahmet

    2013-09-01

    A rigorous analysis of transcriptional regulation at the DNA level is crucial to the understanding of many biological systems. Mathematical modeling has offered researchers a new approach to understanding this central process. In particular, thermodynamic-based modeling represents the most biophysically informed approach aimed at connecting DNA level regulatory sequences to the expression of specific genes. The goal of this review is to give biologists a thorough description of the steps involved in building, analyzing, and implementing a thermodynamic-based model of transcriptional regulation. The data requirements for this modeling approach are described, the derivation for a specific regulatory region is shown, and the challenges and future directions for the quantitative modeling of gene regulation are discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
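
    The snippet below sketches the core of such a thermodynamic model for a single enhancer with one activator and one repressor site: each occupancy state receives a Boltzmann statistical weight and expression is taken as the weight fraction of "productive" states. The concentrations, dissociation constants, and productivity rule are illustrative assumptions, not parameters from the review.

```python
# Thermodynamic (statistical-weight) sketch of transcriptional regulatory logic.
import itertools

# dissociation constants Kd and nuclear concentrations, both in nM (illustrative values)
sites = {"activator": {"Kd": 5.0, "conc": 50.0},
         "repressor": {"Kd": 10.0, "conc": 20.0}}

def weight(name, bound):
    # statistical weight of a site: 1 if empty, [TF]/Kd if occupied
    return sites[name]["conc"] / sites[name]["Kd"] if bound else 1.0

total = productive = 0.0
for act, rep in itertools.product([0, 1], repeat=2):    # enumerate all occupancy states
    w = weight("activator", act) * weight("repressor", rep)
    total += w
    if act and not rep:              # simple rule: expression only when the activator
        productive += w              # is bound and the repressor is not
print("predicted relative expression:", productive / total)
```

    Sequence- or methylation-dependent effects would enter such a sketch through the site-specific dissociation constants.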

  1. Boolean Dynamic Modeling Approaches to Study Plant Gene Regulatory Networks: Integration, Validation, and Prediction.

    Science.gov (United States)

    Velderraín, José Dávila; Martínez-García, Juan Carlos; Álvarez-Buylla, Elena R

    2017-01-01

    Mathematical models based on dynamical systems theory are well-suited tools for the integration of available molecular experimental data into coherent frameworks in order to propose hypotheses about the cooperative regulatory mechanisms driving developmental processes. Computational analysis of the proposed models using well-established methods enables testing the hypotheses by contrasting predictions with observations. Within such framework, Boolean gene regulatory network dynamical models have been extensively used in modeling plant development. Boolean models are simple and intuitively appealing, ideal tools for collaborative efforts between theorists and experimentalists. In this chapter we present protocols used in our group for the study of diverse plant developmental processes. We focus on conceptual clarity and practical implementation, providing directions to the corresponding technical literature.
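
    The sketch below shows the basic machinery of such a model: a synchronous Boolean update rule and an exhaustive search for attractors. The three-gene network and its logic are invented for illustration; published plant GRN models encode experimentally supported interactions.

```python
# Synchronous Boolean gene regulatory network with exhaustive attractor search.
from itertools import product

def update(state):
    a, b, c = state
    return (int(not c),        # gene A is repressed by C
            int(a),            # gene B is activated by A
            int(a and b))      # gene C requires both A and B

def attractors():
    found = set()
    for start in product([0, 1], repeat=3):    # exhaustive search over all 8 states
        seen, s = [], start
        while s not in seen:
            seen.append(s)
            s = update(s)
        cycle = tuple(seen[seen.index(s):])    # the repeating part is an attractor
        # store a canonical rotation so the same cycle is not counted twice
        found.add(min(cycle[i:] + cycle[:i] for i in range(len(cycle))))
    return found

print(attractors())
```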

  2. Evaluation of Urban air quality models for regulatory use: Refinement of an approach

    International Nuclear Information System (INIS)

    Downton, M.W.; Dennis, R.L.

    1985-01-01

    Statistical measures for evaluating the performance of urban air quality models have recently been strongly recommended by several investigators. Problems that were encountered in the use of recommended performance measures in an evaluation of three versions of an urban photochemical model are described. The example demonstrates the importance of designing an evaluation to take into account the way in which the model will be used in regulatory practice, and then choosing performance measures on the basis of that design. The evaluation illustrates some limitations and possible pitfalls in the use and interpretation of statistical measures of model performance. Drawing on this experience, a procedure for evaluation of air quality models for regulatory use is suggested

  3. Relativistic modeling capabilities in PERSEUS extended MHD simulation code for HED plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Hamlin, Nathaniel D., E-mail: nh322@cornell.edu [438 Rhodes Hall, Cornell University, Ithaca, NY, 14853 (United States); Seyler, Charles E., E-mail: ces7@cornell.edu [Cornell University, Ithaca, NY, 14853 (United States)

    2014-12-15

    We discuss the incorporation of relativistic modeling capabilities into the PERSEUS extended MHD simulation code for high-energy-density (HED) plasmas, and present the latest hybrid X-pinch simulation results. The use of fully relativistic equations enables the model to remain self-consistent in simulations of such relativistic phenomena as X-pinches and laser-plasma interactions. By suitable formulation of the relativistic generalized Ohm’s law as an evolution equation, we have reduced the recovery of primitive variables, a major technical challenge in relativistic codes, to a straightforward algebraic computation. Our code recovers expected results in the non-relativistic limit, and reveals new physics in the modeling of electron beam acceleration following an X-pinch. Through the use of a relaxation scheme, relativistic PERSEUS is able to handle nine orders of magnitude in density variation, making it the first fluid code, to our knowledge, that can simulate relativistic HED plasmas.

  4. An Eulerian transport-dispersion model of passive effluents: the Difeul code

    International Nuclear Information System (INIS)

    Wendum, D.

    1994-11-01

    The R and D division has decided to develop an Eulerian diffusion model that is easy to adapt to meteorological data coming from different sources: for instance the ARPEGE code of Meteo-France or the MERCURE code of EDF. This requirement allows the code to be applied to independent cases: a posteriori studies of accidental releases from nuclear power plants at large or medium scale, and simulation of urban pollution episodes within the ''Reactive Atmospheric Flows'' research project. For simplicity, the numerical formulation of our code is the same as the one used in Meteo-France's MEDIA model. The numerical tests presented in this report show the good performance of those schemes. In order to illustrate the method by a concrete example, a fictitious release from Saint-Laurent has been simulated at national scale: the results of this simulation agree quite well with those of the trajectory model DIFTRA. (author). 6 figs., 4 tabs.

  5. UCODE, a computer code for universal inverse modeling

    Science.gov (United States)

    Poeter, E.P.; Hill, M.C.

    1999-01-01

    This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced; simulated equivalent values are calculated using values that appear in the application model output files, and can be manipulated with additive and multiplicative functions if necessary. Prior, or direct, information on estimated parameters also can be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes, and (4) quantifying the uncertainty of model simulated values. UCODE is intended for use on any computer operating
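
    The sketch below reproduces the mathematical core that UCODE automates, weighted nonlinear regression by Gauss-Newton iteration with forward-difference sensitivities, for an invented analytic "application model"; UCODE itself drives arbitrary external models through their text input and output files.

```python
# Weighted Gauss-Newton regression with forward-difference sensitivities (illustrative only).
import numpy as np

t_obs = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
h_obs = np.array([0.63, 0.98, 1.41, 1.89, 2.40])   # synthetic observations (roughly a=0.8, b=1.2)
w = np.full_like(h_obs, 1.0 / 0.05**2)             # weights = 1 / observation variance

def model(p, t):
    a, b = p
    return a * np.log(1.0 + b * t)                 # stand-in "application model"

def jacobian(p, t, eps=1e-6):
    base = model(p, t)
    J = np.empty((t.size, p.size))
    for j in range(p.size):                        # forward-difference sensitivities
        dp = np.array(p, float)
        dp[j] += eps * max(abs(p[j]), 1.0)
        J[:, j] = (model(dp, t) - base) / (dp[j] - p[j])
    return J

p = np.array([0.7, 1.0])                           # starting parameter values
for _ in range(20):
    r = h_obs - model(p, t_obs)
    J = jacobian(p, t_obs)
    step = np.linalg.solve(J.T @ (w[:, None] * J), J.T @ (w * r))  # weighted normal equations
    p += step
    if np.linalg.norm(step) < 1e-10:
        break
print("estimated parameters:", p)
```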

  6. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  7. Bayesian error analysis model for reconstructing transcriptional regulatory networks

    OpenAIRE

    Sun, Ning; Carroll, Raymond J.; Zhao, Hongyu

    2006-01-01

    Transcription regulation is a fundamental biological process, and extensive efforts have been made to dissect its mechanisms through direct biological experiments and regulation modeling based on physical–chemical principles and mathematical formulations. Despite these efforts, transcription regulation is yet not well understood because of its complexity and limitations in biological experiments. Recent advances in high throughput technologies have provided substantial amounts and diverse typ...

  8. Description and assessment of structural and temperature models in the FRAP-T6 code

    International Nuclear Information System (INIS)

    Siefken, L.J.

    1983-01-01

    The FRAP-T6 code was developed at the Idaho National Engineering Laboratory (INEL) for the purpose of calculating the transient performance of light water reactor fuel rods during reactor transients ranging from mild operational transients to severe hypothetical loss-of-coolant accidents. An important application of the FRAP-T6 code is to calculate the structural performance of fuel rod cladding. The capabilities of the FRAP-T6 code are assessed by comparisons of code calculations with the measurements of several hundred in-pile experiments on fuel rods. The results of the assessments show that the code accurately and efficiently models the structural and thermal response of fuel rods

  9. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    Science.gov (United States)

    Blyth, Taylor S.

    The research described in this PhD thesis contributes to the development of efficient methods for utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four corresponding basic building processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.

  10. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    Energy Technology Data Exchange (ETDEWEB)

    Blyth, Taylor S. [Pennsylvania State Univ., University Park, PA (United States); Avramova, Maria [North Carolina State Univ., Raleigh, NC (United States)

    2017-04-01

    The research described in this PhD thesis contributes to the development of efficient methods for utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four corresponding basic building processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.

  11. Stimulus-dependent maximum entropy models of neural population codes.

    Directory of Open Access Journals (Sweden)

    Einat Granot-Atedgi

    Full Text Available Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
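
    The snippet below sketches the pairwise maximum entropy machinery for a small population by brute-force enumeration of codewords; the SDME model additionally makes the single-cell fields stimulus dependent, and the fields and couplings used here are invented numbers.

```python
# Pairwise maximum entropy (Ising-like) model over all codewords of a small population.
import itertools
import numpy as np

rng = np.random.default_rng(1)
N = 5
h = rng.normal(-1.0, 0.5, size=N)                  # single-cell biases (illustrative)
J = rng.normal(0.0, 0.2, size=(N, N))
J = np.triu(J, 1)
J = J + J.T                                        # symmetric couplings, zero diagonal

words = np.array(list(itertools.product([0, 1], repeat=N)))        # all 2^N codewords
energies = -(words @ h) - 0.5 * np.einsum("ki,ij,kj->k", words, J, words)
p = np.exp(-energies)
p /= p.sum()                                       # Boltzmann distribution over codewords

print("mean firing probabilities:", (p[:, None] * words).sum(axis=0))
print("most likely codeword:", words[np.argmax(p)])
```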

  12. Computer-modeling codes to improve exploration nuclear-logging methods. National Uranium Resource Evaluation

    International Nuclear Information System (INIS)

    Wilson, R.D.; Price, R.K.; Kosanke, K.L.

    1983-03-01

    As part of the Department of Energy's National Uranium Resource Evaluation (NURE) project's Technology Development effort, a number of computer codes and accompanying data bases were assembled for use in modeling responses of nuclear borehole logging sondes. The logging methods include fission neutron, active and passive gamma-ray, and gamma-gamma. These CDC-compatible computer codes and data bases are available on magnetic tape from the DOE Technical Library at its Grand Junction Area Office. Some of the computer codes are standard radiation-transport programs that have been available to the radiation shielding community for several years. Other codes were specifically written to model the response of borehole radiation detectors or are specialized borehole modeling versions of existing Monte Carlo transport programs. Results from several radiation modeling studies are available as two large data bases (neutron and gamma-ray). These data bases are accompanied by appropriate processing programs that permit the user to model a wide range of borehole and formation-parameter combinations for fission-neutron, neutron-activation, and gamma-gamma logs. The first part of this report consists of a brief abstract for each code or data base. The abstract gives the code name and title, short description, auxiliary requirements, typical running time (CDC 6600), and a list of references. The next section gives format specifications and/or a directory for the tapes. The final section of the report presents listings for programs used to convert data bases between machine floating-point and EBCDIC

  13. Modeling radiation belt dynamics using a 3-D layer method code

    Science.gov (United States)

    Wang, C.; Ma, Q.; Tao, X.; Zhang, Y.; Teng, S.; Albert, J. M.; Chan, A. A.; Li, W.; Ni, B.; Lu, Q.; Wang, S.

    2017-08-01

    A new 3-D diffusion code using a recently published layer method has been developed to analyze radiation belt electron dynamics. The code guarantees the positivity of the solution even when mixed diffusion terms are included. Unlike most of the previous codes, our 3-D code is developed directly in equatorial pitch angle (α0), momentum (p), and L shell coordinates; this eliminates the need to transform back and forth between (α0,p) coordinates and adiabatic invariant coordinates. Using (α0,p,L) is also convenient for direct comparison with satellite data. The new code has been validated by various numerical tests, and we apply the 3-D code to model the rapid electron flux enhancement following the geomagnetic storm on 17 March 2013, which is one of the Geospace Environment Modeling Focus Group challenge events. An event-specific global chorus wave model, an AL-dependent statistical plasmaspheric hiss wave model, and a recently published radial diffusion coefficient formula from Time History of Events and Macroscale Interactions during Substorms (THEMIS) statistics are used. The simulation results show good agreement with satellite observations, in general, supporting the scenario that the rapid enhancement of radiation belt electron flux for this event results from an increased level of the seed population by radial diffusion, with subsequent acceleration by chorus waves. Our results prove that the layer method can be readily used to model global radiation belt dynamics in three dimensions.

  14. Development of the three dimensional flow model in the SPACE code

    International Nuclear Information System (INIS)

    Oh, Myung Taek; Park, Chan Eok; Kim, Shin Whan

    2014-01-01

    SPACE (Safety and Performance Analysis CodE) is a nuclear plant safety analysis code, which has been developed in the Republic of Korea through a joint research between the Korean nuclear industry and research institutes. The SPACE code has been developed with multi-dimensional capabilities as a requirement of the next generation safety code. It allows users to more accurately model the multi-dimensional flow behavior that can be exhibited in components such as the core, lower plenum, upper plenum and downcomer region. Based on generalized models, the code can model any configuration or type of fluid system. All the geometric quantities of mesh are described in terms of cell volume, centroid, face area, and face center, so that it can naturally represent not only the one dimensional (1D) or three dimensional (3D) Cartesian system, but also the cylindrical mesh system. It is possible to simulate large and complex domains by modelling the complex parts with a 3D approach and the rest of the system with a 1D approach. By 1D/3D co-simulation, more realistic conditions and component models can be obtained, providing a deeper understanding of complex systems, and it is expected to overcome the shortcomings of 1D system codes. (author)

  15. A new computer code for discrete fracture network modelling

    Science.gov (United States)

    Xu, Chaoshui; Dowd, Peter

    2010-03-01

    The authors describe a comprehensive software package for two- and three-dimensional stochastic rock fracture simulation using marked point processes. Fracture locations can be modelled by a Poisson, a non-homogeneous, a cluster or a Cox point process; fracture geometries and properties are modelled by their respective probability distributions. Virtual sampling tools such as plane, window and scanline sampling are included in the software together with a comprehensive set of statistical tools including histogram analysis, probability plots, rose diagrams and hemispherical projections. The paper describes in detail the theoretical basis of the implementation and provides a case study in rock fracture modelling to demonstrate the application of the software.
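
    The sketch below simulates the simplest (homogeneous Poisson) case of such a marked point process in two dimensions, together with a crude scanline sample; the intensity and mark distributions are illustrative choices only, not values from the paper.

```python
# Marked Poisson point process sketch: 2-D fracture traces with random centres, orientations, lengths.
import numpy as np

rng = np.random.default_rng(42)
area, intensity = 100.0, 0.5                 # 10 x 10 m window, fractures per m^2

n = rng.poisson(intensity * area)            # Poisson number of fracture centres
cx, cy = rng.uniform(0, 10, n), rng.uniform(0, 10, n)
theta = rng.vonmises(mu=np.pi / 3, kappa=4.0, size=n)    # preferred orientation
length = rng.lognormal(mean=0.0, sigma=0.5, size=n)      # trace-length distribution

# end points of each trace (the "marks" attached to each centre)
x1, y1 = cx - 0.5 * length * np.cos(theta), cy - 0.5 * length * np.sin(theta)
x2, y2 = cx + 0.5 * length * np.cos(theta), cy + 0.5 * length * np.sin(theta)

print(f"simulated {n} fractures, mean trace length {length.mean():.2f} m")
# virtual scanline sample: count traces crossing the line y = 5 m
crosses = np.sum((np.minimum(y1, y2) <= 5.0) & (np.maximum(y1, y2) >= 5.0))
print("scanline intersections at y = 5 m:", crosses)
```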

  16. The European Model Company Act: How to choose an efficient regulatory approach?

    DEFF Research Database (Denmark)

    Cleff, Evelyne Beatrix

    Ongoing work on the organization of company laws reflects an interesting paradigm shift. Whereas company law was previously focused primarily on preventing abuse, there is now a trend towards legislation that promotes commerce and satisfies the needs of business. This means that the goal of economic efficiency is having an increasing influence on the framing of company legislation, such as the choice between mandatory and default rules. This article introduces the project "European Company Law and the choice of Regulatory Method", which is carried out in collaboration with the European Model Company Act Group. The project aims to analyze the appropriateness of the different regulatory methods that are available to achieve the regulatory goals.

  17. Canadian and United States regulatory models compared: doses from atmospheric pathways

    International Nuclear Information System (INIS)

    Peterson, S-R.

    1997-01-01

    CANDU reactors sold offshore are licensed primarily to satisfy Canadian Regulations. For radioactive emissions during normal operation, the Canadian Standards Association's CAN/CSA-N288.1-M87 is used. This standard provides guidelines and methodologies for calculating a rate of radionuclide release that exposes a member of the public to the annual dose limit. To calculate doses from air concentrations, either CSA-N288.1 or the Regulatory Guide 1.109 of the United States Nuclear Regulatory Commission, which has already been used to license light-water reactors in these countries, may be used. When dose predictions from CSA-N288.1 are compared with those from the U.S. Regulatory Guides, the differences in projected doses raise questions about the predictions. This report explains differences between the two models for ingestion, inhalation, external and immersion doses

  18. Comparing the line broadened quasilinear model to Vlasov code

    International Nuclear Information System (INIS)

    Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.

    2014-01-01

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of the Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009) and M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve more accuracy compared to the results of the Vlasov solver, both with regard to a mode amplitude's time evolution to a saturated state and its final steady state amplitude in the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to differ significantly from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations

  19. Comparing the line broadened quasilinear model to Vlasov code

    Energy Technology Data Exchange (ETDEWEB)

    Ghantous, K. [Laboratoire de Physique des Plasmas, Ecole Polytechnique, 91128 Palaiseau Cedex (France); Princeton Plasma Physics Laboratory, P.O. Box 451, Princeton, New Jersey 08543-0451 (United States); Berk, H. L. [Institute for Fusion Studies, University of Texas, 2100 San Jacinto Blvd, Austin, Texas 78712-1047 (United States); Gorelenkov, N. N. [Princeton Plasma Physics Laboratory, P.O. Box 451, Princeton, New Jersey 08543-0451 (United States)

    2014-03-15

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of the Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009) and M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve more accuracy compared to the results of the Vlasov solver, both with regard to a mode amplitude's time evolution to a saturated state and its final steady state amplitude in the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to differ significantly from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.

  20. Comparing the line broadened quasilinear model to Vlasov code

    Science.gov (United States)

    Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.

    2014-03-01

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of the Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009) and M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve more accuracy compared to the results of the Vlasov solver, both with regard to a mode amplitude's time evolution to a saturated state and its final steady state amplitude in the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to differ significantly from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.

  1. The aeroelastic code HawC - model and comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Thirstrup Petersen, J. [Risoe National Lab., The Test Station for Wind Turbines, Roskilde (Denmark)

    1996-09-01

    A general aeroelastic finite element model for simulation of the dynamic response of horizontal axis wind turbines is presented. The model has been developed with the aim to establish an effective research tool, which can support the general investigation of wind turbine dynamics and research in specific areas of wind turbine modelling. The model concentrates on the correct representation of the inertia forces in a form, which makes it possible to recognize and isolate effects originating from specific degrees of freedom. The turbine structure is divided into substructures, and nonlinear kinematic terms are retained in the equations of motion. Moderate geometric nonlinearities are allowed for. Gravity and a full wind field including 3-dimensional 3-component turbulence are included in the loading. Simulation results for a typical three bladed, stall regulated wind turbine are presented and compared with measurements. (au)

  2. Improved gap conductance model for the TRAC code

    International Nuclear Information System (INIS)

    Hatch, S.W.; Mandell, D.A.

    1980-01-01

    The purpose of the present work, as indicated earlier, is to improve the constant fuel-clad spacing treatment in TRAC-P1A without significantly increasing computer costs. The simple model proposed may not be accurate enough for some cases, but for the initial calculations made, the DELTAR model improves the predictions over the constant Δr results of TRAC-P1A, and the additional computing costs are negligible

  3. Code modernization and modularization of APEX and SWAT watershed simulation models

    Science.gov (United States)

    SWAT (Soil and Water Assessment Tool) and APEX (Agricultural Policy / Environmental eXtender) are respectively large and small watershed simulation models derived from EPIC (Environmental Policy Integrated Climate), a field-scale agroecology simulation model. All three models are coded in FORTRAN an...

  4. An Organismal Model for Gene Regulatory Networks in the Gut-Associated Immune Response

    Directory of Open Access Journals (Sweden)

    Katherine M. Buckley

    2017-10-01

    The gut epithelium is an ancient site of complex communication between the animal immune system and the microbial world. While elements of self-non-self receptors and effector mechanisms differ greatly among animal phyla, some aspects of recognition, regulation, and response are broadly conserved. A gene regulatory network (GRN) approach provides a means to investigate the nature of this conservation and divergence even as more peripheral functional details remain incompletely understood. The sea urchin embryo is an unparalleled experimental model for detangling the GRNs that govern embryonic development. By applying this theoretical framework to the free swimming, feeding larval stage of the purple sea urchin, it is possible to delineate the conserved regulatory circuitry that regulates the gut-associated immune response. This model provides a morphologically simple system in which to efficiently unravel regulatory connections that are phylogenetically relevant to immunity in vertebrates. Here, we review the organism-wide cellular and transcriptional immune response of the sea urchin larva. A large set of transcription factors and signal systems, including epithelial expression of interleukin 17 (IL17), are important mediators in the activation of the early gut-associated response. Many of these have homologs that are active in vertebrate immunity, while others are ancient in animals but absent in vertebrates or specific to echinoderms. This larval model provides a means to experimentally characterize immune function encoded in the sea urchin genome and the regulatory interconnections that control immune response and resolution across the tissues of the organism.

  5. TIGER: Toolbox for integrating genome-scale metabolic models, expression data, and transcriptional regulatory networks

    Directory of Open Access Journals (Sweden)

    Jensen Paul A

    2011-09-01

    Background: Several methods have been developed for analyzing genome-scale models of metabolism and transcriptional regulation. Many of these methods, such as Flux Balance Analysis, use constrained optimization to predict relationships between metabolic flux and the genes that encode and regulate enzyme activity. Recently, mixed integer programming has been used to encode these gene-protein-reaction (GPR) relationships into a single optimization problem, but these techniques are often of limited generality and lack a tool for automating the conversion of rules to a coupled regulatory/metabolic model. Results: We present TIGER, a Toolbox for Integrating Genome-scale Metabolism, Expression, and Regulation. TIGER converts a series of generalized, Boolean or multilevel rules into a set of mixed integer inequalities. The package also includes implementations of existing algorithms to integrate high-throughput expression data with genome-scale models of metabolism and transcriptional regulation. We demonstrate how TIGER automates the coupling of a genome-scale metabolic model with GPR logic and models of transcriptional regulation, thereby serving as a platform for algorithm development and large-scale metabolic analysis. Additionally, we demonstrate how TIGER's algorithms can be used to identify inconsistencies and improve existing models of transcriptional regulation, with examples from the reconstructed transcriptional regulatory network of Saccharomyces cerevisiae. Conclusion: The TIGER package provides a consistent platform for algorithm development and for extending existing genome-scale metabolic models with regulatory networks and high-throughput data.
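
    The core rule-to-inequality conversion can be illustrated with a minimal sketch. The actual toolbox is MATLAB-based; the snippet below only shows the standard linearization of AND/OR rules over binary variables, as a generic stand-in rather than TIGER's exact formulation.

        # Sketch of turning a Boolean gene-protein-reaction rule into linear
        # inequalities over binary variables, the core idea behind rule-to-MILP
        # conversion (the actual TIGER toolbox is MATLAB-based).
        def and_constraints(y, a, b):
            # y = a AND b  ->  y <= a, y <= b, y >= a + b - 1
            return [f"{y} <= {a}", f"{y} <= {b}", f"{y} >= {a} + {b} - 1"]

        def or_constraints(y, a, b):
            # y = a OR b   ->  y >= a, y >= b, y <= a + b
            return [f"{y} >= {a}", f"{y} >= {b}", f"{y} <= {a} + {b}"]

        # Rule: reaction R1 is active if (geneA AND geneB) OR geneC
        constraints = (
            and_constraints("t1", "geneA", "geneB")
            + or_constraints("R1", "t1", "geneC")
        )
        for c in constraints:
            print(c)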

  6. Review of the SCDAP/RELAP5/MOD3.1 code structure and core T/H model before core damage

    International Nuclear Information System (INIS)

    Kim, See Darl; Kim, Dong Ha

    1998-04-01

    The SCDAP/RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during a severe accident. The code is being developed at the INEL under the primary sponsorship of the Office of Nuclear Regulatory Research of the U.S. NRC. At the current time, the SCDAP/RELAP5/MOD3.1 code is the result of merging the RELAP5/MOD3 and SCDAP models. The code models the coupled behavior of the reactor coolant system, the core, and fission product release during a severe accident transient, as well as large and small break loss-of-coolant accidents and operational transients such as anticipated transient without SCRAM, loss of offsite power, loss of feedwater, and loss of flow. The major purpose of this report is to provide information about the characteristics of the SCDAP/RELAP5/MOD3.1 core T/H models for an integrated severe accident computer code being developed under the mid/long-term project. This report analyzes the overall code structure, which consists of the input processor, transient controller, and plot file handler. The basic governing equations to simulate the thermohydraulics of the primary system are also described. As the focus is currently concentrated on the core, core nodalization parameters of the intact geometry and the phenomenological subroutines for the damaged core are summarized for future usage. In addition, the numerical approach for the heat conduction model is investigated along with the heat convection model. These studies could provide a foundation for input preparation and model improvement. (author). 6 refs., 3 tabs., 4 figs

  7. PWR hot leg natural circulation modeling with MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jae Hong; Lee, Jong In [Korea Institute of Nuclear Safety, Taejon (Korea, Republic of)

    1997-12-31

    Previous MELCOR and SCDAP/RELAP5 nodalizations for simulating the counter-current natural circulation behavior of vapor flow within the RCS hot legs and SG U-tubes during core damage progression cannot be applied to the steady-state, water-filled conditions of the initial period of accident progression, because of the artificially high loss coefficients in the hot legs and SG U-tubes that were chosen from the results of COMMIX calculations and the Westinghouse natural circulation experiments in a 1/7-scale facility simulating steam natural circulation behavior in the vessel. A hot leg circulation modeling was therefore developed which can be used both for the liquid flow condition at steady state and for the vapor flow condition in the later period of in-vessel core damage. For this, the drag forces resulting from the momentum exchange between the two vapor streams in the hot leg were modeled as a pressure drop using a pump model. This hot leg natural circulation modeling of MELCOR was able to reproduce mass flow rates similar to those predicted by previous models. 6 refs., 2 figs. (Author)

  8. PWR hot leg natural circulation modeling with MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jae Hong; Lee, Jong In [Korea Institute of Nuclear Safety, Taejon (Korea, Republic of)

    1998-12-31

    Previous MELCOR and SCDAP/RELAP5 nodalizations for simulating the counter-current natural circulation behavior of vapor flow within the RCS hot legs and SG U-tubes during core damage progression cannot be applied to the steady-state, water-filled conditions of the initial period of accident progression, because of the artificially high loss coefficients in the hot legs and SG U-tubes that were chosen from the results of COMMIX calculations and the Westinghouse natural circulation experiments in a 1/7-scale facility simulating steam natural circulation behavior in the vessel. A hot leg circulation modeling was therefore developed which can be used both for the liquid flow condition at steady state and for the vapor flow condition in the later period of in-vessel core damage. For this, the drag forces resulting from the momentum exchange between the two vapor streams in the hot leg were modeled as a pressure drop using a pump model. This hot leg natural circulation modeling of MELCOR was able to reproduce mass flow rates similar to those predicted by previous models. 6 refs., 2 figs. (Author)

  9. Sizing and scaling requirements of a large-scale physical model for code validation

    International Nuclear Information System (INIS)

    Khaleel, R.; Legore, T.

    1990-01-01

    Model validation is an important consideration in application of a code for performance assessment and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale, hydrology physical model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for sizing requirements are presented for a porous medium model in which the media properties are spatially uncorrelated

  10. Once-through CANDU reactor models for the ORIGEN2 computer code

    International Nuclear Information System (INIS)

    Croff, A.G.; Bjerke, M.A.

    1980-11-01

    Reactor physics calculations have led to the development of two CANDU reactor models for the ORIGEN2 computer code. The model CANDUs are based on (1) the existing once-through fuel cycle with feed comprised of natural uranium and (2) a projected slightly enriched (1.2 wt% 235U) fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models, as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST, are given

  11. PEAR code review

    International Nuclear Information System (INIS)

    De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.

    1997-07-01

    As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)

  12. The improvement of the heat transfer model for sodium-water reaction jet code

    International Nuclear Information System (INIS)

    Hashiguchi, Yoshirou; Yamamoto, Hajime; Kamoshida, Norio; Murata, Shuuichi

    2001-02-01

    To confirm a reasonable DBL (Design Base Leak) for the steam generator (SG), the phenomena of the sodium-water reaction (SWR) in an actual steam generator must be evaluated realistically. The heat transfer model of the sodium-water reaction (SWR) jet code (LEAP-JET ver.1.40) was improved, and the code was applied to the water injection tests to confirm its validity. In the improved code, a heat transfer model between the inside fluid and the tube wall was introduced in place of the prior model, which was a heat capacity model lumping together the heat capacities of the tube wall and the inside fluid. The fluid inside the heat exchange tube can now be treated as either water or sodium, and typical heat transfer equations used in SG design were also introduced in the new heat transfer model. Further work was carried out to improve the stability of the calculation over long calculation times. Test calculations with the improved code (LEAP-JET ver.1.50) were carried out for the conditions of the SWAT-IR·Run-HT-2 test. It was confirmed that the calculated SWR jet behavior and the influence of the heat transfer model on the results were reasonable. The user's manual of the improved code (LEAP-JET ver.1.50) was also revised, with an additional I/O manual and explanations of the heat transfer model and the new variable names. (author)

  13. Numerical and modeling techniques used in the EPIC code

    International Nuclear Information System (INIS)

    Pizzica, P.A.; Abramson, P.B.

    1977-01-01

    EPIC models fuel and coolant motion which result from internal fuel pin pressure (from fission gas or fuel vapor) and/or from the generation of sodium vapor pressures in the coolant channel subsequent to pin failure in an LMFBR. The modeling includes the ejection of molten fuel from the pin into a coolant channel with any amount of voiding through a clad rip which may be of any length or which may expand with time. One-dimensional Eulerian hydrodynamics is used to model both the motion of fuel and fission gas inside a molten fuel cavity and the mixture of two-phase sodium and fission gas in the channel. Motion of molten fuel particles in the coolant channel is tracked with a particle-in-cell technique
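
    A toy illustration of the particle-in-cell idea, in which fuel particles are advected with the velocity of the Eulerian cell they occupy, is sketched below; the grid, velocities, and time step are placeholders, and the hydrodynamics itself is not modeled.

        # Toy particle-in-cell tracking of molten fuel particles in a 1-D channel:
        # particles are moved with the velocity of the Eulerian cell they occupy.
        # Grid, velocities and time step are illustrative, not EPIC's models.
        import numpy as np

        nz, dz, dt = 20, 0.05, 1e-3            # cells, cell height (m), step (s)
        u_cell = np.linspace(1.0, 3.0, nz)     # two-phase mixture velocity per cell (m/s)
        z_p = np.array([0.12, 0.13, 0.31])     # particle positions (m)

        for _ in range(100):
            idx = np.clip((z_p / dz).astype(int), 0, nz - 1)   # locate host cell
            z_p = z_p + u_cell[idx] * dt                       # advect with cell velocity
            z_p = np.clip(z_p, 0.0, nz * dz)                   # keep inside channel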

  14. Physical model and calculation code for fuel coolant interactions

    International Nuclear Information System (INIS)

    Goldammer, H.; Kottowski, H.

    1976-01-01

    A physical model is proposed to describe fuel coolant interactions in shock-tube geometry. According to the experimental results, an interaction model which divides each cycle into three phases is proposed. The first phase is the fuel-coolant contact, the second one is the ejection and re-entry of the coolant, and the third phase is the impact and fragmentation. The physical background of these phases is illustrated in the first part of this paper. Mathematical expressions of the model are presented in the second part. A principal feature of the computational method is the consistent application of the Fourier equation throughout the whole interaction process. The results of some calculations, performed for different conditions, are compiled in the attached figures. (Aoki, K.)
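
    As a minimal illustration of applying the Fourier (heat conduction) equation through the interaction, the sketch below performs explicit one-dimensional conduction steps across a fuel-coolant contact layer; the material properties and geometry are assumed for illustration and are not taken from the paper.

        # Minimal explicit step of the 1-D Fourier heat conduction equation,
        # dT/dt = alpha * d2T/dx2, as a stand-in for the conduction treatment
        # applied through all interaction phases. Properties and grid are
        # illustrative only.
        import numpy as np

        def conduction_step(T, alpha, dx, dt):
            T_new = T.copy()
            T_new[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
            return T_new                           # fixed-temperature boundaries

        x = np.linspace(0.0, 0.01, 101)            # 1 cm fuel/coolant contact layer
        T = np.where(x < 0.005, 3000.0, 900.0)     # hot fuel against liquid coolant (K)
        alpha, dx = 1e-6, x[1] - x[0]              # thermal diffusivity (m^2/s)
        dt = 0.4 * dx**2 / alpha                   # stable explicit time step
        for _ in range(500):
            T = conduction_step(T, alpha, dx, dt)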

  15. Development and validation of the ENIGMA code for MOX fuel performance modelling

    International Nuclear Information System (INIS)

    Palmer, I.; Rossiter, G.; White, R.J.

    2000-01-01

    The ENIGMA fuel performance code has been under development in the UK since the mid-1980s, with contributions made by both the fuel vendor (BNFL) and the utility (British Energy). In recent years it has become the principal code for UO2 fuel licensing for both PWR and AGR reactor systems in the UK and has also been used by BNFL in support of overseas UO2 and MOX fuel business. A significant new programme of work has recently been initiated by BNFL to further develop the code specifically for MOX fuel application. Model development is proceeding hand in hand with a major programme of MOX fuel testing and PIE studies, with the objective of producing a fuel modelling code suitable for mechanistic analysis, as well as for licensing applications. This paper gives an overview of the model developments being undertaken and of the experimental data being used to underpin and to validate the code. The paper provides a summary of the code development programme together with specific examples of new models produced. (author)

  16. Applicability evaluation on the conservative metal-water reaction(MWR) model implemented into the SPACE code

    International Nuclear Information System (INIS)

    Lee, Suk Ho; You, Sung Chang; Kim, Han Gon

    2011-01-01

    The SBLOCA (Small Break Loss-of-Coolant Accident) evaluation methodology for the APR1400 (Advanced Power Reactor 1400) is under development using the SPACE code. The goal of this development is to set up a conservative evaluation methodology in accordance with Appendix K of 10CFR50 by the end of 2012. In order to develop the Appendix K version of the SPACE code, the code is being modified by implementing the required evaluation models. The conservative models required in the SPACE code are the metal-water reaction (MWR) model, the critical flow model, the Critical Heat Flux (CHF) model and the post-CHF model. At present, the integration of the models to generate the Appendix K version of SPACE is in its preliminary stage. Among them, the conservative MWR model and its code applicability are introduced in this paper
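
    A conservative MWR treatment is typically a parabolic oxidation rate law with Arrhenius kinetics; the sketch below shows only that generic form. The constants are placeholders, not the licensing (e.g., Baker-Just) values used in an Appendix K evaluation model or in SPACE.

        # Generic parabolic metal-water reaction (Zr oxidation) rate law,
        # d(delta^2)/dt = A * exp(-Q / (R*T)), integrated over a cladding
        # temperature history. A and Q are assumed placeholders, NOT the
        # licensing constants of an Appendix K evaluation model.
        import numpy as np

        A = 3.0e-6     # m^2/s, assumed pre-exponential factor
        Q = 1.6e5      # J/mol, assumed activation energy
        R = 8.314      # J/(mol K)

        def oxide_growth(delta, T, dt):
            """Advance reacted-layer thickness delta (m) at cladding temperature T (K)."""
            k = A * np.exp(-Q / (R * T))           # parabolic rate constant (m^2/s)
            return np.sqrt(delta**2 + k * dt)

        delta, t, dt = 0.0, 0.0, 0.1
        while t < 100.0:                           # 100 s at a constant 1500 K (illustrative)
            delta = oxide_growth(delta, 1500.0, dt)
            t += dt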

  17. Recommendations for computer modeling codes to support the UMTRA groundwater restoration project

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, M.D. [Sandia National Labs., Albuquerque, NM (United States); Khan, M.A. [IT Corp., Albuquerque, NM (United States)

    1996-04-01

    The Uranium Mill Tailings Remediation Action (UMTRA) Project is responsible for the assessment and remedial action at the 24 former uranium mill tailings sites located in the US. The surface restoration phase, which includes containment and stabilization of the abandoned uranium mill tailings piles, has a specific termination date and is nearing completion. Therefore, attention has now turned to the groundwater restoration phase, which began in 1991. Regulated constituents in groundwater whose concentrations or activities exceed maximum contaminant levels (MCLs) or background levels at one or more sites include, but are not limited to, uranium, selenium, arsenic, molybdenum, nitrate, gross alpha, radium-226 and radium-228. The purpose of this report is to recommend computer codes that can be used to assist the UMTRA groundwater restoration effort. The report includes a survey of applicable codes in each of the following areas: (1) groundwater flow and contaminant transport modeling codes, (2) hydrogeochemical modeling codes, (3) pump and treat optimization codes, and (4) decision support tools. Following the survey of the applicable codes, specific codes that can best meet the needs of the UMTRA groundwater restoration program in each of the four areas are recommended.

  18. Finite mixture models for sensitivity analysis of thermal hydraulic codes for passive safety systems analysis

    Energy Technology Data Exchange (ETDEWEB)

    Di Maio, Francesco, E-mail: francesco.dimaio@polimi.it [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Nicola, Giancarlo [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Zio, Enrico [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Chair on System Science and Energetic Challenge Fondation EDF, Ecole Centrale Paris and Supelec, Paris (France); Yu, Yu [School of Nuclear Science and Engineering, North China Electric Power University, 102206 Beijing (China)

    2015-08-15

    Highlights: • Uncertainties of TH codes affect the system failure probability quantification. • We present Finite Mixture Models (FMMs) for sensitivity analysis of TH codes. • FMMs approximate the pdf of the output of a TH code with a limited number of simulations. • The approach is tested on a Passive Containment Cooling System of an AP1000 reactor. • The novel approach overcomes the results of a standard variance decomposition method. - Abstract: For safety analysis of Nuclear Power Plants (NPPs), Best Estimate (BE) Thermal Hydraulic (TH) codes are used to predict system response in normal and accidental conditions. The assessment of the uncertainties of TH codes is a critical issue for system failure probability quantification. In this paper, we consider passive safety systems of advanced NPPs and present a novel approach of Sensitivity Analysis (SA). The approach is based on Finite Mixture Models (FMMs) to approximate the probability density function (i.e., the uncertainty) of the output of the passive safety system TH code with a limited number of simulations. We propose a novel Sensitivity Analysis (SA) method for keeping the computational cost low: an Expectation Maximization (EM) algorithm is used to calculate the saliency of the TH code input variables for identifying those that most affect the system functional failure. The novel approach is compared with a standard variance decomposition method on a case study considering a Passive Containment Cooling System (PCCS) of an Advanced Pressurized reactor AP1000.
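
    To illustrate the building block of the FMM approximation, the sketch below fits a two-component Gaussian mixture to a sample of scalar code outputs with a hand-rolled EM loop; the data, number of components, and parameters are synthetic, and the saliency computation of the paper is not reproduced.

        # Minimal EM fit of a two-component Gaussian mixture to a sample of scalar
        # TH-code outputs, the building block behind an FMM approximation of the
        # output pdf. Data and the number of components are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        y = np.concatenate([rng.normal(320.0, 5.0, 400),     # e.g. outputs from one regime
                            rng.normal(380.0, 15.0, 100)])   # and from a second regime

        w, mu, sig = np.array([0.5, 0.5]), np.array([300.0, 400.0]), np.array([10.0, 10.0])
        for _ in range(200):                                  # EM iterations
            pdf = w * np.exp(-0.5 * ((y[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
            resp = pdf / pdf.sum(axis=1, keepdims=True)       # E-step: responsibilities
            nk = resp.sum(axis=0)                             # M-step: update parameters
            w = nk / len(y)
            mu = (resp * y[:, None]).sum(axis=0) / nk
            sig = np.sqrt((resp * (y[:, None] - mu) ** 2).sum(axis=0) / nk)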

  19. Recommendations for computer modeling codes to support the UMTRA groundwater restoration project

    International Nuclear Information System (INIS)

    Tucker, M.D.; Khan, M.A.

    1996-04-01

    The Uranium Mill Tailings Remediation Action (UMTRA) Project is responsible for the assessment and remedial action at the 24 former uranium mill tailings sites located in the US. The surface restoration phase, which includes containment and stabilization of the abandoned uranium mill tailings piles, has a specific termination date and is nearing completion. Therefore, attention has now turned to the groundwater restoration phase, which began in 1991. Regulated constituents in groundwater whose concentrations or activities exceed maximum contaminant levels (MCLs) or background levels at one or more sites include, but are not limited to, uranium, selenium, arsenic, molybdenum, nitrate, gross alpha, radium-226 and radium-228. The purpose of this report is to recommend computer codes that can be used to assist the UMTRA groundwater restoration effort. The report includes a survey of applicable codes in each of the following areas: (1) groundwater flow and contaminant transport modeling codes, (2) hydrogeochemical modeling codes, (3) pump and treat optimization codes, and (4) decision support tools. Following the survey of the applicable codes, specific codes that can best meet the needs of the UMTRA groundwater restoration program in each of the four areas are recommended

  20. Development of the next generation code system as an engineering modeling language. (2). Study with prototyping

    International Nuclear Information System (INIS)

    Yokoyama, Kenji; Uto, Nariaki; Kasahara, Naoto; Ishikawa, Makoto

    2003-04-01

    In fast reactor development, numerical simulation using analytical codes plays an important role in complementing theory and experiment. It is necessary that the engineering models and analysis methods can be flexibly changed, because the phenomena to be investigated become more complicated due to the diversity of research needs. There are also large problems in combining physical properties and engineering models from many different fields. Aiming at the realization of a next generation code system which can solve those problems, the authors adopted three methods, (1) multi-language (SoftWIRE.NET, Visual Basic.NET and Fortran), (2) Fortran 90 and (3) Python, to make a prototype of the next generation code system. As a result, the following were confirmed. (1) It is possible to reuse a function of the existing codes written in Fortran as an object of the next generation code system by using Visual Basic.NET. (2) The maintainability of the existing code written in Fortran 77 can be improved by using the new features of Fortran 90. (3) A toolbox-type code system can be built by using Python. (author)

  1. Alternative conceptual models and codes for unsaturated flow in fractured tuff: Preliminary assessments for GWTT-95

    International Nuclear Information System (INIS)

    Ho, C.K.; Altman, S.J.; Arnold, B.W.

    1995-09-01

    Groundwater travel time (GWTT) calculations will play an important role in addressing site-suitability criteria for the potential high-level nuclear waste repository at Yucca Mountain, Nevada. In support of these calculations, preliminary assessments of the candidate codes and models are presented in this report. A series of benchmark studies has been designed to address important aspects of modeling flow through fractured media representative of flow at Yucca Mountain. Three codes (DUAL, FEHMN, and TOUGH2) are compared in these benchmark studies. DUAL is a single-phase, isothermal, two-dimensional flow simulator based on the dual mixed finite element method. FEHMN is a nonisothermal, multiphase, multidimensional simulator based primarily on the finite element method. TOUGH2 is a nonisothermal, multiphase, multidimensional simulator based on the integral finite difference method. Alternative conceptual models of fracture flow consisting of the equivalent continuum model (ECM) and the dual permeability (DK) model are used in the different codes

  2. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC): gap analysis for high fidelity and performance assessment code development

    International Nuclear Information System (INIS)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-01-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  3. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  4. ABAREX -- A neutron spherical optical-statistical-model code -- A user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Smith, A.B. [ed.]; Lawson, R.D.

    1998-06-01

    The contemporary version of the neutron spherical optical-statistical-model code ABAREX is summarized with the objective of providing detailed operational guidance for the user. The physical concepts involved are very briefly outlined. The code is described in some detail and a number of explicit examples are given. With this document one should very quickly become fluent with the use of ABAREX. While the code has operated on a number of computing systems, this version is specifically tailored for the VAX/VMS work station and/or the IBM-compatible personal computer.

  5. Nuclear model codes and related software distributed by the OECD/NEA Data Bank

    International Nuclear Information System (INIS)

    Sartori, E.

    1993-01-01

    Software and data for nuclear energy applications is acquired, tested and distributed by several information centres; in particular, relevant computer codes are distributed internationally by the OECD/NEA Data Bank (France) and by ESTSC and EPIC/RSIC (United States). This activity is coordinated among the centres and is extended outside the OECD area through an arrangement with the IAEA. This article covers more specifically the availability of nuclear model codes and also those codes which further process their results into data sets needed for specific nuclear application projects. (author). 2 figs

  6. Code package "SVECHA": Modeling of core degradation phenomena at severe accidents

    Energy Technology Data Exchange (ETDEWEB)

    Veshchunov, M.S.; Kisselev, A.E.; Palagin, A.V. [Nuclear Safety Institute, Moscow (Russian Federation)] [and others]

    1995-09-01

    The code package SVECHA for the modeling of in-vessel core degradation (CD) phenomena in severe accidents is being developed in the Nuclear Safety Institute, Russian Academy of Science (NSI RAS). The code package presents a detailed mechanistic description of the phenomenology of severe accidents in a reactor core. The modules of the package were developed and validated on separate effect test data. These modules were then successfully implemented in the ICARE2 code and validated against a wide range of integral tests. Validation results have shown good agreement with separate effect tests data and with the integral tests CORA-W1/W2, CORA-13, PHEBUS-B9+.

  7. Implementing the WebSocket Protocol Based on Formal Modelling and Automated Code Generation

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael

    2014-01-01

    with pragmatic annotations for automated code generation of protocol software. The contribution of this paper is an application of the approach as implemented in the PetriCode tool to obtain protocol software implementing the IETF WebSocket protocol. This demonstrates the scalability of our approach to real...... protocols. Furthermore, we perform formal verification of the CPN model prior to code generation, and test the implementation for interoperability against the Autobahn WebSocket test-suite resulting in 97% and 99% success rate for the client and server implementation, respectively. The tests show...

  8. A Secure Network Coding-based Data Gathering Model and Its Protocol in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Qian Xiao

    2012-09-01

    To provide security for data gathering based on network coding in wireless sensor networks (WSNs), a secure network coding-based data gathering model is proposed, and a data-privacy preserving and pollution preventing (DPP&PP) protocol using network coding is designed. DPP&PP makes use of a newly proposed pollution symbol selection and pollution (PSSP) scheme based on a new obfuscation idea to pollute existing symbols. Analyses of DPP&PP show that it not only requires low overhead on computation and communication, but also provides high security against brute-force attacks.

  9. Comparison for the interfacial and wall friction models in thermal-hydraulic system analysis codes

    International Nuclear Information System (INIS)

    Hwang, Moon Kyu; Park, Jee Won; Chung, Bub Dong; Kim, Soo Hyung; Kim, See Dal

    2007-07-01

    The average equations employed in the current thermal hydraulic analysis codes need to be closed with appropriate models and correlations to specify the interphase phenomena along with fluid/structure interactions. This includes both thermal and mechanical interactions. Among the closure laws, the interfacial and wall frictions, which are included in the momentum equations, not only affect pressure drops along the fluid flow, but also have great effects on the numerical stability of the codes. In this study, the interfacial and wall frictions are reviewed for the commonly applied thermal-hydraulic system analysis codes, i.e. RELAP5-3D, MARS-3D, TRAC-M, and CATHARE

  10. Savannah River Laboratory DOSTOMAN code: a compartmental pathways computer model of contaminant transport

    International Nuclear Information System (INIS)

    King, C.M.; Wilhite, E.L.; Root, R.W. Jr.

    1985-01-01

    The Savannah River Laboratory DOSTOMAN code has been used since 1978 for environmental pathway analysis of potential migration of radionuclides and hazardous chemicals. The DOSTOMAN work is reviewed including a summary of historical use of compartmental models, the mathematical basis for the DOSTOMAN code, examples of exact analytical solutions for simple matrices, methods for numerical solution of complex matrices, and mathematical validation/calibration of the SRL code. The review includes the methodology for application to nuclear and hazardous chemical waste disposal, examples of use of the model in contaminant transport and pathway analysis, a user's guide for computer implementation, peer review of the code, and use of DOSTOMAN at other Department of Energy sites. 22 refs., 3 figs
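
    Compartmental pathway models of this kind reduce to a linear system dC/dt = A·C; the sketch below solves a three-box chain exactly with the matrix exponential. The compartments and transfer rates are illustrative, not DOSTOMAN site parameters.

        # Minimal compartmental pathway model, dC/dt = A @ C, solved exactly with
        # the matrix exponential for a 3-box soil -> groundwater -> river chain.
        # Transfer rates are illustrative, not DOSTOMAN site parameters.
        import numpy as np
        from scipy.linalg import expm

        k_sg, k_gr, k_out = 0.05, 0.02, 0.10      # transfer rates (1/yr)
        A = np.array([[-k_sg,   0.0,    0.0 ],
                      [ k_sg,  -k_gr,   0.0 ],
                      [ 0.0,    k_gr,  -k_out]])

        C0 = np.array([1.0, 0.0, 0.0])            # unit inventory initially in soil
        for t in (1.0, 10.0, 100.0):              # years
            print(t, expm(A * t) @ C0)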

  11. Mechanical modelling of PCI with FRAGEMA and CEA finite element codes

    International Nuclear Information System (INIS)

    Joseph, J.; Bernard, Ph.; Atabek, R.; Chantant, M.

    1983-01-01

    In the framework of their common program, CEA and FRAGEMA have undertaken the mechanical modelling of PCI. In the first step, two different codes, TITUS and VERDON, have been tested by FRAGEMA and CEA respectively. Whereas the two codes use a finite element method to describe the thermomechanical behaviour of a fuel element, the input models are not the same for the two codes: to take into account the presence of cracks in UO2, an axisymmetric two-dimensional mesh pattern and the Drucker-Prager criterion are used in VERDON, and a 3D equivalent method in TITUS. Two rods have been studied with these two methods: PRISCA 04bis and PRISCA 104, which were ramped in SILOE. The results show that the stresses and strains are the same with the two codes. These methods are further applied to the complete series of the common ramp test rods program of FRAGEMA and CEA. (author)

  12. A model of a code of ethics for tissue banks operating in developing countries.

    Science.gov (United States)

    Morales Pedraza, Jorge

    2012-12-01

    Ethical practice in the field of tissue banking requires the setting of principles, the identification of possible deviations and the establishment of mechanisms that will detect and hinder abuses that may occur during the procurement, processing and distribution of tissues for transplantation. This model of a Code of Ethics has been prepared with the purpose of being used for the elaboration of a Code of Ethics for tissue banks operating in the Latin American and the Caribbean, Asia and the Pacific and the African regions in order to guide the day-to-day operation of these banks. The purpose of this model of Code of Ethics is to assist interested tissue banks in the preparation of their own Code of Ethics towards ensuring that the tissue bank staff support with their actions the mission and values associated with tissue banking.

  13. Sex determination strategies in 2012: towards a common regulatory model?

    Science.gov (United States)

    2012-01-01

    Sex determination is a complicated process involving large-scale modifications in gene expression affecting virtually every tissue in the body. Although the evolutionary origin of sex remains controversial, there is little doubt that it has developed as a process of optimizing metabolic control, as well as developmental and reproductive functions within a given setting of limited resources and environmental pressure. Evidence from various model organisms supports the view that sex determination may occur as a result of direct environmental induction or genetic regulation. The first process has been well documented in reptiles and fish, while the second is the classic case for avian species and mammals. Both of the latter have developed a variety of sex-specific/sex-related genes, which ultimately form a complete chromosome pair (sex chromosomes/gonosomes). Interestingly, combinations of environmental and genetic mechanisms have been described among different classes of animals, thus rendering the possibility of a unidirectional continuous evolutionary process from the one type of mechanism to the other unlikely. On the other hand, common elements appear throughout the animal kingdom, with regard to a) conserved key genes and b) a central role of sex steroid control as a prerequisite for ultimately normal sex differentiation. Studies in invertebrates also indicate a role of epigenetic chromatin modification, particularly with regard to alternative splicing options. This review summarizes current evidence from research in this hot field and signifies the need for further study of both normal hormonal regulators of sexual phenotype and patterns of environmental disruption. PMID:22357269

  14. A Dual Coding Theoretical Model of Decoding in Reading: Subsuming the LaBerge and Samuels Model

    Science.gov (United States)

    Sadoski, Mark; McTigue, Erin M.; Paivio, Allan

    2012-01-01

    In this article we present a detailed Dual Coding Theory (DCT) model of decoding. The DCT model reinterprets and subsumes The LaBerge and Samuels (1974) model of the reading process which has served well to account for decoding behaviors and the processes that underlie them. However, the LaBerge and Samuels model has had little to say about…

  15. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

    Energy Technology Data Exchange (ETDEWEB)

    Carbajo, Juan (Oak Ridge National Laboratory, Oak Ridge, TN); Jeong, Hae-Yong (Korea Atomic Energy Research Institute, Daejeon, Korea); Wigeland, Roald (Idaho National Laboratory, Idaho Falls, ID); Corradini, Michael (University of Wisconsin, Madison, WI); Schmidt, Rodney Cannon; Thomas, Justin (Argonne National Laboratory, Argonne, IL); Wei, Tom (Argonne National Laboratory, Argonne, IL); Sofu, Tanju (Argonne National Laboratory, Argonne, IL); Ludewig, Hans (Brookhaven National Laboratory, Upton, NY); Tobita, Yoshiharu (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Ohshima, Hiroyuki (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Serre, Frederic (Centre d'études nucléaires de Cadarache – CEA, France)

    2011-06-01

    This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the

  16. Model of nuclear reactor type VVER-1000/V-320 built by computer code ATHLET-CD

    International Nuclear Information System (INIS)

    Georgiev, Yoto; Filipov, Kalin; Velev, Vladimir

    2014-01-01

    A model of a nuclear reactor of type VVER-1000/V-320, developed for the computer code ATHLET-CD2.1A, is presented. The model has been validated through an analysis of a station blackout scenario with a LOCA on the fourth cold leg. Once the calculation was completed, the results were checked through comparison with the results from the computer codes ATHLET-2.1A, ASTEC-2.1 and RELAP5mod3.2

  17. Radiation transport phenomena and modeling. Part A: Codes; Part B: Applications with examples

    International Nuclear Information System (INIS)

    Lorence, L.J. Jr.; Beutler, D.E.

    1997-09-01

    This report contains the notes from the second session of the 1997 IEEE Nuclear and Space Radiation Effects Conference Short Course on Applying Computer Simulation Tools to Radiation Effects Problems. Part A discusses the physical phenomena modeled in radiation transport codes and various types of algorithmic implementations. Part B gives examples of how these codes can be used to design experiments whose results can be easily analyzed and describes how to calculate quantities of interest for electronic devices

  18. Domain-specific modeling enabling full code generation

    CERN Document Server

    Kelly, Steven

    2007-01-01

    Domain-Specific Modeling (DSM) is the latest approach to software development, promising to greatly increase the speed and ease of software creation. Early adopters of DSM have been enjoying productivity increases of 500–1000% in production for over a decade. This book introduces DSM and offers examples from various fields to illustrate to experienced developers how DSM can improve software development in their teams. Two authorities in the field explain what DSM is, why it works, and how to successfully create and use a DSM solution to improve productivity and quality. Divided into four parts, the book covers: background and motivation; fundamentals; in-depth examples; and creating DSM solutions. There is an emphasis throughout the book on practical guidelines for implementing DSM, including how to identify the necessary language constructs, how to generate full code from models, and how to provide tool support for a new DSM language. The example cases described in the book are available at the book's Website, www.dsmbook....

  19. Model-Driven Engineering: Automatic Code Generation and Beyond

    Science.gov (United States)

    2015-03-01

    ...export of an Extensible Markup Language (XML) representation of the model. The XML Metadata Interchange (XMI) is an OMG standard for representing...

  20. GASFLOW: A Computational Fluid Dynamics Code for Gases, Aerosols, and Combustion, Volume 1: Theory and Computational Model

    International Nuclear Information System (INIS)

    Nichols, B.D.; Mueller, C.; Necker, G.A.; Travis, J.R.; Spore, J.W.; Lam, K.L.; Royl, P.; Redlinger, R.; Wilson, T.L.

    1998-01-01

    Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior (1) in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and (2) during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included

  1. Installation of aerosol behavior model into multi-dimensional thermal hydraulic analysis code AQUA

    International Nuclear Information System (INIS)

    Kisohara, Naoyuki; Yamaguchi, Akira

    1997-12-01

    The safety analysis of an FBR plant system for sodium leak phenomena needs to evaluate the deposition of aerosol particles onto components in the plant, the chemical reaction of the aerosol with humidity in the air, and the effect of combustion heat transferred through the aerosol to structural components. For this purpose, the ABC-INTG (Aerosol Behavior in Containment-INTeGrated Version) code has been developed and used until now. This code calculates aerosol behavior in a gas region of uniform temperature and pressure with a one-cell model. More detailed calculation of aerosol behavior, however, requires the installation of an aerosol model into the multi-cell thermal hydraulic analysis code AQUA. AQUA can calculate the carrier gas flow, temperature and the distribution of the aerosol spatial concentration. ABC-INTG, on the other hand, can calculate the generation, deposition to the wall and floor, and agglomeration of aerosol particles, and determine the aerosol particle size distribution. Thus, the combination of these two codes makes it possible to treat an aerosol model that couples the distribution of the aerosol spatial concentration with that of the aerosol particle size. This report describes the aerosol behavior model, how the aerosol model was installed into AQUA, and the new subroutines added to the code. Furthermore, test calculations of a simple structural model were executed with this code, and appropriate results were obtained. Thus, this code is expected to predict aerosol behavior through coupled analysis with multi-dimensional gas thermo-dynamics for sodium combustion evaluation. (J.P.N.)
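
    The sketch below shows, in highly simplified form, the kind of per-cell aerosol balance such a coupling allows: advective exchange between neighbouring cells plus a first-order deposition sink in each cell. The cell network and rates are illustrative, not the AQUA/ABC-INTG models.

        # Toy per-cell aerosol balance of the kind a multi-cell thermal-hydraulic
        # code can carry: advective exchange between neighbouring cells plus a
        # first-order wall/floor deposition sink. Rates are illustrative.
        import numpy as np

        n_cell, dt = 4, 0.1
        C = np.array([1.0, 0.0, 0.0, 0.0])             # aerosol concentration per cell (kg/m^3)
        flow = 0.2                                     # inter-cell exchange rate (1/s)
        lam_dep = np.array([0.01, 0.01, 0.02, 0.05])   # deposition rate per cell (1/s)

        for _ in range(200):
            out = flow * C
            out[-1] = 0.0                  # closed system: no outflow from last cell
            adv = -out
            adv[1:] += out[:-1]            # what leaves cell i enters cell i+1
            C = C + dt * (adv - lam_dep * C)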

  2. Light water reactor fuel analysis code FEMAXI-7. Model and structure [Revised edition

    International Nuclear Information System (INIS)

    Suzuki, Motoe; Udagawa, Yutaka; Amaya, Masaki; Saitou, Hiroaki

    2014-03-01

    A light water reactor fuel analysis code, FEMAXI-7, has been developed for the purpose of analyzing fuel behavior in both normal and anticipated transient conditions. This code is an advanced version produced by incorporating numerous functional improvements and extensions into the former version, FEMAXI-6. In FEMAXI-7, many new models have been added and parameters have been clearly arranged. Also, to facilitate effective maintenance and accessibility of the code, subroutines and functions have been modularized, and quality comment descriptions of variables and physical quantities have been incorporated in the source code. With these advancements, the FEMAXI-7 code has been upgraded to a versatile analytical tool for high-burnup fuel behavior analyses. This report is the revised edition of the first one, which describes in detail the design, basic theory and structure, models and numerical methods, and improvements and extensions. The first edition, JAEA-Data/Code 2010-035, was published in 2010; it has been extended by the orderly addition and arrangement of model descriptions and organized as this revised edition after an interval of three years. (author)

  3. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    Science.gov (United States)

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JNDs were modeled by adding white Gaussian noise or specific signal patterns into the original images, which is not appropriate for finding JND thresholds for distortions that reduce signal energy. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameters for JNQD from extracted handcrafted features. The other JNQD model, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, this is the first approach to automatically adjust JND levels according to quantization step sizes when preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.
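
    To give a feel for the linear-regression flavour of JNQD adjustment described above, the Python sketch below maps a few handcrafted block features plus the quantization step size to a scaling of a base JND threshold. The feature choice, weights, bias, and gain are invented for illustration and are not the trained parameters or features of the paper.

```python
# Hypothetical LR-JNQD-style adjustment: scale a base JND threshold per block from
# handcrafted features and the quantization step size. All parameters are assumptions.
import numpy as np

def block_features(block):
    # mean luminance, local activity (std), and a crude vertical-gradient measure
    return np.array([block.mean(), block.std(), np.abs(np.diff(block, axis=0)).mean()])

def lr_jnqd_scale(block, q_step,
                  weights=np.array([0.002, 0.01, 0.05]),  # assumed regression weights
                  bias=0.5, q_gain=0.02):                  # assumed bias and Q-step gain
    """Scale factor applied to the base JND map for one block (illustrative linear model)."""
    return bias + block_features(block) @ weights + q_gain * q_step

rng = np.random.default_rng(0)
block = rng.integers(0, 255, size=(8, 8)).astype(float)   # toy 8x8 luminance block
print(lr_jnqd_scale(block, q_step=22))
```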

  4. Evaluation of dose calculation models for inhabited areas applicable in nuclear accident consequence assessment codes

    International Nuclear Information System (INIS)

    Katalin Eged; Zoltan Kis; Natalia Semioschkina; Gabriele Voigt

    2004-01-01

    One of the objectives of the EC project EVANET-TERRA is to provide suitable inputs to the RODOS system. This study gives an overview of urban dose calculation models with special emphasis on the RECLAIM-EDEM2M and TEMAS-urban codes. The TEMAS-urban code is more complex than the RECLAIM-EDEM2M code, although both models use similar and sometimes even the same model parameters. The database used in RECLAIM-EDEM2M, and the way its data were collected, is recommended as the preferred option because it contains many data from local and regional measurements. In a decision situation, however, the outputs of the TEMAS-urban model may better help stakeholders by providing a ranking of the surfaces to be decontaminated. (author)

  5. Pre-engineering Spaceflight Validation of Environmental Models and the 2005 HZETRN Simulation Code

    Science.gov (United States)

    Nealy, John E.; Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.; Dachev, Ts. P.; Tomov, B. T.; Walker, Steven A.; DeAngelis, Giovanni; Blattnig, Steve R.; Atwell, William

    2006-01-01

    The HZETRN code has been identified by NASA for engineering design in the next phase of space exploration, highlighting a return to the Moon in preparation for a Mars mission. In response, a new series of algorithms, beginning with 2005 HZETRN, will be issued that correct some prior limitations and improve control of propagated errors, along with established code verification processes. Code validation will use new/improved low Earth orbit (LEO) environmental models together with a recently improved International Space Station (ISS) shield model to validate computational models and procedures against measured data taken aboard the ISS. These validated models will provide a basis for flight-testing the designs of future space vehicles and systems of the Constellation program in the LEO environment.

  6. Testing of the PELSHIE shielding code using Benchmark problems and other special shielding models

    International Nuclear Information System (INIS)

    Language, A.E.; Sartori, D.E.; De Beer, G.P.

    1981-08-01

    The PELSHIE shielding code for gamma rays from point and extended sources was written in 1971 and a revised version was published in October 1979. At Pelindaba the program is used extensively because of its flexibility and ease of use for a wide range of problems. Testing of PELSHIE results against the results of a range of models and so-called benchmark problems is desirable to identify possible weaknesses in PELSHIE. Benchmark problems, experimental data, and shielding models, some of which were solved by the discrete-ordinates method with the ANISN and DOT 3.5 codes, were used for the test. The description of the models followed the pattern of a classical shielding problem. After the intercomparison with six different models, the usefulness of the PELSHIE code was quantitatively determined.

  7. The Nuremberg Code subverts human health and safety by requiring animal modeling

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2012-07-01

    Background: The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion: We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals as models in toxicity testing as an example of the problem with using animal models. Summary: We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, were compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented.

  8. Modeling of BWR core meltdown accidents - for application in the MELRPI. MOD2 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Koh, B R; Kim, S H; Taleyarkhan, R P; Podowski, M Z; Lahey, Jr, R T

    1985-04-01

    This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.

  9. Ecological models in support of regulatory risk assessments of pesticides: developing a strategy for the future.

    Science.gov (United States)

    Forbes, Valery E; Hommen, Udo; Thorbek, Pernille; Heimbach, Fred; Van den Brink, Paul J; Wogram, Jörn; Thulke, Hans-Hermann; Grimm, Volker

    2009-01-01

    This brief communication reports on the main findings of the LEMTOX workshop, held from 9 to 12 September 2007, at the Helmholtz Centre for Environmental Research (UFZ) in Leipzig, Germany. The workshop brought together a diverse group of stakeholders from academia, regulatory authorities, contract research organizations, and industry, representing Europe, the United States, and Asia, to discuss the role of ecological modeling in risk assessments of pesticides, particularly under the European regulatory framework. The following questions were addressed: What are the potential benefits of using ecological models in pesticide registration and risk assessment? What obstacles prevent ecological modeling from being used routinely in regulatory submissions? What actions are needed to overcome the identified obstacles? What recommendations should be made to ensure good modeling practice in this context? The workshop focused exclusively on population models, and discussion was focused on those categories of population models that link effects on individuals (e.g., survival, growth, reproduction, behavior) to effects on population dynamics. The workshop participants concluded that the overall benefits of ecological modeling are that it could bring more ecology into ecological risk assessment, and it could provide an excellent tool for exploring the importance of, and interactions among, ecological complexities. However, there are a number of challenges that need to be overcome before such models will receive wide acceptance for pesticide risk assessment, despite having been used extensively in other contexts (e.g., conservation biology). The need for guidance on Good Modeling Practice (on model development, analysis, interpretation, evaluation, documentation, and communication), as well as the need for case studies that can be used to explore the added value of ecological models for risk assessment, were identified as top priorities. Assessing recovery potential of exposed

  10. Modelling Chemical Equilibrium Partitioning with the GEMS-PSI Code

    Energy Technology Data Exchange (ETDEWEB)

    Kulik, D.; Berner, U.; Curti, E

    2004-03-01

    Sorption, co-precipitation and re-crystallisation are important retention processes for dissolved contaminants (radionuclides) migrating through the sub-surface. The retention of elements is usually measured by empirical partition coefficients (Kd), which vary in response to many factors: temperature, solid/liquid ratio, total contaminant loading, water composition, host-mineral composition, etc. The Kd values can be predicted for in-situ conditions from thermodynamic modelling of solid solution, aqueous solution or sorption equilibria, provided that stoichiometry, thermodynamic stability and mixing properties of the pure components are known (Example 1). Unknown thermodynamic properties can be retrieved from experimental Kd values using inverse modelling techniques (Example 2). An efficient, advanced tool for performing both tasks is the Gibbs Energy Minimization (GEM) approach, implemented in the user-friendly GEM-Selector (GEMS) program package, which includes the Nagra-PSI chemical thermodynamic database. The package is being further developed at PSI and used extensively in studies relating to nuclear waste disposal. (author)
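
    As a reminder of what the empirical partition coefficient Kd expresses, a toy calculation is sketched below. In practice the equilibrium amounts would come from a Gibbs energy minimization (GEMS) rather than being assumed, as they are here; the numbers are purely illustrative.

```python
# Toy illustration of Kd: ratio of contaminant held on the solid to that left in solution
# at equilibrium. The equilibrium amounts below are assumed values, not GEMS output.
sorbed_mol_per_kg_solid = 2.0e-6        # assumed equilibrium loading on the solid phase
dissolved_mol_per_m3_water = 5.0e-4     # assumed equilibrium aqueous concentration

kd_m3_per_kg = sorbed_mol_per_kg_solid / dissolved_mol_per_m3_water
print(f"Kd = {kd_m3_per_kg:.2e} m^3/kg  ({kd_m3_per_kg * 1e3:.1f} L/kg)")
```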

  11. Modelling Chemical Equilibrium Partitioning with the GEMS-PSI Code

    International Nuclear Information System (INIS)

    Kulik, D.; Berner, U.; Curti, E.

    2004-01-01

    Sorption, co-precipitation and re-crystallisation are important retention processes for dissolved contaminants (radionuclides) migrating through the sub-surface. The retention of elements is usually measured by empirical partition coefficients (Kd), which vary in response to many factors: temperature, solid/liquid ratio, total contaminant loading, water composition, host-mineral composition, etc. The Kd values can be predicted for in-situ conditions from thermodynamic modelling of solid solution, aqueous solution or sorption equilibria, provided that stoichiometry, thermodynamic stability and mixing properties of the pure components are known (Example 1). Unknown thermodynamic properties can be retrieved from experimental Kd values using inverse modelling techniques (Example 2). An efficient, advanced tool for performing both tasks is the Gibbs Energy Minimization (GEM) approach, implemented in the user-friendly GEM-Selector (GEMS) program package, which includes the Nagra-PSI chemical thermodynamic database. The package is being further developed at PSI and used extensively in studies relating to nuclear waste disposal. (author)

  12. Fast dose assessment models, parameters and code under accident conditions for Qinshan Nuclear Power Plant

    International Nuclear Information System (INIS)

    Zhang, Z.Y.; Hu, E.B.; Meng, X.C.; Zhang, Y.; Yao, R.T.

    1993-01-01

    According to the requirements of the accident emergency plan for the Qinshan Nuclear Power Plant, a Gaussian straight-line model was adopted for estimating radionuclide concentrations in surface air. In addition, the effect of the mountain body on atmospheric dispersion was considered. By combining field atmospheric dispersion experiments with wind tunnel modeling tests, necessary modifications were made to some of the models and parameters. A computer code for the assessment was written in the Quick BASIC (V4.5) language. The radius of the assessment region is 10 km and the code is applicable to early accident assessment. (1 tab.)
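
    For illustration, a minimal Gaussian straight-line (plume) model of the kind referred to above can be written in a few lines of Python. The sigma parameterization, release rate, wind speed, and release height below are assumptions for demonstration, not the Qinshan-specific values or the code's actual correlations.

```python
# Minimal Gaussian straight-line plume sketch: ground-level air concentration from a
# continuous point release, with ground reflection. All parameter values are assumed.
import math

def sigma_pg(x_m, a, b):
    """Power-law fit sigma = a * x**b standing in for a Pasquill-Gifford stability class."""
    return a * x_m ** b

def ground_level_concentration(q_bq_per_s, u_m_per_s, x_m, y_m, h_m,
                               sy=(0.08, 0.90), sz=(0.06, 0.85)):
    """Ground-level concentration (Bq/m^3) at downwind distance x, crosswind offset y."""
    sigma_y = sigma_pg(x_m, *sy)
    sigma_z = sigma_pg(x_m, *sz)
    lateral = math.exp(-0.5 * (y_m / sigma_y) ** 2)
    vertical = math.exp(-0.5 * (h_m / sigma_z) ** 2)   # ground reflection doubles this term
    return (q_bq_per_s / (math.pi * sigma_y * sigma_z * u_m_per_s)) * lateral * vertical

# Example: 1 GBq/s release, 2 m/s wind, receptor 1 km downwind on the plume axis, 60 m stack
print(ground_level_concentration(1.0e9, 2.0, 1000.0, 0.0, 60.0))
```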

  13. Models and Correlations of Interfacial and Wall Frictions for the SPACE code

    International Nuclear Information System (INIS)

    Kim, Soo Hyung; Hwang, Moon Kyu; Chung, Bub Dong

    2010-04-01

    This report describes the models and correlations for interfacial and wall friction implemented in the SPACE code, which has the capability to predict the thermal-hydraulic behavior of nuclear power plants. The interfacial and wall frictions are essential for solving the momentum conservation equations of the gas, continuous liquid, and droplet fields. Interfacial and wall friction are dealt with in Chapters 2 and 3, respectively. In Chapter 4, the selection criteria for the models and correlations are explained. In Chapter 5, the origins of the selected models and correlations used in this code are examined to check whether they conflict with intellectual property rights.

  14. A new balance-of-plant model for the SASSYS-1 LMR systems analysis code

    International Nuclear Information System (INIS)

    Briggs, L.L.

    1991-01-01

    In this paper, a balance-of-plant model is developed for the SASSYS code. This model represents the balance of plant as a network of components. It interfaces with the existing SASSYS code through the water side of the steam generator. The network representation provides a discretization of the mass, momentum, and energy equations and the equation of state and allows a simultaneous solution for the changes in pressure, flow, and enthalpy throughout the waterside system. The model has been tested for several types of transients and been found to perform both accurately and efficiently

  15. REGULATORY POLICY AND OPTIMIZATION OF INVESTMENT RESOURCE ALLOCATION IN THE MODEL OF FUNCTIONING OF RECREATION INDUSTRY

    Directory of Open Access Journals (Sweden)

    Hanna Shevchenko

    2017-11-01

    The research objective is to substantiate a theoretical and methodical approach to improving regulatory policy and the process of distributing financial investments, using a model of the functioning of the recreational sector of the national economy. The methodology of the study includes the use of optimal control theory to formulate the model of the functioning of the recreational industry, as well as to determine the behaviour of regulatory authorities and the possibilities for optimizing the allocation of investment resources in the recreational sector of the national economy. Results. The issue of balancing regulatory policy in the recreational sector of the national economy is actualized, including the question of the targeted distribution of state and external financial investments. On the basis of an analysis of the continuum "recreation - work" by means of a modified Brennan-Buchanan model, it is also shown that regulatory policy should establish frameworks that, on the one hand, do not allow public authorities to exert excessive influence on the economics of recreation and, on the other hand, keep the behaviour of recreational business entities within the limits of normal socio-economic activity. It is revealed that, even when taxes are reduced, a situation is observed in which the population rests less and works more than it would against the background of a developed economy. According to the optimistic forecast, however, as the economy emerges from the shadow, an official mode of work will eventually be established in which, while keeping taxes at the level proposed as more advantageous for the population, the ratio of leisure to work ultimately corresponds to the principles of sustainable development. Practical value. On the basis of the methodical principles of optimal control theory, the model of the functioning of the recreational industry under the

  16. An improved UO2 thermal conductivity model in the ELESTRES computer code

    International Nuclear Information System (INIS)

    Chassie, G.G.; Tochaie, M.; Xu, Z.

    2010-01-01

    This paper describes the improved UO2 thermal conductivity model for use in the ELESTRES (ELEment Simulation and sTRESses) computer code. The ELESTRES computer code models the thermal, mechanical and microstructural behaviour of a CANDU® fuel element under normal operating conditions. The main purpose of the code is to calculate fuel temperatures, fission gas release, internal gas pressure, fuel pellet deformation, and fuel sheath strains for fuel element design and assessment. It is also used to provide initial conditions for evaluating fuel behaviour during high temperature transients. The thermal conductivity of UO2 fuel is one of the key parameters that affect ELESTRES calculations. The existing ELESTRES thermal conductivity model has been assessed and improved based on a large amount of thermal conductivity data from measurements of irradiated and un-irradiated UO2 fuel with different densities. The UO2 thermal conductivity data cover 90% to 99% theoretical density of UO2, temperature up to 3027 K, and burnup up to 1224 MW·h/kg U. The improved thermal conductivity model, which is recommended for a full implementation in the ELESTRES computer code, has reduced the ELESTRES code prediction biases of temperature, fission gas release, and fuel sheath strains when compared with the available experimental data. This improved thermal conductivity model has also been checked with a test version of ELESTRES over the full ranges of fuel temperature, fuel burnup, and fuel density expected in CANDU fuel. (author)
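
    A minimal sketch of the general form such conductivity models take (a phonon lattice term 1/(A + B·T) multiplied by burnup and porosity degradation factors) is given below in Python. All coefficients are assumed illustrative values, not the improved ELESTRES correlation described in the paper.

```python
# Illustrative UO2 thermal conductivity sketch: lattice term 1/(A + B*T) with assumed
# burnup and porosity degradation factors. Coefficients are placeholders, not ELESTRES values.
def uo2_conductivity(T_K, burnup_MWh_per_kgU=0.0, density_fraction=0.95):
    A, B = 0.0375, 2.165e-4                                 # assumed phonon-term coefficients
    lam_lattice = 1.0 / (A + B * T_K)                       # unirradiated, fully dense term
    f_burnup = 1.0 / (1.0 + 0.005 * burnup_MWh_per_kgU / 24.0)  # assumed burnup degradation
    porosity = 1.0 - density_fraction
    f_porosity = (1.0 - porosity) / (1.0 + 2.0 * porosity)  # Maxwell-Eucken-type porosity factor
    return lam_lattice * f_burnup * f_porosity              # W/(m*K)

# Example: 1000 K, ~10 MWd/kgU equivalent burnup, 96% theoretical density
print(uo2_conductivity(1000.0, burnup_MWh_per_kgU=240.0, density_fraction=0.96))
```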

  17. Two-phase wall friction model for the trace computer code

    International Nuclear Information System (INIS)

    Wang Weidong

    2005-01-01

    The wall drag model in the TRAC/RELAP5 Advanced Computational Engine (TRACE) computer code has certain known deficiencies. For example, in an annular flow regime, the code predicts an unphysically high liquid velocity compared to the experimental data. To address those deficiencies, a new wall frictional drag package has been developed and implemented in the TRACE code to model the wall drag for a two-phase flow system code. The modeled flow regimes are (1) annular/mist, (2) bubbly/slug, and (3) bubbly/slug with wall nucleation. The new models use void fraction (instead of flow quality) as the correlating variable to minimize calculation oscillation. In addition, the models allow for transitions between the three regimes. The annular/mist regime is subdivided into three separate regimes for pure annular flow, annular flow with entrainment, and film breakdown. For adiabatic two-phase bubbly/slug flows, the vapor phase primarily exists outside of the boundary layer, and the wall shear uses the single-phase liquid velocity for the friction calculation; the vapor-phase wall friction drag is set to zero for bubbly/slug flows. For bubbly/slug flows with wall nucleation, bubbles are present within the hydrodynamic boundary layer, and the two-phase wall friction drag is significantly higher, with a pronounced mass flux effect. An empirical correlation has been studied and applied to account for nucleate boiling. Verification and validation tests have been performed, and the test results showed a significant code improvement. (authors)
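
    The regime-selection-with-blending idea can be sketched schematically as below. The void fraction boundaries and the per-regime multipliers are placeholders chosen only to show the interpolation across a transition band; they are not the TRACE correlations.

```python
# Schematic sketch of void-fraction-based regime selection with linear blending across
# the transition band, to avoid calculation oscillation. Boundaries and multipliers are assumed.
def two_phase_wall_friction_multiplier(void_fraction):
    def bubbly_slug(a):        # liquid-dominated: wall shear based on liquid velocity
        return 1.0 + 2.0 * a
    def annular_mist(a):       # thin liquid film on the wall
        return 4.0 - 2.0 * a
    if void_fraction < 0.75:
        return bubbly_slug(void_fraction)
    if void_fraction > 0.85:
        return annular_mist(void_fraction)
    w = (void_fraction - 0.75) / 0.10           # blend weight inside the transition band
    return (1.0 - w) * bubbly_slug(void_fraction) + w * annular_mist(void_fraction)

print([round(two_phase_wall_friction_multiplier(a), 3) for a in (0.2, 0.8, 0.95)])
```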

  18. A statistical methodology for quantification of uncertainty in best estimate code physical models

    International Nuclear Information System (INIS)

    Vinai, Paolo; Macian-Juan, Rafael; Chawla, Rakesh

    2007-01-01

    A novel uncertainty assessment methodology, based on a statistical non-parametric approach, is presented in this paper. It achieves quantification of code physical model uncertainty by making use of model performance information obtained from studies of appropriate separate-effect tests. Uncertainties are quantified in the form of estimated probability density functions (pdf's), calculated with a newly developed non-parametric estimator. The new estimator objectively predicts the probability distribution of the model's 'error' (its uncertainty) from databases reflecting the model's accuracy on the basis of available experiments. The methodology is completed by applying a novel multi-dimensional clustering technique based on the comparison of model error samples with the Kruskal-Wallis test. This takes into account the fact that a model's uncertainty depends on system conditions, since a best estimate code can give predictions whose accuracy is affected by the regions of the physical space in which the experiments occur. The final result is an objective, rigorous and accurate manner of assigning uncertainty to code models, i.e. the input information needed by code uncertainty propagation methodologies used for assessing the accuracy of best estimate codes in nuclear systems analysis. The new methodology has been applied to the quantification of the uncertainty in the RETRAN-3D void model and then used in the analysis of an independent separate-effect experiment. This has clearly demonstrated the basic feasibility of the approach, as well as its advantages in yielding narrower uncertainty bands in quantifying the code's accuracy for void fraction predictions.
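
    The core idea of estimating a model's error pdf from separate-effect test data can be illustrated with a few lines of Python. A standard Gaussian kernel density estimator stands in for the paper's purpose-built non-parametric estimator, and the measured/predicted values below are invented for demonstration.

```python
# Sketch of quantifying a code model's uncertainty as a pdf of its "error"
# (measured minus predicted), estimated non-parametrically from separate-effect data.
import numpy as np
from scipy.stats import gaussian_kde

measured = np.array([0.42, 0.47, 0.51, 0.44, 0.49, 0.53, 0.46])   # e.g. measured void fractions
predicted = np.array([0.45, 0.46, 0.48, 0.47, 0.50, 0.50, 0.44])  # corresponding code predictions
error = measured - predicted

pdf = gaussian_kde(error)                                   # estimated error pdf
grid = np.linspace(error.min() - 0.05, error.max() + 0.05, 200)
density = pdf(grid)                                         # pdf on a grid (for plotting/integration)
band = np.percentile(error, [5.0, 95.0])                    # crude 90% band on the model error
print("pdf at zero error:", pdf(0.0)[0])
print("90% error band:", band)
```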

  19. Evaluation of Advanced Models for PAFS Condensation Heat Transfer in SPACE Code

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Byoung-Uhn; Kim, Seok; Park, Yu-Sun; Kang, Kyung Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Ahn, Tae-Hwan; Yun, Byong-Jo [Pusan National University, Busan (Korea, Republic of)

    2015-10-15

    The PAFS (Passive Auxiliary Feedwater System) is operated by natural circulation to remove the core decay heat through the PCHX (Passive Condensation Heat Exchanger), which is composed of nearly horizontal tubes. For validation of the cooling and operational performance of the PAFS, the PASCAL (PAFS Condensing Heat Removal Assessment Loop) facility was constructed, and the condensation heat transfer and natural convection phenomena in the PAFS were experimentally investigated at KAERI (Korea Atomic Energy Research Institute). From the PASCAL experimental results, it was found that the conventional system analysis code underestimated the condensation heat transfer. In this study, advanced condensation heat transfer models which can treat the heat transfer mechanisms of the different flow regimes in a nearly horizontal heat exchanger tube were analyzed. The models were implemented in a thermal-hydraulic safety analysis code, SPACE (Safety and Performance Analysis Code for Nuclear Power Plant), and evaluated against the PASCAL experimental data. With the aim of enhancing the prediction capability for the condensation phenomenon inside the PCHX tube of the PAFS, advanced models for condensation heat transfer were implemented into the wall condensation model of the SPACE code, and the PASCAL experimental results were utilized to validate the condensation models. Calculation results showed that the improved model for the condensation heat transfer coefficient enhanced the prediction capability of the SPACE code. This result confirms that the mechanistic modeling of film condensation in the steam phase and of convection in the condensate liquid contributed to enhancing the prediction capability of the wall condensation model of the SPACE code and to reducing conservatism in the prediction of condensation heat transfer.

  20. RsmV a small non-coding regulatory RNA in Pseudomonas aeruginosa that sequesters RsmA and RsmF from target mRNAs.

    Science.gov (United States)

    Janssen, Kayley H; Diaz, Manisha R; Gode, Cindy J; Wolfgang, Matthew C; Yahr, Timothy L

    2018-06-04

    The Gram-negative opportunistic pathogen Pseudomonas aeruginosa has distinct genetic programs that favor either acute or chronic virulence gene expression. Acute virulence is associated with twitching and swimming motility, expression of a type III secretion system (T3SS), and the absence of alginate, Psl, or Pel polysaccharide production. Traits associated with chronic infection include growth as a biofilm, reduced motility, and expression of a type VI secretion system (T6SS). The Rsm post-transcriptional regulatory system plays important roles in the inverse control of phenotypes associated with acute and chronic virulence. RsmA and RsmF are RNA-binding proteins that interact with target mRNAs to control gene expression at the post-transcriptional level. Previous work found that RsmA activity is controlled by at least three small, non-coding regulatory RNAs (RsmW, RsmY, and RsmZ). In this study, we took an in-silico approach to identify additional sRNAs that might function in the sequestration of RsmA and/or RsmF and identified RsmV, a 192 nt transcript with four predicted RsmA/RsmF consensus binding sites. RsmV is capable of sequestering RsmA and RsmF in vivo to activate translation of tssA1, a component of the T6SS, and to inhibit T3SS gene expression. Each of the predicted RsmA/RsmF consensus binding sites contributes to RsmV activity. Electrophoretic mobility shift assays show that RsmF binds RsmV with >10-fold higher affinity than RsmY and RsmZ. Gene expression studies revealed that the temporal expression pattern of RsmV differs from those of RsmW, RsmY, and RsmZ. These findings suggest that each sRNA may play distinct roles in controlling RsmA and RsmF activity. IMPORTANCE The CsrA/RsmA family of RNA-binding proteins plays important roles in post-transcriptional control of gene expression. The activity of CsrA/RsmA proteins is controlled by small non-coding RNAs that function as decoys to sequester CsrA/RsmA from target mRNAs. Pseudomonas aeruginosa has two Csr

  1. Code Generation by Model Transformation : A Case Study in Transformation Modularity

    NARCIS (Netherlands)

    Hemel, Z.; Kats, L.C.L.; Visser, E.

    2008-01-01

    Preprint of paper published in: Theory and Practice of Model Transformations (ICMT 2008), Lecture Notes in Computer Science 5063; doi:10.1007/978-3-540-69927-9_13 The realization of model-driven software development requires effective techniques for implementing code generators for domain-specific

  2. Fine-Grained Energy Modeling for the Source Code of a Mobile Application

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick

    2016-01-01

    The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State of the art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able...

  3. Improvements to the nuclear model code GNASH for cross section calculations at higher energies

    International Nuclear Information System (INIS)

    Young, P.G.; Chadwick, M.B.

    1994-01-01

    The nuclear model code GNASH, which in the past has been used predominantly for incident particle energies below 20 MeV, has been modified extensively for calculations at higher energies. The model extensions and improvements are described in this paper, and their significance is illustrated by comparing calculations with experimental data for incident energies up to 160 MeV

  4. Time-domain modeling of electromagnetic diffusion with a frequency-domain code

    NARCIS (Netherlands)

    Mulder, W.A.; Wirianto, M.; Slob, E.C.

    2007-01-01

    We modeled time-domain EM measurements of induction currents for marine and land applications with a frequency-domain code. An analysis of the computational complexity of a number of numerical methods shows that frequency-domain modeling followed by a Fourier transform is an attractive choice if a

  5. OWL: A code for the two-center shell model with spherical Woods-Saxon potentials

    Science.gov (United States)

    Diaz-Torres, Alexis

    2018-03-01

    A Fortran-90 code for solving the two-center nuclear shell model problem is presented. The model is based on two spherical Woods-Saxon potentials and the potential separable expansion method. It describes the single-particle motion in low-energy nuclear collisions, and is useful for characterizing a broad range of phenomena from fusion to nuclear molecular structures.
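
    For reference, the spherical Woods-Saxon potential on which such a model is based has the standard form V(r) = -V0 / (1 + exp((r - R)/a)) with R = r0·A^(1/3). The short sketch below evaluates it with typical textbook parameter values, which are not necessarily those used in the OWL code.

```python
# Spherical Woods-Saxon central potential; parameter values are common textbook choices.
import math

def woods_saxon(r_fm, A, V0_MeV=51.0, r0_fm=1.27, a_fm=0.67):
    """V(r) = -V0 / (1 + exp((r - R)/a)) with nuclear radius R = r0 * A**(1/3)."""
    R = r0_fm * A ** (1.0 / 3.0)
    return -V0_MeV / (1.0 + math.exp((r_fm - R) / a_fm))

# Example: potential felt by a nucleon 5 fm from the centre of a mass-40 nucleus
print(woods_saxon(5.0, A=40))
```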

  6. The modelling of wall condensation with noncondensable gases for the containment codes

    Energy Technology Data Exchange (ETDEWEB)

    Leduc, C.; Coste, P.; Barthel, V.; Deslandes, H. [Commissariat a l'Energie Atomique, Grenoble (France)

    1995-09-01

    This paper presents several approaches to the modelling of wall condensation in the presence of noncondensable gases for containment codes. Lumped-parameter modelling and local modelling by 3-D codes are discussed. Containment analysis codes should be able to predict the spatial distributions of steam, air, and hydrogen as well as the efficiency of cooling by wall condensation in both natural convection and forced convection situations. 3-D calculations with a turbulent diffusion modelling are necessary, since the diffusion controls the local condensation whereas the wall condensation may redistribute the air and hydrogen mass in the containment. A fine-mesh model of film condensation in forced convection has been developed, taking into account the influence of the suction velocity at the liquid-gas interface. It is associated with the 3-D model of the TRIO code for the gas mixture, where a k-ξ turbulence model is used. The predictions are compared to Huhtiniemi's experimental data. The modelling of condensation in natural convection or mixed convection is more complex. As no universal velocity and temperature profiles exist for such boundary layers, a very fine nodalization is necessary. Simpler models integrate the equations over the boundary layer thickness, using the heat and mass transfer analogy. The model predictions are compared with an MIT experiment. For the containment compartments, a two-node model is proposed using the lumped-parameter approach. Heat and mass transfer coefficients are tested on separate effect tests and containment experiments. The CATHARE code has been adapted to perform such calculations and shows a reasonable agreement with the data.

  7. Modelling Brazilian tests with FRACOD2D (FRActure propagation CODe)

    International Nuclear Information System (INIS)

    Lanaro, Flavio; Sato, Toshinori; Rinne, Mikael; Stephansson, Ove

    2008-01-01

    This study focuses on the influence of initiated cracks on the stress distribution within rock samples subjected to tensile loading by traditional Brazilian testing. The numerical analyses show that the stress distribution is only marginally affected by the considered loading boundary conditions. On the other hand, the initiation and propagation of cracks produce a stress field that is very different from that assumed by considering the rock material as continuous, homogeneous, isotropic and elastic. In the models, stress concentrations at the bridges between the cracks were found to have tensile stresses much higher than the macroscopic direct tensile strength of the intact rock. This was possible thanks to the development of large stress gradients that can be carried by the rock between the cracks. The analysis of the deformation along the sample diameter perpendicular to the loading direction might enable one to determine the macroscopic direct tensile strength of the rock or, in a real case, of the weakest grains. The strength is indicated by the point where the stress-strain curves depart from linearity. (author)
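
    For context, the macroscopic indirect tensile strength that a Brazilian test is intended to measure is conventionally evaluated from the failure load with the continuum, elastic formula sigma_t = 2P/(pi·D·t). The worked example below uses invented numbers, not data from the paper.

```python
# Standard Brazilian (indirect tensile) strength formula for a disc of diameter D and
# thickness t failing at load P; the example values are assumed for illustration.
import math

def brazilian_tensile_strength(P_N, D_m, t_m):
    """sigma_t = 2P / (pi * D * t), in Pa."""
    return 2.0 * P_N / (math.pi * D_m * t_m)

# Example: 50 mm diameter, 25 mm thick disc failing at 15 kN -> about 7.6 MPa
print(brazilian_tensile_strength(15e3, 0.050, 0.025) / 1e6)
```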

  8. A review of MAAP4 code structure and core T/H model

    International Nuclear Information System (INIS)

    Song, Yong Mann; Park, Soo Yong

    1998-03-01

    The Modular Accident Analysis Program (MAAP) version 4 is a computer code that can simulate the response of LWR plants during severe accident sequences and includes models for all of the important phenomena which might occur during such sequences. In this report, the MAAP4 code structure and the core thermal-hydraulic (T/H) model, which describes the T/H behavior of the reactor core and the response of core components during all accident phases involving degraded cores, are reviewed and then reorganized. This reorganization is performed by gathering the related models under each topic, whose contents and order are the same as those of two companion reports for MELCOR and SCDAP/RELAP5 to be published simultaneously. The major purpose of the report is to provide information about the characteristics of the MAAP4 core T/H models for the development of an integrated severe accident computer code being performed under one of the on-going mid/long-term nuclear development projects. The basic characteristics of the new integrated severe accident code include: 1) flexible simulation capability of the primary side, secondary side, and containment under severe accident conditions, 2) detailed plant simulation, 3) convenient user interfaces, 4) high modularization for easy maintenance/improvement, and 5) state-of-the-art model selection. In conclusion, the MAAP4 code appears to be superior for items 3) and 4) but somewhat inferior for items 1) and 2). For item 5), more effort should be made in the future to compare the separate models in detail, not only with other codes but also with recent worldwide work. (author). 17 refs., 1 tab., 12 figs

  9. Computational modeling identifies key gene regulatory interactions underlying phenobarbital-mediated tumor promotion

    Science.gov (United States)

    Luisier, Raphaëlle; Unterberger, Elif B.; Goodman, Jay I.; Schwarz, Michael; Moggs, Jonathan; Terranova, Rémi; van Nimwegen, Erik

    2014-01-01

    Gene regulatory interactions underlying the early stages of non-genotoxic carcinogenesis are poorly understood. Here, we have identified key candidate regulators of phenobarbital (PB)-mediated mouse liver tumorigenesis, a well-characterized model of non-genotoxic carcinogenesis, by applying a new computational modeling approach to a comprehensive collection of in vivo gene expression studies. We have combined our previously developed motif activity response analysis (MARA), which models gene expression patterns in terms of computationally predicted transcription factor binding sites with singular value decomposition (SVD) of the inferred motif activities, to disentangle the roles that different transcriptional regulators play in specific biological pathways of tumor promotion. Furthermore, transgenic mouse models enabled us to identify which of these regulatory activities was downstream of constitutive androstane receptor and β-catenin signaling, both crucial components of PB-mediated liver tumorigenesis. We propose novel roles for E2F and ZFP161 in PB-mediated hepatocyte proliferation and suggest that PB-mediated suppression of ESR1 activity contributes to the development of a tumor-prone environment. Our study shows that combining MARA with SVD allows for automated identification of independent transcription regulatory programs within a complex in vivo tissue environment and provides novel mechanistic insights into PB-mediated hepatocarcinogenesis. PMID:24464994

  10. Thermal-hydraulic and aerosol containment phenomena modelling in ASTEC severe accident computer code

    International Nuclear Information System (INIS)

    Kljenak, Ivo; Dapper, Maik; Dienstbier, Jiri; Herranz, Luis E.; Koch, Marco K.; Fontanet, Joan

    2010-01-01

    Transients in containment systems of different scales (Phebus.FP containment, KAEVER vessel, Battelle Model Containment, LACE vessel and VVER-1000 nuclear power plant containment) involving thermal-hydraulic phenomena and aerosol behaviour, were simulated with the computer integral code ASTEC. The results of the simulations in the first four facilities were compared with experimental results, whereas the results of the simulated accident in the VVER-1000 containment were compared to results obtained with the MELCOR code. The main purpose of the simulations was the validation of the CPA module of the ASTEC code. The calculated results support the applicability of the code for predicting in-containment thermal-hydraulic and aerosol phenomena during a severe accident in a nuclear power plant.

  11. Gap conductance model validation in the TASS/SMR-S code

    International Nuclear Information System (INIS)

    Ahn, Sang-Jun; Yang, Soo-Hyung; Chung, Young-Jong; Bae, Kyoo-Hwan; Lee, Won-Jae

    2011-01-01

    An advanced integral pressurized water reactor, SMART (System-integrated Modular Advanced ReacTor), has been developed by KAERI (Korea Atomic Energy Research Institute). The purposes of SMART are seawater desalination and electricity generation. For the safety evaluation and performance analysis of SMART, the TASS/SMR-S (Transient And Setpoint Simulation/System-integrated Modular Reactor) code has been developed. In this paper, the gap conductance model for the calculation of gap conductance has been validated against another system code, the MARS code, and experimental results. In the validation, the behaviors of the fuel temperature and the gap width are selected as the major parameters. According to the evaluation results, the TASS/SMR-S code predicts well the behavior of the fuel temperatures and the gap width variation, compared to the MARS calculation results and experimental data. (author)
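
    As background to what a gap conductance model computes, the simplest conduction-only estimate h_gap ≈ k_gas / (gap + jump distance) is sketched below. The helium conductivity fit and the jump-distance value are assumed illustrative numbers, not the TASS/SMR-S model being validated.

```python
# Conduction-only gap conductance sketch for a helium-filled fuel-cladding gap.
# The gas conductivity fit and jump distance are assumed values for illustration.
def helium_conductivity(T_K):
    return 2.53e-3 * T_K ** 0.71                 # W/(m*K), commonly quoted fit (assumed here)

def gap_conductance(gap_m, T_gas_K, jump_distance_m=1.0e-5):
    return helium_conductivity(T_gas_K) / (gap_m + jump_distance_m)   # W/(m^2*K)

# Example: 50-micron open gap filled with helium at 600 K
print(gap_conductance(50e-6, 600.0))
```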

  12. The implementation of a toroidal limiter model into the gyrokinetic code ELMFIRE

    Energy Technology Data Exchange (ETDEWEB)

    Leerink, S.; Janhunen, S.J.; Kiviniemi, T.P.; Nora, M. [Euratom-Tekes Association, Helsinki University of Technology (Finland); Heikkinen, J.A. [Euratom-Tekes Association, VTT, P.O. Box 1000, FI-02044 VTT (Finland); Ogando, F. [Universidad Nacional de Educacion a Distancia, Madrid (Spain)

    2008-03-15

    The ELMFIRE full nonlinear gyrokinetic simulation code has been developed for calculations of plasma evolution and dynamics of turbulence in tokamak geometry. The code is applicable to calculations of strong perturbations in the particle distribution function, rapid transients, and steep gradients in the plasma. Benchmarking against experimental reflectometry data from the FT2 tokamak is discussed, and in this paper a model for comparison and for studying the poloidal velocity is presented. To make the ELMFIRE code suitable for scrape-off layer simulations, a simplified toroidal limiter model has been implemented. The model is discussed and first results are presented. (copyright 2008 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  13. Implementation of a dry process fuel cycle model into the DYMOND code

    International Nuclear Information System (INIS)

    Park, Joo Hwan; Jeong, Chang Joon; Choi, Hang Bok

    2004-01-01

    For the analysis of a dry process fuel cycle, new modules were implemented into the fuel cycle analysis code DYMOND, which was developed by the Argonne National Laboratory. The modifications were made to the energy demand prediction model, a Canada Deuterium Uranium (CANDU) reactor, direct use of spent Pressurized Water Reactor (PWR) fuel in CANDU reactors (DUPIC) fuel cycle model, the fuel cycle calculation module, and the input/output modules. The performance of the modified DYMOND code was assessed for the postulated once-through fuel cycle models including both the PWR and CANDU reactor. This paper presents modifications of the DYMOND code and the results of sample calculations for the PWR once-through and DUPIC fuel cycles

  14. Inclusion of models to describe severe accident conditions in the fuel simulation code DIONISIO

    Energy Technology Data Exchange (ETDEWEB)

    Lemes, Martín; Soba, Alejandro [Sección Códigos y Modelos, Gerencia Ciclo del Combustible Nuclear, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina); Daverio, Hernando [Gerencia Reactores y Centrales Nucleares, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina); Denis, Alicia [Sección Códigos y Modelos, Gerencia Ciclo del Combustible Nuclear, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina)

    2017-04-15

    The simulation of fuel rod behavior is a complex task that demands not only accurate models to describe the numerous phenomena occurring in the pellet, cladding and internal rod atmosphere but also an adequate interconnection between them. In the last years several models have been incorporated into the DIONISIO code with the purpose of increasing its precision and reliability. After the regrettable events at Fukushima, the need for codes capable of simulating nuclear fuels under accident conditions has come to the fore. Heat removal occurs in a quite different way than during normal operation, and this fact determines a completely new set of conditions for the fuel materials. A detailed description of the different regimes the coolant may exhibit in such a wide variety of scenarios requires a thermal-hydraulic formulation not suitable for inclusion in a fuel performance code. Moreover, a number of reliable and well-known codes already perform this task. Nevertheless, and keeping in mind the purpose of building a code focused on fuel behavior, a subroutine was developed for the DIONISIO code that performs a simplified analysis of the coolant in a PWR, restricted to the more representative situations, and provides to the fuel simulation the boundary conditions necessary to reproduce accident situations. In the present work this subroutine is described and the results of different comparisons with experimental data and with thermal-hydraulic codes are presented. It is verified that, in spite of its comparative simplicity, the predictions of this module of DIONISIO do not differ significantly from those of the specific, complex codes.

  15. Development and implementation of the regulatory control of sources in Latin American Model Project countries

    International Nuclear Information System (INIS)

    Ferruz Cruz, P.

    2001-01-01

    After a general assessment of the situation regarding radiation safety and the radiation protection infrastructure in Latin American countries, several of them were invited to participate in a Model Project oriented, in some cases, towards establishing a mechanism for national regulatory control of radiation sources, and in others, towards upgrading their national control programme. All these activities aimed at reaching an effective and sustainable radiation protection infrastructure based on international basic safety standards. The paper presents a general overview of the current situation with regard to radiation protection within the Model Project countries in Latin America after almost five years of activities. It includes: the implementation of regulatory issues; the control of occupational, medical and public exposures; emergency response and waste safety issues. The paper also presents some lessons learned during implementation concerning the numerous activities involved in this interregional project. (author)

  16. Code-To-Code Benchmarking Of The Porflow And GoldSim Contaminant Transport Models Using A Simple 1-D Domain - 11191

    International Nuclear Information System (INIS)

    Hiergesell, R.; Taylor, G.

    2010-01-01

    An investigation was conducted to compare and evaluate the contaminant transport results of two model codes, GoldSim and Porflow, using a simple 1-D string of elements in each code. Model domains were constructed to be identical with respect to cell numbers and dimensions, matrix material, flow boundary and saturation conditions. One of the codes, GoldSim, does not simulate advective movement of water; therefore the water flux term was specified as a boundary condition. In the other code, Porflow, a steady-state flow field was computed and contaminant transport was simulated within that flow field. The comparisons were made solely in terms of the ability of each code to perform contaminant transport. The purpose of the investigation was to establish a basis for, and to validate, follow-on work in which a 1-D GoldSim model was developed by abstracting information from Porflow 2-D and 3-D unsaturated and saturated zone models and was then benchmarked to produce equivalent contaminant transport results. A handful of contaminants were selected for the code-to-code comparison simulations, including a non-sorbing tracer and several long- and short-lived radionuclides exhibiting non-sorbing to strongly sorbing characteristics with respect to the matrix material, several of which require the simulation of in-growth of daughter radionuclides. The same diffusion and partitioning coefficients associated with each contaminant and the half-lives associated with each radionuclide were incorporated into each model. A string of 10 elements, having identical spatial dimensions and properties, was constructed within each code; GoldSim's basic contaminant transport elements, mixing cells, were utilized in this construction. Sand was established as the matrix material and was assigned identical properties (e.g. bulk density, porosity, saturated hydraulic conductivity) in both codes. Boundary conditions applied included an influx of water at the rate of 40 cm/yr at one
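
    The kind of 1-D, 10-cell mixing-cell transport being compared can be sketched in a few lines. The sketch below imposes the water flux as a boundary condition (as in the GoldSim model) and moves a single sorbing, decaying contaminant from cell to cell with a retardation factor; all property values are assumptions for illustration, not the benchmark inputs.

```python
# Explicit 1-D mixing-cell sketch: advection with retardation plus first-order decay
# through a 10-cell string, water flux prescribed as a boundary condition. Values assumed.
import numpy as np

n_cells, dx = 10, 0.5                      # cells and cell length, m
darcy_flux = 0.40                          # m/yr (prescribed water flux boundary condition)
porosity, bulk_density = 0.35, 1.6e3       # -, kg/m^3
kd = 1.0e-4                                # m^3/kg, sorption partition coefficient
half_life = 30.0                           # yr
lam = np.log(2.0) / half_life
R = 1.0 + bulk_density * kd / porosity     # retardation factor
pore_velocity = darcy_flux / porosity / R  # retarded transport velocity, m/yr

c = np.zeros(n_cells)
dt = 0.5 * dx / pore_velocity              # simple stability-limited time step (Courant ~0.5)
for step in range(2000):
    inflow = 1.0 if step * dt < 10.0 else 0.0          # 10-yr unit-concentration inlet pulse
    upstream = np.concatenate(([inflow], c[:-1]))
    c += dt * (pore_velocity * (upstream - c) / dx - lam * c)

print(c)   # concentration profile along the 10-cell string
```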

  17. CD4+CD25+ regulatory T cells: II. Origin, disease models and clinical aspects

    DEFF Research Database (Denmark)

    Nielsen, Janne; Holm, Thomas Lindebo; Claesson, Mogens H

    2004-01-01

    Autoimmune diseases afflict approximately 5% of the population and reflect a failure in the immune system to discriminate between self and non-self resulting in the breakdown of self-tolerance. Regulatory CD4+CD25+ T cells (Treg cells) have been shown to play an important role in the maintenance ...... in disease models such as autoimmune gastritis and inflammatory bowel disease. Finally, we will consider some aspects of the therapeutic potential of Treg cells....

  18. A Framework for Organizing Current and Future Electric Utility Regulatory and Business Models

    Energy Technology Data Exchange (ETDEWEB)

    Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cappers, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schwartz, Lisa [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fadrhonc, Emily Martin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-06-01

    In this report, we will present a descriptive and organizational framework for incremental and fundamental changes to regulatory and utility business models in the context of clean energy public policy goals. We will also discuss the regulated utility's role in providing value-added services that relate to distributed energy resources, identify the "openness" of customer information and utility networks necessary to facilitate change, and discuss the relative risks, and the shifting of risks, for utilities and customers.

  19. Aquelarre. A computer code for fast neutron cross sections from the statistical model

    International Nuclear Information System (INIS)

    Guasp, J.

    1974-01-01

    A Fortran V computer code for the Univac 1108/6, using the partial statistical (or compound nucleus) model, is described. The code calculates fast neutron cross sections for the (n, n'), (n, p), (n, d) and (n, α) reactions, and the angular distributions and Legendre moments for the (n, n) and (n, n') processes, in heavy and intermediate spherical nuclei. A local optical model with spin-orbit interaction for each level is employed, allowing for the width fluctuation and Moldauer corrections, as well as the inclusion of discrete and continuous levels. (Author) 67 refs.

  20. Self-shielding models of MICROX-2 code: Review and updates

    International Nuclear Information System (INIS)

    Hou, J.; Choi, H.; Ivanov, K.N.

    2014-01-01

    Highlights: • The MICROX-2 code has been improved to expand its application to advanced reactors. • New fine-group cross section libraries based on ENDF/B-VII have been generated. • Resonance self-shielding and spatial self-shielding models have been improved. • The improvements were assessed by a series of benchmark calculations against MCNPX. - Abstract: The MICROX-2 is a transport theory code that solves for the neutron slowing-down and thermalization equations of a two-region lattice cell. The MICROX-2 code has been updated to expand its application to advanced reactor concepts and fuel cycle simulations, including generation of new fine-group cross section libraries based on ENDF/B-VII. In continuation of previous work, the MICROX-2 methods are reviewed and updated in this study, focusing on its resonance self-shielding and spatial self-shielding models for neutron spectrum calculations. The improvement of self-shielding method was assessed by a series of benchmark calculations against the Monte Carlo code, using homogeneous and heterogeneous pin cell models. The results have shown that the implementation of the updated self-shielding models is correct and the accuracy of physics calculation is improved. Compared to the existing models, the updates reduced the prediction error of the infinite multiplication factor by ∼0.1% and ∼0.2% for the homogeneous and heterogeneous pin cell models, respectively, considered in this study

  1. Development of the Monju core safety analysis numerical models by super-COPD code

    International Nuclear Information System (INIS)

    Yamada, Fumiaki; Minami, Masaki

    2010-12-01

    Japan Atomic Energy Agency constructed a computational model for safety analysis of the Monju reactor core, to be built into the modularized plant dynamics analysis code Super-COPD, for the purpose of evaluating the heat removal capability during the 21 transients defined in the annex to the construction permit application. The applicability of this model to the evaluation of core heat removal capability has been assessed by back-to-back comparisons of the constituent models with conventionally applied codes and by application of the unified model. The numerical model for core safety analysis has been built on the best-estimate model validated against the plant behavior actually measured up to 40% rated power conditions, taking over the safety analysis models of the conventionally applied COPD and HARHO-IN codes, and is capable of overall calculations of the entire plant including the safety protection and control systems. Among the constituents of the analytical model, the neutronic-thermal model and the heat transfer and hydraulic models of the PHTS, SHTS, and water/steam system were individually verified by comparisons with the conventional calculations. Comparisons were also made with the plant behavior actually measured up to 40% rated power conditions to confirm the adequacy of the calculations and the conservativeness of the input data. The unified analytical model was applied to analyses of eight anomaly events in total: reactivity insertion, abnormal power distribution, and decrease and increase of coolant flow rate in the PHTS, SHTS and water/steam systems. The resulting maximum values and temporal variations of the key parameters in the safety evaluation (temperatures of the fuel, cladding, in-core sodium coolant, and RV inlet and outlet coolant) show negligible discrepancies from the existing analysis results in the annex to the construction permit application, verifying the unified analytical model. These works have enabled analytical evaluation of the Monju core heat removal capability by Super-COPD utilizing the

  2. Off-take Model of the SPACE Code and Its Validation

    International Nuclear Information System (INIS)

    Oh, Myung Taek; Park, Chan Eok; Sohn, Jong Joo

    2011-01-01

    Liquid entrainment and vapor pull-through models for a horizontal pipe have been implemented in the SPACE code. The model accounts for the phase separation phenomena and computes the flux of mass and energy through an off-take attached to a horizontal pipe when stratified conditions occur in the horizontal pipe; this model is referred to as the off-take model. The importance of predicting the fluid conditions through an off-take in a small-break LOCA is well known: the occurrence of stratification can affect the break node void fraction and thus the break flow discharged from the primary system. In order to validate the off-take model newly developed for the SPACE code, a simulation of the HDU experiments has been performed. The main features of the off-take model and its application results are presented in this paper.

  3. A Perceptual Model for Sinusoidal Audio Coding Based on Spectral Integration

    Directory of Open Access Journals (Sweden)

    Jensen Søren Holdt

    2005-01-01

    Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio, and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of audio signals. In this paper, we present a new perceptual model that predicts masked thresholds for sinusoidal distortions. The model relies on signal detection theory and incorporates more recent insights about spectral and temporal integration in auditory masking. As a consequence, the model is able to predict the distortion detectability. In fact, the distortion detectability defines a (perceptually relevant) norm on the underlying signal space which is beneficial for optimisation algorithms such as rate-distortion optimisation or linear predictive coding. We evaluate the merits of the model by combining it with a sinusoidal extraction method and compare the results with those obtained with the ISO MPEG-1 Layer I-II recommended model. Listening tests show a clear preference for the new model. More specifically, the model presented here leads to a reduction of more than 20% in terms of the number of sinusoids needed to represent signals at a given quality level.

  4. Basic Pilot Code Development for Two-Fluid, Three-Field Model

    International Nuclear Information System (INIS)

    Jeong, Jae Jun; Bae, S. W.; Lee, Y. J.; Chung, B. D.; Hwang, M.; Ha, K. S.; Kang, D. H.

    2006-03-01

    A basic pilot code for a one-dimensional, transient, two-fluid, three-field model has been developed. The basic pilot code has been verified using 9 conceptual problems. The results of the verification are summarized below: - It was confirmed that the basic pilot code can simulate various flow conditions (such as single-phase liquid flow, bubbly flow, slug/churn turbulent flow, annular-mist flow, and single-phase vapor flow) and the transitions between them. A mist flow was not simulated, but it appears that the basic pilot code can simulate mist flow conditions. - The pilot code was programmed so that the source terms of the governing equations and the numerical solution schemes can be easily tested. - Mass and energy conservation was confirmed for single-phase liquid and single-phase vapor flows. - It was confirmed that the inlet pressure and velocity boundary conditions work properly. - It was confirmed that, for single- and two-phase flows, the velocity and temperature of a non-existing phase are calculated as intended. - During the simulation of a two-phase flow, the calculation reaches a quasi-steady state with small-amplitude oscillations; the oscillations appear to be numerically induced. The research items for the improvement of the basic pilot code are listed in the last section of this report.

  5. Basic Pilot Code Development for Two-Fluid, Three-Field Model

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Jae Jun; Bae, S. W.; Lee, Y. J.; Chung, B. D.; Hwang, M.; Ha, K. S.; Kang, D. H

    2006-03-15

    A basic pilot code for a one-dimensional, transient, two-fluid, three-field model has been developed. The basic pilot code has been verified using 9 conceptual problems. The results of the verification are summarized below: - It was confirmed that the basic pilot code can simulate various flow conditions (such as single-phase liquid flow, bubbly flow, slug/churn turbulent flow, annular-mist flow, and single-phase vapor flow) and the transitions between them. A mist flow was not simulated, but it appears that the basic pilot code can simulate mist flow conditions. - The pilot code was programmed so that the source terms of the governing equations and the numerical solution schemes can be easily tested. - Mass and energy conservation was confirmed for single-phase liquid and single-phase vapor flows. - It was confirmed that the inlet pressure and velocity boundary conditions work properly. - It was confirmed that, for single- and two-phase flows, the velocity and temperature of a non-existing phase are calculated as intended. - During the simulation of a two-phase flow, the calculation reaches a quasi-steady state with small-amplitude oscillations; the oscillations appear to be numerically induced. The research items for the improvement of the basic pilot code are listed in the last section of this report.
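
    As a simplified stand-in for the pilot code's field equations (the actual code solves the coupled two-fluid system with a more elaborate scheme), the sketch below advances the one-dimensional phasic mass conservation equation d(alpha*rho)/dt + d(alpha*rho*u)/dx = Gamma with a first-order donor-cell (upwind) flux on a periodic mesh; the test problem and variable names are illustrative only.

        import numpy as np

        def donor_cell_mass_step(alpha_rho, u_face, gamma, dx, dt):
            """One explicit donor-cell step of
               d(alpha*rho)/dt + d(alpha*rho*u)/dx = gamma
            on a periodic 1-D mesh. alpha_rho and gamma are cell-centered;
            u_face[j] is the phasic velocity on the face between cells j-1 and j."""
            n = alpha_rho.size
            flux = np.empty(n)
            for j in range(n):
                # donor-cell (upwind) choice of the convected quantity
                donor = alpha_rho[j - 1] if u_face[j] >= 0.0 else alpha_rho[j]
                flux[j] = donor * u_face[j]
            divergence = (np.roll(flux, -1) - flux) / dx
            return alpha_rho + dt * (gamma - divergence)

        # Illustrative single-field test: advect a density bump and check that
        # total mass stays constant (conservative discretization).
        nx, dx, dt = 40, 0.1, 1.0e-3
        x = (np.arange(nx) + 0.5) * dx
        ar = 500.0 + 300.0 * np.exp(-((x - 2.0) / 0.3) ** 2)
        uf = np.ones(nx)
        total0 = ar.sum() * dx
        for _ in range(2000):
            ar = donor_cell_mass_step(ar, uf, np.zeros(nx), dx, dt)
        print("mass defect:", abs(ar.sum() * dx - total0))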

  6. Gamma spectroscopy modelization: intercomparison of the modelization results using two different codes (MCNP and Pascalys-Mercure)

    International Nuclear Information System (INIS)

    Luneville, L.; Chiron, M.; Toubon, H.; Dogny, S.; Huver, M.; Berger, L.

    2001-01-01

    The research performed jointly over the last 3 years by the French Atomic Energy Commission (CEA), COGEMA and Eurisys Mesures had as its main subject the realization of a complete modelization tool covering the largest possible range of realistic cases: the Pascalys modelization software. The main purpose of the modelization is to calculate the global measurement efficiency, i.e. the most accurate relationship between the photons emitted by the nuclear source (in volume, point or deposited form) and the high-purity germanium detector that detects and analyzes the received photons. It has long been recognized that the experimental global measurement efficiency is becoming more and more difficult to establish, especially for complex scenes such as those found in decommissioning and dismantling, or in the case of high activities, for which high-activity reference sources are difficult to use from both a health physics and a regulatory point of view. The choice of a calculation code is fundamental if accurate modelization is sought. MCNP represents the reference code, but its use is very time-consuming and therefore not practicable in line in the field. A direct line-of-sight point-kernel code such as the French Atomic Energy Commission 3-D analysis code Mercure can represent a practicable compromise between the most accurate MCNP reference code and the performance needed for realistic modelization. The comparison between the results of Pascalys-Mercure and the MCNP code, taking into account the latest improvements of Mercure in the low energy range where the largest errors can occur, is presented in this paper, the Mercure code being supported in line by the recent Pascalys 3-D scene modelization software. The effect of the intrinsic efficiency of the germanium detector on the total measurement efficiency is also addressed. (authors)
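
    As a sketch of what a direct line-of-sight point-kernel estimate involves (a simplified stand-in, not the Mercure/Pascalys algorithms), the example below combines the solid-angle fraction, the exponential attenuation along the source-detector line through intervening materials, and a user-supplied intrinsic detector efficiency; the attenuation coefficient and the intrinsic-efficiency value are indicative assumptions only.

        import math

        def global_efficiency(distance_m, det_radius_m, layers, intrinsic_eff):
            """Point-kernel estimate of the global measurement efficiency for a
            point source seen by a detector of radius det_radius_m at distance_m.

            layers        : list of (linear attenuation coefficient [1/m],
                            thickness [m]) crossed by the line of sight
            intrinsic_eff : intrinsic full-energy peak efficiency at this energy
            """
            # geometric (solid-angle) fraction for a small detector face
            geom = (math.pi * det_radius_m**2) / (4.0 * math.pi * distance_m**2)
            # exponential attenuation through the interposed materials
            attenuation = math.exp(-sum(mu * t for mu, t in layers))
            return geom * attenuation * intrinsic_eff

        # Example at 662 keV: 5 mm of steel between the source and the detector.
        # mu(steel, 662 keV) ~ 57 1/m and the intrinsic efficiency of 0.2 are
        # indicative values only.
        eff = global_efficiency(distance_m=1.0,
                                det_radius_m=0.03,
                                layers=[(57.0, 0.005)],
                                intrinsic_eff=0.2)
        print(f"global efficiency ~ {eff:.2e}")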

  7. Present capabilities and new developments in antenna modeling with the numerical electromagnetics code NEC

    Energy Technology Data Exchange (ETDEWEB)

    Burke, G.J.

    1988-04-08

    Computer modeling of antennas, since its start in the late 1960's, has become a powerful and widely used tool for antenna design. Computer codes have been developed based on the Method of Moments, the Geometrical Theory of Diffraction, or direct integration of Maxwell's equations. Of such tools, the Numerical Electromagnetics Code - Method of Moments (NEC) has become one of the most widely used codes for modeling resonant-sized antennas. There are several reasons for this, including the systematic updating and extension of its capabilities, extensive user-oriented documentation, and the accessibility of its developers for user assistance. The result is that there are estimated to be several hundred users of various versions of NEC world wide. 23 refs., 10 figs.

  8. Implementation of a structural dependent model for the superalloy IN738LC in ABAQUS-code

    International Nuclear Information System (INIS)

    Wolters, J.; Betten, J.; Penkalla, H.J.

    1994-05-01

    Superalloys, mainly nickel-based, are used for applications in aerospace as well as in stationary gas turbines. In the temperature range above 800 C the blades, which are manufactured from these superalloys, are subjected to high centrifugal forces and thermally induced loads. Computer-based analysis of the thermo-mechanical behaviour of the blades requires models of the stress-strain behaviour that give a reliable description, with emphasis on inelastic effects. The implementation of such a model in finite element codes requires a numerical treatment of the constitutive equations with respect to the interface provided by the code used. In this paper, constitutive equations for the superalloy IN738LC are presented, and their implementation in the finite element code ABAQUS, together with the numerical preparation of the model, is described. In order to validate the model, calculations were performed for simple uniaxial loading conditions as well as for a complete cross section of a turbine blade under combined thermal and mechanical loading. The results were compared with those of additional ABAQUS calculations using Norton's law, which was already implemented in this code. (orig.)
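
    For reference, the sketch below evaluates Norton's power-law creep rate, the baseline model the authors compare against; the material constants are illustrative placeholders and not IN738LC data.

        # Illustrative Norton power-law creep parameters (not IN738LC data)
        A = 1.0e-16      # creep coefficient [1/(h * MPa^n)], assumed value
        N = 5.0          # stress exponent [-], assumed value

        def norton_creep_rate(stress_mpa):
            """Steady-state (secondary) creep strain rate, eps_dot = A * sigma^N."""
            return A * stress_mpa**N

        def creep_strain_increment(stress_mpa, dt_h):
            """Explicit creep strain increment over a time step dt_h [h]: the kind
            of quantity a user material routine returns to the host FE code."""
            return norton_creep_rate(stress_mpa) * dt_h

        # Example: a turbine blade section held at 300 MPa for 100 hours
        print(creep_strain_increment(300.0, 100.0))   # ~2.4e-2 creep strain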

  9. A service-oriented architecture for integrating the modeling and formal verification of genetic regulatory networks

    Directory of Open Access Journals (Sweden)

    Page Michel

    2009-12-01

    Background: The study of biological networks has led to the development of increasingly large and detailed models. Computer tools are essential for simulating the dynamical behavior of the networks from the model. However, as the size of the models grows, it becomes infeasible to manually verify the predictions against experimental data or to identify interesting features in a large number of simulation traces. Formal verification based on temporal logic and model checking provides promising methods to automate and scale the analysis of the models. However, a framework that tightly integrates modeling and simulation tools with model checkers is currently missing, on both the conceptual and the implementation level. Results: We have developed a generic and modular web service, based on a service-oriented architecture, for integrating the modeling and formal verification of genetic regulatory networks. The architecture has been implemented in the context of the qualitative modeling and simulation tool GNA and the model checkers NUSMV and CADP. GNA has been extended with a verification module for the specification and checking of biological properties. The verification module also allows the display and visual inspection of the verification results. Conclusions: The practical use of the proposed web service is illustrated by means of a scenario involving the analysis of a qualitative model of the carbon starvation response in E. coli. The service-oriented architecture allows modelers to define the model and proceed with the specification and formal verification of the biological properties by means of a unified graphical user interface. This guarantees transparent access to formal verification technology for modelers of genetic regulatory networks.
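
    As a toy illustration of the kind of query a model checker answers for a regulatory network (a stand-in for the temporal-logic checks that GNA delegates to NUSMV and CADP), the sketch below enumerates the trajectories of a small Boolean network and tests a simple "eventually" property; the network and the property are invented for the example.

        from itertools import product

        # Toy 3-gene Boolean network (invented): a activates b, b activates c,
        # c represses a. Synchronous update of all genes at each step.
        RULES = {
            "a": lambda s: not s["c"],
            "b": lambda s: s["a"],
            "c": lambda s: s["b"],
        }

        def step(state):
            return {gene: rule(state) for gene, rule in RULES.items()}

        def eventually(initial, prop, max_steps=64):
            """Bounded check of 'eventually prop holds' along the deterministic
            trajectory from the initial state; stops when a cycle is revisited."""
            state, seen = dict(initial), set()
            for _ in range(max_steps):
                if prop(state):
                    return True
                key = tuple(sorted(state.items()))
                if key in seen:           # trajectory entered a cycle
                    return False
                seen.add(key)
                state = step(state)
            return False

        # Property: gene c eventually switches on, checked from every initial state.
        for bits in product([False, True], repeat=3):
            init = dict(zip("abc", bits))
            print(init, "->", eventually(init, lambda s: s["c"]))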

  10. Assessment of horizontal in-tube condensation models using MARS code. Part I: Stratified flow condensation

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Seong-Su [Department of Engineering Project, FNC Technology Co., Ltd., Bldg. 135-308, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of); Department of Nuclear Engineering, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of); Hong, Soon-Joon, E-mail: sjhong90@fnctech.com [Department of Engineering Project, FNC Technology Co., Ltd., Bldg. 135-308, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of); Park, Ju-Yeop; Seul, Kwang-Won [Korea Institute of Nuclear Safety, 19 Kuseong-dong, Yuseong-gu, Daejon (Korea, Republic of); Park, Goon-Cherl [Department of Nuclear Engineering, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of)

    2013-01-15

    Highlights: - This study collected 11 horizontal in-tube condensation models for stratified flow. - The predictive capability of the models for steam condensation was assessed. - The Purdue-PCCS experiments were simulated using the MARS code incorporating the models. - The Cavallini et al. (2006) model predicts the data well for stratified flow conditions. - The results of this study can be used to improve the condensation model in RELAP5 or MARS. - Abstract: The accurate prediction of horizontal in-tube condensation heat transfer is a primary concern in the optimum design and safety analysis of the horizontal heat exchangers of passive safety systems such as the passive containment cooling system (PCCS), the emergency condenser system (ECS) and the passive auxiliary feed-water system (PAFS). It is essential to analyze and assess the predictive capability of the existing horizontal in-tube condensation models for each flow regime using various experimental data. This study assessed a total of 11 condensation models for stratified flow, one of the main flow regimes encountered in horizontal condensers, against the heat transfer data from the Purdue-PCCS experiment using the multi-dimensional analysis of reactor safety (MARS) code. From the assessments, it was found that the models by Akers and Rosson, Chato, Tandon et al., Sweeney and Chato, and Cavallini et al. (2002) under-predicted the data in the main condensation heat transfer region, whereas the models by Rosson and Meyers, Jaster and Kosky, Fujii, Dobson and Chato, and Thome et al. similarly predicted or over-predicted the data; in particular, the Cavallini et al. (2006) model shows good predictive capability for all test conditions. The results of this study can be used to improve the condensation models in thermal hydraulic codes such as RELAP5 or MARS.
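
    As an example of one of the assessed stratified-flow models, the sketch below evaluates a commonly quoted form of the Chato correlation for film condensation inside a horizontal tube; the fluid properties in the example are indicative saturated-steam values near atmospheric pressure, and the correlation form should be checked against the original reference before any real use.

        def chato_htc(rho_l, rho_v, k_l, mu_l, cp_l, h_fg, t_sat, t_wall, d, g=9.81):
            """Condensation heat transfer coefficient [W/m^2-K] from one commonly
            quoted form of the Chato correlation for stratified (low vapor
            velocity) condensation in a horizontal tube of diameter d [m]."""
            dt = t_sat - t_wall
            h_fg_mod = h_fg + 0.375 * cp_l * dt        # modified latent heat
            return 0.555 * (g * rho_l * (rho_l - rho_v) * k_l**3 * h_fg_mod
                            / (mu_l * dt * d)) ** 0.25

        # Indicative saturated-steam properties near atmospheric pressure
        htc = chato_htc(rho_l=958.0, rho_v=0.6, k_l=0.68, mu_l=2.8e-4,
                        cp_l=4216.0, h_fg=2.257e6, t_sat=373.0, t_wall=363.0,
                        d=0.025)
        print(f"h ~ {htc:.0f} W/m2-K")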

  11. Development of PSA audit guideline and regulatory PSA model for SMART

    International Nuclear Information System (INIS)

    Cho, Namchul; Lee, Chang-Ju; Kim, I.S.

    2012-01-01

    SMART is under development in Korea for the dual purposes of power generation and seawater desalination. It is an integral reactor with a thermal power output of 330 MW and employs advanced design features such as a passive system for the removal of residual heat and the placement of all components of the primary system inside the reactor pressure vessel. It is essential to develop new probabilistic safety assessment (PSA) validation guidance for SMART. For the purpose of regulatory verification of the risk level of SMART, the insights and key issues of the PSA are identified with reference to several international safety guides as well as its design characteristics. The regulatory PSA model under development for the design confirmation and its preliminary results are also described. (authors)
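
    To illustrate the kind of quantification a regulatory PSA model ultimately performs (a generic textbook-style sketch with invented numbers, not the SMART model), the example below combines initiating-event frequencies with conditional failure probabilities of the mitigating functions to obtain a point-estimate core damage frequency and the contribution of each sequence.

        # Invented illustrative numbers; a real PSA uses plant-specific data,
        # fault trees and event trees rather than single probabilities.
        SEQUENCES = [
            # (initiating event, frequency [/yr], P(mitigation fails | event))
            ("loss of offsite power", 1.0e-1, 2.0e-5),
            ("small-break LOCA",      5.0e-4, 8.0e-4),
            ("general transient",     1.0e+0, 1.0e-6),
        ]

        def core_damage_frequency(sequences):
            """Point-estimate CDF as the sum over accident sequences of the
            initiating-event frequency times the conditional failure probability."""
            return sum(freq * p_fail for _, freq, p_fail in sequences)

        cdf = core_damage_frequency(SEQUENCES)
        print(f"point-estimate CDF ~ {cdf:.2e} per reactor-year")
        for name, freq, p_fail in SEQUENCES:
            print(f"  {name:22s} contributes {freq * p_fail / cdf:5.1%}")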

  12. Assessment of Ultimate Load Capacity for Pre-Stressed Concrete Containment Vessel Model of PWR Design With BARC Code ULCA

    International Nuclear Information System (INIS)

    Basha, S.M.; Singh, R.K.; Patnaik, R.; Ramanujam, S.; Kushwaha, H.S.; Venkat Raj, V.

    2002-01-01

    Ultimate load capacity assessment of nuclear containments has been a thrust research area for the Indian Pressurised Heavy Water Reactor (PHWR) power programme. For the containment safety assessment of Indian PHWRs, a finite element code, ULCA, was developed at BARC, Trombay. This code has been extensively benchmarked against experimental results. The present paper highlights the analysis results for the Prestressed Concrete Containment Vessel (PCCV) tested at Sandia National Laboratories, USA, in a Round Robin analysis activity co-sponsored by the Nuclear Power Engineering Corporation (NUPEC), Japan, and the U.S. Nuclear Regulatory Commission (NRC). Three levels of failure pressure predictions, namely the upper bound, the most probable and the lower bound (all with 90% confidence), were made as per the requirements of the Round Robin analysis activity. The most likely failure pressure is predicted to be in the range of 2.95 Pd to 3.15 Pd (Pd = design pressure of 0.39 MPa for the PCCV model), depending on the type of liners used in the construction of the PCCV model. A lower bound ultimate pressure of 2.80 Pd and an upper bound of 3.45 Pd are also predicted from the analysis. These limiting values depend on the assumptions made in the analysis for simulating the concrete-tendon interaction and the strain hardening characteristics of the steel members. The experimental test was recently concluded at the Sandia laboratory, and the peak pressure reached during the test is 3.3 Pd, which is enveloped by the upper bound prediction of 3.45 Pd and is close to the predicted most likely pressure of 3.15 Pd. (authors)
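
    For quick reference, the pressure levels quoted above as multiples of the design pressure Pd = 0.39 MPa convert to the absolute values computed below.

        PD_MPA = 0.39   # design pressure of the PCCV model (from the record)

        predictions = {
            "lower bound":         2.80,
            "most likely (lower)": 2.95,
            "most likely (upper)": 3.15,
            "test peak pressure":  3.30,
            "upper bound":         3.45,
        }

        for label, multiple in predictions.items():
            print(f"{label:20s} {multiple:.2f} Pd = {multiple * PD_MPA:.2f} MPa")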

  13. Overview of the Graphical User Interface for the GERM Code (GCR Event-Based Risk Model)

    Science.gov (United States)

    Kim, Myung-Hee; Cucinotta, Francis A.

    2010-01-01

    The descriptions of biophysical events from heavy ions are of interest in radiobiology, cancer therapy, and space exploration. The biophysical description of the passage of heavy ions in tissue and shielding materials is best described by a stochastic approach that includes both ion track structure and nuclear interactions. A new computer model called the GCR Event-based Risk Model (GERM) code was developed for the description of biophysical events from heavy ion beams at the NASA Space Radiation Laboratory (NSRL). The GERM code calculates basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at NSRL for the purpose of simulating space radiobiological effects. For mono-energetic beams, the code evaluates the linear energy transfer (LET), range (R), and absorption in tissue-equivalent material for a given charge (Z), mass number (A) and kinetic energy (E) of an ion. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution of ion or delta-ray hits for a specified cellular area, cell survival curves, and mutation and tumor probabilities. The GERM code also calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle. The contributions from the primary ion and nuclear secondaries are evaluated. The GERM code accounts for the major nuclear interaction processes of importance for describing heavy ion beams, including nuclear fragmentation, elastic scattering, and knockout-cascade processes, by using the quantum multiple scattering fragmentation (QMSFRG) model. The QMSFRG model has been shown to be in excellent agreement with available experimental data for nuclear fragmentation cross sections and has been used by the GERM code for application to thick target experiments. The GERM code provides scientists participating in NSRL experiments with the data needed for the interpretation of their experimental results.
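
    A minimal sketch of two of the quantities mentioned above, using standard relations rather than the GERM implementation: the absorbed dose delivered by a given fluence of particles of a given LET, and the Poisson distribution of particle traversals of a cell nucleus of a given area. The beam parameters in the example are illustrative.

        import math

        def dose_gy(fluence_cm2, let_kev_um, density_g_cm3=1.0):
            """Absorbed dose [Gy] from a uniform fluence of particles with a given
            LET, using the standard conversion 1.602e-9 * LET * fluence / density."""
            return 1.602e-9 * let_kev_um * fluence_cm2 / density_g_cm3

        def hit_probabilities(fluence_cm2, area_um2, k_max=5):
            """Poisson probabilities of k particle traversals of a sensitive area."""
            mean_hits = fluence_cm2 * area_um2 * 1.0e-8   # um^2 -> cm^2
            return [math.exp(-mean_hits) * mean_hits**k / math.factorial(k)
                    for k in range(k_max + 1)]

        # Illustrative beam: LET ~150 keV/um, fluence 1e7 /cm^2, nucleus area 100 um^2
        print("dose:", dose_gy(1.0e7, 150.0), "Gy")
        for k, p in enumerate(hit_probabilities(1.0e7, 100.0)):
            print(f"P({k} hits) = {p:.4f}")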

  14. Overlaid Alice: a statistical model computer code including fission and preequilibrium models

    International Nuclear Information System (INIS)

    Blann, M.

    1976-01-01

    This is the most recent edition of an evaporation code that has been frequently updated and improved since it was first written. This version replaces the version of Alice described previously. A brief summary is given of the types of calculations which can be done. A listing of the code and the results of several sample calculations are presented.

  15. Inference of gene regulatory networks with sparse structural equation models exploiting genetic perturbations.

    Directory of Open Access Journals (Sweden)

    Xiaodong Cai

    Integrating genetic perturbations with gene expression data not only improves the accuracy of regulatory network topology inference, but also enables learning of causal regulatory relations between genes. Although a number of methods have been developed to integrate both types of data, efficient and powerful algorithms are still needed. In this paper, sparse structural equation models (SEMs) are employed to integrate both gene expression data and cis-expression quantitative trait loci (cis-eQTL), for modeling gene regulatory networks in accordance with the biological evidence that genes regulate, or are regulated by, only a small number of genes. A systematic inference method named sparsity-aware maximum likelihood (SML) is developed for SEM estimation. Using simulated directed acyclic or cyclic networks, the SML performance is compared with that of two state-of-the-art algorithms: the adaptive Lasso (AL) based scheme, and the QTL-directed dependency graph (QDG) method. Computer simulations demonstrate that the novel SML algorithm offers significantly better performance than the AL-based and QDG algorithms across all sample sizes from 100 to 1,000, in terms of detection power and false discovery rate, in all the cases tested, which include acyclic or cyclic networks of 10, 30 and 300 genes. The SML method is further applied to infer a network of 39 human genes that are related to the immune function and are chosen to have a reliable eQTL per gene. The resulting network consists of 9 genes and 13 edges. Most of the edges represent interactions reasonably expected from experimental evidence, while the remaining ones may indicate the emergence of new interactions. The sparse SEM and the efficient SML algorithm provide an effective means of exploiting both gene expression and perturbation data to infer gene regulatory networks. An open-source computer program implementing the SML algorithm is freely available upon request.
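
    As a simplified stand-in for the SML algorithm (not the authors' method), the sketch below fits, for each gene, a Lasso-penalized regression of its expression on the other genes' expression and on its own cis-eQTL genotype, and reads the non-zero coefficients on other genes as candidate directed edges. Because this per-gene regression does not fully exploit the eQTLs to orient causal directions, spurious reverse edges can appear; the data, network, and penalty weight are invented for illustration.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n_samples, n_genes = 200, 10

        # Invented ground truth: one cis-eQTL per gene and two regulatory edges.
        genotypes = rng.integers(0, 3, size=(n_samples, n_genes)).astype(float)
        true_net = np.zeros((n_genes, n_genes))
        true_net[1, 0] = 0.8       # gene 0 activates gene 1
        true_net[2, 1] = -0.6      # gene 1 represses gene 2

        # SEM data generation: y = B y + 0.5 x + e  =>  y = (I - B)^-1 (0.5 x + e)
        noise = 0.1 * rng.standard_normal((n_samples, n_genes))
        expr = (0.5 * genotypes + noise) @ np.linalg.inv(np.eye(n_genes) - true_net).T

        estimated = np.zeros_like(true_net)
        for g in range(n_genes):
            others = [j for j in range(n_genes) if j != g]
            # regressors: expression of the other genes plus this gene's own eQTL
            X = np.column_stack([expr[:, others], genotypes[:, [g]]])
            fit = Lasso(alpha=0.05).fit(X, expr[:, g])
            estimated[g, others] = fit.coef_[:-1]   # drop the eQTL coefficient

        print("recovered candidate edges (|coef| > 0.1):")
        for i, j in zip(*np.nonzero(np.abs(estimated) > 0.1)):
            print(f"  gene {j} -> gene {i}  coef {estimated[i, j]:+.2f}")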

  16. Differential T cell response against BK virus regulatory and structural antigens: A viral dynamics modelling approach.

    Directory of Open Access Journals (Sweden)

    Arturo Blazquez-Navarro

    2018-05-01

    BK virus (BKV) associated nephropathy affects 1-10% of kidney transplant recipients, leading to graft failure in about 50% of cases. Immune responses against different BKV antigens have been shown to have a prognostic value for disease development. Data currently suggest that the structural antigens and regulatory antigens of BKV might each trigger a different mode of action of the immune response. To study the influence of different modes of action of the cellular immune response on BKV clearance dynamics, we have analysed the kinetics of BKV plasma load and anti-BKV T cell response (Elispot) in six patients with BKV associated nephropathy using ODE modelling. The results show that only a small number of hypotheses on the mode of action are compatible with the empirical data. The hypothesis with the highest empirical support is that structural antigens trigger blocking of virus production from infected cells, whereas regulatory antigens trigger an acceleration of death of infected cells. These differential modes of action could be important for our understanding of BKV resolution, as according to the hypothesis, only regulatory antigens would trigger a fast and continuous clearance of the viral load. Other hypotheses showed a lower degree of empirical support, but could potentially explain the clearing mechanisms of individual patients. Our results highlight the heterogeneity of the dynamics, including the delay between immune response against structural versus regulatory antigens, and its relevance for BKV clearance. Our modelling approach is the first that studies the process of BKV clearance by bringing together viral and immune kinetics and can provide a framework for personalised hypotheses generation on the interrelations between cellular immunity and viral dynamics.

  17. Differential T cell response against BK virus regulatory and structural antigens: A viral dynamics modelling approach.

    Science.gov (United States)

    Blazquez-Navarro, Arturo; Schachtner, Thomas; Stervbo, Ulrik; Sefrin, Anett; Stein, Maik; Westhoff, Timm H; Reinke, Petra; Klipp, Edda; Babel, Nina; Neumann, Avidan U; Or-Guil, Michal

    2018-05-01

    BK virus (BKV) associated nephropathy affects 1-10% of kidney transplant recipients, leading to graft failure in about 50% of cases. Immune responses against different BKV antigens have been shown to have a prognostic value for disease development. Data currently suggest that the structural antigens and regulatory antigens of BKV might each trigger a different mode of action of the immune response. To study the influence of different modes of action of the cellular immune response on BKV clearance dynamics, we have analysed the kinetics of BKV plasma load and anti-BKV T cell response (Elispot) in six patients with BKV associated nephropathy using ODE modelling. The results show that only a small number of hypotheses on the mode of action are compatible with the empirical data. The hypothesis with the highest empirical support is that structural antigens trigger blocking of virus production from infected cells, whereas regulatory antigens trigger an acceleration of death of infected cells. These differential modes of action could be important for our understanding of BKV resolution, as according to the hypothesis, only regulatory antigens would trigger a fast and continuous clearance of the viral load. Other hypotheses showed a lower degree of empirical support, but could potentially explain the clearing mechanisms of individual patients. Our results highlight the heterogeneity of the dynamics, including the delay between immune response against structural versus regulatory antigens, and its relevance for BKV clearance. Our modelling approach is the first that studies the process of BKV clearance by bringing together viral and immune kinetics and can provide a framework for personalised hypotheses generation on the interrelations between cellular immunity and viral dynamics.
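
    A minimal sketch of the kind of ODE model referred to above (a generic target cell-infected cell-virus system, not the authors' fitted model): the two hypothesized modes of action enter as an efficacy eps that blocks virion production (response against structural antigens) and as an added death rate kappa of infected cells (response against regulatory antigens). All parameter values are invented for illustration.

        import numpy as np

        def simulate_bkv(eps=0.0, kappa=0.0, days=120.0, dt=0.01):
            """Generic viral dynamics model, integrated with explicit Euler:
                 dT/dt = s - d*T - beta*T*V
                 dI/dt = beta*T*V - (delta + kappa)*I
                 dV/dt = (1 - eps)*p*I - c*V
            with illustrative (invented) parameter values."""
            s, d, beta = 1.0e3, 0.01, 2.0e-8      # target-cell supply, death, infection
            delta, p, c = 0.05, 50.0, 1.0         # infected-cell death, production, clearance
            T, I, V = 1.0e5, 0.0, 1.0e3           # initial state (V in copies/mL)
            loads = []
            for _ in range(int(days / dt)):
                dT = s - d * T - beta * T * V
                dI = beta * T * V - (delta + kappa) * I
                dV = (1.0 - eps) * p * I - c * V
                T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
                loads.append(V)
            return np.array(loads)

        # Compare the two hypothesized modes of action after immune activation
        v_block = simulate_bkv(eps=0.9)     # structural-antigen response: block production
        v_kill = simulate_bkv(kappa=0.2)    # regulatory-antigen response: kill infected cells
        print("final load, production blocking :", f"{v_block[-1]:.1e}")
        print("final load, enhanced cell death :", f"{v_kill[-1]:.1e}")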

  18. A study on the dependency between turbulent models and mesh configurations of CFD codes

    International Nuclear Information System (INIS)

    Bang, Jungjin; Heo, Yujin; Jerng, Dong-Wook

    2015-01-01

    This paper focuses on the analysis of the behavior of hydrogen mixing and hydrogen stratification, using the GOTHIC code and a CFD code. Specifically, we examined the mesh sensitivity and how the turbulence model affects hydrogen stratification or hydrogen mixing, depending on the mesh configuration. In this work, sensitivity analyses for the meshes and the turbulence models were conducted for mixing and stratification phenomena. During severe accidents in a nuclear power plant, hydrogen may be generated, and this will complicate the atmospheric conditions in the containment by causing stratification of air, steam, and hydrogen. This could significantly impact containment integrity analyses, as hydrogen could accumulate in local regions. From this need arises the importance of research on the stratification of gases in the containment. Two computational fluid dynamics codes, i.e. GOTHIC and STAR-CCM+, were adopted, and the computational results were benchmarked against the experimental data from the PANDA facility. The main findings observed through the present work can be summarized as follows: 1) In the case of the GOTHIC code, the aspect ratio of the mesh was found to be more important than the mesh size. Also, if the number of meshes is over 3,000, the effects of the turbulence models were marginal. 2) For STAR-CCM+, the tendency is quite different from the GOTHIC code. That is, the effects of the turbulence models were small for a smaller number of meshes; however, as the number of meshes increases, the effects of the turbulence models become significant. Another observation is that, away from the injection orifice, the role of the turbulence models tended to be important due to the nature of the mixing process and the induced jet stream.