WorldWideScience

Sample records for model source code

  1. Source coding model for repeated snapshot imaging

    CERN Document Server

    Li, Junhui; Yang, Dongyue; Wu, Guohua; Yin, Longfei; Guo, Hong

    2016-01-01

    Imaging based on successive repeated snapshot measurements is modeled as a source coding process in information theory. The number of measurements necessary to maintain a certain error rate is characterized as the rate-distortion function of the source coding. A quantitative formula relating the error rate to the number of measurements is derived from the information capacity of the imaging system. A second-order fluctuation correlation imaging (SFCI) experiment with pseudo-thermal light verifies this formula, paving the way for introducing information theory into the study of ghost imaging (GI), both conventional and computational.
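
    The derived formula is not reproduced in this abstract. As a hedged, textbook stand-in only (not the authors' result): for a Bernoulli(p) source reconstructed to within Hamming distortion D, the rate-distortion function and the implied number of snapshot measurements N, for n image elements and per-measurement information capacity C, read

        \[ R(D) = H_b(p) - H_b(D), \qquad 0 \le D \le \min(p, 1-p), \]
        \[ N(D) \gtrsim \frac{n \, R(D)}{C}, \]

    where H_b denotes the binary entropy function.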

  2. Using cryptology models for protecting PHP source code

    Science.gov (United States)

    Jevremović, Aleksandar; Ristić, Nenad; Veinović, Mladen

    2013-10-01

    Protecting PHP scripts from unwanted use, copying, and modification is a significant problem today. Existing source-code-level solutions mostly work as obfuscators; they are free but provide no serious protection. Solutions that encode opcode are more secure, but they are commercial and require a closed-source, proprietary extension to the PHP interpreter. Additionally, encoded opcode is not compatible with future interpreter versions, which implies re-buying encoders from the authors. Finally, if the extension's source code is compromised, all scripts encoded with that solution are compromised too. In this paper, we present a new model for a free and open-source PHP script protection solution. The protection level provided by the proposed solution is equal to that of commercial solutions. The model is based on conclusions drawn from using standard cryptology models to analyze the strengths and weaknesses of existing solutions, with script protection viewed as a secure communication channel in the cryptologic sense.

  3. Secondary neutron source modelling using MCNPX and ALEPH codes

    Science.gov (United States)

    Trakas, Christos; Kerkar, Nordine

    2014-06-01

    Monitoring the subcritical state and divergence of reactors requires the presence of neutron sources. It is mainly secondary neutrons from these sources that feed the ex-core detectors (SRD, Source Range Detector), whose counting rate is correlated with the subcriticality level of the reactor. In cycle 1, primary neutrons are provided by sources activated outside the reactor (e.g. Cf-252); part of this source can be used for the divergence of cycle 2 (not systematically). A second family of neutron sources is used for the second cycle: the spontaneous neutrons of actinides produced after irradiation of fuel in the first cycle. In most reactors, neither family of sources is sufficient to efficiently monitor the divergence of the second and subsequent cycles. Secondary source clusters (SSC) fulfil this role. In the present case, the SSC [Sb, Be], after activation in the first cycle (production of the unstable Sb-124), produces in subsequent cycles a photo-neutron source through the gamma (from Sb-124) - neutron (on Be-9) reaction. This paper presents a model of the process between irradiation in cycle 1 and the cycle 2 results for the SRD counting rate at the beginning of cycle 2, using the MCNPX code and the depletion chain ALEPH-V1 (a coupling of the MCNPX and ORIGEN codes). The results of this simulation are compared with experimental results from two PWR 1450 MWe-N4 reactors, and good agreement is observed. The subcriticality of the reactors is at about -15,000 pcm. Discrepancies in the SRD counting rate between calculations and measurements are on the order of 10%, lower than the combined uncertainty of the measurements and the code simulation. This comparison validates the AREVA methodology, which provides a best-estimate SRD counting rate for cycle 2 and subsequent cycles and allows optimizing the position of the SSC, since the geographic location of the sources is the main parameter for optimal monitoring of subcritical states.
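
    As a hedged illustration of the inter-cycle bookkeeping (the actual model couples MCNPX and ORIGEN through ALEPH; the numbers below are hypothetical), the photo-neutron source strength available at the start of cycle 2 is governed by the exponential decay of Sb-124, whose half-life is about 60.2 days:

        import math

        SB124_HALF_LIFE_D = 60.2  # days, Sb-124 half-life
        DECAY_CONST = math.log(2) / SB124_HALF_LIFE_D

        def sb124_activity(a0_bq, cooldown_days):
            """Activity remaining after the inter-cycle outage (simple exponential decay)."""
            return a0_bq * math.exp(-DECAY_CONST * cooldown_days)

        a0 = 1.0e14  # Bq at end of cycle 1, hypothetical
        print(f"Activity after a 40-day outage: {sb124_activity(a0, 40.0):.3e} Bq")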

  4. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining adequate flux. One solution is to employ computational imaging techniques such as a magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps out around 50 μm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because it models the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate that model into our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
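
    The magnification geometry described above is plain similar triangles. A minimal sketch (variable names are ours, not the paper's):

        def csi_magnification(d_mask_obj, d_obj_det):
            """Magnification of a coded source imaging system:
            M = (d1 + d2) / d1, with d1 the mask-to-object distance
            and d2 the object-to-detector distance."""
            return (d_mask_obj + d_obj_det) / d_mask_obj

        # Shrinking d1 relative to d2 raises M, so a 50 um detector limit
        # maps to an effective object-plane resolution of roughly 50 um / M.
        M = csi_magnification(d_mask_obj=0.05, d_obj_det=5.0)  # meters, hypothetical
        print(M, 50e-6 / M)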

  5. Code-to-code benchmark tests for 3D simulation models dedicated to the extraction region in negative ion sources

    Science.gov (United States)

    Nishioka, S.; Mochalskyy, S.; Taccogna, F.; Hatayama, A.; Fantz, U.; Minelli, P.

    2017-08-01

    The development of kinetic particle models for the extraction region in negative hydrogen ion sources is indispensable and helpful for clarifying the physics of H- beam extraction. Recently, various 3D kinetic particle codes have been developed to study the extraction mechanism, but they have not yet been directly compared with one another. We have therefore carried out a code-to-code benchmark activity to validate our codes. In the present study, the progress of this benchmark activity is summarized. So far, reasonable agreement between the codes has been obtained using realistic plasma parameters, at least for the following items: (1) the potential profile under vacuum conditions; (2) the temporal evolution of extracted current densities and the electric potential profiles for a plasma consisting of only electrons and positive ions.

  6. Fine-Grained Energy Modeling for the Source Code of a Mobile Application

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick

    2016-01-01

    The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State of the art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able...

  7. Simple models of two-dimensional information sources and codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Shtarkov, Y. M.

    1998-01-01

    We consider discrete random fields which have simple descriptions of rows and columns. We present constructions which combine high entropy with simple means of generating the fields and analyzing the probability distribution. Hidden state Markov sources are an essential tool in the construction...

  8. An analytical model for source code distributability verification

    Institute of Scientific and Technical Information of China (English)

    Ayaz ISAZADEH; Jaber KARIMPOUR; Islam ELGEDAWY; Habib IZADKHAH

    2014-01-01

    One way to speed up the execution of sequential programs is to divide them into concurrent segments and execute such segments in parallel over a distributed computing environment. We argue that the execution speedup primarily depends on the degree of concurrency between the identified segments as well as the communication overhead between them. To guarantee the best speedup, we have to obtain the maximum possible concurrency degree between the identified segments, taking communication overhead into consideration. Existing code distributor and multi-threading approaches do not fulfill such requirements; hence, they cannot provide the expected distributability gains in advance. To overcome these limitations, we propose a novel approach for verifying the distributability of sequential object-oriented programs. The proposed approach enables users to see the maximum speedup gains before the actual distributed implementation, as it computes an objective function used to measure different distribution values for the same program, taking into consideration both remote and sequential calls. Experimental results showed that the proposed approach successfully determines the distributability of different real-life software applications when compared with their real-life sequential and distributed implementations.
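
    The abstract does not give the objective function itself, so the sketch below uses a generic speedup estimate of our own (hypothetical names and formula) purely to make the trade-off concrete: concurrency gains are capped by the slowest segment, and communication overhead eats into them.

        def estimated_speedup(segment_times, comm_overhead):
            """Hypothetical distributability objective: sequential run time divided
            by the critical-path time of the distributed version plus messaging cost."""
            sequential = sum(segment_times)
            distributed = max(segment_times) + comm_overhead
            return sequential / distributed

        print(estimated_speedup([4.0, 4.0, 4.0], comm_overhead=1.0))   # balanced partition
        print(estimated_speedup([10.0, 1.0, 1.0], comm_overhead=0.2))  # skewed partition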

  9. Review of release models used in source-term codes

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jongsoon [Department of Nuclear Engineering, Chosun University, Kwangju (Korea, Republic of)

    1999-07-01

    Throughout this review, the limitations of current release models are identified and ways of improving them are suggested. By incorporating recent experimental results, recommendations for future release modeling activities can be made. All release models under review were compared with respect to the following six items: scenario, assumptions, mathematical formulation, solution method, radioactive decay chains considered, and geometry. The following nine models are considered for review: SOTEC and SCCEX (CNWRA), DOE/INTERA, TSPA (SNL), Vault Model (AECL), CCALIBRE (SKI), AREST (PNL), Risk Assessment (EPRI), and TOSPAC (SNL). (author)

  10. Astrophysics Source Code Library

    CERN Document Server

    Allen, Alice; Berriman, Bruce; Hanisch, Robert J; Mink, Jessica; Teuben, Peter J

    2012-01-01

    The Astrophysics Source Code Library (ASCL), founded in 1999, is a free on-line registry for source codes of interest to astronomers and astrophysicists. The library is housed on the discussion forum for Astronomy Picture of the Day (APOD) and can be accessed at http://ascl.net. The ASCL has a comprehensive listing that covers a significant number of the astrophysics source codes used to generate results published in or submitted to refereed journals and continues to grow. The ASCL currently has entries for over 500 codes; its records are citable and are indexed by ADS. The editors of the ASCL and members of its Advisory Committee were on hand at a demonstration table in the ADASS poster room to present the ASCL, accept code submissions, show how the ASCL is starting to be used by the astrophysics community, and take questions on and suggestions for improving the resource.

  11. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    Directory of Open Access Journals (Sweden)

    Dragutin KERMEK

    2016-04-01

    In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes to it, such as renaming variables. Several tools, such as Sherlock, JPlag and Moss, have been devised to detect source code plagiarism. However, for larger student assignments and projects that involve many source code files, these tools are not particularly effective. Issues may also arise when source code is given to students in class so they can copy it; in such cases these tools do not provide satisfactory results and reports. In this study, we present an improved process model for plagiarism detection when multiple student files exist and allowed source code is present. The research in this paper uses the Sherlock detection tool, although the presented process model can be combined with any plagiarism detection engine. The proposed model is tested on assignments in three courses over two subsequent academic years.
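
    One concrete step in such a process model is filtering out the instructor-provided ("allowed") code before running the detection engine, so that shared template code does not inflate similarity scores. A minimal sketch of that pre-filtering step (our own illustration; the paper's Sherlock-based pipeline is more elaborate):

        def strip_allowed_code(submission_lines, allowed_lines):
            """Drop lines that also appear in instructor-provided code
            (whitespace-normalized) before running similarity detection."""
            allowed = {line.strip() for line in allowed_lines if line.strip()}
            return [line for line in submission_lines if line.strip() not in allowed]

        # Feed the filtered files, not the raw submissions, to Sherlock/JPlag/Moss.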

  12. SENR, A Super-Efficient Code for Gravitational Wave Source Modeling: Latest Results

    Science.gov (United States)

    Ruchlin, Ian; Etienne, Zachariah; Baumgarte, Thomas

    2017-01-01

    The science we extract from gravitational wave observations will be limited by our theoretical understanding, so with the recent breakthroughs by LIGO, reliable gravitational wave source modeling has never been more critical. Due to efficiency considerations, current numerical relativity codes are very limited in their applicability to direct LIGO source modeling, so it is important to develop new strategies for making our codes more efficient. We introduce SENR, a Super-Efficient, open-development numerical relativity (NR) code aimed at improving the efficiency of moving-puncture-based LIGO gravitational wave source modeling by a factor of 100. SENR builds upon recent work, in which the BSSN equations are evolved in static spherical coordinates, to allow dynamical coordinates with arbitrary spatial distributions. The physical domain is mapped to a uniform-resolution grid on which derivative operations are approximated using standard central finite difference stencils. The source code is designed to be human-readable, efficient, parallelized, and readily extensible. We present the latest results from the SENR code.

  13. Coded source neutron imaging

    Energy Technology Data Exchange (ETDEWEB)

    Bingham, Philip R [ORNL; Santos-Villalobos, Hector J [ORNL

    2011-01-01

    Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100 μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100 μm and 10 μm aperture hole diameters show resolutions matching the hole diameters.
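
    The MTF-from-LSF computation mentioned at the end is standard: differentiate the edge spread function obtained from the tilted-edge simulation to get the line spread function, then take the magnitude of its Fourier transform. A minimal numpy sketch (function names ours):

        import numpy as np

        def mtf_from_esf(esf, dx):
            """Edge spread -> line spread -> modulation transfer function."""
            lsf = np.gradient(esf, dx)              # LSF is the derivative of the ESF
            mtf = np.abs(np.fft.rfft(lsf))
            mtf /= mtf[0]                           # normalize to 1 at zero frequency
            freqs = np.fft.rfftfreq(len(lsf), d=dx)
            return freqs, mtf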

  15. Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture

    Science.gov (United States)

    Meng, Chunfang

    2017-03-01

    We present Defmod, an open source (linear) finite element code that enables us to efficiently model crustal deformation due to (quasi-)static and dynamic loading, poroelastic flow, viscoelastic flow and frictional fault slip. Ali (2015) provides the original code, introducing an implicit solver for (quasi-)static problems and an explicit solver for dynamic problems. The fault constraint is implemented via Lagrange multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static state and the dynamic state. The code is capable of modeling episodic fault rupture driven by quasi-static loading, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against established results.
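
    The hybrid solver can be pictured as a loop that advances implicitly under quasi-static loading until a failure criterion is met on the fault, then hands off to the explicit dynamic solver until slip arrests. A schematic sketch only (method names are hypothetical; the actual code is a parallel finite element implementation):

        def run_hybrid(model, t_end, dt_static, dt_dynamic):
            """Schematic quasi-static/dynamic switching driven by a failure criterion."""
            t, dynamic = 0.0, False
            while t < t_end:
                if not dynamic:
                    model.implicit_step(dt_static)     # quasi-static loading
                    dynamic = model.fault_failure()    # e.g. Coulomb criterion exceeded
                    t += dt_static
                else:
                    model.explicit_step(dt_dynamic)    # dynamic rupture propagation
                    dynamic = not model.slip_arrested()
                    t += dt_dynamic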

  16. Beyond the Business Model: Incentives for Organizations to Publish Software Source Code

    Science.gov (United States)

    Lindman, Juho; Juutilainen, Juha-Pekka; Rossi, Matti

    The software stack opened under Open Source Software (OSS) licenses is growing rapidly. Commercial actors have released considerable amounts of previously proprietary source code. These actions raise the question of why companies choose a strategy based on giving away software assets. Research on the outbound OSS approach has tried to answer this question with the concept of the “OSS business model”. When studying the reasons for code release, we have observed that the business model concept is too generic to capture the many incentives organizations have. Instead, in this paper we investigate empirically what companies' incentives are by means of an exploratory case study of three organizations in different stages of their code release. Our results indicate that the companies aim to promote standardization, obtain development resources, gain cost savings, improve the quality of software, increase the trustworthiness of software, or steer OSS communities. We conclude that future research on outbound OSS could benefit from focusing on the heterogeneous incentives for code release rather than on revenue models.

  17. Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code

    Science.gov (United States)

    Waithe, Kenrick A.

    2004-01-01

    A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing into the momentum and energy equations a side force that adjusts its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments on a single low-profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data from an S-duct with 22 co-rotating, low-profile vortex generators. The source term model allowed a grid reduction of about seventy percent compared with numerical simulations of a fully gridded vortex generator on a flat plate, without adversely affecting the development and capture of the vortex created. The source term model predicted the shape and size of the streamwise vorticity and velocity contours very well compared with both numerical simulations and experimental data; the peak vorticity and its location were also predicted very well, and the predicted circulation matches that of the numerical simulation. The source term model likewise predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The model allows a researcher to quickly investigate different locations of individual vortex generators or rows of them, conducting a preliminary investigation with minimal grid generation and computational time.

  18. Development of a Field-Aligned Integrated Conductivity Model Using the SAMI2 Open Source Code

    Science.gov (United States)

    Hildebrandt, Kyle; Gearheart, Michael; West, Keith

    2003-03-01

    The SAMI2 open source code is a middle- and low-latitude ionospheric model developed by the Naval Research Lab for the dual purposes of research and education. At the time of this writing, the source code has no component for the integrated magnetic field-aligned conductivity. Human activities that depend on conditions in the space environment, such as communications, have grown and will continue to do so. With this growth come higher financial stakes, as changes in the space environment have greater economic impact. To minimize the adverse effects of these changes, predictive models are being developed. Among the geophysical parameters that affect communications is the conductivity of the ionosphere. As part of the commitment of Texas A&M University-Commerce to building a strong undergraduate research program, a team consisting of two students and a faculty mentor is developing a model of the integrated field-aligned conductivity using the SAMI2 code. The current status of the research and preliminary results are presented, as well as a summary of future work.
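
    The quantity being added is a field-line integral of the local conductivity, Sigma = integral of sigma ds along the magnetic field line, evaluated from plasma parameters SAMI2 already computes. A hedged numerical sketch of that integral (names ours; SAMI2 itself is a Fortran code):

        import numpy as np

        def integrated_conductivity(sigma, s):
            """Trapezoidal field-line integral Sigma = int sigma ds, with the local
            conductivity sigma (S/m) sampled at arc-length positions s (m)."""
            return np.trapz(sigma, x=s)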

  19. Joint source channel coding using arithmetic codes

    CERN Document Server

    Bi, Dongsheng

    2009-01-01

    Based on the encoding process, arithmetic codes can be viewed as tree codes, and current proposals for decoding arithmetic codes with forbidden symbols belong to sequential decoding algorithms and their variants. In this monograph, we propose a new way of looking at arithmetic codes with forbidden symbols. If a limit is imposed on the maximum value of a key parameter in the encoder, this modified arithmetic encoder can also be modeled as a finite state machine, and the code generated can be treated as a variable-length trellis code. The number of states used can be reduced and techniques used fo…

  20. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    We propose a coding scheme based on the use of systematic linear codes with a low-density generator matrix (LDGM codes) for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
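
    Systematic LDGM encoding is compact enough to state directly: the codeword is the message followed by parity bits formed with a sparse generator part, all arithmetic mod 2. A toy numpy sketch (tiny random matrix; real designs choose the degree distribution carefully):

        import numpy as np

        rng = np.random.default_rng(0)
        k, m = 8, 4                                          # message bits, parity bits
        G = (rng.random((k, m)) < 0.25).astype(np.uint8)     # sparse generator part

        def ldgm_encode(u):
            """Systematic encoding: codeword = [u | u G mod 2]."""
            return np.concatenate([u, u @ G % 2])

        u = rng.integers(0, 2, size=k, dtype=np.uint8)
        print(ldgm_encode(u))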

  1. DISCRETE DYNAMIC MODEL OF BEVEL GEAR – VERIFICATION THE PROGRAM SOURCE CODE FOR NUMERICAL SIMULATION

    Directory of Open Access Journals (Sweden)

    Krzysztof TWARDOCH

    2014-06-01

    The article presents a new physical and mathematical model of a bevel gear for studying the influence of design parameters and operating factors on the dynamic state of the gear transmission. It discusses the process of verifying the proper operation of the author's calculation program used to determine solutions of the dynamic model of the bevel gear, and presents the block diagram of the computing algorithm that was used to create the program for numerical simulation. The program source code is written in MATLAB, an interactive environment for scientific and engineering calculations.

  2. Authorship Attribution of Source Code

    Science.gov (United States)

    Tennyson, Matthew F.

    2013-01-01

    Authorship attribution of source code is the task of deciding who wrote a program, given its source code. Applications include software forensics, plagiarism detection, and determining software ownership. A number of methods for the authorship attribution of source code have been presented in the past. A review of those existing methods is…

  3. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    Science.gov (United States)

    Kermek, Dragutin; Novak, Matija

    2016-01-01

    In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…

  4. New Source Term Model for the RESRAD-OFFSITE Code Version 3

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States); Gnanapragasam, Emmanuel [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Shih-Yew [Argonne National Lab. (ANL), Argonne, IL (United States)

    2013-06-01

    This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.
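
    The three options translate into simple release expressions. A hedged sketch for a single radionuclide (parameter names ours; the RESRAD-OFFSITE implementation additionally couples these to decay, ingrowth and transport):

        def first_order_release(inventory, leach_rate):
            """Option 1: release proportional to the current inventory; the
            user-specified leach rate is the proportionality constant."""
            return leach_rate * inventory

        def equilibrium_desorption_conc(inventory, kd, water_vol, solid_mass):
            """Option 2: aqueous concentration from solid/liquid partitioning
            with distribution coefficient Kd, via M = Cw*V + Kd*Cw*Ms."""
            return inventory / (water_vol + kd * solid_mass)

        def uniform_release(initial_inventory, duration, dt):
            """Option 3: a constant fraction of the initially contaminated
            material is released during each time interval of length dt."""
            return initial_inventory * dt / duration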

  5. Source code retrieval using conceptual similarity

    NARCIS (Netherlands)

    Mishne, G.A.; de Rijke, M.

    2004-01-01

    We propose a method for retrieving segments of source code from a large repository. The method is based on conceptual modeling of the code, combining information extracted from the structure of the code with standard information-distance measures. Our results show an improvement over traditional retrieval…

  6. Astrophysics Source Code Library Enhancements

    CERN Document Server

    Hanisch, Robert J; Berriman, G Bruce; DuPrie, Kimberly; Mink, Jessica; Nemiroff, Robert J; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Taylor, Mark; Teuben, Peter J; Wallin, John

    2014-01-01

    The Astrophysics Source Code Library (ASCL; ascl.net) is a free online registry of codes used in astronomy research; it currently contains over 900 codes and is indexed by ADS. The ASCL has recently moved a new infrastructure into production. The new site provides a true database for the code entries and integrates the WordPress news and information pages and the discussion forum into one site. Previous capabilities are retained and permalinks to ascl.net continue to work. This improvement offers more functionality and flexibility than the previous site, is easier to maintain, and offers new possibilities for collaboration. This presentation covers these recent changes to the ASCL.

  7. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side, offering to shift processing steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  9. Distributed Joint Source-Channel Coding for arbitrary memoryless correlated sources and Source coding for Markov correlated sources using LDPC codes

    CERN Document Server

    Aggarwal, Vaneet

    2008-01-01

    In this paper, we give a distributed joint source-channel coding scheme for arbitrary correlated sources, for an arbitrary point in the Slepian-Wolf rate region, and for arbitrary link capacities, using LDPC codes. We consider the Slepian-Wolf setting of two sources and one destination, with one of the sources derived from the other source by some correlation model known at the decoder. Distributed encoding and separate decoding are used for the two sources. We also give a distributed source coding scheme for source correlation with memory that achieves any point in the Slepian-Wolf achievable rate region. In this setting, we perform separate encoding but joint decoding.
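
    The standard LDPC realization of a Slepian-Wolf encoder, which schemes like this build on, is syndrome-based: transmit only the syndrome of the source word and let the decoder exploit the correlated side information. A toy numpy sketch (brute-force decoder for clarity; real systems use large sparse H and belief propagation):

        import numpy as np

        def sw_encode(x, H):
            """Slepian-Wolf encoding: send s = H x mod 2 (rows(H) bits, not len(x))."""
            return H @ x % 2

        def sw_decode(s, y, H):
            """Pick the 0/1 word with syndrome s closest to the side information y."""
            n = len(y)
            best, best_dist = None, n + 1
            for v in range(2 ** n):                  # exponential toy search
                cand = np.array([(v >> i) & 1 for i in range(n)], dtype=np.uint8)
                if np.array_equal(H @ cand % 2, s) and int(np.sum(cand ^ y)) < best_dist:
                    best, best_dist = cand, int(np.sum(cand ^ y))
            return best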

  10. Model Children's Code.

    Science.gov (United States)

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  11. Secure Source Coding with a Helper

    CERN Document Server

    Tandon, Ravi; Ramchandran, Kannan

    2009-01-01

    We consider a secure lossless source coding problem with a rate-limited helper. In particular, Alice observes an i.i.d. source $X^{n}$ and wishes to transmit this source losslessly to Bob at a rate $R_{x}$. A helper, say Helen, observes a correlated source $Y^{n}$ and transmits at a rate $R_{y}$ to Bob. A passive eavesdropper can observe the coded output of Alice. The equivocation $\Delta$ is measured by the conditional entropy $H(X^{n}|J_{x})/n$, where $J_{x}$ is the coded output of Alice. We first completely characterize the rate-equivocation region for this secure source coding model, where we show that Slepian-Wolf type coding is optimal. We next study two generalizations of this model and provide single-letter characterizations for the respective rate-equivocation regions. In particular, we first consider the case of a two-sided helper where Alice also has access to the coded output of Helen. We show that for this case, Slepian-Wolf type coding is suboptimal and one can further decrease the information leakage…

  12. Quantum source-channel codes

    CERN Document Server

    Pastawski, Fernando; Wilming, Henrik

    2016-01-01

    Approximate quantum error-correcting codes are codes with "soft recovery guarantees", wherein information can be approximately recovered. In this article, we propose complementary "soft code-spaces", wherein a weighted prior distribution is assumed over the possible logical input states. The performance in protecting information from noise is then evaluated in terms of entanglement fidelity. We apply a recent construction for approximate recovery maps, which comes with guaranteed lower bounds on the decoding performance. These lower bounds are straightforwardly obtained by evaluating entropies on marginals of the mixed state which represents the "soft code-space". As an example, we consider thermal states of the transverse-field Ising model at criticality and provide numerical evidence that the entanglement fidelity admits non-trivial recoverability from local errors. This provides the first concrete interpretation of a bona fide conformal field theory as a quantum error-correcting code. We further suggest, t…

  13. Methodology and Toolset for Model Verification, Hardware/Software co-simulation, Performance Optimisation and Customisable Source-code generation

    DEFF Research Database (Denmark)

    Berger, Michael Stübert; Soler, José; Yu, Hao;

    2013-01-01

    The MODUS project aims to provide a pragmatic and viable solution that will allow SMEs to substantially improve their positioning in the embedded-systems development market. The MODUS tool will provide a model verification and Hardware/Software co-simulation tool (TRIAL) and a performance ... of system properties, and producing inputs to be fed into these engines, interfacing with standard (SystemC) simulation platforms for HW/SW co-simulation, customisable source-code generation towards respecting coding standards and conventions, and software performance-tuning optimisation through automated...

  14. Code Flows : Visualizing Structural Evolution of Source Code

    NARCIS (Netherlands)

    Telea, Alexandru; Auber, David

    2008-01-01

    Understanding detailed changes done to source code is of great importance in software maintenance. We present Code Flows, a method to visualize the evolution of source code geared to the understanding of fine and mid-level scale changes across several file versions. We enhance an existing visual met

  16. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...

  17. Source Code Generator Based on Dynamic Frames

    Directory of Open Access Journals (Sweden)

    Danijel Radošević

    2011-06-01

    This paper presents a model of a source code generator based on dynamic frames. The model is named the SCT model after its three basic components: Specification (S), which describes the application characteristics; Configuration (C), which describes the rules for building applications; and Templates (T), which refer to application building blocks. The process of code generation dynamically creates XML frames containing all building elements (S, C and T) until the final code is produced. This approach is compared to the existing XVCL frames-based model for source code generation. The SCT model is described by both an XML syntax and the appropriate graphical elements. It is aimed at building complete applications, not just skeletons. The main advantages of the presented model are its textual and graphic description, a fully configurable generator, and the reduced overhead of the generated source code. The SCT model is demonstrated on the development of a web application example in order to show its features and justify our design choices.
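
    The S/C/T decomposition can be made concrete with a few lines of template substitution: the specification supplies application characteristics, the configuration maps them onto building rules, and the templates are the building blocks that get instantiated. A deliberately tiny Python stand-in for the XML frames of the actual model:

        from string import Template

        spec = {"entity": "Customer", "fields": ["name", "email"]}   # S: characteristics
        config = {"getter_per_field": True}                          # C: building rules
        getter = Template("    def get_$f(self):\n        return self._$f")  # T: block

        lines = [f"class {spec['entity']}:"]
        if config["getter_per_field"]:
            lines += [getter.substitute(f=field) for field in spec["fields"]]
        print("\n".join(lines))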

  18. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low-Density Parity-Check Accumulate (LDPCA) codes in a DSC scheme with feedback. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  19. Automated model integration at source code level: An approach for implementing models into the NASA Land Information System

    Science.gov (United States)

    Wang, S.; Peters-Lidard, C. D.; Mocko, D. M.; Kumar, S.; Nearing, G. S.; Arsenault, K. R.; Geiger, J. V.

    2014-12-01

    Model integration bridges the data flow between modeling frameworks and models. However, models usually do not fit directly into a particular modeling environment if they were not designed for it. An example is implementing different types of models into the NASA Land Information System (LIS), a software framework for land-surface modeling and data assimilation. Model implementation requires scientific knowledge and software expertise, and it may take a developer months to learn LIS and the model's software structure. Debugging and testing of the model implementation are also time-consuming because neither LIS nor the model is fully understood at the outset. This time spent is costly for research and operational projects. To address this issue, an approach has been developed to automate model integration into LIS. With this in mind, a general model interface was designed to retrieve the forcing inputs, parameters, and state variables needed by the model and to provide state variables and outputs back to LIS. Every model can be wrapped to comply with the interface, usually with a FORTRAN 90 subroutine. Development requires only knowledge of the model and basic programming skills. With such wrappers, the logic is the same for implementing all models, and code templates defined for this general model interface can be re-used with any specific model; therefore, the model implementation can be done automatically. An automated model implementation toolkit was developed with Microsoft Excel and its built-in VBA language. It allows model specifications in three worksheets and contains FORTRAN 90 code templates in VBA programs. According to the model specification, the toolkit generates data structures and procedures within FORTRAN modules and subroutines, which transfer data between LIS and the model wrapper. Model implementation is standardized, and about 80-90% of the development load is reduced. In this presentation, the automated model implementation approach is described along with LIS programming
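
    The general model interface amounts to a fixed calling convention that every wrapped model satisfies. A schematic of that contract in Python terms (the real interface is a FORTRAN 90 subroutine; all names below are illustrative only):

        class GeneralModelInterface:
            """Contract each wrapped model fulfils: take forcings, parameters and
            current states from the framework, return updated states and outputs."""

            def run_step(self, forcings, parameters, states):
                raise NotImplementedError

        class ToyModelWrapper(GeneralModelInterface):
            def run_step(self, forcings, parameters, states):
                # Glue code of this shape is what the Excel/VBA toolkit generates
                # automatically from the specification worksheets.
                states["soil_moisture"] += parameters["gain"] * forcings["precip"]
                return states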

  20. Research on Primary Shielding Calculation Source Generation Codes

    Science.gov (United States)

    Zheng, Zheng; Mei, Qiliang; Li, Hui; Shangguan, Danhua; Zhang, Guangchun

    2017-09-01

    Primary Shielding Calculation (PSC) plays an important role in reactor shielding design and analysis. In order to facilitate PSC, a source generation code is developed to generate cumulative distribution functions (CDFs) for the source particle sampling code of the J Monte Carlo Transport (JMCT) code, and a source particle sampling code is developed to sample source particle directions, types, coordinates, energies and weights from the CDFs. A source generation code is also developed to transform three-dimensional (3D) power distributions in xyz geometry to source distributions in r-θ-z geometry for the J Discrete Ordinate Transport (JSNT) code. Validation on the PSC models of the Qinshan No. 1 nuclear power plant (NPP) and the CAP1400 and CAP1700 reactors is performed. Numerical results show that the theoretical model and the codes are both correct.
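
    Sampling source particles from a tabulated power distribution is classic inverse-transform sampling: build the CDF once, then map uniform random numbers through it. A numpy sketch of the core idea (the actual codes do this per direction, type, coordinate, energy and weight):

        import numpy as np

        def build_cdf(weights):
            cdf = np.cumsum(np.asarray(weights, dtype=float))
            return cdf / cdf[-1]

        def sample_bins(cdf, n, rng):
            """Inverse transform: index i such that cdf[i-1] < u <= cdf[i]."""
            return np.searchsorted(cdf, rng.random(n))

        rng = np.random.default_rng(1)
        power = [0.2, 0.5, 0.2, 0.1]   # hypothetical 4-bin power distribution
        print(np.bincount(sample_bins(build_cdf(power), 100_000, rng)) / 100_000)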

  1. Practices in Code Discoverability: Astrophysics Source Code Library

    CERN Document Server

    Allen, Alice; Nemiroff, Robert J; Shamir, Lior

    2012-01-01

    Here we describe the Astrophysics Source Code Library (ASCL), which takes an active approach to sharing astrophysical source code. The ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments involving the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL now has over 340 codes in it and continues to grow. In 2011, the ASCL (http://ascl.net) has on average added 19 new codes per month. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source.

  2. Iterative Reconstruction of Coded Source Neutron Radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Bingham, Philip R [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK)

    2012-01-01

    Use of a coded source facilitates high-resolution neutron imaging but requires that the radiographic data be deconvolved. In this paper, we compare direct deconvolution with two different iterative algorithms, namely, one based on direct deconvolution embedded in an MLE-like framework and one based on a geometric model of the neutron beam and a least squares formulation of the inverse imaging problem.

  3. Iterative Reconstruction of Coded Source Neutron Radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Bingham, Philip R [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK)

    2013-01-01

    Use of a coded source facilitates high-resolution neutron imaging through magnification but requires that the radiographic data be deconvolved. A comparison of direct deconvolution with two different iterative algorithms has been performed. One iterative algorithm is based on a maximum likelihood estimation (MLE)-like framework, and the second is based on a geometric model of the neutron beam within a least squares formulation of the inverse imaging problem. Simulated data for both uniform and Gaussian-shaped source distributions were used for testing, to understand the impact of non-uniformities in neutron beam distributions on the reconstructed images. Results indicate that the model-based reconstruction method matches the resolution of, and improves on the contrast of, convolution methods in the presence of non-uniform sources. Additionally, the model-based iterative algorithm provides direct calculation of quantitative transmission values, while the convolution-based methods must be normalized based on known values.
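
    The least squares formulation referred to in both entries models the radiograph as b = A x, with A encoding the source distribution, mask and geometry, and minimizes ||A x - b||^2 instead of deconvolving directly. A minimal gradient-descent sketch (dense toy A; the real system matrix is sparse and far larger):

        import numpy as np

        def lsq_reconstruct(A, b, iters=500):
            """Minimize ||A x - b||^2 via x <- x - step * A^T (A x - b)."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step from top singular value
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                x -= step * (A.T @ (A @ x - b))
            return x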

  4. Making your code citable with the Astrophysics Source Code Library

    Science.gov (United States)

    Allen, Alice; DuPrie, Kimberly; Schmidt, Judy; Berriman, G. Bruce; Hanisch, Robert J.; Mink, Jessica D.; Nemiroff, Robert J.; Shamir, Lior; Shortridge, Keith; Taylor, Mark B.; Teuben, Peter J.; Wallin, John F.

    2016-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) is a free online registry of codes used in astronomy research. With nearly 1,200 codes, it is the largest indexed resource for astronomy codes in existence. Established in 1999, it offers software authors a path to citation of their research codes even without publication of a paper describing the software, and offers scientists a way to find codes used in refereed publications, thus improving the transparency of the research. It also provides a method to quantify the impact of source codes in a fashion similar to the science metrics of journal articles. Citations using ASCL IDs are accepted by major astronomy journals and if formatted properly are tracked by ADS and other indexing services. The number of citations to ASCL entries increased sharply from 110 citations in January 2014 to 456 citations in September 2015. The percentage of code entries in ASCL that were cited at least once rose from 7.5% in January 2014 to 17.4% in September 2015. The ASCL's mid-2014 infrastructure upgrade added an easy entry submission form, more flexible browsing, search capabilities, and an RSS feed for updates. A Changes/Additions form added this past fall lets authors submit links for papers that use their codes for addition to the ASCL entry even if those papers don't formally cite the codes, thus increasing the transparency of that research and capturing the value of their software to the community.

  5. Joint Source, Channel Coding, and Secrecy

    Directory of Open Access Journals (Sweden)

    Magli Enrico

    2007-01-01

    We introduce the concept of joint source coding, channel coding, and secrecy. In particular, we propose two practical joint schemes: the first one is based on error-correcting randomized arithmetic codes, while the second one employs turbo codes with compression, error protection, and securization capabilities. We provide simulation results on ideal binary data showing that the proposed schemes achieve satisfactory performance; they also eliminate the need for external compression and ciphering blocks with a significant potential computational advantage.

  7. Source Code Plagiarism--A Student Perspective

    Science.gov (United States)

    Joy, M.; Cosma, G.; Yau, J. Y.-K.; Sinclair, J.

    2011-01-01

    This paper considers the problem of source code plagiarism by students within the computing disciplines and reports the results of a survey of students in Computing departments in 18 institutions in the U.K. This survey was designed to investigate how well students understand the concept of source code plagiarism and to discover what, if any,…

  8. Querying Source Code with Natural Language

    CERN Document Server

    Kimmig, Markus; Mezini, Mira

    2012-01-01

    One common task of developing or maintaining software is searching the source code for information like specific method calls or write accesses to certain fields. This kind of information is required to correctly implement new features and to solve bugs. This paper presents an approach for querying source code with natural language.

  10. Multiterminal source coding with complementary delivery

    CERN Document Server

    Kimura, Akisato

    2008-01-01

    A coding problem for correlated information sources is investigated. Messages emitted from two correlated sources are jointly encoded, and delivered to two decoders. Each decoder has access to one of the two messages to enable it to reproduce the other message. The rate-distortion function for the coding problem and its interesting properties are clarified.

  11. Astrophysics Source Code Library: Incite to Cite!

    CERN Document Server

    DuPrie, Kimberly; Berriman, Bruce; Hanisch, Robert J; Mink, Jessica; Nemiroff, Robert J; Shamir, Lior; Shortridge, Keith; Taylor, Mark B; Teuben, Peter; Wallin, John F

    2013-01-01

    The Astrophysics Source Code Library (ASCL, http://ascl.net/) is an online registry of over 700 source codes that are of interest to astrophysicists, with more being added regularly. The ASCL actively seeks out codes as well as accepting submissions from the code authors, and all entries are citable and indexed by ADS. All codes have been used to generate results published in or submitted to a refereed journal and are available either via a download site or from an identified source. In addition to being the largest directory of scientist-written astrophysics programs available, the ASCL is also an active participant in the reproducible research movement, with presentations at various conferences, numerous blog posts and a journal article. This poster provides a description of the ASCL and the changes that we are starting to see in the astrophysics community as a result of the work we are doing.

  12. Numerical modeling of the Linac4 negative ion source extraction region by 3D PIC-MCC code ONIX

    CERN Document Server

    Mochalskyy, S; Minea, T; Lifschitz, AF; Schmitzer, C; Midttun, O; Steyaert, D

    2013-01-01

    At CERN, a high-performance negative ion (NI) source is required for the 160 MeV H- linear accelerator Linac4. The source is planned to produce 80 mA of H- with an emittance of 0.25 mm·mrad (normalized RMS), which is technically and scientifically very challenging. The optimization of the NI source requires a deep understanding of the underlying physics concerning the production and extraction of the negative ions. The extraction mechanism from the negative ion source is complex, involving a magnetic filter in order to cool down the electron temperature. The ONIX (Orsay Negative Ion eXtraction) code is used to address this problem. ONIX is a self-consistent 3D electrostatic code using the Particle-in-Cell Monte Carlo Collisions (PIC-MCC) approach. It was written to handle the complex boundary conditions between the plasma, the source walls, and the beam formation at the extraction hole. Both the positive extraction potential (25 kV) and the magnetic field map are taken from the experimental set-up under construction at CERN. This contrib...

  13. Minimum Redundancy Coding for Uncertain Sources

    CERN Document Server

    Baer, Michael B; Charalambous, Charalambos D

    2011-01-01

    Consider the set of source distributions within a fixed maximum relative entropy with respect to a given nominal distribution. Lossless source coding over this relative entropy ball can be approached in more than one way. A problem previously considered is finding a minimax average length source code. The minimizing players are the codeword lengths --- real numbers for arithmetic codes, integers for prefix codes --- while the maximizing players are the uncertain source distributions. Another traditional minimizing objective is the first one considered here, maximum (average) redundancy. This problem reduces to an extension of an exponential Huffman objective treated in the literature but heretofore without direct practical application. In addition to these, this paper examines the related problem of maximal minimax pointwise redundancy and the problem considered by Gawrychowski and Gagie, which, for a sufficiently small relative entropy ball, is equivalent to minimax redundancy. One can consider both Shannon-...

  14. Delayed Sequential Coding of Correlated Sources

    CERN Document Server

    Ma, Nan; Ishwar, Prakash

    2007-01-01

    Motivated by video coding applications, we study the problem of sequential coding of correlated sources with (noncausal) encoding and/or decoding frame-delays. The fundamental tradeoffs between individual frame rates, individual frame distortions, and encoding/decoding frame-delays are derived in terms of a single-letter information-theoretic characterization of the rate-distortion region for general inter-frame source correlations and certain types of (potentially frame-specific and coupled) single-letter fidelity criteria. For video sources which are spatially stationary memoryless and temporally Gauss-Markov, MSE frame distortions, and a sum-rate constraint, our results expose the optimality of differential predictive coding among all causal sequential coders. Somewhat surprisingly, causal sequential encoding with one-step delayed noncausal sequential decoding can exactly match the sum-rate-MSE performance of joint coding for all nontrivial MSE-tuples satisfying certain positive semi-definiteness conditions...

  15. Practices in source code sharing in astrophysics

    CERN Document Server

    Shamir, Lior; Allen, Alice; Berriman, Bruce; Teuben, Peter; Nemiroff, Robert J; Mink, Jessica; Hanisch, Robert J; DuPrie, Kimberly

    2013-01-01

    While software and algorithms have become increasingly important in astronomy, the majority of authors who publish computational astronomy research do not share the source code they develop, making it difficult to replicate and reuse the work. In this paper we discuss the importance of sharing scientific source code with the entire astrophysics community, and propose that journals require authors to make their code publicly available when a paper is published. That is, we suggest that a paper that involves a computer program not be accepted for publication unless the source code becomes publicly available. The adoption of such a policy by editors, editorial boards, and reviewers will improve the ability to replicate scientific results, and will also make the computational astronomy methods more available to other researchers who wish to apply them to their data.

  16. Optimization of Coding of AR Sources for Transmission Across Channels with Loss

    DEFF Research Database (Denmark)

    Arildsen, Thomas

    Channel coding is usually applied in combination with source coding to ensure reliable transmission of the (source coded) information at the maximal rate across a channel, given the properties of this channel. In this thesis, we consider the coding of auto-regressive (AR) sources, which are sources that can be modeled as auto-regressive processes. The coding of AR sources lends itself to linear predictive coding. We address the problem of joint source/channel coding in the setting of linear predictive coding of AR sources, and consider channels in which individual source coded signal samples can be lost during transmission, as well as quantization. On this background we propose a new algorithm for optimization of predictive coding of AR sources for transmission across channels with loss. The optimization algorithm takes as its starting point a re-thinking of the source coding operation as an operation producing linear measurements.
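
    As an illustration of the predictive-coding setting described above, the sketch below encodes an AR(1) source with closed-loop DPCM: a known AR coefficient, a uniform scalar quantizer, and a lossless channel. It is a minimal stand-in, not the thesis's optimization algorithm; the coefficient, step size, and seed are assumptions made for the example.

      import numpy as np

      rng = np.random.default_rng(0)

      def ar1_source(n, a=0.9, sigma=1.0):
          """Generate n samples of a first-order auto-regressive (AR(1)) process."""
          x = np.zeros(n)
          for k in range(1, n):
              x[k] = a * x[k - 1] + rng.normal(0.0, sigma)
          return x

      def dpcm_encode_decode(x, a=0.9, step=0.5):
          """Closed-loop DPCM: predict from the reconstructed past sample,
          quantize the prediction error, and rebuild the reconstruction."""
          x_hat = np.zeros_like(x)
          indices = np.zeros(len(x), dtype=int)
          prev = 0.0
          for k in range(len(x)):
              pred = a * prev
              q = int(round((x[k] - pred) / step))  # uniform scalar quantizer
              indices[k] = q                        # the index stream is what gets transmitted
              prev = pred + q * step                # decoder-side reconstruction
              x_hat[k] = prev
          return indices, x_hat

      x = ar1_source(10_000)
      idx, x_rec = dpcm_encode_decode(x)
      print("reconstruction MSE:", np.mean((x - x_rec) ** 2))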

  17. Optimal Organizational Hierarchies: Source Coding: Disaster Relief

    CERN Document Server

    Murthy, G Rama

    2011-01-01

    Multicasting is an important communication paradigm for enabling the selective dissemination of information. This paper considers the problem of optimal secure multicasting in a communication network captured through a graph (optimality is in an interesting sense) and provides a doubly optimal solution using results from source coding. It is realized that the solution leads to the optimal design (in a well-defined optimality sense) of organizational hierarchies captured through a graph. In this effort two novel concepts, the prefix-free path and graph entropy, are introduced. Some results on graph entropy are provided, and some results on the Kraft inequality are discussed. As an application, a Hierarchical Hybrid Communication Network is utilized as a model of a structured Mobile Ad hoc Network for use in disaster management. Several new research problems that naturally emanate from this research are summarized.
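
    Since the abstract touches on the Kraft inequality, a two-line check makes the criterion concrete: a D-ary prefix-free code with lengths l_i exists if and only if the sum of D^(-l_i) is at most 1. The sketch below is generic background, not the paper's construction.

      def kraft_sum(lengths, D=2):
          """Kraft sum for codeword lengths over a D-ary code alphabet.
          A prefix-free code with these lengths exists iff the sum is <= 1."""
          return sum(D ** (-l) for l in lengths)

      print(kraft_sum([1, 2, 3, 3]))  # 1.0  -> a complete binary prefix code exists
      print(kraft_sum([1, 1, 2]))     # 1.25 -> no prefix-free code has these lengths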

  18. An Assessment of Some Design Constraints on Heat Production of a 3D Conceptual EGS Model Using an Open-Source Geothermal Reservoir Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Yidong Xia; Mitch Plummer; Robert Podgorney; Ahmad Ghassemi

    2016-02-01

    The performance of the heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65 K per km of depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that the desirable output electric power rate and lifespan could be obtained under suitable material properties and system parameters. A sensitivity analysis on some design constraints and operation parameters indicates that 1) the fracture horizontal spacing has a profound effect on the long-term performance of heat production, 2) the downward deviation angle of the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and 3) the thermal energy production rate and lifespan have a close dependence on the water mass flow rate. The results also indicate that heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are under reasonable conditions. To conduct the reservoir modeling and simulations, an open-source, finite element based, fully implicit, fully coupled hydrothermal code, namely FALCON, has been developed and used in this work. Compared with most other existing codes in this area, which are either closed-source or commercial, this new open-source code demonstrates a code development strategy that aims to provide unparalleled ease of user customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.

  19. A Practical Approach to Lossy Joint Source-Channel Coding

    CERN Document Server

    Fresia, Maria

    2007-01-01

    This work is devoted to practical joint source-channel coding. Although the proposed approach has more general scope, for the sake of clarity we focus on a specific application example, namely, the transmission of digital images over noisy binary-input output-symmetric channels. The basic building blocks of most state-of-the-art source coders are: 1) a linear transformation; 2) scalar quantization of the transform coefficients; 3) probability modeling of the sequence of quantization indices; 4) an entropy coding stage. We identify the weakness of the conventional separated source-channel coding approach in the catastrophic behavior of the entropy coding stage. Hence, we replace this stage with linear coding, which maps the sequence of redundant quantizer output symbols directly into a channel codeword. We show that this approach does not entail any loss of optimality in the asymptotic regime of large block length. However, in the practical regime of finite block length and low decoding complexity our approach ...
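
    The four building blocks enumerated above can be strung together in a few lines; the sketch below uses a one-level Haar transform, a uniform quantizer, an empirical probability model, and ideal (Shannon) code lengths in place of a real entropy coder. It illustrates the conventional separated chain that the paper argues against; the signal and step size are made up.

      import numpy as np

      def haar_1d(x):
          """One level of the 1D Haar transform (stage 1: linear transformation)."""
          avg = (x[0::2] + x[1::2]) / np.sqrt(2)
          dif = (x[0::2] - x[1::2]) / np.sqrt(2)
          return np.concatenate([avg, dif])

      x = np.random.default_rng(1).normal(size=1024)       # stand-in source signal

      coeffs = haar_1d(x)                                  # 1) transform
      indices = np.round(coeffs / 0.25).astype(int)        # 2) scalar quantization
      _, counts = np.unique(indices, return_counts=True)
      p = counts / counts.sum()                            # 3) probability model
      ideal_bits = -(p * np.log2(p)).sum() * indices.size  # 4) ideal entropy-coding cost
      print(f"ideal entropy-coded size: {ideal_bits:.0f} bits")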

  20. Entropy Approximation in Lossy Source Coding Problem

    Directory of Open Access Journals (Sweden)

    Marek Śmieja

    2015-05-01

    In this paper, we investigate a lossy source coding problem, where an upper limit on the permitted distortion is defined for every dataset element. It can be seen as an alternative approach to rate distortion theory, where a bound on the allowed average error is specified. In order to find the entropy, which gives a statistical length of source code compatible with a fixed distortion bound, a corresponding optimization problem has to be solved. First, we show how to simplify this general optimization by reducing the number of coding partitions that are irrelevant for the entropy calculation. In our main result, we present a fast greedy algorithm, feasible for implementation, which allows one to approximate the entropy within an additive error term of log2 e. The proof is based on the minimum entropy set cover problem, for which a similar bound was obtained.

  1. Radio frequency source coding made easy

    CERN Document Server

    Faruque, Saleh

    2015-01-01

    This book introduces Radio Frequency Source Coding to a broad audience. The author blends theory and practice to bring readers up-to-date in key concepts, underlying principles and practical applications of wireless communications. The presentation is designed to be easily accessible, minimizing mathematics and maximizing visuals.

  2. Using the Astrophysics Source Code Library

    Science.gov (United States)

    Allen, Alice; Teuben, P. J.; Berriman, G. B.; DuPrie, K.; Hanisch, R. J.; Mink, J. D.; Nemiroff, R. J.; Shamir, L.; Wallin, J. F.

    2013-01-01

    The Astrophysics Source Code Library (ASCL) is a free on-line registry of source codes that are of interest to astrophysicists; with over 500 codes, it is the largest collection of scientist-written astrophysics programs in existence. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are available either via a download site or from an identified source. An advisory committee formed in 2011 provides input and guides the development and expansion of the ASCL, and since January 2012, all accepted ASCL entries are indexed by ADS. Though software is increasingly important for the advancement of science in astrophysics, these methods are still often hidden from view or difficult to find. The ASCL (ascl.net/) seeks to improve the transparency and reproducibility of research by making these vital methods discoverable, and to provide recognition and incentive to those who write and release programs useful for astrophysics research. This poster provides a description of the ASCL, an update on recent additions, and the changes in the astrophysics community we are starting to see because of the ASCL.

  3. Measuring Modularity in Open Source Code Bases

    Directory of Open Access Journals (Sweden)

    Roberto Milev

    2009-03-01

    Modularity of an open source software code base has been associated with growth of the software development community, the incentives for voluntary code contribution, and a reduction in the number of users who take code without contributing back to the community. As a theoretical construct, modularity links OSS to other domains of research, including organization theory, the economics of industry structure, and new product development. However, measuring the modularity of an OSS design has proven difficult, especially for large and complex systems. In this article, we describe some preliminary results of recent research at Carleton University that examines the evolving modularity of large-scale software systems. We describe a measurement method and a new modularity metric for comparing code bases of different size, introduce an open source toolkit that implements this method and metric, and provide an analysis of the evolution of the Apache Tomcat application server as an illustrative example of the insights gained from this approach. Although these results are preliminary, they open the door to further cross-discipline research that quantitatively links the concerns of business managers, entrepreneurs, policy-makers, and open source software developers.

  4. Blahut-Arimoto algorithm and code design for action-dependent source coding problems

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Simeone, Osvaldo; Popovski, Petar

    2013-01-01

    The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient Blahut-Arimoto-type algorithm for the numerical computation of the rate-distortion-cost function for this problem is proposed. Moreover, a simplified two-stage code structure based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical Wyner-Ziv codes, one for each action. Leveraging this structure, specific coding/decoding strategies are designed based on LDGM codes and message passing. Through numerical examples, the proposed code design is shown to achieve performance close to the rate-distortion-cost function.
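
    For readers unfamiliar with the baseline, the classical Blahut-Arimoto iteration for a memoryless source is short enough to sketch; the paper's algorithm extends this machinery to actions and side-information costs, which the toy below does not model. The source distribution and Hamming distortion are illustrative choices.

      import numpy as np

      def blahut_arimoto(p_x, dist, s, n_iter=200):
          """Classical Blahut-Arimoto iteration for the rate-distortion function.
          p_x:  source distribution, shape (nx,)
          dist: distortion matrix d(x, xhat), shape (nx, nxhat)
          s:    Lagrange multiplier (varying s > 0 traces out the R(D) curve)
          Returns (rate in bits, average distortion)."""
          _, nxhat = dist.shape
          q = np.full(nxhat, 1.0 / nxhat)            # output marginal q(xhat)
          for _ in range(n_iter):
              w = q * np.exp(-s * dist)              # unnormalized p(xhat | x)
              w /= w.sum(axis=1, keepdims=True)
              q = p_x @ w                            # re-estimate the output marginal
          D = p_x @ (w * dist).sum(axis=1)
          R = (p_x[:, None] * w * np.log2(np.where(w > 0, w / q, 1.0))).sum()
          return R, D

      p_x = np.array([0.5, 0.5])
      hamming = 1.0 - np.eye(2)
      for s in (1.0, 3.0, 10.0):
          R, D = blahut_arimoto(p_x, hamming, s)
          print(f"s={s:5.1f}  R={R:.3f} bits  D={D:.3f}")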

  5. Side-information Scalable Source Coding

    CERN Document Server

    Tian, Chao

    2007-01-01

    The problem of side-information scalable (SI-scalable) source coding is considered in this work, where the encoder constructs a progressive description, such that the receiver with high quality side information will be able to truncate the bitstream and reconstruct in the rate distortion sense, while the receiver with low quality side information will have to receive further data in order to decode. We provide inner and outer bounds for general discrete memoryless sources. The achievable region is shown to be tight for the case that either of the decoders requires a lossless reconstruction, as well as the case with degraded deterministic distortion measures. Furthermore, we show that the gap between the achievable region and the outer bounds can be bounded by a constant when the squared-error distortion measure is used. The notion of perfectly scalable coding is introduced as both the stages operate on the Wyner-Ziv bound, and necessary and sufficient conditions are given for sources satisfying a mild support condi...

  6. Coded source imaging simulation with visible light

    Science.gov (United States)

    Wang, Sheng; Zou, Yubin; Zhang, Xueshuang; Lu, Yuanrong; Guo, Zhiyu

    2011-09-01

    A coded source can increase the neutron flux at a high L/D ratio, which may benefit a neutron imaging system with a low-yield neutron source. Visible-light CSI experiments were carried out to test the physical design and the reconstruction algorithm. We used a non-mosaic Modified Uniformly Redundant Array (MURA) mask to project the shadow of black/white samples onto a screen. A cooled CCD camera was used to record the image on the screen. Different mask sizes and amplification factors were tested. Correlation, Wiener filter deconvolution, and Richardson-Lucy maximum likelihood iteration algorithms were employed to reconstruct the object image from the original projection. The results show that CSI can benefit low-flux neutron imaging with high background noise.
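
    Of the three reconstruction methods mentioned, Wiener-filter deconvolution is the simplest to sketch: if the recorded image is the object convolved with the (scaled) mask pattern plus noise, a frequency-domain Wiener inverse recovers the object. In the toy below a random binary mask stands in for the MURA pattern, and the object, noise level, and regularization constant are all assumed for illustration.

      import numpy as np

      rng = np.random.default_rng(2)

      # Hypothetical 2D coded aperture (random binary mask standing in for a MURA)
      mask = rng.integers(0, 2, size=(64, 64)).astype(float)
      obj = np.zeros((64, 64))
      obj[20:30, 25:40] = 1.0      # a bright rectangular "sample"

      # Forward model: circular convolution of the object with the mask, plus noise
      H = np.fft.fft2(mask)
      img = np.real(np.fft.ifft2(np.fft.fft2(obj) * H)) + rng.normal(0, 0.5, obj.shape)

      # Wiener deconvolution: Xhat = conj(H) Y / (|H|^2 + K), K ~ noise-to-signal ratio
      K = 1e2
      Y = np.fft.fft2(img)
      obj_hat = np.real(np.fft.ifft2(np.conj(H) * Y / (np.abs(H) ** 2 + K)))
      print("peak of reconstruction at:", np.unravel_index(obj_hat.argmax(), obj_hat.shape))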

  7. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

    This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and side information, Y, such that the marginal probability of each symbol, Xi in...

  8. A FINE GRANULAR JOINT SOURCE CHANNEL CODING METHOD

    Institute of Scientific and Technical Information of China (English)

    Zhuo Li; Shen Lansun; Zhu Qing

    2003-01-01

    An improved FGS (Fine Granular Scalability) coding method is proposed in this letter, based on human visual characteristics. The method adjusts the FGS coding frame rate according to an evaluation of the video sequences, so as to improve the coding efficiency and the subjective perceived quality of the reconstructed images. Finally, a fine granular joint source-channel coding scheme is proposed based on this source coding method, which not only utilizes network resources efficiently but also guarantees reliable transmission of the video information.

  9. Evaluation of the scale dependent dynamic SGS model in the open source code caffa3d.MBRi in wall-bounded flows

    Science.gov (United States)

    Draper, Martin; Usera, Gabriel

    2015-04-01

    The Scale Dependent Dynamic Model (SDDM) has been widely validated in large-eddy simulations using pseudo-spectral codes [1][2][3]. The scale dependency, particularly the potential law, has also been proved in a priori studies [4][5]. To the authors' knowledge, there have been only a few attempts to use the SDDM in finite difference (FD) and finite volume (FV) codes [6][7], finding some improvements with the dynamic procedures (scale independent or scale dependent approach), but not showing the behavior of the scale-dependence parameter when using the SDDM. The aim of the present paper is to evaluate the SDDM in the open source code caffa3d.MBRi, an updated version of the code presented in [8]. caffa3d.MBRi is a FV code, second-order accurate, parallelized with MPI, in which the domain is divided into unstructured blocks of structured grids. To accomplish this, two cases are considered: flow between flat plates, and flow over a rough surface in the presence of a model wind turbine, using for the latter the experimental data presented in [9]. In both cases the standard Smagorinsky Model (SM), the Scale Independent Dynamic Model (SIDM) and the SDDM are tested. As in [6][7], slight improvements are obtained with the SDDM. Nevertheless, the behavior of the scale-dependence parameter supports the generalization of the dynamic procedure proposed in the SDDM, particularly taking into account that no explicit filter is used (the implicit filter is unknown). [1] F. Porté-Agel, C. Meneveau, M.B. Parlange. "A scale-dependent dynamic model for large-eddy simulation: application to a neutral atmospheric boundary layer". Journal of Fluid Mechanics, 2000, 415, 261-284. [2] E. Bou-Zeid, C. Meneveau, M. Parlange. "A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows". Physics of Fluids, 2005, 17, 025105 (18p). [3] R. Stoll, F. Porté-Agel. "Dynamic subgrid-scale models for momentum and scalar fluxes in large-eddy simulations of

  10. Asymmetric Joint Source-Channel Coding for Correlated Sources with Blind HMM Estimation at the Receiver

    Directory of Open Access Journals (Sweden)

    Ser Javier Del

    2005-01-01

    We consider the case of two correlated sources, S1 and S2. The correlation between them has memory, and it is modelled by a hidden Markov chain. The paper studies the problem of reliable communication of the information sent by the source S1 over an additive white Gaussian noise (AWGN) channel when the output of the other source S2 is available as side information at the receiver. We assume that the receiver has no a priori knowledge of the correlation statistics between the sources. In particular, we propose the use of a turbo code for joint source-channel coding of the source S1. The joint decoder uses an iterative scheme where the unknown parameters of the correlation model are estimated jointly within the decoding process. It is shown that reliable communication is possible at signal-to-noise ratios close to the theoretical limits set by the combination of the Shannon and Slepian-Wolf theorems.

  12. The Visual Code Navigator : An Interactive Toolset for Source Code Investigation

    NARCIS (Netherlands)

    Lommerse, Gerard; Nossin, Freek; Voinea, Lucian; Telea, Alexandru

    2005-01-01

    We present the Visual Code Navigator, a set of three interrelated visual tools that we developed for exploring large source code software projects from three different perspectives, or views: The syntactic view shows the syntactic constructs in the source code. The symbol view shows the objects a fi

  14. Cheetah: Starspot modeling code

    Science.gov (United States)

    Walkowicz, Lucianne; Thomas, Michael; Finkestein, Adam

    2014-12-01

    Cheetah models starspots in photometric data (lightcurves) by calculating the modulation of a light curve due to starspots. The main parameters of the program are the linear and quadratic limb darkening coefficients, stellar inclination, spot locations and sizes, and the intensity ratio of the spots to the stellar photosphere. Cheetah uses uniform spot contrast and the minimum number of spots needed to produce a good fit and ignores bright regions for the sake of simplicity.

  15. A MCTF video coding scheme based on distributed source coding principles

    Science.gov (United States)

    Tagliasacchi, Marco; Tubaro, Stefano

    2005-07-01

    Motion Compensated Temporal Filtering (MCTF) has proved to be an efficient coding tool in the design of open-loop scalable video codecs. In this paper we propose an MCTF video coding scheme based on lifting, where the prediction step is implemented using PRISM (Power efficient, Robust, hIgh compression Syndrome-based Multimedia coding), a video coding framework built on distributed source coding principles. We study the effect of integrating the update step at the encoder or at the decoder side, and show that the latter approach improves the quality of the side information exploited during decoding. We present analytical results obtained by modeling the video signal along the motion trajectories as a first-order auto-regressive process, and show that performing the update step at the decoder halves the contribution of the quantization noise. We also include experimental results with real video data that demonstrate the potential of this approach when the video sequences are coded at low bitrates.
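
    The lifting structure named above splits the samples into even and odd sets, predicts the odd ones from the even ones, and updates the even ones with the residual; in MCTF the same two steps run along motion trajectories. Below is a one-dimensional Haar-lifting sketch with no motion compensation, purely to show the predict/update mechanics; the frame values are made up.

      import numpy as np

      def lifting_forward(x):
          """One Haar lifting step: predict odd samples from even ones,
          then update the evens so they carry the running average (low band)."""
          even, odd = x[0::2].astype(float), x[1::2].astype(float)
          high = odd - even          # predict step -> high-pass residual
          low = even + high / 2      # update step  -> low-pass average
          return low, high

      def lifting_inverse(low, high):
          even = low - high / 2
          odd = high + even
          x = np.empty(low.size + high.size)
          x[0::2], x[1::2] = even, odd
          return x

      # Stand-in for pixel values along one motion trajectory
      frames = np.array([10, 12, 11, 15, 14, 16], dtype=float)
      low, high = lifting_forward(frames)
      assert np.allclose(lifting_inverse(low, high), frames)  # perfect reconstruction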

  16. Source Coding Using Families of Universal Hash Functions

    OpenAIRE

    Koga, Hiroki

    2007-01-01

    This correspondence is concerned with new connections between source coding and two kinds of families of hash functions known as the families of universal hash functions and N-strongly universal hash functions, where N ≥ 2 is an integer. First, it is pointed out that such families contain classes of well-known source codes such as bin codes and linear codes. Next, performance of a source coding scheme using either of the two kinds of families is evaluated. An upper bound on the expectation ...
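
    As background for the binning view mentioned in the abstract, a standard 2-universal family is h(x) = ((a*x + b) mod p) mod m with random a, b and prime p; drawing one member and hashing source words into bins is the random-binning operation such codes build on. The prime, bin count, and inputs below are assumptions for illustration, not the correspondence's construction.

      import random

      P = (1 << 61) - 1  # a Mersenne prime larger than the universe of keys

      def random_hash(m, seed=None):
          """Draw one member of the 2-universal family h(x) = ((a*x + b) mod P) mod m."""
          rng = random.Random(seed)
          a = rng.randrange(1, P)
          b = rng.randrange(0, P)
          return lambda x: ((a * x + b) % P) % m

      # Bin (hash) source words into 2**10 bins, as a random binning source code would
      h = random_hash(1 << 10, seed=42)
      words = [0b1011_0011, 0b1011_0010, 0b0000_0001]
      print([h(w) for w in words])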

  17. Mapping Initial Hydrostatic Models in Godunov Codes

    CERN Document Server

    Zingale, M A; Zu Hone, J; Calder, A C; Fryxell, B; Plewa, T; Truran, J W; Caceres, A; Olson, K; Ricker, P M; Riley, K; Rosner, R; Siegel, A; Timmes, F X; Vladimirova, N

    2002-01-01

    We look in detail at the process of mapping an astrophysical initial model from a stellar evolution code onto the computational grid of an explicit, Godunov-type code while maintaining hydrostatic equilibrium. This mapping process is common in astrophysical simulations, when it is necessary to follow short-timescale dynamics after a period of long-timescale buildup. We look at the effects of spatial resolution, boundary conditions, the treatment of the gravitational source terms in the hydrodynamics solver, and the initialization process itself. We conclude with a summary detailing the mapping process that yields the lowest ambient velocities in the mapped model.

  18. Development and implementation in the Monte Carlo code PENELOPE of a new virtual source model for radiotherapy photon beams and portal image calculation

    Science.gov (United States)

    Chabert, I.; Barat, E.; Dautremer, T.; Montagu, T.; Agelou, M.; Croc de Suray, A.; Garcia-Hernandez, J. C.; Gempp, S.; Benkreira, M.; de Carlan, L.; Lazaro, D.

    2016-07-01

    This work aims at developing a generic virtual source model (VSM) preserving all existing correlations between variables stored in a Monte Carlo pre-computed phase space (PS) file, for dose calculation and high-resolution portal image prediction. The reference PS file was calculated using the PENELOPE code, after the flattening filter (FF) of an Elekta Synergy 6 MV photon beam. Each particle was represented in a mobile coordinate system by its radial position (r_s) in the PS plane, its energy (E), and its polar and azimuthal angles (φ_d and θ_d), describing the particle deviation compared to its initial direction after bremsstrahlung, and the deviation orientation. Three sub-sources were created by sorting out particles according to their last interaction location (target, primary collimator or FF). For each sub-source, 4D correlated histograms were built by storing E, r_s, φ_d and θ_d values. Five different adaptive binning schemes were studied to construct the 4D histograms of the VSMs, to ensure efficient histogram handling as well as an accurate reproduction of the details of the E, r_s, φ_d and θ_d distributions. The five resulting VSMs were then implemented in PENELOPE. Their accuracy was first assessed in the PS plane, by comparing E, r_s, φ_d and θ_d distributions with those obtained from the reference PS file. Second, dose distributions computed in water, using the VSMs and the reference PS file located below the FF, and also after collimation in both water and a heterogeneous phantom, were compared using 1.5%-0 mm and 2%-0 mm global gamma indices, respectively. Finally, portal images were calculated without and with phantoms in the beam, and the model was evaluated using a 1%-0 mm global gamma index. The performance of a mono-source VSM was also investigated and led, as with the multi-source model, to excellent results when combined with an adaptive binning scheme.

  20. An establishment of MELCOR code to generate source terms for off site consequence analysis

    Energy Technology Data Exchange (ETDEWEB)

    Park, S. H.; Han, S.; Ahn, K. I. [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    Since the Fukushima accident, an effective approach to source term analysis for off-site consequence analyses has been needed. The MELCOR code has the capability to assess source term characteristics for this kind of demand, but a comprehensive effort is required to use it effectively for source term analysis. For this purpose, the following works are required: reviewing and assessing the MELCOR models relevant to source term characterization; generating input files for source term analysis; and utilizing the source term parameters. This paper shows an effort to establish the MELCOR code to generate source terms for an off-site consequence analysis.

  1. Semantic Source Coding for Flexible Lossy Image Compression

    National Research Council Canada - National Science Library

    Phoha, Shashi; Schmiedekamp, Mendel

    2007-01-01

    Semantic Source Coding for Lossy Video Compression investigates methods for mission-oriented lossy image compression, developing methods to use different compression levels for different portions...

  2. An Adaptive Joint Source/Channel Coding Using Error Correcting Arithmetic Codes

    Institute of Scientific and Technical Information of China (English)

    LIU Jun-qing; PANG Yu-ye; SUN Jun

    2007-01-01

    An approximately optimal adaptive arithmetic coding (AC) system using a forbidden symbol (FS) over noisy channels is proposed, which allows one to jointly and adaptively design the source decoding and channel correction in a single process, with superior performance compared with traditional separated techniques. The concept of adaptiveness is applied not only to the source model but also to the amount of coding redundancy. In addition, an improved branch metric computing algorithm and a faster sequential searching algorithm are proposed, compared with the system proposed by Grangetto. The proposed system is tested in the case of image transmission over the AWGN channel and compared with a traditional separated system in terms of packet error rate and complexity. Both hard and soft decoding are taken into account.
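
    The forbidden-symbol mechanism is easy to demonstrate with a toy floating-point arithmetic coder: a slice of the unit interval is reserved for a symbol the encoder never emits, so a corrupted code value that decodes into that slice exposes the channel error. Real coders use integer arithmetic with renormalization; the alphabet, probabilities, and gap size below are made up for the sketch.

      EPS = 0.1  # probability mass reserved for the forbidden symbol '!'
      probs = {"a": 0.5 * (1 - EPS), "b": 0.3 * (1 - EPS), "c": 0.2 * (1 - EPS), "!": EPS}

      def cum_intervals(probs):
          """Assign each symbol a subinterval of [0, 1) proportional to its probability."""
          lo, out = 0.0, {}
          for s, p in probs.items():
              out[s] = (lo, lo + p)
              lo += p
          return out

      IV = cum_intervals(probs)

      def encode(msg):
          lo, hi = 0.0, 1.0
          for s in msg:                       # '!' never appears in real messages
              a, b = IV[s]
              lo, hi = lo + (hi - lo) * a, lo + (hi - lo) * b
          return (lo + hi) / 2                # any value inside the final interval

      def decode(value, n):
          lo, hi = 0.0, 1.0
          out = []
          for _ in range(n):
              u = (value - lo) / (hi - lo)
              s = next(sym for sym, (a, b) in IV.items() if a <= u < b)
              if s == "!":
                  raise ValueError("forbidden symbol decoded: channel error detected")
              a, b = IV[s]
              lo, hi = lo + (hi - lo) * a, lo + (hi - lo) * b
              out.append(s)
          return "".join(out)

      code = encode("abacba")
      assert decode(code, 6) == "abacba"      # clean channel: exact recovery
      try:
          decode(0.999, 6)                    # corrupted value lands in the gap
      except ValueError as err:
          print(err)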

  4. Universal coding for correlated sources with complementary delivery

    CERN Document Server

    Kimura, Akisato; Kuzuoka, Shigeaki

    2007-01-01

    This paper deals with a universal coding problem for a certain kind of multiterminal source coding system that we call the complementary delivery coding system. In this system, messages from two correlated sources are jointly encoded, and each decoder has access to one of the two messages to enable it to reproduce the other message. Both fixed-to-fixed length and fixed-to-variable length lossless coding schemes are considered. Explicit constructions of universal codes and bounds of the error probabilities are clarified via type-theoretical and graph-theoretical analyses.

  5. JOINT SOURCE-CHANNEL DECODING OF HUFFMAN CODES WITH LDPC CODES

    Institute of Scientific and Technical Information of China (English)

    Mei Zhonghui; Wu Lenan

    2006-01-01

    In this paper, we present a Joint Source-Channel Decoding algorithm (JSCD) for Low-Density Parity Check (LDPC) codes by modifying the Sum-Product Algorithm (SPA) to account for the source redundancy, which results from the neighbouring Huffman coded bits. Simulations demonstrate that in the presence of source redundancy, the proposed algorithm gives better performance than the Separate Source and Channel Decoding algorithm (SSCD).

  6. Trellis-based source and channel coding

    NARCIS (Netherlands)

    Van der Vleuten, R.J.

    1994-01-01

    This thesis concerns the efficient transmission of digital data, such as digitized sounds or images, from a source to its destination. To make the best use of the limited capacity of the source-destination channel, a source coder is used to delete the less significant information. To correct the occ

  7. Towards Preserving Model Coverage and Structural Code Coverage

    Directory of Open Access Journals (Sweden)

    Raimund Kirner

    2009-01-01

    Embedded systems are often used in safety-critical environments, so thorough testing of them is mandatory. To achieve required structural code-coverage criteria it is beneficial to derive the test data at a higher program-representation level than machine code. Higher program-representation levels include, besides the source-code level, languages of domain-specific modeling environments with automatic code generation. For a testing framework with automatic generation of test data this enables high retargetability of the framework. In this article we address the challenge of ensuring that the structural code coverage achieved at a higher program-representation level is preserved during the code generation and code transformations down to machine code. We define the formal properties that have to be fulfilled by a code transformation to guarantee preservation of structural code coverage. Based on these properties we discuss how to preserve code coverage achieved at source-code level. Additionally, we discuss how structural code coverage at model level could be preserved. The results presented in this article are aimed toward the integration of support for preserving structural code coverage into compilers and code generators.

  8. Towards Holography via Quantum Source-Channel Codes

    Science.gov (United States)

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-01

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  10. On Cascade Source Coding with A Side Information "Vending Machine"

    CERN Document Server

    Ahmadi, Behzad; Choudhuri, Chiranjib; Mitra, Urbashi

    2012-01-01

    The model of a side information "vending machine" accounts for scenarios in which acquiring side information is costly and thus should be done efficiently. In this paper, the three-node cascade source coding problem is studied under the assumption that a side information vending machine is available either at the intermediate or at the end node. In both cases, a single-letter characterization of the available trade-offs among the rate, the distortions in the reconstructions at the intermediate and at the end node, and the cost in acquiring the side information are derived under given conditions.

  12. Coding for Correlated Sources with Unknown Parameters.

    Science.gov (United States)

    1980-05-01

    ... a coding technique which gives optimal performance is easily obtained. This is not the case with the Slepian-Wolf problem. [The remainder of this abstract, which defines example conditional distributions w1, w2 and w3 over binary alphabets, is illegible in the indexed record.]

  13. On Real-Time and Causal Secure Source Coding

    CERN Document Server

    Kaspi, Yonatan

    2012-01-01

    We investigate two source coding problems with secrecy constraints. In the first problem we consider real-time fully secure transmission of a memoryless source. We show that although classical variable-rate coding is not an option since the lengths of the codewords leak information on the source, the key rate can be as low as the average Huffman codeword length of the source. In the second problem we consider causal source coding with a fidelity criterion and side information at the decoder and the eavesdropper. We show that when the eavesdropper has degraded side information, it is optimal to first use a causal rate distortion code and then encrypt its output with a key.
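
    The benchmark in the first result, the average Huffman codeword length, is straightforward to compute; the sketch below builds the codeword lengths with a heap, for an illustrative distribution that is not from the paper.

      import heapq

      def huffman_lengths(probs):
          """Return codeword lengths of a binary Huffman code for the distribution."""
          heap = [(p, i, [i]) for i, p in enumerate(probs)]  # (weight, tiebreak, leaves)
          heapq.heapify(heap)
          lengths = [0] * len(probs)
          while len(heap) > 1:
              p1, _, leaves1 = heapq.heappop(heap)
              p2, t, leaves2 = heapq.heappop(heap)
              for leaf in leaves1 + leaves2:
                  lengths[leaf] += 1                         # merged leaves gain one bit
              heapq.heappush(heap, (p1 + p2, t, leaves1 + leaves2))
          return lengths

      probs = [0.4, 0.3, 0.2, 0.1]
      L = huffman_lengths(probs)
      avg = sum(p * l for p, l in zip(probs, L))
      print("lengths:", L, " average:", avg, "bits/symbol")  # 1.9 bits/symbol here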

  14. Iterative List Decoding of Concatenated Source-Channel Codes

    Directory of Open Access Journals (Sweden)

    Hedayat Ahmadreza

    2005-01-01

    Whenever variable-length entropy codes are used in the presence of a noisy channel, any channel errors will propagate and cause significant harm. Despite using channel codes, some residual errors always remain, whose effect will get magnified by error propagation. Mitigating this undesirable effect is of great practical interest. One approach is to use the residual redundancy of variable-length codes for joint source-channel decoding. In this paper, we improve the performance of residual-redundancy source-channel decoding via an iterative list decoder made possible by a nonbinary outer CRC code. We show that the list decoding of VLCs is beneficial for entropy codes that contain redundancy. Such codes are used in state-of-the-art video coders, for example. The proposed list decoder improves the overall performance significantly in AWGN and fully interleaved Rayleigh fading channels.

  15. Optimal coding for qualitative sources on noiseless channels

    Directory of Open Access Journals (Sweden)

    Valeriu MUNTEANU

    2006-12-01

    In this paper we perform the encoding of sources which are only qualitatively characterized, that is, each message the source delivers possesses a certain quality, expressed as cost, importance or utility. The proposed encoding procedure is optimal in that it leads to maximum information per code word and assures a minimum time for the transmission of the source information.

  16. Code Forking, Governance, and Sustainability in Open Source Software

    Directory of Open Access Journals (Sweden)

    Juho Lindman

    2013-01-01

    The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is, to start a new development effort using an existing code base as its starting point. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibility of forking code, affects the governance and sustainability of open source initiatives on three distinct levels: software, community, and ecosystem. On the software level, the right to fork makes planned obsolescence, versioning, vendor lock-in, end-of-support issues, and similar initiatives all but impossible to implement. On the community level, forking impacts both sustainability and governance through the power it grants the community to safeguard against unfavourable actions by corporations or project leaders. On the business-ecosystem level, forking can serve as a catalyst for innovation while simultaneously promoting better quality software through natural selection. Thus, forking helps keep open source initiatives relevant and presents opportunities for the development and commercialization of current and abandoned programs.

  17. Interactive Visual Mechanisms for Exploring Source Code Evolution

    NARCIS (Netherlands)

    Telea, Alexandru; Voinea, Lucian

    2005-01-01

    The Visual Code Navigator (VCN) is an ongoing effort to build a visual environment for interactive visualization of large source code bases. We present two techniques that extend the previous work done on the VCN. We propose an efficient and effective mechanism for specifying and visualizing queries

  18. Independent Source Coding for Control over Noiseless Channels

    DEFF Research Database (Denmark)

    da Silva, Eduardo; Derpich, Milan; Østergaard, Jan

    2010-01-01

    By focusing on a class of source coding schemes built around entropy coded dithered quantizers, we develop a framework to deal with average data-rate constraints in a tractable manner that combines ideas from both information and control theories. We focus on a situation where a noisy linear system...

  19. Source Code Analysis Laboratory (SCALe) for Energy Delivery Systems

    Science.gov (United States)

    2010-12-01

    [The indexed abstract of this report is fragmentary. Its legible portions state that standardizing secure coding rules through the ISO/IEC process should eliminate many of the problems encountered at the NIST SATE, and discuss reviewing source code with both structured and unstructured analysis of discovered secure coding rule violations, contrasting manual with automated analysis.]

  20. Source-channel optimized trellis codes for bitonal image transmission over AWGN channels.

    Science.gov (United States)

    Kroll, J M; Phamdo, N

    1999-01-01

    We consider the design of trellis codes for the transmission of binary images over additive white Gaussian noise (AWGN) channels. We first model the image as a binary asymmetric Markov source (BAMS) and then design source-channel optimized (SCO) trellis codes for the BAMS and AWGN channel. The SCO codes are shown to be superior to Ungerboeck's codes by approximately 1.1 dB (64-state code, 10^-5 bit error probability). We also show that a simple "mapping conversion" method can be used to improve the performance of Ungerboeck's codes by approximately 0.4 dB (also 64-state code and 10^-5 bit error probability). We compare the proposed SCO system with a traditional tandem system consisting of a Huffman code, a convolutional code, an interleaver, and an Ungerboeck trellis code. The SCO system significantly outperforms the tandem system. Finally, using a facsimile image, we compare the image quality of an SCO code, an Ungerboeck code, and the tandem code. The SCO code yields the best reconstructed image quality at 4-5 dB channel SNR.

  1. Statistical physics, optimization and source coding

    Indian Academy of Sciences (India)

    Riccardo Zecchina

    2005-06-01

    The combinatorial problem of satisfying a given set of constraints that depend on N discrete variables is a fundamental one in optimization and coding theory. Even for instances of randomly generated problems, the question "does there exist an assignment to the variables that satisfies all constraints?" may become extraordinarily difficult to solve in some range of parameters where a glass phase sets in. We shall provide a brief review of the recent advances in the statistical mechanics approach to these satisfiability problems and show how the analytic results have helped to design a new class of message-passing algorithms – the survey propagation (SP) algorithms – that can efficiently solve some combinatorial problems considered intractable. As an application, we discuss how the packing properties of clusters of solutions in randomly generated satisfiability problems can be exploited in the design of simple lossy data compression algorithms.

  2. ASPT software source code: ASPT signal excision software package

    Science.gov (United States)

    Parliament, Hugh

    1992-08-01

    The source code for the ASPT Signal Excision Software Package which is part of the Adaptive Signal Processing Testbed (ASPT) is presented. The source code covers the programs 'excision', 'ab.out', 'd0.out', 'bd1.out', 'develop', 'despread', 'sorting', and 'convert'. These programs are concerned with collecting data, filtering out interference from a spread spectrum signal, analyzing the results, and developing and testing new filtering algorithms.

  3. A New Arithmetic Coding System Combining Source Channel Coding and MAP Decoding

    Institute of Scientific and Technical Information of China (English)

    PANG Yu-ye; SUN Jun; WANG Jia

    2007-01-01

    A new arithmetic coding system combining source-channel coding and maximum a posteriori (MAP) decoding is proposed. It combines the source coding and error correction tasks into one unified process by introducing an adaptive forbidden symbol. The proposed system achieves fixed-length code words by adaptively adjusting the probability of the forbidden symbol and adding tail digits of variable length. The corresponding improved MAP decoding metric is derived. The proposed system can improve the performance. Simulations were performed on AWGN channels with various noise levels using both hard and soft decision with BPSK modulation. The results show that its performance is slightly better than that of our adaptive arithmetic error-correcting coding system using a forbidden symbol.

  4. Xenon poisoning calculation code for miniature neutron source reactor (MNSR)

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    In line with actual requirements and based upon the specific characteristics of MNSR, a revised point-reactor model was adopted to model MNSR's xenon poisoning. The corresponding calculation code, MNSRXPCC (Xenon Poisoning Calculation Code for MNSR), was developed and tested with the Shanghai MNSR data.
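
    For orientation, the point-model iodine/xenon balance behind such a code is just two coupled ODEs: dI/dt = γ_I·F − λ_I·I and dX/dt = γ_X·F + λ_I·I − λ_X·X − σ_X·φ·X, with F the fission rate density. The forward-Euler sketch below uses textbook U-235 yield and decay constants and an assumed flux; it is generic background, not MNSR data or the MNSRXPCC model.

      # Textbook U-235 thermal-fission constants (illustrative, not MNSR-specific)
      GAMMA_I, GAMMA_X = 0.0639, 0.0024      # fission yields of I-135 and Xe-135
      LAM_I, LAM_X = 2.93e-5, 2.09e-5        # decay constants [1/s]
      SIG_X = 2.6e-18                        # Xe-135 absorption cross section [cm^2]

      def xenon_transient(phi, t_end_h=72, dt=60.0, sigma_f_N=0.05):
          """Integrate the point-model iodine/xenon balance with forward Euler.
          phi: thermal flux [n/cm^2/s]; sigma_f_N: macroscopic fission XS [1/cm]."""
          F = sigma_f_N * phi                # fission rate density [fissions/cm^3/s]
          I = X = 0.0
          for _ in range(int(t_end_h * 3600 / dt)):
              dI = GAMMA_I * F - LAM_I * I
              dX = GAMMA_X * F + LAM_I * I - LAM_X * X - SIG_X * phi * X
              I += dI * dt
              X += dX * dt
          return I, X

      I_eq, X_eq = xenon_transient(phi=1e12)
      print(f"after 72 h at power: I-135 = {I_eq:.3e}, Xe-135 = {X_eq:.3e} atoms/cm^3")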

  5. Impacts of Model Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Athalye, Rahul A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sivaraman, Deepak [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elliott, Douglas B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Bing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bartlett, Rosemarie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-10-31

    The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states which have codes which are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.

  6. Zero-error source-channel coding with entanglement.

    NARCIS (Netherlands)

    Briët, J.; Buhrman, H.; Laurent, M.; Piovesan, T.; Scarpa, G.

    2013-01-01

    We study the use of quantum entanglement in the zero-error source-channel coding problem. Here, Alice and Bob are connected by a noisy classical one-way channel, and are given correlated inputs from a random source. Their goal is for Bob to learn Alice's input while using the channel as little as possible.

  8. "Source Coding With a Side Information ""Vending Machine"""

    OpenAIRE

    Weissman, Tsachy; Permuter, Haim H.

    2011-01-01

    We study source coding in the presence of side information, when the system can take actions that affect the availability, quality, or nature of the side information. We begin by extending the Wyner-Ziv problem of source coding with decoder side information to the case where the decoder is allowed to choose actions affecting the side information. We then consider the setting where actions are taken by the encoder, based on its observation of the source. Actions may have costs that are commens...

  9. Discovering Clusters of Plagiarism in Students’ Source Codes

    Directory of Open Access Journals (Sweden)

    L. Moussiades

    2016-03-01

    Plagiarism in students' source codes constitutes an important drawback for the educational process. In addition, plagiarism detection in source codes is a time-consuming and tiresome task. Therefore, many approaches for plagiarism detection have been proposed. Most of the aforementioned approaches receive as input a set of source files and calculate a similarity between each pair of the input set. However, the tutor often needs to detect the clusters of plagiarism, i.e. clusters of students' assignments such that all assignments in a cluster derive from a common original. In this paper, we propose a novel plagiarism detection algorithm that receives as input a set of source codes and calculates the clusters of plagiarism. Experimental results show the efficiency of our approach and encourage further research.

  10. Toward the Automated Generation of Components from Existing Source Code

    Energy Technology Data Exchange (ETDEWEB)

    Quinlan, D; Yi, Q; Kumfert, G; Epperly, T; Dahlgren, T; Schordan, M; White, B

    2004-12-02

    A major challenge to achieving widespread use of software component technology in scientific computing is an effective migration strategy for existing, or legacy, source code. This paper describes initial work and challenges in automating the identification and generation of components using the ROSE compiler infrastructure and the Babel language interoperability tool. Babel enables calling interfaces expressed in the Scientific Interface Definition Language (SIDL) to be implemented in, and called from, an arbitrary combination of supported languages. ROSE is used to build specialized source-to-source translators that (1) extract a SIDL interface specification from information implicit in existing C++ source code and (2) transform Babel's output to include dispatches to the legacy code.

  11. Intuitive Source Code Visualization Tools for Improving Student Comprehension: BRICS

    CERN Document Server

    Pearson, Christopher; Coady, Yvonne

    2008-01-01

    Even relatively simple code analysis can be a daunting task for many first year students. Perceived complexity, coupled with foreign and harsh syntax, often outstrips the ability for students to take in what they are seeing in terms of their verbal memory. That is, first year students often lack the experience to encode critical building blocks in source code, and their interrelationships, into their own words. We believe this argues for the need for IDEs to provide additional support for representations that would appeal directly to visual memory. In this paper, we examine this need for intuitive source code visualization tools that are easily accessible to novice programmers, discuss the requirements for such a tool, and suggest a novel idea that takes advantage of human peripheral vision to achieve stronger overall code structure awareness.

  12. The source coding game with a cheating switcher

    CERN Document Server

    Palaiyanur, Hari; Sahai, Anant

    2007-01-01

    Motivated by the lossy compression of an active-vision video stream, we consider the problem of finding the rate-distortion function of an arbitrarily varying source (AVS) composed of a finite number of subsources with known distributions. Berger's paper "The Source Coding Game" (IEEE Trans. Inform. Theory, 1971) solves this problem under the condition that the adversary is allowed only strictly causal access to the subsource realizations. We consider the case when the adversary has access to the subsource realizations non-causally. Using the type-covering lemma, this new rate-distortion function is determined to be the maximum of the IID rate-distortion function over a set of source distributions attainable by the adversary. We then extend the results to allow for partial or noisy observations of subsource realizations. We further explore the model by attempting to find the rate-distortion function when the adversary is actually helpful. Finally, a bound is developed on the uniform continuity of the I...

  13. Encoding of multi-alphabet sources by binary arithmetic coding

    Science.gov (United States)

    Guo, Muling; Oka, Takahumi; Kato, Shigeo; Kajiwara, Hiroshi; Kawamura, Naoto

    1998-12-01

    When encoding a multi-alphabet source, the multi-alphabet symbol sequence can be encoded directly by a multi-alphabet arithmetic encoder, or the sequence can first be converted into several binary sequences, each of which is then encoded by a binary arithmetic encoder, such as the L-R arithmetic coder. Arithmetic coding, however, requires arithmetic operations for each symbol and is computationally heavy. In this paper, a binary representation method using a Huffman tree is introduced to reduce the number of arithmetic operations, and a new probability approximation for L-R arithmetic coding is further proposed to improve the coding efficiency when the probability of the LPS (Least Probable Symbol) is near 0.5. Simulation results show that the proposed scheme has high coding efficiency and can reduce the number of coding symbols.
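
    The binarization step described above amounts to replacing each symbol with its bit path in a Huffman tree, so the downstream binary arithmetic coder sees a short, skew-reduced bit stream. Below is a sketch of that symbol-to-bits conversion only (the binary arithmetic coder itself is omitted, and the symbol frequencies are made up).

      import heapq
      from itertools import count

      def huffman_codebook(freqs):
          """Symbol -> bit-string codebook; each code is the symbol's tree path."""
          tiebreak = count()
          heap = [(f, next(tiebreak), {s: ""}) for s, f in freqs.items()]
          heapq.heapify(heap)
          while len(heap) > 1:
              f1, _, c1 = heapq.heappop(heap)
              f2, _, c2 = heapq.heappop(heap)
              merged = {s: "0" + c for s, c in c1.items()}   # left branch
              merged.update({s: "1" + c for s, c in c2.items()})  # right branch
              heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
          return heap[0][2]

      freqs = {"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}
      book = huffman_codebook(freqs)
      bits = "".join(book[s] for s in "abacfed")  # binarized stream for the binary coder
      print(book)
      print(bits)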

  14. Astrophysics Source Code Library: Here we grow again!

    CERN Document Server

    Allen, Alice; DuPrie, Kimberly; Mink, Jessica; Nemiroff, Robert; Robitaille, Thomas; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Taylor, Mark; Teuben, Peter; Wallin, John

    2016-01-01

    The Astrophysics Source Code Library (ASCL) is a free online registry of research codes; it is indexed by ADS and Web of Science and has over 1300 code entries. Its entries are increasingly used to cite software; citations have been doubling each year since 2012 and every major astronomy journal accepts citations to the ASCL. Codes in the resource cover all aspects of astrophysics research and many programming languages are represented. In the past year, the ASCL added dashboards for users and administrators, started minting Digital Object Identifiers (DOIs) for software it houses, and added metadata fields requested by users. This presentation covers the ASCL's growth in the past year and the opportunities afforded it as one of the few domain libraries for science research codes.

  15. Modeling Frequency Comb Sources

    Directory of Open Access Journals (Sweden)

    Li Feng

    2016-06-01

    Frequency comb sources have revolutionized metrology and spectroscopy and found applications in many fields. Stable, low-cost, high-quality frequency comb sources are important to these applications. Modeling of frequency comb sources helps the understanding of their operation mechanism and the optimization of their design. In this paper, we review the theoretical models used and recent progress in the modeling of frequency comb sources.

  16. Progressive encoding with non-linear source codes for compression of low-entropy sources

    OpenAIRE

    Ramírez Javega, Francisco; Lamarca Orozco, M. Meritxell; García Frías, Javier

    2010-01-01

    We propose a novel scheme for source coding of non-uniform memoryless binary sources based on progressively encoding the input sequence with non-linear encoders. At each stage, a number of source bits is perfectly recovered, and these bits are thus not encoded in the next stage. The last stage consists of an LDPC code acting as a source encoder over the bits that have not been recovered in the previous stages.

  17. MEMOPS: data modelling and automatic code generation.

    Science.gov (United States)

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.
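
    The following toy Python sketch conveys the flavour of such model-driven generation; the one-class model format and the generated class are hypothetical stand-ins, not the Memops UML metamodel or its output.

        # Hypothetical one-class "model"; Memops starts from a UML description.
        MODEL = {"Spectrum": {"name": "str", "dimension": "int"}}

        def generate_class(class_name, attrs):
            """Emit Python source for a class with generated validity checks."""
            lines = [f"class {class_name}:"]
            args = ", ".join(attrs)
            lines.append(f"    def __init__(self, {args}):")
            for attr, typ in attrs.items():
                # Generated validity check, sparing the user hand-written ones.
                lines.append(f"        if not isinstance({attr}, {typ}):")
                lines.append(f"            raise TypeError('{attr} must be {typ}')")
                lines.append(f"        self.{attr} = {attr}")
            return "\n".join(lines)

        source = generate_class("Spectrum", MODEL["Spectrum"])
        namespace = {}
        exec(source, namespace)                 # compile the generated class
        Spectrum = namespace["Spectrum"]
        s = Spectrum(name="HSQC", dimension=2)  # generated checks run here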

  18. Distributed Joint Source-Channel Coding in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Lin Zhang

    2009-06-01

    Considering that sensors are energy-limited and that channel conditions vary in wireless sensor networks, there is an urgent need for a low-complexity coding method with a high compression ratio and resistance to noise. This paper reviews the progress made in distributed joint source-channel coding, which can address this issue. The main existing deployments, from theory to practice, of distributed joint source-channel coding over independent channels, multiple access channels and broadcast channels are introduced, respectively. To this end, we also present a practical scheme for compressing multiple correlated sources over independent channels. The simulation results demonstrate the desired efficiency.

  19. Source Coding in Networks with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2016-01-01

    We consider a source coding problem with a network scenario in mind, and formulate it as a remote vector Gaussian Wyner-Ziv problem under covariance matrix distortions. We define a notion of minimum for two positive-definite matrices based on which we derive an explicit formula for the rate-distortion function (RDF). We then study the special cases and applications of this result. We show that two well-studied source coding problems, i.e., remote vector Gaussian Wyner-Ziv problems with mean-squared error and mutual information constraints, are in fact special cases of our results. Finally, we apply our results to a joint source coding and denoising problem. We consider a network with a centralized topology and a given weighted sum-rate constraint, where the received signals at the center are to be fused to maximize the output SNR while enforcing no linear distortion. We show that one can design...

  20. Top ten reasons to register your code with the Astrophysics Source Code Library

    Science.gov (United States)

    Allen, Alice; DuPrie, Kimberly; Berriman, G. Bruce; Mink, Jessica D.; Nemiroff, Robert J.; Robitaille, Thomas; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Teuben, Peter J.; Wallin, John F.; Warmels, Rein

    2017-01-01

    With 1,400 codes, the Astrophysics Source Code Library (ASCL, ascl.net) is the largest indexed resource in existence for codes used in astronomy research. This free online registry was established in 1999, is indexed by Web of Science and ADS, and is citable, with citations to its entries tracked by ADS. Registering your code with the ASCL is easy with our online submissions system. Making your software available for examination shows confidence in your research and makes your research more transparent, reproducible, and falsifiable. ASCL registration allows your software to be cited on its own merits and provides a citation that is trackable and accepted by all astronomy journals as well as journals such as Science and Nature. Registration also allows others to find your code more easily. This presentation covers the benefits of registering astronomy research software with the ASCL.

  1. The Research of Unconditionally Secure Authentication Code For Multi-Source Network Coding

    Directory of Open Access Journals (Sweden)

    Hong Yang

    2011-03-01

    In a network system, network coding allows intermediate nodes to encode received messages before forwarding them; network coding is thus vulnerable to pollution attacks. Moreover, such attacks are amplified by the network coding process, with the result that the whole network may be polluted. In this paper, we propose a novel unconditionally secure authentication code for multi-source network coding, which is robust against pollution attacks. Because the scheme's security is information-theoretic, it is robust against attackers with unlimited computational resources; the intermediate nodes can verify the integrity and origin of the encoded messages they receive without having to decode them, and the receiver nodes can likewise check incoming messages and discard those that fail verification. In this way, the pollution is cancelled out before reaching the destinations.

  2. Source Coding for Wireless Distributed Microphones in Reverberant Environments

    DEFF Research Database (Denmark)

    Zahedi, Adel

    2016-01-01

    Modern multimedia systems are more and more shifting toward distributed and networked structures. This includes audio systems, where networks of wireless distributed microphones are replacing the traditional microphone arrays. This allows for flexibility of placement and high spatial diversity......'s recording. This means that ignoring this correlation will be a waste of the scarce power and bandwidth resources. In this thesis, we study both information-theoretic and audio coding aspects of the coding problem in the above-mentioned framework. We formulate rate-distortion problems which take into account...... on the performance of the audio coding system. We derive explicit formulas for the rate-distortion functions, and design coding schemes that asymptotically achieve the performance bounds. We justify the Gaussianity assumption by showing that the results will still be relevant for non-Gaussian sources including audio...

  3. Minimum cost content distribution using network coding: Replication vs. coding at the source nodes

    CERN Document Server

    Huang, Shurui; Medard, Muriel

    2009-01-01

    Consider a large file that needs to be multicast over a network to a given set of terminals. Storing the file at a single server may result in server overload. Accordingly, there are distributed storage solutions that operate by dividing the file into pieces and placing copies of the pieces (replication) or coded versions of the pieces (coding) at multiple source nodes. Suppose that the cost of a given network coding based solution to this problem is defined as the sum of the storage cost and the cost of the flows required to support the multicast. In this work, we consider a network with a set of source nodes that can either contain subsets or coded versions of the pieces of the file and are interested in finding the storage capacities and flows at minimum cost. We provide succinct formulations of the corresponding optimization problems by using information measures. In particular, we show that when there are two source nodes, there is no loss in considering subset sources. For three source nodes, we derive ...

  4. Source Coding for Wireless Distributed Microphones in Reverberant Environments

    DEFF Research Database (Denmark)

    Zahedi, Adel

    2016-01-01

    Modern multimedia systems are more and more shifting toward distributed and networked structures. This includes audio systems, where networks of wireless distributed microphones are replacing the traditional microphone arrays. This allows for flexibility of placement and high spatial diversity. However, it comes with the price of several challenges, including the limited power and bandwidth resources for wireless transmission of audio recordings. In such a setup, we study the problem of source coding for the compression of the audio recordings before the transmission in order to reduce the power consumption and/or transmission bandwidth by reduction in the transmission rates. Source coding for wireless microphones in reverberant environments has several special characteristics which make it more challenging in comparison with regular audio coding. The signals which are acquired by the microphones...

  5. Noise Residual Learning for Noise Modeling in Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Forchhammer, Søren

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The noise model is one of the inherently difficult challenges in DVC. This paper considers Transform Domain Wyner-Ziv (TDWZ) coding and proposes...... decoding. A residual refinement step is also introduced to take advantage of correlation of DCT coefficients. Experimental results show that the proposed techniques robustly improve the coding efficiency of TDWZ DVC and for GOP=2 bit-rate savings up to 35% on WZ frames are achieved compared with DISCOVER....

  6. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    Science.gov (United States)

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is getting more necessary in the course of program design in college education. However, the trick of plagiarizing plus a little modification exists in some students' homework. It is not easy for teachers to judge whether source code has been plagiarized or not. Traditional detection algorithms cannot fit this…
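
    The paper's own algorithm is truncated above; as a point of reference only (not the authors' method), a classic token-based baseline fingerprints each submission with token n-grams, so that renaming identifiers does not hide copying:

        import re

        KEYWORDS = {"if", "else", "for", "while", "return", "int", "def"}

        def token_ngrams(code, n=4):
            """Set of token n-grams, with identifiers collapsed to 'ID'."""
            tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", code)
            tokens = ["ID" if (t[0].isalpha() or t[0] == "_") and t not in KEYWORDS
                      else t for t in tokens]
            return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

        def similarity(a, b):
            """Jaccard similarity of the two programs' n-gram fingerprints."""
            ga, gb = token_ngrams(a), token_ngrams(b)
            return len(ga & gb) / len(ga | gb) if ga or gb else 0.0

        original = "int sum = 0; for (int i = 0; i < n; i++) sum += a[i];"
        renamed = "int total = 0; for (int k = 0; k < m; k++) total += x[k];"
        print(similarity(original, renamed))  # 1.0: renaming is not hidden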

  7. Evaluation of help model replacement codes

    Energy Technology Data Exchange (ETDEWEB)

    Whiteside, Tad [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hang, Thong [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Flach, Gregory [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2009-07-01

    This work evaluates the computer codes that are proposed to be used to predict percolation of water through the closure cap and into the waste containment zone at Department of Energy closure sites. It compares the currently used water-balance code (HELP) with newly developed computer codes that solve unsaturated flow (Richards' equation). A literature review of the HELP model and the proposed codes resulted in two codes recommended for further evaluation: HYDRUS-2D3D and VADOSE/W. This further evaluation involved performing simulations on a simple model and comparing the results of those simulations with those obtained with the HELP code and with field data. From the results of this work, we conclude that the new codes perform nearly the same as each other; moving forward, we recommend HYDRUS-2D3D.
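
    For reference, the unsaturated-flow codes recommended here solve Richards' equation rather than the HELP code's water-balance bookkeeping; in its one-dimensional mixed form (standard notation, not quoted from the report):

        \frac{\partial \theta(h)}{\partial t} = \frac{\partial}{\partial z}\left[ K(h)\left( \frac{\partial h}{\partial z} + 1 \right) \right] - S(z,t)

    where \theta is the volumetric water content, h the pressure head, K(h) the unsaturated hydraulic conductivity, z the vertical coordinate, and S a sink term such as root uptake.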

  8. Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN

    Science.gov (United States)

    Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.

    2013-12-01

    Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third

  9. Soft and Joint Source-Channel Decoding of Quasi-Arithmetic Codes

    Science.gov (United States)

    Guionnet, Thomas; Guillemot, Christine

    2004-12-01

    The issue of robust and joint source-channel decoding of quasi-arithmetic codes is addressed. Quasi-arithmetic coding is a reduced precision and complexity implementation of arithmetic coding. This amounts to approximating the distribution of the source. The approximation of the source distribution leads to the introduction of redundancy that can be exploited for robust decoding in presence of transmission errors. Hence, this approximation controls both the trade-off between compression efficiency and complexity and at the same time the redundancy (excess rate) introduced by this suboptimality. This paper provides first a state model of a quasi-arithmetic coder and decoder for binary and M-ary sources. The design of an error-resilient soft decoding algorithm follows quite naturally. The compression efficiency of quasi-arithmetic codes makes it possible to add extra redundancy in the form of markers designed specifically to prevent desynchronization. The algorithm is directly amenable for iterative source-channel decoding in the spirit of serial turbo codes. The coding and decoding algorithms have been tested for a wide range of channel signal-to-noise ratios (SNRs). Experimental results reveal improved symbol error rate (SER) and SNR performances against Huffman and optimal arithmetic codes.

  10. The source coding game with a cheating switcher

    CERN Document Server

    Palaiyanur, Hari; Sahai, Anant

    2007-01-01

    Berger's paper `The Source Coding Game', IEEE Trans. Inform. Theory, 1971, considers the problem of finding the rate-distortion function for an adversarial source comprised of multiple known IID sources. The adversary, called the `switcher', was allowed only causal access to the source realizations and the rate-distortion function was obtained through the use of a type covering lemma. In this paper, the rate-distortion function of the adversarial source is described, under the assumption that the switcher has non-causal access to all source realizations. The proof utilizes the type covering lemma and simple conditional, random `switching' rules. The rate-distortion function is once again the maximization of the R(D) function for a region of attainable IID distributions.

  11. Distributed coding of multiview sparse sources with joint recovery

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Deligiannis, Nikos; Forchhammer, Søren

    2016-01-01

    to the computational and energy constraints at each camera as well as the limitations regarding intercamera communication. Our approach addresses these challenges by exploiting the sparsity of the visual descriptor histograms as well as their intra- and inter-camera correlations. Our method couples distributed source coding of the sparse sources with a new joint recovery algorithm that incorporates multiple side information signals, where prior knowledge (low quality) of all the sparse sources is initially sent to exploit their correlations. Experimental evaluation using the histograms of shift-invariant feature...

  12. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.
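
    A minimal Python sketch of the syndrome (coset) idea underlying both schemes, using a toy (7,4) Hamming code; the paper uses far more powerful codes, and the details below are illustrative assumptions only.

        import itertools
        import numpy as np

        # Parity-check matrix of the (7,4) Hamming code; arithmetic is mod 2.
        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])

        def encode(x):
            """Onboard encoder: send the 3-bit syndrome, not the 7-bit block."""
            return H @ x % 2

        def decode(s, y):
            """Decoder: find the word with syndrome s closest to the side
            information y (here, a correlated spectral band)."""
            target = (s + H @ y) % 2  # syndrome of the unknown difference x ^ y
            for w in range(8):        # try the lowest-weight difference first
                for idx in itertools.combinations(range(7), w):
                    e = np.zeros(7, dtype=int)
                    e[list(idx)] = 1
                    if np.array_equal(H @ e % 2, target):
                        return (y + e) % 2

        x = np.array([1, 0, 1, 1, 0, 0, 1])  # block from the current band
        y = x.copy(); y[4] ^= 1              # side information: one bit differs
        assert np.array_equal(decode(encode(x), y), x)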

  13. Coding with partially hidden Markov models

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Rissanen, J.

    1995-01-01

    Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM) combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general 2-part coding scheme for given model order but unknown parameters based on PHMM is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. Proof of convergence of this reestimation is given. The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt...
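
    One way to write the likelihood of such a model (notation assumed; the paper's exact conditioning structure may differ) is

        p(x_1^T) = \sum_{s_1^T} \prod_{t=1}^{T} p(s_t \mid s_{t-1}) \, p(x_t \mid s_t, x_{t-k}^{t-1})

    so each emission is conditioned on the hidden state s_t and on a window of k past observations; these conditional probabilities can feed the arithmetic coder directly.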

  14. An Approach to Calculate Reusability in Source Code Using Metrics

    Directory of Open Access Journals (Sweden)

    Rohit Patidar

    2015-02-01

    Reusability is one of the best ways to increase development productivity and application maintainability. One must first search for well-tested, reusable software components. Application software developed by one programmer can prove useful to others as a component. This shows that code written for one application's requirements can also be reused in other projects with similar requirements. The main aim of this paper is to propose a way to identify reusable modules: a process that takes source code as input and helps decide which particular software artefacts should be reused or not.

  15. Comparison of DT neutron production codes MCUNED, ENEA-JSI source subroutine and DDT

    Energy Technology Data Exchange (ETDEWEB)

    Čufar, Aljaž, E-mail: aljaz.cufar@ijs.si [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Lengar, Igor; Kodeli, Ivan [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Milocco, Alberto [Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom); Sauvan, Patrick [Departamento de Ingeniería Energética, E.T.S. Ingenieros Industriales, UNED, C/Juan del Rosal 12, 28040 Madrid (Spain); Conroy, Sean [VR Association, Uppsala University, Department of Physics and Astronomy, PO Box 516, SE-75120 Uppsala (Sweden); Snoj, Luka [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia)

    2016-11-01

    Highlights: • Results of three codes capable of simulating accelerator-based DT neutron generators were compared on a simple model in which only a thin target made of a mixture of titanium and tritium is present. Two typical deuteron beam energies, 100 keV and 250 keV, were used in the comparison. • Comparisons of the angular dependence of the total neutron flux and spectrum, as well as the spectrum of all neutrons emitted from the target, show general agreement of the results but also some noticeable differences. • A comparison of figures of merit of the calculations showed that the computational time necessary to achieve the same statistical uncertainty can vary by more than a factor of 30 between the codes used to simulate the DT neutron generator. - Abstract: As the DT fusion reaction produces neutrons with energies significantly higher than in fission reactors, special fusion-relevant benchmark experiments are often performed using DT neutron generators. However, commonly used Monte Carlo particle transport codes such as MCNP or TRIPOLI cannot be directly used to analyze these experiments since they do not have the capabilities to model the production of DT neutrons. Three of the available approaches to model the DT neutron generator source are the MCUNED code, the ENEA-JSI DT source subroutine and the DDT code. The MCUNED code is an extension of the well-established and validated MCNPX Monte Carlo code. The ENEA-JSI source subroutine was originally prepared for the modelling of the FNG experiments using different versions of the MCNP code (−4, −5, −X) and was later extended to allow the modelling of both DT and DD neutron sources. The DDT code prepares the DT source definition file (SDEF card in MCNP) which can then be used in different versions of the MCNP code. In the paper the methods for the simulation of the DT neutron production used in the codes are briefly described and compared for the case of a

  16. RELM (the Working Group for the Development of Region Earthquake Likelihood Models) and the Development of new, Open-Source, Java-Based (Object Oriented) Code for Probabilistic Seismic Hazard Analysis

    Science.gov (United States)

    Field, E. H.

    2001-12-01

    Given problems with virtually all previous earthquake-forecast models for southern California, and a current lack of consensus on how such models should be constructed, a joint SCEC-USGS sponsored working group for the development of Regional Earthquake Likelihood Models (RELM) has been established (www.relm.org). The goals are as follows: 1) To develop and test a range of viable earthquake-potential models for southern California (not just one "consensus" model); 2) To examine and compare the implications of each model with respect to probabilistic seismic-hazard estimates (which will not only quantify existing hazard uncertainties, but will also indicate how future research should be focused in order to reduce the uncertainties); and 3) To design and document conclusive tests of each model with respect to existing and future geophysical observations. The variety of models under development reflects the variety of geophysical constraints available; these include geological fault information, historical seismicity, geodetic observations, stress-transfer interactions, and foreshock/aftershock statistics. One reason for developing and testing a range of models is to evaluate the extent to which any one can be exported to another region where the options are more limited. RELM is not intended to be a one-time effort. Rather, we are building an infrastructure that will facilitate an ongoing incorporation of new scientific findings into seismic-hazard models. The effort involves the development of several community models and databases, one of which is new Java-based code for probabilistic seismic hazard analysis (PSHA). Although several different PSHA codes presently exist, none are open source, well documented, and written in an object-oriented programming language (which is ideally suited for PSHA). Furthermore, we need code that is flexible enough to accommodate the wide range of models currently under development in RELM. The new code is being developed under

  17. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    Directory of Open Access Journals (Sweden)

    Marinkovic Slavica

    2006-01-01

    Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  18. Diversity and Coding Gain of Multi-Source Multi-Relay Cooperative Wireless Networks with Binary Network Coding

    CERN Document Server

    Di Renzo, Marco; Graziosi, Fabio

    2011-01-01

    In this paper, a multi-source multi-relay cooperative wireless network with binary network coding is studied. The system model encompasses: i) a demodulate-and-forward protocol at the relays, where the received packets are forwarded regardless of their reliability; and ii) a maximum-likelihood optimum decoder at the destination, which accounts for possible decoding errors at the relays. An asymptotically-tight and closed-form expression of the end-to-end error probability is derived, which clearly showcases diversity and coding gain of each source. Unlike other papers available in the literature, the proposed framework has three main distinguishable features: i) it is useful for general network topologies and arbitrary binary encoding vectors; ii) it shows how network code and two-hop forwarding protocol affect diversity and coding gain; and iii) it accounts for realistic fading channels and decoding errors at the relays. The framework provides three main conclusions: i) each source achieves a diversity order ...

  19. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes

    Science.gov (United States)

    Etienne, Zachariah B.; Paschalidis, Vasileios; Haas, Roland; Mösta, Philipp; Shapiro, Stuart L.

    2015-09-01

    In the extreme violence of merger and mass accretion, compact objects like black holes and neutron stars are thought to launch some of the most luminous outbursts of electromagnetic and gravitational wave energy in the Universe. Modeling these systems realistically is a central problem in theoretical astrophysics, but has proven extremely challenging, requiring the development of numerical relativity codes that solve Einstein's equations for the spacetime, coupled to the equations of general relativistic (ideal) magnetohydrodynamics (GRMHD) for the magnetized fluids. Over the past decade, the Illinois numerical relativity (ILNR) group's dynamical spacetime GRMHD code has proven itself as a robust and reliable tool for theoretical modeling of such GRMHD phenomena. However, the code was written ‘by experts and for experts’ of the code, with a steep learning curve that would severely hinder community adoption if it were open-sourced. Here we present IllinoisGRMHD, which is an open-source, highly extensible rewrite of the original closed-source GRMHD code of the ILNR group. Reducing the learning curve was the primary focus of this rewrite, with the goal of facilitating community involvement in the code's use and development, as well as the minimization of human effort in generating new science. IllinoisGRMHD also saves computer time, generating roundoff-precision identical output to the original code on adaptive-mesh grids, but nearly twice as fast at scales of hundreds to thousands of cores.

  20. Lossy Source Compression of Non-Uniform Binary Sources Using GQ-LDGM Codes

    CERN Document Server

    Cappellari, Lorenzo

    2010-01-01

    In this paper, we study the use of GF(q)-quantized LDGM codes for binary source coding. By employing quantization, it is possible to obtain binary codewords with a non-uniform distribution. The resulting statistics are hence suitable for optimal, direct quantization of non-uniform Bernoulli sources. We employ a message-passing algorithm combined with a decimation procedure in order to perform compression. The experimental results based on GF(q)-LDGM codes with regular degree distributions yield performances quite close to the theoretical rate-distortion bounds.
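
    The benchmark referred to is the rate-distortion function of a Bernoulli(p) source under Hamming distortion, a standard result not restated in the abstract:

        R(D) = h(p) - h(D), \qquad 0 \le D \le \min(p, 1-p)

    and R(D) = 0 for larger D, where h(u) = -u \log_2 u - (1-u) \log_2 (1-u) is the binary entropy function.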

  1. Transmutation Fuel Performance Code Thermal Model Verification

    Energy Technology Data Exchange (ETDEWEB)

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to model the performance of the nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the effort to verify the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model's temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the methodology of the verification, the code input, and the calculation results.

  2. The Role of Extrafoveal Vision in Source Code Comprehension.

    Science.gov (United States)

    Orlov, Pavel A; Bednarik, Roman

    2017-05-01

    Understanding software engineers' behaviour plays a vital role in the software development industry. It also provides helpful guidelines for teaching and learning. In this article, we conduct a study of extrafoveal vision and its role in information processing. This is a new perspective on source code comprehension. Despite its major importance, extrafoveal vision has been largely ignored by previous studies; the available research has focused entirely on foveal information processing and the gaze fixation position. In this work, we share the results of a gaze-contingent study of source code comprehension by expert (N = 12) and novice (N = 12) programmers under conditions of restricted extrafoveal vision. The window-moving paradigm was employed to restrict the extrafoveal area of vision as participants comprehended two source code examples. The results indicate that the semantic preview allowed by extrafoveal vision provides tangible benefits to expert programmers. When the experts could not use the semantic information from the extrafoveal area, their fixation duration increased to a duration similar to novices'. The experts' performance dropped in the restricted-view mode, and they required more time to solve the tasks.

  3. Coupling a Basin Modeling and a Seismic Code using MOAB

    KAUST Repository

    Yan, Mi

    2012-06-02

    We report on a demonstration of loose multiphysics coupling between a basin modeling code and a seismic code running on a large parallel machine. Multiphysics coupling, which is one critical capability for a high performance computing (HPC) framework, was implemented using the MOAB open-source mesh and field database. MOAB provides for code coupling by storing mesh data and input and output field data for the coupled analysis codes and interpolating the field values between different meshes used by the coupled codes. We found it straightforward to use MOAB to couple the PBSM basin modeling code and the FWI3D seismic code on an IBM Blue Gene/P system. We describe how the coupling was implemented and present benchmarking results for up to 8 racks of Blue Gene/P with 8192 nodes and MPI processes. The coupling code is fast compared to the analysis codes and it scales well up to at least 8192 nodes, indicating that a mesh and field database is an efficient way to implement loose multiphysics coupling for large parallel machines.

  4. Characteristic Analysis of Fire Modeling Codes

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Hwan; Yang, Joon Eon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Jong Hoon [Kyeongmin College, Ujeongbu (Korea, Republic of)

    2004-04-15

    This report documents and compares key features of four zone models: CFAST, COMPBRN IIIE, MAGIC and the Fire Induced Vulnerability Evaluation (FIVE) methodology. CFAST and MAGIC handle multi-compartment, multi-fire problems using many equations; COMPBRN and FIVE handle single-compartment, single-fire-source problems using simpler equations. The increased rigor of the formulation of CFAST and MAGIC does not mean that these codes are more accurate in every domain; for instance, the FIVE methodology uses a single zone approximation with a plume/ceiling jet sublayer, while the other models use a two-zone treatment without a plume/ceiling jet sublayer. Comparisons with enclosure fire data indicate that inclusion of plume/ceiling jet sublayer temperatures is more conservative, and generally more accurate, than neglecting them. Adding a plume/ceiling jet sublayer to the two-zone models should be relatively straightforward, but it has not yet been done for any of the two-zone models. Such an improvement is in progress for MAGIC.

  5. Open source molecular modeling.

    Science.gov (United States)

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-09-01

    The success of molecular modeling and computational chemistry efforts are, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. An updated online version of this catalog can be found at https://opensourcemolecularmodeling.github.io.

  6. Robust video transmission with distributed source coded auxiliary channel.

    Science.gov (United States)

    Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan

    2009-12-01

    We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints.

  7. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Directory of Open Access Journals (Sweden)

    Siohan Pierre

    2005-01-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. The survey concludes with performance illustrations on real image and video decoding systems.

  8. Progressive Syntax-Rich Coding of Multichannel Audio Sources

    Directory of Open Access Journals (Sweden)

    Dai Yang

    2003-09-01

    Being able to transmit the audio bitstream progressively is a highly desirable property for network transmission. MPEG-4 version 2 audio supports fine grain bit rate scalability in the generic audio coder (GAC). It has a bit-sliced arithmetic coding (BSAC) tool, which provides scalability in steps of 1 kbps per audio channel. Several other scalable audio coding methods have also been proposed in recent years. However, these scalable audio tools are only available for mono and stereo audio material. Little work has been done on progressive coding of multichannel audio sources. MPEG advanced audio coding (AAC) is one of the most distinguished multichannel digital audio compression systems. Based on AAC, we develop in this work a progressive syntax-rich multichannel audio codec (PSMAC). It not only supports fine grain bit rate scalability for the multichannel audio bitstream but also provides several other desirable functionalities. A formal subjective listening test shows that the proposed algorithm achieves an excellent performance at several different bit rates when compared with MPEG AAC.

  9. Acoustic emission source modeling

    Directory of Open Access Journals (Sweden)

    Hora P.

    2010-07-01

    The paper deals with the modelling of acoustic emission (AE) sources by means of the FEM system COMSOL Multiphysics. The following types of sources are used: the spatially concentrated force and the double forces (dipole). The pulse excitation is studied in both cases. Steel is used as the material. The computed displacements are compared with the exact analytical solution for the point sources under consideration.

  10. Generation of Java code from Alvis model

    Science.gov (United States)

    Matyasik, Piotr; Szpyrka, Marcin; Wypych, Michał

    2015-12-01

    Alvis is a formal language that combines graphical modelling of interconnections between system entities (called agents) with a high-level programming language used to describe the behaviour of each individual agent. An Alvis model can be verified formally with model-checking techniques applied to the model's LTS graph, which represents the model state space. This paper presents the transformation of an Alvis model into executable Java code. The approach thus provides a method for automatic generation of a Java application from a formally verified Alvis model.

  11. Code forking in open-source software: a requirements perspective

    CERN Document Server

    Ernst, Neil A; Mylopoulos, John

    2010-01-01

    To fork a project is to copy the existing code base and move in a direction different than that of the erstwhile project leadership. Forking provides a rapid way to address new requirements by adapting an existing solution. However, it can also create a plethora of similar tools, and fragment the developer community. Hence, it is not always clear whether forking is the right strategy. In this paper, we describe a mixed-methods exploratory case study that investigated the process of forking a project. The study concerned the forking of an open-source tool for managing software projects, Trac. Trac was forked to address differing requirements in an academic setting. The paper makes two contributions to our understanding of code forking. First, our exploratory study generated several theories about code forking in open source projects, for further research. Second, we investigated one of these theories in depth, via a quantitative study. We conjectured that the features of the OSS forking process would allow new...

  12. Users manual for doctext: Producing documentation from C source code

    Energy Technology Data Exchange (ETDEWEB)

    Gropp, W.

    1995-03-01

    One of the major problems that software library writers face, particularly in a research environment, is the generation of documentation. Producing good, professional-quality documentation is tedious and time consuming. Often, no documentation is produced. For many users, however, much of the need for documentation may be satisfied by a brief description of the purpose and use of the routines and their arguments. Even for more complete, hand-generated documentation, this information provides a convenient starting point. We describe here a tool that may be used to generate documentation about programs written in the C language. It uses a structured comment convention that preserves the original C source code and does not require any additional files. The markup language is designed to be an almost invisible structured comment in the C source code, retaining readability in the original source. Documentation in a form suitable for the Unix man program (nroff), LaTeX, and the World Wide Web can be produced.

  13. Preliminary Assessment of the Interfacial Source Terms in SPACE Code

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Sun Won; Kim, Jeong Woo; Kim, Su Hyong; Kim, Kyung Du [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2009-10-15

    The development program for a nuclear reactor safety analysis code to be used by utility bodies has been launched with the support of the Ministry of Knowledge Economy. The code, named SPACE, has been designed to solve the multi-dimensional, 3-field, 2-phase equations. The target power plant type is restricted to PWRs and does not include advanced reactor types, such as gas-cooled or liquid metal reactors. KAERI, KOPEC, KNF, KEPRI and KHNP participate in the development project. KAERI has been assigned to develop the physical models and correlations which are required as the closure relationships. The assigned work can be divided into four parts: (i) the flow regime determination, (ii) the wall heat transfer, (iii) the wall and interfacial friction, and (iv) the interfacial heat and mass transfer. The interfacial heat and mass transfer correlations used in RELAP5, TRAC-M, CATHARE, etc. are reviewed with respect to simplicity and range of validity. Recent suggestions are also reviewed. Intellectual property ownership is verified before adoption into the development of the SPACE code. The selected models and correlations are documented by reference. This paper shows the preliminary assessment results obtained using the SPACE code.

  14. Economic aspects and models for building codes

    DEFF Research Database (Denmark)

    Bonke, Jens; Pedersen, Dan Ove; Johnsen, Kjeld

    It is the purpose of this bulletin to present an economic model for estimating the consequences of new or changed building codes. The object is to allow comparative analysis in order to improve the basis for decisions in this field. The model is applied in a case study.

  15. A robust CELP coder with source-dependent channel coding

    Science.gov (United States)

    Sukkar, Rafid A.; Kleijn, W. Bastiaan

    1990-01-01

    A CELP coder using Source Dependent Channel Encoding (SDCE) for optimal channel error protection is introduced. With SDCE, each of the CELP parameters are encoded by minimizing a perceptually meaningful error criterion under prevalent channel conditions. Unlike conventional channel coding schemes, SDCE allows for optimal balance between error detection and correction. The experimental results show that the CELP system is robust under various channel bit error rates and displays a graceful degradation in SSNR as the channel error rate increases. This is a desirable property to have in a coder since the exact channel conditions cannot usually be specified a priori.

  16. Mathematical models for the EPIC code

    Energy Technology Data Exchange (ETDEWEB)

    Buchanan, H.L.

    1981-06-03

    EPIC is a fluid/envelope type computer code designed to study the energetics and dynamics of a high energy, high current electron beam passing through a gas. The code is essentially two dimensional (x, r, t) and assumes an axisymmetric beam whose r.m.s. radius is governed by an envelope model. Electromagnetic fields, background gas chemistry, and gas hydrodynamics (density channel evolution) are all calculated self-consistently as functions of r, x, and t. The code is a collection of five major subroutines, each of which is described in some detail in this report.

  17. IllinoisGRMHD: An Open-Source, User-Friendly GRMHD Code for Dynamical Spacetimes

    CERN Document Server

    Etienne, Zachariah B; Haas, Roland; Moesta, Philipp; Shapiro, Stuart L

    2015-01-01

    In the extreme violence of merger and mass accretion, compact objects like black holes and neutron stars are thought to launch some of the most luminous outbursts of electromagnetic and gravitational wave energy in the Universe. Modeling these systems realistically is a central problem in theoretical astrophysics, but has proven extremely challenging, requiring the development of numerical relativity codes that solve Einstein's equations for the spacetime, coupled to the equations of general relativistic (ideal) magnetohydrodynamics (GRMHD) for the magnetized fluids. Over the past decade, the Illinois Numerical Relativity (ILNR) Group's dynamical spacetime, GRMHD code has proven itself as one of the most robust and reliable tools for theoretical modeling of such GRMHD phenomena. Despite the code's outstanding reputation, it was written "by experts and for experts" of the code, with a steep learning curve that would severely hinder community adoption if it were open-sourced. Here we present IllinoisGRMHD, whic...

  18. Multicode comparison of selected source-term computer codes

    Energy Technology Data Exchange (ETDEWEB)

    Hermann, O.W.; Parks, C.V.; Renier, J.P.; Roddy, J.W.; Ashline, R.C.; Wilson, W.B.; LaBauve, R.J.

    1989-04-01

    This report summarizes the results of a study to assess the predictive capabilities of three radionuclide inventory/depletion computer codes, ORIGEN2, ORIGEN-S, and CINDER-2. The task was accomplished through a series of comparisons of their output for several light-water reactor (LWR) models (i.e., verification). Of the five cases chosen, two modeled typical boiling-water reactors (BWR) at burnups of 27.5 and 40 GWd/MTU and two represented typical pressurized-water reactors (PWR) at burnups of 33 and 50 GWd/MTU. In the fifth case, identical input data were used for each of the codes to examine the results of decay only and to show differences in nuclear decay constants and decay heat rates. Comparisons were made for several different characteristics (mass, radioactivity, and decay heat rate) for 52 radionuclides and for nine decay periods ranging from 30 d to 10,000 years. Only fission products and actinides were considered. The results are presented in comparative-ratio tables for each of the characteristics, decay periods, and cases. A brief summary description of each of the codes has been included. Of the more than 21,000 individual comparisons made for the three codes (taken two at a time), nearly half (45%) agreed to within 1%, and an additional 17% fell within the range of 1 to 5%. Approximately 8% of the comparison results disagreed by more than 30%. However, relatively good agreement was obtained for most of the radionuclides that are expected to contribute the greatest impact to waste disposal. Even though some defects have been noted, each of the codes in the comparison appears to produce respectable results. 12 figs., 12 tabs.

  19. Modern Code Reviews in Open-Source Projects: Which Problems Do They Fix?

    NARCIS (Netherlands)

    Beller, M.; Bacchelli, A.; Zaidman, A.E.; Juergens, E.

    2014-01-01

    Code review is the manual assessment of source code by humans, mainly intended to identify defects and quality problems. Modern Code Review (MCR), a lightweight variant of the code inspections investigated since the 1970s, prevails today both in industry and open-source software (OSS) systems. The

  20. Distributed Remote Vector Gaussian Source Coding with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider a distributed remote source coding problem, where a sequence of observations of source vectors is available at the encoder. The problem is to specify the optimal rate for encoding the observations subject to a covariance matrix distortion constraint and in the presence of side information at the decoder. For this problem, we derive lower and upper bounds on the rate-distortion function (RDF) for the Gaussian case, which in general do not coincide. We then provide some cases where the RDF can be derived exactly. We also show that previous results on specific instances of this problem can be generalized using our results. We finally show that if the distortion measure is the mean squared error, or if it is replaced by a certain mutual information constraint, the optimal rate can be derived from our main result.

  1. M-ary Anti - Uniform Huffman Codes for Infinite Sources With Geometric Distribution

    OpenAIRE

    Tarniceriu, Daniela; Munteanu, Valeriu; Zaharia, Gheorghe

    2013-01-01

    In this paper we consider the class of generalized anti-uniform Huffman (AUH) codes for sources with an infinite alphabet and geometric distribution. This distribution leads to infinite anti-uniform sources for some ranges of its parameters. Huffman coding of these sources results in AUH codes. We perform a generalization of binary Huffman encoding, using an M-letter code alphabet, and prove that as a result of this encoding, sources with memory are obtained. For these sour...

  2. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    Science.gov (United States)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code to model wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation and, as a result, are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing focused on device characterization and was completed in Fall 2015. Phase 2 focuses on WEC performance and is scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be
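
    For reference, the Cummins time-domain impulse-response formulation mentioned above can be written in its standard single-degree-of-freedom form (this equation is supplied here for context and is not quoted from the record):

        (M + A_\infty)\,\ddot{x}(t) + \int_0^t K(t-\tau)\,\dot{x}(\tau)\,\mathrm{d}\tau + C\,x(t) = F_\mathrm{exc}(t) + F_\mathrm{PTO}(t)

    where M is the body mass, A_\infty the infinite-frequency added mass, K the radiation impulse-response kernel, C the hydrostatic stiffness, and F_\mathrm{exc} and F_\mathrm{PTO} the wave-excitation and power-take-off forces.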

  3. Model Description of TASS/SMR Code

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Y. D.; Yang, S. H.; Kim, S. H.; Lee, S. W.; Kim, H. K.; Yoon, H. Y.; Lee, G. H.; Bae, K. H.; Chung, Y. J

    2005-12-15

    TASS/SMR (Transient And Setpoint Simulation/System-integrated Modular Reactor) code has been developed for the safety analysis of the SMART-P reactor. The TASS/SMR code can be applied to the analysis of design basis accidents, including the small break loss of coolant accident, of the SMART research reactor. TASS/SMR models the primary and secondary systems using nodes and flow paths. A node represents a control volume which defines the fluid mass and energy. A flow path connects nodes to define the momentum of the fluid. The mass and energy conservation equations are applied to the nodes and the momentum conservation equation to the flow paths. In TASS/SMR, the governing equations are applied to both the primary and the secondary coolant system and are solved simultaneously. The governing equations of TASS/SMR are based on the drift-flux model, so that accidents or transients accompanied by two-phase flow can be analyzed. Also, SMART-P reactor-specific thermal-hydraulic models are incorporated, such as a non-condensable gas model, a helical steam generator heat transfer model, and a passive residual heat removal system (PRHRS) heat transfer model. This technical report describes the governing equations, solution method, thermal-hydraulic, reactor core, and control system models used in the TASS/SMR code. The steady-state simulation scheme and the methods for calculating the minimum CHFR and hottest fuel temperature are also described.
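
    To make the node/flow-path discretization concrete, the sketch below shows the two primitives in Python. It is an illustrative reconstruction only; the class and attribute names are hypothetical and do not come from TASS/SMR.

        # Minimal sketch of a node/flow-path network as used in system codes
        # such as TASS/SMR (names are illustrative, not from the actual code).

        class Node:
            """Control volume holding the fluid mass and energy."""
            def __init__(self, volume_m3, mass_kg, energy_j):
                self.volume = volume_m3
                self.mass = mass_kg      # mass conservation is applied here
                self.energy = energy_j   # energy conservation is applied here

        class FlowPath:
            """Junction connecting two nodes; carries the momentum equation."""
            def __init__(self, upstream, downstream, flow_kg_s=0.0):
                self.upstream = upstream
                self.downstream = downstream
                self.flow = flow_kg_s    # momentum conservation is applied here

        # Primary and secondary systems are built from the same primitives and
        # their conservation equations are solved simultaneously.
        core = Node(volume_m3=1.2, mass_kg=900.0, energy_j=2.5e9)
        sg = Node(volume_m3=2.0, mass_kg=1500.0, energy_j=1.8e9)
        hot_leg = FlowPath(core, sg, flow_kg_s=250.0)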

  4. Open Genetic Code: on open source in the life sciences.

    Science.gov (United States)

    Deibel, Eric

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life sciences refers to access, sharing and collaboration as informatic practices. This includes open source as an experimental model and as a more sophisticated approach to genetic engineering. The first section discusses the greater flexibility with regard to patenting and its relationship to the introduction of open source in the life sciences. The main argument is that the ownership of knowledge in the life sciences should be reconsidered in the context of the centrality of DNA in informatic formats. This is illustrated by discussing a range of examples of open source models. The second part focuses on open source in synthetic biology as exemplary for the re-materialization of information into food, energy, medicine and so forth. The paper ends by raising the question whether another kind of alternative might be possible: one that looks at open source as a model for an alternative to the commodification of life, understood as an attempt to comprehensively remove the restrictions from the usage of DNA in any of its formats.

  5. Joint Source-Channel Coding for Wavelet-Based Scalable Video Transmission Using an Adaptive Turbo Code

    Directory of Open Access Journals (Sweden)

    Ramzan Naeem

    2007-01-01

    Full Text Available An efficient approach for joint source and channel coding is presented. The proposed approach exploits the joint optimization of a wavelet-based scalable video coding framework and a forward error correction method based on turbo codes. The scheme minimizes the reconstructed video distortion at the decoder subject to a constraint on the overall transmission bitrate budget. The minimization is achieved by exploiting the source rate distortion characteristics and the statistics of the available codes. Here, the critical problem of estimating the bit error rate probability in error-prone applications is discussed. Aiming at improving the overall performance of the underlying joint source-channel coding, the combination of the packet size, interleaver, and channel coding rate is optimized using Lagrangian optimization. Experimental results show that the proposed approach outperforms conventional forward error correction techniques at all bit error rates. It also significantly improves the performance of end-to-end scalable video transmission at all channel bit rates.

  7. Neutron imaging with coded sources: new challenges and the implementation of a simultaneous iterative reconstruction technique

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Bingham, Philip R [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK)

    2013-01-01

    The limitations in neutron flux and resolution (L/D) of current neutron imaging systems can be addressed with a Coded Source Imaging system with magnification (xCSI). More precisely, the multiple sources in an xCSI system can exceed the flux of a single pinhole system by several orders of magnitude, while maintaining a higher L/D with the small sources. Moreover, designing for an xCSI system reduces noise from neutron scattering, because the object is placed away from the detector to achieve magnification. However, xCSI systems are adversely affected by correlated noise such as non-uniform illumination of the neutron source, incorrect sampling of the coded radiograph, misalignment of the coded masks, mask transparency, and imperfection of the system Point Spread Function (PSF). We argue that a model-based reconstruction algorithm can overcome these problems and describe the implementation of a Simultaneous Iterative Reconstruction Technique algorithm for coded sources. Design pitfalls that preclude a satisfactory reconstruction are documented.

  8. Validation of the coupling of mesh models to GEANT4 Monte Carlo code for simulation of internal sources of photons; Validacao do acoplamento de modelos mesh ao codigo Monte Carlo GEANT4 para simulacao de fontes de fotons internas

    Energy Technology Data Exchange (ETDEWEB)

    Caribe, Paulo Rauli Rafeson Vasconcelos, E-mail: raulycaribe@hotmail.com [Universidade Federal Rural de Pernambuco (UFRPE), Recife, PE (Brazil). Fac. de Fisica; Cassola, Vagner Ferreira; Kramer, Richard; Khoury, Helen Jamil [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear

    2013-07-01

    The use of three-dimensional models described by polygonal meshes in numerical dosimetry enables more accurate modeling of complex objects than the use of simple solids. The objectives of this work were to validate the coupling of mesh models to the Monte Carlo code GEANT4 and to evaluate the influence of the number of vertices in the simulations on the absorbed fractions of energy (AFEs) obtained. Validation of the coupling was performed for internal sources of photons with energies between 10 keV and 1 MeV, for spherical geometries described by GEANT4 and for three-dimensional models with different numbers of vertices and triangular or quadrilateral faces modeled using the Blender program. As a result, it was found that there were no significant differences between AFEs for objects described by mesh models and objects described using solid volumes of GEANT4. Provided that the shape and the volume are maintained, decreasing the number of vertices used to describe an object does not significantly influence the dosimetric data, but it significantly decreases the time required to perform the dosimetric calculations, especially for energies less than 100 keV.

  9. A Comparison of Source Code Plagiarism Detection Engines

    Science.gov (United States)

    Lancaster, Thomas; Culwin, Fintan

    2004-06-01

    Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and effective techniques are seen to involve tokenising student submissions then searching pairs of submissions for long common substrings, an example of what is defined to be a paired structural metric. Computing academics are recommended to use one of the two Web-based detection engines, MOSS and JPlag. It is shown that whilst detection is well established there are still places where further research would be useful, particularly where visual support of the investigation process is possible.
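
    A minimal sketch of the paired structural metric described above: tokenise two submissions, then find the longest common token run. Real engines such as JPlag use greedy string tiling over all matching tiles; the Python fragment below only illustrates the principle.

        import re
        from difflib import SequenceMatcher

        def tokenize(source):
            """Crude lexer: identifiers become 'ID', numbers 'NUM',
            everything else is kept as a literal token."""
            tokens = []
            for tok in re.findall(r"[A-Za-z_]\w*|\d+|\S", source):
                if re.match(r"[A-Za-z_]", tok):
                    tokens.append("ID")
                elif tok.isdigit():
                    tokens.append("NUM")
                else:
                    tokens.append(tok)
            return tokens

        def longest_common_run(a, b):
            ta, tb = tokenize(a), tokenize(b)
            m = SequenceMatcher(None, ta, tb).find_longest_match(0, len(ta), 0, len(tb))
            return m.size

        # Renaming variables does not change the token stream, so the long
        # common run survives simple disguise attempts.
        print(longest_common_run("total = a + b", "summe = x + y"))  # -> 5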

  10. Dynamic clustering of distributed source coding in wireless sensor networks

    Institute of Scientific and Technical Information of China (English)

    LIU Bing

    2009-01-01

    There are correlations between the data of adjacent sensor nodes in wireless sensor networks (WSNs). Distributed source coding (DSC) is an approach to improve the energy efficiency of WSNs by compressing sensor data that are correlated with other nodes' data. When utilizing DSC, the network architecture, which decides which nodes transmit the side information and which nodes compress their data according to the correlations, influences the compression efficiency significantly. In contrast to former schemes that have no adaptation, a dynamic clustering scheme is presented in this article, with which the network is partitioned into clusters adaptive to the topology and the degree of correlation. The simulation indicates that the proposed scheme has higher efficiency than static clustering schemes.

  11. Source Code Verification for Embedded Systems using Prolog

    Directory of Open Access Journals (Sweden)

    Frank Flederer

    2017-01-01

    Full Text Available System-relevant embedded software needs to be reliable and, therefore, well tested, especially for aerospace systems. A common technique to verify programs is the analysis of their abstract syntax tree (AST). Tree structures can be elegantly analyzed with the logic programming language Prolog. Moreover, Prolog offers further advantages for a thorough analysis: on the one hand, it natively provides versatile options to efficiently process tree or graph data structures. On the other hand, Prolog's non-determinism and backtracking ease testing of different variations of the program flow without much effort. A rule-based approach with Prolog allows the verification goals to be characterized in a concise and declarative way. In this paper, we describe our approach to verify the source code of a flash file system with the help of Prolog. The flash file system is written in C++ and has been developed particularly for use in satellites. We transform a given abstract syntax tree of C++ source code into Prolog facts and derive the call graph and the execution sequence (tree), which are then further tested against verification goals. The different program-flow branches due to control structures are derived by backtracking as subtrees of the full execution sequence. Finally, these subtrees are verified in Prolog. We illustrate our approach with a case study, where we search for incorrect applications of semaphores in embedded software using the real-time operating system RODOS. We rely on computation tree logic (CTL) and have designed an embedded domain-specific language (DSL) in Prolog to express the verification goals.
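
    The paper's pipeline turns a C++ AST into Prolog facts and checks rules by backtracking. As a rough analogue of the semaphore case study, the hypothetical Python sketch below walks an AST and flags an acquire without a matching release; it stands in for the idea only, not for the authors' Prolog rules.

        # Rough analogue of AST-based verification: flag a semaphore acquire
        # that has no matching release (illustrative only, not the paper's
        # Prolog pipeline).
        import ast

        SOURCE = """
        def worker(sem):
            sem.acquire()
            do_work()        # missing sem.release() -> should be flagged
        """

        def method_calls_in(func):
            for node in ast.walk(func):
                if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
                    yield node.func.attr

        tree = ast.parse(SOURCE)
        for func in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
            calls = list(method_calls_in(func))
            if "acquire" in calls and "release" not in calls:
                print(f"{func.name}: semaphore acquired but never released")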

  12. Simulation of Electron Trajectories in the Multicusp Ion Source Using Geant4 Monte Carlo Code

    Science.gov (United States)

    Khodadadi Azadboni, Fatemeh; Sedaghatizade, Mahmood

    2010-04-01

    To optimize the multicusp ion source, understanding of transport properties of electrons is indispensable. Since the transport of electrons in the multicusp ion source is a three-dimensional problem, we use the 3D computer code Geant4, to model the particle trajectories. The goal is to study the effect of electron injection into a cylindrical gas chamber and the electron trajectories. The role of the magnetic filter in contemporary negative ion sources is analyzed. The conditions in the magnetic filter adjacent to the plasma electrode optimum for the generation, formation, and extraction of an H- ion beam are found. The simulation results are in good agreement with the experimental data.

  13. Improved virtual channel noise model for transform domain Wyner-Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2009-01-01

    Distributed video coding (DVC) has been proposed as a new video coding paradigm to deal with lossy source coding using side information to exploit the statistics at the decoder to reduce computational demands at the encoder. A virtual channel noise model is utilized at the decoder to estimate...

  14. A MATLAB based 3D modeling and inversion code for MT data

    Science.gov (United States)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

    The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.

  15. Optimal Linear Joint Source-Channel Coding with Delay Constraint

    CERN Document Server

    Johannesson, Erik; Bernhardsson, Bo; Ghulchak, Andrey

    2012-01-01

    The problem of joint source-channel coding is considered for a stationary remote (noisy) Gaussian source and a Gaussian channel. The encoder and decoder are assumed to be causal and their combined operations are subject to a delay constraint. It is shown that, under the mean-square error distortion metric, an optimal encoder-decoder pair from the linear and time-invariant (LTI) class can be found by minimization of a convex functional and a spectral factorization. The functional to be minimized is the sum of the well-known cost in a corresponding Wiener filter problem and a new term, which is induced by the channel noise and whose coefficient is the inverse of the channel's signal-to-noise ratio. This result is shown to also hold in the case of vector-valued signals, assuming parallel additive white Gaussian noise channels. It is also shown that optimal LTI encoders and decoders generally require infinite memory, which implies that approximations are necessary. A numerical example is provided, which compares ...

  16. Providing Source Code Level Portability Between CPU and GPU with MapCG

    Institute of Scientific and Technical Information of China (English)

    Chun-Tao Hong; De-Hao Chen; Yu-Bei Chen; Wen-Guang Chen; Wei-Min Zheng; Hai-Bo Lin

    2012-01-01

    Graphics processing units (GPU) have taken an important role in the general purpose computing market in recent years. At present, the common approach to programming GPU units is to write GPU-specific code with low-level GPU APIs such as CUDA. Although this approach can achieve good performance, it creates serious portability issues as programmers are required to write a specific version of the code for each potential target architecture. This results in high development and maintenance costs. We believe it is desirable to have a programming model which provides source code portability between CPUs and GPUs, as well as different GPUs. This would allow programmers to write one version of the code, which can be compiled and executed on either CPUs or GPUs efficiently without modification. In this paper, we propose MapCG, a MapReduce framework to provide source code level portability between CPUs and GPUs. In contrast to other approaches such as OpenCL, our framework, based on MapReduce, provides a high-level programming model and makes programming much easier. We describe the design of MapCG, including the MapReduce-style high-level programming framework and the runtime system on the CPU and GPU. A prototype of the MapCG runtime, supporting multi-core CPUs and NVIDIA GPUs, was implemented. Our experimental results show that this implementation can execute the same source code efficiently on multi-core CPU platforms and GPUs, achieving an average speedup of 1.6~2.5x over previous implementations of MapReduce on eight commonly used applications.
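
    MapCG itself is implemented for multi-core CPUs and NVIDIA GPUs, but the MapReduce-style programming model it exposes can be illustrated with a toy driver. The Python sketch below mirrors only the map/group/reduce interface; it is not MapCG code.

        from collections import defaultdict

        # Toy MapReduce driver illustrating the programming model (MapCG
        # itself is a C/CUDA framework; this sketch only mirrors the style).
        def run_mapreduce(records, map_fn, reduce_fn):
            groups = defaultdict(list)
            for rec in records:                  # map phase
                for key, value in map_fn(rec):
                    groups[key].append(value)    # shuffle/group by key
            return {k: reduce_fn(k, vs) for k, vs in groups.items()}  # reduce

        wordcount = run_mapreduce(
            ["source code", "open source"],
            map_fn=lambda line: [(w, 1) for w in line.split()],
            reduce_fn=lambda word, counts: sum(counts),
        )
        print(wordcount)  # {'source': 2, 'code': 1, 'open': 1}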

  17. Coding conventions and principles for a National Land-Change Modeling Framework

    Science.gov (United States)

    Donato, David I.

    2017-07-14

    This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.

  18. Optimal source codes for geometrically distributed integer alphabets

    Science.gov (United States)

    Gallager, R. G.; Van Voorhis, D. C.

    1975-01-01

    An approach is shown for using the Huffman algorithm indirectly to prove the optimality of a code for an infinite alphabet if an estimate concerning the nature of the code can be made. Attention is given to nonnegative integers with a geometric probability assignment. The particular distribution considered arises in run-length coding and in encoding protocol information in data networks. Questions of redundancy of the optimal code are also investigated.
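
    The code that is optimal for this geometric distribution is the Golomb code: by the Gallager and Van Voorhis result, for P(i) = (1-theta)*theta**i the parameter m is the least integer with theta**m + theta**(m+1) <= 1. The sketch below is an illustrative implementation, not code from the paper.

        import math

        def golomb_parameter(theta):
            """Least integer m with theta**m + theta**(m+1) <= 1."""
            m = 1
            while theta**m + theta**(m + 1) > 1:
                m += 1
            return m

        def golomb_encode(n, m):
            q, r = divmod(n, m)
            prefix = "1" * q + "0"          # unary quotient
            if m == 1:
                return prefix
            b = math.ceil(math.log2(m))
            cutoff = (1 << b) - m           # truncated binary remainder
            if r < cutoff:
                return prefix + format(r, "b").zfill(b - 1)
            return prefix + format(r + cutoff, "b").zfill(b)

        m = golomb_parameter(0.8)           # -> 3
        print(m, [golomb_encode(i, m) for i in range(5)])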

  19. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    Directory of Open Access Journals (Sweden)

    W. Bastiaan Kleijn

    2005-06-01

    Full Text Available Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.

  20. FLOWTRAN-TF v1.2 source code

    Energy Technology Data Exchange (ETDEWEB)

    Aleman, S.E.; Cooper, R.E.; Flach, G.P.; Hamm, L.L.; Lee, S.; Smith, F.G. III.

    1993-02-01

    The FLOWTRAN-TF code development effort was initiated in early 1989 as a code to monitor production reactor cooling systems at the Savannah River Plant. This report is a documentation of the various codes that make up FLOWTRAN-TF.

  4. Modeling peripheral olfactory coding in Drosophila larvae.

    Directory of Open Access Journals (Sweden)

    Derek J Hoare

    Full Text Available The Drosophila larva possesses just 21 unique and identifiable pairs of olfactory sensory neurons (OSNs), enabling investigation of the contribution of individual OSN classes to the peripheral olfactory code. We combined electrophysiological and computational modeling to explore the nature of the peripheral olfactory code in situ. We recorded firing responses of 19/21 OSNs to a panel of 19 odors. This was achieved by creating larvae expressing just one functioning class of odorant receptor, and hence OSN. Odor response profiles of each OSN class were highly specific and unique. However, many OSN-odor pairs yielded variable responses, some of which were statistically indistinguishable from background activity. We used these electrophysiological data, incorporating both responses and spontaneous firing activity, to develop a Bayesian decoding model of olfactory processing. The model was able to accurately predict odor identity from raw OSN responses; prediction accuracy ranged from 12%-77% (mean for all odors 45.2%) but was always significantly above chance (5.6%). However, there was no correlation between prediction accuracy for a given odor and the strength of responses of wild-type larvae to the same odor in a behavioral assay. We also used the model to predict the ability of the code to discriminate between pairs of odors. Some of these predictions were supported in a behavioral discrimination (masking) assay but others were not. We conclude that our model of the peripheral code represents basic features of odor detection and discrimination, yielding insights into the information available to higher processing structures in the brain.
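
    A toy version of such a Bayesian decoder, assuming Poisson firing statistics and a uniform prior over odors, is sketched below. The rates are made up for illustration; the paper's actual model may differ in its likelihood and priors.

        import math

        # Hypothetical mean spike counts per OSN class, per odor.
        RATES = {
            "ethyl acetate": [12.0, 2.0, 0.5],
            "1-octanol":     [1.0, 9.0, 3.0],
        }

        def log_poisson(k, lam):
            return k * math.log(lam) - lam - math.lgamma(k + 1)

        def decode(observed_counts):
            """Maximum-posterior odor identity under a uniform prior."""
            scores = {
                odor: sum(log_poisson(k, lam)
                          for k, lam in zip(observed_counts, rates))
                for odor, rates in RATES.items()
            }
            return max(scores, key=scores.get)

        print(decode([10, 1, 1]))  # -> 'ethyl acetate'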

  5. Video Coding and Modeling with Applications to ATM Multiplexing

    Science.gov (United States)

    Nguyen, Hien

    A new vector quantization (VQ) coding method based on optimized concentric shell partitioning of the image space is proposed. The advantages of using the concentric shell partition vector quantizer (CSPVQ) are that it is very fast and the image patterns found in each different subspace can be more effectively coded by using a codebook that is best matched to that particular subspace. For intra-frame coding, the CSPVQ is shown to have the same performance, if not better, than the optimized gain-shape VQ in terms of encoded picture quality while it definitely surpasses the gain-shape VQ in terms of computational complexity. A variable bit rate (VBR) video coder for moving video is then proposed where the idea of CSPVQ is coupled with the idea of regular quadtree decomposition to further reduce the bit rate of the encoded picture sequence. The usefulness of a quadtree coding technique comes from the fact that different homogeneous regions occurring within an image can be compactly represented by various nodes in a quadtree. It is found that this image representation technique is particularly useful in providing a low bit rate video encoder without compromising the image quality when it is used in conjunction with the CSPVQ. The characteristics of the VBR coder's output as applied to ATM transmission are investigated. Three video models are used to study the performance of the ATM multiplexer. These models are the auto regressive (AR) model, the auto regressive hidden Markov model (AR-HMM), and the fluid flow uniform arrival and service (UAS) model. The AR model is allowed to have arbitrary order and is used to model a video source which has a constant amount of motion, that is, a stationary video source. The AR-HMM is a more general video model which is based on the idea of an auto regressive hidden Markov chain formulated by Baum and is used to describe highly non-stationary sources. Hence, it is expected that the AR-HMM model may also be used to represent a video

  6. System Data Model (SDM) Source Code

    Science.gov (United States)

    2012-08-23

    (The record's abstract is a garbled excerpt of an XML telemetry definition from the SDM source. The recoverable fragments define a time-kind Variable "recvTime", a "PRN" Variable with an array-representation Qualifier describing the visible GPS satellites, and a FLOAT32 "distance" Variable in km holding pseudorange measurements for each visible GPS satellite.)

  7. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Model code provisions for use in partially accepted code jurisdictions. 200.926c Section 200.926c Housing and Urban Development Regulations... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted...

  8. OSSMETER D3.2 – Report on Source Code Activity Metrics

    NARCIS (Netherlands)

    Vinju, J.J.; Shahi, A.

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and initial prototypes of the tools that are needed for source code activity analysis. It builds upon the Deliverable 3.1 where infra-structure and a domain analysis have been investigated for Source

  9. OSSMETER D3.4 – Language-Specific Source Code Quality Analysis

    NARCIS (Netherlands)

    Vinju, J.J.; Shahi, A.; Basten, H.J.S.

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and prototypes of the tools that are needed for source code quality analysis in open source software projects. It builds upon the results of: • Deliverable 3.1 where infra-structure and a domain anal

  10. Source Coding When the Side Information May Be Delayed

    CERN Document Server

    Simeone, Osvaldo

    2011-01-01

    For memoryless sources, delayed side information at the decoder does not improve the rate-distortion function. However, this is not the case for more general sources with memory, as demonstrated by a number of works focusing on the special case of (delayed) feedforward. In this paper, a setting is studied in which the encoder is potentially uncertain about the delay with which measurements of the side information are acquired at the decoder. Assuming a hidden Markov model for the sources, at first, a single-letter characterization is given for the set-up where the side information delay is arbitrary and known at the encoder, and the reconstruction at the destination is required to be (near) lossless. Then, with delay equal to zero or one source symbol, a single-letter characterization is given of the rate-distortion region for the case where side information may be delayed or not, unbeknownst to the encoder. The characterization is further extended to allow for additional information to be sent when the side ...

  11. Finding Code Clones for Refactoring with Clone Metrics : A Case Study of Open Source Software

    OpenAIRE

    Choi, Eunjong; Yoshida, Norihiro; Ishio, Takashi; Inoue, Katsuro; Sano, Tateki

    2011-01-01

    A code clone is a code fragment that has identical or similar fragments elsewhere in the source code. Code clones have been regarded as one of the factors that make software maintenance more difficult. Therefore, refactoring code clones into a single method is a promising way to reduce future maintenance costs. In our previous study, we proposed a method to extract code clones for refactoring using clone metrics. We had conducted an empirical study on a Java application developed by NEC Corporat...

  12. PhpHMM Tool for Generating Speech Recogniser Source Codes Using Web Technologies

    Directory of Open Access Journals (Sweden)

    R. Krejčí

    2011-01-01

    Full Text Available This paper deals with the “phpHMM” software tool, which facilitates the development and optimisation of speech recognition algorithms. This tool is being developed in the Speech Processing Group at the Department of Circuit Theory, CTU in Prague, and it is used to generate the source code of a speech recogniser by means of the PHP scripting language and the MySQL database. The input of the system is a model of speech in a standard HTK format and a list of words to be recognised. The output consists of the source codes and data structures in C programming language, which are then compiled into an executable program. This tool is operated via a web interface.

  13. The Optimal Fix-Free Code for Anti-Uniform Sources

    Directory of Open Access Journals (Sweden)

    Ali Zaghian

    2015-03-01

    Full Text Available An \(n\)-symbol source which has a Huffman code with codelength vector \(L_{n}=(1,2,3,\cdots,n-2,n-1,n-1)\) is called an anti-uniform source. In this paper, it is shown that for this class of sources, the optimal fix-free code and symmetric fix-free code is \(C_{n}^{*}=(0,11,101,1001,\cdots,1\overbrace{0\cdots0}^{n-2}1)\).
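
    The codewords can be generated directly from the formula above; a short illustrative sketch:

        def fix_free_code(n):
            """Codewords C*_n = (0, 11, 101, 1001, ..., 1 0^(n-2) 1):
            '0' plus words with k = 0..n-2 zeros between two ones."""
            return ["0"] + ["1" + "0" * k + "1" for k in range(n - 1)]

        print(fix_free_code(5))  # ['0', '11', '101', '1001', '10001']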

  14. Distributed and Cascade Lossy Source Coding with a Side Information "Vending Machine"

    CERN Document Server

    Ahmadi, Behzad

    2011-01-01

    Source coding with a side information "vending machine" is a recently proposed framework in which the statistical relationship between the side information and the source, instead of being given and fixed as in the classical Wyner-Ziv problem, can be controlled by the decoder. This control action is selected by the decoder based on the message encoded by the source node. Unlike conventional settings, the message can thus carry not only information about the source to be reproduced at the decoder, but also control information aimed at improving the quality of the side information. In this paper, the single-letter characterization of the trade-offs between rate, distortion and cost associated with the control actions is extended from the previously studied point-to-point set-up to two basic multiterminal models. First, a distributed source coding model is studied, in which an arbitrary number of encoders communicate over rate-limited links to a decoder, whose side information can be controlled. The control acti...

  15. Creating Models for the ORIGEN Codes

    Science.gov (United States)

    Louden, G. D.; Mathews, K. A.

    1997-10-01

    Our research focused on the development of a methodology for creating reactor-specific cross-section libraries for nuclear reactor and nuclear fuel cycle analysis codes available from the Radiation Safety Information Computational Center. The creation of problem-specific models allows more detailed analysis than is possible using the generic models provided with ORIGEN2 and ORIGEN-S. A model of the Ohio State University Research Reactor was created using the Coupled 1-D Shielding Analysis (SAS2H) module of the Modular Code System for Performing Standardized Computer Analysis for Licensing Evaluation (SCALE4.3). Six different reactor core models were compared to identify the effect of changing the SAS2H Larger Unit Cell on the predicted isotopic composition of spent fuel. Seven different power histories were then applied to a Core-Average model to determine the ability of ORIGEN-S to distinguish spent fuel produced under varying operating conditions. Several actinide and fission product concentrations were identified which were sensitive to the power history; however, the majority of the isotope concentrations were not dependent on operating history.

  16. Mesoscale, Sources and Models: Sources for Nitrogen in the Atmosphere

    DEFF Research Database (Denmark)

    Hertel, O.

    1994-01-01

    The project Mesoscale, Sources and Models: Sources for Nitrogen in the Atmosphere is divided into three subprojects: Sources - farmland, Sources - sea, and Sources - biogenic nitrogen.

  17. COMENTE+: A TOOL FOR IMPROVING SOURCE CODE DOCUMENTATION USING INFORMATION RETRIEVAL

    Directory of Open Access Journals (Sweden)

    Julio Cezar Zanoni

    2014-01-01

    Full Text Available Documenting source code is seen as a boring, time-consuming task by many developers. However, well-documented source code allows developers to have better visibility into what was and is being developed, helping, for example, the reuse of the code. This study presents a semi-automatic method for documenting source code from the artifacts existing in a software project under development. The method aims to reduce the developers' workload, allowing them to work on other tasks of the project and/or ensure that the project deadlines will be met. The method, implemented in a tool called Comente+, is capable of creating or updating comments in source code from information recovered from the project artifacts. To implement Comente+, we used an information retrieval approach. We performed experiments with real data to validate this approach. For that, we created a special measure that estimates how well documented a source code is.

  19. LENSED: a code for the forward reconstruction of lenses and sources from strong lensing observations

    Science.gov (United States)

    Tessore, Nicolas; Bellagamba, Fabio; Metcalf, R. Benton

    2016-12-01

    Robust modelling of strong lensing systems is fundamental to exploit the information they contain about the distribution of matter in galaxies and clusters. In this work, we present LENSED, a new code which performs forward parametric modelling of strong lenses. LENSED takes advantage of a massively parallel ray-tracing kernel to perform the necessary calculations on a modern graphics processing unit (GPU). This makes the precise rendering of the background lensed sources much faster, and allows the simultaneous optimization of tens of parameters for the selected model. With a single run, the code is able to obtain the full posterior probability distribution for the lens light, the mass distribution and the background source at the same time. LENSED is first tested on mock images which reproduce realistic space-based observations of lensing systems. In this way, we show that it is able to recover unbiased estimates of the lens parameters, even when the sources do not follow exactly the assumed model. Then, we apply it to a subsample of the Sloan Lens ACS Survey lenses, in order to demonstrate its use on real data. The results generally agree with the literature, and highlight the flexibility and robustness of the algorithm.

  20. Neutron spallation source and the Dubna Cascade Code

    Indian Academy of Sciences (India)

    V Kumar; H Kumawat; Uttam Goel; V S Barashenkov

    2003-03-01

    Neutron multiplicity per incident proton, n/p, in collision of a high energy proton beam with voluminous Pb and W targets has been estimated from the Dubna Cascade Code and compared with the available experimental data for the purpose of benchmarking the code. Contributions of various atomic and nuclear processes to heat production and the isotopic yield of secondary nuclei are also estimated to assess the heat and radioactivity conditions of the targets. Results obtained from the code show excellent agreement with the experimental data at beam energies E < 1.2 GeV and differ by up to 25% at higher energy.

  1. Neutron spallation source and the Dubna cascade code

    CERN Document Server

    Kumar, V; Goel, U; Barashenkov, V S

    2003-01-01

    Neutron multiplicity per incident proton, n/p, in collision of a high energy proton beam with voluminous Pb and W targets has been estimated from the Dubna cascade code and compared with the available experimental data for the purpose of benchmarking the code. Contributions of various atomic and nuclear processes to heat production and the isotopic yield of secondary nuclei are also estimated to assess the heat and radioactivity conditions of the targets. Results obtained from the code show excellent agreement with the experimental data at beam energy E < 1.2 GeV and differ by up to 25% at higher energy. (author)

  2. A spectral synthesis code for rapid modelling of supernovae

    CERN Document Server

    Kerzendorf, Wolfgang E

    2014-01-01

    We present TARDIS - an open-source code for rapid spectral modelling of supernovae (SNe). Our goal is to develop a tool that is sufficiently fast to allow exploration of the complex parameter spaces of models for SN ejecta. This can be used to analyse the growing number of high-quality SN spectra being obtained by transient surveys. The code uses Monte Carlo methods to obtain a self-consistent description of the plasma state and to compute a synthetic spectrum. It has a modular design to facilitate the implementation of a range of physical approximations that can be compared to assess both accuracy and computational expediency. This will allow users to choose a level of sophistication appropriate for their application. Here, we describe the operation of the code and make comparisons with alternative radiative transfer codes of differing levels of complexity (SYN++, PYTHON, and ARTIS). We then explore the consequence of adopting simple prescriptions for the calculation of atomic excitation, focussing on four sp...

  3. A graph model for opportunistic network coding

    KAUST Repository

    Sorour, Sameh

    2015-08-12

    Recent advancements in graph-based analysis and solutions of instantly decodable network coding (IDNC) trigger the interest to extend them to more complicated opportunistic network coding (ONC) scenarios, with limited increase in complexity. In this paper, we design a simple IDNC-like graph model for a specific subclass of ONC, by introducing a more generalized definition of its vertices and the notion of vertex aggregation in order to represent the storage of non-instantly-decodable packets in ONC. Based on this representation, we determine the set of pairwise vertex adjacency conditions that can populate this graph with edges so as to guarantee decodability or aggregation for the vertices of each clique in this graph. We then develop the algorithmic procedures that can be applied on the designed graph model to optimize any performance metric for this ONC subclass. A case study on reducing the completion time shows that the proposed framework improves on the performance of IDNC and gets very close to the optimal performance.
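
    The classical IDNC adjacency rule that the paper generalizes can be sketched as follows (a toy example, with vertex aggregation omitted): two vertices (r1, p1) and (r2, p2) are adjacent if p1 = p2, or if each receiver already holds the other's wanted packet.

        from itertools import combinations

        # Classical IDNC graph construction (the paper extends this with
        # vertex aggregation for non-instantly-decodable packets).
        has   = {"r1": {1, 2}, "r2": {2, 3}, "r3": {1, 3}}
        wants = {"r1": {3},    "r2": {1},    "r3": {2}}

        vertices = [(r, p) for r, ps in wants.items() for p in ps]

        def adjacent(v, w):
            (r1, p1), (r2, p2) = v, w
            return p1 == p2 or (p1 in has[r2] and p2 in has[r1])

        edges = [(v, w) for v, w in combinations(vertices, 2) if adjacent(v, w)]
        # Every clique corresponds to an XOR of its packets that each
        # receiver in the clique can decode instantly.
        print(edges)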

  4. Presenting an Alternative Source Code Plagiarism Detection Framework for Improving the Teaching and Learning of Programming

    Science.gov (United States)

    Hattingh, Frederik; Buitendag, Albertus A. K.; van der Walt, Jacobus S.

    2013-01-01

    The transfer and teaching of programming and programming-related skills has become increasingly difficult on an undergraduate level over the past years. This is partially due to the number of programming languages available as well as access to readily available source code over the Web. Source code plagiarism is common practice amongst many…

  5. MELCOR code source term characteristics for fast SBO scenario of OPR1000 plant

    Energy Technology Data Exchange (ETDEWEB)

    Han, Seok Jung; Kim, Tae Woon; Park, Sun Hee; Ahn, Kwang Il [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    Offsite consequence analysis in Level 3 PSA is mainly affected by the source term release characteristics of a nuclear plant. Severe accident analysis codes that quantify source term release characteristics, such as MELCOR or MAAP, can provide the detailed information on these characteristics needed to assess offsite consequences. To utilize these characteristics from a severe accident analysis code, the MELCOR code was used for a specific severe accident scenario, i.e., fast station blackout (SBO), for the OPR1000 plant.

  6. Source Code Plagiarism Detection Method Using Protégé Built Ontologies

    Directory of Open Access Journals (Sweden)

    Ion SMEUREANU

    2013-01-01

    Full Text Available Software plagiarism is a growing and serious problem that affects computer science universities in particular and the quality of education in general. More and more students tend to copy their thesis's software from older theses or internet databases. Checking source codes manually, to detect if they are similar or the same, is a laborious and time-consuming job, maybe even impossible due to the existence of large digital repositories. Ontology is a way of describing a document's semantics, so it can easily be used for source code files too. The OWL Web Ontology Language can find applicability in describing both the vocabulary and the taxonomy of a programming language's source code. SPARQL is a query language based on SQL that extracts saved or deduced information from ontologies. Our paper proposes a source code plagiarism detection method, based on ontologies created using the Protégé editor, which can be applied in scanning students' theses' software source code.

  7. ER@CEBAF: Modeling code developments

    Energy Technology Data Exchange (ETDEWEB)

    Meot, F. [Brookhaven National Lab. (BNL), Upton, NY (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Roblin, Y. [Brookhaven National Lab. (BNL), Upton, NY (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)

    2016-04-13

    A proposal for a multiple-pass, high energy, energy-recovery experiment using CEBAF is under preparation in the frame of a JLab-BNL collaboration. In view of beam dynamics investigations regarding this project, in addition to the existing model in use in Elegant, a version of CEBAF is being developed in the stepwise ray-tracing code Zgoubi. Beyond the ER experiment, it is also planned to use the latter for the study of polarization transport in the presence of synchrotron radiation, down to the Hall D line where a 12 GeV polarized beam can be delivered. This Note briefly reports on the preliminary steps, and preliminary outcomes, based on an Elegant-to-Zgoubi translation.

  8. Simulink Code Generation: Tutorial for Generating C Code from Simulink Models using Simulink Coder

    Science.gov (United States)

    MolinaFraticelli, Jose Carlos

    2012-01-01

    This document explains all the necessary steps in order to generate optimized C code from Simulink Models. This document also covers some general information on good programming practices, selection of variable types, how to organize models and subsystems, and finally how to test the generated C code and compare it with data from MATLAB.

  9. PetriCode: A Tool for Template-Based Code Generation from CPN Models

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge

    2014-01-01

    Code generation is an important part of model driven methodologies. In this paper, we present PetriCode, a software tool for generating protocol software from a subclass of Coloured Petri Nets (CPNs). The CPN subclass is comprised of hierarchical CPN models describing a protocol system at different...

  10. Enhancements to the SSME transfer function modeling code

    Science.gov (United States)

    Irwin, R. Dennis; Mitchell, Jerrel R.; Bartholomew, David L.; Glenn, Russell D.

    1995-01-01

    This report details the results of a one year effort by Ohio University to apply the transfer function modeling and analysis tools developed under NASA Grant NAG8-167 (Irwin, 1992), (Bartholomew, 1992) to attempt the generation of Space Shuttle Main Engine High Pressure Turbopump transfer functions from time domain data. In addition, new enhancements to the transfer function modeling codes which enhance the code functionality are presented, along with some ideas for improved modeling methods and future work. Section 2 contains a review of the analytical background used to generate transfer functions with the SSME transfer function modeling software. Section 2.1 presents the 'ratio method' developed for obtaining models of systems that are subject to single unmeasured excitation sources and have two or more measured output signals. Since most of the models developed during the investigation use the Eigensystem Realization Algorithm (ERA) for model generation, Section 2.2 presents an introduction of ERA, and Section 2.3 describes how it can be used to model spectral quantities. Section 2.4 details the Residue Identification Algorithm (RID) including the use of Constrained Least Squares (CLS) and Total Least Squares (TLS). Most of this information can be found in the report (and is repeated for convenience). Section 3 chronicles the effort of applying the SSME transfer function modeling codes to the a51p394.dat and a51p1294.dat time data files to generate transfer functions from the unmeasured input to the 129.4 degree sensor output. Included are transfer function modeling attempts using five methods. The first method is a direct application of the SSME codes to the data files and the second method uses the underlying trends in the spectral density estimates to form transfer function models with less clustering of poles and zeros than the models obtained by the direct method. In the third approach, the time data is low pass filtered prior to the modeling process in an
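
    For the 'ratio method' of Section 2.1 (one unmeasured input, two measured outputs), the ratio of the two transfer functions can be estimated from spectral densities, since S_{y1 y2}(f) / S_{y1 y1}(f) = H2(f)/H1(f). The SciPy sketch below demonstrates this on synthetic data; it is not the report's code and does not use the SSME data files.

        import numpy as np
        from scipy import signal

        # One unmeasured white-noise input drives two measured outputs; the
        # ratio of cross- to auto-spectrum estimates H2(f)/H1(f).
        fs = 1000.0
        x = np.random.randn(100_000)                      # unmeasured input
        b1, a1 = signal.butter(2, 0.1)                    # path to sensor 1
        b2, a2 = signal.butter(2, 0.3)                    # path to sensor 2
        y1 = signal.lfilter(b1, a1, x)
        y2 = signal.lfilter(b2, a2, x)

        f, s11 = signal.welch(y1, fs=fs, nperseg=1024)    # auto-spectrum of y1
        _, s12 = signal.csd(y1, y2, fs=fs, nperseg=1024)  # cross-spectrum y1, y2
        ratio = s12 / s11                                 # estimate of H2/H1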

  11. Cluster banding heat source model

    Institute of Scientific and Technical Information of China (English)

    Zhang Liguo; Ji Shude; Yang Jianguo; Fang Hongyuan; Li Yafan

    2006-01-01

    The concept of a cluster banding heat source model is put forward to address the problem of excessively many increment steps in the numerical simulation of large welded structures, and an expression for the cluster banding heat source model is deduced based on the energy conservation law. Because the deduced expression is suitable for arbitrary weld widths, quantitative analysis of the welding stress field for large welded structures with regular welds can be carried out quickly.

  12. An Approach to Calculate Reusability in Source Code Using Metrics

    OpenAIRE

    Rohit Patidar; Prof. Virendra Singh

    2015-01-01

    Reusability is one of the best ways to increase development productivity and the maintainability of applications. One must first search for good, tested, reusable software components. Application software developed by one programmer can prove useful to others as a component. This shows that code specific to one application's requirements can also be reused in other projects with similar requirements. The main aim of this paper is to propose a way to measure the reusability of a module. An pro...

  13. Open Genetic Code: on open source in the life sciences

    NARCIS (Netherlands)

    Deibel, E.

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life

  14. Joint source-channel coding for a quantum multiple access channel

    Science.gov (United States)

    Wilde, Mark M.; Savov, Ivan

    2012-11-01

    Suppose that two senders each obtain one share of the output of a classical, bivariate, correlated information source. They would like to transmit the correlated source to a receiver using a quantum multiple access channel. In prior work, Cover, El Gamal and Salehi provided a combined source-channel coding strategy for a classical multiple access channel which outperforms the simpler ‘separation’ strategy where separate codebooks are used for the source coding and the channel coding tasks. In this paper, we prove that a coding strategy similar to the Cover-El Gamal-Salehi strategy and a corresponding quantum simultaneous decoder allow for the reliable transmission of a source over a quantum multiple access channel, as long as a set of information inequalities involving the Holevo quantity hold.

  16. OSSMETER D3.2 – Report on Source Code Activity Metrics

    OpenAIRE

    Vinju, Jurgen; Shahi, Ashim

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and initial prototypes of the tools that are needed for source code activity analysis. It builds upon the Deliverable 3.1 where infra-structure and a domain analysis have been investigated for Source Code Quality Analysis and initial language dependent and independent metrics have been prototyped. Task 3.2 builds partly on the results of Task 3.1 and partly introduces new infra-structure. This...

  17. 40 CFR 194.23 - Models and computer codes.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e.,...

  18. Photovoltaic sources modeling

    CERN Document Server

    Petrone, Giovanni; Spagnuolo, Giovanni

    2016-01-01

    This comprehensive guide surveys all available models for simulating a photovoltaic (PV) generator at different levels of granularity, from cell to system level, in uniform as well as in mismatched conditions. Providing a thorough comparison among the models, engineers have all the elements needed to choose the right PV array model for specific applications or environmental conditions matched with the model of the electronic circuit used to maximize the PV power production.

  19. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Daehee Kim

    2016-07-01

Full Text Available Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Because WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent work on dynamic packet size control in WSNs enhances the energy efficiency of code dissemination by changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless at the next hop, where the packet size can vary according to that hop's link quality. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication in environments where the packet size changes at each hop, with smaller energy consumption.
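    As an illustration only (the paper's three schemes are not reproduced here, and a real scheme must survive per-hop repacketization, which this sketch does not handle), a minimal hash-chain source authentication sketch in Python; all names and packet contents are invented:

        import hashlib

        def build_hash_chain(chunks):
            """Link chunks back-to-front: each packet carries the digest of
            the next (chunk, digest) pair; the first digest is an anchor
            that the base station would distribute authentically."""
            digest = b""  # terminator after the last chunk
            packets = []
            for chunk in reversed(chunks):
                packets.append((chunk, digest))
                digest = hashlib.sha256(chunk + digest).digest()
            packets.reverse()
            return digest, packets  # (anchor, ordered packets)

        def verify_stream(anchor, packets):
            """Receiver recomputes the chain; any tampering breaks it."""
            expected = anchor
            for chunk, next_digest in packets:
                if hashlib.sha256(chunk + next_digest).digest() != expected:
                    return False
                expected = next_digest
            return expected == b""

        anchor, pkts = build_hash_chain([b"img-block-%d" % i for i in range(4)])
        print(verify_stream(anchor, pkts))  # True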

  20. The Astrophysics Source Code Library: Where do we go from here?

    CERN Document Server

    Allen, Alice; DuPrie, Kimberly; Hanisch, Robert J; Mink, Jessica; Nemiroff, Robert; Shamir, Lior; Shortridge, Keith; Taylor, Mark; Teuben, Peter; Wallin, John

    2013-01-01

    The Astrophysics Source Code Library, started in 1999, has in the past three years grown from a repository for 40 codes to a registry of over 700 codes that are now indexed by ADS. What comes next? We examine the future of the ASCL, the challenges facing it, the rationale behind its practices, and the need to balance what we might do with what we have the resources to accomplish.

  1. The analysis of thermal-hydraulic models in MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

Kim, M. H.; Hur, C.; Kim, D. K.; Cho, H. J. [Pohang University of Science and Technology (POSTECH), Pohang (Korea, Republic of)]

    1996-07-15

The objective of the present work is to verify the prediction and analysis capability of the MELCOR code for the progression of severe accidents in light water reactors, and to evaluate the appropriateness of the thermal-hydraulic models used in the code. To achieve this objective, experimental results are compared with MELCOR calculations. In particular, the CORA-13 experiment was compared with a MELCOR code calculation.

  2. Modeling Vortex Generators in a Navier-Stokes Code

    Science.gov (United States)

    Dudek, Julianne C.

    2011-01-01

    A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.
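    The abstract does not give the model's equations; as a rough illustration of a vane body-force source term, here is a hedged sketch in which the flat-plate lift slope, the even spreading over cells, and all variable names are assumptions:

        import numpy as np

        def vane_lift_source(rho, u, planform_area, incidence_deg, n_cells):
            """Toy body-force source term for one vane-type vortex generator.
            Approximates vane lift with a flat-plate lift coefficient
            CL ~ 2*pi*alpha and spreads the force evenly over the cells
            that the vane occupies (the user-specified grid-point range)."""
            alpha = np.deg2rad(incidence_deg)
            lift = 0.5 * rho * u**2 * planform_area * 2.0 * np.pi * alpha
            return np.full(n_cells, lift / n_cells)  # per-cell momentum source

        # Example: air at sea level, 50 m/s, a 1 cm^2 vane at 16 deg over 8 cells
        print(vane_lift_source(1.225, 50.0, 1e-4, 16.0, 8))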

  3. Distributed Joint Source-Channel Coding on a Multiple Access Channel with Side Information

    CERN Document Server

    Rajesh, R

    2008-01-01

    We consider the problem of transmission of several distributed sources over a multiple access channel (MAC) with side information at the sources and the decoder. Source-channel separation does not hold for this channel. Sufficient conditions are provided for transmission of sources with a given distortion. The source and/or the channel could have continuous alphabets (thus Gaussian sources and Gaussian MACs are special cases). Various previous results are obtained as special cases. We also provide several good joint source-channel coding schemes for a discrete/continuous source and discrete/continuous alphabet channel. Channels with feedback and fading are also considered. Keywords: Multiple access channel, side information, lossy joint source-channel coding, channels with feedback, fading channels.

  4. 50 CFR 23.24 - What code is used to show the source of the specimen?

    Science.gov (United States)

    2010-10-01

    ..., EXPORTATION, AND IMPORTATION OF WILDLIFE AND PLANTS (CONTINUED) CONVENTION ON INTERNATIONAL TRADE IN ENDANGERED SPECIES OF WILD FAUNA AND FLORA (CITES) Prohibitions, Exemptions, and Requirements § 23.24 What...-Convention, which should be used in conjunction with another code: Source of specimen Code (a)...

  5. Building guide : how to build Xyce from source code.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric Richard; Russo, Thomas V.; Schiek, Richard Louis; Sholander, Peter E.; Thornquist, Heidi K.; Mei, Ting; Verley, Jason C.

    2013-08-01

While Xyce uses the Autoconf and Automake system to configure builds, it is often necessary to perform more than the customary "./configure" builds that many open source users have come to expect. This document describes the steps needed to get Xyce built on a number of common platforms.

  6. Image Coding using Markov Models with Hidden States

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto

    1999-01-01

The Cylinder Partially Hidden Markov Model (CPH-MM) is applied to lossless coding of bi-level images. The original CPH-MM is relaxed for the purpose of coding by not imposing stationarity, but otherwise the model description is the same.

  7. Animal models of source memory.

    Science.gov (United States)

    Crystal, Jonathon D

    2016-01-01

    Source memory is the aspect of episodic memory that encodes the origin (i.e., source) of information acquired in the past. Episodic memory (i.e., our memories for unique personal past events) typically involves source memory because those memories focus on the origin of previous events. Source memory is at work when, for example, someone tells a favorite joke to a person while avoiding retelling the joke to the friend who originally shared the joke. Importantly, source memory permits differentiation of one episodic memory from another because source memory includes features that were present when the different memories were formed. This article reviews recent efforts to develop an animal model of source memory using rats. Experiments are reviewed which suggest that source memory is dissociated from other forms of memory. The review highlights strengths and weaknesses of a number of animal models of episodic memory. Animal models of source memory may be used to probe the biological bases of memory. Moreover, these models can be combined with genetic models of Alzheimer's disease to evaluate pharmacotherapies that ultimately have the potential to improve memory.

  8. Modeling Vortex Generators in the Wind-US Code

    Science.gov (United States)

    Dudek, Julianne C.

    2010-01-01

    A source term model which simulates the effects of vortex generators was implemented into the Wind-US Navier Stokes code. The source term added to the Navier-Stokes equations simulates the lift force which would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, supersonic flow in a rectangular duct with a counterrotating vortex generator pair, and subsonic flow in an S-duct with 22 co-rotating vortex generators. The validation results indicate that the source term vortex generator model provides a useful tool for screening vortex generator configurations and gives comparable results to solutions computed using a gridded vane.

  9. Interfacial and Wall Transport Models for SPACE-CAP Code

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Soon Joon; Choo, Yeon Joon; Han, Tae Young; Hwang, Su Hyun; Lee, Byung Chul [FNC Tech., Seoul (Korea, Republic of); Choi, Hoon; Ha, Sang Jun [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)

    2009-10-15

The development project for the domestic design code was launched to support the safety and performance analysis of pressurized light water reactors, and the CAP (Containment Analysis Package) code has been developed for containment safety and performance analysis side by side with SPACE. The CAP code treats three fields (gas, continuous liquid, and dispersed drop) for the assessment of containment-specific phenomena, and is featured by its multidimensional assessment capabilities. The thermal-hydraulics solver has already been developed and is now being tested for stability and soundness. As a next step, interfacial and wall transport models were set up. In order to develop the best model and correlation package for the CAP code, various models currently used in major containment analysis codes (GOTHIC, CONTAIN2.0, and CONTEMPT-LT) have been reviewed. The origins of the selected models used in these codes have also been examined to ensure that the models do not conflict with any proprietary rights. In addition, a literature survey of recent studies has been performed in order to incorporate better models into the CAP code. The models and correlations of SPACE were also reviewed. CAP models and correlations comprise interfacial heat, mass, and momentum transport models and wall heat, mass, and momentum transport models. This paper discusses those transport models in the CAP code.

  10. Model code for energy conservation in new building construction

    Energy Technology Data Exchange (ETDEWEB)

    None

    1977-12-01

In response to the recognized lack of existing consensus standards directed to the conservation of energy in building design and operation, the preparation and publication of such a standard was accomplished with the issuance of ASHRAE Standard 90-75, "Energy Conservation in New Building Design," by the American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., in 1975. This standard addressed itself to recommended practices for energy conservation, using both depletable and non-depletable sources. A model code for energy conservation in building construction has been developed, setting forth the minimum regulations found necessary to mandate such conservation. The code addresses itself to the administration, design criteria, systems elements, controls, service water heating and electrical distribution and use, both for depletable and non-depletable energy sources. The technical provisions of the document are based on ASHRAE 90-75 and it is intended for use by state and local building officials in the implementation of a statewide energy conservation program.

  11. An Efficient SF-ISF Approach for the Slepian-Wolf Source Coding Problem

    Directory of Open Access Journals (Sweden)

    Tu Zhenyu

    2005-01-01

Full Text Available A simple but powerful scheme exploiting the binning concept for asymmetric lossless distributed source coding is proposed. The novelty in the proposed scheme is the introduction of a syndrome former (SF) in the source encoder and an inverse syndrome former (ISF) in the source decoder to efficiently exploit an existing linear channel code without the need to modify the code structure or the decoding strategy. For most channel codes, the construction of SF-ISF pairs is a light task. For parallel and serially concatenated codes, and particularly parallel and serial turbo codes where this appears less obvious, an efficient way of constructing linear-complexity SF-ISF pairs is demonstrated. It is shown that the proposed SF-ISF approach is simple, provably optimal, and generally applicable to any linear channel code. Simulation using conventional and asymmetric turbo codes demonstrates a compression rate that is only 0.06 bit/symbol from the theoretical limit, which is among the best results reported so far.
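    A toy sketch of the syndrome-former idea using a (7,4) Hamming code and brute-force coset decoding (the paper's turbo-code construction is far more elaborate); compression here is 7 source bits down to a 3-bit syndrome:

        import numpy as np
        from itertools import product

        # Parity-check matrix of the (7,4) Hamming code (one common form).
        H = np.array([[1,0,1,0,1,0,1],
                      [0,1,1,0,0,1,1],
                      [0,0,0,1,1,1,1]])

        def syndrome(x):                      # "syndrome former": 7 bits -> 3 bits
            return H.dot(x) % 2

        def decode(s, side_info):
            """Pick the word in the coset with syndrome s that is closest
            (in Hamming distance) to the decoder's side information."""
            best, best_d = None, 8
            for cand in product([0, 1], repeat=7):
                cand = np.array(cand)
                if np.array_equal(syndrome(cand), s):
                    d = int(np.sum(cand != side_info))
                    if d < best_d:
                        best, best_d = cand, d
            return best

        x = np.array([1,0,1,1,0,0,1])         # source word
        y = x.copy(); y[2] ^= 1               # correlated side info (1 bit flipped)
        print(decode(syndrome(x), y))         # recovers x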

  12. Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.

    Science.gov (United States)

    Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile

    2016-01-01

This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code, the evaluation starts by extracting the words that make up its text and continues by building full-text search queries from combinations of these words. The queries are then run against all the ICD-10 codes until a query ranks the code in question highest by relative score. This method identifies the minimum number of words that must be provided in order for the search engine to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript which can even execute in the user's browser, and two popular open-source relational database management systems.
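    A minimal sketch of the evaluation loop described above, with a made-up three-entry corpus and a toy relevance score standing in for a real engine's ranking:

        from itertools import combinations

        # Toy corpus standing in for ICD-10 entries (texts abbreviated).
        codes = {
            "J45": "asthma",
            "J45.0": "predominantly allergic asthma",
            "J45.1": "nonallergic asthma",
        }

        def score(query_words, text):
            words = text.split()
            return sum(w in words for w in query_words) / len(words)

        def min_words_for_match(target):
            """Smallest word combination whose best-scoring entry is `target`."""
            words = codes[target].split()
            for k in range(1, len(words) + 1):
                for combo in combinations(words, k):
                    best = max(codes, key=lambda c: score(combo, codes[c]))
                    if best == target:
                        return combo
            return None

        print(min_words_for_match("J45.1"))   # e.g. ('nonallergic',)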

  13. Visualizing the software system towards identifying the topic from source code using semantic clustering

    Directory of Open Access Journals (Sweden)

    Kanchan Sharma

    2014-03-01

Full Text Available In software re-engineering, domain knowledge is a valuable source of information for developers. Here, we describe how coding standards help with identifying the domain while the source code is being written. Internal comments and meaningful identifier names in source code are the key sources for finding the concepts and domain area of an application. Latent Semantic Indexing (LSI), an information retrieval technique, uses this linguistic information, such as identifier names and comments in source code, to map it to the domain name. Based on the linguistic results from the LSI engine, a clustering technique is used to group source artifacts that use similar vocabulary, giving a way of representing a complex system as simpler components. The approach works at the textual level of the source code, making it language independent. Prior research correlated the semantics with structural information and applied it at different levels of abstraction. Based on the frequency of the domain terms, labels are assigned to the clusters after their discrete characterization using machine learning, and the results are explored visually. Visualization makes concept detection much easier.
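    A minimal sketch of the LSI-plus-clustering pipeline on invented identifier/comment text, using scikit-learn (the paper's actual tooling is not specified here):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.cluster import KMeans

        # Identifier/comment text extracted from four hypothetical source files.
        docs = [
            "account balance deposit withdraw interest",
            "loan interest rate payment schedule",
            "render widget layout pixel canvas",
            "canvas draw pixel color widget",
        ]

        tfidf = TfidfVectorizer().fit_transform(docs)
        lsi = TruncatedSVD(n_components=2).fit_transform(tfidf)  # LSI concept space
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(lsi)
        print(labels)  # e.g. [0 0 1 1]: banking files vs. GUI files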

  14. What Does It Take to Develop a Million Lines of Open Source Code?

    Science.gov (United States)

    Fernandez-Ramil, Juan; Izquierdo-Cortazar, Daniel; Mens, Tom

This article presents a preliminary and exploratory study of the relationship between size, on the one hand, and effort, duration and team size, on the other, for 11 Free/Libre/Open Source Software (FLOSS) projects with current size ranging between 0.6 and 5.3 million lines of code (MLOC). Effort was operationalised based on the number of active committers per month. The extracted data did not fit well an early version of the closed-source cost estimation model COCOMO for proprietary software, overall suggesting that, at least to some extent, FLOSS communities are more productive than closed-source teams. This also motivated the need for FLOSS-specific effort models. As a first approximation, we evaluated 16 linear regression models involving different pairs of attributes. One of our experiments was to calculate the net size, that is, to remove any suspiciously large outliers or jumps in the growth trends. The best model we found involved effort against net size, accounting for 79 percent of the variance. This model was based on data excluding a possible outlier (Eclipse), the largest project in our sample. This suggests that different effort models may be needed for certain categories of FLOSS projects. Incidentally, for each of the 11 individual FLOSS projects we were able to model the net size trends with very high accuracy (R² ≥ 0.98). Of the 11 projects, 3 have grown superlinearly, 5 linearly and 3 sublinearly, suggesting that in the majority of cases accumulated complexity either is well controlled or does not constitute a growth-constraining factor.
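    For illustration, fitting the kind of linear effort-versus-net-size model described above on invented data (the paper's project measurements are not reproduced):

        import numpy as np

        # Hypothetical (size in MLOC, effort in active committer-months) pairs.
        size = np.array([0.6, 1.1, 1.8, 2.4, 3.0, 4.2, 5.3])
        effort = np.array([40, 65, 98, 130, 160, 220, 270])

        slope, intercept = np.polyfit(size, effort, 1)   # effort ~ a*size + b
        pred = slope * size + intercept
        r2 = 1 - np.sum((effort - pred)**2) / np.sum((effort - effort.mean())**2)
        print(f"effort = {slope:.1f}*MLOC + {intercept:.1f}, R^2 = {r2:.3f}")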

  15. Code Generation for Protocols from CPN models Annotated with Pragmatics

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael; Kindler, Ekkart

Model-driven engineering (MDE) provides a foundation for automatically generating software based on models. Models allow software designs to be specified focusing on the problem domain and abstracting from the details of underlying implementation platforms. When applied in the context of formal modelling languages, MDE further has the advantage that models are amenable to model checking, which allows key behavioural properties of the software design to be verified. The combination of formally verified models and automated code generation contributes to a high degree of assurance that the resulting ... of the same model and sufficiently detailed to serve as a basis for automated code generation when annotated with code generation pragmatics. Pragmatics are syntactical annotations designed to make the CPN models descriptive and to address the problem that models with enough details for generating code from...

  16. Distributed Remote Vector Gaussian Source Coding for Wireless Acoustic Sensor Networks

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt;

    2014-01-01

In this paper, we consider the problem of remote vector Gaussian source coding for a wireless acoustic sensor network. Each node receives messages from multiple nodes in the network and decodes these messages using its own measurement of the sound field as side information. The node's measurement and the estimates of the source resulting from decoding the received messages are then jointly encoded and transmitted to a neighboring node in the network. We show that for this distributed source coding scenario, one can encode a so-called conditional sufficient statistic of the sources instead of jointly encoding multiple sources. We focus on the case where node measurements are in the form of noisy linearly mixed combinations of the sources and the acoustic channel mixing matrices are invertible. For this problem, we derive the rate-distortion function for vector Gaussian sources and under covariance...

  17. Conservation of concrete structures in fib model code 2010

    NARCIS (Netherlands)

    Matthews, S.L.; Ueda, T.; Bigaj-van Vliet, A.

    2012-01-01

    Chapter 9: Conservation of concrete structures forms part of fib Model Code 2010, the first draft of which was published for comment as fib Bulletins 55 and 56 (fib 2010). Numerous comments were received and considered by fib Special Activity Group 5 responsible for the preparation of fib Model Code

  19. Comparing Fine-Grained Source Code Changes And Code Churn For Bug Prediction

    NARCIS (Netherlands)

    Giger, E.; Pinzger, M.; Gall, H.C.

    2011-01-01

    A significant amount of research effort has been dedicated to learning prediction models that allow project managers to efficiently allocate resources to those parts of a software system that most likely are bug-prone and therefore critical. Prominent measures for building bug prediction models are

  20. Distributed coding of multiview sparse sources with joint recovery [arXiv

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Deligiannis, Nikos; Forchhammer, Søren;

    2016-01-01

    coding of the sparse sources with a new joint recovery algorithm that incorporates multiple side information signals, where prior knowledge (low quality) of all the sparse sources is initially sent to exploit their correlations. Experimental evaluation using the histograms of shift-invariant feature...

  1. M3: An Open Model for Measuring Code Artifacts

    NARCIS (Netherlands)

    Izmaylova, A.; Klint, P.; Shahi, A.; Vinju, J.J.

    2013-01-01

In the context of the EU FP7 project "OSSMETER" we are developing an infrastructure for measuring source code. The goal of OSSMETER is to obtain insight in the quality of open-source projects from all possible perspectives, including product, process and community. This is a "white paper" on M3,

  2. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
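    PEST++ itself is driven by control files and external model runs; as a minimal stand-alone illustration of the underlying nonlinear least-squares parameter estimation, here is a sketch with an invented exponential model:

        import numpy as np
        from scipy.optimize import least_squares

        # Stand-in for a model run: tools of this kind only need model
        # outputs as a function of parameters, here y = a*exp(-b*t).
        t_obs = np.linspace(0.0, 4.0, 9)
        y_obs = 3.0 * np.exp(-0.7 * t_obs) \
                + np.random.default_rng(1).normal(0, 0.02, 9)

        def residuals(p):
            a, b = p
            return a * np.exp(-b * t_obs) - y_obs

        fit = least_squares(residuals, x0=[1.0, 1.0])  # Gauss-Newton/LM style
        print(fit.x)  # close to (3.0, 0.7)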

  4. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Marc Fossorier

    2007-01-01

Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M=2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.
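    A small simulation of the uncoded baseline discussed above (M-PSK over AWGN with nearest-phase detection); the parameters and the SER figure in the final comment are illustrative only:

        import numpy as np

        rng = np.random.default_rng(0)

        def mpsk_awgn(bits_per_symbol, snr_db, n_sym=100_000):
            """Uncoded M-PSK (M = 2^k) over AWGN: modulate, add noise,
            detect by nearest constellation phase, count symbol errors."""
            M = 2 ** bits_per_symbol
            sym = rng.integers(0, M, n_sym)
            tx = np.exp(2j * np.pi * sym / M)          # constant-envelope points
            sigma = np.sqrt(0.5 / 10 ** (snr_db / 10)) # unit symbol energy
            noise = rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym)
            rx = tx + sigma * noise
            detected = np.round(np.angle(rx) / (2 * np.pi / M)) % M
            return np.mean(detected != sym)            # symbol error rate

        print(mpsk_awgn(2, 10))  # QPSK at 10 dB SNR: SER around 1.5e-3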

  5. Optimum source concepts for optical intersatellite links with RZ coding

    Science.gov (United States)

    Strasser, Martin M.; Winzer, Peter J.; Leeb, Walter R.

    2001-06-01

We discuss several potential methods of generating optical RZ data signals, distinguishing between direct RZ modulation and modulation of a primary pulse train generated either by a mode-locked laser, by sinusoidally driving an external modulator, or by gain-switching a laser diode. We analyze the properties of each method with regard to the most critical aspects for space-borne laser communication systems, such as repetition rate, duty cycle, extinction ratio, frequency chirp, timing jitter, robustness, complexity, commercial availability, and lifetime. Most mode-locked lasers are highly sensitive to ambient perturbations, necessitating accurate temperature control and mechanical stabilization. Also, they typically provide pulses with less than 10% duty cycle, which can result in a decreased sensitivity of optically preamplified receivers. Directly modulated semiconductor lasers are compact and robust but suffer from large frequency chirp, which deteriorates the receiver sensitivity. One reliable RZ source is a conventional DFB semiconductor laser with two intensity modulators, one for pulse generation and one for data modulation. Both Mach-Zehnder modulators co-packaged with a laser diode and monolithically integrated electroabsorption modulators should be considered. These modulators can provide almost transform-limited pulses at high repetition rates and with duty cycles of about 30%. Their robustness and lifetime are highly promising.

  6. Computational model of Amersham I-125 source model 6711 and Prosper Pd-103 source model MED3633 using MCNP

    Energy Technology Data Exchange (ETDEWEB)

Menezes, Artur F.; Reis Junior, Juraci P.; Silva, Ademir X., E-mail: ademir@con.ufrj.br [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear; Rosa, Luiz A.R. da, E-mail: lrosa@ird.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Facure, Alessandro [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil); Cardoso, Simone C., E-mail: Simone@if.ufrj.br [Universidade Federal do Rio de Janeiro (IF/UFRJ), RJ (Brazil). Inst. de Fisica. Dept. de Fisica Nuclear

    2011-07-01

Brachytherapy is used in cancer treatment at short distances through the use of small encapsulated sources of ionizing radiation. In such treatment, a radiation source is positioned directly into or near the target volume to be treated. In this study the Monte Carlo based MCNP code was used to model and simulate the I-125 Amersham Health source model 6711 and the Pd-103 Prospera source model MED3633 in order to obtain the dosimetric parameter known as the dose rate constant ({Lambda}). The source geometries were modeled and implemented in the MCNPX code. The dose rate constant is an important parameter in treatment planning for prostate LDR brachytherapy. This study was based on the American Association of Physicists in Medicine (AAPM) recommendations produced by its Task Group 43. The results obtained were 0.941 and 0.65 for the dose rate constants of the I-125 and Pd-103 sources, respectively. They present good agreement with literature values based on different Monte Carlo codes. (author)
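    For context, the record quotes {Lambda} without defining it; in the standard AAPM TG-43 formalism (stated here from the formalism itself, not from the record) the dose rate constant is the dose rate at the reference point per unit air-kerma strength:

        \Lambda = \frac{\dot{D}(r_0, \theta_0)}{S_K}, \qquad r_0 = 1\,\mathrm{cm}, \quad \theta_0 = \pi/2,

    where S_K is the air-kerma strength, with {Lambda} conventionally reported in cGy h^{-1} U^{-1}, the units in which the quoted values of 0.941 and 0.65 are customarily given.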

  7. Design of network-coding based multi-edge type LDPC codes for multi-source relaying systems

    CERN Document Server

    Li, Jun; Malaney, Robert; Yuan, Jinhong

    2009-01-01

In this paper we investigate a multi-source LDPC scheme for a Gaussian relay system, where M sources communicate with the destination with the help of a single relay (M-1-1 system). Since various distributed LDPC schemes for the cooperative single-source system, e.g. bilayer LDPC and bilayer multi-edge type LDPC (BMET-LDPC), have been designed to approach the Shannon limit, these schemes can be applied to the M-1-1 system by the relay serving each source in a round-robin fashion. However, such a direct application is not optimal due to the lack of potential joint processing gain. In this paper, we propose a network-coded multi-edge type LDPC (NCMET-LDPC) scheme for the multi-source scenario. Through an EXIT analysis, we conclude that the NCMET-LDPC scheme achieves higher extrinsic mutual information relative to a separate application of BMET-LDPC to each source. Our new NCMET-LDPC scheme thus achieves a higher threshold relative to existing schemes.

  8. Code Generation for Embedded Software for Modeling Clear Box Structures

    Directory of Open Access Journals (Sweden)

    V. Chandra Prakash

    2011-09-01

Full Text Available Cleanroom Software Engineering (CRSE) recommends that the code related to application systems be generated either manually or through code generation models, or be represented as a hierarchy of clear box structures. CRSE has even advocated that the code be developed using state models that model the internal behavior of the systems. However, no framework has been recommended by any author for designing clear boxes using code generation methods. Code generation is one of the important quality issues addressed in Cleanroom Software Engineering. It has been found that CRSE can be used for life cycle management of embedded systems when hardware-software co-design is built into CRSE by adding suitable models and redefining the process accordingly. The design of embedded systems involves code generation for both the hardware and the embedded software. In this paper, a framework is proposed for generating the embedded software. The method is unique in that it considers various aspects of code generation, including code segments, code functions, classes, globalization, variable propagation, etc. The proposed framework has been applied to a pilot project and the experimental results are presented.

  9. Cosmogenic photons strongly constrain UHECR source models

    CERN Document Server

    van Vliet, Arjen

    2016-01-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  10. Cosmogenic photons strongly constrain UHECR source models

    Science.gov (United States)

    van Vliet, Arjen

    2017-03-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  11. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

Full Text Available With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  12. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for the distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which makes the proposed algorithm achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced for the proposed algorithm to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of the state-of-the-art compression algorithms for hyperspectral images.
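    A toy sketch of the two ingredients named above, scalar quantization plus a regression-based predictor for side information, on invented band data (the paper's multilinear model and DSC coder are not reproduced):

        import numpy as np

        rng = np.random.default_rng(0)

        # Two hypothetical, strongly correlated spectral bands.
        band_prev = rng.normal(100, 10, 1000)
        band_cur = 0.9 * band_prev + 5 + rng.normal(0, 1, 1000)

        q = 4                                   # quantization step
        q_cur = np.round(band_cur / q).astype(int)

        # Decoder side: predict the current band from the previous one
        # (a 1-D stand-in for the paper's multilinear regression model).
        a, b = np.polyfit(band_prev, band_cur, 1)
        side_info = np.round((a * band_prev + b) / q).astype(int)

        residual = q_cur - side_info            # what DSC must effectively convey
        print(np.mean(residual == 0))           # mostly 0: side info is accurate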

  13. Modeling of the CTEx subcritical unit using MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Avelino [Divisao de Defesa Quimica, Biologica e Nuclear. Centro Tecnologico do Exercito - CTEx, Guaratiba, Rio de Janeiro, RJ (Brazil); Silva, Ademir X. da, E-mail: ademir@con.ufrj.br [Programa de Engenharia Nuclear. Universidade Federal do Rio de Janeiro - UFRJ Centro de Tecnologia, Rio de Janeiro, RJ (Brazil); Rebello, Wilson F. [Secao de Engenharia Nuclear - SE/7 Instituto Militar de Engenharia - IME Rio de Janeiro, RJ (Brazil); Cunha, Victor L. Lassance [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2011-07-01

The present work aims at simulating the subcritical unit of the Army Technology Center (CTEx), namely the ARGUS pile (a subcritical uranium-graphite arrangement), using the computational code MCNPX. Once such modeling is finished, it can be used in k-effective calculations for systems using natural uranium as fuel, for instance. ARGUS is a subcritical assembly which uses reactor-grade graphite as the moderator of fission neutrons and metallic uranium fuel rods with aluminum cladding. The pile is driven by an Am-Be neutron source. In order to achieve a higher value of k{sub eff}, a higher concentration of U235 can be proposed, provided k{sub eff} safely remains below one. (author)

  14. Automatic code generation from the OMT-based dynamic model

    Energy Technology Data Exchange (ETDEWEB)

    Ali, J.; Tanaka, J.

    1996-12-31

    The OMT object-oriented software development methodology suggests creating three models of the system, i.e., object model, dynamic model and functional model. We have developed a system that automatically generates implementation code from the dynamic model. The system first represents the dynamic model as a table and then generates executable Java language code from it. We used inheritance for super-substate relationships. We considered that transitions relate to states in a state diagram exactly as operations relate to classes in an object diagram. In the generated code, each state in the state diagram becomes a class and each event on a state becomes an operation on the corresponding class. The system is implemented and can generate executable code for any state diagram. This makes the role of the dynamic model more significant and the job of designers even simpler.
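    A minimal sketch of the mapping described above (state becomes class, event becomes operation), emitting Python text rather than the paper's Java; the transition table is invented:

        # Hypothetical state-transition table: (state, event) -> next state.
        TRANSITIONS = {
            ("Idle", "start"): "Running",
            ("Running", "pause"): "Paused",
            ("Paused", "start"): "Running",
            ("Running", "stop"): "Idle",
        }

        TEMPLATE = """class {state}(State):
        {methods}"""

        def generate(transitions):
            states = {s for s, _ in transitions} | set(transitions.values())
            classes = []
            for state in sorted(states):
                methods = [
                    f"    def {event}(self):\n        return {nxt}()"
                    for (s, event), nxt in sorted(transitions.items()) if s == state
                ]
                body = "\n".join(methods) or "    pass"
                classes.append(TEMPLATE.format(state=state, methods=body))
            return "class State:\n    pass\n\n" + "\n\n".join(classes)

        print(generate(TRANSITIONS))  # one class per state, one method per event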

  15. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    Science.gov (United States)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data

  16. GYRE: An open-source stellar oscillation code based on a new Magnus Multiple Shooting Scheme

    CERN Document Server

    Townsend, R H D

    2013-01-01

    We present a new oscillation code, GYRE, which solves the stellar pulsation equations (both adiabatic and non-adiabatic) using a novel Magnus Multiple Shooting numerical scheme devised to overcome certain weaknesses of the usual relaxation and shooting schemes appearing in the literature. The code is accurate (up to 6th order in the number of grid points), robust, efficiently makes use of multiple processor cores and/or nodes, and is freely available in source form for use and distribution. We verify the code against analytic solutions and results from other oscillation codes, in all cases finding good agreement. Then, we use the code to explore how the asteroseismic observables of a 1.5 M_sun star change as it evolves through the red-giant bump.

  17. RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-08-01

The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients, such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal-hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes.

  18. Source model for blasting vibration

    Institute of Scientific and Technical Information of China (English)

DING Hua (丁桦); ZHENG Zhemin (郑哲敏)

    2002-01-01

By analyzing and comparing the experimental data, the point source moment theory and the cavity theory, it is concluded that the vibration signals away from the blasting explosive come mainly from the natural vibrations of the geological structures near the broken blasting area. The source impulses are spread not mainly by the inelastic properties of the medium in the propagation path (such as media damping, as believed by many researchers), but by this structure. An equivalent source model for the blasting vibrations of a fragmenting blast is then proposed, which shows the important role of the impulse of the source's time function under certain conditions. For the purpose of numerical simulation, the model is realized in FEM. The finite element results are in good agreement with the experimental data.

  19. ANEMOS: A computer code to estimate air concentrations and ground deposition rates for atmospheric nuclides emitted from multiple operating sources

    Energy Technology Data Exchange (ETDEWEB)

    Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.

    1986-11-01

    This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C.
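    As a point of reference, the standard straight-line Gaussian plume formula that the code is based on, in a minimal Python sketch (the deposition, depletion and multi-source bookkeeping of ANEMOS are not reproduced):

        import numpy as np

        def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
            """Ground-reflected Gaussian plume concentration (g/m^3) for a
            point source. Q: emission rate (g/s), u: wind speed (m/s),
            H: effective release height (m); sigma_y, sigma_z are the
            dispersion widths evaluated at the downwind distance."""
            lateral = np.exp(-0.5 * (y / sigma_y) ** 2)
            vertical = (np.exp(-0.5 * ((z - H) / sigma_z) ** 2)
                        + np.exp(-0.5 * ((z + H) / sigma_z) ** 2))
            return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # 1 g/s release at 50 m height, sampled at ground level on the plume axis
        print(gaussian_plume(1.0, 5.0, 0.0, 0.0, 50.0, 80.0, 40.0))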

  20. NEW ITERATIVE SUPER-TRELLIS DECODING WITH SOURCE A PRIORI INFORMATION FOR VLCS WITH TURBO CODES

    Institute of Scientific and Technical Information of China (English)

    Liu Jianjun; Tu Guofang; Wu Weiren

    2007-01-01

A novel Joint Source and Channel Decoding (JSCD) scheme for Variable Length Codes (VLCs) concatenated with turbo codes, utilizing a new super-trellis decoding algorithm, is presented in this letter. The basic idea of our decoding algorithm is that source a priori information, in the form of bit transition probabilities corresponding to the VLC tree, can be derived directly from sub-state transitions in the new composite-state super-trellis. A Maximum Likelihood (ML) decoding algorithm for VLC sequence estimation based on the proposed super-trellis is also described. Simulation results show that the new iterative decoding scheme obtains a clear coding gain, especially for Reversible Variable Length Codes (RVLCs), when compared with classical separate turbo decoding and with previous joint decoding that does not consider the source statistics.

  1. Remodularizing Java Programs for Improved Locality of Feature Implementations in Source Code

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2011-01-01

Explicit traceability between features and source code is known to help programmers understand and modify programs during maintenance tasks. However, the complex relations between features and their implementations are not evident from the source code of object-oriented Java programs. Consequently, the implementations of individual features are difficult to locate, comprehend, and modify in isolation. In this paper, we present a novel remodularization approach that improves the representation of features in the source code of Java programs. Both forward and reverse restructurings are supported through on-demand bidirectional restructuring between feature-oriented and object-oriented decompositions. The approach includes a feature location phase based on tracing program execution, a feature representation phase that reallocates classes into a new package structure based on single...

  2. Source reconstruction for neutron coded-aperture imaging: A sparse method.

    Science.gov (United States)

    Wang, Dongming; Hu, Huasi; Zhang, Fengna; Jia, Qinggang

    2017-08-01

Neutron coded-aperture imaging has been developed as an important diagnostic for inertial fusion studies in recent decades. It is used to measure the distribution of neutrons produced in deuterium-tritium plasma. Source reconstruction is an essential part of coded-aperture imaging. In this paper, we apply a sparse reconstruction method to neutron source reconstruction. This method takes advantage of the sparsity of the source image. Monte Carlo neutron transport simulations were performed to obtain the system response. An interpolation method was used when obtaining the spatially variant point spread functions at each point of the source, in order to reduce the number of point spread functions that need to be calculated by the Monte Carlo method. Source reconstructions from simulated images show that the sparse reconstruction method can achieve a higher signal-to-noise ratio and less distortion at a relatively high statistical noise level.
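    A minimal sketch of sparse reconstruction for a linear imaging model, using ISTA for an l1-regularized least-squares problem (the paper's system response and solver details are not reproduced; the matrix here is random):

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy system response A (stand-in for the Monte Carlo PSF matrix)
        # and a sparse "source" x with only a few emitting pixels.
        m, n, k = 80, 120, 5
        A = rng.normal(size=(m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.uniform(1, 3, k)
        y = A @ x_true + rng.normal(0, 0.01, m)     # noisy coded image

        # ISTA: iterative soft-thresholding for min ||Ax - y||^2 + lam*||x||_1
        lam, step = 0.05, 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(n)
        for _ in range(500):
            g = x - step * A.T @ (A @ x - y)
            x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

        print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # small error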

  3. Plug-in to Eclipse environment for VHDL source code editor with advanced formatting of text

    Science.gov (United States)

    Niton, B.; Pozniak, K. T.; Romaniuk, R. S.

    2011-10-01

The paper describes the idea and realization of a smart plug-in for the Eclipse software environment. The plug-in is intended for editing VHDL source code. It considerably extends the capabilities of the VEditor program, which is distributed under an open license. Results of the formatting procedures, performed on chosen examples of VHDL source code, are presented. The work is part of a larger project on building a smart programming environment for the design of advanced photonic and electronic systems. Examples of such systems are quoted in the references.

  4. Analysis of slender structures using mechanics of structure genome and open source codes

    Science.gov (United States)

    Liu, Xin

A structure genome (SG) is the smallest mathematical building block that contains all the constitutive information of a structure. The SG is a bridge connecting material microstructures and macroscopic structural analysis, so mechanics of structure genome (MSG) is an approach that unifies micromechanics and structural mechanics. There are three types of SG: 1D, 2D and 3D, depending on the heterogeneity of the original structure. Once the SG of a structure is identified, the effective properties for the corresponding structures, including beams, plates/shells, or 3D solids, can be obtained by homogenization analysis of the SG. These effective properties can be used for the macroscopic structural analysis. After the macroscopic structural analysis, the global structural responses can be used as input parameters to recover the local displacements/stresses/strains using dehomogenization analysis of the SG. The mechanics of structure genome is implemented in a general-purpose composites software called SwiftComp(TM). Slender structures are widely used in various industries, such as civil, aerospace and structural engineering. Using MSG, a new structural analysis approach is applied to study slender structures. In order to let users easily access the software using this approach, several open source codes have been modified and upgraded. The graphical user interface is built on Gmsh, and CalculiX is chosen as the structural solver. New beam elements have been added to CalculiX to make better use of the capabilities of SwiftComp(TM). Both linear and quadratic beam elements for the Euler-Bernoulli and Timoshenko beam models are derived. Several numerical models show that the results using MSG and its companion code SwiftComp(TM) agree well with 3D finite element analysis. The MSG approach greatly simplifies the modeling process while maintaining accuracy. Therefore, the MSG approach provides an alternative way to analyze structures, especially when the structures are

5. SOURCES 4C: a code for calculating ({alpha},n), spontaneous fission, and delayed neutron sources and spectra.

    Energy Technology Data Exchange (ETDEWEB)

Wilson, W. B. (William B.); Perry, R. T. (Robert T.); Shores, E. F. (Erik F.); Charlton, W. S. (William S.); Parish, Theodore A.; Estes, G. P. (Guy P.); Brown, T. H. (Thomas H.); Arthur, Edward D. (Edward Dana); Bozoian, Michael; England, T. R.; Madland, D. G.; Stewart, J. E. (James E.)

    2002-01-01

SOURCES 4C is a computer code that determines neutron production rates and spectra from ({alpha},n) reactions, spontaneous fission, and delayed neutron emission due to radionuclide decay. The code is capable of calculating ({alpha},n) source rates and spectra in four types of problems: homogeneous media (i.e., an intimate mixture of {alpha}-emitting source material and low-Z target material), two-region interface problems (i.e., a slab of {alpha}-emitting source material in contact with a slab of low-Z target material), three-region interface problems (i.e., a thin slab of low-Z target material sandwiched between {alpha}-emitting source material and low-Z target material), and ({alpha},n) reactions induced by a monoenergetic beam of {alpha}-particles incident on a slab of target material. Spontaneous fission spectra are calculated with evaluated half-life, spontaneous fission branching, and Watt spectrum parameters for 44 actinides. The ({alpha},n) spectra are calculated using an assumed isotropic angular distribution in the center-of-mass system with a library of 107 nuclide decay {alpha}-particle spectra, 24 sets of measured and/or evaluated ({alpha},n) cross sections and product nuclide level branching fractions, and functional {alpha}-particle stopping cross sections for Z < 106. The delayed neutron spectra are taken from an evaluated library of 105 precursors. The code provides the magnitude and spectra, if desired, of the resultant neutron source in addition to an analysis of the contributions by each nuclide in the problem. LASTCALL, a graphical user interface, is included in the code package.

  6. Non-Uniform Contrast and Noise Correction for Coded Source Neutron Imaging

    Energy Technology Data Exchange (ETDEWEB)

Santos-Villalobos, Hector J [ORNL]; Bingham, Philip R [ORNL]

    2012-01-01

Since the first application of neutron radiography in the 1930s, the field of neutron radiography has matured enough to develop several applications. However, advances in the technology are far from concluded. In general, the resolution of scintillator-based detection systems is limited to the 10 μm range, and the relatively low neutron count rate of neutron sources compared to other illumination sources restricts time-resolved measurement. One path toward improved resolution is the use of magnification; however, to date neutron optics are inefficient, expensive, and difficult to develop. There is a clear demand for cost-effective scintillator-based neutron imaging systems that achieve resolutions of 1 μm or less. Such an imaging system would dramatically extend the applications of neutron imaging. For this purpose a coded source imaging system is under development. The current challenge is to reduce artifacts in the reconstructed coded source images. Artifacts are generated by non-uniform illumination of the source, gamma rays, dark current at the imaging sensor, and system noise from the reconstruction kernel. In this paper, we describe how to pre-process the coded signal to reduce noise and non-uniform illumination, and how to reconstruct the coded signal with three reconstruction methods: correlation, maximum likelihood estimation, and the algebraic reconstruction technique. We illustrate our results with experimental examples.
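    Of the three reconstruction methods named above, maximum likelihood estimation for Poisson counts is commonly implemented as MLEM; a toy sketch with an invented system matrix (not the paper's forward model):

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy system matrix A mapping source pixels to detector pixels
        # (a stand-in for the coded-aperture forward model).
        m, n = 64, 32
        A = rng.uniform(0, 1, (m, n)); A /= A.sum(axis=0)
        src = rng.uniform(0, 5, n)
        counts = rng.poisson(A @ src * 100) / 100.0   # noisy detector image

        # MLEM: multiplicative updates keeping the estimate non-negative.
        est = np.ones(n)
        for _ in range(200):
            ratio = counts / np.maximum(A @ est, 1e-12)
            est *= A.T @ ratio / A.sum(axis=0)

        print(np.corrcoef(est, src)[0, 1])            # close to 1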

  7. Shared and Distributed Memory Parallel Security Analysis of Large-Scale Source Code and Binary Applications

    Energy Technology Data Exchange (ETDEWEB)

    Quinlan, D; Barany, G; Panas, T

    2007-08-30

    Many forms of security analysis on large scale applications can be substantially automated but the size and complexity can exceed the time and memory available on conventional desktop computers. Most commercial tools are understandably focused on such conventional desktop resources. This paper presents research work on the parallelization of security analysis of both source code and binaries within our Compass tool, which is implemented using the ROSE source-to-source open compiler infrastructure. We have focused on both shared and distributed memory parallelization of the evaluation of rules implemented as checkers for a wide range of secure programming rules, applicable to desktop machines, networks of workstations and dedicated clusters. While Compass as a tool focuses on source code analysis and reports violations of an extensible set of rules, the binary analysis work uses the exact same infrastructure but is less well developed into an equivalent final tool.

  8. WASTK: A Weighted Abstract Syntax Tree Kernel Method for Source Code Plagiarism Detection

    Directory of Open Access Journals (Sweden)

    Deqiang Fu

    2017-01-01

    Full Text Available In this paper, we introduce a source code plagiarism detection method, named WASTK (Weighted Abstract Syntax Tree Kernel), for computer science education. Different from other plagiarism detection methods, WASTK takes aspects other than the raw similarity between programs into account. WASTK first transforms the source code of a program into an abstract syntax tree and then obtains the similarity by calculating the tree kernel of the two abstract syntax trees. To avoid misjudgment caused by trivial code snippets or frameworks given by instructors, an idea similar to TF-IDF (Term Frequency-Inverse Document Frequency) in the field of information retrieval is applied. Each node in an abstract syntax tree is assigned a weight by TF-IDF. WASTK is evaluated on different datasets and, as a result, performs much better than other popular methods like Sim and JPlag.
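
    The TF-IDF weighting idea can be sketched in a few lines of Python using the standard ast module. This illustrates the weighting scheme only, not the WASTK kernel itself, which compares weighted subtrees rather than bare node-type counts:

      import ast
      import math
      from collections import Counter

      def node_type_counts(source):
          """Count AST node types in a piece of source code."""
          return Counter(type(n).__name__ for n in ast.walk(ast.parse(source)))

      programs = [
          "def add(a, b):\n    return a + b",
          "def mul(a, b):\n    return a * b",
          "for i in range(10):\n    print(i)",
      ]
      counts = [node_type_counts(p) for p in programs]

      def tfidf(counts_i, all_counts):
          """Weight each node type by term frequency x inverse document
          frequency, so node types shared by every submission (framework
          boilerplate) receive low weight."""
          n_docs = len(all_counts)
          total = sum(counts_i.values())
          weights = {}
          for node_type, c in counts_i.items():
              df = sum(1 for d in all_counts if node_type in d)
              weights[node_type] = (c / total) * math.log((1 + n_docs) / (1 + df))
          return weights

      print(tfidf(counts[0], counts))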

  9. VizieR Online Data Catalog: Transiting planets search Matlab/Octave source code (Ofir+, 2014)

    Science.gov (United States)

    Ofir, A.

    2014-01-01

    The Matlab/Octave source code for Optimal BLS is made available here. Detailed descriptions of all inputs and outputs are given by comment lines in the file. Note: Octave does not currently support parallel for loops ("parfor"). Octave users therefore need to change the "parfor" command (line 217 of OptimalBLS.m) to "for". (7 data files).

  10. Validation of an electroseismic and seismoelectric modeling code, for layered earth models, by the explicit homogeneous space solutions

    NARCIS (Netherlands)

    Grobbe, N.; Slob, E.C.

    2013-01-01

    We have developed an analytically based, energy flux-normalized numerical modeling code (ESSEMOD), capable of modeling the wave propagation of all existing ElectroSeismic and SeismoElectric source-receiver combinations in horizontally layered configurations. We compare the results of several of these

  11. A Power Efficient Sensing/Communication Scheme: Joint Source-Channel-Network Coding by Using Compressive Sensing

    CERN Document Server

    Feizi, Soheil

    2011-01-01

    We propose a joint source-channel-network coding scheme, based on compressive sensing principles, for wireless networks with AWGN channels (that may include multiple access and broadcast), with sources exhibiting temporal and spatial dependencies. Our goal is to provide a reconstruction of sources within an allowed distortion level at each receiver. We perform joint source-channel coding at each source by randomly projecting source values to a lower dimensional space. We consider sources that satisfy the restricted eigenvalue (RE) condition as well as more general sources for which the randomness of the network allows a mapping to lower dimensional spaces. Our approach relies on using analog random linear network coding. The receiver uses compressive sensing decoders to reconstruct sources. Our key insight is that compressive sensing and analog network coding both preserve the source characteristics required for compressive sensing decoding.
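
    A minimal numpy sketch of the basic pipeline, with a synthetic sparse source, a random-projection encoder, an AWGN channel, and an iterative soft-thresholding (ISTA) solver standing in for the compressive sensing decoder; dimensions and noise level are illustrative:

      import numpy as np

      rng = np.random.default_rng(1)
      n, m, k = 200, 60, 5            # source length, projections, sparsity

      # Sparse source vector (k nonzero entries).
      x = np.zeros(n)
      x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

      # Joint source-channel encoding: random linear projection to m < n
      # dimensions, followed by an AWGN channel.
      A = rng.normal(0, 1 / np.sqrt(m), (m, n))
      y = A @ x + rng.normal(0, 0.01, m)

      def ista(A, y, lam=0.01, iters=500):
          """Iterative soft thresholding for the LASSO problem."""
          L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
          z = np.zeros(A.shape[1])
          for _ in range(iters):
              g = z - (A.T @ (A @ z - y)) / L
              z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
          return z

      x_hat = ista(A, y)
      print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))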

  12. MODEL OF LASER-TIG HYBRID WELDING HEAT SOURCE

    Institute of Scientific and Technical Information of China (English)

    Chen Yanbin; Li Liqun; Feng Xiaosong; Fang Junfei

    2004-01-01

    The welding mechanism of the laser-TIG hybrid welding process is analyzed. With the variation of arc current, the welding process is divided into two patterns: deep-penetration welding and heat-conductive welding. A heat flow model of hybrid welding is presented. For deep-penetration welding, the heat source includes a surface heat flux and a volume heat flux. The heat source of heat-conductive welding is composed of two Gaussian-distributed surface heat sources. With this heat source model, a temperature field is calculated. The finite element code MARC is employed for this purpose. The calculation results show a good agreement with the experimental data.
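
    The Gaussian surface heat source of the conduction mode is straightforward to write down; the sketch below superposes two such distributions for the laser and the arc. All powers and radii are placeholder values, not the parameters fitted in the paper:

      import numpy as np

      def gaussian_surface_flux(x, y, q_total, sigma, x0=0.0, y0=0.0):
          """Gaussian-distributed surface heat flux (W/m^2) carrying total
          power q_total (W) with characteristic radius sigma (m)."""
          r2 = (x - x0) ** 2 + (y - y0) ** 2
          return q_total / (2 * np.pi * sigma ** 2) * np.exp(-r2 / (2 * sigma ** 2))

      # Conduction-mode hybrid source: superpose the laser and arc Gaussians.
      x, y = np.meshgrid(np.linspace(-5e-3, 5e-3, 101),
                         np.linspace(-5e-3, 5e-3, 101))
      q = (gaussian_surface_flux(x, y, q_total=800.0,  sigma=0.3e-3)    # laser
           + gaussian_surface_flux(x, y, q_total=1500.0, sigma=2.0e-3)) # TIG arc
      print("peak flux (W/m^2):", q.max())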

  13. A unified model of the standard genetic code.

    Science.gov (United States)

    José, Marco V; Zamudio, Gabriel S; Morgado, Eberto R

    2017-03-01

    The Rodin-Ohno (RO) and the Delarue models divide the table of the genetic code into two classes of aminoacyl-tRNA synthetases (aaRSs I and II) with recognition from the minor or major groove sides of the tRNA acceptor stem, respectively. These models are asymmetric but they are biologically meaningful. On the other hand, the standard genetic code (SGC) can be derived from the primeval RNY code (R stands for purines, Y for pyrimidines and N any of them). In this work, the RO-model is derived by means of group actions, namely, symmetries represented by automorphisms, assuming that the SGC originated from a primeval RNY code. It turns out that the RO-model is symmetric in a six-dimensional (6D) hypercube. Conversely, using the same automorphisms, we show that the RO-model can lead to the SGC. In addition, the asymmetric Delarue model becomes symmetric by means of quotient group operations. We formulate isometric functions that convert the class aaRS I into the class aaRS II and vice versa. We show that the four polar requirement categories display a symmetrical arrangement in our 6D hypercube. Altogether these results cannot be attained, neither in two nor in three dimensions. We discuss the present unified 6D algebraic model, which is compatible with both the SGC (based upon the primeval RNY code) and the RO-model.

  14. Fusion safety codes International modeling with MELCOR and ATHENA- INTRA

    CERN Document Server

    Marshall, T; Topilski, L; Merrill, B

    2002-01-01

    For a number of years, the world fusion safety community has been involved in benchmarking their safety analysis codes against experimental data to support regulatory approval of a next step fusion device. This paper discusses the benchmarking of two prominent fusion safety thermal-hydraulic computer codes. The MELCOR code was developed in the US for fission severe accident safety analyses and has been modified for fusion safety analyses. The ATHENA code is a multifluid version of the US-developed RELAP5 code that is also widely used for fusion safety analyses. The ENEA Fusion Division uses ATHENA in conjunction with the INTRA code for its safety analyses. The INTRA code was developed in Germany and predicts containment building pressures, temperatures and fluid flow. ENEA employs the French-developed ISAS system to couple ATHENA and INTRA. This paper provides a brief introduction of the MELCOR and ATHENA-INTRA codes and presents their modeling results for the following breaches of a water cooling line into the...

  15. RHOCUBE: 3D density distributions modeling code

    Science.gov (United States)

    Nikutta, Robert; Agliozzo, Claudia

    2016-11-01

    RHOCUBE models 3D density distributions on a discrete Cartesian grid and their integrated 2D maps. It can be used for a range of applications, including modeling the electron number density in LBV shells and computing the emission measure. The RHOCUBE Python package provides several 3D density distributions, including a powerlaw shell, truncated Gaussian shell, constant-density torus, dual cones, and spiralling helical tubes, and can accept additional distributions. RHOCUBE provides convenient methods for shifts and rotations in 3D, and if necessary, an arbitrary number of density distributions can be combined into the same model cube and the integration ∫ dz performed through the joint density field.
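
    The core idea, combining analytic density components on a Cartesian grid and collapsing the cube along the line of sight, can be sketched with plain numpy. This is a schematic illustration of the approach, not the RHOCUBE API:

      import numpy as np

      # Discrete Cartesian grid.
      n = 64
      ax = np.linspace(-1.0, 1.0, n)
      X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
      r = np.sqrt(X**2 + Y**2 + Z**2)

      # Two simple density components combined into one model cube.
      shell = np.exp(-0.5 * ((r - 0.6) / 0.05) ** 2)     # Gaussian shell
      torus = ((np.sqrt(X**2 + Y**2) - 0.4) ** 2 + Z**2 < 0.01).astype(float)
      rho = shell + torus

      # Integrated 2D map: approximate the integral over dz by a Riemann sum.
      dz = ax[1] - ax[0]
      column = rho.sum(axis=2) * dz
      print("map shape:", column.shape, "max column density:", column.max())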

  16. Delay-limited Source and Channel Coding of Quasi-Stationary Sources over Block Fading Channels: Design and Scaling Laws

    CERN Document Server

    Joda, Roghayeh

    2012-01-01

    In this paper, delay-limited transmission of quasi-stationary sources over block fading channels is considered. Taking distortion outage probability as the performance measure, two source and channel coding schemes with power-adaptive transmission are presented. The first one is optimized for fixed rate transmission, and hence enjoys simplicity of implementation. The second one is a high performance scheme, which also benefits from optimized rate adaptation with respect to source and channel states. In the high SNR regime, the performance scaling laws in terms of outage distortion exponent and asymptotic outage distortion gain are derived, where two schemes with fixed transmission power and adaptive or optimized fixed rates are considered as benchmarks for comparison. Various analytical and numerical results are provided which demonstrate a superior performance for the source and channel optimized rate and power adaptive scheme. It is also observed that from a distortion outage perspective, the fixed rate adap...

  17. Coded moderator approach for fast neutron source detection and localization at standoff

    Science.gov (United States)

    Littell, Jennifer; Lukosi, Eric; Hayward, Jason; Milburn, Robert; Rowan, Allen

    2015-06-01

    Considering the need for directional sensing at standoff for some security applications and scenarios where a neutron source may be shielded by high-Z material that nearly eliminates the source gamma flux, this work investigates the feasibility of using thermal-neutron-sensitive boron straw detectors for fast neutron source detection and localization. We utilized MCNPX simulations to demonstrate that surrounding the boron straw detectors with an HDPE coded moderator yields a source-detector orientation-specific response that enables potential 1D source localization in a high neutron detection efficiency design. An initial test algorithm was developed to confirm the viability of the detector system's localization capabilities; it identified a 1 MeV neutron source with a strength equivalent to 8 kg WGPu at 50 m standoff to within ±11°.

  18. Methodology Using MELCOR Code to Model Proposed Hazard Scenario

    Energy Technology Data Exchange (ETDEWEB)

    Gavin Hawkley

    2010-07-01

    This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of the leak path factor (LPF), the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the fraction of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and on other pathways from the building, such as doorways, both open and closed. The study shows how the multiple LPFs within the building can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a receptor placed downwind can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). The study also briefly addresses particle characteristics that affect atmospheric particle dispersion and compares this dispersion with the LPF methodology.
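
    The combinatory evaluation reduces to multiplying the escape fraction of each stage along a pathway. A minimal sketch, in which the 0.5 values mirror the assumed standard multiplication discussed above rather than measured factors:

      # Illustrative only: the 0.5 values mirror the assumed standard
      # multiplication discussed above, not measured factors.
      def total_lpf(factors):
          """Total leak path factor for material passing through a series of
          rooms/barriers: the escaping fractions of each stage multiply."""
          total = 1.0
          for f in factors:
              total *= f
          return total

      mar = 1.0e-3                     # material at risk (kg), illustrative
      lpf = total_lpf([0.5, 0.5])      # two stages of 0.5 each -> 0.25
      source_term = mar * lpf          # respirable ST for consequence analysis
      print(f"total LPF = {lpf}, source term = {source_term:.2e} kg")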

  19. LMFBR models for the ORIGEN2 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.

    1983-06-01

    Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-{sup 233}U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.

  20. LMFBR models for the ORIGEN2 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.

    1981-10-01

    Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-{sup 233}U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.

  1. Information Theoretic Authentication and Secrecy Codes in the Splitting Model

    CERN Document Server

    Huber, Michael

    2011-01-01

    In the splitting model, information theoretic authentication codes allow non-deterministic encoding, that is, several messages can be used to communicate a particular plaintext. Certain applications require that the aspect of secrecy should hold simultaneously. Ogata-Kurosawa-Stinson-Saido (2004) have constructed optimal splitting authentication codes achieving perfect secrecy for the special case when the number of keys equals the number of messages. In this paper, we establish a construction method for optimal splitting authentication codes with perfect secrecy in the more general case when the number of keys may differ from the number of messages. To the best of our knowledge, this is the first result of this type.

  2. JPEG2000 COMPRESSION CODING USING HUMAN VISUAL SYSTEM MODEL

    Institute of Scientific and Technical Information of China (English)

    Xiao Jiang; Wu Chengke

    2005-01-01

    In order to apply the Human Visual System (HVS) model to the JPEG2000 standard, several implementation alternatives are discussed and a new scheme of visual optimization is introduced that modifies the slope of the rate-distortion curve. The novelty is that visual weighting is not applied by lifting the coefficients in the wavelet domain, but is implemented through code stream organization. It retains all the features of Embedded Block Coding with Optimized Truncation (EBCOT), such as resolution progressiveness, good robustness against bit-error spread, and compatibility with lossless compression. Performing better than other methods, it keeps the shortest standard codestream and decompression time and supports VIsual Progressive (VIP) coding.

  3. Differences between the 1993 and 1995 CABO Model Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Conover, D.R.; Lucas, R.G.

    1995-10-01

    The Energy Policy Act of 1992 requires the US DOE to determine if changes to the Council of American Building Officials' (CABO) 1993 Model Energy Code (MEC) (CABO 1993), published in the 1995 edition of the MEC (CABO 1995), will improve energy efficiency in residential buildings. The DOE, the states, and others have expressed an interest in the differences between the 1993 and 1995 editions of the MEC. This report describes each change to the 1993 MEC and its impact. Referenced publications are also listed, along with discrepancies between code changes approved in the 1994 and 1995 code-change cycles and what actually appears in the 1995 MEC.

  4. On Optimal Causal Coding of Partially Observed Markov Sources in Single and Multi-Terminal Settings

    CERN Document Server

    Yüksel, Serdar

    2010-01-01

    The optimal causal coding of a partially observed Markov process is studied, where the cost to be minimized is a bounded, non-negative, additive, measurable single-letter function of the source and the receiver output. A structural result is obtained extending Witsenhausen's and Walrand-Varaiya's structural results on the optimal real-time coders to a partially observed setting. The decentralized (multi-terminal) setup is also considered. For the case where the source is an i.i.d. process, it is shown that the design of optimal decentralized causal coding of correlated observations admits a separation. For Markov sources, a counterexample to a natural separation conjecture is presented. Applications in estimation and networked control problems are discussed, in the context of a linear, Gaussian setup.

  5. Modeling Guidelines for Code Generation in the Railway Signaling Context

    Science.gov (United States)

    Ferrari, Alessio; Bacherini, Stefano; Fantechi, Alessandro; Zingoni, Niccolo

    2009-01-01

    Modeling guidelines constitute one of the fundamental cornerstones of Model Based Development. Their relevance is essential when dealing with code generation in the safety-critical domain. This article presents the experience of a railway signaling systems manufacturer on this issue. The introduction of Model-Based Development (MBD) and code generation in the industrial safety-critical sector created a crucial paradigm shift in the development process of dependable systems. While traditional software development focuses on the code, with MBD practices the focus shifts to model abstractions. The change has fundamental implications for safety-critical systems, which still need to guarantee a high degree of confidence also at code level. Usage of the Simulink/Stateflow platform for modeling, which is a de facto standard in control software development, does not by itself ensure production of high-quality dependable code. This issue has been addressed by companies through the definition of modeling rules imposing restrictions on the usage of design tool components, in order to enable production of qualified code. The MAAB Control Algorithm Modeling Guidelines (MathWorks Automotive Advisory Board) [3] are a well-established set of publicly available rules for modeling with Simulink/Stateflow. This set of recommendations has been developed by a group of OEMs and suppliers of the automotive sector with the objective of enforcing and easing the usage of the MathWorks tools within the automotive industry. The guidelines were published in 2001 and afterwards revised in 2007 in order to integrate some additional rules developed by the Japanese division of MAAB [5]. The scope of the current edition of the guidelines ranges from model maintainability and readability to code generation issues. The rules are conceived as a reference baseline and therefore they need to be tailored to comply with the characteristics of each industrial context. Customization of these

  6. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Valenzise G

    2009-01-01

    Full Text Available In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been illegally tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially due to the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges from 20% to 70%.

  7. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    G. Valenzise

    2009-02-01

    Full Text Available In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been illegally tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially due to the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges from 20% to 70%.
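
    The hash-generation side can be sketched as random projections of a time-frequency representation followed by coarse quantization. This is a simplified illustration; the LDPC syndrome-coding step, which provides the additional 20% to 70% bit saving, is omitted:

      import numpy as np

      def audio_hash(signal, n_proj=200, frame=256, seed=7):
          """Illustrative hash: spectrogram magnitudes -> random projections
          -> 1-bit quantization. (The paper additionally encodes the
          projections with an LDPC code and keeps only the syndrome bits.)"""
          rng = np.random.default_rng(seed)
          frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
          spec = np.abs(np.fft.rfft(frames, axis=1))   # time-frequency map
          feats = spec.ravel()
          P = rng.normal(0, 1, (n_proj, feats.size))   # random projections
          return (P @ feats > 0).astype(np.uint8)      # 1 bit per projection

      x = np.sin(2 * np.pi * 440 * np.arange(16384) / 8000)
      print("hash bits:", audio_hash(x)[:16])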

  8. Bayesian kinematic earthquake source models

    Science.gov (United States)

    Minson, S. E.; Simons, M.; Beck, J. L.; Genrich, J. F.; Galetzka, J. E.; Chowdhury, F.; Owen, S. E.; Webb, F.; Comte, D.; Glass, B.; Leiva, C.; Ortega, F. H.

    2009-12-01

    Most coseismic, postseismic, and interseismic slip models are based on highly regularized optimizations which yield one solution which satisfies the data given a particular set of regularizing constraints. This regularization hampers our ability to answer basic questions such as whether seismic and aseismic slip overlap or instead rupture separate portions of the fault zone. We present a Bayesian methodology for generating kinematic earthquake source models with a focus on large subduction zone earthquakes. Unlike classical optimization approaches, Bayesian techniques sample the ensemble of all acceptable models presented as an a posteriori probability density function (PDF), and thus we can explore the entire solution space to determine, for example, which model parameters are well determined and which are not, or what is the likelihood that two slip distributions overlap in space. Bayesian sampling also has the advantage that all a priori knowledge of the source process can be used to mold the a posteriori ensemble of models. Although very powerful, Bayesian methods have up to now been of limited use in geophysical modeling because they are only computationally feasible for problems with a small number of free parameters due to what is called the "curse of dimensionality." However, our methodology can successfully sample solution spaces of many hundreds of parameters, which is sufficient to produce finite fault kinematic earthquake models. Our algorithm is a modification of the tempered Markov chain Monte Carlo (tempered MCMC or TMCMC) method. In our algorithm, we sample a "tempered" a posteriori PDF using many MCMC simulations running in parallel and evolutionary computation in which models which fit the data poorly are preferentially eliminated in favor of models which better predict the data. We present results for both synthetic test problems as well as for the 2007 Mw 7.8 Tocopilla, Chile earthquake, the latter of which is constrained by InSAR, local high
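
    A toy illustration of the tempering idea in one dimension: the posterior is approached through a ladder of powers beta, with importance resampling and short Metropolis moves at each level so that poorly fitting models are preferentially eliminated. This is a schematic sketch under a flat-prior assumption, not the finite-fault implementation:

      import numpy as np

      def log_post(theta):
          """Toy bimodal log-posterior standing in for a slip-model posterior."""
          return np.logaddexp(-0.5 * ((theta - 2) / 0.3) ** 2,
                              -0.5 * ((theta + 2) / 0.3) ** 2)

      rng = np.random.default_rng(0)
      n = 2000
      samples = rng.normal(0, 5, n)            # start from a broad prior
      betas = np.linspace(0.0, 1.0, 11)        # tempering ladder

      for b0, b1 in zip(betas[:-1], betas[1:]):
          # Importance-resample toward the next tempered level ~ post^b1.
          w = np.exp((b1 - b0) * log_post(samples))
          samples = rng.choice(samples, size=n, p=w / w.sum())
          # A few Metropolis steps at level b1 rejuvenate the particles
          # (flat prior assumed for simplicity).
          for _ in range(10):
              prop = samples + rng.normal(0, 0.5, n)
              accept = (np.log(rng.uniform(size=n))
                        < b1 * (log_post(prop) - log_post(samples)))
              samples = np.where(accept, prop, samples)

      print("posterior mean |theta|:", np.abs(samples).mean())  # ~2, both modes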

  9. Eu-NORSEWInD - Assessment of Viability of Open Source CFD Code for the Wind Industry

    DEFF Research Database (Denmark)

    Stickland, Matt; Scanlon, Tom; Fabre, Sylvie;

    2009-01-01

    hours of compute time to solve even on a high-speed processor. One way of reducing the compute time is by employing parallel processing on a number of computational nodes. However, increasing the number of computational nodes may involve the purchase of extra licenses if using a standard commercial code....... The cost of the extra licences can become the limit on the final number of nodes employed. Whilst there are significant benefits to be found when using a commercial code which has a user-friendly interface and has undergone significant verification testing, the financial advantages of using an open source...

  10. A Joint Source and Channel Video Coding Algorithmfor H.223 Based Wireless Channel

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Robust video streaming over highly error-prone wireless channels has attracted much attention. In this paper the authors introduce an effective algorithm that joins the Unequal Error Protection ability of the channel multiplexing protocol H.223 Annex D and the new H.263++ Annex V Data Partitioning together. Based on the optimal trade-off between these two technologies, the Joint Source and Channel Coding algorithm achieves stronger error resilience. The simulation results have shown its superiority over separate coding modes and some Unequal Error Protection modes under recommended wireless channel error patterns.

  11. Student Model Tools Code Release and Documentation

    DEFF Research Database (Denmark)

    Johnson, Matthew; Bull, Susan; Masci, Drew

    This document contains a wealth of information about the design and implementation of the Next-TELL open learner model. Information is included about the final specification (Section 3), the interfaces and features (Section 4), its implementation and technical design (Section 5) and also a summary...

  12. Code-to-Code Comparison, and Material Response Modeling of Stardust and MSL using PATO and FIAT

    Science.gov (United States)

    Omidy, Ali D.; Panerai, Francesco; Martin, Alexandre; Lachaud, Jean R.; Cozmuta, Ioana; Mansour, Nagi N.

    2015-01-01

    This report provides a code-to-code comparison between PATO, a recently developed high fidelity material response code, and FIAT, NASA's legacy code for ablation response modeling. The goal is to demonstrate that FIAT and PATO generate the same results when using the same models. Test cases of increasing complexity are used, from both arc-jet testing and flight experiments. When using the exact same physical models, material properties and boundary conditions, the two codes give results that are within 2% of each other. The minor discrepancy is attributed to the inclusion of the gas phase heat capacity (cp) in the energy equation in PATO, and not in FIAT.

  13. Study of the source term of radiation of the CDTN GE-PET trace 8 cyclotron with the MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Benavente C, J. A.; Lacerda, M. A. S.; Fonseca, T. C. F.; Da Silva, T. A. [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Vega C, H. R., E-mail: jhonnybenavente@gmail.com [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)

    2015-10-15

    Full text: The knowledge of the neutron spectra in a PET cyclotron is important for the optimization of radiation protection of the workers and individuals of the public. The main objective of this work is to study the source term of radiation of the GE-PET trace 8 cyclotron of the Development Center of Nuclear Technology (CDTN/CNEN) using computer simulation by the Monte Carlo method. The MCNPX version 2.7 code was used to calculate the flux of neutrons produced from the interaction of the primary proton beam with the target body and other cyclotron components during {sup 18}F production. The estimate of the source term and the corresponding radiation field was performed from the bombardment of a H{sub 2}{sup 18}O target with protons of 75 μA current and 16.5 MeV of energy. The values of the simulated fluxes were compared with those reported by the accelerator manufacturer (GE Healthcare Company). Results showed that the fluxes estimated with the MCNPX code were about 70% lower than those reported by the manufacturer. The mean energies of the neutrons were also different from those reported by GE Healthcare. It is recommended to investigate other cross section data and the use of physical models of the code itself for a complete characterization of the source term of radiation. (Author)

  14. Computer vision for detecting and quantifying gamma-ray sources in coded-aperture images

    Energy Technology Data Exchange (ETDEWEB)

    Schaich, P.C.; Clark, G.A.; Sengupta, S.K.; Ziock, K.P.

    1994-11-02

    The authors report the development of an automatic image analysis system that detects gamma-ray source regions in images obtained from a coded-aperture gamma-ray imager. The number of gamma sources in the image is not known prior to analysis. The system counts the number (K) of gamma sources detected in the image and estimates the lower bound for the probability that the number of sources in the image is K. The system consists of a two-stage pattern classification scheme in which the Probabilistic Neural Network is used in the supervised learning mode. The algorithms were developed and tested using real gamma-ray images from controlled experiments in which the number and location of depleted uranium source disks in the scene are known.

  15. The JCSS probabilistic model code: Experience and recent developments

    NARCIS (Netherlands)

    Chryssanthopoulos, M.; Diamantidis, D.; Vrouwenvelder, A.C.W.M.

    2003-01-01

    The JCSS Probabilistic Model Code (JCSS-PMC) has been available for public use on the JCSS website (www.jcss.ethz.ch) for over two years. During this period, several examples have been worked out and new probabilistic models have been added. Since the engineering community has already been exposed t

  16. A Mathematical Model for Comparing Holland's Personality and Environmental Codes.

    Science.gov (United States)

    Kwak, Junkyu Christopher; Pulvino, Charles J.

    1982-01-01

    Presents a mathematical model utilizing three-letter codes of personality patterns determined from the Self Directed Search. This model compares personality types over time or determines relationships between personality types and person-environment interactions. This approach is consistent with Holland's theory yet more comprehensive than one- or…

  17. GRHydro: A new open source general-relativistic magnetohydrodynamics code for the Einstein Toolkit

    CERN Document Server

    Moesta, Philipp; Faber, Joshua A; Haas, Roland; Noble, Scott C; Bode, Tanja; Loeffler, Frank; Ott, Christian D; Reisswig, Christian; Schnetter, Erik

    2013-01-01

    We present the new general-relativistic magnetohydrodynamics (GRMHD) capabilities of the Einstein Toolkit, an open-source community-driven numerical relativity and computational relativistic astrophysics code. The GRMHD extension of the Toolkit builds upon previous releases and implements the evolution of relativistic magnetised fluids in the ideal MHD limit in fully dynamical spacetimes using the same shock-capturing techniques previously applied to hydrodynamical evolution. In order to maintain the divergence-free character of the magnetic field, the code implements both hyperbolic divergence cleaning and constrained transport schemes. We present test results for a number of MHD tests in Minkowski and curved spacetimes. Minkowski tests include aligned and oblique planar shocks, cylindrical explosions, magnetic rotors, Alfvén waves and advected loops, as well as a set of tests designed to study the response of the divergence cleaning scheme to numerically generated monopoles. We study the code's performanc...

  18. A plug-in to Eclipse for VHDL source codes: functionalities

    Science.gov (United States)

    Niton, B.; Poźniak, K. T.; Romaniuk, R. S.

    The paper presents an original application, written by the authors, which supports the writing and editing of source code in the VHDL language. It is a step towards fully automatic, augmented code writing for photonic and electronic systems, including systems based on FPGAs and/or DSP processors. An implementation is described, based on VEditor. VEditor is a free license program; thus, the work presented in this paper supplements and extends this free license. The introduction briefly characterizes the tools available on the market which aid the design of electronic systems in VHDL. Particular attention is given to plug-ins for the Eclipse environment and the Emacs program. Detailed properties of the written plug-in are presented, such as the programming extension concept and the results of the activities of the formatter, re-factorizer, code hider, and other new additions to the VEditor program.

  19. Model-building codes for membrane proteins.

    Energy Technology Data Exchange (ETDEWEB)

    Shirley, David Noyes; Hunt, Thomas W.; Brown, W. Michael; Schoeniger, Joseph S. (Sandia National Laboratories, Livermore, CA); Slepoy, Alexander; Sale, Kenneth L. (Sandia National Laboratories, Livermore, CA); Young, Malin M. (Sandia National Laboratories, Livermore, CA); Faulon, Jean-Loup Michel; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA)

    2005-01-01

    We have developed a novel approach to modeling the transmembrane spanning helical bundles of integral membrane proteins using only a sparse set of distance constraints, such as those derived from MS3-D, dipolar-EPR and FRET experiments. Algorithms have been written for searching the conformational space of membrane protein folds matching the set of distance constraints, which provides initial structures for local conformational searches. Local conformation search is achieved by optimizing these candidates against a custom penalty function that incorporates both measures derived from statistical analysis of solved membrane protein structures and distance constraints obtained from experiments. This results in refined helical bundles to which the interhelical loops and amino acid side-chains are added. Using a set of only 27 distance constraints extracted from the literature, our methods successfully recover the structure of dark-adapted rhodopsin to within 3.2 {angstrom} of the crystal structure.

  20. VULCAN: an Open-Source, Validated Chemical Kinetics Python Code for Exoplanetary Atmospheres

    CERN Document Server

    Tsai, Shang-Min; Grosheintz, Luc; Rimmer, Paul B; Kitzmann, Daniel; Heng, Kevin

    2016-01-01

    We present an open-source and validated chemical kinetics code for studying hot exoplanetary atmospheres, which we name VULCAN. It is constructed for gaseous chemistry from 500 to 2500 K using a reduced C-H-O chemical network with about 300 reactions. It uses eddy diffusion to mimic atmospheric dynamics and excludes photochemistry. We have provided a full description of the rate coefficients and thermodynamic data used. We validate VULCAN by reproducing chemical equilibrium and by comparing its output versus the disequilibrium-chemistry calculations of Moses et al. and Rimmer & Helling. It reproduces the models of HD 189733b and HD 209458b by Moses et al., which employ a network with nearly 1600 reactions. Further validation of VULCAN is made by examining the theoretical trends produced when the temperature-pressure profile and carbon-to-oxygen ratio are varied. Assisted by a sensitivity test designed to identify the key reactions responsible for producing a specific molecule, we revisit the quenching ap...

  1. Multiview coding mode decision with hybrid optimal stopping model.

    Science.gov (United States)

    Zhao, Tiesong; Kwong, Sam; Wang, Hanli; Wang, Zhou; Pan, Zhaoqing; Kuo, C-C Jay

    2013-04-01

    In a generic decision process, optimal stopping theory aims to achieve a good tradeoff between decision performance and the time consumed, with the advantages of theoretical decision-making and predictable decision performance. In this paper, optimal stopping theory is employed to develop an effective hybrid model for the mode decision problem, which aims to theoretically achieve a good tradeoff between the two interrelated measurements in mode decision, namely computational complexity reduction and rate-distortion degradation. The proposed hybrid model is implemented and examined with a multiview encoder. To support the model and further promote coding performance, the multiview coding mode characteristics, including predicted mode probability and estimated coding time, are jointly investigated with inter-view correlations. Exhaustive experimental results with a wide range of video resolutions reveal the efficiency and robustness of our method, with high decision accuracy, negligible computational overhead, and almost intact rate-distortion performance compared to the original encoder.

  2. An adaptive source-channel coding with feedback for progressive transmission of medical images.

    Science.gov (United States)

    Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush

    2009-01-01

    A novel adaptive source-channel coding scheme with feedback for the progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated at the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control, as in most cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user friendly, since the selection of the RoI, its size, the overall code rate, and a number of test features such as the noise level can be set by the users at both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both the binary symmetric channel (BSC) and the Rayleigh channel. The experimental results verify the effectiveness of the design.
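
    The rate-adaptation rule can be illustrated with a small function in which the parity length of a subblock grows with the estimated channel noise and with proximity to the RoI. The scaling constants below are hypothetical, not the parameters used in the paper:

      def parity_length(block_distance, noise_estimate,
                        base=16, max_parity=128):
          """Choose the number of parity bits for an image subblock: stronger
          protection near the region of interest and on noisier channels.
          The constants here are hypothetical placeholders."""
          proximity = 1.0 / (1.0 + block_distance)   # 1 at the RoI, ->0 far away
          parity = base + int(proximity * noise_estimate * max_parity)
          return min(parity, max_parity)

      # Blocks near the RoI on a noisy channel receive the strongest code.
      for d in (0, 2, 8):
          print(f"distance {d}: parity = {parity_length(d, noise_estimate=0.4)} bits")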

  3. An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images

    Directory of Open Access Journals (Sweden)

    Jen-Lung Lo

    2009-01-01

    Full Text Available A novel adaptive source-channel coding scheme with feedback for the progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated at the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control, as in most cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user friendly, since the selection of the RoI, its size, the overall code rate, and a number of test features such as the noise level can be set by the users at both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both the binary symmetric channel (BSC) and the Rayleigh channel. The experimental results verify the effectiveness of the design.

  4. CACTI: free, open-source software for the sequential coding of behavioral interactions.

    Science.gov (United States)

    Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B

    2012-01-01

    The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.

  5. CACTI: free, open-source software for the sequential coding of behavioral interactions.

    Directory of Open Access Journals (Sweden)

    Lisa H Glynn

    Full Text Available The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.

  6. Severe accident source term characteristics for selected Peach Bottom sequences predicted by the MELCOR Code

    Energy Technology Data Exchange (ETDEWEB)

    Carbajo, J.J. [Oak Ridge National Lab., TN (United States)

    1993-09-01

    The purpose of this report is to compare in-containment source terms developed for NUREG-1159, which used the Source Term Code Package (STCP), with those generated by MELCOR to identify significant differences. For this comparison, two short-term depressurized station blackout sequences (with a dry cavity and with a flooded cavity) and a Loss-of-Coolant Accident (LOCA) concurrent with complete loss of the Emergency Core Cooling System (ECCS) were analyzed for the Peach Bottom Atomic Power Station (a BWR-4 with a Mark I containment). The results indicate that for the sequences analyzed, the two codes predict similar total in-containment release fractions for each of the element groups. However, the MELCOR/CORBH Package predicts significantly longer times for vessel failure and reduced energy of the released material for the station blackout sequences (when compared to the STCP results). MELCOR also calculated smaller releases into the environment than STCP for the station blackout sequences.

  7. Modelling spread of Bluetongue in Denmark: The code

    DEFF Research Database (Denmark)

    Græsbøll, Kaare

    This technical report was produced to make public the code produced as the main project of the PhD project by Kaare Græsbøll, with the title: "Modelling spread of Bluetongue and other vector borne diseases in Denmark and evaluation of intervention strategies".

  8. VULCAN: an Open-Source, Validated Chemical Kinetics Python Code for Exoplanetary Atmospheres

    OpenAIRE

    2016-01-01

    We present an open-source and validated chemical kinetics code for studying hot exoplanetary atmospheres, which we name VULCAN. It is constructed for gaseous chemistry from 500 to 2500 K using a reduced C- H-O chemical network with about 300 reactions. It uses eddy diffusion to mimic atmospheric dynamics and excludes photochemistry. We have provided a full description of the rate coefficients and thermodynamic data used. We validate VULCAN by reproducing chemical equilibrium and by comparing ...

  9. Source Code Plagiarism Detection Method Using Protégé Built Ontologies

    OpenAIRE

    Ion SMEUREANU; Bogdan IANCU

    2013-01-01

    Software plagiarism is a growing and serious problem that affects computer science universities in particular and the quality of education in general. More and more students tend to copy the software for their theses from older theses or internet databases. Checking source code manually to detect whether programs are similar or the same is a laborious and time-consuming job, perhaps even impossible given the existence of large digital repositories. Ontology is a way of describing a document's semantics, so it ...

  10. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly

    2017-01-05

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.
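
    A minimal sketch of a 1:2 analog mapping of this family: a scalar source sample is placed on a double Archimedean spiral (one arm per sign, radius proportional to angle) and decoded by a nearest-point search. This is a simplified variant for illustration, not the exact map or decoder used in the paper:

      import numpy as np

      def spiral_encode(s, delta=0.2):
          """Map a scalar source value to a point on a double Archimedean
          spiral (one arm per sign): a 1:2 analog joint source-channel code."""
          theta = np.abs(s) / delta
          arm = np.sign(s)
          return np.stack([arm * delta * theta * np.cos(theta),
                           delta * theta * np.sin(theta)], axis=-1)

      def spiral_decode(y, delta=0.2, grid=np.linspace(-4, 4, 4001)):
          """Approximate ML decoding: nearest spiral point over a source grid."""
          pts = spiral_encode(grid, delta)
          d = np.sum((pts - y) ** 2, axis=-1)
          return grid[np.argmin(d)]

      rng = np.random.default_rng(0)
      s = rng.normal()                                # one sampled message
      y = spiral_encode(s) + rng.normal(0, 0.05, 2)   # two AWGN channel uses
      print(f"sent {s:+.3f}, decoded {spiral_decode(y):+.3f}")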

  11. Results on the Fundamental Gain of Memory-Assisted Universal Source Coding

    CERN Document Server

    Beirami, Ahmad; Fekri, Faramarz

    2012-01-01

    Many applications require data processing to be performed on individual pieces of data which are of finite sizes, e.g., files in cloud storage units and packets in data networks. However, traditional universal compression solutions would not perform well over finite-length sequences. Recently, we proposed a framework called memory-assisted universal compression that holds significant promise for reducing the amount of redundant data in finite-length sequences. The proposed compression scheme is based on the observation that it is possible to learn source statistics (by memorizing previous sequences from the source) at some intermediate entities and then leverage the memorized context to reduce redundancy of the universal compression of finite-length sequences. We first present the fundamental gain of the proposed memory-assisted universal source coding over conventional universal compression (without memorization) for a single parametric source. Then, we extend and investigate the benefits of the ...

  12. Review of the status of validation of the computer codes used in the severe accident source term reassessment study (BMI-2104). [PWR; BWR]

    Energy Technology Data Exchange (ETDEWEB)

    Kress, T. S. [comp.

    1985-04-01

    The determination of severe accident source terms must, by necessity it seems, rely heavily on the use of complex computer codes. Source term acceptability, therefore, rests on the assessed validity of such codes. Consequently, one element of NRC's recent efforts to reassess LWR severe accident source terms is to provide a review of the status of validation of the computer codes used in the reassessment. The results of this review are the subject of this document. The separate review documents compiled in this report were used as a resource, along with the results of the BMI-2104 study by BCL and the QUEST study by SNL, to arrive at a more-or-less independent appraisal of the status of source term modeling at this time.

  13. VULCAN: An Open-source, Validated Chemical Kinetics Python Code for Exoplanetary Atmospheres

    Science.gov (United States)

    Tsai, Shang-Min; Lyons, James R.; Grosheintz, Luc; Rimmer, Paul B.; Kitzmann, Daniel; Heng, Kevin

    2017-02-01

    We present an open-source and validated chemical kinetics code for studying hot exoplanetary atmospheres, which we name VULCAN. It is constructed for gaseous chemistry from 500 to 2500 K, using a reduced C–H–O chemical network with about 300 reactions. It uses eddy diffusion to mimic atmospheric dynamics and excludes photochemistry. We have provided a full description of the rate coefficients and thermodynamic data used. We validate VULCAN by reproducing chemical equilibrium and by comparing its output versus the disequilibrium-chemistry calculations of Moses et al. and Rimmer & Helling. It reproduces the models of HD 189733b and HD 209458b by Moses et al., which employ a network with nearly 1600 reactions. We also use VULCAN to examine the theoretical trends produced when the temperature–pressure profile and carbon-to-oxygen ratio are varied. Assisted by a sensitivity test designed to identify the key reactions responsible for producing a specific molecule, we revisit the quenching approximation and find that it is accurate for methane but breaks down for acetylene, because the disequilibrium abundance of acetylene is not directly determined by transport-induced quenching, but is rather indirectly controlled by the disequilibrium abundance of methane. Therefore we suggest that the quenching approximation should be used with caution and must always be checked against a chemical kinetics calculation. A one-dimensional model atmosphere with 100 layers, computed using VULCAN, typically takes several minutes to complete. VULCAN is part of the Exoclimes Simulation Platform (ESP; exoclime.net) and publicly available at https://github.com/exoclime/VULCAN.

  14. Studies of Stellar Collapse and Black Hole Formation with the Open-Source Code GR1D

    CERN Document Server

    Ott, Christian D; 10.1063/1.3485130

    2010-01-01

    We discuss results from simulations of black hole formation in failing core-collapse supernovae performed with the code GR1D, a new open-source Eulerian spherically-symmetric general-relativistic hydrodynamics code. GR1D includes rotation in an approximate way (1.5D), comes with multiple finite-temperature nuclear equations of state (EOS), and treats neutrinos in the post-core-bounce phase via a 3-flavor leakage scheme and a heating prescription. We chose the favored K_0 = 220 MeV variant of the Lattimer & Swesty (1990) EOS and present collapse calculations using the progenitor models of Limongi & Chieffi (2006). We show that there is no direct (or "prompt") black hole formation in the collapse of ordinary massive stars (8 M_Sun ≲ M_ZAMS ≲ 100 M_Sun) and present first results from black hole formation simulations that include rotation.

  15. Verification and Validation of Heat Transfer Model of AGREE Code

    Energy Technology Data Exchange (ETDEWEB)

    Tak, N. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Seker, V.; Drzewiecki, T. J.; Downar, T. J. [Department of Nuclear Engineering and Radiological Sciences, Univ. of Michigan, Michigan (United States); Kelly, J. M. [US Nuclear Regulatory Commission, Washington (United States)

    2013-05-15

    The AGREE code was originally developed as a multi physics simulation code to perform design and safety analysis of Pebble Bed Reactors (PBR). Currently, additional capability for the analysis of Prismatic Modular Reactor (PMR) cores is in progress. The newly implemented fluid model for a PMR core is based on a subchannel approach which has been widely used in the analyses of light water reactor (LWR) cores. A hexagonal fuel (or graphite block) is discretized into triangular prism nodes having effective conductivities. Then, a meso-scale heat transfer model is applied to the unit cell geometry of a prismatic fuel block. Both unit cell geometries of multi-hole and pin-in-hole types of prismatic fuel blocks are considered in AGREE. The main objective of this work is to verify and validate the heat transfer model newly implemented for a PMR core in the AGREE code. The measured data from the HENDEL experiment were used for the validation of the heat transfer model for a pin-in-hole fuel block. However, the HENDEL tests were limited to only steady-state conditions of pin-in-hole fuel blocks. No experimental data are available regarding heat transfer in multi-hole fuel blocks. Therefore, numerical benchmarks using conceptual problems are considered to verify the heat transfer model of AGREE for multi-hole fuel blocks as well as transient conditions. The CORONA and GAMMA+ codes were used to compare the numerical results. In this work, the verification and validation study was performed for the heat transfer model of the AGREE code using the HENDEL experiment and the numerical benchmarks of selected conceptual problems. The results of the present work show that the heat transfer model of AGREE is accurate and reliable for prismatic fuel blocks. Further validation of AGREE is in progress for a whole reactor problem using the HTTR safety test data such as control rod withdrawal tests and loss-of-forced-convection tests.

  16. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-03-01

    needed for repository modeling are severely lacking. In addition, most of existing reactive transport codes were developed for non-radioactive contaminants, and they need to be adapted to account for radionuclide decay and in-growth. The accessibility to the source codes is generally limited. Because the problems of interest for the Waste IPSC are likely to result in relatively large computational models, a compact memory-usage footprint and a fast/robust solution procedure will be needed. A robust massively parallel processing (MPP) capability will also be required to provide reasonable turnaround times on the analyses that will be performed with the code. A performance assessment (PA) calculation for a waste disposal system generally requires a large number (hundreds to thousands) of model simulations to quantify the effect of model parameter uncertainties on the predicted repository performance. A set of codes for a PA calculation must be sufficiently robust and fast in terms of code execution. A PA system as a whole must be able to provide multiple alternative models for a specific set of physical/chemical processes, so that the users can choose various levels of modeling complexity based on their modeling needs. This requires PA codes, preferably, to be highly modularized. Most of the existing codes have difficulties meeting these requirements. Based on the gap analysis results, we have made the following recommendations for the code selection and code development for the NEAMS waste IPSC: (1) build fully coupled high-fidelity THCMBR codes using the existing SIERRA codes (e.g., ARIA and ADAGIO) and platform, (2) use DAKOTA to build an enhanced performance assessment system (EPAS), and build a modular code architecture and key code modules for performance assessments. The key chemical calculation modules will be built by expanding the existing CANTERA capabilities as well as by extracting useful components from other existing codes.

  17. Code Shift: Grid Specifications and Dynamic Wind Turbine Models

    DEFF Research Database (Denmark)

    Ackermann, Thomas; Ellis, Abraham; Fortmann, Jens

    2013-01-01

    Grid codes (GCs) and dynamic wind turbine (WT) models are key tools to allow increasing renewable energy penetration without challenging security of supply. In this article, the state of the art and the further development of both tools are discussed, focusing on the European and North American...

  18. Modelling binary rotating stars by new population synthesis code BONNFIRES

    CERN Document Server

    Lau, Herbert H B; Schneider, Fabian R N

    2013-01-01

    BONNFIRES, a new-generation population synthesis code, can calculate nuclear reactions, various mixing processes and binary interactions in a timely fashion. We use this new population synthesis code to study the interplay between binary mass transfer and rotation. We aim to compare theoretical models with observations, in particular surface nitrogen abundances and rotational velocities. Preliminary results show that binary interactions may explain the formation of nitrogen-rich slow rotators and nitrogen-poor fast rotators, but more work is needed to estimate whether the observed frequencies of those stars can be matched.

  19. Data Mining for Secure Software Engineering – Source Code Management Tool Case Study

    Directory of Open Access Journals (Sweden)

    A. V. Krishna Prasad

    2010-07-01

    As data mining for secure software engineering improves software productivity and quality, software engineers are increasingly applying data mining algorithms to various software engineering tasks. However, mining software engineering data poses several challenges, requiring various algorithms to effectively mine sequences, graphs and text from such data. Software engineering data includes code bases, execution traces, historical code changes, mailing lists and bug databases. They contain a wealth of information about a project's status, progress and evolution. Using well-established data mining techniques, practitioners and researchers can explore the potential of this valuable data in order to better manage their projects and produce higher-quality software systems that are delivered on time and within budget. Data mining can be used in gathering and extracting latent security requirements, extracting algorithms and business rules from code, mining legacy applications for requirements and business rules for new projects, etc. Mining algorithms for software engineering fall into four main categories: frequent pattern mining (finding commonly occurring patterns), pattern matching (finding data instances for given patterns), clustering (grouping data into clusters) and classification (predicting labels of data based on already labeled data). In this paper, we discuss an overview of strategies for data mining for secure software engineering, with the implementation of a case study of text mining for a source code management tool.

  20. Design and Simulation of Photoneutron Source by MCNPX Monte Carlo Code for Boron Neutron Capture Therapy

    Directory of Open Access Journals (Sweden)

    Mona Zolfaghari

    2015-07-01

    Introduction: Electron linear accelerators (LINACs) can be used for neutron production in Boron Neutron Capture Therapy (BNCT). BNCT is an external radiotherapeutic method for the treatment of some cancers. In this study, a Varian 2300 C/D LINAC was simulated as an electron accelerator-based photoneutron source to provide a suitable neutron flux for BNCT. Materials and Methods: Photoneutron sources were simulated using the MCNPX Monte Carlo code. In this study, a 20 MeV LINAC was utilized for electron-photon reactions. After the evaluation of cross-sections and threshold energies, lead (Pb), uranium (U) and beryllium deuteride (BeD2) were selected as photoneutron sources. Results: According to the simulation results, optimized photoneutron sources with a compact volume and photoneutron yields of 10^7, 10^8 and 10^9 n.cm-2.s-1 were obtained for the Pb, U and BeD2 composites. Also, photoneutron production increased when using enriched U (10-60%) as an electron accelerator-based photoneutron source. Conclusion: Optimized photoneutron sources with compact sizes and yields of 10^7, 10^8 and 10^9 n.cm-2.s-1, respectively, were obtained. These fluxes can be applied for BNCT by decelerating fast neutrons and using a suitable beam-shaping assembly surrounding the electron-photon and photoneutron sources.

  2. Modeling of Anomalous Transport in Tokamaks with FACETS code

    Science.gov (United States)

    Pankin, A. Y.; Batemann, G.; Kritz, A.; Rafiq, T.; Vadlamani, S.; Hakim, A.; Kruger, S.; Miah, M.; Rognlien, T.

    2009-05-01

    The FACETS code, a whole-device integrated modeling code that self-consistently computes plasma profiles for the plasma core and edge in tokamaks, has recently been developed as part of the SciDAC project for core-edge simulations. A choice of transport models is available in FACETS through the FMCFM interface [1]. Transport models included in FMCFM have specific ranges of applicability, which can limit their use to parts of the plasma. In particular, the GLF23 transport model does not include the resistive ballooning effects that can be important in the tokamak pedestal region, and GLF23 typically under-predicts the anomalous fluxes near the magnetic axis [2]. The TGLF and GYRO transport models have similar limitations [3]. A combination of transport models that covers the entire discharge domain is studied using FACETS in a realistic tokamak geometry. Effective diffusivities computed with the FMCFM transport models are extended to the region near the separatrix to be used in the UEDGE code within FACETS. [1] S. Vadlamani et al. (2009), First time-dependent transport simulations using GYRO and NCLASS within FACETS (this meeting). [2] T. Rafiq et al. (2009), Simulation of electron thermal transport in H-mode discharges, submitted to Phys. Plasmas. [3] C. Holland et al. (2008), Validation of gyrokinetic transport simulations using DIII-D core turbulence measurements, Proc. of IAEA FEC (Switzerland, 2008).

  3. Sources of financial pressure and up coding behavior in French public hospitals.

    Science.gov (United States)

    Georgescu, Irène; Hartmann, Frank G H

    2013-05-01

    Drawing upon role theory and the literature concerning unintended consequences of financial pressure, this study investigates the effects of health care decision pressure from the hospital's administration and from the professional peer group on physicians' inclination to engage in up coding. We explore two kinds of up coding, information-related and action-related, and develop hypotheses that connect these kinds of data manipulation to the sources of pressure via the intermediate effect of role conflict. Qualitative data from initial interviews with physicians and subsequent questionnaire evidence from 578 physicians in 14 French hospitals suggest that the source of pressure is a relevant predictor of physicians' inclination to engage in data manipulation. We further find that this effect is partly explained by the extent to which these pressures create role conflict. Given the concern about up coding in treatment-based reimbursement systems worldwide, our analysis adds to understanding how the design of a hospital's management control system may encourage this undesired type of behavior.

  4. Source Code Prioritization Using Forward Slicing for Exposing Critical Elements in a Program

    Institute of Scientific and Technical Information of China (English)

    Mitrabinda Ray; Kanhaiya Lal Kumawat; Durga Prasad Mohapatra

    2011-01-01

    Even after thorough testing, a few bugs still remain in a program of moderate complexity. These residual bugs are randomly distributed throughout the code. We have noticed that bugs in some parts of a program cause more frequent and severe failures than those in other parts, so a decision must be made about what to test more and what to test less within the testing budget. It is possible to prioritize the methods and classes of an object-oriented program according to their potential to cause failures. For this, we propose a program metric called the influence metric to find the influence of a program element on the source code. First, we represent the source code as an intermediate graph called the extended system dependence graph. Then, forward slicing is applied on a node of the graph to get the influence of that node. The influence metric for a method m in a program is the number of statements of the program which directly or indirectly use the result produced by method m. We compute the influence metric for a class c based on the influence metrics of all its methods. As the influence metric is computed statically, it does not reflect the expected behavior of a class at run time. It is already known that faults in highly executed parts tend to cause more failures. Therefore, we have considered the operational profile to find the average execution time of a class in a system. Then, classes are prioritized in the source code based on the influence metric and average execution time. The priority of an element indicates the potential of the element to cause failures. Once all program elements have been prioritized, the testing effort can be apportioned so that the elements causing frequent failures will be tested thoroughly. We have conducted experiments for two well-known case studies -- Library Management System and Trading Automation System -- and successfully identified critical elements in the source code of each case study. We have also conducted experiments to...
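
    The influence metric described above is, in graph terms, the size of a forward slice. A minimal Python sketch using the networkx library on a toy dependence graph (the node names are hypothetical; a real graph would be extracted from the source code):

        import networkx as nx

        # Toy extended system dependence graph: an edge u -> v means
        # statement v uses (directly or indirectly) the result of u.
        esdg = nx.DiGraph()
        esdg.add_edges_from([
            ("m_parse", "s1"), ("s1", "s2"), ("s2", "s3"),
            ("m_log", "s4"),
            ("m_parse", "s5"), ("s5", "s3"),
        ])

        def influence(graph, node):
            """Influence metric: number of statements reachable by a forward
            slice from `node`, i.e., statements that directly or indirectly
            use its result."""
            return len(nx.descendants(graph, node))

        # Prioritize methods by influence; higher influence gets tested more.
        for m in ("m_parse", "m_log"):
            print(m, influence(esdg, m))   # m_parse -> 4, m_log -> 1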

  5. Model classification rate control algorithm for video coding

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A model-classification rate control method for video coding is proposed. Macroblocks are classified according to their prediction errors, and different parameters are used in the rate-quantization and distortion-quantization models. The model parameters for each class are calculated from the previous frame of the same type in the process of coding. These models are used to estimate the relations among rate, distortion and quantization of the current frame. Further steps, such as R-D optimization based quantization adjustment and smoothing of quantization of adjacent macroblocks, are used to improve the quality. The results of the experiments prove that the technique is effective and can be realized easily. The method presented in the paper is well suited to MPEG and H.264 rate control.
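
    One common way to realize such a per-class rate-quantization model is the quadratic form R(Q) = a/Q + b/Q^2 fitted from the previous frame of the same type. The paper's exact model is not specified here, so the following Python sketch should be read as an assumed illustration rather than the authors' method:

        import numpy as np

        def fit_rq_model(samples):
            """Fit R(Q) = a/Q + b/Q**2 by least squares from (Q, bits) pairs
            collected on the previous frame of the same macroblock class."""
            Q = np.array([q for q, _ in samples], dtype=float)
            R = np.array([r for _, r in samples], dtype=float)
            A = np.column_stack([1.0 / Q, 1.0 / Q**2])
            (a, b), *_ = np.linalg.lstsq(A, R, rcond=None)
            return a, b

        def pick_q(a, b, target_bits, q_range=range(1, 52)):
            """Choose the quantizer whose predicted rate is closest to the budget."""
            return min(q_range, key=lambda q: abs(a / q + b / q**2 - target_bits))

        a, b = fit_rq_model([(10, 1200.0), (20, 520.0), (30, 330.0)])
        print(pick_q(a, b, target_bits=800.0))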

  6. NLTE solar irradiance modeling with the COSI code

    CERN Document Server

    Shapiro, A I; Schoell, M; Haberreiter, M; Rozanov, E

    2010-01-01

    Context. The solar irradiance is known to change on time scales of minutes to decades, and it is suspected that its substantial fluctuations are partially responsible for climate variations. Aims. We are developing a solar atmosphere code that allows the physical modeling of the entire solar spectrum composed of quiet Sun and active regions. This code is a tool for modeling the variability of the solar irradiance and understanding its influence on Earth. Methods. We exploit a further development of the radiative transfer code COSI that now incorporates the calculation of molecular lines. We validated COSI under the conditions of local thermodynamic equilibrium (LTE) against the synthetic spectra calculated with the ATLAS code. The synthetic solar spectra were also calculated in non-local thermodynamic equilibrium (NLTE) and compared to the available measured spectra. In doing so we have identified the main problems of the modeling, e.g., the lack of opacity in the UV part of the spectrum and the inconsistency in...

  7. A semianalytic Monte Carlo code for modelling LIDAR measurements

    Science.gov (United States)

    Palazzi, Elisa; Kostadinov, Ivan; Petritoli, Andrea; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Premuda, Margherita; Giovanelli, Giorgio

    2007-10-01

    LIDAR (LIght Detection And Ranging) is an active optical remote sensing technology with many applications in atmospheric physics. Modelling of LIDAR measurements is a useful approach for evaluating the effects of various environmental variables and scenarios, as well as of different measurement geometries and instrumental characteristics. In this regard a Monte Carlo simulation model can provide a reliable answer to these important requirements. A semianalytic Monte Carlo code for modelling LIDAR measurements has been developed at ISAC-CNR. The backscattered laser signal detected by the LIDAR system is calculated in the code taking into account the contributions of the main atmospheric molecular constituents and aerosol particles through processes of single and multiple scattering. The contributions of molecular absorption and of ground and cloud reflection are evaluated too. The code can perform simulations of both monostatic and bistatic LIDAR systems. To enhance the efficiency of the Monte Carlo simulation, analytical estimates and expected-value calculations are performed. Variance-reduction devices (such as forced collision, local forced collision, splitting and Russian roulette) are moreover provided by the code, which enable the user to drastically reduce the variance of the calculation.
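
    Of the variance-reduction devices mentioned, Russian roulette is the simplest to show in isolation. A minimal Python sketch, with an assumed albedo and weight threshold, of how low-weight photon histories are terminated without biasing the estimate:

        import random

        ALBEDO = 0.8       # single-scattering albedo (assumed value)
        W_MIN = 1e-3       # weight threshold below which roulette is played
        P_SURVIVE = 0.5

        def russian_roulette(weight):
            """Terminate low-weight photon histories without bias: a photon
            surviving the roulette has its weight boosted by 1/P_SURVIVE."""
            if weight >= W_MIN:
                return weight
            if random.random() < P_SURVIVE:
                return weight / P_SURVIVE
            return 0.0  # history terminated

        # Follow one photon: at each scattering event the weight is reduced
        # by the albedo instead of killing the photon outright.
        w, n_scatter = 1.0, 0
        while w > 0.0 and n_scatter < 1000:
            w = russian_roulette(w * ALBEDO)
            n_scatter += 1
        print(n_scatter, w)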

  8. Alternative modeling methods for plasma-based Rf ion sources

    Energy Technology Data Exchange (ETDEWEB)

    Veitzer, Seth A., E-mail: veitzer@txcorp.com; Kundrapu, Madhusudhan, E-mail: madhusnk@txcorp.com; Stoltz, Peter H., E-mail: phstoltz@txcorp.com; Beckwith, Kristian R. C., E-mail: beckwith@txcorp.com [Tech-X Corporation, Boulder, Colorado 80303 (United States)

    2016-02-15

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H− source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H− ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models.

  9. Alternative modeling methods for plasma-based Rf ion sources

    Science.gov (United States)

    Veitzer, Seth A.; Kundrapu, Madhusudhan; Stoltz, Peter H.; Beckwith, Kristian R. C.

    2016-02-01

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H- source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H- ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models.
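
    The temperature equilibration mentioned at the end is governed, in its simplest form, by relaxation of the electron and ion temperatures toward each other. A minimal Python sketch with an assumed exchange frequency and equal heat capacities (illustrative values only, not USim's model):

        import numpy as np
        from scipy.integrate import solve_ivp

        NU_EQ = 2.0e6  # electron-ion energy-exchange frequency [1/s], assumed

        def rhs(t, y):
            """Two-temperature relaxation: electrons and ions exchange energy
            at a rate proportional to their temperature difference, the basic
            behavior a two-temperature MHD model must reproduce."""
            te, ti = y
            return [-NU_EQ * (te - ti), NU_EQ * (te - ti)]

        sol = solve_ivp(rhs, (0.0, 5e-6), [5.0, 1.0])  # initial Te, Ti in eV
        te, ti = sol.y[:, -1]
        print(f"Te = {te:.3f} eV, Ti = {ti:.3f} eV")   # both approach 3 eV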

  10. Searching in one billion vectors: re-rank with source coding

    CERN Document Server

    Jégou, Hervé; Douze, Matthijs; Amsaleg, Laurent

    2011-01-01

    Recent indexing techniques inspired by source coding have been shown to be successful in indexing billions of high-dimensional vectors in memory. In this paper, we propose an approach that re-ranks the neighbor hypotheses obtained by these compressed-domain indexing methods. In contrast to the usual post-verification scheme, which performs exact distance calculations on the short-list of hypotheses, the estimated distances are refined based on short quantization codes, to avoid reading the full vectors from disk. We have released a new public dataset of one billion 128-dimensional vectors and propose an experimental setup to evaluate high-dimensional indexing algorithms on a realistic scale. Experiments show that our method accurately and efficiently re-ranks the neighbor hypotheses using little memory compared to the full vector representation.
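
    The two-stage idea, a short-list from heavily compressed codes followed by refinement with short quantization codes instead of the full vectors, can be sketched as follows in Python. The rounding-based "codes" below are crude stand-ins for real quantizers, not the paper's method:

        import numpy as np

        rng = np.random.default_rng(0)
        db = rng.normal(size=(10_000, 16)).astype(np.float32)
        query = rng.normal(size=16).astype(np.float32)

        # Crude stand-ins for quantization: a coarse code (heavy compression)
        # and a short refinement code for the residual.
        coarse = np.round(db, 0)
        refine = np.round(db - coarse, 1)

        # Stage 1: short-list from coarse-code distances only.
        d_coarse = ((coarse - query) ** 2).sum(axis=1)
        shortlist = np.argsort(d_coarse)[:100]

        # Stage 2: re-rank the short-list using the refinement codes,
        # avoiding any access to the full vectors.
        d_refined = (((coarse[shortlist] + refine[shortlist]) - query) ** 2).sum(axis=1)
        result = shortlist[np.argsort(d_refined)][:10]
        print(result)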

  11. Interferential multi-spectral image compression based on distributed source coding

    Science.gov (United States)

    Wu, Xian-yun; Li, Yun-song; Wu, Cheng-ke; Kong, Fan-qiang

    2008-08-01

    Based on analyses of interferential multispectral imagery (IMI), a new compression algorithm based on distributed source coding is proposed. There are apparent push motions between the IMI sequences, so the relative shift between two images is detected by a block-matching algorithm at the encoder. Our algorithm estimates the rate of each bitplane with the estimated side-information frame. Then the algorithm adopts an ROI coding scheme, in which a rate-distortion lifting procedure is carried out in the rate allocation stage. Using our algorithm, the FBC can be removed from the traditional scheme. The compression algorithm developed in the paper can obtain up to 3 dB gain compared with JPEG2000, and it significantly reduces the complexity and storage consumption compared with 3D-SPIHT, at the cost of a slight degradation in PSNR.

  12. Discovering binary codes for documents by learning deep generative models.

    Science.gov (United States)

    Hinton, Geoffrey; Salakhutdinov, Ruslan

    2011-01-01

    We describe a deep generative model in which the lowest layer represents the word-count vector of a document and the top layer represents a learned binary code for that document. The top two layers of the generative model form an undirected associative memory and the remaining layers form a belief net with directed, top-down connections. We present efficient learning and inference procedures for this type of generative model and show that it allows more accurate and much faster retrieval than latent semantic analysis. By using our method as a filter for a much slower method called TF-IDF we achieve higher accuracy than TF-IDF alone and save several orders of magnitude in retrieval time. By using short binary codes as addresses, we can perform retrieval on very large document sets in a time that is independent of the size of the document set using only one word of memory to describe each document.
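
    The "codes as addresses" retrieval step can be sketched independently of the generative model: store each document id under the integer address of its binary code, then probe all addresses within a small Hamming ball of the query code. In the Python sketch below the codes are random stand-ins for learned ones:

        import numpy as np
        from itertools import combinations

        CODE_BITS = 16
        rng = np.random.default_rng(1)

        # Pretend these binary codes came from the trained generative model.
        codes = rng.integers(0, 2, size=(100_000, CODE_BITS), dtype=np.uint8)
        addresses = codes.dot(1 << np.arange(CODE_BITS, dtype=np.uint32))

        # Hash table: code address -> document ids (one word per document).
        table = {}
        for doc_id, addr in enumerate(addresses):
            table.setdefault(int(addr), []).append(doc_id)

        def retrieve(addr, radius=1):
            """Return documents whose codes lie within `radius` bit flips of
            the query address; cost depends on the radius, not corpus size."""
            hits = list(table.get(addr, []))
            for r in range(1, radius + 1):
                for bits in combinations(range(CODE_BITS), r):
                    flipped = addr
                    for b in bits:
                        flipped ^= 1 << b
                    hits.extend(table.get(flipped, []))
            return hits

        print(len(retrieve(int(addresses[0]), radius=1)))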

  13. Transform Coding for Point Clouds Using a Gaussian Process Model.

    Science.gov (United States)

    De Queiroz, Ricardo; Chou, Philip A

    2017-04-28

    We propose using stationary Gaussian Processes (GPs) to model the statistics of the signal on points in a point cloud, which can be considered samples of a GP at the positions of the points. Further, we propose using Gaussian Process Transforms (GPTs), which are Karhunen-Loève transforms of the GP, as the basis of transform coding of the signal. Focusing on colored 3D point clouds, we propose a transform coder that breaks the point cloud into blocks, transforms the blocks using GPTs, and entropy codes the quantized coefficients. The GPT for each block is derived from both the covariance function of the GP and the locations of the points in the block, which are separately encoded. The covariance function of the GP is parameterized, and its parameters are sent as side information. The quantized coefficients are sorted by eigenvalues of the GPTs, binned, and encoded using an arithmetic coder with bin-dependent Laplacian models whose parameters are also sent as side information. Results indicate that transform coding of 3D point cloud colors using the proposed GPT and entropy coding achieves superior compression performance on most of our data sets.
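
    A minimal Python sketch of the block transform: build the covariance implied by an assumed squared-exponential kernel at the decoded point positions, eigendecompose it, and project the block's colors onto the eigenvectors. The kernel parameters here are illustrative placeholders for the side information the paper transmits:

        import numpy as np

        def gpt_basis(points, variance=1.0, length=0.5):
            """Gaussian Process Transform for one block: eigendecomposition of
            the covariance matrix implied by a squared-exponential kernel
            evaluated at the (decoded) point locations."""
            d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
            cov = variance * np.exp(-d2 / (2.0 * length**2))
            eigvals, eigvecs = np.linalg.eigh(cov)     # KLT basis
            order = np.argsort(eigvals)[::-1]          # sort by energy
            return eigvals[order], eigvecs[:, order]

        rng = np.random.default_rng(2)
        pts = rng.uniform(size=(64, 3))                # one block of points
        color = rng.uniform(size=64)                   # one color channel
        lam, U = gpt_basis(pts)
        coeffs = U.T @ color                           # transform, then quantize
        print(lam[:4], coeffs[:4])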

  14. The 2010 fib Model Code for Structural Concrete: A new approach to structural engineering

    NARCIS (Netherlands)

    Walraven, J.C.; Bigaj-Van Vliet, A.

    2011-01-01

    The fib Model Code is a recommendation for the design of reinforced and prestressed concrete, intended to be a guiding document for future codes. Model Codes have been published before, in 1978 and 1990. The draft of fib Model Code 2010 was published in May 2010. The most important new elem...

  15. Protecting Oracle PL/SQL Source Code From a DBA User

    OpenAIRE

    Hakik Paci; Elinda Kajo Mece; Aleksander Xhuvani

    2012-01-01

    In this paper we present a new way to disable DDL statements on some specific PL/SQL procedures for a DBA user in the Oracle database. Nowadays DBA users have access to a lot of data and source code even if they do not have legal permission to see or modify it. With this method we can disable the ability to execute DDL and DML statements on some specific PL/SQL procedures for every Oracle database user, even one with a DBA role. Oracle gives developers the possibility to wrap the ...

  16. Chronos sickness: digital reality in Duncan Jones’s Source Code

    Directory of Open Access Journals (Sweden)

    Marcia Tiemy Morita Kawamoto

    2017-01-01

    http://dx.doi.org/10.5007/2175-8026.2017v70n1p249. The advent of digital technologies has unquestionably affected the cinema. The indexical relation and realistic effect with the photographed world much praised by André Bazin and Roland Barthes is just one of the affected aspects. This article discusses cinema in light of the new digital possibilities, reflecting on Steven Shaviro’s consideration of “how a nonindexical realism might be possible” (63) and how in fact a new kind of reality, a digital one, might emerge in the science fiction film Source Code (2013) by Duncan Jones.

  17. Numerical model of electron cyclotron resonance ion source

    Directory of Open Access Journals (Sweden)

    V. Mironov

    2015-12-01

    Important features of electron cyclotron resonance ion source (ECRIS) operation are accurately reproduced with a numerical code. The code uses the particle-in-cell technique to model the dynamics of ions in the ECRIS plasma. It is shown that a gas-dynamical ion confinement mechanism is sufficient to provide ion production rates in an ECRIS close to the experimentally observed values. Extracted ion currents are calculated and compared to experiment for a few sources. Changes in the simulated extracted ion currents are obtained by varying the gas flow into the source chamber and the microwave power. Empirical scaling laws for ECRIS design are studied and the underlying physical effects are discussed.

  18. Improvement of Basic Fluid Dynamics Models for the COMPASS Code

    Science.gov (United States)

    Zhang, Shuai; Morita, Koji; Shirakawa, Noriyuki; Yamamoto, Yuichi

    The COMPASS code is a new next-generation safety analysis code, based on the moving particle semi-implicit (MPS) method, that provides local information for various key phenomena in core disruptive accidents of sodium-cooled fast reactors. In this study, improvements of the basic fluid dynamics models for the COMPASS code were carried out and verified with fundamental verification calculations. A fully implicit pressure solution algorithm was introduced to improve the numerical stability of MPS simulations. With a newly developed free-surface model, the numerical difficulty caused by poor pressure solutions is overcome by involving free-surface particles in the pressure Poisson equation. In addition, the applicability of the MPS method to interactions between fluid and multiple solid bodies was investigated in comparison with dam-break experiments with solid balls. It was found that the PISO algorithm and the free-surface model make simulations with the passively moving solid model numerically stable. The characteristic behavior of the solid balls was successfully reproduced by the present numerical simulations.

  19. New Mechanical Model for the Transmutation Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Gregory K. Miller

    2008-04-01

    A new mechanical model has been developed for implementation into the TRU fuel performance code. The new model differs from the existing FRAPCON-3 model, which it is intended to replace, in that it includes structural deformations (elasticity, plasticity, and creep) of the fuel. Also, the plasticity algorithm is based on the “plastic strain–total strain” approach, which should allow for more rapid and assured convergence. The model treats three situations relative to interaction between the fuel and cladding: (1) an open gap between the fuel and cladding, such that there is no contact, (2) contact between the fuel and cladding where the contact pressure is below a threshold value, such that axial slippage occurs at the interface, and (3) contact between the fuel and cladding where the contact pressure is above a threshold value, such that axial slippage is prevented at the interface. The first stage of development of the model included only the fuel. In this stage, results obtained from the model were compared with those obtained from finite element analysis using ABAQUS on a problem involving elastic, plastic, and thermal strains. Results from the two analyses showed essentially exact agreement through both loading and unloading of the fuel. After the cladding and fuel/clad contact were added, the model demonstrated the expected behavior through all potential phases of fuel/clad interaction, and convergence was achieved without difficulty in all plastic analyses performed. The code is currently in stand-alone form. Prior to implementation into the TRU fuel performance code, creep strains will have to be added to the model. The model will also have to be verified against an ABAQUS analysis that involves contact between the fuel and cladding.

  20. Basic Pilot Code Development for Two-Fluid, Three-Field Model

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Jae Jun; Bae, S. W.; Lee, Y. J.; Chung, B. D.; Hwang, M.; Ha, K. S.; Kang, D. H

    2006-03-15

    A basic pilot code for a one-dimensional, transient, two-fluid, three-field model has been developed. Using 9 conceptual problems, the basic pilot code has been verified. The results of the verification are summarized below:
    - It was confirmed that the basic pilot code can simulate various flow conditions (such as single-phase liquid flow, bubbly flow, slug/churn turbulent flow, annular-mist flow, and single-phase vapor flow) and transitions between them. A mist flow was not simulated, but it seems that the basic pilot code can simulate mist flow conditions.
    - The pilot code was programmed so that the source terms of the governing equations and the numerical solution schemes can be easily tested.
    - Mass and energy conservation was confirmed for single-phase liquid and single-phase vapor flows.
    - It was confirmed that the inlet pressure and velocity boundary conditions work properly.
    - It was confirmed that, for single- and two-phase flows, the velocity and temperature of a non-existing phase are calculated as intended.
    - During the simulation of a two-phase flow, the calculation reaches a quasi-steady state with small-amplitude oscillations. The oscillations seem to be induced by numerical causes.
    The research items for the improvement of the basic pilot code are listed in the last section of this report.

  1. Galactic Cosmic Ray Event-Based Risk Model (GERM) Code

    Science.gov (United States)

    Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.

    2013-01-01

    This software describes the transport and energy deposition of galactic cosmic rays passing through astronaut tissues during space travel, or of heavy ion beams in patients in cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly without unverified assumptions about the shape of the probability density function. The prior art of transport codes calculates the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in the cell and tissue, a transport code must describe the temporal and microspatial density functions needed to correlate DNA and oxidative damage with non-targeted effects such as signaling and bystander effects; these are ignored or impossible to treat in the prior art. The GERM code provides scientists with data interpretation of experiments; modeling of the beam line, shielding of target samples, and sample holders; and estimation of the basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities are linear energy transfer (LET), range (R), absorption and fragmentation cross-sections, and the probability of nuclear interactions after 1 or 5 cm of water-equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution for a specified cellular area, cell survival curves, and DNA damage yields per cell. Also, the GERM code calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material. The GERM code makes the numerical estimates of basic...

  2. Progressive Source Channel Embedded Coding of Image Over Static (Memory Less Channel

    Directory of Open Access Journals (Sweden)

    Anil L. Wanare

    2009-06-01

    In this paper, we propose a progressive, time-varying source-channel coding system for transmitting images over wireless channels. Transmission of compressed image data over a noisy channel is an important problem and has been investigated in a variety of scenarios. The core results are obtained by a systematic method of instantaneous rate allocation between the progressive source coder and the progressive channel coder. It is developed from closed-form expressions for the end-to-end distortion and the rate allocation in static channels, and the static result is extended to an algorithm for fading channels. A DCT-blocks approach is adapted to perform subband decomposition, followed by SPIHT (Set Partitioning In Hierarchical Trees) coding.
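
    The instantaneous rate-allocation step can be illustrated in Python by scanning the split of a fixed rate budget between source bits and channel protection, using an assumed exponential R-D model and an assumed channel-failure curve (neither is the paper's closed form):

        import numpy as np

        R_TOTAL = 2.0   # total rate budget [bits/pixel], assumed
        EPS = 0.05      # channel-protection effectiveness parameter, assumed

        def source_distortion(r_s):
            """Idealized exponential R-D model D(R) = 2**(-2R)."""
            return 2.0 ** (-2.0 * r_s)

        def failure_probability(r_c):
            """Assumed probability that channel protection r_c fails to decode."""
            return np.exp(-r_c / EPS)

        def expected_distortion(r_s):
            """End-to-end distortion: success keeps D(r_s); failure is a total
            loss (distortion 1.0 for a unit-variance source)."""
            p_fail = failure_probability(R_TOTAL - r_s)
            return (1.0 - p_fail) * source_distortion(r_s) + p_fail * 1.0

        best = min(np.linspace(0.01, R_TOTAL - 0.01, 199), key=expected_distortion)
        print(f"source rate {best:.2f}, channel redundancy {R_TOTAL - best:.2f}")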

  3. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    Science.gov (United States)

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, due to the different characteristics of HS images, in their spectral domain and in the shape domain of their panchromatic imagery, compared to traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard High Efficiency Video Coding (HEVC) for HS images is proposed. An HS image presents a wealth of data in which every pixel is considered a vector over the spectral bands. By quantitative comparison and analysis of the pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors across bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when we apply HEVC. Every spectral band of an HS image is treated as if it were an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS datasets with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of the rate-distortion performance of HS image compression. PMID:27695102
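
    The band-prediction step can be sketched in Python with a linear least-squares predictor standing in for the paper's Gaussian-mixture reflectance model; the toy cube below simply has correlated bands:

        import numpy as np

        rng = np.random.default_rng(3)
        H, W, B = 32, 32, 8
        # Toy HS cube whose bands are correlated (cumulative sums).
        cube = np.cumsum(rng.normal(size=(H, W, B)), axis=2)

        def predict_band(cube, b, context=2):
            """Least-squares prediction of band b from the preceding `context`
            bands; a linear stand-in for the Gaussian-mixture model."""
            X = cube[:, :, b - context:b].reshape(-1, context)
            A = np.column_stack([X, np.ones(len(X))])
            y = cube[:, :, b].reshape(-1)
            w, *_ = np.linalg.lstsq(A, y, rcond=None)
            return (A @ w).reshape(H, W)

        residual = cube[:, :, 5] - predict_band(cube, 5)
        print(f"residual energy {np.var(residual):.3f} "
              f"vs band energy {np.var(cube[:, :, 5]):.3f}")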

  4. TRIPOLI-4® Monte Carlo code ITER A-lite neutronic model validation

    Energy Technology Data Exchange (ETDEWEB)

    Jaboulay, Jean-Charles, E-mail: jean-charles.jaboulay@cea.fr [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France); Cayla, Pierre-Yves; Fausser, Clement [MILLENNIUM, 16 Av du Québec Silic 628, F-91945 Villebon sur Yvette (France); Damian, Frederic; Lee, Yi-Kang; Puma, Antonella Li; Trama, Jean-Christophe [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France)

    2014-10-15

    3D Monte Carlo transport codes are extensively used in neutronic analysis, especially in radiation protection and shielding analyses for fission and fusion reactors. TRIPOLI-4® is a Monte Carlo code developed by CEA. The aim of this paper is to show its capability to model a large-scale fusion reactor with a complex neutron source and geometry. A benchmark between MCNP5 and TRIPOLI-4® on the ITER A-lite model was carried out; the neutron flux, the nuclear heating in the blankets and the tritium production rate in the European TBMs were evaluated and compared. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model. Simplified TBMs, from KIT, were integrated in the equatorial port. A good agreement between MCNP and TRIPOLI-4® is shown; discrepancies are mainly within the statistical error.

  5. Current Capabilities of the Fuel Performance Modeling Code PARFUME

    Energy Technology Data Exchange (ETDEWEB)

    G. K. Miller; D. A. Petti; J. T. Maki; D. L. Knudson

    2004-09-01

    The success of gas reactors depends upon the safety and quality of the coated particle fuel. A fuel performance modeling code (called PARFUME), which simulates the mechanical and physico-chemical behavior of fuel particles during irradiation, is under development at the Idaho National Engineering and Environmental Laboratory. Among current capabilities in the code are: 1) various options for calculating CO production and fission product gas release, 2) a thermal model that calculates a time-dependent temperature profile through a pebble bed sphere or a prismatic block core, as well as through the layers of each analyzed particle, 3) simulation of multi-dimensional particle behavior associated with cracking in the IPyC layer, partial debonding of the IPyC from the SiC, particle asphericity, kernel migration, and thinning of the SiC caused by interaction of fission products with the SiC, 4) two independent methods for determining particle failure probabilities, 5) a model for calculating release-to-birth (R/B) ratios of gaseous fission products, that accounts for particle failures and uranium contamination in the fuel matrix, and 6) the evaluation of an accident condition, where a particle experiences a sudden change in temperature following a period of normal irradiation. This paper presents an overview of the code.

  6. Development Of Sputtering Models For Fluids-Based Plasma Simulation Codes

    Science.gov (United States)

    Veitzer, Seth; Beckwith, Kristian; Stoltz, Peter

    2015-09-01

    Rf-driven plasma devices, such as ion sources and plasma processing devices for many industrial and research applications, benefit from detailed numerical modeling. Simulation of these devices using explicit PIC codes is difficult due to inherent separations of time and spatial scales. One alternative type of model is fluid-based codes coupled with electromagnetics, which are applicable to modeling higher-density plasmas in the time domain and can relax time step requirements. To accurately model plasma-surface processes, such as physical sputtering and secondary electron emission, kinetic particle models have been developed, in which particles are emitted from a material surface due to plasma ion bombardment. In fluid models plasma properties are defined on a cell-by-cell basis, and distributions for individual particle properties are assumed. This adds a complexity to surface process modeling, which we describe here. We describe the implementation of sputtering models in the hydrodynamic plasma simulation code USim, as well as methods to improve the accuracy of fluid-based simulation of plasma-surface interactions by better modeling of heat fluxes. This work was performed under the auspices of the Department of Energy, Office of Basic Energy Sciences Award #DE-SC0009585.
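
    A fluid code sees only cell-averaged quantities, so a sputtering source term must be reconstructed from them. A minimal Python sketch, with a placeholder yield curve and assumed sheath energetics rather than USim's actual models:

        import numpy as np

        def sputter_yield(e_ion, e_threshold=50.0, q=0.5):
            """Very reduced sputtering-yield curve Y(E): zero below an assumed
            threshold, rising smoothly above it (placeholder constants, not
            fitted material data)."""
            e_ion = np.asarray(e_ion, dtype=float)
            return np.where(e_ion > e_threshold,
                            q * (1.0 - np.sqrt(e_threshold / e_ion)), 0.0)

        def sputtered_flux(n_i, t_e, t_i, m_i=1.67e-27):
            """Sputtering source from one fluid cell: Bohm ion flux to the wall
            times the yield at an assumed sheath-accelerated impact energy
            E = 2*T_i + 3*T_e (temperatures in eV)."""
            c_s = np.sqrt(1.602e-19 * (t_e + t_i) / m_i)  # ion sound speed [m/s]
            gamma_i = 0.5 * n_i * c_s                     # ion flux [1/(m^2 s)]
            e_impact = 2.0 * t_i + 3.0 * t_e              # impact energy [eV]
            return gamma_i * sputter_yield(e_impact)

        print(f"{sputtered_flux(n_i=1e19, t_e=20.0, t_i=20.0):.3e}")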

  7. Open Source Physics: Code and Curriculum Material for Teachers, Authors, and Developers

    Science.gov (United States)

    Christian, Wolfgang

    2004-03-01

    The continued use of procedural languages in education is due in part to the lack of up-to-date curricular materials that combine science topics with an object-oriented programming framework. Although there are many resources for teaching computational physics, few are object-oriented. What is needed by the broader science education community is not another computational physics, numerical analysis, or Java programming book (although such books are essential for discipline-specific practitioners), but a synthesis of curriculum development, computational physics, computer science, and physics education that will be useful for scientists and students wishing to write their own simulations and develop their own curricular material. The Open Source Physics (OSP) project was established to meet this need. OSP is an NSF-funded curriculum development project that is developing and distributing a code library, programs, and examples of computer-based interactive curricular material. In this talk, we will describe this library, demonstrate its use, and report on its adoption by curriculum authors. The Open Source Physics code library, documentation, and sample curricular material can be downloaded from http://www.opensourcephysics.org/. Partial funding for this work was obtained through NSF grant DUE-0126439.

  8. [Review of urban nonpoint source pollution models].

    Science.gov (United States)

    Wang, Long; Huang, Yue-Fei; Wang, Guang-Qian

    2010-10-01

    The development history of urban nonpoint source pollution models is reviewed. The features, applicability and limitations of seven popular urban nonpoint source pollution models (SWMM, STORM, SLAMM, HSPF, DR3M-QUAL, MOUSE, and HydroWorks) are discussed. The methodology and research findings on uncertainty in urban nonpoint source pollution modeling are presented. Analytical probabilistic models for the estimation of urban nonpoint sources are also presented. The research achievements of urban nonpoint source pollution modeling in China are summarized, and the shortcomings and gaps of current approaches are pointed out. Improvements in the modeling of pollutant buildup and washoff, sediment and pollutant transport, and pollutant biochemical reactions are desired for the seven popular models. Most of the models developed by researchers in China are empirical, so they can only be applied to specific small areas and have limited accuracy. Future approaches include improving capabilities in the fate and transport simulation of sediments and pollutants, exploring methodologies for modeling urban nonpoint source pollution in regions with little data or incomplete information, developing stochastic models for urban nonpoint source pollution simulation, and applying GIS to facilitate urban nonpoint source pollution simulation.

  9. Direct containment heating models in the CONTAIN code

    Energy Technology Data Exchange (ETDEWEB)

    Washington, K.E.; Williams, D.C.

    1995-08-01

    The potential exists in a nuclear reactor core melt severe accident for molten core debris to be dispersed under high pressure into the containment building. If this occurs, the set of phenomena that result in the transfer of energy to the containment atmosphere and its surroundings is referred to as direct containment heating (DCH). Because of the potential for DCH to lead to early containment failure, the U.S. Nuclear Regulatory Commission (USNRC) has sponsored an extensive research program consisting of experimental, analytical, and risk integration components. An important element of the analytical research has been the development and assessment of direct containment heating models in the CONTAIN code. This report documents the DCH models in the CONTAIN code. DCH models in CONTAIN for representing debris transport, trapping, chemical reactions, and heat transfer from debris to the containment atmosphere and surroundings are described. The descriptions include the governing equations and input instructions in CONTAIN unique to performing DCH calculations. Modifications made to the combustion models in CONTAIN for representing the combustion of DCH-produced and pre-existing hydrogen under DCH conditions are also described. Input table options for representing the discharge of debris from the RPV and the entrainment phase of the DCH process are also described. A sample calculation is presented to demonstrate the functionality of the models. The results show that reasonable behavior is obtained when the models are used to predict the sixth Zion geometry integral effects test at 1/10th scale.

  10. Effectiveness Evaluation of Skin Covers against Intravascular Brachytherapy Sources Using VARSKIN3 Code

    Directory of Open Access Journals (Sweden)

    Baghani HR

    2013-12-01

    Background and Objective: The most common intravascular brachytherapy sources include 32P, 188Re, 106Rh and 90Sr/90Y. In this research, the skin absorbed dose for different covering materials used when dealing with these sources was evaluated, and the best covering material for skin protection and reduction of the absorbed dose received by radiation staff was identified and recommended. Method: Four materials, including polyethylene, cotton and two different kinds of plastic, were proposed as skin covers, and the skin absorbed dose at different depths for each material was calculated separately using the VARSKIN3 code. Results: The results suggest that, for all sources, the skin absorbed dose is minimized when using polyethylene. With this material as the skin cover, the maximum and minimum doses at the skin surface were related to 90Sr/90Y and 106Rh, respectively. Conclusion: Polyethylene was found to be the most effective cover for reducing skin dose and protecting the skin. Furthermore, the good agreement between the results of VARSKIN3 and other experimental measurements indicates that VARSKIN3 is a powerful tool for skin dose calculations when working with beta-emitter sources, and it can therefore be utilized for radiation protection purposes.

  11. Modelling of LOCA Tests with the BISON Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, Richard L [Idaho National Laboratory; Pastore, Giovanni [Idaho National Laboratory; Novascone, Stephen Rhead [Idaho National Laboratory; Spencer, Benjamin Whiting [Idaho National Laboratory; Hales, Jason Dean [Idaho National Laboratory

    2016-05-01

    BISON is a modern finite-element based, multidimensional nuclear fuel performance code that is under development at Idaho National Laboratory (USA). Recent advances of BISON include the extension of the code to the analysis of LWR fuel rod behaviour during loss-of-coolant accidents (LOCAs). In this work, BISON models for the phenomena relevant to LWR cladding behaviour during LOCAs are described, followed by presentation of code results for the simulation of LOCA tests. Analysed experiments include separate effects tests of cladding ballooning and burst, as well as the Halden IFA-650.2 fuel rod test. Two-dimensional modelling of the experiments is performed, and calculations are compared to available experimental data. Comparisons include cladding burst pressure and temperature in separate effects tests, as well as the evolution of fuel rod inner pressure during ballooning and time to cladding burst. Furthermore, BISON three-dimensional simulations of separate effects tests are performed, which demonstrate the capability to reproduce the effect of azimuthal temperature variations in the cladding. The work has been carried out in the frame of the collaboration between Idaho National Laboratory and Halden Reactor Project, and the IAEA Coordinated Research Project FUMAC.

  12. kspectrum: an open-source code for high-resolution molecular absorption spectra production

    Science.gov (United States)

    Eymet, V.; Coustet, C.; Piaud, B.

    2016-01-01

    We present kspectrum, a scientific code that produces high-resolution synthetic absorption spectra from public molecular transition parameter databases. This code was originally required by the atmospheric and astrophysics communities, and its evolution is now driven by new scientific projects among the user community. Since it was designed without any optimization specific to any particular application field, its use could also be extended to other domains. kspectrum produces spectral data that can subsequently be used either for high-resolution radiative transfer simulations or for producing statistical spectral model parameters using additional tools. This is an open project that aims at providing an up-to-date tool that takes advantage of modern computational hardware and recent parallelization libraries. It is currently provided by Méso-Star (http://www.meso-star.com) under the CeCILL license, and benefits from regular updates and improvements.
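
    The core of any such line-by-line code is evaluating a broadened line shape, typically a Voigt profile, for every transition on a wavenumber grid. A minimal Python sketch with made-up line parameters (a real run would read them from a public database such as HITRAN):

        import numpy as np
        from scipy.special import wofz

        def voigt(nu, nu0, gamma_l, sigma_g):
            """Voigt line shape: convolution of Lorentzian (pressure) and
            Gaussian (Doppler) broadening, via the Faddeeva function."""
            z = ((nu - nu0) + 1j * gamma_l) / (sigma_g * np.sqrt(2.0))
            return wofz(z).real / (sigma_g * np.sqrt(2.0 * np.pi))

        def absorption_coefficient(nu, lines, n_molec):
            """Sum line-by-line contributions: intensity S times Voigt shape.
            `lines` holds (nu0, S, gamma_l, sigma_g) tuples; the values used
            below are invented for illustration."""
            k = np.zeros_like(nu)
            for nu0, s, gl, sg in lines:
                k += n_molec * s * voigt(nu, nu0, gl, sg)
            return k

        nu = np.linspace(2000.0, 2001.0, 2001)          # wavenumber grid [1/cm]
        lines = [(2000.3, 1e-20, 0.07, 0.02), (2000.7, 5e-21, 0.05, 0.02)]
        print(absorption_coefficient(nu, lines, n_molec=2.5e19).max())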

  13. Benchmarking of computer codes and approaches for modeling exposure scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, R.R. [EG and G Idaho, Inc., Idaho Falls, ID (United States); Rittmann, P.D.; Wood, M.I. [Westinghouse Hanford Co., Richland, WA (United States); Cook, J.R. [Westinghouse Savannah River Co., Aiken, SC (United States)

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.

  14. Dynamic Model on the Transmission of Malicious Codes in Network

    Directory of Open Access Journals (Sweden)

    Bimal Kumar Mishra

    2013-08-01

    This paper introduces a differential-susceptibility e-epidemic model S_iIR (susceptible class 1 for viruses (S1), susceptible class 2 for worms (S2), susceptible class 3 for Trojan horses (S3), infectious (I), recovered (R)) for the transmission of malicious codes in a computer network. We derive the formula for the reproduction number (R0) to study the spread of malicious codes in the network. We show that the infection-free equilibrium is globally asymptotically stable and the endemic equilibrium is locally asymptotically stable when the reproduction number is less than one. An analysis has also been made of the effect of antivirus software on the infectious nodes. Numerical methods are employed to solve and simulate the system of equations developed.
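
    The compartmental structure described above can be written down directly as a system of ODEs. A minimal Python sketch with illustrative rates (not the paper's parameter values):

        import numpy as np
        from scipy.integrate import solve_ivp

        # Illustrative rates: beta_k = infection rates for the three
        # susceptible classes, gamma = recovery (including antivirus cleanup).
        BETA = (0.30, 0.20, 0.10)
        GAMMA = 0.15

        def s3ir(t, y):
            """Differential-susceptibility S1 S2 S3 I R dynamics, written
            as fractions of the node population."""
            s1, s2, s3, i, r = y
            new_inf = sum(b * s * i for b, s in zip(BETA, (s1, s2, s3)))
            return [-BETA[0] * s1 * i, -BETA[1] * s2 * i, -BETA[2] * s3 * i,
                    new_inf - GAMMA * i, GAMMA * i]

        sol = solve_ivp(s3ir, (0, 200), [0.4, 0.3, 0.29, 0.01, 0.0], max_step=1.0)
        print(f"final infectious fraction: {sol.y[3, -1]:.4f}")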

  15. Finite element code development for modeling detonation of HMX composites

    Science.gov (United States)

    Duran, Adam V.; Sundararaghavan, Veera

    2017-01-01

    In this work, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable, efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well-calibrated equations of state for the solid unreacted material and the gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Gruneisen equation of state, calibrated from experiments, was employed for the unreacted HMX. The JWL form was used to model the EOS of the gaseous reaction products. It is assumed that the unreacted explosive and reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy are computed using the rule of mixtures. An Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that yields a non-oscillatory stabilized scheme for the shock front. The FE model was validated using analytical solutions for the Sod shock tube and ZND strong detonation models. Benchmark problems are presented for geometries in which a single HMX crystal is subjected to a shock condition.
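
    The JWL product EOS mentioned above has a standard closed form that is easy to evaluate directly. A minimal Python sketch with generic HMX-like coefficients quoted for illustration, not taken from the paper:

        from math import exp

        def jwl_pressure(v_rel, e_vol, a=854.5, b=20.5, r1=4.6, r2=1.35, omega=0.25):
            """JWL equation of state for gaseous reaction products:
            p = A(1 - w/(R1 v)) exp(-R1 v) + B(1 - w/(R2 v)) exp(-R2 v) + w e / v,
            with v the relative volume and e the internal energy per unit
            initial volume; A, B and e are in GPa here."""
            return (a * (1.0 - omega / (r1 * v_rel)) * exp(-r1 * v_rel)
                    + b * (1.0 - omega / (r2 * v_rel)) * exp(-r2 * v_rel)
                    + omega * e_vol / v_rel)

        print(f"{jwl_pressure(v_rel=1.0, e_vol=10.5):.2f} GPa")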

  16. A Mutation Model from First Principles of the Genetic Code.

    Science.gov (United States)

    Thorvaldsen, Steinar

    2016-01-01

    The paper presents a neutral Codons Probability Mutations (CPM) model of molecular evolution and genetic decay of an organism. The CPM model uses a Markov process with a 20-dimensional state space of probability distributions over amino acids. The transition matrix of the Markov process includes the mutation rate and those single point mutations compatible with the genetic code. This is an alternative to the standard Point Accepted Mutation (PAM) and BLOcks of amino acid SUbstitution Matrix (BLOSUM). Genetic decay is quantified as a similarity between the amino acid distribution of proteins from a (group of) species on one hand, and the equilibrium distribution of the Markov chain on the other. Amino acid data for the eukaryote, bacterium, and archaea families are used to illustrate how both the CPM and PAM models predict their genetic decay towards the equilibrium value of 1. A family of bacteria is studied in more detail. It is found that warm environment organisms on average have a higher degree of genetic decay compared to those species that live in cold environments. The paper addresses a new codon-based approach to quantify genetic decay due to single point mutations compatible with the genetic code. The present work may be seen as a first approach to use codon-based Markov models to study how genetic entropy increases with time in an effectively neutral biological regime. Various extensions of the model are also discussed.

  17. Non-equilibrium Information Envelopes and the Capacity-Delay-Error-Tradeoff of Source Coding

    CERN Document Server

    Lübben, Ralf

    2011-01-01

    This paper develops an envelope-based approach to establish a link between information and queueing theory. Unlike classical, equilibrium information theory, information envelopes focus on the dynamics of sources and coders, using functions of time that bound the number of bits generated. In the limit the information envelopes converge to the average behavior and recover the entropy of a source and the average codeword length of a coder, respectively. In contrast, on short time scales and for sources with memory, it is shown that large deviations from known equilibrium results occur with non-negligible probability. These can cause significant network delays. Compared to well-known traffic models from queueing theory, information envelopes consider the functioning of information sources and coders, avoiding a priori assumptions, such as exponential traffic, or empirical, trace-based traffic models. Using results from the stochastic network calculus, the envelopes yield a characterization of the operating points of...

  18. Noise Feedback Coding Revisited: Refurbished Legacy Codecs and New Coding Models

    Institute of Scientific and Technical Information of China (English)

    Stephane Ragot; Balazs Kovesi; Alain Le Guyader

    2012-01-01

    Noise feedback coding (NFC) has attracted renewed interest with the recent standardization of backward-compatible enhancements for ITU-T G.711 and G.722. It has also been revisited with the emergence of proprietary speech codecs, such as BV16, BV32, and SILK, that have structures different from CELP coding. In this article, we review NFC and describe a novel coding technique that optimally shapes coding noise in embedded pulse-code modulation (PCM) and embedded adaptive differential PCM (ADPCM). We describe how this new technique was incorporated into the recent ITU-T G.711.1, G.711 App. III, and G.722 Annex B (G.722B) speech-coding standards.

  19. MMA, A Computer Code for Multi-Model Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Eileen P. Poeter and Mary C. Hill

    2007-08-20

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that, as more data become available, they tend to favor more complicated models more readily than the other methods do, which makes sense in many situations.
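
    The sketch below illustrates the model-discrimination arithmetic the report describes: AIC, AICc, and BIC computed from a least-squares calibration, and any one criterion converted into posterior model probabilities. It mirrors the default criteria MMA offers but is not the MMA source code.

        import numpy as np

        def information_criteria(ss_weighted_residuals, n_obs, k_params):
            # Least-squares surrogate for -2 ln(max likelihood), up to a constant.
            neg2ll = n_obs * np.log(ss_weighted_residuals / n_obs)
            aic = neg2ll + 2 * k_params
            aicc = aic + 2 * k_params * (k_params + 1) / (n_obs - k_params - 1)
            bic = neg2ll + k_params * np.log(n_obs)
            return aic, aicc, bic

        def model_weights(criterion_values):
            # Posterior model probabilities (Akaike-type weights) from a criterion.
            c = np.asarray(criterion_values, dtype=float)
            delta = c - c.min()
            w = np.exp(-0.5 * delta)
            return w / w.sum()

        # Three calibrated alternative models of one system (invented numbers).
        aics = [information_criteria(s, n_obs=50, k_params=k)[0]
                for s, k in [(12.3, 4), (11.8, 6), (12.1, 5)]]
        print(model_weights(aics))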

  20. Modeling of transient dust events in fusion edge plasmas with DUSTT-UEDGE code

    Science.gov (United States)

    Smirnov, R. D.; Krasheninnikov, S. I.; Pigarov, A. Yu.; Rognlien, T. D.

    2016-10-01

    It is well known that dust can be produced in fusion devices by various processes involving structural damage of plasma-exposed materials. Recent computational and experimental studies have demonstrated that dust production and the associated plasma contamination can present serious challenges to achieving sustained fusion reactions in future fusion devices, such as ITER. To analyze the impact that dust can have on the performance of fusion plasmas, the authors use modeling of coupled dust and plasma transport with the DUSTT-UEDGE code. In the past, only steady-state computational studies, presuming a continuous source of dust influx, were performed, owing to the iterative nature of the DUSTT-UEDGE code coupling. However, experimental observations demonstrate that intermittent injection of large quantities of dust, often associated with transient plasma events, may severely impact fusion plasma conditions and even lead to discharge termination. In this work we report on progress in coupling the DUSTT-UEDGE codes in a time-dependent regime, which allows modeling of transient dust-plasma transport processes. The methodology and details of the time-dependent code coupling, as well as examples of simulations of transient dust-plasma transport phenomena, will be presented. These include time-dependent modeling of the impact of short bursts of different quantities of tungsten dust in the ITER divertor on the edge plasma parameters. The plasma response to bursts of various durations, locations, and ejected dust sizes will be analyzed.

  1. MMA, A Computer Code for Multi-Model Analysis

    Science.gov (United States)

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that, as more data become available, they tend to favor more complicated models more readily than the other methods do, which makes sense in many situations. Many applications of MMA will

  2. Preliminary Modeling of Air Breakdown with the ICEPIC code

    CERN Document Server

    Schulz, A E; Cartwright, K L; Mardahl, P J; Peterkin, R E; Bruner, N; Genoni, T; Hughes, T P; Welch, D

    2004-01-01

    Interest in air breakdown phenomena has recently been rekindled with the advent of advanced virtual prototyping of radio frequency (RF) sources for use in high power microwave (HPM) weapons technology. Air breakdown phenomena are of interest because the formation of a plasma layer at the aperture of an RF source decreases the power transmitted to the target, and in some cases can cause significant reflection of RF radiation. Understanding the mechanisms behind the formation of such plasma layers will aid in the development of maximally effective sources. This paper begins with some of the basic theory behind air breakdown, and describes two independent approaches to modeling the formation of plasmas: the dielectric fluid model and the Particle in Cell (PIC) approach. Finally, we present the results of preliminary studies in numerical modeling and simulation of breakdown.

  3. Living Up to the Code's Exhortations? Social Workers' Political Knowledge Sources, Expectations, and Behaviors.

    Science.gov (United States)

    Felderhoff, Brandi Jean; Hoefer, Richard; Watson, Larry Dan

    2016-01-01

    The National Association of Social Workers' (NASW's) Code of Ethics urges social workers to engage in political action. However, little recent research has been conducted to examine whether social workers support this admonition and the extent to which they actually engage in politics. The authors gathered data from a survey of social workers in Austin, Texas, to address three questions. First, because keeping informed about government and political news is an important basis for action, the authors asked what sources of knowledge social workers use. Second, they asked what the respondents believe are appropriate political behaviors for other social workers and NASW. Third, they asked for self-reports regarding respondents' own political behaviors. Results indicate that social workers use the Internet and traditional media services to stay informed; expect other social workers and NASW to be active; and are, overall, more active than the general public in many types of political activities. Comparisons between respondents' expectations for others and their own behaviors yield complex results. Social workers should strive for higher levels of adherence to the code's urgings on political activity. Implications for future work are discussed.

  4. REBOUND: An open-source multi-purpose N-body code for collisional dynamics

    CERN Document Server

    Rein, Hanno

    2011-01-01

    REBOUND is a new multi-purpose N-body code which is freely available under an open-source license. It was designed for collisional dynamics such as planetary rings but can also solve the classical N-body problem. It is highly modular and can be customized easily to work on a wide variety of different problems in astrophysics and beyond. REBOUND comes with three symplectic integrators: leap-frog, the symplectic epicycle integrator (SEI) and a Wisdom-Holman mapping (WH). It supports open, periodic and shearing-sheet boundary conditions. REBOUND can use a Barnes-Hut tree to calculate both self-gravity and collisions. These modules are fully parallelized with MPI as well as OpenMP. The former makes use of a static domain decomposition and a distributed essential tree. Two new collision detection modules based on a plane-sweep algorithm are also implemented. The performance of the plane-sweep algorithm is superior to a tree code for simulations in which one dimension is much longer than the other two and in simula...
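
    For orientation, a minimal usage sketch of REBOUND through its Python bindings (which postdate this 2011 paper; install with pip install rebound); the integrator names and calls follow the library's public API.

        import rebound

        sim = rebound.Simulation()
        sim.integrator = "whfast"       # Wisdom-Holman; "leapfrog" and "sei" also exist
        sim.add(m=1.0)                  # central star, code units with G = 1
        sim.add(m=1e-3, a=1.0, e=0.05)  # Jupiter-mass planet
        sim.add(m=3e-4, a=1.6, e=0.02)
        sim.move_to_com()               # work in the centre-of-mass frame
        sim.integrate(100.0)            # advance the system to t = 100
        for p in sim.particles[1:]:
            print(p.x, p.y, p.z)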

  5. EMPIRE: Nuclear Reaction Model Code System for Data Evaluation

    Science.gov (United States)

    Herman, M.; Capote, R.; Carlson, B. V.; Obložinský, P.; Sin, M.; Trkov, A.; Wienke, H.; Zerkin, V.

    2007-12-01

    EMPIRE is a modular system of nuclear reaction codes, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be a neutron, proton, any ion (including heavy-ions) or a photon. The energy range extends from the beginning of the unresolved resonance region for neutron-induced reactions (∼ keV) and goes up to several hundred MeV for heavy-ion induced reactions. The code accounts for the major nuclear reaction mechanisms, including direct, pre-equilibrium and compound nucleus ones. Direct reactions are described by a generalized optical model (ECIS03) or by the simplified coupled-channels approach (CCFUS). The pre-equilibrium mechanism can be treated by a deformation dependent multi-step direct (ORION + TRISTAN) model, by a NVWY multi-step compound one or by either a pre-equilibrium exciton model with cluster emission (PCROSS) or by another with full angular momentum coupling (DEGAS). Finally, the compound nucleus decay is described by the full featured Hauser-Feshbach model with γ-cascade and width-fluctuations. Advanced treatment of the fission channel takes into account transmission through a multiple-humped fission barrier with absorption in the wells. The fission probability is derived in the WKB approximation within the optical model of fission. Several options for nuclear level densities include the EMPIRE-specific approach, which accounts for the effects of the dynamic deformation of a fast rotating nucleus, the classical Gilbert-Cameron approach and pre-calculated tables obtained with a microscopic model based on HFB single-particle level schemes with collective enhancement. A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers, moments of inertia and γ-ray strength functions. The results can be converted into ENDF-6 formatted files using the

  6. Complexity modeling for context-based adaptive binary arithmetic coding (CABAC) in H.264/AVC decoder

    Science.gov (United States)

    Lee, Szu-Wei; Kuo, C.-C. Jay

    2007-09-01

    One way to save power in the H.264 decoder is for the H.264 encoder to generate decoder-friendly bit streams. Following this idea, a decoding complexity model of context-based adaptive binary arithmetic coding (CABAC) for H.264/AVC is investigated in this research. Since different coding modes will have an impact on the number of quantized transformed coefficients (QTCs) and motion vectors (MVs) and, consequently, the complexity of entropy decoding, an encoder with a complexity model can estimate the complexity of entropy decoding and choose the coding mode that yields the best tradeoff between rate, distortion, and decoding complexity performance. The complexity model consists of two parts: one for source data (i.e., QTCs) and the other for header data (i.e., the macro-block (MB) type and MVs). Thus, the proposed CABAC decoding complexity model of an MB is a function of QTCs and associated MVs, which is verified experimentally. The proposed CABAC decoding complexity model can provide good estimation results for a variety of bit streams. Practical applications of this complexity model will also be discussed.
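
    A hedged sketch of the idea: macroblock (MB) decoding complexity modeled as a linear function of the QTC and MV counts, folded into a rate-distortion-complexity cost for mode decision. The weights and candidate-mode numbers are placeholders, not the paper's fitted model.

        # Hypothetical linear complexity model for CABAC decoding of one MB.
        W_QTC, W_MV, W_FIXED = 1.0, 4.0, 25.0   # placeholder weights

        def mb_decoding_complexity(n_qtc, n_mv):
            return W_QTC * n_qtc + W_MV * n_mv + W_FIXED  # abstract cost units

        def rdc_cost(distortion, rate_bits, n_qtc, n_mv, lam=0.8, gamma=0.05):
            # Joint cost a decoder-friendly encoder could minimize per MB.
            return distortion + lam * rate_bits + gamma * mb_decoding_complexity(n_qtc, n_mv)

        # Mode decision: pick the candidate with the smallest joint cost.
        modes = {"intra16x16": (120.0, 310, 96, 0), "inter16x16": (135.0, 180, 40, 1)}
        print(min(modes, key=lambda m: rdc_cost(*modes[m])))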

  7. Acoustic Gravity Wave Chemistry Model for the RAYTRACE Code.

    Science.gov (United States)

    2014-09-26

    [Scanned DTIC record; the OCR abstract is largely illegible. Recoverable metadata: Mission Research Corp., Santa Barbara, CA; report DNA-TR-84-127; subject terms: high frequency radio propagation, acoustic gravity waves.]

  8. Cross-band noise model refinement for transform domain Wyner–Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2012-01-01

    Distributed Video Coding (DVC) is a new video coding paradigm, which mainly exploits the source statistics at the decoder based on the availability of decoder side information. One approach to DVC is feedback channel based Transform Domain Wyner–Ziv (TDWZ) video coding. The efficiency of current ...

  9. SIMULATE-4 multigroup nodal code with microscopic depletion model

    Energy Technology Data Exchange (ETDEWEB)

    Bahadir, T. [Studsvik Scandpower, Inc., Newton, MA (United States); Lindahl, St.O. [Studsvik Scandpower AB, Vasteras (Sweden); Palmtag, S.P. [Studsvik Scandpower, Inc., Idaho Falls, ID (United States)

    2005-07-01

    SIMULATE-4 is a three-dimensional multigroup analytical nodal code with microscopic depletion capability. It has been developed employing 'first-principles models', thus avoiding ad hoc approximations. The multigroup diffusion equations or, optionally, the simplified P3 equations are solved. Cross sections are described by a hybrid microscopic-macroscopic model that includes approximately 50 heavy nuclides and fission products. Heterogeneities in the axial direction of an assembly are treated systematically. Radially, the assembly is divided into heterogeneous sub-meshes, thereby overcoming the shortcomings of spatially-averaged assembly cross sections and discontinuity factors generated with zero net-current boundary conditions. Numerical tests against higher-order transport methods and critical experiments show substantial improvements compared to results of existing nodal models. (authors)

  10. Bayesian Regularization in a Neural Network Model to Estimate Lines of Code Using Function Points

    Directory of Open Access Journals (Sweden)

    K. K. Aggarwal

    2005-01-01

    It is a well-known fact that at the beginning of any project the software industry needs to know how much the software will cost to develop and how much time it will require. This paper examines the potential of using a neural network model for estimating the lines of code once the functional requirements are known. Using the International Software Benchmarking Standards Group (ISBSG) Repository Data (release 9) for the experiment, this paper examines the performance of a back-propagation feed-forward neural network in estimating the Source Lines of Code. Multiple training algorithms are used in the experiments. Results demonstrate that the neural network models trained using Bayesian Regularization provide the best results and are suitable for this purpose.
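
    A compact stand-in for the workflow described above, using scikit-learn's BayesianRidge as a regularized linear analogue of a Bayesian-regularized back-propagation network (the paper's networks were trained differently); the function-point/SLOC pairs are invented for illustration.

        import numpy as np
        from sklearn.linear_model import BayesianRidge

        # Toy ISBSG-style records: function points -> source lines of code.
        fp = np.array([[120], [340], [85], [510], [260], [150], [430]], dtype=float)
        sloc = np.array([9200, 24100, 6800, 37500, 19800, 11400, 31000], dtype=float)

        model = BayesianRidge().fit(fp, sloc)
        print(model.predict(np.array([[300.0]])))  # estimated SLOC for 300 FP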

  11. C code generation applied to nonlinear model predictive control for an artificial pancreas

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Jørgensen, John Bagterp

    2017-01-01

    This paper presents a method to generate C code from MATLAB code applied to a nonlinear model predictive control (NMPC) algorithm. The C code generation uses the MATLAB Coder Toolbox. It can drastically reduce the time required for development compared to a manual porting of code from MATLAB to C...

  12. HELIOS: An Open-source, GPU-accelerated Radiative Transfer Code for Self-consistent Exoplanetary Atmospheres

    Science.gov (United States)

    Malik, Matej; Grosheintz, Luc; Mendonça, João M.; Grimm, Simon L.; Lavie, Baptiste; Kitzmann, Daniel; Tsai, Shang-Min; Burrows, Adam; Kreidberg, Laura; Bedell, Megan; Bean, Jacob L.; Stevenson, Kevin B.; Heng, Kevin

    2017-02-01

    We present the open-source radiative transfer code named HELIOS, which is constructed for studying exoplanetary atmospheres. In its initial version, the model atmospheres of HELIOS are one-dimensional and plane-parallel, and the equation of radiative transfer is solved in the two-stream approximation with nonisotropic scattering. A small set of the main infrared absorbers is employed, computed with the opacity calculator HELIOS-K and combined using a correlated-k approximation. The molecular abundances originate from validated analytical formulae for equilibrium chemistry. We compare HELIOS with the work of Miller-Ricci & Fortney using a model of GJ 1214b, and perform several tests, where we find: model atmospheres with single-temperature layers struggle to converge to radiative equilibrium; k-distribution tables constructed with ≳ 0.01 cm⁻¹ resolution in the opacity function (≲ 10³ points per wavenumber bin) may result in errors ≳ 1%-10% in the synthetic spectra; and a diffusivity factor of 2 approximates well the exact radiative transfer solution in the limit of pure absorption. We construct “null-hypothesis” models (chemical equilibrium, radiative equilibrium, and solar elemental abundances) for six hot Jupiters. We find that the dayside emission spectra of HD 189733b and WASP-43b are consistent with the null hypothesis, while the null-hypothesis models consistently underpredict the observed fluxes of WASP-8b, WASP-12b, WASP-14b, and WASP-33b. We demonstrate that our results are somewhat insensitive to the choice of stellar models (blackbody, Kurucz, or PHOENIX) and metallicity, but are strongly affected by higher carbon-to-oxygen ratios. The code is publicly available as part of the Exoclimes Simulation Platform (exoclime.net).

  13. Kinetic models of gene expression including non-coding RNAs

    Science.gov (United States)

    Zhdanov, Vladimir P.

    2011-03-01

    In cells, genes are transcribed into mRNAs, and the latter are translated into proteins. Due to the feedbacks between these processes, the kinetics of gene expression may be complex even in the simplest genetic networks. The corresponding models have already been reviewed in the literature. A new avenue in this field is related to the recognition that the conventional scenario of gene expression is fully applicable only to prokaryotes whose genomes consist of tightly packed protein-coding sequences. In eukaryotic cells, in contrast, such sequences are relatively rare, and the rest of the genome includes numerous transcript units representing non-coding RNAs (ncRNAs). During the past decade, it has become clear that such RNAs play a crucial role in gene expression and accordingly influence a multitude of cellular processes both in the normal state and during diseases. The numerous biological functions of ncRNAs are based primarily on their abilities to silence genes via pairing with a target mRNA and subsequently preventing its translation or facilitating degradation of the mRNA-ncRNA complex. Many other abilities of ncRNAs have been discovered as well. Our review is focused on the available kinetic models describing the mRNA, ncRNA and protein interplay. In particular, we systematically present the simplest models without kinetic feedbacks, models containing feedbacks and predicting bistability and oscillations in simple genetic networks, and models describing the effect of ncRNAs on complex genetic networks. Mathematically, the presentation is based primarily on temporal mean-field kinetic equations. The stochastic and spatio-temporal effects are also briefly discussed.
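
    A minimal example of the temporal mean-field kinetics the review surveys: an ncRNA s pairs with its target mRNA m, the duplex degrades, and the protein output p is suppressed. The rate constants are illustrative and not taken from any specific model in the review.

        from scipy.integrate import solve_ivp

        k_m, k_s, k_p = 1.0, 2.0, 5.0   # synthesis rates (illustrative)
        g_m, g_s, g_p = 0.1, 0.2, 0.05  # first-order degradation rates
        k_int = 0.5                     # mRNA-ncRNA pairing/degradation rate

        def rhs(t, y):
            m, s, p = y
            return [k_m - g_m * m - k_int * m * s,   # mRNA
                    k_s - g_s * s - k_int * m * s,   # ncRNA
                    k_p * m - g_p * p]               # protein

        sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0, 0.0])
        print(sol.y[:, -1])  # near-steady-state mRNA, ncRNA, protein levels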

  14. A simple model of optimal population coding for sensory systems.

    Science.gov (United States)

    Doi, Eizaburo; Lewicki, Michael S

    2014-08-01

    A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally-optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery.

  15. A New Open-Source Code for Spherically-Symmetric Stellar Collapse to Neutron Stars and Black Holes

    CERN Document Server

    O'Connor, Evan

    2009-01-01

    We present the new open-source spherically-symmetric general-relativistic (GR) hydrodynamics code GR1D. It is based on the Eulerian formulation of GR hydrodynamics (GRHD) put forth by Romero-Ibanez-Gourgoulhon and employs radial-gauge, polar-slicing coordinates in which the 3+1 equations simplify substantially. We discretize the GRHD equations with a finite-volume scheme, employing piecewise-parabolic reconstruction and an approximate Riemann solver. GR1D is intended for the simulation of stellar collapse to neutron stars and black holes and will also serve as a testbed for modeling technology to be incorporated in multi-D GR codes. Its GRHD part is coupled to various finite-temperature microphysical equations of state in tabulated form that we make available with GR1D. An approximate deleptonization scheme for the collapse phase and a neutrino-leakage/heating scheme for the postbounce epoch are included and described. We also derive the equations for effective rotation in 1D and implement them in GR1D. We pr...

  16. Development of condensation modeling and simulation code for IRWST

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang Nyung; Jang, Wan Ho; Ko, Jong Hyun; Ha, Jong Baek; Yang, Chang Keun; Son, Myung Seong [Kyung Hee Univ., Seoul (Korea)

    1997-07-01

    One of the design improvements of the KNGR (Korean Next Generation Reactor), which is advanced in safety and economy, is the adoption of the IRWST (In-Containment Refueling Water Storage Tank). The IRWST, installed inside the containment building, serves more design purposes than a mere relocation of the tank. Since the design functions of the IRWST are similar to those of the BWR suppression pool, theoretical models applicable to the BWR suppression pool can mostly be applied to the IRWST. For a PWR, however, the geometry of the sparger, the operation mode, and the quantity, temperature, and pressure of the steam discharged from the primary system to the IRWST through the PSV or SDV may differ from those of a BWR. Also, there are some defects in detailed parts of the condensation model. Therefore we, as the first nation to construct a PWR with an IRWST, must carry out thorough research on these problems so that the results can be utilized and localized as an exclusive technology. All kinds of thermal-hydraulic phenomena were investigated, and the existing condensation models by Hideki Nariai and Izuo Aya were analyzed. Also, through a rigorous review of the literature, such as operational experience, experimental data, and KNGR design documents, items which need modification and supplementation were derived. An analytical model for the chugging phenomenon is also presented. 15 refs., 18 figs., 4 tabs. (Author)

  17. Photovoltaic sources modeling and emulation

    CERN Document Server

    Piazza, Maria Carmela Di

    2012-01-01

    This book offers an extensive introduction to the modeling of photovoltaic generators and their emulation by means of power electronic converters, which will aid in understanding and improving the design and setup of new PV plants.

  18. OSeMOSYS: The Open Source Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    Howells, Mark, E-mail: mark.i.howells@gmail.com [Royal Institute of Technology (KTH) (Sweden); Rogner, Holger [Planning and Economic Studies Section, International Atomic Energy Agency (Austria); Strachan, Neil [Energy Institute, University College London (United Kingdom); Heaps, Charles [Stockholm Environmental Institute (SEI) (United States); Huntington, Hillard [Stanford University (United States); Kypreos, Socrates [Paul Scherrer Institute (Switzerland); Hughes, Alison [Energy Research Centre, University of Cape Town (South Africa); Silveira, Semida [Royal Institute of Technology (KTH) (Sweden); DeCarolis, Joe [North Carolina State University (United States); Bazillian, Morgan [United Nations Industrial Development Organization (UNIDO) (Austria); Roehrl, Alexander [United Nations Department of Economic and Social Affairs (UNDESA) (United States)

    2011-10-15

    This paper discusses the design and development of the Open Source Energy Modeling System (OSeMOSYS). It describes the model's formulation in terms of a 'plain English' description, its algebraic formulation, its implementation in terms of its full source code, as well as a detailed description of the model inputs, parameters, and outputs. A key feature of the OSeMOSYS implementation is that it is contained in less than five pages of documented, easily accessible code. The lack of such compactness and openness in other existing energy system models makes the barrier to entry for new users much higher, as well as making the addition of innovative new functionality very difficult. The paper begins by describing the rationale for the development of OSeMOSYS and its structure. The current preliminary implementation of the model is then demonstrated for a discrete example. Next, we explain how new development efforts will build on the existing OSeMOSYS codebase. The paper closes with thoughts regarding the organization of the OSeMOSYS community, associated capacity development efforts, and linkages to other open source efforts, including adding functionality to the LEAP model. - Highlights: OSeMOSYS is a new free and open source energy systems model. The model is written in a simple, open, flexible and transparent manner to support teaching. OSeMOSYS is based on free software and optimizes using a free solver. The model replicates the results of many popular tools, such as MARKAL. A link between OSeMOSYS and LEAP has been developed.
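
    To convey the flavor of the formulation (the real OSeMOSYS is written in GNU MathProg and is far richer), here is a toy one-period least-cost capacity-expansion LP with made-up technologies, costs, and demand.

        from scipy.optimize import linprog

        cost = [70.0, 55.0]     # annualized cost per MW: [gas, wind] (invented)
        avail = [0.9, 0.35]     # availability / capacity factors
        demand = 1000.0         # peak demand to be met, MW

        # Meet demand: 0.9*gas + 0.35*wind >= 1000, written as <= for linprog.
        res = linprog(c=cost, A_ub=[[-avail[0], -avail[1]]], b_ub=[-demand],
                      bounds=[(0, None), (0, None)])
        print(res.x, res.fun)   # least-cost capacity mix and its total cost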

  19. A proposed metamodel for the implementation of object oriented software through the automatic generation of source code

    Directory of Open Access Journals (Sweden)

    CARVALHO, J. S. C.

    2008-12-01

    During the development of software, one of the most visible risks and perhaps the biggest implementation obstacle relates to time management. All delivery deadlines for software versions must be met, but this is not always possible, sometimes due to delays in coding. This paper presents a metamodel for software implementation, which will give rise to a development tool for automatic generation of source code, in order to make any development pattern transparent to the programmer, significantly reducing the time spent coding the artifacts that make up the software.

  20. Open Source Software Reliability Growth Model by Considering Change- Point

    Directory of Open Access Journals (Sweden)

    Mashaallah Basirzadeh

    2012-01-01

    Software reliability growth modeling is reaching maturity. Software reliability growth models have been used extensively for closed source software, but the design and development of open source software (OSS) differs from that of closed source software. We observed some basic characteristics of open source software: (i) more instruction executions and code coverage take place over time; (ii) release early, release often; (iii) frequent addition of patches; (iv) heterogeneity in fault density and effort expenditure; (v) frequent release activities that seem to have changed the bug dynamics significantly; (vi) bug reports on the bug-tracking system that drastically increase and decrease. For these reasons, the bug count reported on the bug-tracking system exhibits an irregular state and fluctuations, so the fault detection/removal process cannot be smooth and may change at some time point, called the change-point. In this paper, an instruction-execution-dependent software reliability growth model has been developed that considers a change-point, in order to cater to diverse and huge user profiles, the irregular state of the bug-tracking system, and heterogeneity in fault distribution. We have analyzed actual software failure count data to show numerical examples of software reliability assessment for OSS. We also compare our model with a conventional one in terms of goodness-of-fit for actual data. We show that the proposed model can assist in improving the quality of OSS systems developed under the open source model.
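
    A simplified sketch of a change-point reliability-growth fit: an exponential (Goel-Okumoto-type) mean value function whose fault-detection rate switches at a change-point tau, fitted to cumulative bug counts. The functional form and all numbers are illustrative; the paper's model is instruction-execution based.

        import numpy as np
        from scipy.optimize import curve_fit

        def mean_failures(t, a, b1, b2, tau=40.0):
            # Expected cumulative faults; detection rate b1 before tau, b2 after.
            t = np.asarray(t, dtype=float)
            before = a * (1.0 - np.exp(-b1 * t))
            after = a * (1.0 - np.exp(-b1 * tau - b2 * (t - tau)))
            return np.where(t <= tau, before, after)

        # Synthetic cumulative bug counts from a bug-tracking system.
        t_obs = np.arange(1, 81, dtype=float)
        y_obs = mean_failures(t_obs, 500.0, 0.02, 0.05) \
                + np.random.default_rng(1).normal(0.0, 5.0, t_obs.size)

        popt, _ = curve_fit(mean_failures, t_obs, y_obs, p0=[400.0, 0.01, 0.01])
        print(popt)  # estimated total faults and pre/post change-point rates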

  1. Numerical modeling of the SNS H- ion source

    Science.gov (United States)

    Veitzer, Seth A.; Beckwith, Kristian R. C.; Kundrapu, Madhusudhan; Stoltz, Peter H.

    2015-04-01

    here on comparisons of simulated plasma parameters and code performance using more accurate physical models, such as two-temperature extended MHD models, both for a related benchmark system describing an inductively coupled plasma reactor and for the SNS ion source. We also present results from scaling studies for mesh generation and solvers in the USim simulation code.

  2. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-07-01

    capability of performing iterated-source (criticality), multiplied-fixed-source, and fixed-source calculations. MCV uses a highly detailed continuous-energy (as opposed to multigroup) representation of neutron histories and cross section data. The spatial modeling is fully three-dimensional (3-D), and any geometrical region that can be described by quadric surfaces may be represented. The primary results are region-wise reaction rates, neutron production rates, slowing-down-densities, fluxes, leakages, and when appropriate the eigenvalue or multiplication factor. Region-wise nuclidic reaction rates are also computed, which may then be used by other modules in the system to determine time-dependent nuclide inventories so that RACER can perform depletion calculations. Furthermore, derived quantities such as ratios and sums of primary quantities and/or other derived quantities may also be calculated. MCV performs statistical analyses on output quantities, computing estimates of the 95% confidence intervals as well as indicators as to the reliability of these estimates. The remainder of this chapter provides an overview of the MCV algorithm. The following three chapters describe the MCV mathematical, physical, and statistical treatments in more detail. Specifically, Chapter 2 discusses topics related to tracking the histories including: geometry modeling, how histories are moved through the geometry, and variance reduction techniques related to the tracking process. Chapter 3 describes the nuclear data and physical models employed by MCV. Chapter 4 discusses the tallies, statistical analyses, and edits. Chapter 5 provides some guidance as to how to run the code, and Chapter 6 is a list of the code input options.

  3. TASS/SMR Code Topical Report for SMART Plant, Vol. I: Code Structure, System Models, and Solution Methods

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Young Jong; Kim, Soo Hyoung; Kim, See Darl (and others)

    2008-10-15

    The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, which is an integral-type pressurized water reactor. It can be applied to the analysis of design basis accidents of the SMART plant, including both LOCA (loss of coolant accident) and non-LOCA events. The TASS/SMR code can be applied to any plant regardless of the structural characteristics of the reactor, since the code solves the same governing equations for both the primary and secondary systems. The code has been developed to meet the requirements of a safety analysis code. This report describes the overall structure of TASS/SMR, input processing, and the processes of steady-state and transient calculations. In addition, the basic differential equations, finite difference equations, state relationships, and constitutive models are described in the report. First, the conservation equations, the discretization process for numerical analysis, and the search method for state relationships are described. Then, the core power model, heat transfer models, physical models for various components, and control and trip models are explained.

  4. Mathematical modeling of wiped-film evaporators. [MAIN codes

    Energy Technology Data Exchange (ETDEWEB)

    Sommerfeld, J.T.

    1976-05-01

    A mathematical model and associated computer program were developed to simulate the steady-state operation of wiped-film evaporators for the concentration of typical waste solutions produced at the Savannah River Plant. In this model, which treats either a horizontal or a vertical wiped-film evaporator as a plug-flow device with no backmixing, three fundamental phenomena are described: sensible heating of the waste solution, vaporization of water, and crystallization of solids from solution. Physical property data were coded into the computer program, which performs the calculations of this model. Physical properties of typical waste solutions and of the heating steam, generally as analytical functions of temperature, were obtained from published data or derived by regression analysis of tabulated or graphical data. Preliminary results from tests of the Savannah River Laboratory semiworks wiped-film evaporators were used to select a correlation for the inside film heat transfer coefficient. This model should be a useful aid in the specification, operation, and control of the full-scale wiped-film evaporators proposed for application under plant conditions. In particular, it should be of value in the development and analysis of feed-forward control schemes for the plant units. Also, this model can be readily adapted, with only minor changes, to simulate the operation of wiped-film evaporators for other conceivable applications, such as the concentration of acid wastes.

  5. CODE's new solar radiation pressure model for GNSS orbit determination

    Science.gov (United States)

    Arnold, D.; Meindl, M.; Beutler, G.; Dach, R.; Schaer, S.; Lutz, S.; Prange, L.; Sośnica, K.; Mervart, L.; Jäggi, A.

    2015-08-01

    The Empirical CODE Orbit Model (ECOM) of the Center for Orbit Determination in Europe (CODE), which was developed in the early 1990s, is widely used in the International GNSS Service (IGS) community. For a rather long time, spurious spectral lines have been known to exist in geophysical parameters, in particular in the Earth Rotation Parameters (ERPs) and in the estimated geocenter coordinates, which could recently be attributed to the ECOM. These effects grew gradually with the increasing influence of the GLONASS system in recent years in the CODE analysis, which has been based on a rigorous combination of GPS and GLONASS since May 2003. In a first step we show that the problems associated with the ECOM are to the largest extent caused by GLONASS, which was reaching full deployment by the end of 2011. GPS-only, GLONASS-only, and combined GPS/GLONASS solutions using the observations in the years 2009-2011 of a global network of 92 combined GPS/GLONASS receivers were analyzed for this purpose. In a second step we review direct solar radiation pressure (SRP) models for GNSS satellites. We demonstrate that, for GPS and GLONASS satellites, only even-order short-period harmonic perturbations occur along the direction Sun-satellite, and only odd-order perturbations along the direction perpendicular to both the vector Sun-satellite and the spacecraft's solar panel axis. Based on this insight we assess in the third step the performance of four candidate orbit models for the future ECOM. The geocenter coordinates, the ERP differences w.r.t. the IERS 08 C04 series of ERPs, the misclosures for the midnight epochs of the daily orbital arcs, and scale parameters of Helmert transformations for station coordinates serve as quality criteria. The old and updated ECOM are validated in addition with satellite laser ranging (SLR) observations and by comparing the orbits to those of the IGS and other analysis centers. Based on all tests, we present a new extended ECOM which
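
    The structure of such candidate models can be sketched as an empirical acceleration in the D (Sun-satellite), Y (solar-panel axis), B frame, with only even-order harmonics in D and odd-order ones in B, following the abstract's insight; the coefficients below are placeholders that the orbit determination would estimate, not values from the paper.

        import math

        def ecom_like_accel(du, D0, D2c, D4c, Y0, B0, B1c, B1s):
            # du: argument of latitude of the satellite relative to the Sun.
            aD = D0 + D2c * math.cos(2 * du) + D4c * math.cos(4 * du)  # even orders
            aY = Y0                                                    # constant bias
            aB = B0 + B1c * math.cos(du) + B1s * math.sin(du)          # constant + odd order
            return aD, aY, aB

        print(ecom_like_accel(0.3, -9e-8, 1e-9, 5e-10, 1e-10, 2e-10, 1e-9, 8e-10))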

  6. HELIOS: An Open-Source, GPU-Accelerated Radiative Transfer Code For Self-Consistent Exoplanetary Atmospheres

    CERN Document Server

    Malik, Matej; Mendonça, João M; Grimm, Simon L; Lavie, Baptiste; Kitzmann, Daniel; Tsai, Shang-Min; Burrows, Adam; Kreidberg, Laura; Bedell, Megan; Bean, Jacob L; Stevenson, Kevin B; Heng, Kevin

    2016-01-01

    We present the open-source radiative transfer code named HELIOS, which is constructed for studying exoplanetary atmospheres. In its initial version, the model atmospheres of HELIOS are one-dimensional and plane-parallel, and the equation of radiative transfer is solved in the two-stream approximation with non-isotropic scattering. The opacities are computed with the opacity calculator HELIOS-K and converted to k-distribution tables by weighting the molecular abundances with analytical chemistry formulae. We validate HELIOS by comparing a model of GJ 1214b to that computed using COOLTLUSTY and from the work of Miller-Ricci & Fortney, and by performing several tests, where we find: model atmospheres with single-temperature layers struggle to converge to radiative equilibrium; k-distribution tables constructed with ≳ 0.01 cm⁻¹ resolution in the opacity function (≲ 10³ points per wavenumber bin) may result in errors ≳ 1%-10% in the synthetic spectra; and a diffusivity factor of 2 approximates well the exact radiative transfer solution in the limit of pure absorption. We construct "null-hypothesis" models (chemic...

  7. The Commercial Open Source Business Model

    Science.gov (United States)

    Riehle, Dirk

    Commercial open source software projects are open source software projects that are owned by a single firm that derives a direct and significant revenue stream from the software. Commercial open source at first glance represents an economic paradox: how can a firm earn money if it is making its product available for free as open source? This paper presents the core properties of commercial open source business models and discusses how they work. Using a commercial open source approach, firms can get to market faster with a superior product at lower cost than is possible for traditional competitors. The paper shows how these benefits accrue from an engaged and self-supporting user community. Lacking any prior comprehensive reference, this paper is based on an analysis of public statements by practitioners of commercial open source. It forges the various anecdotes into a coherent description of revenue generation strategies and relevant business functions.

  8. Comparison of TG-43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes.

    Science.gov (United States)

    Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S

    2016-03-01

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies in some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code: MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and the other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes. PACS number(s): 87.56.bg.

  9. Molecular Code Division Multiple Access: Gaussian Mixture Modeling

    Science.gov (United States)

    Zamiri-Jafarian, Yeganeh

    Communication between nano-devices is an emerging research field in nanotechnology. Molecular Communication (MC), a bio-inspired paradigm, is a promising technique for communication in nano-networks. In MC, molecules are administered to exchange information among nano-devices. Due to the nature of molecular signals, traditional communication methods cannot be directly applied to the MC framework. The objective of this thesis is to present novel diffusion-based MC methods for multiple nano-devices communicating with each other in the same environment. A new channel model and detection technique, along with a molecular-based access method, are proposed here for communication between asynchronous users. In this work, the received molecular signal is modeled as a Gaussian mixture distribution when the MC system undergoes Brownian noise and inter-symbol interference (ISI). This novel approach provides a suitable model for the diffusion-based MC system. Using the proposed Gaussian mixture model, a simple receiver is designed by minimizing the error probability. To determine an optimum detection threshold, an iterative algorithm is derived which minimizes a linear approximation of the error probability function. Also, a memory-based receiver is proposed to improve the performance of the MC system by considering previously detected symbols in obtaining the threshold value. Numerical evaluations reveal that theoretical analysis of the bit error rate (BER) performance based on the Gaussian mixture model matches simulation results very closely. Furthermore, in this thesis, molecular code division multiple access (MCDMA) is proposed to overcome the inter-user interference (IUI) caused by asynchronous users communicating in a shared propagation environment. Based on the selected molecular codes, a chip detection scheme with an adaptable threshold value is developed for the MCDMA system when the proposed Gaussian mixture model is considered. Results indicate that the
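
    A small numerical sketch of the threshold design described above: with the received sample modeled as a Gaussian mixture under each transmitted bit, the detection threshold is chosen to minimize the error probability. The mixture parameters are invented; the thesis derives them from the diffusion channel and uses an iterative linear-approximation algorithm rather than the bounded search used here.

        from scipy.stats import norm
        from scipy.optimize import minimize_scalar

        # (weight, mean, std) per mixture component, for bits 0 and 1 (made up).
        mix0 = [(0.7, 1.0, 0.5), (0.3, 2.0, 0.8)]
        mix1 = [(0.6, 4.0, 0.9), (0.4, 5.5, 1.2)]

        def tail_above(theta, mix):
            return sum(w * norm.sf(theta, mu, sd) for w, mu, sd in mix)

        def error_prob(theta):
            # Equal priors: P(y > theta | bit 0)/2 + P(y < theta | bit 1)/2.
            return 0.5 * tail_above(theta, mix0) + 0.5 * (1.0 - tail_above(theta, mix1))

        res = minimize_scalar(error_prob, bounds=(0.0, 6.0), method="bounded")
        print(res.x, res.fun)  # optimum threshold and the resulting BER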

  10. Djehuty, a Code for Modeling Stars in Three Dimensions

    CERN Document Server

    Bazán, G; Dossa, D D; Eggleton, P P; Taylor, A; Castor, J I; Murray, S; Cook, K H; Eltgroth, P G; Cavallo, R M; Turcotte, S; Keller, S C; Pudliner, B S

    2003-01-01

    Current practice in stellar evolution is to employ one-dimensional calculations that quantitatively apply only to a minority of the observed stars (single non-rotating stars, or well detached binaries). Even in these systems, astrophysicists are dependent on approximations to handle complex three-dimensional processes like convection. Binary stars, like those that lead to the Type Ia supernovae used to measure the expansion of the universe, are grossly non-spherical and await a 3D treatment. To approach very large problems like multi-dimensional modeling of stars, the Lawrence Livermore National Laboratory has invested in massively parallel computers and invested even more in developing the algorithms to utilize them on complex physics problems. We have leveraged skills from across the lab to develop a 3D stellar evolution code, Djehuty (after the Egyptian god of writing and calculation), that operates efficiently on platforms with thousands of nodes, with the best available phy...

  11. Maximizing entropy of image models for 2-D constrained coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Danieli, Matteo; Burini, Nino

    2010-01-01

    This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite context models, which define stationary probability distributions on finite rectangles and thus allow for calculation of the entropy. We consider two binary constraints: we revisit the hard square constraint, given by forbidding neighboring 1s, and provide novel results for the constraint that no uniform 2 × 2 square contains all 0s or all 1s. The maximum values of the entropy for the constraints are estimated, and binary PRF satisfying the constraints are characterized and optimized w.r.t. the entropy. The maximum binary PRF entropy is 0.839 bits/symbol for the no uniform squares constraint. The entropy...
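
    For the hard square constraint revisited above, the capacity can be estimated with a standard strip transfer-matrix computation, shown below as an illustration of this kind of entropy calculation (it is not the paper's PRF construction); the per-symbol estimate tends to roughly 0.5879 bits as the strip widens.

        import numpy as np

        def hard_square_entropy(width):
            # Rows of a strip: binary words with no two horizontally adjacent 1s.
            rows = [r for r in range(1 << width) if (r & (r >> 1)) == 0]
            # Vertical stacking is allowed when no 1 sits directly above a 1.
            T = np.array([[1.0 if (a & b) == 0 else 0.0 for b in rows] for a in rows])
            lam = max(abs(np.linalg.eigvals(T)))
            return np.log2(lam) / width  # bits per symbol for this strip width

        for w in (4, 8, 12):
            print(w, round(hard_square_entropy(w), 4))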

  12. Modelling of aspherical nebulae. I. A quick pseudo-3D photoionization code

    CERN Document Server

    Morisset, C; Peña, M

    2005-01-01

    We describe a pseudo-3D photoionization code, NEBU_3D, and its associated visualization tool, VIS_NEB3D, which are able to easily and rapidly treat a wide variety of nebular geometries by combining models obtained with a 1D photoionization code. The only requirement for the code to work is that the ionization source is unique and not extended. It is applicable as long as the diffuse ionizing radiation field is not dominant and strongly inhomogeneous. As examples of the capabilities of these new tools, we consider two very different theoretical cases. One is that of a high excitation planetary nebula that has an ellipsoidal shape with two polar density knots. The other one is that of a blister HII region, for which we have also constructed a spherical model (the spherical impostor) which has exactly the same Hbeta surface brightness distribution as the blister model and the same ionizing star. These two examples warn against preconceived ideas when interpreting spectroscopic and imaging data of HII regions...

  13. New Source Model for Chemical Explosions

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xiaoning [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-03

    With a sophisticated inversion scheme, we recover characteristics of SPE explosions, such as the corner frequency fc and the moment M0, which are used to develop a new source model for chemical explosions.

  14. Spectral-Element Seismic Wave Propagation Codes for both Forward Modeling in Complex Media and Adjoint Tomography

    Science.gov (United States)

    Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.

    2015-12-01

    We present both SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, representing high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale applications. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.

  15. PyVCI: A flexible open-source code for calculating accurate molecular infrared spectra

    Science.gov (United States)

    Sibaev, Marat; Crittenden, Deborah L.

    2016-06-01

    The PyVCI program package is a general purpose open-source code for simulating accurate molecular spectra, based upon force field expansions of the potential energy surface in normal mode coordinates. It includes harmonic normal coordinate analysis and vibrational configuration interaction (VCI) algorithms, implemented primarily in Python for accessibility but with time-consuming routines written in C. Coriolis coupling terms may be optionally included in the vibrational Hamiltonian. Non-negligible VCI matrix elements are stored in sparse matrix format to alleviate the diagonalization problem. CPU and memory requirements may be further controlled by algorithmic choices and/or numerical screening procedures, and recommended values are established by benchmarking using a test set of 44 molecules for which accurate analytical potential energy surfaces are available. Force fields in normal mode coordinates are obtained from the PyPES library of high quality analytical potential energy surfaces (to 6th order) or by numerical differentiation of analytic second derivatives generated using the GAMESS quantum chemical program package (to 4th order).
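
    The harmonic normal-coordinate step mentioned above amounts to diagonalizing the mass-weighted Cartesian Hessian; the sketch below shows the generic textbook procedure (not PyVCI's actual routine), assuming inputs in atomic units.

        import numpy as np

        def harmonic_frequencies(hessian, masses):
            # hessian: (3N, 3N) Cartesian second derivatives (atomic units);
            # masses: length-N nuclear masses in electron-mass units.
            m = np.repeat(masses, 3)
            mw = hessian / np.sqrt(np.outer(m, m))  # mass-weighted Hessian
            eig = np.linalg.eigvalsh(mw)
            HARTREE_TO_CM1 = 219474.63              # a.u. frequency -> cm^-1
            # Imaginary modes are reported as negative wavenumbers.
            return np.sign(eig) * np.sqrt(np.abs(eig)) * HARTREE_TO_CM1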

  16. Modeling of water radiolysis at spallation neutron sources

    Energy Technology Data Exchange (ETDEWEB)

    Daemen, L.L.; Kanner, G.S.; Lillard, R.S.; Butt, D.P.; Brun, T.O.; Sommer, W.F.

    1998-12-01

    In spallation neutron sources neutrons are produced when a beam of high-energy particles (e.g., 1 GeV protons) collides with a (water-cooled) heavy metal target such as tungsten. The resulting spallation reactions produce a complex radiation environment (which differs from typical conditions at fission and fusion reactors) leading to the radiolysis of water molecules. Most water radiolysis products are short-lived but extremely reactive. When formed in the vicinity of the target surface they can react with metal atoms, thereby contributing to target corrosion. The authors will describe the results of calculations and experiments performed at Los Alamos to determine the impact on target corrosion of water radiolysis in the spallation radiation environment. The computational methodology relies on the use of the Los Alamos radiation transport code, LAHET, to determine the radiation environment, and the AEA code, FACSIMILE, to model reaction-diffusion processes.

  17. Characterization and modeling of the heat source

    Energy Technology Data Exchange (ETDEWEB)

    Glickstein, S.S.; Friedman, E.

    1993-10-01

    A description of the input energy source is basic to any numerical modeling formulation designed to predict the outcome of the welding process. The source is fundamental and unique to each joining process. The resultant output of any numerical model will be affected by the initial description of both the magnitude and distribution of the input energy of the heat source. Thus, calculated weld shape, residual stresses, weld distortion, cooling rates, metallurgical structure, material changes due to excessive temperatures, and potential weld defects are all influenced by the initial characterization of the heat source. An understanding of both the physics and the mathematical formulation of these sources is essential for describing the input energy distribution. This section provides a brief review of the physical phenomena that influence the input energy distributions and discusses several different models of heat sources that have been used in simulating arc welding, high energy density welding and resistance welding processes. Both simplified and detailed models of the heat source are discussed.
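
    As a concrete instance of a simplified heat-source description, the radially symmetric Gaussian surface flux commonly used for arc welding is sketched below; the power, spot radius, and efficiency values are illustrative.

        import math

        def gaussian_surface_flux(r, Q=1500.0, r0=0.003, eta=0.8):
            # q(r) = 3*eta*Q/(pi*r0^2) * exp(-3 r^2 / r0^2)
            # Q: arc power (W); r0: effective spot radius (m); eta: arc efficiency.
            return 3.0 * eta * Q / (math.pi * r0 ** 2) * math.exp(-3.0 * (r / r0) ** 2)

        print(gaussian_surface_flux(0.0))  # peak flux at the arc centre, W/m^2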

  18. Modelling of sprays in containment applications with A CMFD code

    Energy Technology Data Exchange (ETDEWEB)

    Mimouni, S., E-mail: stephane.mimouni@edf.f [Electricite de France R and D Division, 6 Quai Watier, F-78400 Chatou (France); Lamy, J.-S. [Electricite de France R and D Division, 1 av. du General de Gaulle, F-92140 Clamart (France); Lavieville, J. [Electricite de France R and D Division, 6 Quai Watier, F-78400 Chatou (France); Guieu, S.; Martin, M. [Electricite de France SEPTEN Division, 12-14 av. Dutrievoz, 69628 Villeurbanne (France)

    2010-09-15

    During the course of a hypothetical severe accident in a Pressurized Water Reactor (PWR), spray systems are used in the containment in order to prevent overpressure in case of a steam line break, and to enhance the gas mixing in case of the presence of hydrogen. In the framework of the Severe Accident Research Network (SARNET) of the 6th EC Framework Programme, two tests were performed in the TOSQAN facility in order to study the spray behaviour under severe accident conditions: TOSQAN 101 and TOSQAN 113. The TOSQAN facility is a closed cylindrical vessel. The inner spray system is located at the top of the enclosure on the vertical axis. For the TOSQAN 101 case, an initial pressurization of the vessel is performed with superheated steam up to 2.5 bar. Then, steam injection is stopped and spraying starts simultaneously at a given water temperature (around 25 °C) and water mass flow-rate (around 30 g/s). The depressurization transient starts and continues until the equilibrium phase, which corresponds to the stabilization of the average temperature and pressure of the gaseous mixture inside the vessel. The purpose of the TOSQAN 113 cold spray test is to study helium mixing due to spray activation without heat and mass transfers between gas and droplets. We present in this paper the spray modelling implemented in NEPTUNE_CFD, a three-dimensional multi-fluid code developed especially for nuclear reactor applications. A new model dedicated to droplet evaporation at the wall is also detailed. Keeping in mind the Best Practice Guidelines, closure laws have been selected to ensure a grid dependence as weak as possible. For the TOSQAN 113 case, the calculated time evolution of the helium volume fraction shows that the physical approach described in the paper is able to reproduce the mixing of helium by the spray. The prediction of the transient behaviour should be improved by including in the model corrections based on better understanding of the influence of the

  19. Subgrid Combustion Modeling for the Next Generation National Combustion Code

    Science.gov (United States)

    Menon, Suresh; Sankaran, Vaidyanathan; Stone, Christopher

    2003-01-01

    In the first year of this research, a subgrid turbulent mixing and combustion methodology developed earlier at Georgia Tech has been provided to researchers at NASA/GRC for incorporation into the next generation National Combustion Code (called NCCLES hereafter). A key feature of this approach is that scalar mixing and combustion processes are simulated within the LES grid using a stochastic 1D model. The subgrid simulation approach recovers locally molecular diffusion and reaction kinetics exactly without requiring closure and thus provides an attractive means to simulate complex, highly turbulent reacting flows of interest. Data acquisition algorithms and statistical analysis strategies and routines to analyze NCCLES results have also been provided to NASA/GRC. The overall goal of this research is to systematically develop and implement LES capability into the current NCC. For this purpose, issues regarding initialization and running LES are also addressed in the collaborative effort. In parallel to this technology transfer effort (which is continuously ongoing), research has also been underway at Georgia Tech to enhance the LES capability to tackle more complex flows. In particular, the subgrid scalar mixing and combustion method has been evaluated in three distinctly different flow fields in order to demonstrate its generality: (a) flame-turbulence interactions using premixed combustion, (b) spatially evolving supersonic mixing layers, and (c) temporal single and two-phase mixing layers. The configurations chosen are such that they can be implemented in NCCLES and used to evaluate the ability of the new code. Future development and validation will be in spray combustion in gas turbine engines and supersonic scalar mixing.

  20. System level modelling with open source tools

    DEFF Research Database (Denmark)

    Jakobsen, Mikkel Koefoed; Madsen, Jan; Niaki, Seyed Hosein Attarzadeh;

    , called ForSyDe. ForSyDe is available under the open source approach, which allows small and medium enterprises (SME) to get easy access to advanced modeling capabilities and tools. We give an introduction to the design methodology through the system level modeling of a simple industrial use case, and we...

  1. Modeling ion exchange in clinoptilolite using the EQ3/6 geochemical modeling code

    Energy Technology Data Exchange (ETDEWEB)

    Viani, B.E.; Bruton, C.J.

    1992-06-01

    Assessing the suitability of Yucca Mtn., NV as a potential repository for high-level nuclear waste requires the means to simulate ion-exchange behavior of zeolites. Vanselow and Gapon convention cation-exchange models have been added to geochemical modeling codes EQ3NR/EQ6, allowing exchange to be modeled for up to three exchangers or a single exchanger with three independent sites. Solid-solution models that are numerically equivalent to the ion-exchange models were derived and also implemented in the code. The Gapon model is inconsistent with experimental adsorption isotherms of trace components in clinoptilolite. A one-site Vanselow model can describe adsorption of Cs or Sr on clinoptilolite, but a two-site Vanselow exchange model is necessary to describe K contents of natural clinoptilolites.
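
    For a homovalent binary exchange, the Vanselow convention writes the mass-action law in terms of mole fractions of the cations on the exchanger. A minimal one-site sketch follows; the selectivity coefficient and solution composition are assumed illustration values, and activities are approximated by concentrations.

        def vanselow_binary(k_v, a_a, a_b):
            """Mole fraction of A on a one-site exchanger for homovalent
            A+/B+ exchange under the Vanselow convention:
                K_v = (x_A * a_B) / (x_B * a_A),  with x_A + x_B = 1."""
            ratio = k_v * a_a / a_b          # = x_A / x_B
            return ratio / (1.0 + ratio)

        # Assumed selectivity and solution activities, for illustration only
        print(vanselow_binary(k_v=10.0, a_a=1.0e-3, a_b=1.0e-2))   # -> 0.5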

  2. Modeling ion exchange in clinoptilolite using the EQ3/6 geochemical modeling code

    Energy Technology Data Exchange (ETDEWEB)

    Viani, B.E.; Bruton, C.J. [Lawrence Livermore National Lab., CA (United States)

    1992-12-31

    Potential disposal of high-level nuclear waste at Yucca Mtn., Nevada requires the means to simulate ion-exchange behavior of clays and zeolites. Vanselow and Gapon convention cation-exchange models have been added to geochemical modeling codes EQ3NR/EQ6, allowing exchange to be modeled for up to three exchangers or a single exchanger with three independent sites. Solid-solution models that are numerically equivalent to the ion-exchange models were derived and also implemented in the code. The Gapon model is inconsistent with experimental adsorption isotherms of trace components in clinoptilolite. A one-site Vanselow model can describe adsorption of Cs and Sr on clinoptilolite, but a two-site Vanselow exchange model is necessary to describe K contents of natural clinoptilolites. 15 refs., 5 figs., 1 tab.

  3. On Network Coded Filesystem Shim: Over-the-top Multipath Multi-Source Made Easy

    DEFF Research Database (Denmark)

    Sørensen, Chres Wiant; Roetter, Daniel Enrique Lucani; Médard, Muriel

    2017-01-01

    benefits to any application in a computer. However, incorporating new protocols to the Internet is a challenging and slow process. Second, deploying coding at the application layer, which forces each application to implement network coding. This paper proposes an alternative approach through the use...

  4. Distributed source coding of video with non-stationary side-information

    NARCIS (Netherlands)

    Meyer, P.F.A.; Westerlaken, R.P.; Klein Gunnewiek, R.; Lagendijk, R.L.

    2005-01-01

    In distributed video coding, the complexity of the video encoder is reduced at the cost of a more complex video decoder. Using the principles of Slepian and Wolf, video compression is then carried out using channel coding principles, under the assumption that the video decoder can temporally predict

  5. A New Model for the Error Detection Delay of Finite Precision Binary Arithmetic Codes with a Forbidden Symbol

    Science.gov (United States)

    Pang, Yuye; Sun, Jun; Wang, Jia; Wang, Peng

    In this paper, the statistical characteristic of the Error Detection Delay (EDD) of Finite Precision Binary Arithmetic Codes (FPBAC) is discussed. It is observed that, apart from the probability of the Forbidden Symbol (FS) inserted into the list of the source symbols, the probability of the source sequence, the operation precision, and the position of the FS in the coding interval can all affect the statistical characteristic of the EDD. Experiments demonstrate that the actual distribution of the EDD of FPBAC is quite different from the geometric distribution of infinite precision arithmetic codes. This phenomenon is investigated in depth, and a new statistical model (the gamma distribution) for the actual distribution of the EDD is proposed, which makes a more precise prediction of the EDD possible. Finally, the relations between the parameters of the gamma distribution and the factors affecting the distribution are given.
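
    Given a sample of observed error detection delays, a gamma model of the kind proposed here can be fitted by the method of moments (shape k = mean²/variance, scale θ = variance/mean). The sketch below uses hypothetical EDD measurements; the paper instead relates the gamma parameters to the FS probability, the source statistics and the operation precision.

        import statistics

        def fit_gamma_moments(samples):
            """Method-of-moments gamma fit: returns (shape k, scale theta)."""
            mean = statistics.fmean(samples)
            var = statistics.variance(samples)
            return mean ** 2 / var, var / mean

        # Hypothetical EDDs: symbols decoded before the forbidden symbol hits
        edd_samples = [12, 7, 19, 9, 14, 22, 11, 8, 16, 13]
        print(fit_gamma_moments(edd_samples))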

  6. A Mathematical Model Accounting for the Organisation in Multiplets of the Genetic Code

    OpenAIRE

    Sciarrino, A.

    2001-01-01

    Requiring stability of the genetic code against translation errors, modelled by suitable mathematical operators in the crystal basis model of the genetic code, the main features of the organisation in multiplets of the mitochondrial and of the standard genetic code are explained.

  7. A realistic model under which the genetic code is optimal.

    Science.gov (United States)

    Buhrman, Harry; van der Gulik, Peter T S; Klau, Gunnar W; Schaffner, Christian; Speijer, Dave; Stougie, Leen

    2013-10-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By comparing this value with a distribution of values belonging to codes generated by random permutations of amino acid assignments, the level of error robustness of a genetic code can be quantified. We present a calculation in which the standard genetic code is shown to be optimal. We obtain this result by (1) using recently updated values of polar requirement as input; (2) fixing seven assignments (Ile, Trp, His, Phe, Tyr, Arg, and Leu) based on aptamer considerations; and (3) using known biosynthetic relations of the 20 amino acids. This last point is reflected in an approach of subdivision (restricting the random reallocation of assignments to amino acid subgroups, the set of 20 being divided into four such subgroups). The three approaches to explain robustness of the code (specific selection for robustness, amino acid-RNA interactions leading to assignments, or a slow growth process of assignment patterns) are reexamined in light of our findings. We offer a comprehensive hypothesis, stressing the importance of biosynthetic relations, with the code evolving from an early stage with just glycine and alanine, via intermediate stages, towards 64 codons carrying today's meaning.
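
    The statistical side of this comparison is short to sketch. Assuming a dictionary standard_code mapping the 61 sense codons to amino acids and a dictionary polar_requirement mapping amino acids to polar-requirement values (both are inputs the reader must supply), the mean-square measure and the unrestricted random-permutation baseline look roughly as follows; the paper's fixed assignments and biosynthetic subgrouping are omitted here.

        import itertools, random

        def ms_error(code, pr):
            """Mean squared change in polar requirement over all single-
            nucleotide substitutions that change the amino acid (mutations
            to stop codons are ignored)."""
            total, count = 0.0, 0
            for codon, aa in code.items():
                for pos, base in itertools.product(range(3), "UCAG"):
                    if base == codon[pos]:
                        continue
                    mutant = code.get(codon[:pos] + base + codon[pos + 1:])
                    if mutant is not None and mutant != aa:
                        total += (pr[aa] - pr[mutant]) ** 2
                        count += 1
            return total / count

        def random_code(code):
            """Random permutation of amino-acid identities over the code's
            blocks (the unrestricted baseline; no assignments fixed)."""
            aas = sorted(set(code.values()))
            perm = dict(zip(aas, random.sample(aas, len(aas))))
            return {codon: perm[aa] for codon, aa in code.items()}

    The optimality claim then amounts to the fraction of random codes whose ms_error falls below that of the canonical code approaching zero under the restricted randomization.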

  8. Bug-Fixing and Code-Writing: The Private Provision of Open Source Software

    DEFF Research Database (Denmark)

    Bitzer, Jürgen; Schröder, Philipp

    2002-01-01

    Open source software (OSS) is a public good. A self-interested individual would consider providing such software, if the benefits he gained from having it justified the cost of programming. Nevertheless each agent is tempted to free ride and wait for others to develop the software instead. This problem is modelled as a war of attrition with complete information, job signaling, repeated contribution to the public good and uncertainty in programming. The resulting game does not feature any delay: software will be provided swiftly, by young, low-cost individuals who gain considerably by signaling...

  9. Semantic-preload video model based on VOP coding

    Science.gov (United States)

    Yang, Jianping; Zhang, Jie; Chen, Xiangjun

    2013-03-01

    In recent years, in order to reduce the semantic gap that exists between high-level semantics and the low-level features of video when humans interpret images or video, most work has tried video annotation downstream of the signal, i.e. attaching labels (again) to the content in a video database. Few people focus on the alternative idea: use limited interaction and comprehensive segmentation (including optical technologies) at the front end of video information collection (i.e. the video camera), together with video semantics analysis technology, concept sets belonging to a certain domain (i.e. ontologies), story shooting scripts, task descriptions of scene shooting, etc.; apply semantic descriptions at different levels to enrich the attributes of video objects and image regions, thereby forming a new video model based on Video Object Plane (VOP) coding. This model has potentially intelligent features, carries a large amount of metadata, and embeds intermediate-level semantic concepts into every object. This paper focuses on the latter and presents a framework for the new video model, provisionally named the Semantic-Preload Video Model (simplified to SPVM or VMoSP). The model mainly researches how to add labels to video objects and image regions in real time, where video objects and image regions usually receive intermediate-level semantic labels, and this work is placed upstream of the signal (i.e. in the video capture and production stage). Because of the research needs, the paper also analyses the hierarchical structure of video and divides it into nine semantic levels, which are involved only in the video production process. In addition, the paper points out that the semantic-level tagging work (i.e. semantic preloading) refers only to the four middle-level semantics. All in

  10. Probabilistic forward model for electroencephalography source analysis

    Energy Technology Data Exchange (ETDEWEB)

    Plis, Sergey M [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); George, John S [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Jun, Sung C [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Ranken, Doug M [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Volegov, Petr L [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Schmidt, David M [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)

    2007-09-07

    Source localization by electroencephalography (EEG) requires an accurate model of head geometry and tissue conductivity. The estimation of source time courses from EEG or from EEG in conjunction with magnetoencephalography (MEG) requires a forward model consistent with true activity for the best outcome. Although MRI provides an excellent description of soft tissue anatomy, a high resolution model of the skull (the dominant resistive component of the head) requires CT, which is not justified for routine physiological studies. Although a number of techniques have been employed to estimate tissue conductivity, no present techniques provide the noninvasive 3D tomographic mapping of conductivity that would be desirable. We introduce a formalism for probabilistic forward modeling that allows the propagation of uncertainties in model parameters into possible errors in source localization. We consider uncertainties in the conductivity profile of the skull, but the approach is general and can be extended to other kinds of uncertainties in the forward model. We and others have previously suggested the possibility of extracting conductivity of the skull from measured electroencephalography data by simultaneously optimizing over dipole parameters and the conductivity values required by the forward model. Using Cramer-Rao bounds, we demonstrate that this approach does not improve localization results nor does it produce reliable conductivity estimates. We conclude that the conductivity of the skull has to be either accurately measured by an independent technique, or that the uncertainties in the conductivity values should be reflected in uncertainty in the source location estimates.
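
    The propagation idea can be sketched generically: sample the uncertain skull conductivity, redo the dipole fit under each sampled forward model, and report the scatter of the fitted locations. Here forward_fit is a hypothetical user-supplied fitting routine, not a function of any named package.

        import random, statistics

        def localization_spread(data, cond_mean, cond_sd, forward_fit, n=500):
            """Monte Carlo propagation of conductivity uncertainty into
            source-location uncertainty. forward_fit(data, conductivity)
            must return a fitted dipole position (x, y, z) in metres."""
            fits = [forward_fit(data, random.gauss(cond_mean, cond_sd))
                    for _ in range(n)]
            return tuple(statistics.stdev(axis) for axis in zip(*fits))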

  11. On the development of LWR fuel analysis code (1). Analysis of the FEMAXI code and proposal of a new model

    Energy Technology Data Exchange (ETDEWEB)

    Lemehov, Sergei; Suzuki, Motoe [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2000-01-01

    This report summarizes a review of the modeling features of the FEMAXI code and proposes a new theoretical clad creep model based on irradiation-induced microstructure change. It was pointed out that plutonium build-up in the fuel matrix and the non-uniform radial power profile at high burn-up affect fuel behavior significantly through interconnected effects with such phenomena as clad irradiation-induced creep, fission gas release, fuel thermal conductivity degradation, rim porous band formation and associated fuel swelling. Therefore, these combined effects should be properly incorporated into the models of the FEMAXI code so that the code can carry out numerical analysis at the level of accuracy and elaboration that modern experimental data obtained in test reactors have. Also, the proposed new mechanistic clad creep model has a general formalism which allows the model to be flexibly applied for clad behavior analysis under normal operation conditions and power transients, as well as for Zr-based clad materials, by the use of established out-of-pile mechanical properties. The model has been tested against experimental data, while further verification is needed with specific emphasis on power ramps and transients. (author)

  12. Edge Transport Modeling using the 3D EMC3-Eirene code on Tokamaks and Stellarators

    Science.gov (United States)

    Lore, J. D.; Ahn, J. W.; Briesemeister, A.; Ferraro, N.; Labombard, B.; McLean, A.; Reinke, M.; Shafer, M.; Terry, J.

    2015-11-01

    The fluid plasma edge transport code EMC3-Eirene has been applied to aid data interpretation and understanding of the results of experiments with 3D effects on several tokamaks. These include applied and intrinsic 3D magnetic fields, 3D plasma facing components, and toroidally and poloidally localized heat and particle sources. On Alcator C-Mod, a series of experiments explored the impact of toroidally and poloidally localized impurity gas injection on core confinement and asymmetries in the divertor fluxes, with the differences between the asymmetry in L-mode and H-mode qualitatively reproduced in the simulations due to changes in the impurity ionization in the private flux region. Modeling of NSTX experiments on the effect of 3D fields on detachment matched the observed trend of detachment occurring at higher density when 3D fields are applied. On DIII-D, different magnetic field models were used in the simulation and compared against the 2D Thomson scattering diagnostic. In simulating each device, different aspects of the code model are tested, pointing to areas where the model must be further developed. The application to stellarator experiments will also be discussed. Work supported by U.S. DOE: DE-AC05-00OR22725, DE AC02-09CH11466, DE-FC02-99ER54512, and DE-FC02-04ER54698.

  13. JSim, an open-source modeling system for data analysis.

    Science.gov (United States)

    Butterworth, Erik; Jardine, Bartholomew E; Raymond, Gary M; Neal, Maxwell L; Bassingthwaighte, James B

    2013-01-01

    JSim is a simulation system for developing models, designing experiments, and evaluating hypotheses on physiological and pharmacological systems through the testing of model solutions against data. It is designed for interactive, iterative manipulation of the model code, handling of multiple data sets and parameter sets, and for making comparisons among different models running simultaneously or separately. Interactive use is supported by a large collection of graphical user interfaces for model writing and compilation diagnostics, defining input functions, model runs, selection of algorithms solving ordinary and partial differential equations, run-time multidimensional graphics, parameter optimization (8 methods), sensitivity analysis, and Monte Carlo simulation for defining confidence ranges. JSim uses Mathematical Modeling Language (MML), a declarative syntax specifying algebraic and differential equations. Imperative constructs written in other languages (MATLAB, FORTRAN, C++, etc.) are accessed through procedure calls. MML syntax is simple, basically defining the parameters and variables, then writing the equations in a straightforward, easily read and understood mathematical form. This makes JSim good for teaching modeling as well as for model analysis for research. For high throughput applications, JSim can be run as a batch job. JSim can automatically translate models from the repositories for Systems Biology Markup Language (SBML) and CellML models. Stochastic modeling is supported. MML supports assigning physical units to constants and variables and automates checking dimensional balance as the first step in verification testing. Automatic unit scaling follows, e.g. seconds to minutes, if needed. The JSim Project File sets a standard for reproducible modeling analysis: it includes in one file everything for analyzing a set of experiments: the data, the models, the data fitting, and evaluation of parameter confidence ranges. JSim is open source; it

  14. Djehuty: A Code for Modeling Whole Stars in Three Dimensions

    CERN Document Server

    Turcotte, S; Castor, J I; Cavallo, R M; Cohl, H S; Cook, K; Dearborn, D S P; Dossa, D D; Eastman, R; Eggleton, P P; Eltgroth, P; Keller, S; Murray, S; Taylor, A

    2001-01-01

    The DJEHUTY project is an intensive effort at the Lawrence Livermore National Laboratory (LLNL) to produce a general purpose 3-D stellar structure and evolution code to study dynamic processes in whole stars.

  15. Algorithms and Regolith Erosion Models for the ALERT Code Project

    Data.gov (United States)

    National Aeronautics and Space Administration — ORBITEC and Duke University have teamed on this STTR to develop the ALERT (Advanced Lunar Exhaust-Regolith Transport) code which will include new developments in...

  16. Modeling huge sound sources in a room acoustical calculation program

    DEFF Research Database (Denmark)

    Christensen, Claus Lynge

    1999-01-01

    A room acoustical model capable of modeling point sources, line sources, and surface sources is presented. Line and surface sources are modeled using a special ray-tracing algorithm detecting the radiation pattern of the surfaces of the room. Point sources are modeled using a hybrid calculation method combining this ray-tracing method with image source modeling. With these three source types it is possible to model huge and complex sound sources in industrial environments. Compared to a calculation with only point sources, the use of extended sound sources is shown to improve the agreement...

  17. Exo-Transmit: An Open-Source Code for Calculating Transmission Spectra for Exoplanet Atmospheres of Varied Composition

    Science.gov (United States)

    Kempton, Eliza M.-R.; Lupu, Roxana; Owusu-Asare, Albert; Slough, Patrick; Cale, Bryson

    2017-04-01

    We present Exo-Transmit, a software package to calculate exoplanet transmission spectra for planets of varied composition. The code is designed to generate spectra of planets with a wide range of atmospheric composition, temperature, surface gravity, and size, and is therefore applicable to exoplanets ranging in mass and size from hot Jupiters down to rocky super-Earths. Spectra can be generated with or without clouds or hazes with options to (1) include an optically thick cloud deck at a user-specified atmospheric pressure or (2) to augment the nominal Rayleigh scattering by a user-specified factor. The Exo-Transmit code is written in C and is extremely easy to use. Typically the user will only need to edit parameters in a single user input file in order to run the code for a planet of their choosing. Exo-Transmit is available publicly on GitHub with open-source licensing at https://github.com/elizakempton/Exo_Transmit.

  18. Open-source tool for automatic import of coded surveying data to multiple vector layers in GIS environment

    Directory of Open Access Journals (Sweden)

    Eva Stopková

    2016-12-01

    This paper deals with a tool that enables import of coded data in a single text file to more than one vector layer (including attribute tables), together with automatic drawing of line and polygon objects and with optional conversion to CAD. The Python script v.in.survey is available as an add-on for the open-source software GRASS GIS (GRASS Development Team). The paper describes a case study based on surveying at the archaeological mission at Tell-el Retaba (Egypt). Advantages of the tool (e.g. significant optimization of surveying work) and its limits (demands on keeping conventions for the coding of point names) are discussed here as well. Possibilities of future development are suggested (e.g. generalization of point-name coding or more complex attribute table creation).

  19. Exo-Transmit: An Open-Source Code for Calculating Transmission Spectra for Exoplanet Atmospheres of Varied Composition

    CERN Document Server

    Kempton, Eliza M -R; Owusu-Asare, Albert; Slough, Patrick; Cale, Bryson

    2016-01-01

    We present Exo-Transmit, a software package to calculate exoplanet transmission spectra for planets of varied composition. The code is designed to generate spectra of planets with a wide range of atmospheric composition, temperature, surface gravity, and size, and is therefore applicable to exoplanets ranging in mass and size from hot Jupiters down to rocky super-Earths. Spectra can be generated with or without clouds or hazes with options to (1) include an optically thick cloud deck at a user-specified atmospheric pressure or (2) to augment the nominal Rayleigh scattering by a user-specified factor. The Exo-Transmit code is written in C and is extremely easy to use. Typically the user will only need to edit parameters in a single user input file in order to run the code for a planet of their choosing. Exo-Transmit is available publicly on GitHub with open-source licensing at https://github.com/elizakempton/Exo_Transmit.

  20. Analytical models of volcanic ellipsoidal expansion sources

    Directory of Open Access Journals (Sweden)

    Antonella Amoruso

    2013-11-01

    Modeling non-double-couple earthquakes and surficial deformation in volcanic and geothermal areas usually involves expansion sources. Given an ensemble of ellipsoidal or tensile expansion sources and double-couple ones, it is straightforward to obtain the equivalent single moment tensor under the far-field approximation. On the contrary, the moment tensor interpretation is by no means unique or unambiguous. If the far-field approximation is unsatisfied, the single moment tensor representation is inappropriate. Here we focus on the volume change estimate in the case of single sources, in particular finite pressurized ellipsoidal sources, presenting the expressions for the computation of the volume change and surficial displacement in a closed analytical form. We discuss the implications of different domains of the moment-tensor eigenvalue ratios in terms of volume change computation. We also discuss how the volume change of each source can be obtained from the isotropic component of the total moment tensor, in a few cases of coupled sources where the total volume change is null. The new expressions for the computation of the volume change and surficial displacement in the case of finite pressurized ellipsoidal sources should make their use easier with respect to the already published formulations.

  1. MIG version 0.0 model interface guidelines: Rules to accelerate installation of numerical models into any compliant parent code

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, R.M.; Wong, M.K.

    1996-08-01

    A set of model interface guidelines, called MIG, is presented as a means by which any compliant numerical material model can be rapidly installed into any parent code without having to modify the model subroutines. Here, "model" usually means a material model such as one that computes stress as a function of strain, though the term may be extended to any numerical operation. "Parent code" means a hydrocode, finite element code, etc. which uses the model and enforces, say, the fundamental laws of motion and thermodynamics. MIG requires the model developer (who creates the model package) to specify model needs in a standardized but flexible way. MIG includes a dictionary of technical terms that allows developers and parent code architects to share a common vocabulary when specifying field variables. For portability, database management is the responsibility of the parent code. Input/output occurs via structured calling arguments. As much model information as possible (such as the lists of required inputs, as well as lists of precharacterized material data and special needs) is supplied by the model developer in an ASCII text file. Every MIG-compliant model also has three required subroutines to check data, to request extra field variables, and to perform model physics. To date, the MIG scheme has proven flexible in beta installations of a simple yield model, plus a more complicated viscodamage yield model, three electromechanical models, and a complicated anisotropic microcrack constitutive model. The MIG yield model has been successfully installed using identical subroutines in three vectorized parent codes and one parallel C++ code, all predicting comparable results. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, thereby reducing the cost of installing and sharing models in diverse new codes.
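
    The three required subroutines translate naturally into an interface sketch. MIG targets Fortran/C hydrocodes, so the Python below is only a hypothetical rendering of the calling convention: the parent code owns the database and passes structured arguments; all names and signatures are illustrative.

        # Hypothetical Python rendering of the three routines every
        # MIG-compliant model must provide; not actual MIG syntax.
        class MigYieldModel:
            def check_data(self, props):
                """Validate user-supplied material data (routine 1)."""
                assert props["yield_stress"] > 0.0, "yield stress must be positive"

            def request_variables(self):
                """Tell the parent code which extra field variables to
                allocate (routine 2); the parent owns the database."""
                return ["equivalent_plastic_strain"]

            def run_physics(self, strain, props, state):
                """Advance the model one step (routine 3): elastic predictor
                with a simple stress cap at yield."""
                trial = props["shear_modulus"] * strain
                return min(trial, props["yield_stress"]), state

        # Parent-code side: structured calling arguments, database kept by caller
        model = MigYieldModel()
        props = {"yield_stress": 250e6, "shear_modulus": 80e9}
        model.check_data(props)
        print(model.run_physics(1e-3, props, state={}))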

  2. Numerical modelling of pressure suppression pools with CFD and FEM codes

    Energy Technology Data Exchange (ETDEWEB)

    Paettikangas, T.; Niemi, J.; Timperi, A. (VTT Technical Research Centre of Finland (Finland))

    2011-06-15

    Experiments on a large-break loss-of-coolant accident for a BWR are modeled with computational fluid dynamics (CFD) and finite element calculations. In the CFD calculations, the direct-contact condensation in the pressure suppression pool is studied. The heat transfer in the liquid phase is modeled with the Hughes-Duffey correlation based on the surface renewal model. The heat transfer is proportional to the square root of the turbulence kinetic energy. The condensation models are implemented with user-defined functions in the Euler-Euler two-phase model of the Fluent 12.1 CFD code. The rapid collapse of a large steam bubble and the resulting pressure source are studied analytically and numerically. The pressure source obtained from simplified calculations is used for studying the structural effects and fluid-structure interaction (FSI) in a realistic BWR containment. The collapse results in volume acceleration, which induces pressure loads on the pool walls. In the case of a spherical bubble, the velocity term of the volume acceleration is responsible for the largest pressure load. As the amount of air in the bubble is decreased, the peak pressure increases. However, when the water compressibility is accounted for, the finite speed of sound becomes a limiting factor. (Author)
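
    The abstract states only that the liquid-side heat transfer scales with the square root of the turbulence kinetic energy. A literal sketch of that proportionality, with an assumed calibration constant standing in for the actual Hughes-Duffey coefficients:

        import math

        def condensation_htc(k_turb, rho_l=958.0, cp_l=4216.0, c=2.3e-3):
            """Direct-contact condensation heat transfer coefficient
            (W/m^2/K) modelled as h = c * rho_l * cp_l * sqrt(k_turb);
            c is an assumed constant, not the published correlation."""
            return c * rho_l * cp_l * math.sqrt(k_turb)

        print(condensation_htc(k_turb=0.05))   # k_turb in m^2/s^2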

  3. Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian

    2017-02-01

    The THOR neutral particle transport code enables simulation of complex geometries for various problems, from reactor simulations to nuclear non-proliferation. It is undergoing a thorough V&V effort, which requires computational efficiency. This has motivated various improvements including angular parallelization, outer iteration acceleration, and development of peripheral tools. To guide future improvements to the code’s efficiency, a better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL’s Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluation of several possible sources of variability, this resulted in a communication model and a parallel portion model. The former’s accuracy is bounded by the variability of communication on Falcon, while the latter has an error on the order of 1%.
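
    A parallel performance model of this kind typically splits runtime into a per-cell/angle/group compute term and a latency/bandwidth communication term. The sketch below shows the shape of such a model with placeholder coefficients; the actual PPM fits its coefficients on the Falcon HPC and is not reproduced in the record.

        def predicted_runtime(n_cells, n_angles, n_groups, n_ranks,
                              t_cell=2.0e-7, n_msgs=1000, latency=2.0e-6,
                              msg_bytes=8192, bandwidth=5.0e9):
            """Two-part model: compute time scales with work per rank;
            communication time is a latency + volume/bandwidth sum.
            All coefficients here are assumed placeholders."""
            compute = t_cell * n_cells * n_angles * n_groups / n_ranks
            comm = n_msgs * (latency + msg_bytes / bandwidth)
            return compute + comm

        print(predicted_runtime(1_000_000, 80, 47, n_ranks=256))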

  4. Open-source direct simulation Monte Carlo chemistry modeling for hypersonic flows

    OpenAIRE

    Scanlon, Thomas J.; White, Craig; Borg, Matthew K.; Palharini, Rodrigo C.; Farbar, Erin; Boyd, Iain D.; Reese, Jason M.; Brown, Richard E

    2015-01-01

    An open source implementation of chemistry modelling for the direct simulation Monte Carlo (DSMC) method is presented. Following the recent work of Bird [1], an approach known as the quantum kinetic (Q-K) method has been adopted to describe chemical reactions in a 5-species air model using DSMC procedures based on microscopic gas information. The Q-K technique has been implemented within the framework of the dsmcFoam code, a derivative of the open source CFD code OpenFOAM. Results for vibration...

  5. Adaptive Partially Hidden Markov Models with Application to Bilevel Image Coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Rasmussen, Tage

    1999-01-01

    Adaptive Partially Hidden Markov Models (APHMM) are introduced, extending the PHMM models. The new models are applied to lossless coding of bi-level images, achieving results which are better than the JBIG standard.

  6. Biocomputational prediction of non-coding RNAs in model cyanobacteria

    Directory of Open Access Journals (Sweden)

    Ude Susanne

    2009-03-01

    Background: In bacteria, non-coding RNAs (ncRNA) are crucial regulators of gene expression, controlling various stress responses, virulence, and motility. Previous work revealed a relatively high number of ncRNAs in some marine cyanobacteria. However, for efficient genetic and biochemical analysis it would be desirable to identify a set of ncRNA candidate genes in model cyanobacteria that are easy to manipulate and for which extended mutant, transcriptomic and proteomic data sets are available. Results: Here we have used comparative genome analysis for the biocomputational prediction of ncRNA genes and other sequence/structure-conserved elements in intergenic regions of the three unicellular model cyanobacteria Synechocystis PCC6803, Synechococcus elongatus PCC6301 and Thermosynechococcus elongatus BP1, plus the toxic Microcystis aeruginosa NIES843. The unfiltered numbers of predicted elements in these strains are 383, 168, 168, and 809, respectively, combined into 443 sequence clusters, whereas the numbers of individual elements with high support are 94, 56, 64, and 406, respectively. Removing also transposon-associated repeats, finally 78, 53, 42 and 168 sequences, respectively, are left, belonging to 109 different clusters in the data set. Experimental analysis of selected ncRNA candidates in Synechocystis PCC6803 validated new ncRNAs originating from the fabF-hoxH and apcC-prmA intergenic spacers and three highly expressed ncRNAs belonging to the Yfr2 family of ncRNAs. Yfr2a promoter-luxAB fusions confirmed a very strong activity of this promoter and indicated a stimulation of expression if the cultures were exposed to elevated light intensities. Conclusion: Comparison to entries in Rfam and experimental testing of selected ncRNA candidates in Synechocystis PCC6803 indicate a high reliability of the current prediction, despite some contamination by the high number of repetitive sequences in some of these species. In particular, we

  7. Development of sump model for containment hydrogen distribution calculations using CFD code

    Energy Technology Data Exchange (ETDEWEB)

    Ravva, Srinivasa Rao, E-mail: srini@aerb.gov.in [Indian Institute of Technology-Bombay, Mumbai (India); Nuclear Safety Analysis Division, Atomic Energy Regulatory Board, Mumbai (India); Iyer, Kannan N. [Indian Institute of Technology-Bombay, Mumbai (India); Gaikwad, A.J. [Nuclear Safety Analysis Division, Atomic Energy Regulatory Board, Mumbai (India)

    2015-12-15

    Highlights: • Sump evaporation model was implemented in FLUENT using three different approaches. • Validated the implemented sump evaporation models against the TOSQAN facility. • It was found that predictions are in good agreement with the data. • Diffusion based model would be able to predict both condensation and evaporation. - Abstract: Computational Fluid Dynamics (CFD) simulations are necessary for obtaining accurate predictions and local behaviour for carrying out containment hydrogen distribution studies. However, commercially available CFD codes do not have all the necessary models for carrying out hydrogen distribution analysis. One such model is the sump or suppression pool evaporation model. The water in the sump may evaporate during the accident progression and affect the mixture concentrations in the containment. Hence, it is imperative to study the sump evaporation and its effect. Sump evaporation is modelled using three different approaches in the present work. The first approach deals with the calculation of the evaporation flow rate and sump liquid temperature, supplying these quantities as boundary conditions through user-defined functions. In this approach, the mean values of the domain are used. In the second approach, the mass, momentum, energy and species sources arising due to the sump evaporation are added to the domain through user-defined functions. Cell values adjacent to the sump interface are used in this approach. Heat transfer between gas and liquid is calculated automatically by the code itself. However, in these two approaches, the evaporation rate was computed using an experimental correlation. In the third approach, the evaporation rate is directly estimated using a diffusion approximation. The performance of these three models is compared with the sump behaviour experiment conducted in the TOSQAN facility. Classification: K. Thermal hydraulics.
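
    The third approach can be sketched as a film-theory diffusion flux: the evaporation rate follows from the vapour-density difference across a near-wall layer. Values below are assumed for illustration; the actual implementation uses the cell adjacent to the sump interface rather than a fixed film thickness.

        def sump_evaporation_flux(rho_v_interface, rho_v_bulk, diffusivity,
                                  film_thickness):
            """Diffusion-approximation evaporation mass flux (kg/m^2/s):
            m'' = D * (rho_sat(T_interface) - rho_bulk) / delta."""
            return diffusivity * (rho_v_interface - rho_v_bulk) / film_thickness

        # Saturated vapour at a ~60 C interface vs. a drier bulk atmosphere
        print(sump_evaporation_flux(0.13, 0.02, 2.6e-5, 1.0e-3))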

  8. Modeling Large sound sources in a room acoustical calculation program

    DEFF Research Database (Denmark)

    Christensen, Claus Lynge

    1999-01-01

    A room acoustical model capable of modelling point, line and surface sources is presented. Line and surface sources are modelled using a special ray-tracing algorithm detecting the radiation pattern of the surfaces in the room. Point sources are modelled using a hybrid calculation method combining this ray-tracing method with image source modelling. With these three source types, it is possible to model large and complex sound sources in workrooms.

  10. Application of the thermal-hydraulic codes in VVER-440 steam generators modelling

    Energy Technology Data Exchange (ETDEWEB)

    Matejovic, P.; Vranca, L.; Vaclav, E. [Nuclear Power Plant Research Inst. VUJE (Slovakia)

    1995-12-31

    The performance of the CATHARE2 V1.3U and RELAP5/MOD3.0 codes applied to VVER-440 SG modelling during normal conditions and during a transient with secondary water lowering is described. A similar recirculation model was chosen for both codes. In the CATHARE calculation, no special measures were taken with the aim of artificially optimizing the flow rate distribution coefficients for the junction between the SG riser and the steam dome. Contrary to the RELAP code, the CATHARE code is able to predict reasonably the secondary swell level in nominal conditions. Both codes are able to properly model the natural phase separation on the SG water level. 6 refs.

  11. Linear-Time Non-Malleable Codes in the Bit-Wise Independent Tampering Model

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Döttling, Nico

    Non-malleable codes were introduced by Dziembowski et al. (ICS 2010) as coding schemes that protect a message against tampering attacks. Roughly speaking, a code is non-malleable if decoding an adversarially tampered encoding of a message m produces the original message m or a value m' (eventually ...) non-malleable codes of Agrawal et al. (TCC 2015) and of Cheraghchi and Guruswami (TCC 2014), and improves the previous result in the bit-wise tampering model: it builds the first non-malleable codes with linear-time complexity and optimal rate (i.e. rate 1 - o(1)).

  12. General Description of Fission Observables: GEF Model Code

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, K.-H. [CENBG, CNRS/IN2 P3, Chemin du Solarium, B.P. 120, F-33175 Gradignan (France); Jurado, B., E-mail: jurado@cenbg.in2p3.fr [CENBG, CNRS/IN2 P3, Chemin du Solarium, B.P. 120, F-33175 Gradignan (France); Amouroux, C. [CEA, DSM-Saclay (France); Schmitt, C., E-mail: schmitt@ganil.fr [GANIL, Bd. Henri Becquerel, B.P. 55027, F-14076 Caen Cedex 05 (France)

    2016-01-15

    The GEF (“GEneral description of Fission observables”) model code is documented. It describes the observables for spontaneous fission, neutron-induced fission and, more generally, for fission of a compound nucleus from any other entrance channel, with given excitation energy and angular momentum. The GEF model is applicable for a wide range of isotopes from Z = 80 to Z = 112 and beyond, up to excitation energies of about 100 MeV. The results of the GEF model are compared with fission barriers, fission probabilities, fission-fragment mass- and nuclide distributions, isomeric ratios, total kinetic energies, and prompt-neutron and prompt-gamma yields and energy spectra from neutron-induced and spontaneous fission. Derived properties of delayed neutrons and decay heat are also considered. The GEF model is based on a general approach to nuclear fission that explains a great part of the complex appearance of fission observables on the basis of fundamental laws of physics and general properties of microscopic systems and mathematical objects. The topographic theorem is used to estimate the fission-barrier heights from theoretical macroscopic saddle-point and ground-state masses and experimental ground-state masses. Motivated by the theoretically predicted early localisation of nucleonic wave functions in a necked-in shape, the properties of the relevant fragment shells are extracted. These are used to determine the depths and the widths of the fission valleys corresponding to the different fission channels and to describe the fission-fragment distributions and deformations at scission by a statistical approach. A modified composite nuclear-level-density formula is proposed. It respects some features in the superfluid regime that are in accordance with new experimental findings and with theoretical expectations. These are a constant-temperature behaviour that is consistent with a considerably increased heat capacity and an increased pairing condensation energy that is

  13. User Manual and Source Code for a LAMMPS Implementation of Constant Energy Dissipative Particle Dynamics (DPD-E)

    Science.gov (United States)

    2014-06-01

    User Manual and Source Code for a LAMMPS Implementation of Constant Energy Dissipative Particle Dynamics (DPD-E), by James P. Larentzos (Engility Corporation), John K. Brennan, Joshua D. Moore, and William D. Mattson.

  14. Modelling and Implementation of Network Coding for Video

    Directory of Open Access Journals (Sweden)

    Can Eyupoglu

    2016-08-01

    In this paper, we investigate Network Coding for Video (NCV), which we apply to video streaming over wireless networks. NCV provides a basis for network coding. We use the NCV algorithm to increase throughput and video quality. When designing the NCV algorithm, we take into account the deadline as well as the decodability of the video packet at the receiver. In network coding, video packets from different flows are packed into a single packet at intermediate nodes and forwarded to other nodes over the wireless network. There are many problems that occur during transmission on the wireless channel, and network coding plays an important role in dealing with them. We observe the throughput benefits of network coding thanks to broadcast operations on wireless networks. The aim of this study is to implement the NCV algorithm in the C programming language, taking as input the video packets generated by the H.264 video codec. In our experiments, we investigated improvements in terms of video quality and throughput in different scenarios.
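
    The packing step is easiest to see with the simplest inter-flow code, a byte-wise XOR of two equal-length packets; a receiver that already holds one packet recovers the other. This illustrates the principle only and is not the NCV algorithm itself.

        def xor_code(pkt_a: bytes, pkt_b: bytes) -> bytes:
            """Pack two equal-length packets from different flows into one
            coded packet at an intermediate node (simple XOR network coding)."""
            return bytes(a ^ b for a, b in zip(pkt_a, pkt_b))

        a, b = b"frame-0001", b"frame-0002"
        coded = xor_code(a, b)
        # A receiver that already holds `a` (e.g. overheard it) decodes `b`:
        assert xor_code(coded, a) == b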

  15. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    Directory of Open Access Journals (Sweden)

    Monteagudo Ángel

    2011-02-01

    Background: As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results: Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to gauge the difficulty in finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel use of evolutionary computing provides a new perspective in the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Conclusions: Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the

  16. Transparent ICD and DRG coding using information technology: linking and associating information sources with the eXtensible Markup Language.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnostically related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.

  17. Supporting the Cybercrime Investigation Process: Effective Discrimination of Source Code Authors Based on Byte-Level Information

    Science.gov (United States)

    Frantzeskou, Georgia; Stamatatos, Efstathios; Gritzalis, Stefanos

    Source code authorship analysis is the particular field that attempts to identify the author of a computer program by treating each program as a linguistically analyzable entity. This is usually based on other undisputed program samples from the same author. There are several cases where the application of such a method could be of major benefit, such as tracing the source of code left in the system after a cyber attack, authorship disputes, proof of authorship in court, etc. In this paper, we present our approach, which is based on byte-level n-gram profiles and is an extension of a method that has been successfully applied to natural language text authorship attribution. We propose a simplified profile and a new similarity measure which is less complicated than the algorithm followed in text authorship attribution and seems more suitable for source code identification, since it is better able to deal with very small training sets. Experiments were performed on two different data sets, one with programs written in C++ and the second with programs written in Java. Unlike the traditional language-dependent metrics used by previous studies, our approach can be applied to any programming language with no additional cost. The presented accuracy rates are much better than the best reported results for the same data sets.
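
    A byte-level n-gram profile and an intersection-style similarity are short to sketch. The profile size, the choice of n, and the raw-intersection similarity below are simplifications for illustration; the paper defines its own simplified profile and similarity measure.

        from collections import Counter

        def profile(source_bytes, n=3, size=2000):
            """Simplified profile: the `size` most frequent byte-level
            n-grams of a program's source."""
            grams = Counter(source_bytes[i:i + n]
                            for i in range(len(source_bytes) - n + 1))
            return {g for g, _ in grams.most_common(size)}

        def similarity(profile_a, profile_b):
            """Similarity as the number of shared n-grams between profiles."""
            return len(profile_a & profile_b)

        known = profile(b"public static void main(String[] args) { run(); }")
        disputed = profile(b"public static int sum(int[] xs) { return 0; }")
        print(similarity(known, disputed))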

  18. ASTEC V2 severe accident integral code: Fission product modelling and validation

    Energy Technology Data Exchange (ETDEWEB)

    Cantrel, L., E-mail: laurent.cantrel@irsn.fr; Cousin, F.; Bosland, L.; Chevalier-Jabet, K.; Marchetto, C.

    2014-06-01

    One main goal of the severe accident integral code ASTEC V2, jointly developed by IRSN and GRS for more than 15 years, is to simulate the overall behaviour of fission products (FP) in a damaged nuclear facility. ASTEC applications are source term determinations, level 2 Probabilistic Safety Assessment (PSA2) studies including the determination of uncertainties, accident management studies and physical analyses of FP experiments to improve the understanding of the phenomenology. ASTEC is a modular code and models of a part of the phenomenology are implemented in each module: the release of FPs and structural materials from degraded fuel in the ELSA module; the transport through the reactor coolant system, approximated as a sequence of control volumes, in the SOPHAEROS module; and the radiochemistry inside the containment nuclear building in the IODE module. Three other modules, CPA, ISODOP and DOSE, allow respectively computing the deposition rate of aerosols inside the containment, the activities of the isotopes as a function of time, and the gaseous dose rate which is needed to model radiochemistry in the gaseous phase. In ELSA, release models are semi-mechanistic and have been validated for a wide range of experimental data, notably for the VERCORS experiments. For SOPHAEROS, the models can be divided into two parts: vapour phase phenomena and aerosol phase phenomena. For IODE, iodine and ruthenium chemistry are modelled based on a semi-mechanistic approach; these FPs can form some volatile species and are particularly important in terms of potential radiological consequences. The models in these 3 modules are based on a wide experimental database, resulting for a large part from international programmes, and they are considered at the state of the art of the R and D knowledge. This paper illustrates some FP modelling capabilities of ASTEC and computed values are compared to some experimental results, which are parts of the validation matrix.

  19. Open source software engineering for geoscientific modeling applications

    Science.gov (United States)

    Bilke, L.; Rink, K.; Fischer, T.; Kolditz, O.

    2012-12-01

    OpenGeoSys (OGS) is a scientific open source project for numerical simulation of thermo-hydro-mechanical-chemical (THMC) processes in porous and fractured media. The OGS software development community is distributed all over the world and people with different backgrounds are contributing code to a complex software system. The following points have to be addressed for successful software development:
    - Platform independent code
    - A unified build system
    - A version control system
    - A collaborative project web site
    - Continuous builds and testing
    - Providing binaries and documentation for end users
    OGS should run on a PC as well as on a computing cluster regardless of the operating system. Therefore the code should not include any platform specific feature or library. Instead, open source and platform independent libraries like Qt for the graphical user interface or VTK for visualization algorithms are used. A source code management and version control system is a definite requirement for distributed software development. For this purpose Git is used, which enables developers to work on separate versions (branches) of the software and to merge those versions at some point to the official one. The version control system is integrated into an information and collaboration website based on a wiki system. The wiki is used for collecting information such as tutorials, application examples and case studies. Discussions take place in the OGS mailing list. To improve code stability and to verify code correctness, a continuous build and testing system based on the Jenkins Continuous Integration Server has been established. This server is connected to the version control system and does the following on every code change:
    - Compiles (builds) the code on every supported platform (Linux, Windows, MacOS)
    - Runs a comprehensive test suite of over 120 benchmarks and verifies the results
    - Runs software development related metrics on the code (like compiler warnings, code complexity

  20. Performance Analysis for Bit Error Rate of DS- CDMA Sensor Network Systems with Source Coding

    Directory of Open Access Journals (Sweden)

    Haider M. AlSabbagh

    2012-03-01

    The minimum energy (ME) coding combined with a DS-CDMA wireless sensor network is analyzed in order to reduce the energy consumed and the multiple access interference (MAI) in relation to the number of users (receivers). ME coding exploits redundant bits for saving power, utilizing an RF link and On-Off Keying modulation. The relations are presented and discussed for several levels of errors expected in the employed channel, in terms of the bit error rate and the SNR for a given number of users (receivers).
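
    Minimum-energy coding pairs the most probable source symbols with the lowest-Hamming-weight codewords so that On-Off Keying transmits as few 'on' bits as possible. A sketch with an assumed four-symbol source and 3-bit redundant codewords; the symbol probabilities and code length are illustration values, not taken from the paper.

        def minimum_energy_codebook(symbol_probs, code_len):
            """Assign low-Hamming-weight codewords to the most probable
            symbols, minimizing the expected number of transmitted 1s."""
            words = sorted(range(2 ** code_len), key=lambda w: bin(w).count("1"))
            ranked = sorted(symbol_probs, key=symbol_probs.get, reverse=True)
            return {s: format(words[i], f"0{code_len}b")
                    for i, s in enumerate(ranked)}

        probs = {"s0": 0.5, "s1": 0.3, "s2": 0.15, "s3": 0.05}  # assumed source
        book = minimum_energy_codebook(probs, code_len=3)
        avg_on = sum(p * book[s].count("1") for s, p in probs.items())
        print(book, avg_on)   # expected 'on' bits per symbol: 0.5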

  1. INTRA/Mod3.2. Manual and Code Description. Volume I - Physical Modelling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Jenny; Edlund, O.; Hermann, J.; Johansson, Lise-Lotte

    1999-01-01

    The INTRA Manual consists of two volumes. Volume I of the manual is a thorough description of the code INTRA, the physical modelling of INTRA and the governing numerical methods, and Volume II, the User's Manual, is an input description. This document, the Physical Modelling of INTRA, contains code characteristics, integration methods and applications.

  2. Code generation by model transformation: a case study in transformation modularity

    NARCIS (Netherlands)

    Hemel, Z.; Kats, L.C.L.; Groenewegen, D.M.; Visser, E.

    2009-01-01

    The realization of model-driven software development requires effective techniques for implementing code generators for domain-specific languages. This paper identifies techniques for improving separation of concerns in the implementation of generators. The core technique is code generation by model

  3. An Open Source modular platform for hydrological model implementation

    Science.gov (United States)

    Kolberg, Sjur; Bruland, Oddbjørn

    2010-05-01

    An implementation framework for the setup and evaluation of spatio-temporal models is developed, forming a highly modularized distributed model system. The ENKI framework allows building space-time models for hydrological or other environmental purposes from a suite of separately compiled subroutine modules. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational hydropower forecasting or other water resource management. Written in C++, ENKI uses a plug-in structure to build a complete model from separately compiled subroutine implementations. These modules contain very little code apart from the core process simulation, and are compiled as dynamic-link libraries (dll). A narrow interface allows the main executable to recognise the number and type of the different variables in each routine. The framework then exposes these variables to the user within the proper context, ensuring that time series exist for input variables, initialisation for states, GIS data sets for static map data, manually or automatically calibrated values for parameters, etc. ENKI is designed to meet three different levels of involvement in model construction:
    - Model application: running and evaluating a given model; regional calibration against arbitrary data using a rich suite of objective functions, including likelihood and Bayesian estimation; uncertainty analysis directed towards input or parameter uncertainty. Need not: know the model's composition of subroutines, the internal variables in the model, or the creation of method modules.
    - Model analysis: link together different process methods, including parallel setup of alternative methods for solving the same task; investigate the effect of different spatial discretization schemes. Need not

  4. SCRIC: a code dedicated to the detailed emission and absorption of heterogeneous NLTE plasmas; application to xenon EUV sources; SCRIC: un code pour calculer l'absorption et l'emission detaillees de plasmas hors equilibre, inhomogenes et etendus; application aux sources EUV a base de xenon

    Energy Technology Data Exchange (ETDEWEB)

    Gaufridy de Dortan, F. de

    2006-07-01

    Nearly all spectral opacity codes for LTE and NLTE plasmas rely on approximate configuration modelling or even supra-configuration modelling for mid-Z plasmas. But in some cases, configuration interaction (both relativistic and non-relativistic) induces dramatic changes in spectral shapes. We propose here a new detailed emissivity code with configuration mixing to allow for a realistic description of complex mid-Z plasmas. A collisional-radiative calculation, based on precise HULLAC energies and cross sections, determines the populations. Detailed emissivities and opacities are then calculated and the radiative transfer equation is solved for wide inhomogeneous plasmas. This code is able to cope rapidly with very large amounts of atomic data. It is therefore possible to use complex hydrodynamic files even on personal computers in a very limited time. We used this code for comparison with xenon EUV sources within the framework of nano-lithography developments. It appears that configuration mixing strongly shifts satellite lines and must be included in the description of these sources to enhance their efficiency. (author)

  5. Developing a Successful Open Source Training Model

    Directory of Open Access Journals (Sweden)

    Belinda Lopez

    2010-01-01

    Full Text Available Training programs for open source software provide a tangible, and sellable, product. A successful training program not only builds revenue, it also adds to the overall body of knowledge available for the open source project. By gathering best practices and taking advantage of the collective expertise within a community, it may be possible for a business to partner with an open source project to build a curriculum that promotes the project and supports the needs of the company's training customers. This article describes the initial approach used by Canonical, the commercial sponsor of the Ubuntu Linux operating system, to engage the community in the creation of its training offerings. We then discuss alternate curriculum creation models and some of the conditions that are necessary for successful collaboration between creators of existing documentation and commercial training providers.

  6. Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code

    Science.gov (United States)

    Taherkhani, Ahmad; Malmi, Lauri

    2013-01-01

    In this paper, we present a method for recognizing algorithms from students programming submissions coded in Java. The method is based on the concept of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…

  7. Low complexity source and channel coding for mm-wave hybrid fiber-wireless links

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Vegas Olmos, Juan José; Pang, Xiaodan;

    2014-01-01

    …performance of several encoded high-definition video sequences constrained by the channel bitrate and the packet size. We argue that light video compression and low complexity channel coding for the W-band fiber-wireless link enable low-delay multiple channel 1080p wireless HD video transmission…

  8. 49 CFR 41.120 - Acceptable model codes.

    Science.gov (United States)

    2010-10-01

    ... Natural Hazards, Federal Emergency Management Agency, 500 C Street, SW., Washington, DC 20472.): (i) The..., published by the Building Officials and Code Administrators, 4051 West Flossmoor Rd., Country Club Hills... Disaster Relief and Emergency Assistance Act (Stafford Act), 42 U.S.C. 5170a, 5170b, 5192, and 5193, or...

  9. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    Science.gov (United States)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
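
The double-difference idea can be sketched directly: take a cross-delta between the two correlated data sets, then an adjacent-delta along the result. The numpy formulation and names below are illustrative, not the patented implementation:

    import numpy as np

    def double_difference(a, b):
        """Cross-delta between two correlated M-member data sets, then an
        adjacent-delta along the result (illustrative formulation)."""
        cross = b - a              # cross-delta: removes inter-set correlation
        return np.diff(cross)      # adjacent-delta: removes intra-set correlation

    def recover_b(a, dd, first_cross):
        """Inverse post-decoding: rebuild the second data set from the first
        one, the double-difference set, and the first cross-delta value."""
        cross = np.concatenate(([first_cross], first_cross + np.cumsum(dd)))
        return a + cross

    a = np.array([10, 12, 15, 19], dtype=np.int64)   # e.g. spectral band 1
    b = np.array([11, 14, 18, 23], dtype=np.int64)   # adjacent band, correlated
    dd = double_difference(a, b)
    assert np.array_equal(recover_b(a, dd, (b - a)[0]), b)

The resulting double-difference set is small in magnitude when the sets are correlated, which is what makes subsequent entropy coding or lossy compression more efficient.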

  10. Corner neutronic code

    Directory of Open Access Journals (Sweden)

    V.P. Bereznev

    2015-10-01

    An iterative solution process is used, including external iterations for the fission source and internal iterations for the scattering source. The paper presents the results of a cross-verification against the Monte Carlo MMK code [3] and on a model of the BN-800 reactor core.
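
The external/internal iteration scheme mentioned here follows the standard power-iteration pattern: outer iterations update the fission source and the multiplication factor, inner iterations converge the scattering source. A generic sketch of that pattern (not CORNER's discrete-ordinates implementation):

    import numpy as np

    def power_iteration(L_inv, F, S, tol=1e-8, max_outer=500):
        """Sketch of external (fission-source) / internal (scattering-source)
        iterations for L*phi = (1/k)*F*phi + S*phi. L_inv applies the inverse
        loss operator; F and S produce fission and scattering sources."""
        phi = np.ones(F.shape[0])
        k = 1.0
        fiss = F @ phi
        for _ in range(max_outer):
            src = fiss / k                   # external iteration: fission source
            for _ in range(50):              # internal iterations: scattering
                phi_new = L_inv @ (src + S @ phi)
                done = np.linalg.norm(phi_new - phi) < tol * np.linalg.norm(phi_new)
                phi = phi_new
                if done:
                    break
            fiss_new = F @ phi
            k_new = k * fiss_new.sum() / fiss.sum()
            if abs(k_new - k) < tol:
                return k_new, phi
            k, fiss = k_new, fiss_new
        return k, phi

    # Toy 2-group example (illustrative numbers only).
    L = np.array([[1.0, 0.0], [-0.2, 1.2]])
    F = np.array([[0.5, 0.8], [0.0, 0.0]])
    S = np.zeros((2, 2))
    k, phi = power_iteration(np.linalg.inv(L), F, S)
    print(k)   # converges to k-eff ~ 0.6333 for these numbers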

  11. The modeling of core melting and in-vessel corium relocation in the APRIL code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S.W.; Podowski, M.Z.; Lahey, R.T. [Rensselaer Polytechnic Institute, Troy, NY (United States)] [and others]

    1995-09-01

    This paper is concerned with the modeling of severe accident phenomena in boiling water reactors (BWR). New models of core melting and in-vessel corium debris relocation are presented, developed for implementation in the APRIL computer code. The results of model testing and validations are given, including comparisons against available experimental data and parametric/sensitivity studies. Also, the application of these models, as parts of the APRIL code, is presented to simulate accident progression in a typical BWR reactor.

  12. Laser-Plasma Modeling Using PERSEUS Extended-MHD Simulation Code for HED Plasmas

    Science.gov (United States)

    Hamlin, Nathaniel; Seyler, Charles

    2016-10-01

    We discuss the use of the PERSEUS extended-MHD simulation code for high-energy-density (HED) plasmas in modeling laser-plasma interactions in relativistic and nonrelativistic regimes. By formulating the fluid equations as a relaxation system in which the current is semi-implicitly time-advanced using the Generalized Ohm's Law, PERSEUS enables modeling of two-fluid phenomena in dense plasmas without the need to resolve the smallest electron length and time scales. For relativistic and nonrelativistic laser-target interactions, we have validated a cycle-averaged absorption (CAA) laser driver model against the direct approach of driving the electromagnetic fields. The CAA model refers to driving the radiation energy and flux rather than the fields, and using hyperbolic radiative transport, coupled to the plasma equations via energy source terms, to model absorption and propagation of the radiation. CAA has the advantage of not requiring adequate grid resolution of each laser wavelength, so that the system can span many wavelengths without requiring prohibitive CPU time. For several laser-target problems, we compare existing MHD results to extended-MHD results generated using PERSEUS with the CAA model, and examine effects arising from Hall physics. This work is supported by the National Nuclear Security Administration stewardship sciences academic program under Department of Energy cooperative agreements DE-FOA-0001153 and DE-NA0001836.

  13. A computer code for calculations in the algebraic collective model of the atomic nucleus

    CERN Document Server

    Welsh, T A

    2016-01-01

    A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) x SO(5) dynamical group. This, in particular, obviates the use of coefficients of fractional parentage. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [pi x q x pi]_0 and [pi x pi]_{LM}, where q_M are the model's quadrupole moments, and pi_N are corresponding conjugate momenta (-2 <= M,N <= 2). The code also provides ready access to SO(3)-reduced SO(5) Clebsch-Gordan coefficients through data files provided with the code.

  14. DiskFit: a code to fit simple non-axisymmetric galaxy models either to photometric images or to kinematic maps

    CERN Document Server

    Sellwood, J A

    2015-01-01

    This posting announces public availability of version 1.2 of the DiskFit software package developed by the authors, which may be used to fit simple non-axisymmetric models either to images or to velocity fields of disk galaxies. Here we give an outline of the capability of the code and provide the link to downloading executables, the source code, and a comprehensive on-line manual. We argue that in important respects the code is superior to rotcur for fitting kinematic maps and to galfit for fitting multi-component models to photometric images.

  15. Modelling of skin exposure from distributed sources

    DEFF Research Database (Denmark)

    Fogh, C.L.; Andersson, Kasper Grann

    2000-01-01

    A simple model of indoor air pollution concentrations was used together with experimental results on deposition velocities to skin to calculate the skin dose from an outdoor plume of contaminants. The primary pathway was considered to be direct deposition to the skin from a homogeneously distributed air source. The model has been used to show that skin deposition was a significant dose contributor, for example when compared to the inhalation dose. (C) 2000 British Occupational Hygiene Society. Published by Elsevier Science Ltd. All rights reserved.

  16. Accelerating scientific codes by performance and accuracy modeling

    CERN Document Server

    Fabregat-Traver, Diego; Bientinesi, Paolo

    2016-01-01

    Scientific software is often driven by multiple parameters that affect both accuracy and performance. Since finding the optimal configuration of these parameters is a highly complex task, it is extremely common that the software is used suboptimally. In a typical scenario, accuracy requirements are imposed, and attained through suboptimal performance. In this paper, we present a methodology for the automatic selection of parameters for simulation codes, and a corresponding prototype tool. To be amenable to our methodology, the target code must expose the parameters affecting accuracy and performance, and there must be formulas available for the error bounds and the computational complexity of the underlying methods. As a case study, we consider the particle-particle particle-mesh method (PPPM) from the LAMMPS suite for molecular dynamics, and use our tool to identify configurations of the input parameters that achieve a given accuracy in the shortest execution time. When compared with the configurations suggested by exp…
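
The selection methodology needs only two ingredients per method: an error-bound formula and a cost formula over the exposed parameters. A generic sketch of the resulting search (the formulas and parameter names below are placeholders, not the PPPM bounds used by the authors):

    import itertools

    def select_parameters(grid, error_bound, cost, target_accuracy):
        """Pick the parameter configuration with minimal predicted cost
        among those whose error bound meets the accuracy requirement."""
        feasible = [cfg for cfg in grid if error_bound(**cfg) <= target_accuracy]
        if not feasible:
            raise ValueError("no configuration meets the accuracy target")
        return min(feasible, key=lambda cfg: cost(**cfg))

    # Hypothetical two-parameter method: mesh size n and interpolation order p.
    grid = [{"n": n, "p": p} for n, p in itertools.product([16, 32, 64], [2, 4, 6])]
    error_bound = lambda n, p: (8.0 / n) ** p     # placeholder error formula
    cost = lambda n, p: n**3 * p**1.5             # placeholder complexity formula
    print(select_parameters(grid, error_bound, cost, 1e-4))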

  17. A high burnup model developed for the DIONISIO code

    Energy Technology Data Exchange (ETDEWEB)

    Soba, A. [U.A. Combustibles Nucleares, Comisión Nacional de Energía Atómica, Avenida del Libertador 8250, 1429 Buenos Aires (Argentina); Denis, A., E-mail: denis@cnea.gov.ar [U.A. Combustibles Nucleares, Comisión Nacional de Energía Atómica, Avenida del Libertador 8250, 1429 Buenos Aires (Argentina); Romero, L. [U.A. Reactores Nucleares, Comisión Nacional de Energía Atómica, Avenida del Libertador 8250, 1429 Buenos Aires (Argentina); Villarino, E.; Sardella, F. [Departamento Ingeniería Nuclear, INVAP SE, Comandante Luis Piedra Buena 4950, 8430 San Carlos de Bariloche, Río Negro (Argentina)

    2013-02-15

    A group of subroutines, designed to extend the application range of the fuel performance code DIONISIO to high burn up, has recently been included in the code. The new calculation tools, which are tuned for UO2 fuels in LWR conditions, predict the radial distribution of power density, burnup, and concentration of diverse nuclides within the pellet. The balance equations of all the isotopes involved in the fission process are solved in a simplified manner, and the one-group effective cross sections of all of them are obtained as functions of the radial position in the pellet, burnup, and enrichment in 235U. In this work, the subroutines are described and the results of the simulations performed with DIONISIO are presented. The good agreement with the data provided in the FUMEX II/III NEA data bank can be easily recognized.

  18. A high burnup model developed for the DIONISIO code

    Science.gov (United States)

    Soba, A.; Denis, A.; Romero, L.; Villarino, E.; Sardella, F.

    2013-02-01

    A group of subroutines, designed to extend the application range of the fuel performance code DIONISIO to high burn up, has recently been included in the code. The new calculation tools, which are tuned for UO2 fuels in LWR conditions, predict the radial distribution of power density, burnup, and concentration of diverse nuclides within the pellet. The balance equations of all the isotopes involved in the fission process are solved in a simplified manner, and the one-group effective cross sections of all of them are obtained as functions of the radial position in the pellet, burnup, and enrichment in 235U. In this work, the subroutines are described and the results of the simulations performed with DIONISIO are presented. The good agreement with the data provided in the FUMEX II/III NEA data bank can be easily recognized.

  19. Test code for the assessment and improvement of Reynolds stress models

    Science.gov (United States)

    Rubesin, M. W.; Viegas, J. R.; Vandromme, D.; Minh, H. HA

    1987-01-01

    An existing two-dimensional, compressible flow, Navier-Stokes computer code, containing a full Reynolds stress turbulence model, was adapted for use as a test bed for assessing and improving turbulence models based on turbulence simulation experiments. To date, the results of using the code in comparison with simulated channel flow and flow over an oscillating flat plate have shown that the turbulence model used in the code needs improvement for these flows. It is also shown that direct simulations of turbulent flows over a range of Reynolds numbers are needed to guide subsequent improvement of turbulence models.

  1. Evaluation of the analysis models in the ASTRA nuclear design code system

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Nam Jin; Park, Chang Jea; Kim, Do Sam; Lee, Kyeong Taek; Kim, Jong Woon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    2000-11-15

    In the field of nuclear reactor design, the main practice has been the application of improved design code systems. In that process, a large body of experience and knowledge has been accumulated in processing input data, nuclear fuel reload design, and the production and analysis of design data. However, less effort has been devoted to analysing the methodology and to developing or improving those code systems. Recently, KEPCO Nuclear Fuel Company (KNFC) developed the ASTRA (Advanced Static and Transient Reactor Analyzer) code system for nuclear reactor design and analysis. In the code system, two-group constants are generated by the CASMO-3 code system. The objective of this research is to analyze the analysis models used in the ASTRA/CASMO-3 code system. This evaluation requires in-depth comprehension of the models, which is as important as the development of the code system itself. Currently, most of the code systems used in domestic nuclear power plants were imported, so it is very difficult to maintain them and to adapt them to changing requirements. Therefore, the evaluation of the analysis models in the ASTRA nuclear reactor design code system is very important.

  2. Earthquake Source Modeling using Time-Reversal or Adjoint Methods

    Science.gov (United States)

    Hjorleifsdottir, V.; Liu, Q.; Tromp, J.

    2007-12-01

    In recent years there have been great advances in earthquake source modeling. Despite the effort, many questions about earthquake source physics remain unanswered. In order to address some of these questions, it is useful to reconstruct what happens on the fault during an event. In this study we focus on determining the slip distribution on a fault plane, or a moment-rate density, as a function of time and space. This is a difficult process involving many trade-offs between model parameters. The difficulty lies in the fact that earthquakes are not a controlled experiment: we don't know when and where they will occur, and therefore we have only limited control over what data will be acquired for each event. As a result, much of the advance that can be made is by extracting more information out of the data that is routinely collected. Here we use a technique that uses 3D waveforms to invert for the slip on a fault plane during rupture. By including 3D waveforms we can use parts of the waveforms that are often discarded, as they are altered by structural effects in ways that cannot be accurately predicted using 1D Earth models. However, generating 3D synthetics is computationally expensive. Therefore we turn to an 'adjoint' method (Tarantola, Geophysics 1984; Tromp et al., GJI 2005) that reduces the computational cost relative to methods that use Green's function libraries. In its simplest form an adjoint method for inverting for source parameters can be viewed as a time-reversal experiment performed with a wave-propagation code (McMechan, GJRAS 1982). The recorded seismograms are inserted as simultaneous sources at the location of the receiver and the computed wave field (which we call the adjoint wavefield) is recorded on an array around the earthquake location. Here we show, mathematically, that for source inversions for a moment tensor (distributed) source, the time integral of the adjoint strain is the quantity to monitor. We present the results of time…

  3. Analysing Renewable Energy Source Impacts on Power System National Network Code

    Directory of Open Access Journals (Sweden)

    Georgiana Balaban

    2017-08-01

    Full Text Available This paper analyses the impact of renewable energy sources integrated into the Romanian power system on the operation of the electrical network, considering the reduction of electricity consumption with respect to the 1990s. This decrease has led to increased difficulties in integrating renewable energy sources into the power system (network reinforcements), as well as issues concerning the production/consumption balance. Beyond a certain proportion of the energy mix, intermittent renewable energy sources require the expansion of networks, storage, back-up capacities and efforts towards flexible consumption, in the absence of which renewable energy sources cannot be used or the grid can be overloaded. To highlight the difficulty of connecting the significant capacities installed in wind power plants and photovoltaic installations, the paper presents a case study for the Dobrogea area, which has the most installed capacity from renewable energy sources in operation.

  4. A reversibility-gain model for integer Karhunen-Loève transform design in video coding

    Institute of Scientific and Technical Information of China (English)

    Xing-guo ZHU; Lu YU

    2015-01-01

    The Karhunen-Loève transform (KLT) is the optimal transform that minimizes distortion at a given bit allocation for a Gaussian source. As a KLT matrix usually contains non-integers, integer-KLT design is a classical problem. In this paper, a joint reversibility-gain (R-G) model is proposed for integer-KLT design in video coding. Specifically, the 'reversibility' is modeled according to a distortion analysis of applying the forward and inverse integer transforms without quantization. It not only measures how invertible a transform is, but also bounds the distortion introduced by the non-orthonormal integer transform process. The 'gain' means transform coding gain (TCG), which is a widely used criterion for transform design in video coding. Since the KLT maximizes the TCG under some assumptions, here we define the TCG loss ratio (LR) to measure how much coding gain an integer-KLT loses when compared with the original KLT. Thus, the R-G model can be explained as follows: subject to a certain TCG LR, the integer-KLT with the best reversibility is the optimal integer transform for a given non-integer KLT. Experimental results show that the R-G model can guide the design of integer-KLTs with good performance.
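
Transform coding gain can be computed directly from the transform-domain variances, and the loss ratio then compares an integer approximation against the exact KLT. A small sketch for an AR(1) source (the correlation value and the scaled-integer approximation are illustrative, not the paper's design):

    import numpy as np

    def coding_gain(T, R):
        """Transform coding gain (dB): ratio of arithmetic to geometric mean
        of transform-coefficient variances, for source covariance R.
        Variances are normalized by row energy to handle non-unit rows."""
        var = np.diag(T @ R @ T.T) / np.sum(T**2, axis=1)
        return 10 * np.log10(var.mean() / np.exp(np.mean(np.log(var))))

    n, rho = 4, 0.9                      # AR(1) source, illustrative correlation
    R = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

    _, klt = np.linalg.eigh(R)           # exact KLT: eigenvectors of R
    klt = klt.T
    int_klt = np.round(64 * klt)         # naive scaled-integer approximation

    g_klt, g_int = coding_gain(klt, R), coding_gain(int_klt, R)
    print(g_klt, g_int, (g_klt - g_int) / g_klt)   # TCG loss ratio (LR)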

  5. Version 3.0 of code Java for 3D simulation of the CCA model

    Science.gov (United States)

    Zhang, Kebo; Zuo, Junsen; Dou, Yifeng; Li, Chao; Xiong, Hailing

    2016-10-01

    In this paper we provide a new version of the program, replacing the previous version. The frequency of traversing the clusters list was reduced and some code blocks were optimized; in addition, we appended and revised the comments of the source code for some methods and attributes. The experimental comparison shows that the new version has better time efficiency than the previous one.

  6. Developing seismogenic source models based on geologic fault data

    Science.gov (United States)

    Haller, Kathleen M.; Basili, Roberto

    2011-01-01

    Euro-Mediterranean, http://www.share-eu.org/; EMME in the Middle East, http://www.emme-gem.org/) and global scale (e.g., GEM, http://www.globalquakemodel.org/; Anonymous 2008). To some extent, each of these efforts is still trying to resolve the level of optimal detail required for this type of compilation. The comparison we provide defines a common standard for consideration by the international community for future regional and global seismogenic source models by identifying the necessary parameters that capture the essence of geological fault data in order to characterize seismogenic sources. In addition, we inform potential users of differences in our usage of common geological/seismological terms to avoid inappropriate use of the data in our models and provide guidance to convert the data from one model to the other (for detailed instructions, see the electronic supplement to this article). Applying our recommendations will permit probabilistic seismic hazard assessment codes to run seamlessly using either seismogenic source input. The USGS and INGV database schema compare well at a first-level inspection. Both databases contain a set of fields representing generalized fault three-dimensional geometry and additional fields that capture the essence of past earthquake occurrences. Nevertheless, there are important differences. When we further analyze supposedly comparable fields, many are defined differently. These differences would cause anomalous results in hazard prediction if one assumes the values are similarly defined. The data, however, can be made fully compatible using simple transformations.

  7. Homomorphic Signature Algorithm for Multi-source Linear Network Coding

    Institute of Scientific and Technical Information of China (English)

    Niu Shufen; Wang Caifen

    2012-01-01

    Network coding is highly susceptible to pollution attacks, which cannot be prevented by standard signature schemes, since traditional signatures are not applicable to multi-source network coding. Based on homomorphic functions and bilinear pairings, an efficient signature scheme for multi-source network coding against pollution attacks is proposed, in which each source node signs its file with its own private key, and intermediate or sink nodes can verify the integrity of the received messages using only the corresponding public keys. Under the random oracle model, the scheme is proved to be secure against attacks by source nodes and intermediate nodes.

  8. Light source modeling for automotive lighting devices

    Science.gov (United States)

    Zerhau-Dreihoefer, Harald; Haack, Uwe; Weber, Thomas; Wendt, Dierk

    2002-08-01

    Automotive lighting devices generally have to meet high standards. For example, to avoid discomfort glare for the oncoming traffic, the luminous intensities of a low beam headlight must decrease by more than one order of magnitude within a fraction of a degree along the horizontal cutoff line. At the same time, a comfortable homogeneous illumination of the road requires slowly varying luminous intensities below the cutoff line. All this has to be realized taking into account both the legal requirements and the customer's stylistic specifications. In order to be able to simulate and optimize devices with good optical performance, different light source models are required. In the early stages of, e.g., reflector development, simple unstructured models allow a very fast development of the reflector's shape. On the other hand, the final simulation of a complex headlamp or signal light requires a sophisticated model of the spectral luminance. In addition to theoretical models based on the light source's geometry, measured luminance data can also be used in the simulation and optimization process.

  9. TOMO3D: 3-D joint refraction and reflection traveltime tomography parallel code for active-source seismic data—synthetic test

    Science.gov (United States)

    Meléndez, A.; Korenaga, J.; Sallarès, V.; Miniussi, A.; Ranero, C. R.

    2015-10-01

    We present a new 3-D traveltime tomography code (TOMO3D) for the modelling of active-source seismic data that uses the arrival times of both refracted and reflected seismic phases to derive the velocity distribution and the geometry of reflecting boundaries in the subsurface. This code is based on its popular 2-D version TOMO2D from which it inherited the methods to solve the forward and inverse problems. The traveltime calculations are done using a hybrid ray-tracing technique combining the graph and bending methods. The LSQR algorithm is used to perform the iterative regularized inversion to improve the initial velocity and depth models. In order to cope with an increased computational demand due to the incorporation of the third dimension, the forward problem solver, which takes most of the run time (~90 per cent in the test presented here), has been parallelized with a combination of multi-processing and message passing interface standards. This parallelization distributes the ray-tracing and traveltime calculations among available computational resources. The code's performance is illustrated with a realistic synthetic example, including a checkerboard anomaly and two reflectors, which simulates the geometry of a subduction zone. The code is designed to invert for a single reflector at a time. A data-driven layer-stripping strategy is proposed for cases involving multiple reflectors, and it is tested for the successive inversion of the two reflectors. Layers are bound by consecutive reflectors, and an initial velocity model for each inversion step incorporates the results from previous steps. This strategy poses simpler inversion problems at each step, allowing the recovery of strong velocity discontinuities that would otherwise be smoothened.
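
The regularized LSQR step can be sketched with scipy: stack the traveltime sensitivity matrix with scaled damping/smoothing rows and solve the resulting least-squares system. The matrix contents below are toy values, not TOMO3D's kernels:

    import numpy as np
    from scipy.sparse import vstack, identity, random as sprandom
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(0)
    G = sprandom(200, 50, density=0.1, random_state=0)  # toy sensitivity kernel
    d = rng.normal(size=200)                            # traveltime residuals

    lam = 0.5                                           # regularization weight
    A = vstack([G, lam * identity(50)])                 # append damping rows
    b = np.concatenate([d, np.zeros(50)])

    dm = lsqr(A, b)[0]     # model update: velocity/depth perturbations
    print(dm.shape)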

  10. Numerical modeling of a high power terahertz source in Shanghai

    Institute of Scientific and Technical Information of China (English)

    DAI Jin-Hua; DENG Hai-Xiao; DAI Zhi-Min

    2012-01-01

    On the basis of an energy-recovery linac, a terahertz source with a potential for kilowatts of average power is proposed in Shanghai, which will serve as an effective tool for material and biological sciences. In this paper, the physical design of two free electron laser (FEL) oscillators, in the frequency ranges of 2-10 THz and 0.5-2 THz respectively, is presented. By using three-dimensional, time-dependent numerical modeling with GENESIS in combination with a paraxial optical propagation code, the THz oscillator performance, the detuning effects, and the tolerance requirements on the electron beam, the undulator field and the cavity alignment are given.

  11. Investigation of Anisotropy Caused by Cylinder Applicator on Dose Distribution around Cs-137 Brachytherapy Source using MCNP4C Code

    Directory of Open Access Journals (Sweden)

    Sedigheh Sina

    2011-06-01

    Full Text Available Introduction: Brachytherapy is a type of radiotherapy in which radioactive sources are used in the proximity of tumors, normally for the treatment of malignancies in the head, prostate and cervix. Materials and Methods: The Cs-137 Selectron source is a low-dose-rate (LDR) brachytherapy source used in a remote afterloading system for the treatment of different cancers. This system uses active and inactive spherical sources of 2.5 mm diameter, which can be used in different configurations inside the applicator to obtain different dose distributions. In this study, the dose distribution at different distances from the source was first obtained around a single pellet inside the applicator in a water phantom using the MCNP4C Monte Carlo code. The simulations were then repeated for six active pellets in the applicator and for six point sources. Results: The anisotropy of the dose distribution due to the presence of the applicator was obtained by dividing the dose at each distance and angle by the dose at the same distance and an angle of 90 degrees. According to the results, the dose decreased towards the applicator tips. For example, for points at distances of 5 and 7 cm from the source and an angle of 165 degrees, the discrepancies reached 5.8% and 5.1%, respectively. By increasing the number of pellets to six, these values reached 30% for the angle of 5 degrees. Discussion and Conclusion: The results indicate that the presence of the applicator causes a significant dose decrease at the tip of the applicator compared with the dose in the transverse plane. However, treatment planning systems consider an isotropic dose distribution around the source, and this causes significant errors in treatment planning, which are not negligible, especially for a large number of sources inside the applicator.

  12. Modeling Proton- and Light Ion-Induced Reactions at Low Energies in the MARS15 Code

    Energy Technology Data Exchange (ETDEWEB)

    Rakhno, I. L. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Mokhov, N. V. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Gudima, K. K. [National Academy of Sciences, Cisineu (Moldova)

    2015-04-25

    An implementation of both the ALICE code and the TENDL evaluated nuclear data library for describing nuclear reactions induced by low-energy projectiles in the Monte Carlo code MARS15 is presented. Comparisons between modeling results and experimental data on reaction cross sections and secondary particle distributions are shown.

  13. The Nuremberg Code subverts human health and safety by requiring animal modeling

    OpenAIRE

    Greek Ray; Pippus Annalea; Hansen Lawrence A

    2012-01-01

    Abstract Background The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive...

  14. Implementation of the critical points model in a SFM-FDTD code working in oblique incidence

    Energy Technology Data Exchange (ETDEWEB)

    Hamidi, M; Belkhir, A; Lamrous, O [Laboratoire de Physique et Chimie Quantique, Universite Mouloud Mammeri, Tizi-Ouzou (Algeria); Baida, F I, E-mail: omarlamrous@mail.ummto.dz [Departement d' Optique P.M. Duffieux, Institut FEMTO-ST UMR 6174 CNRS Universite de Franche-Comte, 25030 Besancon Cedex (France)

    2011-06-22

    We describe the implementation of the critical points model in a finite-difference time-domain (FDTD) code working in oblique incidence and dealing with dispersive media through the split-field method. Some tests are presented to validate our code, in addition to an application devoted to the plasmon resonance of a gold nanoparticle grating.

  15. Addressing Hate Speech and Hate Behaviors in Codes of Conduct: A Model for Public Institutions.

    Science.gov (United States)

    Neiger, Jan Alan; Palmer, Carolyn; Penney, Sophie; Gehring, Donald D.

    1998-01-01

    As part of a larger study, researchers collected campus codes prohibiting hate crimes, which were then reviewed to determine whether the codes presented constitutional problems. Based on this review, the authors develop and present a model policy that is content neutral and does not use language that could be viewed as unconstitutionally vague or…

  16. The price of ignorance: The impact of side-information on delay for lossless source-coding

    CERN Document Server

    Chang, Cheng

    2007-01-01

    Inspired by the context of compressing encrypted sources, this paper considers the general tradeoff between rate, end-to-end delay, and probability of error for lossless source coding with side-information. The notion of end-to-end delay is made precise by considering a sequential setting in which source symbols are revealed in real time and need to be reconstructed at the decoder within a certain fixed latency requirement. Upper bounds are derived on the reliability functions with delay when side-information is known only to the decoder as well as when it is also known at the encoder. When the encoder is not ignorant of the side-information (including the trivial case when there is no side-information), it is possible to have substantially better tradeoffs between delay and probability of error at all rates. This shows that there is a fundamental price of ignorance in terms of end-to-end delay when the encoder is not aware of the side information. This effect is not visible if only fixed-block-length codes a...

  17. Analyses of containment source term of BWR5 considering iodine chemistry suppression pool with THALES-2 code

    Energy Technology Data Exchange (ETDEWEB)

    Ishikawa, Jun; Moriyama, Kiyofumi [Japan Atomic Energy Agency, Ibaraki (Japan)

    2009-05-15

    After the JCO criticality accident in 1999, the importance of PSA application research for emergency planning, and of basic technical studies supporting decision making on protective actions, was recognized. In order to evaluate the containment source term in the late phase of a severe accident, the severe accident analysis code THALES-2 was coupled with the iodine chemistry kinetics code Kiche, and containment source term analyses were performed for a typical accident sequence (TQUV) of a BWR5/Mark-II plant. The lower the pH in the pool, the larger the fraction of iodine released to the gas phase, in agreement with the known tendency. The total release fractions of all iodine species to the gas phase at 40 hr were 0.1 (pH=5), 0.01 (pH=7) and 4x10^-4 (pH=9). I2 was dominant in the iodine released to the gas phase, and most of the released I2 was adsorbed on the wall. During operation of the containment spray, the release of iodine to the gas phase was enhanced because the circulation in the containment disturbed the steady state. In the future, JAEA will perform containment source term analyses for an extensive set of accident sequences with consideration of iodine chemistry.

  18. Markov source model for printed music decoding

    Science.gov (United States)

    Kopec, Gary E.; Chou, Philip A.; Maltz, David A.

    1995-03-01

    This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.

  19. PEBBLES: A COMPUTER CODE FOR MODELING PACKING, FLOW AND RECIRCULATION OF PEBBLES IN A PEBBLE BED REACTOR

    Energy Technology Data Exchange (ETDEWEB)

    Joshua J. Cogliati; Abderrafi M. Ougouag

    2006-10-01

    A comprehensive, high fidelity model for pebble flow has been developed and embodied in the PEBBLES computer code. In this paper, a description of the physical artifacts included in the model is presented and some results from using the computer code for predicting the features of pebble flow and packing in a realistic pebble bed reactor design are shown. The sensitivity of models to various physical parameters is also discussed.

  20. Modeling of renewable hybrid energy sources

    Directory of Open Access Journals (Sweden)

    Dumitru Cristian Dragos

    2009-12-01

    Full Text Available Recent developments and trends in electric power consumption indicate an increasing use of renewable energy. Renewable energy technologies offer the promise of clean, abundant energy gathered from self-renewing resources such as the sun, wind, earth and plants. Virtually all regions of the world have renewable resources of one type or another. From this point of view, studies on renewable energies attract more and more attention. The present paper presents different mathematical models for different types of renewable energy sources, such as solar energy and wind energy. It also presents the validation and adaptation of such models to hybrid systems working in the geographical and meteorological conditions specific to the central part of the Transylvania region. The conclusions based on the validation of such models are also shown.

  1. Subgroup A : nuclear model codes report to the Sixteenth Meeting of the WPEC

    Energy Technology Data Exchange (ETDEWEB)

    Talou, P. (Patrick); Chadwick, M. B. (Mark B.); Dietrich, F. S.; Herman, M.; Kawano, T. (Toshihiko); Koning, A.; Obložinský, P.

    2004-01-01

    The Subgroup A activities focus on the development of nuclear reaction models and codes, used in evaluation work for nuclear reactions from the unresolved energy region up to the pion production threshold, and for target nuclides from the low teens and heavier. Much of the effort is devoted by each participant to the continuing development of their own institution's codes. Progress in this arena is reported in detail for each code in the present document. EMPIRE-II is publicly accessible. The release of the TALYS code has been announced for the ND2004 Conference in Santa Fe, NM, October 2004. McGNASH is still under development and is not expected to be released in the very near future. In addition, Subgroup A members have demonstrated a growing interest in working on common modeling and code capabilities, which would significantly reduce the amount of duplicated work, help manage efficiently the growing lines of existing codes, and render code inter-comparisons much easier. A recent and important activity of Subgroup A has therefore been to develop the framework and the first bricks of the ModLib library, which is constituted of mostly independent pieces of code written in Fortran 90 (and above) to be used in existing and future nuclear reaction codes. Significant progress in the development of ModLib has been made during the past year. Several physics modules have been added to the library, and a few more have been planned in detail for the coming year.

  2. A Robust Source Coding Watermark Technique Based on Magnitude DFT Decomposition

    Directory of Open Access Journals (Sweden)

    Sushil Kumar

    2012-07-01

    Full Text Available Image watermarking is considered a powerful tool for copyright protection, content authentication, fingerprinting, and protecting intellectual property. We present in this paper a watermarking algorithm based on block-wise modification of the magnitudes of the DFT domain. This algorithm can be used as an application for copyright protection. To provide multi-level security we have first used best self-synchronizing T-codes to encode the watermark. The encoded watermark is then embedded into the cover image using a stego-key. We have analyzed our algorithm against noise such as salt-and-pepper, Gaussian and speckle.
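
Embedding in block-wise DFT magnitudes can be sketched as follows: for each image block, take the DFT, nudge the magnitude of a mid-frequency coefficient up or down according to one watermark bit, and invert the transform. The block size, coefficient choice and strength below are illustrative, not the authors' parameters:

    import numpy as np

    def embed_bit(block, bit, coeff=(1, 2), strength=4.0):
        """Embed one watermark bit in the DFT magnitude of an image block."""
        F = np.fft.fft2(block)
        mag, phase = np.abs(F), np.angle(F)
        u, v = coeff
        mag[u, v] += strength if bit else -strength   # nudge the magnitude
        mag[u, v] = max(mag[u, v], 0.0)
        mag[-u, -v] = mag[u, v]                       # keep conjugate symmetry
        return np.real(np.fft.ifft2(mag * np.exp(1j * phase)))

    rng = np.random.default_rng(1)
    img = rng.uniform(0, 255, size=(8, 8))
    wm = embed_bit(img, 1)
    print(np.max(np.abs(wm - img)))   # embedding distortion stays small

Detection would compare the received coefficient magnitude against the unmarked expectation; the T-code and stego-key layers from the abstract sit on top of this basic embedding step.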

  3. Mathematical model and computer code for the analysis of advanced fast reactor dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Schukin, N.V. (Moscow Engineering Physics Inst. (Russian Federation)); Korsun, A.S. (Moscow Engineering Physics Inst. (Russian Federation)); Vitruk, S.G. (Moscow Engineering Physics Inst. (Russian Federation)); Zimin, V.G. (Moscow Engineering Physics Inst. (Russian Federation)); Romanin, S.D. (Moscow Engineering Physics Inst. (Russian Federation))

    1993-04-01

    Efficient algorithms for the mathematical modeling of 3-D neutron kinetics and thermal hydraulics are described. The model and the corresponding computer code make it possible to analyze a variety of transient events, ranging from normal operational states to catastrophic accident excursions. To verify the code, a number of calculations for different kinds of transients were carried out. The results of the calculations show that the model and the computer code could be used for the conceptual design of advanced liquid metal reactors. A detailed description of the calculation of a TOP WS accident is presented. (orig./DG)

  4. Varian 2100C/D Clinac 18 MV photon phase space file characterization and modeling by using MCNP Code

    Science.gov (United States)

    Ezzati, Ahad Ollah

    2015-07-01

    A multiple-points and spatial-mesh-based surface source model (MPSMBSS) was generated for the 18 MV Varian 2100 C/D Clinac phase space file (PSF) and implemented in the MCNP code. The generated source model (SM) was benchmarked against the PSF and measurements. PDDs and profiles were calculated using the SM and the original PSF for different field sizes from 5 × 5 to 20 × 20 cm2. Agreement was within 2% of the maximum dose at 100 cm SSD for beam profiles at depths of 4 cm and 15 cm with respect to the original PSF. Differences between measured and calculated points were less than 2% of the maximum dose or 2 mm distance to agreement (DTA) at 100 cm SSD. Thus it can be concluded that the modified MCNP code can be used for radiotherapy calculations including a multiple source model (MSM), and using the source biasing capability of MPSMBSS can increase the simulation speed by a factor of up to 3600 for field sizes smaller than 5 × 5 cm2.

  5. Development of thermal hydraulic models for the reliable regulatory auditing code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, B. D.; Song, C. H.; Lee, Y. J.; Kwon, T. S.; Lee, S. W. [Korea Automic Energy Research Institute, Taejon (Korea, Republic of)

    2004-02-15

    The objective of this project is to develop thermal hydraulic models for use in improving the reliability of the regulatory auditing codes. The current year falls under the second step of the 3-year project, and the main research was focused on the development of a downcomer boiling model. During the current year, the bubble stream model of the downcomer has been developed and installed in the auditing code. A model sensitivity analysis has been performed for an APR1400 LBLOCA scenario using the modified code. Preliminary calculations have been performed for the experimental test facility using the FLUENT and MARS codes. The facility for the air bubble experiment has been installed. The thermal hydraulic phenomena for VHTR and supercritical reactors have been identified for future application and model development.

  6. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    Science.gov (United States)

    VanderWijngaart, Rob F.; Saphir, William C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates (as reported by a cache simulation tool, and confirmed by hardware counters) only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.

  7. HELIOS-Retrieval: An Open-source, Nested Sampling Atmospheric Retrieval Code, Application to the HR 8799 Exoplanets and Inferred Constraints for Planet Formation

    CERN Document Server

    Lavie, Baptiste; Mordasini, Christoph; Malik, Matej; Bonnefoy, Mickaël; Demory, Brice-Olivier; Oreshenko, Maria; Grimm, Simon L; Ehrenreich, David; Heng, Kevin

    2016-01-01

    We present an open-source retrieval code named HELIOS-Retrieval (hereafter HELIOS-R), designed to obtain chemical abundances and temperature-pressure profiles by inverting the measured spectra of exoplanetary atmospheres. In the current implementation, we use an exact solution of the radiative transfer equation, in the pure absorption limit, in our forward model, which allows us to analytically integrate over all of the outgoing rays (instead of performing Gaussian quadrature). Two chemistry models are considered: unconstrained chemistry (where the mixing ratios are treated as free parameters) and equilibrium chemistry (enforced via analytical formulae, where only the elemental abundances are free parameters). The nested sampling algorithm allows us to formally implement Occam's Razor based on a comparison of the Bayesian evidence between models. We perform a retrieval analysis on the measured spectra of the directly imaged exoplanets HR 8799b, c, d and e. Chemical equilibrium is disfavored by the Bayesian…

  8. Modeling a neutron rich nuclei source

    Energy Technology Data Exchange (ETDEWEB)

    Mirea, M.; Bajeat, O.; Clapier, F.; Ibrahim, F.; Mueller, A.C.; Pauwels, N.; Proust, J. [Institut de Physique Nucleaire, IN2P3/CNRS, 91 - Orsay (France); Mirea, M. [Institute of Physics and Nuclear Engineering, Tandem Lab., Bucharest (Romania)

    2000-07-01

    The deuteron break-up process in a suitable converter gives rise to intense neutron beams. A source of neutron-rich nuclei based on neutron-induced fission can be realised using these beams. A theoretical optimization of such a facility as a function of the incident deuteron energy is reported. The model used to determine the fission products takes into account the excitation energy of the target nucleus and the evaporation of prompt neutrons. Results are presented in connection with a specific converter-target geometry. (author)

  9. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)

    2014-05-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. Therefore a generic algorithm for producing such automatic translation codes may ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.
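
The generic pattern is a table-driven mapping from old-format variables to new-format variables, plus an automatically written verification log. A minimal sketch (the field names and transforms are invented for illustration; the actual VSOP formats are far larger):

    import json

    # Hypothetical mapping: old-format keyword -> (new keyword, value transform)
    MAPPING = {
        "NREGN": ("n_regions", int),
        "TIN":   ("inlet_temperature_K", lambda c: float(c) + 273.15),  # C -> K
        "POWMW": ("thermal_power_MW", float),
    }

    def translate(old_model: dict, log_path: str) -> dict:
        """Translate an input model dict and write a verification log that
        permanently records every variable name and value converted."""
        new_model, log = {}, []
        for old_key, raw in old_model.items():
            new_key, convert = MAPPING[old_key]
            new_model[new_key] = convert(raw)
            log.append({"old": old_key, "raw": raw,
                        "new": new_key, "value": new_model[new_key]})
        with open(log_path, "w") as f:
            json.dump(log, f, indent=2)   # permanent record for verification
        return new_model

    print(translate({"NREGN": "12", "TIN": "290.0", "POWMW": "400"}, "trans.log"))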

  10. The Open Source Snowpack modelling ecosystem

    Science.gov (United States)

    Bavay, Mathias; Fierz, Charles; Egger, Thomas; Lehning, Michael

    2016-04-01

    As a large number of numerical snow models are available, a few stand out as quite mature and widespread. One such model is SNOWPACK, the Open Source model that is developed at the WSL Institute for Snow and Avalanche Research SLF. Over the years, various tools have been developed around SNOWPACK in order to expand its use or to integrate additional features. Today, the model is part of a whole ecosystem that has evolved to both offer seamless integration and high modularity so each tool can easily be used outside the ecosystem. Many of these Open Source tools experience their own, autonomous development and are successfully used in their own right in other models and applications. There is Alpine3D, the spatially distributed version of SNOWPACK, that forces it with terrain-corrected radiation fields and optionally with blowing and drifting snow. This model can be used on parallel systems (either with OpenMP or MPI) and has been used for applications ranging from climate change to reindeer herding. There is the MeteoIO pre-processing library that offers fully integrated data access, data filtering, data correction, data resampling and spatial interpolations. This library is now used by several other models and applications. There is the SnopViz snow profile visualization library and application that supports both measured and simulated snow profiles (relying on the CAAML standard) as well as time series. This JavaScript application can be used standalone without any internet connection or served on the web together with simulation results. There is the OSPER data platform effort with a data management service (built on the Global Sensor Network (GSN) platform) as well as a data documenting system (metadata management as a wiki). There are several distributed hydrological models for mountainous areas in ongoing development that require very little information about the soil structure, based on the assumption that in steep terrain the most relevant information is…

  11. Unified Models of Molecular Emission from Class 0 Protostellar Outflow Sources

    CERN Document Server

    Rawlings, J M C; Carolan, P B

    2013-01-01

    Low mass star-forming regions are more complex than the simple spherically symmetric approximation that is often assumed. We apply a more realistic infall/outflow physical model to molecular/continuum observations of three late Class 0 protostellar sources with the aims of (a) proving the applicability of a single physical model for all three sources, and (b) deriving physical parameters for the molecular gas component in each of the sources. We have observed several molecular species in multiple rotational transitions. The observed line profiles were modelled in the context of a dynamical model which incorporates infall and bipolar outflows, using a three dimensional radiative transfer code. This results in constraints on the physical parameters and chemical abundances in each source. Self-consistent fits to each source are obtained. We constrain the characteristics of the molecular gas in the envelopes as well as in the molecular outflows. We find that the molecular gas abundances in the infalling envelope ...

  12. Conversion of HSPF Legacy Model to a Platform-Independent, Open-Source Language

    Science.gov (United States)

    Heaphy, R. T.; Burke, M. P.; Love, J. T.

    2015-12-01

    Since its initial development over 30 years ago, the Hydrologic Simulation Program - FORTRAN (HSPF) model has been used worldwide to support water quality planning and management. In the United States, HSPF receives widespread endorsement as a regulatory tool at all levels of government and is a core component of the EPA's Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) system, which was developed to support nationwide Total Maximum Daily Load (TMDL) analysis. However, the model's legacy code and data management systems have limitations in their ability to integrate with modern software and hardware and to leverage parallel computing, which have left voids in optimization, pre-, and post-processing tools. Advances in technology and in our scientific understanding of environmental processes that have occurred over the last 30 years mandate that upgrades be made to HSPF to allow it to evolve and continue to be a premier tool for water resource planners. This work aims to mitigate the challenges currently facing HSPF through two primary tasks: (1) convert the code to a modern, widely accepted, open-source, high-performance computing (HPC) language; and (2) convert the model input and output files to a modern, widely accepted, open-source data model, library, and binary file format. Python was chosen as the new language for the code conversion. It is an interpreted, object-oriented language with dynamic semantics that has become one of the most popular open-source languages. While Python code execution can be slow compared to compiled, statically typed programming languages, such as C and FORTRAN, the integration of Numba (a just-in-time specializing compiler) has allowed this challenge to be overcome. For the legacy model data management conversion, HDF5 was chosen to store the model input and output. The code conversion for HSPF's hydrologic and hydraulic modules has been completed. The converted code has been tested against HSPF's suite of "test" runs and shown…
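
Numba's just-in-time compilation is what closes the speed gap mentioned here: a numeric loop is compiled to machine code the first time it is called. A generic example of the pattern (a toy linear-reservoir routing loop, not actual HSPF code):

    import numpy as np
    from numba import njit

    @njit
    def route_runoff(inflow, k):
        """Toy linear-reservoir routing loop: the kind of time-stepping
        kernel that benefits from JIT compilation."""
        out = np.empty_like(inflow)
        storage = 0.0
        for t in range(inflow.size):
            storage += inflow[t]
            out[t] = k * storage          # outflow proportional to storage
            storage -= out[t]
        return out

    inflow = np.random.rand(1_000_000)
    print(route_runoff(inflow, 0.1)[:5])  # first call compiles; later calls are fast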

  13. Energy Management Policies for Energy-Neutral Source-Channel Coding

    CERN Document Server

    Castiglione, Paolo; Erkip, Elza; Zemen, Thomas

    2011-01-01

    In cyber-physical systems where sensors measure the temporal evolution of a given phenomenon of interest and radio communication takes place over short distances, the energy spent on source acquisition and compression may be comparable to that used for transmission. Additionally, in order to avoid limited-lifetime issues, sensors may be powered via energy harvesting and thus collect all the energy they need from the environment. This work addresses the problem of energy allocation between source acquisition/compression and transmission for energy-harvesting sensors. At first, focusing on a single sensor, energy management policies are identified that guarantee a maximum average distortion while at the same time ensuring the stability of the queue connecting the source and channel encoders. It is shown that the identified class of policies is optimal in the sense that it stabilizes the queue whenever this is feasible by any other technique that satisfies the same average distortion constraint. Moreover, this class...
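
    The flavor of this trade-off can be conveyed with a toy discrete-time simulation. All constants, the half/half energy split, and the threshold rule below are hypothetical illustrations, not the paper's policy class; bits enter the queue at the Gaussian rate-distortion rate and leave at the AWGN rate afforded by the transmission energy:

        import numpy as np

        rng = np.random.default_rng(1)
        T = 10_000
        battery, queue = 0.0, 0.0
        sigma2, D, N0, c = 1.0, 0.5, 0.25, 0.2    # toy constants

        R = 0.5 * np.log2(sigma2 / D)   # Gaussian rate-distortion: bits per sample
        qlen = np.empty(T)
        for t in range(T):
            battery += rng.exponential(1.0)   # energy harvested this slot
            e = min(battery, 1.0)             # policy: spend at most the mean harvest
            battery -= e
            e_src, e_tx = 0.5 * e, 0.5 * e    # naive half/half split
            bits_in = R if e_src >= c * R else 0.0     # acquire/compress if affordable
            bits_out = 0.5 * np.log2(1.0 + e_tx / N0)  # AWGN transmission rate
            queue = max(queue + bits_in - bits_out, 0.0)
            qlen[t] = queue

        print(f"mean queue length: {qlen.mean():.2f}")  # bounded => queue is stable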

  14. SMILEI: A collaborative, open-source, multi-purpose PIC code for the next generation of super-computers

    Science.gov (United States)

    Grech, Mickael; Derouillat, J.; Beck, A.; Chiaramello, M.; Grassi, A.; Niel, F.; Perez, F.; Vinci, T.; Fle, M.; Aunai, N.; Dargent, J.; Plotnikov, I.; Bouchard, G.; Savoini, P.; Riconda, C.

    2016-10-01

    Over the last decades, Particle-In-Cell (PIC) codes have been central tools for plasma simulations. Today, new trends in High-Performance Computing (HPC) are emerging, dramatically changing HPC-relevant software design and leaving some - if not most - legacy codes far below the level of performance expected on new and future massively parallel supercomputers. SMILEI is a new open-source PIC code co-developed by plasma physicists and HPC specialists and applied to a wide range of physics studies, from laser-plasma interaction to astrophysical plasmas. It benefits from an innovative parallelization strategy that relies on a super-domain decomposition allowing for enhanced cache use and efficient dynamic load balancing. Beyond these HPC-related developments, SMILEI also benefits from additional physics modules that handle binary collisions, field and collisional ionization, and radiation back-reaction. This poster presents the SMILEI project and its HPC capabilities, and illustrates some of the physics problems tackled with SMILEI.
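
    The cycle that any PIC code repeats each time step (deposit charge on the grid, solve for the fields, gather the fields at the particles, push the particles) can be sketched as a minimal 1D electrostatic toy. The sketch below is in Python and normalized units and shows only the algorithmic skeleton, not SMILEI's C++ implementation or its parallelization:

        import numpy as np

        ng, L, N, dt = 64, 2 * np.pi, 20_000, 0.1  # grid cells, box, particles, step
        dx = L / ng
        rng = np.random.default_rng(0)
        x = rng.uniform(0.0, L, N)                 # electron positions
        v = 0.05 * np.sin(x)                       # small velocity perturbation

        k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx) # wavenumbers for Poisson solve
        k[0] = 1.0                                 # dummy; the mean mode is zeroed below

        for step in range(200):
            # 1) deposit: cloud-in-cell weighting of electrons onto the grid
            s = x / dx
            i = s.astype(int) % ng
            w = s - np.floor(s)
            ne = np.bincount(i, 1.0 - w, ng) + np.bincount((i + 1) % ng, w, ng)
            rho = 1.0 - ne / (N / ng)              # plus fixed neutralizing ion background
            # 2) field solve: d2(phi)/dx2 = -rho via FFT, then E = -d(phi)/dx
            phi_k = np.fft.fft(rho) / k**2
            phi_k[0] = 0.0
            E = np.real(np.fft.ifft(-1j * k * phi_k))
            # 3) gather + push: interpolate E to particles, leapfrog-style update
            Ep = (1.0 - w) * E[i] + w * E[(i + 1) % ng]
            v += -1.0 * Ep * dt                    # electron charge/mass = -1
            x = (x + v * dt) % L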

  15. Conceptual OOP design of Pilot Code for Two-Fluid, Three-field Model with C++ 6.0

    Energy Technology Data Exchange (ETDEWEB)

    Chung, B. D.; Lee, Y. J

    2006-09-15

    To establish the concept of object-oriented programming (OOP) design for a reactor safety analysis code, a preliminary OOP design for the PILOT code, which is based on a one-dimensional, two-fluid, three-field model, has been attempted with C++ language features. Microsoft C++ has been used since it is available as groupware in KAERI. The language can be mixed with Compaq Visual Fortran 6.6 in the Visual Studio platform. In this development platform, C++ has been used as the main language and Fortran as a mixed language called from the C++ main driver program. The mixed-language environment is a specific feature provided by Visual Studio. Existing Fortran sources were reused for the routine that reads the steam table from a generated file and for the steam property calculation routine. The calling convention and argument passing from the C++ driver were corrected. Mathematical routines, such as matrix inversion and a tridiagonal matrix solver, have been kept as PILOT Fortran routines. The simple volumes and junctions used in the PILOT code can be treated as objects, since they are the basic building blocks of the code system. Other routines for the overall solution scheme have been realized as procedural C functions. The conceptual design, which consists of hydraulic loop, component, volume, and junction classes, is described in the appendix in order to convey the essential OOP structure of a system safety analysis code. The attempt shows that many parts of a system analysis code can be expressed as objects, although the overall structure should be maintained as procedural functions. The encapsulation of data and functions within an object can provide many benefits in the programming of a system code.
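
    As an illustration of the volume/junction object structure described above, here is a hypothetical sketch (rendered in Python to keep the examples in this collection in one language; the paper's actual design is in C++), with state encapsulated in objects while the solution scheme stays procedural:

        class Volume:
            """Control volume: holds the local thermal-hydraulic state."""
            def __init__(self, pressure, void_fraction):
                self.pressure = pressure            # encapsulated state
                self.void_fraction = void_fraction

        class Junction:
            """Flow path connecting two volumes."""
            def __init__(self, upstream, downstream):
                self.upstream, self.downstream = upstream, downstream
                self.mass_flow = 0.0

            def update_flow(self, coeff):
                # toy momentum relation: flow driven by the pressure difference
                dp = self.upstream.pressure - self.downstream.pressure
                self.mass_flow = coeff * dp

        class HydraulicLoop:
            """Assembles volumes and junctions; the overall solution
            scheme itself would remain procedural, as in the paper."""
            def __init__(self, volumes, junctions):
                self.volumes, self.junctions = volumes, junctions

            def advance(self, coeff=1.0e-3):
                for j in self.junctions:
                    j.update_flow(coeff)

        v1, v2 = Volume(15.0e6, 0.0), Volume(14.9e6, 0.05)
        loop = HydraulicLoop([v1, v2], [Junction(v1, v2)])
        loop.advance()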

  16. A Model for the Sources of the Slow Solar Wind

    CERN Document Server

    Antiochos, S K; Titov, V S; Lionello, R; Linker, J A

    2011-01-01

    Models for the origin of the slow solar wind must account for two seemingly contradictory observations: The slow wind has the composition of the closed-field corona, implying that it originates from the continuous opening and closing of flux at the boundary between open and closed field. On the other hand, the slow wind also has large angular width, up to ~60°, suggesting that its source extends far from the open-closed boundary. We propose a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices and quasi-separatrix layers in the heliosphere. We compute analytically the topology of an open-field corridor and show that it produces a quasi-separatrix layer in the heliosphere that extends to angles far from the heliospheric current sheet. We then use an MHD code and MDI/SOHO observations of the photospheric magnetic field to calculate numerically, with high spat...

  17. The use of machine learning with signal- and NLP processing of source code to detect and classify vulnerabilities and weaknesses with MARFCAT

    CERN Document Server

    Mokhov, Serguei A

    2010-01-01

    We present a machine learning approach to static code analysis for weaknesses related to security and other defects, using the open-source MARF framework and its application, MARFCAT, at the NIST SATE 2010 static analysis tool exposition workshop.
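
    As a rough illustration of the idea, though not of the MARF framework itself, source code can be vectorized with character n-grams and classified by a standard learner; the tiny dataset, features, and scikit-learn stack below are purely hypothetical stand-ins:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression

        # Tiny hypothetical training set: snippets labeled weak (1) / clean (0).
        snippets = [
            'strcpy(buf, user_input);',                       # unbounded copy
            'gets(line);',                                    # unbounded read
            'strncpy(buf, user_input, sizeof(buf) - 1);',
            'fgets(line, sizeof(line), stdin);',
        ]
        labels = [1, 1, 0, 0]

        # Character n-grams crudely mimic treating code as a token/signal stream.
        vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
        X = vec.fit_transform(snippets)
        clf = LogisticRegression().fit(X, labels)

        test = vec.transform(['strcpy(dst, argv[1]);'])
        print(clf.predict(test))        # expected: [1], flagged as weak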

  18. The Big Effects of Short-term Efforts: Mentorship and Code Integration in Open Source Scientific Software

    Directory of Open Access Journals (Sweden)

    Erik H Trainer

    2014-07-01

    Scientific progress relies crucially on software, yet in practice there are significant challenges to scientific software production and maintenance. We conducted a case study of a bioinformatics software library called Biopython to investigate the promise of Google Summer of Code (GSoC), a program that pays students to work on open-source projects for the summer, for addressing these challenges. We find three positive outcomes of GSoC in the Biopython community: the addition of new features to the Biopython codebase, training, and personal development. We also find, however, that mentors face several challenges related to GSoC project selection and ranking. We believe that because GSoC provides an occasion to extend the software with capabilities that can be used to produce new knowledge, and to train successive generations of potential contributors to the software, it can play a vital role in the sustainability of open-source scientific software.

  19. Semi-device-independent randomness expansion with partially free random sources using 3→1 quantum random access code

    Science.gov (United States)

    Zhou, Yu-Qian; Gao, Fei; Li, Dan-Dan; Li, Xin-Hui; Wen, Qiao-Yan

    2016-09-01

    We have proved that new randomness can be certified by partially free sources using the 2→1 quantum random access code (QRAC) in the framework of semi-device-independent (SDI) protocols [Y.-Q. Zhou, H.-W. Li, Y.-K. Wang, D.-D. Li, F. Gao, and Q.-Y. Wen, Phys. Rev. A 92, 022331 (2015), 10.1103/PhysRevA.92.022331]. To improve the effectiveness of the randomness generation, here we propose SDI randomness expansion using the 3→1 QRAC and obtain the corresponding classical and quantum bounds of the two-dimensional quantum witness. Moreover, we derive the condition that the partially free sources must satisfy to successfully certify new randomness, and the analytic relationship between the certified randomness and the violation of the two-dimensional quantum witness.
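
    For context, the classical bound of the 3→1 random access code can be checked directly by brute force over all deterministic strategies (shared randomness cannot beat the best deterministic strategy here). The short sketch below recovers the known classical success probability of 3/4, versus (1 + 1/sqrt(3))/2 ≈ 0.789 for the quantum protocol:

        from itertools import product

        best = 0.0
        # Encoder: any function from the 8 inputs x in {0,1}^3 to one bit m.
        for enc in product((0, 1), repeat=8):
            # Decoder: for each question y, any function from m to a guess bit.
            for dec in product(((0, 0), (0, 1), (1, 0), (1, 1)), repeat=3):
                wins = 0
                for idx, x in enumerate(product((0, 1), repeat=3)):
                    m = enc[idx]
                    wins += sum(dec[y][m] == x[y] for y in range(3))
                best = max(best, wins / 24.0)   # 8 inputs x times 3 questions y

        print(best)   # 0.75, the classical bound for the 3->1 RAC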

  20. Asteroid Models from Multiple Data Sources

    CERN Document Server

    Durech, J; Delbo, M; Kaasalainen, M; Viikinkoski, M

    2015-01-01

    In the past decade, hundreds of asteroid shape models have been derived using the lightcurve inversion method. At the same time, a new framework of 3-D shape modeling based on the combined analysis of widely different data sources, such as optical lightcurves, disk-resolved images, stellar occultation timings, mid-infrared thermal radiometry, optical interferometry, and radar delay-Doppler data, has been developed. This multi-data approach allows the determination of most of the physical and surface properties of asteroids in a single, coherent inversion, with spectacular results. We review the main results of asteroid lightcurve inversion and also recent advances in multi-data modeling. We show that models based on remote sensing data have been confirmed by spacecraft encounters with asteroids, and we discuss how the growing number of highly detailed 3-D models will help to refine our general knowledge of the asteroid population. The physical and surface properties of asteroids, i.e., their spin, 3-D shape, densit...