WorldWideScience

Sample records for code comparison project

  1. WEC3: Wave Energy Converter Code Comparison Project: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Combourieu, Adrien; Lawson, Michael; Babarit, Aurelien; Ruehl, Kelley; Roy, Andre; Costello, Ronan; Laporte Weywada, Pauline; Bailey, Helen

    2017-01-01

    This paper describes the recently launched Wave Energy Converter Code Comparison (WEC3) project and presents preliminary results from this effort. The objectives of WEC3 are to verify and validate numerical modelling tools that have been developed specifically to simulate wave energy conversion devices, and to inform the upcoming IEA OES Annex VI Ocean Energy Modelling Verification and Validation project. WEC3 is divided into two phases: Phase I consists of code-to-code verification, and Phase II entails code-to-experiment validation. WEC3 focuses on mid-fidelity codes that simulate WECs using time-domain multibody dynamics methods to model device motions and hydrodynamic coefficients to model hydrodynamic forces. Consequently, high-fidelity numerical modelling tools, such as Navier-Stokes computational fluid dynamics simulations, and simple frequency-domain modelling tools were not included in the WEC3 project.

  2. Final Report on EURAMET key comparison (EURAMET.M.M-K2.5) of 10 kg mass standards in stainless steel (Project code: EURAMET 1222)

    Science.gov (United States)

    Vámossy, Csilla; Bulku, Defrim; Borys, Michael; Alisic, Sejla; Boskovic, Tamara; Zelenka, Zoltan

    2015-01-01

    The report describes a European regional key comparison of a stainless steel 10 kg standard as a multiple of the kilogram, carried out under the auspices of EURAMET and designated Project 1222. This comparison is also a KCDB Regional Key Comparison, registered as EURAMET.M.M-K2.5. The objectives of this comparison are to check the measurement capabilities in the field of mass of the participating national laboratories, to facilitate the demonstration of metrological equivalence between the laboratories in Europe, and to check or support the validity of quoted calibration and measurement capabilities (CMCs). This comparison provides a link to CCM.M-K2. BEV (Austria) was the pilot laboratory and the provider of the transfer standard. The main text of this paper appears in Appendix B of the BIPM key comparison database, kcdb.bipm.org. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  3. On the role of code comparisons in verification and validation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2003-08-01

    This report presents a perspective on the role of code comparison activities in verification and validation. We formally define the act of code comparison as the Code Comparison Principle (CCP) and investigate its application in both verification and validation. One of our primary conclusions is that the use of code comparisons for validation is improper and dangerous. We also conclude that while code comparisons may be argued to provide a beneficial component in code verification activities, there are higher quality code verification tasks that should take precedence. Finally, we provide a process for application of the CCP that we believe is minimal for achieving benefit in verification processes.

  4. A comparison of cosmological hydrodynamic codes

    Science.gov (United States)

    Kang, Hyesung; Ostriker, Jeremiah P.; Cen, Renyue; Ryu, Dongsu; Hernquist, Lars; Evrard, August E.; Bryan, Greg L.; Norman, Michael L.

    1994-01-01

    We present a detailed comparison of the simulation results of various hydrodynamic codes. Starting with identical initial conditions based on the cold dark matter scenario for the growth of structure, with parameters h = 0.5, Omega = Omega_b = 1, and sigma_8 = 1, we integrate from redshift z = 20 to z = 0 to determine the physical state within a representative volume of size L^3, where L = 64 h^-1 Mpc. Five independent codes are compared: three of them Eulerian mesh-based and two variants of the smooth particle hydrodynamics (SPH) Lagrangian approach. The Eulerian codes were run at N^3 = 32^3, 64^3, 128^3, and 256^3 cells, the SPH codes at N^3 = 32^3 and 64^3 particles. Results were then rebinned to a 16^3 grid, with the expectation that the rebinned data should converge, by all techniques, to a common and correct result as N approaches infinity. We find that global averages of various physical quantities do, as expected, tend to converge in the rebinned model, but that uncertainties in even primitive quantities such as <T> and <rho^2>^(1/2) persist at the 3%-17% level. All codes achieve comparable and satisfactory accuracy for comparable computer time in their treatment of the high-density, high-temperature regions as measured in the rebinned data; the variance among the five codes (at highest resolution) for the mean temperature (as weighted by rho^2) is only 4.5%. Examined at high resolution, we suspect that the density resolution is better in the SPH codes and the thermal accuracy in low-density regions better in the Eulerian codes. In the low-density, low-temperature regions the SPH codes have poor accuracy due to statistical effects, and the Jameson code gives temperatures which are too high, due to overuse of artificial viscosity in these high Mach number regions. Overall the comparison allows us to better estimate errors; it points to ways of improving this current generation of hydrodynamic codes.
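
    The rebinning step described above amounts to coarse-graining each code's output onto a common grid by averaging. The following is an illustrative sketch only (not code from the paper), assuming a cubic field whose side length is a multiple of the 16^3 target grid:

```python
import numpy as np

def rebin(field, target=16):
    """Coarse-grain a cubic field onto a target^3 grid by block averaging."""
    n = field.shape[0]
    f = n // target  # assumes n is an exact multiple of target
    return field.reshape(target, f, target, f, target, f).mean(axis=(1, 3, 5))

fine = np.ones((64, 64, 64))  # stand-in for a 64^3 code output
coarse = rebin(fine)          # 16^3 block-averaged field
```

    After rebinning, quantities such as the mean temperature can be compared cell by cell across codes run at very different native resolutions.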

  5. A CFD code comparison of wind turbine wakes

    DEFF Research Database (Denmark)

    van der Laan, Paul Maarten; Storey, R. C.; Sørensen, Niels N.

    2014-01-01

    A comparison is made between the EllipSys3D and SnS CFD codes. Both codes are used to perform Large-Eddy Simulations (LES) of single wind turbine wakes, using the actuator disk method. The comparison shows that both LES models predict similar velocity deficits and stream-wise Reynolds...

  6. Radiation Hardened Turbo Coded OFDM Modulator Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Space Micro Inc. proposes to develop an innovative Turbo-Coded Orthogonal Frequency Division Modulation (TC-OFDM) ASIC device. The proposed device provides data...

  7. Wireless Magnetic Sensor with Orthogonal Frequency Coding Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of this SBIR Phase I research project is to develop batteryless, wireless magnetic sensors with orthogonal frequency coding (OFC). These sensors will be...

  8. [Analysis of selected changes in the draft penal code].

    Science.gov (United States)

    Berent, Jarosław; Jurczyk, Agnieszka P; Szram, Stefan

    2002-01-01

    In this paper the authors have analysed selected proposed changes in the draft amendments to the penal code. Special attention has been given to the problem of the legality of the "comma" in art. 156 of the penal code. In this matter a review of court rulings has also been made.

  9. Comparison between SERPENT and MONTEBURNS codes applied to burnup calculations of a GFR-like configuration

    Energy Technology Data Exchange (ETDEWEB)

    Chersola, Davide [GeNERG – DIME/TEC, University of Genova, via all’Opera Pia 15/a, 16145 Genova (Italy); INFN, via Dodecaneso 33, 16146 Genova (Italy); Lomonaco, Guglielmo, E-mail: guglielmo.lomonaco@unige.it [GeNERG – DIME/TEC, University of Genova, via all’Opera Pia 15/a, 16145 Genova (Italy); INFN, via Dodecaneso 33, 16146 Genova (Italy); Marotta, Riccardo [GeNERG – DIME/TEC, University of Genova, via all’Opera Pia 15/a, 16145 Genova (Italy); INFN, via Dodecaneso 33, 16146 Genova (Italy); Mazzini, Guido [Centrum výzkumu Řež (Research Centre Rez), Husinec-Rez, cp. 130, 25068 Rez (Czech Republic)

    2014-07-01

    Highlights: • MC codes are widely adopted to analyze nuclear facilities, including GEN-IV reactors. • Burnup calculations are an efficient tool to test neutronic Monte Carlo codes. • In this comparison the codes show some differences, but a good agreement exists. - Abstract: This paper presents a comparison between two Monte Carlo based burnup codes: SERPENT and MONTEBURNS. Monte Carlo codes are widely adopted worldwide to perform analyses of nuclear facilities, including simulations of Generation IV advanced reactors. Thus, faster and more powerful calculation codes are needed to analyze complex geometries and specific neutronic behaviors. Burnup calculations are an efficient tool for testing neutronic Monte Carlo codes: these calculations couple transport and depletion procedures, so that the neutronic behavior of a reactor can be simulated in its totality. Comparisons have been performed on a configuration representing the Allegro MOX 75 MW{sub th} reactor proposed by the European GoFastR (Gas-cooled Fast Reactor) Project in the frame of the 7th Euratom Framework Program. Although in the burnup and criticality comparisons the codes show different calculation times and some differences in amounts and in precision (in terms of statistical errors), a reasonably good agreement between them exists.

  10. QR Codes in the Library: Are They Worth the Effort? Analysis of a QR Code Pilot Project

    OpenAIRE

    Wilson, Andrew M

    2012-01-01

    The literature is filled with potential uses for Quick Response (QR) codes in the library setting, but few library QR code projects have publicized usage statistics. A pilot project carried out in the Eda Kuhn Loeb Music Library of the Harvard College Library sought to determine whether library patrons actually understand and use QR codes. Results and analysis of the pilot project are provided, attempting to answer the question of whether QR codes are worth the effort for libraries.

  11. MOCCA code for star cluster simulation: comparison with optical observations using COCOA

    Science.gov (United States)

    Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Olech, Arkadiusz; Hypki, Arkadiusz

    2016-02-01

    We introduce and present preliminary results from COCOA (Cluster simulatiOn Comparison with ObservAtions) code for a star cluster after 12 Gyr of evolution simulated using the MOCCA code. The COCOA code is being developed to quickly compare results of numerical simulations of star clusters with observational data. We use COCOA to obtain parameters of the projected cluster model. For comparison, a FITS file of the projected cluster was provided to observers so that they could use their observational methods and techniques to obtain cluster parameters. The results show that the similarity of cluster parameters obtained through numerical simulations and observations depends significantly on the quality of observational data and photometric accuracy.

  12. MOCCA Code for Star Cluster Simulation: Comparison with Optical Observations using COCOA

    CERN Document Server

    Askar, Abbas; Pych, Wojciech; Olech, Arkadiusz; Hypki, Arkadiusz

    2015-01-01

    We introduce and present preliminary results from COCOA (Cluster simulatiOn Comparison with ObservAtions) code for a star cluster after 12 Gyr of evolution simulated using the MOCCA code. The COCOA code is being developed to quickly compare results of numerical simulations of star clusters with observational data. We use COCOA to obtain parameters of the projected cluster model. For comparison, a FITS file of the projected cluster was provided to observers so that they could use their observational methods and techniques to obtain cluster parameters. The results show that the similarity of cluster parameters obtained through numerical simulations and observations depends significantly on the quality of observational data and photometric accuracy.

  13. Higher dimensional Numerical Relativity: code comparison

    CERN Document Server

    Witek, Helvi; Cardoso, Vitor; Gualtieri, Leonardo; Herdeiro, Carlos; Shibata, Masaru; Sperhake, Ulrich; Zilhao, Miguel

    2014-01-01

    The nonlinear behavior of higher dimensional black hole spacetimes is of interest in several contexts, ranging from an understanding of cosmic censorship to black hole production in high-energy collisions. However, nonlinear numerical evolutions of higher dimensional black hole spacetimes are tremendously complex, involving different diagnostic tools and "dimensional reduction methods". In this work we compare two different successful codes to evolve Einstein's equations in higher dimensions, and show that the results of such different procedures agree to numerical precision, when applied to the collision from rest of two equal-mass black holes. We calculate the total radiated energy to be E/M=9x10^{-4} in five dimensions and E/M=8.1x10^{-4} in six dimensions.

  14. Plasma physics code contribution to the Mont-Blanc project

    OpenAIRE

    Sáez, Xavier; Soba, Alejandro; Mantsinen, Mervi

    2015-01-01

    This work develops strategies for adapting a particle-in-cell code to heterogeneous computer architectures and, in particular, to an ARM-based prototype of the Mont-Blanc project using OmpSs programming model and the OpenMP and OpenCL languages.

  15. Offshore Code Comparison Collaboration, Continuation: Phase II Results of a Floating Semisubmersible Wind System: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, A.; Jonkman, J.; Musial, W.; Vorpahl, F.; Popko, W.

    2013-11-01

    Offshore wind turbines are designed and analyzed using comprehensive simulation tools that account for the coupled dynamics of the wind inflow, aerodynamics, elasticity, and controls of the turbine, along with the incident waves, sea current, hydrodynamics, and foundation dynamics of the support structure. The Offshore Code Comparison Collaboration (OC3), which operated under the International Energy Agency (IEA) Wind Task 23, was established to verify the accuracy of these simulation tools [1]. This work was then extended under the Offshore Code Comparison Collaboration, Continuation (OC4) project under IEA Wind Task 30 [2]. Both of these projects sought to verify the accuracy of offshore wind turbine dynamics simulation tools (or codes) through code-to-code comparison of simulated responses of various offshore structures. This paper describes the latest findings from Phase II of the OC4 project, which involved the analysis of a 5-MW turbine supported by a floating semisubmersible. Twenty-two different organizations from 11 different countries submitted results using 24 different simulation tools. The variety of organizations contributing to the project brought together expertise from both the offshore structure and wind energy communities. Twenty-one different load cases were examined, encompassing varying levels of model complexity and a variety of metocean conditions. Differences in the results demonstrate the importance and accuracy of the various modeling approaches used. Significant findings include the importance of mooring dynamics to the mooring loads, the role nonlinear hydrodynamic terms play in calculating drift forces for the platform motions, and the difference between global (at the platform level) and local (at the member level) modeling of viscous drag. The results from this project will help guide development and improvement efforts for these tools to ensure that they are providing the accurate information needed to support the design and

  16. The Second Workshop on Lineshape Code Comparison: Isolated Lines

    Directory of Open Access Journals (Sweden)

    Spiros Alexiou

    2014-05-01

    In this work, we briefly summarize the theoretical aspects of isolated line broadening. We present and discuss test run comparisons from different participating lineshape codes for the 2s-2p transition for Li I, B III and N V.

  17. Multicode comparison of selected source-term computer codes

    Energy Technology Data Exchange (ETDEWEB)

    Hermann, O.W.; Parks, C.V.; Renier, J.P.; Roddy, J.W.; Ashline, R.C.; Wilson, W.B.; LaBauve, R.J.

    1989-04-01

    This report summarizes the results of a study to assess the predictive capabilities of three radionuclide inventory/depletion computer codes, ORIGEN2, ORIGEN-S, and CINDER-2. The task was accomplished through a series of comparisons of their output for several light-water reactor (LWR) models (i.e., verification). Of the five cases chosen, two modeled typical boiling-water reactors (BWR) at burnups of 27.5 and 40 GWd/MTU and two represented typical pressurized-water reactors (PWR) at burnups of 33 and 50 GWd/MTU. In the fifth case, identical input data were used for each of the codes to examine the results of decay only and to show differences in nuclear decay constants and decay heat rates. Comparisons were made for several different characteristics (mass, radioactivity, and decay heat rate) for 52 radionuclides and for nine decay periods ranging from 30 d to 10,000 years. Only fission products and actinides were considered. The results are presented in comparative-ratio tables for each of the characteristics, decay periods, and cases. A brief summary description of each of the codes has been included. Of the more than 21,000 individual comparisons made for the three codes (taken two at a time), nearly half (45%) agreed to within 1%, and an additional 17% fell within the range of 1 to 5%. Approximately 8% of the comparison results disagreed by more than 30%. However, relatively good agreement was obtained for most of the radionuclides that are expected to contribute the greatest impact to waste disposal. Even though some defects have been noted, each of the codes in the comparison appears to produce respectable results. 12 figs., 12 tabs.
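
    The comparative-ratio methodology described above can be illustrated with a small sketch. This is a hypothetical example: the code names are from the report, but the numbers and the agreement bins are invented for illustration:

```python
from itertools import combinations

# Invented decay-heat predictions (arbitrary units) for one nuclide.
predictions = {
    "ORIGEN2": 1.00,
    "ORIGEN-S": 1.004,
    "CINDER-2": 1.08,
}

def agreement(a, b):
    """Percent deviation of the pairwise ratio a/b from unity."""
    return abs(a / b - 1.0) * 100.0

# Take the codes two at a time and bin each pair by its level of agreement.
for (c1, v1), (c2, v2) in combinations(predictions.items(), 2):
    d = agreement(v1, v2)
    level = "within 1%" if d <= 1 else "within 5%" if d <= 5 else ">5%"
    print(f"{c1} vs {c2}: {d:.1f}% ({level})")
```

    Repeating this over every nuclide, characteristic, and decay period yields the comparative-ratio tables the report describes.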

  18. Parallel Scaling Characteristics of Selected NERSC User Project Codes

    Energy Technology Data Exchange (ETDEWEB)

    Skinner, David; Verdier, Francesca; Anand, Harsh; Carter,Jonathan; Durst, Mark; Gerber, Richard

    2005-03-05

    This report documents parallel scaling characteristics of NERSC user project codes between Fiscal Year 2003 and the first half of Fiscal Year 2004 (Oct 2002-March 2004). The codes analyzed cover 60% of all the CPU hours delivered during that time frame on seaborg, a 6080 CPU IBM SP and the largest parallel computer at NERSC. The scale in terms of concurrency and problem size of the workload is analyzed. Drawing on batch queue logs, performance data and feedback from researchers we detail the motivations, benefits, and challenges of implementing highly parallel scientific codes on current NERSC High Performance Computing systems. An evaluation and outlook of the NERSC workload for Allocation Year 2005 is presented.

  20. CFD code comparison for 2D airfoil flows

    DEFF Research Database (Denmark)

    Sørensen, Niels N.; Méndez, B.; Muñoz, A.

    2016-01-01

    The current paper presents the effort, in the EU AVATAR project, to establish the necessary requirements to obtain consistent lift over drag ratios among seven CFD codes. The flow around a 2D airfoil case is studied, for both transitional and fully turbulent conditions, at Reynolds numbers of 3 × 10^6 and 15 × 10^6. The necessary grid resolution, domain size, and iterative convergence criteria to have consistent results are discussed, and suggestions are given for best practice. For the fully turbulent results, four out of seven codes provide consistent results. For the laminar...

  1. Projections of color coding retinal neurons in urodele amphibians.

    Science.gov (United States)

    Himstedt, W; Helas, A; Sommer, T J

    1981-01-01

    Optic fiber projection in the brain of Salamandra salamandra was investigated by degeneration techniques. Terminal fields are described in the thalamus and in the optic tectum. Microelectrode recordings were performed from ganglion cells in the retina and from their terminals in the thalamus and tectum in Salamandra and Triturus alpestris. 'On' cells showed maximal sensitivity either in the blue or in the yellow spectral region; they project to the thalamus. Color coding 'on-off' cells project to the tectum opticum. In Triturus a seasonal change in these neurons occurs. Probably due to transition of vitamin A2 into vitamin A1 the spectral sensitivity is different. In springtime blue-red opponent-color neurons were recorded, in fall however, blue-yellow neurons were found.

  2. Hadronic shower code inter-comparison and verification

    Energy Technology Data Exchange (ETDEWEB)

    Mokhov, N.V.; Striganov, S.I.; /Fermilab

    2007-01-01

    To evaluate the quality of general purpose particle interaction and transport codes widely used in the high-energy physics community, express benchmarking is conducted. Seven tasks, important for high-energy physics applications, are chosen. For this first shot, they are limited to particle production on thin and thick targets and energy deposition in targets and calorimetric setups. Five code groups were asked to perform calculations in the identical conditions and provide results to the authors of this report. Summary of the code inter-comparison and verification against available experimental data is presented in this paper. Agreement is quite reasonable in many cases, but quite serious problems were revealed in the others.

  3. Offshore Code Comparison Collaboration within IEA Wind Annex XXIII: Phase II Results Regarding Monopile Foundation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Jonkman, J.; Butterfield, S.; Passon, P.; Larsen, T.; Camp, T.; Nichols, J.; Azcona, J.; Martinez, A.

    2008-01-01

    This paper presents an overview and describes the latest findings of the code-to-code verification activities of the Offshore Code Comparison Collaboration, which operates under Subtask 2 of the International Energy Agency Wind Annex XXIII.

  4. Comparison of two LES codes for wind turbine wake studies

    DEFF Research Database (Denmark)

    Chivaee, Hamid Sarlak; Pierella, F.; Mikkelsen, Robert Flemming

    2014-01-01

    For the third time, a blind test comparison was conducted in Norway in 2013, comparing numerical simulations of the rotor Cp and Ct and wake profiles with the experimental results. As the only large eddy simulation study among the participants, the results of the Technical University of Denmark (DTU), using their in-house CFD solver EllipSys3D, proved to be more reliable than the other models for capturing the wake profiles and the turbulence intensities downstream of the turbine. It was therefore remarked in the workshop to investigate other LES codes to compare their performance with EllipSys3D. The aim... The codes are compared in terms of velocity deficit, turbulence kinetic energy and eddy viscosity. It is seen that both codes predict similar near-wake flow structures, with the exception of OpenFoam's simulations without the subgrid-scale model. The differences begin to increase with increasing distance from...

  5. Application of the BISON Fuel Performance Code to the FUMEX-III Coordinated Research Project

    Energy Technology Data Exchange (ETDEWEB)

    R. L. Williamson; S. R. Novascone

    2012-04-01

    INL recently participated in FUMEX-III, an International Atomic Energy Agency sponsored fuel modeling Coordinated Research Project. A main purpose of FUMEX-III is to compare code predictions to reliable experimental data. During the same time period, the INL initiated development of a new multidimensional (2D and 3D) multiphysics nuclear fuel performance code called BISON. Interactions with international fuel modeling researchers via FUMEX-III played a significant and important role in the BISON evolution, particularly influencing the selection of material and behavioral models which are now included in the code. BISON's ability to model integral fuel rod behavior did not mature until 2011, thus the only FUMEX-III case considered was the Riso3-GE7 experiment, which includes measurements of rod outer diameter following pellet clad mechanical interaction (PCMI) resulting from a power ramp late in fuel life. BISON comparisons to the Riso3-GE7 final rod diameter measurements are quite reasonable. The INL is very interested in participation in the next Fuel Modeling Coordinated Research Project and would like to see the project initiated as soon as possible.

  6. Project Everware - running other people's code doesn't have to be painful

    CERN Document Server

    CERN. Geneva

    2015-01-01

    Everware is a project that allows you to edit and run someone else's code with one click, even if that code has complicated setup instructions. The main aim of the project is to encourage reuse of software between researchers by making it easy and risk free to try out someone else's code.

  7. Code-to-code comparison between TINTE and MGT for steady state scenario

    Energy Technology Data Exchange (ETDEWEB)

    Druska, C., E-mail: c.druska@juelich.de [Forschungszentrum Juelich GmbH, Institut fuer Energieforschung (IEF), Sicherheitsforschung und Reaktortechnik (IEF-6), D-52426 Juelich (Germany); Nuenighoff, K.; Kasselmann, S.; Allelein, H.-J. [Forschungszentrum Juelich GmbH, Institut fuer Energieforschung (IEF), Sicherheitsforschung und Reaktortechnik (IEF-6), D-52426 Juelich (Germany)

    2012-10-15

    Based on the scenarios defined in the OECD-NEA benchmark 'PBMR Coupled Neutronics/Thermal Hydraulics Transient Benchmark of the PBMR-400 Core Design', a code-to-code comparison was performed between the reactor dynamics program TINTE (Time Dependent Neutronics and Temperatures) using two neutron energy groups and the advanced code MGT (Multi Group TINTE) which is able to take into account up to 43 energy groups. The effect of an increasing number of energy groups on time- and space dependent safety-related parameters like the fuel and moderator temperature, the reactivity or the control rod worth has been studied. In a first step, calculations were carried out to compare the results obtained from MGT and TINTE using two energy groups. Afterwards the MGT two-group model was compared to a scenario using 43 energy groups. In addition, the benchmark was well suited to validate the new nuclear library of MGT now based on ENDF/B-VII.0 data.

  8. Code-to-Code Comparison, and Material Response Modeling of Stardust and MSL using PATO and FIAT

    Science.gov (United States)

    Omidy, Ali D.; Panerai, Francesco; Martin, Alexandre; Lachaud, Jean R.; Cozmuta, Ioana; Mansour, Nagi N.

    2015-01-01

    This report provides a code-to-code comparison between PATO, a recently developed high-fidelity material response code, and FIAT, NASA's legacy code for ablation response modeling. The goal is to demonstrate that FIAT and PATO generate the same results when using the same models. Test cases of increasing complexity are used, from both arc-jet testing and flight experiments. When using the exact same physical models, material properties, and boundary conditions, the two codes give results that are within 2% of each other. The minor discrepancy is attributed to the inclusion of the gas phase heat capacity (cp) in the energy equation in PATO, but not in FIAT.

  9. Outcomes of the 2013 GTO Workshop on Geothermal Code Comparison

    Energy Technology Data Exchange (ETDEWEB)

    Scheibe, Timothy D.; White, Mark D.; White, Signe K.

    2013-03-01

    Pacific Northwest National Laboratory (PNNL) is supporting the Department of Energy (DOE) Geothermal Technologies Office (GTO) in organizing and executing a model comparison activity. This project is directed at testing, diagnosing differences, and demonstrating modeling capabilities of a worldwide collection of numerical simulators for evaluating geothermal technologies. A key element of the project was the planning and implementation of a one-day project kickoff workshop, held February 14, 2013 in Palo Alto, CA. The primary goals of the workshop were to 1) introduce the project and its objectives to potential participating team members, and 2) develop an initial set of test problem descriptions for use in the execution stage. This report summarizes the outcomes of the workshop.

  10. Offshore Code Comparison Collaboration (OC3) for IEA Wind Task 23 Offshore Wind Technology and Deployment

    Energy Technology Data Exchange (ETDEWEB)

    Jonkman, J.; Musial, W.

    2010-12-01

    This final report for IEA Wind Task 23, Offshore Wind Energy Technology and Deployment, is made up of two separate reports, Subtask 1: Experience with Critical Deployment Issues and Subtask 2: Offshore Code Comparison Collaborative (OC3). Subtask 1 discusses ecological issues and regulation, electrical system integration, external conditions, and key conclusions for Subtask 1. Subtask 2, included here, is the larger of the two volumes and contains five chapters that cover background information and objectives of Subtask 2 and results from each of the four phases of the project.

  11. Performance Comparison of Turbo Codes with other Forward Error Correcting Codes

    Directory of Open Access Journals (Sweden)

    Kapil Narwal

    2012-03-01

    In this paper, we compare the performance of turbo codes with different FEC codes presented in the literature. Performance and overall coding gain are compared for convolutional codes, CTC, TPC, and LDPC codes. Turbo codes tend to work best at low code rates and not so well at high code rates. LDPC codes work very well at high code rates, and at low code rates LDPC performance can be very close to the Shannon capacity. The decoder complexity of TPC is much less than that of convolutional turbo codes (CTC). The performance of TPC is close to capacity for higher code rates, but is not great for low code rates, where CTC outperforms it.

  12. Remarks on low weight codewords of generalized affine and projective Reed-Muller codes

    CERN Document Server

    Ballet, Stéphane

    2012-01-01

    We make a brief survey on low weight codewords of generalized Reed-Muller codes and projective generalized Reed-Muller codes. In the affine case we give some information about the words that reach the second distance in cases where these words are not all characterized. Moreover we give the second weight of the projective Reed-Muller codes which was unknown until now. We relate the words of the projective Reed-Muller code reaching the second distance to the words of the affine Reed-Muller code reaching the second distance.

  13. Offshore Code Comparison Collaboration, Continuation within IEA Wind Task 30: Phase II Results Regarding a Floating Semisubmersible Wind System: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, A.; Jonkman, J.; Vorpahl, F.; Popko, W.; Qvist, J.; Froyd, L.; Chen, X.; Azcona, J.; Uzungoglu, E.; Guedes Soares, C.; Luan, C.; Yutong, H.; Pengcheng, F.; Yde, A.; Larsen, T.; Nichols, J.; Buils, R.; Lei, L.; Anders Nygard, T.; et al.

    2014-03-01

    Offshore wind turbines are designed and analyzed using comprehensive simulation tools (or codes) that account for the coupled dynamics of the wind inflow, aerodynamics, elasticity, and controls of the turbine, along with the incident waves, sea current, hydrodynamics, and foundation dynamics of the support structure. This paper describes the latest findings of the code-to-code verification activities of the Offshore Code Comparison Collaboration, Continuation (OC4) project, which operates under the International Energy Agency (IEA) Wind Task 30. In the latest phase of the project, participants used an assortment of simulation codes to model the coupled dynamic response of a 5-MW wind turbine installed on a floating semisubmersible in 200 m of water. Code predictions were compared from load-case simulations selected to test different model features. The comparisons have resulted in a greater understanding of offshore floating wind turbine dynamics and modeling techniques, and better knowledge of the validity of various approximations. The lessons learned from this exercise have improved the participants' codes, thus improving the standard of offshore wind turbine modeling.

  14. The research of breakdown structure and coding system for construction project

    Institute of Scientific and Technical Information of China (English)

    丁大勇; 金维兴; 李培

    2004-01-01

    Whether the breakdown structure and coding system of a construction project are reasonable or not determines to a large degree the performance level of the entire project management. We analyze in detail the similarities and differences of two kinds of decomposing methods, classified by type of work and by construction elements, based on a discussion of the design of typical international coding standards systems. We then deduce the differential coefficient relation between the project breakdown structure (PBS) and the work breakdown structure (WBS). At the same time, we constitute a comprehensive construction project breakdown system including element codes and type-of-work codes, and give a further schematic presentation of the implementation of the system's functions.

  15. A Comparison of Source Code Plagiarism Detection Engines

    Science.gov (United States)

    Lancaster, Thomas; Culwin, Fintan

    2004-06-01

    Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and effective techniques are seen to involve tokenising student submissions then searching pairs of submissions for long common substrings, an example of what is defined to be a paired structural metric. Computing academics are recommended to use one of the two Web-based detection engines, MOSS and JPlag. It is shown that whilst detection is well established there are still places where further research would be useful, particularly where visual support of the investigation process is possible.
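The "paired structural metric" described above (tokenise both submissions, then search for long common substrings of tokens) can be sketched in a few lines. This is an illustrative simplification, not the algorithm of any specific engine named in the paper: the toy lexer maps every identifier to a single `ID` token so that renamed variables still match, and the metric is the length of the longest common token substring.

```python
import re

def tokenize(source):
    """Crude lexer: identifiers collapse to a single ID token so that
    renamed variables still match; numbers collapse to NUM; other
    symbols are kept literally."""
    tokens = []
    for tok in re.findall(r"[A-Za-z_]\w*|\d+|\S", source):
        if re.match(r"[A-Za-z_]\w*$", tok):
            tokens.append("ID")
        elif tok.isdigit():
            tokens.append("NUM")
        else:
            tokens.append(tok)
    return tokens

def longest_common_substring(a, b):
    """Length of the longest run of identical consecutive tokens shared
    by both sequences (dynamic programming, O(len(a) * len(b)))."""
    best = 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

# Two submissions that differ only by identifier renaming score a full match:
s1 = "total = 0\nfor x in data: total = total + x"
s2 = "acc = 0\nfor item in values: acc = acc + item"
score = longest_common_substring(tokenize(s1), tokenize(s2))
```

Because both snippets tokenise to the same 13-token sequence, the metric is insensitive to the renaming, which is exactly why token-level substring matching outperforms raw text comparison for plagiarism detection.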

  16. Code of Conduct for wind-power projects - Phase 3; Code of Conduct fuer windkraftprojekte. Phase 3 Machbarkeit und Strategie

    Energy Technology Data Exchange (ETDEWEB)

    Strub, P. [Pierre Strub, freischaffender Berater, Binningen (Switzerland); Ziegler, Ch. [Inter Act, Basel (Switzerland)

    2008-11-15

    This paper discusses the results of phase three of a project concerning wind-power projects. Feasibility and strategy aspects are examined and discussed. The current state of the wind-power market is reviewed on the basis of the results of a survey on the subject. The social acceptance of wind-power installations is discussed, whereby the rejection of particular projects is compared with a general lack of acceptance. Requirements placed on such projects and possible solutions are presented. Finally, the feasibility of setting up a code of conduct in the area of wind-power projects is discussed and the definition of further instruments is examined.

  17. One dimensional Convolutional Goppa Codes over the projective line

    CERN Document Server

    Pérez, J A Domínguez; Sotelo, G Serrano

    2011-01-01

    We give a general method to construct MDS one-dimensional convolutional codes. Our method generalizes previous constructions of H. Gluesing-Luerssen and B. Langfeld. Moreover we give a classification of one-dimensional Convolutional Goppa Codes and propose a characterization of MDS codes of this type.

  18. Benchmark Problems of the Geothermal Technologies Office Code Comparison Study

    Energy Technology Data Exchange (ETDEWEB)

    White, Mark D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Podgorney, Robert [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kelkar, Sharad M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McClure, Mark W. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Danko, George [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ghassemi, Ahmad [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fu, Pengcheng [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bahrami, Davood [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Barbier, Charlotte [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cheng, Qinglu [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Chiu, Kit-Kwan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Detournay, Christine [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elsworth, Derek [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fang, Yi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Furtney, Jason K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gan, Quan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gao, Qian [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Guo, Bin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hao, Yue [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Horne, Roland N. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Huang, Kai [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Im, Kyungjae [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Norbeck, Jack [Pacific Northwest National Lab. 
(PNNL), Richland, WA (United States); Rutqvist, Jonny [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Safari, M. R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sesetty, Varahanaresh [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sonnenthal, Eric [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Tao, Qingfeng [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); White, Signe K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wong, Yang [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xia, Yidong [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-12-02

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office has sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. Study participants submitted solutions to problems for which their simulation tools were deemed capable or nearly capable. Some participating codes were originally developed for EGS applications whereas some others were designed for different applications but can simulate processes similar to those in EGS. Solution submissions from both were encouraged. In some cases, participants made small incremental changes to their numerical simulation codes to address specific elements of the problem, and in other cases participants submitted solutions with existing simulation tools, acknowledging the limitations of the code. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. 
The problems

  19. Offshore Code Comparison Collaboration within IEA Wind Annex XXIII: Phase III Results Regarding Tripod Support Structure Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Nichols, J.; Camp, T.; Jonkman, J.; Butterfield, S.; Larsen, T.; Hansen, A.; Azcona, J.; Martinez, A.; Munduate, X.; Vorpahl, F.; Kleinhansl, S.; Kohlmeier, M.; Kossel, T.; Boker, C.; Kaufer, D.

    2009-01-01

    Offshore wind turbines are designed and analyzed using comprehensive simulation codes. This paper describes the findings of code-to-code verification activities of the IEA Offshore Code Comparison Collaboration.

  20. Offshore Code Comparison Collaboration within IEA Wind Task 23: Phase IV Results Regarding Floating Wind Turbine Modeling; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Jonkman, J.; Larsen, T.; Hansen, A.; Nygaard, T.; Maus, K.; Karimirad, M.; Gao, Z.; Moan, T.; Fylling, I.

    2010-04-01

    Offshore wind turbines are designed and analyzed using comprehensive simulation codes that account for the coupled dynamics of the wind inflow, aerodynamics, elasticity, and controls of the turbine, along with the incident waves, sea current, hydrodynamics, and foundation dynamics of the support structure. This paper describes the latest findings of the code-to-code verification activities of the Offshore Code Comparison Collaboration, which operates under Subtask 2 of the International Energy Agency Wind Task 23. In the latest phase of the project, participants used an assortment of codes to model the coupled dynamic response of a 5-MW wind turbine installed on a floating spar buoy in 320 m of water. Code predictions were compared from load-case simulations selected to test different model features. The comparisons have resulted in a greater understanding of offshore floating wind turbine dynamics and modeling techniques, and better knowledge of the validity of various approximations. The lessons learned from this exercise have improved the participants' codes, thus improving the standard of offshore wind turbine modeling.

  1. UEDGE code comparisons with DIII-D bolometer DATA

    Energy Technology Data Exchange (ETDEWEB)

    Daniel, J.M.

    1995-01-01

    This paper describes the work done to develop a bolometer post processor that converts volumetric radiated power values taken from a UEDGE solution, to a line integrated radiated power along chords of the bolometers in the DIII-D tokamak. The UEDGE code calculates plasma physics quantities, such as plasma density, radiated power, or electron temperature, and compares them to actual diagnostic measurements taken from the scrape off layer (SOL) and divertor regions of the DIII-D tokamak. Bolometers are devices measuring radiated power within the tokamak. The bolometer interceptors are made up of two complete arrays, an upper array with a vertical view and a lower array with a horizontal view, so that a two dimensional profile of the radiated power may be obtained. The bolometer post processor stores line integrated values taken from UEDGE solutions into a file in tabular format. Experimental data is then put into tabular form and placed in another file. Comparisons can be made between the UEDGE solutions and actual bolometer data. Analysis has been done to determine the accuracy of the plasma physics involved in producing UEDGE simulations.

  2. UEDGE code comparisons with DIII-D bolometer data

    Energy Technology Data Exchange (ETDEWEB)

    Daniel, J.M.

    1994-12-01

    This paper describes the work done to develop a bolometer post processor that converts volumetric radiated power values taken from a UEDGE solution, to a line integrated radiated power along chords of the bolometers in the DIII-D tokamak. The UEDGE code calculates plasma physics quantities, such as plasma density, radiated power, or electron temperature, and compares them to actual diagnostic measurements taken from the scrape off layer (SOL) and divertor regions of the DIII-D tokamak. Bolometers are devices measuring radiated power within the tokamak. The bolometer interceptors are made up of two complete arrays, an upper array with a vertical view and a lower array with a horizontal view, so that a two dimensional profile of the radiated power may be obtained. The bolometer post processor stores line integrated values taken from UEDGE solutions into a file in tabular format. Experimental data is then put into tabular form and placed in another file. Comparisons can be made between the UEDGE solutions and actual bolometer data. Analysis has been done to determine the accuracy of the plasma physics involved in producing UEDGE simulations.
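The core operation of the bolometer post processor described above, converting a volumetric radiated power field to a line-integrated power along a viewing chord, can be sketched as a midpoint-rule quadrature along the chord. This is a minimal illustration of the idea, not the actual UEDGE post-processor code; the function name and sampling scheme are assumptions.

```python
import numpy as np

def chord_integral(field, x0, y0, x1, y1, n=200):
    """Approximate the line integral of a 2D emissivity field along a
    straight chord from (x0, y0) to (x1, y1) by midpoint sampling.
    `field(x, y)` returns local radiated power density (e.g. W/m^3),
    so the result has units of W/m^2 along the bolometer chord."""
    t = (np.arange(n) + 0.5) / n          # midpoint parameter values
    xs = x0 + t * (x1 - x0)
    ys = y0 + t * (y1 - y0)
    length = np.hypot(x1 - x0, y1 - y0)   # chord length
    return np.sum(field(xs, ys)) * length / n

# Sanity check: a uniform emissivity of 2.0 along a 3-4-5 chord
# (length 5) integrates to 10.0.
uniform = lambda x, y: 2.0 * np.ones_like(x)
val = chord_integral(uniform, 0.0, 0.0, 3.0, 4.0)
```

A real post processor would sample the UEDGE solution on its computational mesh (with interpolation) rather than calling an analytic `field` function, and would tabulate one such integral per bolometer chord for comparison with the measured arrays.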

  3. Comparison of secondary flows predicted by a viscous code and an inviscid code with experimental data for a turning duct

    Science.gov (United States)

    Schwab, J. R.; Povinelli, L. A.

    1984-01-01

    A comparison of the secondary flows computed by the viscous Kreskovsky-Briley-McDonald code and the inviscid Denton code with benchmark experimental data for a turning duct is presented. The viscous code is a fully parabolized space-marching Navier-Stokes solver while the inviscid code is a time-marching Euler solver. The experimental data were collected by Taylor, Whitelaw, and Yianneskis with a laser Doppler velocimeter system in a 90 deg turning duct of square cross-section. The agreement between the viscous and inviscid computations was generally very good for the streamwise primary velocity and the radial secondary velocity, except at the walls, where slip conditions were specified for the inviscid code. The agreement between both computations and the experimental data was not as close, especially at the 60.0 deg and 77.5 deg angular positions within the duct. This disagreement was attributed to incomplete modelling of the vortex development near the suction surface.

  4. An Empirical Comparison of Old and New Holland Occupational Codes.

    Science.gov (United States)

    Richards, James M., Jr.

    Holland's (1973) theory classifies occupations in terms of similarity to six ideal types (Realistic, Investigative, Artistic, Social, Enterprising, and Conventional) by assigning each occupation a three-letter code. Holland codes were recently developed for all occupations in the "Dictionary of Occupational Titles." The code for some…

  5. Code of Conduct for wind-power projects - Feasibility study; Code of Conduct fuer windkraftprojekte. Machbarkeitsstudie - Schlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Strub, P. [Pierre Strub, freischaffender Berater, Binningen (Switzerland); Ziegler, Ch. [Inter Act, Basel (Switzerland)

    2009-02-15

    This final report deals with the results of a feasibility study concerning the development of a Code of Conduct for wind-power projects. The aim is to strengthen the acceptance of wind power by the general public. The necessity of new, voluntary market instruments is discussed. The urgency of development in this area is quoted as being high, and the authors consider the feasibility of the definition of a code of conduct as proven. The code of conduct can, according to the authors, be of use at various levels, but primarily in project development. Further free-enterprise instruments are also suggested that should help support socially compatible and successful market development. It is noted that the predominant portion of those questioned are prepared to co-operate in further work on the subject.

  6. Algorthms and Regolith Erosion Models for the Alert Code Project

    Data.gov (United States)

    National Aeronautics and Space Administration — ORBITEC and Duke University have teamed on this STTR to develop the ALERT (Advanced Lunar Exhaust-Regolith Transport) code which will include new developments in...

  7. Comparisons of time explicit hybrid kinetic-fluid code Architect for Plasma Wakefield Acceleration with a full PIC code

    Science.gov (United States)

    Massimo, F.; Atzeni, S.; Marocchino, A.

    2016-12-01

    Architect, a time explicit hybrid code designed to perform quick simulations for electron-driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma dynamics is necessary. Particle-in-Cell (PIC) codes represent the state-of-the-art technique to investigate the underlying physics and possible experimental scenarios; however, PIC codes demand heavy computational resources. The Architect code substantially reduces the need for computational resources by using a hybrid approach: relativistic electron bunches are treated kinetically as in a PIC code and the background plasma as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and fluid equations. In this paper, both the underlying algorithms and a comparison with a fully three-dimensional Particle-in-Cell code are reported. The comparison highlights the good agreement between the two models up to weakly nonlinear regimes. In highly nonlinear regimes the two models disagree only in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.

  8. Comparison of COULOMB-2, NASCAP-2k and SPIS codes for geostationary spacecraft charging

    Science.gov (United States)

    Novikov, Lev; Makletsov, Andrei; Sinolits, Vadim

    In developing international standards for spacecraft charging, it is necessary to compare the results of spacecraft charging modeling obtained with various models. In this paper, electrical potentials for spacecraft 3D models were calculated with the COULOMB-2, NASCAP-2k [1] and SPIS [2] software, and the obtained values were compared. To compare the COULOMB-2 and NASCAP-2k codes, we used a 3D geometrical model of a spacecraft given in [1]. Parameters of the spacecraft surface materials were taken from [1] as well. For COULOMB-2 and SPIS cross-validation, we carried out calculations with the SPIS code through the SPENVIS web interface and with the COULOMB-2 software for a spacecraft geometrical model given in the SPIS test examples [2]. In both cases, we calculated distributions of electric potentials on the spacecraft surface and visualized the obtained distributions with a color code. Pictures of the surface potential distribution calculated with the COULOMB-2 and SPIS software are in good qualitative agreement. Absolute values of surface potentials calculated with these codes for different plasma conditions are reasonably close. Pictures of the surface potential distribution calculated for the spacecraft model [1] with the COULOMB-2 software fully correspond to the current understanding of the physical mechanisms of differential spacecraft surface charging. In this case, we compared only calculated values of the surface potential for the same space plasma conditions, because the potential distributions on the spacecraft surface are absent in [1]. For all the plasma conditions considered, the COULOMB-2 model gives higher absolute values of negative potential than the NASCAP-2k model. Differences in these values reach 2-3 kV. The possible explanations of these divergences are distinctions in the calculation procedures for primary plasma currents and secondary emission currents. References 1. Ferguson D.C., Wimberly S.C. 51st AIAA Aerospace Science Meeting 2013 (AIAA 2013-0810). 2. 
http://dev.spis.org/projects/spine/home/spis

  9. Secular trends in diagnostic code density in electronic healthcare data from health care systems in the Vaccine Safety Datalink project.

    Science.gov (United States)

    Hechter, Rulin C; Qian, Lei; Sy, Lina S; Greene, Sharon K; Weintraub, Eric S; Naleway, Allison L; Rowhani-Rahbar, Ali; Donahue, James G; Daley, Matthew F; Vazquez-Benitez, Gabriela; Lugg, Marlene M; Jacobsen, Steven J

    2013-02-04

    Large observational vaccine safety studies often use automated diagnoses extracted from medical care databases to identify pre-specified potential adverse events following immunization (AEFI). We assessed the secular trends and variability in the number of diagnoses per encounter regardless of immunization status, referred to as diagnostic code density, by healthcare setting, age, and pre-specified condition in eight large health care systems of the Vaccine Safety Datalink project during 2001-2009. An increasing trend in diagnostic code density was observed in all healthcare settings and age groups, with variations across the sites. Sudden increases in diagnostic code density were observed at certain sites when changes in coding policies or data inclusion criteria took place. When vaccine safety studies use an historical comparator, the increased diagnostic code density over time may generate low expected rates (based on historical data) and high observed rates (based on current data), suggesting a false positive association between a vaccine and an AEFI. Ongoing monitoring of diagnostic code density can provide guidance on study design and the choice of appropriate comparison groups. It can also be used to ensure data quality and allow timely correction of errors in an active safety surveillance system. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Verification of Three Dimensional Triangular Prismatic Discrete Ordinates Transport Code ENSEMBLE-TRIZ by Comparison with Monte Carlo Code GMVP

    Science.gov (United States)

    Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi

    2014-06-01

    This paper deals with verification of the three dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe electric power sodium cooled reactor. Nuclear characteristics are calculated at beginning of cycle of an initial core and at beginning and end of cycle of an equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity.

  11. POPCORN: A comparison of binary population synthesis codes

    CERN Document Server

    Claeys, J S W; Mennekens, N

    2012-01-01

    We compare the results of three binary population synthesis codes to understand the differences in their results. As a first result we find that when equalizing the assumptions the results are similar. The main differences arise from deviating physical input.

  12. POPCORN: A comparison of binary population synthesis codes

    Science.gov (United States)

    Claeys, J. S. W.; Toonen, S.; Mennekens, N.

    2013-01-01

    We compare the results of three binary population synthesis codes to understand the differences in their results. As a first result we find that when equalizing the assumptions the results are similar. The main differences arise from deviating physical input.

  13. A comparison of mechanical algorithms of fuel performance code systems

    Energy Technology Data Exchange (ETDEWEB)

    Park, C. J.; Park, J. H.; Kang, K. H.; Ryu, H. J.; Moon, J. S.; Jeong, I. H.; Lee, C. Y.; Song, K. C

    2003-11-01

    The goal of fuel rod performance evaluation is to assess the robustness of a fuel rod and its cladding during irradiation. Computer simulation of fuel rod performance has become important for the development of new nuclear systems. To construct a computing code system for fuel rod performance, we compared the algorithms of several existing fuel rod performance code systems and summarized the details and tips as preliminary work. Among several code systems, FRAPCON and FEMAXI for LWRs, ELESTRES for CANDU reactors, and LIFE for fast reactors are reviewed. The computational algorithms related to mechanical interaction of the fuel rod are compared, including methodologies and subroutines. This work will be utilized to develop the computing code system for dry-process fuel rod performance.

  14. Reliability of cause of death coding: an international comparison.

    Science.gov (United States)

    Antini, Carmen; Rajs, Danuta; Muñoz-Quezada, María Teresa; Mondaca, Boris Andrés Lucero; Heiss, Gerardo

    2015-07-01

    This study evaluates the agreement of nosologic coding of cardiovascular causes of death between a Chilean coder and one in the United States, in a stratified random sample of death certificates of persons aged ≥ 60, issued in 2008 in the Valparaíso and Metropolitan regions, Chile. All causes of death were converted to ICD-10 codes in parallel by both coders. Concordance was analyzed with inter-coder agreement and Cohen's kappa coefficient by level of specification ICD-10 code for the underlying cause and the total causes of death coding. Inter-coder agreement was 76.4% for all causes of death and 80.6% for the underlying cause (agreement at the four-digit level), with differences by the level of specification of the ICD-10 code, by line of the death certificate, and by number of causes of death per certificate. Cohen's kappa coefficient was 0.76 (95%CI: 0.68-0.84) for the underlying cause and 0.75 (95%CI: 0.74-0.77) for the total causes of death. In conclusion, causes of death coding and inter-coder agreement for cardiovascular diseases in two regions of Chile are comparable to an external benchmark and with reports from other countries.
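Cohen's kappa, used above to measure inter-coder agreement beyond chance, is straightforward to compute: kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e the agreement expected from each coder's marginal code frequencies. The sketch below uses illustrative ICD-10 codes, not data from the study.

```python
def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders assigning one code per record.
    p_o is the observed fraction of matching assignments; p_e is the
    chance agreement implied by each coder's marginal frequencies."""
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    cats = set(codes_a) | set(codes_b)
    p_e = sum((codes_a.count(c) / n) * (codes_b.count(c) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical underlying-cause codes from two coders for 8 certificates:
coder_1 = ["I21.9", "I21.9", "I50.0", "I64", "I21.9", "I50.0", "I64", "I21.9"]
coder_2 = ["I21.9", "I21.9", "I50.0", "I64", "I50.0", "I50.0", "I21.9", "I21.9"]
kappa = cohens_kappa(coder_1, coder_2)
```

Here the coders agree on 6 of 8 certificates (p_o = 0.75) while chance alone would give p_e = 0.375, so kappa = 0.6, which by common rules of thumb indicates moderate-to-substantial agreement.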

  15. Comparison of the Gauss-Seidel spherical polarized radiative transfer code with other radiative transfer codes

    Science.gov (United States)

    Herman, B. M.; Flittner, D. E.; Caudill, T. R.; Thome, K. J.; Ben-David, A.

    1995-07-01

    Calculations that use the Gauss-Seidel method are presented of the diffusely scattered light in a spherical atmosphere with polarization fully included. Comparisons are made between this method and the Monte Carlo calculations of other researchers for spherical geometry in a pure Rayleigh atmosphere. Comparisons with plane-parallel atmospheres are also presented. Single-scatter intensity comparisons with spherical geometry show excellent agreement. When all orders of scattering are included, comparisons of polarization parameters I, Q and U as well as the plane of polarization show good agreement when allowances are made for the statistical variability inherent in the Monte Carlo method.

  16. A high-speed full-field profilometry with coded laser strips projection

    Science.gov (United States)

    Zhang, Guanliang; Zhou, Xiang; Jin, Rui; Xu, Changda; Li, Dong

    2017-06-01

    Line-structured-light measurement needs an accurate mechanical movement device and a high-frame-rate camera, which are difficult to realize. We propose a high-speed full-field profilometry to overcome these difficulties, using coded laser stripes projected by a MEMS scanning mirror. The mirror can take the place of the mechanical movement device thanks to its high speed and accuracy. In addition, a method combining gray code and color code is used to decrease the number of projected frames, retaining the advantage of line-structured-light measurement. In the experiment, we use a laser MEMS scanner and two color cameras. The laser MEMS scanner projects coded stripes, and the two color cameras collect the modulated pattern on the measured object. The color cameras compose a stereo vision system so that the three-dimensional data are reconstructed by triangulation.
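The gray-code part of such a stripe-coding scheme is typically the binary-reflected Gray code: adjacent stripe indices differ in exactly one bit, so a decoding error at a stripe boundary shifts the recovered index by at most one stripe. The sketch below shows the encoding and its inverse; it illustrates the general technique, not the authors' specific combined gray/color encoding.

```python
def to_gray(n):
    """Binary-reflected Gray code of n: neighbouring indices differ in
    exactly one bit, limiting boundary decoding errors to one stripe."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray code by XOR-folding successively shifted copies."""
    n = g
    mask = g >> 1
    while mask:
        n ^= mask
        mask >>= 1
    return n

# With 3 projected bit patterns, 8 stripes are indexed: each camera pixel
# reads one bit per frame, and the stripe index is decoded from the 3 bits.
stripe_codes = [to_gray(i) for i in range(8)]
```

For 8 stripes the code sequence is 0, 1, 3, 2, 6, 7, 5, 4; every neighbouring pair differs in a single bit, unlike plain binary where, e.g., 3 (011) and 4 (100) differ in all three bits.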

  17. GCSS Idealized Cirrus Model Comparison Project

    Science.gov (United States)

    Starr, David OC.; Benedetti, Angela; Boehm, Matt; Brown, Philip R. A.; Gierens, Klaus; Girard, Eric; Giraud, Vincent; Jakob, Christian; Jensen, Eric; Khvorostyanov, Vitaly; et al.

    2000-01-01

    The GCSS Working Group on Cirrus Cloud Systems (WG2) is conducting a systematic comparison and evaluation of cirrus cloud models. This fundamental activity seeks to support the improvement of models used for climate simulation and numerical weather prediction through assessment and improvement of the "process" models underlying parametric treatments of cirrus cloud processes in large-scale models. The WG2 Idealized Cirrus Model Comparison Project is an initial comparison of cirrus cloud simulations by a variety of cloud models for a series of idealized situations with relatively simple initial conditions and forcing. The models (16) represent the state-of-the-art and include 3-dimensional large eddy simulation (LES) models, two-dimensional cloud resolving models (CRMs), and single column model (SCM) versions of GCMs. The model microphysical components are similarly varied, ranging from single-moment bulk (relative humidity) schemes to fully size-resolved (bin) treatments where ice crystal growth is explicitly calculated. Radiative processes are included in the physics package of each model. The baseline simulations include "warm" and "cold" cirrus cases where cloud top initially occurs at about -47C and -66C, respectively. All simulations are for nighttime conditions (no solar radiation) where the cloud is generated in an ice supersaturated layer, about 1 km in depth, with an ice pseudoadiabatic thermal stratification (neutral). Continuing cloud formation is forced via an imposed diabatic cooling representing a 3 cm/s uplift over a 4-hour time span followed by a 2-hour dissipation stage with no cooling. Variations of these baseline cases include no-radiation and stable-thermal-stratification cases. Preliminary results indicated the great importance of ice crystal fallout in determining even the gross cloud characteristics, such as average vertically-integrated ice water path (IWP). Significant inter-model differences were found. Ice water fall speed is directly

  18. Offshore code comparison collaboration continuation (OC4), phase I - Results of coupled simulations of an offshore wind turbine with jacket support structure

    DEFF Research Database (Denmark)

    Popko, Wojciech; Vorpahl, Fabian; Zuga, Adam;

    2012-01-01

    In this paper, the exemplary results of the IEA Wind Task 30 "Offshore Code Comparison Collaboration Continuation" (OC4) Project - Phase I, focused on the coupled simulation of an offshore wind turbine (OWT) with a jacket support structure, are presented. The focus of this task has been...

  19. An Evaluation and Comparison of Three Nonlinear Programming Codes

    Science.gov (United States)

    1976-03-01

    sixth problem was selected from the Himmelblau collection [Ref. 11] and the remaining two were adaptations of an inventory model and an entropy model... both require utilization of the main nonlinear codes with their high core and corresponding time requirements. Himmelblau estimated preparation times... A Nonlinear Programming Model for Determining a Munitions Mix, by R.J. Clasen, G.W. Graves and J.Y. Lu, March 1974. 11. Himmelblau, D.M., Applied

  20. Comparison of DAC and MONACO DSMC Codes with Flat Plate Simulation

    Science.gov (United States)

    Padilla, Jose F.

    2010-01-01

    Various implementations of the direct simulation Monte Carlo (DSMC) method exist in academia, government and industry. By comparing implementations, deficiencies and merits of each can be discovered. This document reports comparisons between DSMC Analysis Code (DAC) and MONACO. DAC is NASA's standard DSMC production code and MONACO is a research DSMC code developed in academia. These codes have various differences; in particular, they employ distinct computational grid definitions. In this study, DAC and MONACO are compared by having each simulate a blunted flat plate wind tunnel test, using an identical volume mesh. Simulation expense and DSMC metrics are compared. In addition, flow results are compared with available laboratory data. Overall, this study revealed that both codes, excluding grid adaptation, performed similarly. For parallel processing, DAC was generally more efficient. As expected, code accuracy was mainly dependent on physical models employed.

  1. Comparisons of IRI global TEC maps with CODE GIMs

    CERN Document Server

    Coisson, P; Radicella, S M

    2002-01-01

    The Global Ionospheric Maps (GIM) produced at CODE are used in this work as the experimental reference to evaluate the performance of IRI in predicting global vertical TEC during a period of high solar activity. The analysis has been done for the four seasons of the year 2001, comparing monthly median GIMs for 12 different UTs. The attention is focused on the correct reproduction of large-scale features, such as the equatorial anomaly and the behavior at high latitudes, to understand where and under which conditions the IRI model could present unexpected features.

  2. Benchmarking studies for the DESCARTES and CIDER codes. Hanford Environmental Dose Reconstruction Project

    Energy Technology Data Exchange (ETDEWEB)

    Eslinger, P.W.; Ouderkirk, S.J.; Nichols, W.E.

    1993-01-01

    The Hanford Environmental Dose Reconstruction (HEDR) project is developing several computer codes to model the airborne release, transport, and environmental accumulation of radionuclides resulting from Hanford operations from 1944 through 1972. In order to calculate the dose of radiation a person may have received in any given location, the geographic area addressed by the HEDR Project will be divided into a grid. The grid size suggested by the draft requirements contains 2091 units called nodes. Two of the codes being developed are DESCARTES and CIDER. The DESCARTES code will be used to estimate the concentration of radionuclides in environmental pathways from the output of the air transport code RATCHET. The CIDER code will use information provided by DESCARTES to estimate the dose received by an individual. The requirements that Battelle (BNW) set for these two codes were released to the HEDR Technical Steering Panel (TSP) in a draft document on November 10, 1992. This document reports on the preliminary work performed by the code development team to determine if the requirements could be met.
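
The environmental-pathway estimate described above can be made concrete with a generic first-order accumulation balance. The function below is only an illustration of that kind of model; it is not DESCARTES's actual formulation, and all parameter values are hypothetical.

```python
import math

def surface_concentration(dep_rate, lam, t):
    """C(t) for dC/dt = dep_rate - lam * C with C(0) = 0.

    Generic first-order accumulation: a constant deposition rate feeding a
    compartment that loses activity (decay, weathering) at rate lam.
    Analytic solution: C(t) = (dep_rate / lam) * (1 - exp(-lam * t)).
    """
    return dep_rate / lam * (1.0 - math.exp(-lam * t))

# Hypothetical numbers: 1 Bq/m^2/day deposition, 0.1/day effective loss rate.
# The concentration rises toward the equilibrium value dep_rate/lam = 10 Bq/m^2.
```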

  3. Fire aerosol experiment and comparisons with computer code predictions

    Energy Technology Data Exchange (ETDEWEB)

    Gregory, W.S.; Nichols, B.D.; White, B.W.; Smith, P.R.; Leslie, I.H.; Corkran, J.R.

    1988-01-01

    Los Alamos National Laboratory, in cooperation with New Mexico State University, has carried out a series of tests to provide experimental data on fire-generated aerosol transport. These data will be used to verify the aerosol transport capabilities of the FIRAC computer code. FIRAC was developed by Los Alamos for the US Nuclear Regulatory Commission. It is intended to be used by safety analysts to evaluate the effects of hypothetical fires on nuclear plants. One of the most significant aspects of this analysis deals with smoke and radioactive material movement throughout the plant. The tests have been carried out using an industrial furnace that can generate gas temperatures up to 300 °C. To date, we have used quartz aerosol with a median diameter of about 10 µm as the fire aerosol simulant. We also plan to use fire-generated aerosols of polystyrene and polymethyl methacrylate (PMMA). The test variables include two nominal gas flow rates (150 and 300 ft³/min) and three nominal gas temperatures (ambient, 150 °C, and 300 °C). The test results are presented in the form of plots of aerosol deposition vs. length of duct. In addition, the mass of aerosol caught in a high-efficiency particulate air (HEPA) filter during the tests is reported. The tests are simulated with the FIRAC code, and the results are compared with the experimental data. 3 refs., 10 figs., 1 tab.

  4. Fundamental period of Italian reinforced concrete buildings: comparison between numerical, experimental and Italian code simplified values

    Science.gov (United States)

    Ditommaso, Rocco; Carlo Ponzo, Felice; Auletta, Gianluca; Iacovino, Chiara; Nigro, Antonella

    2015-04-01

    The aim of this study is a comparison among the fundamental periods of reinforced concrete buildings evaluated using the simplified approach proposed by the Italian seismic code (NTC 2008), numerical models, and measured values retrieved from an experimental campaign performed on several buildings located in the Basilicata region (Italy). With the intention of proposing simplified relationships to evaluate the fundamental period of reinforced concrete buildings, scientists and engineers have performed several numerical and experimental campaigns on different structures all around the world to calibrate different kinds of formulas. Most of the formulas retrieved from both numerical and experimental analyses provide vibration periods smaller than those suggested by the Italian seismic code. However, it is well known that the fundamental period of a structure plays a key role in the correct evaluation of the spectral acceleration for seismic static analyses. Generally, simplified approaches impose the use of safety factors greater than those related to in-depth nonlinear analyses, with the aim of covering possible unexpected uncertainties. Using the simplified formula proposed by the Italian seismic code, the fundamental period is considerably higher than the fundamental periods experimentally evaluated on real structures, with the consequence that the spectral acceleration adopted in seismic static analysis may differ significantly from the real spectral acceleration. This approach could produce a decrease in the safety factors obtained using linear and nonlinear seismic static analyses. Finally, the authors suggest a possible update of the Italian seismic code formula for the simplified estimation of the fundamental period of vibration of existing RC buildings, taking into account both elastic and inelastic structural behaviour and the interaction between structural and non-structural elements. Acknowledgements This study was partially funded by the Italian Civil Protection Department within the
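
The simplified NTC 2008 estimate discussed above is the usual height-based power law. A minimal sketch, assuming the commonly cited coefficient for RC frame buildings (the exact value should be checked against the code text):

```python
def fundamental_period_ntc2008(height_m: float, c1: float = 0.075) -> float:
    """Simplified fundamental period T1 = C1 * H^(3/4), in seconds.

    C1 = 0.075 is the commonly cited NTC 2008 / Eurocode 8 coefficient for
    reinforced concrete frame buildings; H is the building height in metres.
    """
    return c1 * height_m ** 0.75

# E.g. a hypothetical 21 m RC frame building gives T1 of roughly 0.74 s,
# typically higher than periods measured on real instrumented buildings,
# which is the discrepancy the study above investigates.
```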

  5. A coded structured light system based on primary color stripe projection and monochrome imaging.

    Science.gov (United States)

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-10-14

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy.
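
The time-multiplexing Gray-code strategy used as the benchmark above assigns each projector column a bit pattern across successive frames; consecutive columns differ in exactly one bit, which makes decoding robust at stripe boundaries. A minimal sketch of the encoding/decoding step (pattern count and column index here are illustrative, not the paper's):

```python
def gray_encode(n: int) -> int:
    """Binary-reflected Gray code: bit k of the result drives stripe pattern k."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Invert the Gray code by cascading XORs from the top bit down."""
    n = g
    mask = g >> 1
    while mask:
        n ^= mask
        mask >>= 1
    return n

# A camera pixel that observes the bit sequence across the patterns recovers
# the projector column index, which is then triangulated into a 3D point.
column = 300
bits = [(gray_encode(column) >> k) & 1 for k in range(10)]  # 10 patterns
```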

  6. A Coded Structured Light System Based on Primary Color Stripe Projection and Monochrome Imaging

    Directory of Open Access Journals (Sweden)

    Armando Viviano Razionale

    2013-10-01

    Full Text Available Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy.

  7. Performance Comparison of Turbo Code in WIMAX System with Various Detection Techniques

    Directory of Open Access Journals (Sweden)

    Vikas Tursenia

    2013-07-01

    Full Text Available Different FEC techniques, such as convolutional codes, RS codes and turbo codes, are used to improve the performance of communication systems. In this paper, we study the performance of the MAP, Log-MAP, Max-Log-MAP and APP decoding algorithms for turbo codes in terms of the a priori information, a posteriori information, extrinsic information and channel reliability. We also analyze how important an accurate estimate of the channel reliability factor is to the good performance of the iterative turbo decoder. The simulations are made for a parallel concatenation of two recursive systematic convolutional codes with a block interleaver at the transmitter, an AWGN channel, and iterative decoding with different algorithms at the receiver side. The comparison of these detection techniques in terms of BER performance is discussed in the results section.
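
The difference between the Log-MAP and Max-Log-MAP decoders mentioned above comes down to how they evaluate the Jacobian logarithm ln(e^a + e^b) in the log domain. A minimal sketch of the two variants (a generic illustration, not tied to any particular decoder implementation):

```python
import math

def max_star_exact(a: float, b: float) -> float:
    """Log-MAP: exact Jacobian logarithm ln(e^a + e^b),
    computed stably as max(a, b) plus a correction term."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_approx(a: float, b: float) -> float:
    """Max-Log-MAP: drop the correction term, trading a small BER
    penalty for lower complexity."""
    return max(a, b)

# The correction term is at most ln 2 ~ 0.693 (when a == b) and vanishes
# as |a - b| grows, which is why Max-Log-MAP is a good approximation.
```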

  8. Cross-cultural comparison of political leaders' operational codes.

    Science.gov (United States)

    Dirilen-Gumus, Ozlem

    2016-03-04

    This study aims at comparing the operational codes (namely, philosophical and instrumental beliefs about the political universe) of political leaders from different cultures. According to Schwartz (2004), cultures can be categorised along 3 dimensions: autonomy-embeddedness, egalitarianism-hierarchy and mastery-harmony. This study draws upon the 1st dimension (akin to the most popular cultural dimension of Hofstede: individualism-collectivism) and focuses on comparing the leaders of autonomous and embedded cultures based on how cooperative/conflictual they are. The main research hypothesis is as follows: the leaders of embedded cultures would be more cooperative than the leaders of autonomous cultures. For this purpose, 3 autonomous cultures (the UK, Canada and Australia) and 3 embedded cultures (Singapore, South Africa and Malaysia) were chosen randomly, and the cooperativeness of the corresponding countries' leaders was compared after being profiled by Profiler Plus. The results indicated that the leaders of embedded cultures were significantly more cooperative than the leaders of autonomous cultures after holding the control variables constant. The findings were discussed in the light of the relevant literature.

  9. User instructions for the DESCARTES environmental accumulation code. Hanford Environmental Dose Reconstruction Project

    Energy Technology Data Exchange (ETDEWEB)

    Miley, T.B.; Eslinger, P.W.; Nichols, W.E.; Lessor, K.S.; Ouderkirk, S.J.

    1994-05-01

    The purpose of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation dose that individuals could have received as a result of emissions since 1944 from the Hanford Site near Richland, Washington. The HEDR Project work is conducted under several technical and administrative tasks, among which is the Environmental Pathways and Dose Estimates task. The staff on this task have developed a suite of computer codes which are used to estimate doses to individuals in the public. This document contains the user instructions for the DESCARTES (Dynamic estimates of concentrations and Accumulated Radionuclides in Terrestrial Environments) suite of codes. In addition to the DESCARTES code, this includes two air data preprocessors, a database postprocessor, and several utility routines that are used to format input data needed for DESCARTES.

  10. Comparison of DT neutron production codes MCUNED, ENEA-JSI source subroutine and DDT

    Energy Technology Data Exchange (ETDEWEB)

    Čufar, Aljaž, E-mail: aljaz.cufar@ijs.si [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Lengar, Igor; Kodeli, Ivan [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Milocco, Alberto [Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom); Sauvan, Patrick [Departamento de Ingeniería Energética, E.T.S. Ingenieros Industriales, UNED, C/Juan del Rosal 12, 28040 Madrid (Spain); Conroy, Sean [VR Association, Uppsala University, Department of Physics and Astronomy, PO Box 516, SE-75120 Uppsala (Sweden); Snoj, Luka [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia)

    2016-11-01

    Highlights: • Results of three codes capable of simulating accelerator-based DT neutron generators were compared on a simple model in which only a thin target made of a mixture of titanium and tritium is present. Two typical deuteron beam energies, 100 keV and 250 keV, were used in the comparison. • Comparisons of the angular dependence of the total neutron flux and spectrum, as well as the neutron spectrum of all the neutrons emitted from the target, show general agreement of the results but also some noticeable differences. • A comparison of the figures of merit of the calculations using different codes showed that the computational time necessary to achieve the same statistical uncertainty can vary by more than a factor of 30 depending on which code is used to simulate the DT neutron generator. - Abstract: As the DT fusion reaction produces neutrons with energies significantly higher than in fission reactors, special fusion-relevant benchmark experiments are often performed using DT neutron generators. However, commonly used Monte Carlo particle transport codes such as MCNP or TRIPOLI cannot be directly used to analyze these experiments since they do not have the capabilities to model the production of DT neutrons. Three of the available approaches to model the DT neutron generator source are the MCUNED code, the ENEA-JSI DT source subroutine and the DDT code. The MCUNED code is an extension of the well-established and validated MCNPX Monte Carlo code. The ENEA-JSI source subroutine was originally prepared for the modelling of the FNG experiments using different versions of the MCNP code (−4, −5, −X) and was later extended to allow the modelling of both DT and DD neutron sources. The DDT code prepares the DT source definition file (SDEF card in MCNP) which can then be used in different versions of the MCNP code. In the paper the methods for the simulation of the DT neutron production used in the codes are briefly described and compared for the case of a
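
The figure-of-merit comparison above follows the standard Monte Carlo convention: FOM = 1/(R²T), where R is the relative statistical error and T the computing time. Since R² scales as 1/T, the FOM is roughly constant for a given code and problem, and the time to reach a fixed uncertainty scales as 1/FOM. A sketch with hypothetical numbers (not the paper's actual timings):

```python
def figure_of_merit(rel_error: float, cpu_minutes: float) -> float:
    """Monte Carlo figure of merit FOM = 1 / (R^2 * T)."""
    return 1.0 / (rel_error ** 2 * cpu_minutes)

# Hypothetical: code A reaches R = 1% in 10 min, code B needs 320 min for
# the same R; their FOM ratio is 32, i.e. the >30x spread reported above.
fom_a = figure_of_merit(0.01, 10.0)
fom_b = figure_of_merit(0.01, 320.0)
```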

  11. Comparison of Activation Analysis Codes between CINDER'90 and ORIGEN-S

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jeong Dong; Choi, Hong Yeop; Lee, Yong Deok; Kim, Hodong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    code comparison are provided to select the activation analysis code.

  12. Field depth extension of 2D barcode scanner based on wavefront coding and projection algorithm

    Science.gov (United States)

    Zhao, Tingyu; Ye, Zi; Zhang, Wenzi; Huang, Weiwei; Yu, Feihong

    2008-03-01

    Wavefront coding (WFC) used in 2D barcode scanners can extend the depth of field to a great extent with a simpler structure than an autofocus microscope system. With a cubic phase mask (CPM) placed at the aperture stop, blurred images are obtained on the charge-coupled device (CCD), which can be restored by digital filters. Direct methods are widely used in real-time restoration because of their good computational efficiency, but they smooth out details. Here, the results of the direct method are first filtered by a hard-threshold function. The positions of the steps can be detected by simple differential operators. With the positions corrected by a projection algorithm, the exact barcode information is restored. A wavefront coding system with a 7 mm effective focal length and an F-number of 6 is designed as an example. Although the magnification varies, images at different object distances can all be restored with the single point spread function (PSF) for a 200 mm object distance. A QR code (Quick Response code) of 31 mm × 27 mm is used as the target object. The simulation results showed that the sharp imaging object distance ranges from 80 mm to 355 mm. The 2D barcode scanner with wavefront coding extends the depth of field with a simple structure, low cost and a large manufacturing tolerance. The combination of the direct filter and the projection algorithm proposed here recovers the exact 2D barcode information with good computational efficiency.
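
The cubic phase mask at the heart of the system above adds a phase profile of the form φ(x, y) = α(x³ + y³) across the pupil, which makes the PSF nearly invariant to defocus. A minimal numerical sketch (grid size and α are arbitrary illustration values, not the paper's design):

```python
import numpy as np

def cubic_phase_pupil(n: int = 256, alpha: float = 20.0) -> np.ndarray:
    """Complex pupil function with a cubic phase mask phi = alpha*(x^3 + y^3)."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    aperture = (X ** 2 + Y ** 2 <= 1.0).astype(float)  # circular clear aperture
    return aperture * np.exp(1j * alpha * (X ** 3 + Y ** 3))

def psf(pupil: np.ndarray) -> np.ndarray:
    """Incoherent PSF: squared magnitude of the pupil's Fourier transform."""
    return np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2

# The resulting extended PSF blurs every object distance similarly, so one
# deconvolution filter restores images over the whole extended depth of field.
```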

  13. Investigate Methods to Decrease Compilation Time-AX-Program Code Group Computer Science R& D Project

    Energy Technology Data Exchange (ETDEWEB)

    Cottom, T

    2003-06-11

    Large simulation codes can take on the order of hours to compile from scratch. In Kull, which uses generic programming techniques, a significant portion of the time is spent generating and compiling template instantiations. I would like to investigate methods that would decrease the overall compilation time for large codes. These would be methods which could then be applied, hopefully, as standard practice to any large code. Success is measured by the overall decrease in wall clock time a developer spends waiting for an executable. Analyzing the make system of a slow-to-build project can benefit all developers on the project. Taking the time to analyze the number of processors used over the life of the build and restructuring the system to maximize the parallelization can significantly reduce build times. Distributing the build across multiple machines with the same configuration can increase the number of available processors for building and can help evenly balance the load. Becoming familiar with compiler options can have its benefits as well. The time improvements of the sum can be significant. Initial compilation time for Kull on OSF1 was approximately 3 hours. Final time on OSF1 after completion is 16 minutes. Initial compilation time for Kull on AIX was approximately 2 hours. Final time on AIX after completion is 25 minutes. Developers now spend 3 hours less waiting for a Kull executable on OSF1, and 2 hours less on AIX platforms. In the eyes of many Kull code developers, the project was a huge success.

  14. Multi-dimensional free-electron laser simulation codes: a comparison study

    CERN Document Server

    Biedron, S G; Dejus, Roger J; Faatz, B; Freund, H P; Milton, S V; Nuhn, H D; Reiche, S

    2000-01-01

    A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.

  15. Finite Projective Geometries and Classification of the Weight Hierarchies of Codes (I)

    Institute of Scientific and Technical Information of China (English)

    Wen De CHEN; Torleiv KLØVE

    2004-01-01

    The weight hierarchy of a binary linear [n, k] code C is the sequence (d1, d2, …, dk), where dr is the smallest support of an r-dimensional subcode of C. The codes of dimension 4 are collected in classes, and the possible weight hierarchies in each class are determined by finite projective geometries. The possible weight hierarchies in classes A, B, C, D are determined in Part (I). The possible weight hierarchies in classes E, F, G, H, I are determined in Part (II).
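
The definition above can be made concrete by brute force for tiny codes: dr is found by enumerating r-dimensional subcodes and taking the smallest support. The sketch below is purely illustrative (exponential enumeration, workable only for toy codes, not the geometric classification the paper develops):

```python
from itertools import combinations

def rank_gf2(vecs):
    """Rank over GF(2) of vectors given as integer bitmasks."""
    basis = []
    for v in vecs:
        for b in sorted(basis, reverse=True):
            v = min(v, v ^ b)   # reduce by b if b's leading bit is set in v
        if v:
            basis.append(v)
    return len(basis)

def weight_hierarchy(gen_rows, n):
    """Brute-force weight hierarchy (d1, ..., dk) of a binary [n, k] code.

    gen_rows: k linearly independent generator rows as n-bit integers.
    dr is the smallest support of an r-dimensional subcode.
    """
    k = len(gen_rows)
    words = set()
    for msg in range(1, 1 << k):            # all nonzero codewords
        w = 0
        for i, g in enumerate(gen_rows):
            if (msg >> i) & 1:
                w ^= g
        words.add(w)
    hierarchy = []
    for r in range(1, k + 1):
        best = n
        for subset in combinations(words, r):
            if rank_gf2(subset) == r:       # spans an r-dimensional subcode
                support = 0
                for w in subset:            # support of a subcode is the
                    support |= w            # union of its generators' supports
                best = min(best, bin(support).count("1"))
        hierarchy.append(best)
    return hierarchy
```

For example, the [3, 2] even-weight code has hierarchy (2, 3) and the [3, 1] repetition code has hierarchy (3).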

  16. Revisiting Fenton Hill Phase I reservoir creation and stimulation mechanisms through the GTO code comparison effort

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Pengcheng; Mcclure, Mark; Shiozawa, Sogo; White, Mark D.

    2016-06-27

    A series of experiments performed at the Fenton Hill hot dry rock site after stage 2 drilling of the Phase I reservoir provided intriguing field observations on the reservoir's responses to injection and venting under various conditions. Two teams participating in the US DOE Geothermal Technologies Office (GTO)'s Code Comparison Study (CCS) used different numerical codes to model these five experiments with the objective of inferring the hydraulic stimulation mechanism involved. The codes used by the two teams are based on different numerical principles, and the assumptions made were also different, due to intrinsic limitations in the codes and the modelers' personal interpretations of the field observations. Both sets of models were able to reproduce the most important field observations, and both found that it was the combination of the vertical gradient of the fracture opening pressure, the injection volume, and the use/absence of proppant that yielded the different outcomes of the five experiments.

  17. The AGORA High-Resolution Galaxy Simulations Comparison Project. II: Isolated Disk Test

    CERN Document Server

    Kim, Ji-hoon; Teyssier, Romain; Butler, Michael J; Ceverino, Daniel; Choi, Jun-Hwan; Feldmann, Robert; Keller, Ben W; Lupi, Alessandro; Quinn, Thomas; Revaz, Yves; Wallace, Spencer; Gnedin, Nickolay Y; Leitner, Samuel N; Shen, Sijing; Smith, Britton D; Thompson, Robert; Turk, Matthew J; Abel, Tom; Arraki, Kenza S; Benincasa, Samantha M; Chakrabarti, Sukanya; DeGraf, Colin; Dekel, Avishai; Goldbaum, Nathan J; Hopkins, Philip F; Hummels, Cameron B; Klypin, Anatoly; Li, Hui; Madau, Piero; Mandelker, Nir; Mayer, Lucio; Nagamine, Kentaro; Nickerson, Sarah; O'Shea, Brian W; Primack, Joel R; Roca-Fàbrega, Santi; Semenov, Vadim; Shimizu, Ikkoh; Simpson, Christine M; Todoroki, Keita; Wadsley, James W; Wise, John H

    2016-01-01

    Using an isolated Milky Way-mass galaxy simulation, we compare results from 9 state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt-Schmidt relati...

  18. Understanding transport simulations of heavy-ion collisions at 100 and 400 AMeV: Comparison of heavy ion transport codes under controlled conditions

    CERN Document Server

    Xu, Jun; Tsang, ManYee Betty; Wolter, Hermann; Zhang, Ying-Xun; Aichelin, Joerg; Colonna, Maria; Cozma, Dan; Danielewicz, Pawel; Feng, Zhao-Qing; Fevre, Arnaud Le; Gaitanos, Theodoros; Hartnack, Christoph; Kim, Kyungil; Kim, Youngman; Ko, Che-Ming; Li, Bao-An; Li, Qing-Feng; Li, Zhu-Xia; Napolitani, Paolo; Ono, Akira; Papa, Massimo; Song, Taesoo; Su, Jun; Tian, Jun-Long; Wang, Ning; Wang, Yong-Jia; Weil, Janus; Xie, Wen-Jie; Zhang, Feng-Shou; Zhang, Guo-Qiang

    2016-01-01

    Transport simulations are very valuable for extracting physics information from heavy-ion collision experiments. With the emergence of many different transport codes in recent years, it becomes important to estimate their robustness in extracting physics information from experiments. We report on the results of a transport code comparison project. 18 commonly used transport codes were included in this comparison: 9 Boltzmann-Uehling-Uhlenbeck-type codes and 9 Quantum-Molecular-Dynamics-type codes. These codes have been required to simulate Au+Au collisions using the same physics input for mean fields and for in-medium nucleon-nucleon cross sections, as well as the same initialization set-up, the impact parameter, and other calculational parameters at 100 and 400 AMeV incident energy. Among the codes we compare one-body observables such as rapidity and transverse flow distributions. We also monitor non-observables such as the initialization of the internal states of colliding nuclei and their stability, the co...

  19. A comparison of electric vehicle integration projects

    DEFF Research Database (Denmark)

    Andersen, Peter Bach; Garcia-Valle, Rodrigo; Kempton, Willett

    2012-01-01

    It is widely agreed that an intelligent integration of electric vehicles can yield benefits for the electric vehicle owner, the power grid, and society as a whole. Numerous electric vehicle utilization concepts have been investigated, ranging from the simple, e.g. delayed charging, to the more advanced, e.g. utilization of electric vehicles for ancillary services. To arrive at standardized solutions, it is helpful to analyze the market integration and utilization concepts, architectures and technologies used in a set of state-of-the-art electric vehicle demonstration projects. The goal of this paper is to highlight different approaches to electric vehicle integration in three such projects and describe the underlying technical components which should be harmonized to support interoperability and a broad set of utilization concepts. The projects investigated are the American University of Delaware's V2G...

  20. Validation and Comparison of 2D and 3D Codes for Nearshore Motion of Long Waves Using Benchmark Problems

    Science.gov (United States)

    Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey

    2016-04-01

    Tsunamis are huge waves with long wave periods and wave lengths that can cause great devastation and loss of life when they strike a coast. The interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW 3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving the three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite-difference computational method to solve the 2D depth-averaged linear and nonlinear forms of the shallow water equations (NSWE) in long-wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between the 3D-NS and the 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach. The experimental setup is a 1:400 scale model of Monai Valley, located on the west coast of Okushiri Island, Japan. The other benchmark problem was discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) Annual Meeting in Portland, USA. It is a field dataset recording the 2011 Japan tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and the 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT

  1. Comparison of codes assessing radiation exposure of aircraft crew due to galactic cosmic radiation

    Energy Technology Data Exchange (ETDEWEB)

    Bottollier-Depois, Jean-Francois [IRSN Institute for Radiological Protection and Nuclear Safety, Fontenay-aux-Roses (France); Beck, Peter; Latocha, Marcin [AIT Austrian Institute of Technology, Vienna (Austria). Health and Environment Dept.; Mares, Vladimir; Ruehm, Werner [HMGU Helmholtz Zentrum Muenchen, Neuherberg (Germany). Inst. of Radiation Protection; Matthiae, Daniel [DLR Deutsches Zentrum fuer Luft- und Raumfahrt, Koeln (Germany). Inst. of Aerospace Medicine; Wissmann, Frank [Physikalisch-Technische Bundesanstalt, Braunschweig (Germany)

    2012-05-15

    The aim of this report is to compare the doses and dose rates calculated by various codes assessing the radiation exposure of aircraft crew due to cosmic radiation. Some of the codes are used routinely for radiation protection purposes while others are purely for scientific use. The calculations were done using a set of representative, real flight routes around the globe. The results are presented in an anonymous way. This comparison is of major importance since a direct determination of effective dose is not possible and, therefore, the different methods used to evaluate effective doses can be compared. Eleven codes have been used in this comparison exercise organised by EURADOS on the harmonization of aircrew dosimetry practices in European countries. Some of these codes are based on simulations of the secondary field of cosmic radiation by Monte Carlo techniques; others use analytical solutions of the problem, while still others are mainly based on a fit to experimental data. Overall, however, the codes agree to within 20 % of the median.
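
The "within 20 % of the median" agreement criterion above can be expressed as the maximum relative deviation from the median across codes. A sketch with hypothetical route dose rates (the report itself presents its results anonymously):

```python
import statistics

def spread_about_median(values):
    """Maximum relative deviation from the median across a set of results."""
    med = statistics.median(values)
    return max(abs(v - med) / med for v in values)

# Hypothetical effective dose rates (uSv/h) for one route from five codes:
rates = [4.8, 5.0, 5.1, 5.3, 5.6]
# spread_about_median(rates) is below 0.2, i.e. agreement better than 20 %.
```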

  2. Experimental comparison of phase-shifting fringe projection and statistical pattern projection for active triangulation systems

    Science.gov (United States)

    Lutzke, Peter; Schaffer, Martin; Kühmstedt, Peter; Kowarschik, Richard; Notni, Gunther

    2013-04-01

    Active triangulation systems are widely used for precise and fast measurements. Many different coding strategies have been invented to solve the correspondence problem. The quality of the measurement results depends on the accuracy of the pixel assignments. The most established method uses phase-shifted patterns projected onto the scene. This is compared to a method using statistical patterns. In both coding strategies, the number and the spatial frequency of the projected patterns are varied. The measurements and calculations for all presented results were done with exactly the same measurement setup in a narrow time window, to avoid any changes and to guarantee identical technical preconditions as well as comparability.
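
The phase-shifting strategy referred to above recovers a wrapped phase per camera pixel from N patterns I_n = A + B·cos(φ + 2πn/N) via the standard N-step formula. A minimal sketch of that classic computation (not the authors' implementation):

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped phase from N equally phase-shifted fringe images.

    images: arrays I_n = A + B*cos(phi + 2*pi*n/N). The standard N-step
    least-squares formula; the result is wrapped to (-pi, pi].
    """
    N = len(images)
    deltas = 2 * np.pi * np.arange(N) / N
    num = -sum(I * np.sin(d) for I, d in zip(images, deltas))
    den = sum(I * np.cos(d) for I, d in zip(images, deltas))
    return np.arctan2(num, den)
```

The wrapped phase is then unwrapped and converted to a projector coordinate, from which the 3D point is triangulated.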

  3. Experimental research and comparison of LDPC and RS channel coding in ultraviolet communication systems.

    Science.gov (United States)

    Wu, Menglong; Han, Dahai; Zhang, Xiang; Zhang, Feng; Zhang, Min; Yue, Guangxin

    2014-03-10

    We have implemented a modified Low-Density Parity-Check (LDPC) codec algorithm in an ultraviolet (UV) communication system. Simulations are conducted with measured parameters to evaluate the LDPC-based UV system performance. Moreover, LDPC (960, 480) and RS (18, 10) codes are implemented and tested via a non-line-of-sight (NLOS) UV test bed. The experimental results are in agreement with the simulation and suggest that, for a given power and a 10⁻³ bit error rate (BER), in comparison with an uncoded system, the average communication distance increases by 32% with the RS code and by 78% with the LDPC code.
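
For context on the two codes compared above, their rates and the RS code's symbol-error-correcting capability follow directly from the (n, k) parameters quoted in the abstract:

```python
# Parameters as quoted in the abstract: LDPC(960, 480) and RS(18, 10).
ldpc_n, ldpc_k = 960, 480
rs_n, rs_k = 18, 10

ldpc_rate = ldpc_k / ldpc_n      # rate 0.5
rs_rate = rs_k / rs_n            # rate ~0.556
rs_t = (rs_n - rs_k) // 2        # RS(18, 10) corrects up to 4 symbol errors
```

So the LDPC code achieves its larger range gain at a slightly lower rate, i.e. with more redundancy per transmitted bit.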

  4. Yucca Mountain Project thermal and mechanical codes first benchmark exercise: Part 3, Jointed rock mass analysis; Yucca Mountain Site Characterization Project

    Energy Technology Data Exchange (ETDEWEB)

    Costin, L.S.; Bauer, S.J.

    1991-10-01

    Thermal and mechanical models for intact and jointed rock mass behavior are being developed, verified, and validated at Sandia National Laboratories for the Yucca Mountain Site Characterization Project. Benchmarking is an essential part of this effort and is one of the tools used to demonstrate verification of engineering software used to solve thermomechanical problems. This report presents the results of the third (and final) phase of the first thermomechanical benchmark exercise. In the first phase of this exercise, a nonlinear heat conduction code was used to solve the thermal portion of the benchmark problem. The results from the thermal analysis were then used as input to the second and third phases of the exercise, which consisted of solving the structural portion of the benchmark problem. In the second phase of the exercise, a linear elastic rock mass model was used. In the third phase of the exercise, two different nonlinear jointed rock mass models were used to solve the thermostructural problem. Both models, the Sandia compliant joint model and the RE/SPEC joint empirical model, explicitly incorporate the effect of the joints on the response of the continuum. Three different structural codes, JAC, SANCHO, and SPECTROM-31, were used with the above models in the third phase of the study. Each model was implemented in two different codes so that direct comparisons of results from each model could be made. The results submitted by the participants showed that the finite element solutions using each model were in reasonable agreement. Some consistent differences between the solutions using the two different models were noted but are not considered important to verification of the codes. 9 refs., 18 figs., 8 tabs.

  5. Computer code simulations of explosions in flow networks and comparison with experiments

    Science.gov (United States)

    Gregory, W. S.; Nichols, B. D.; Moore, J. A.; Smith, P. R.; Steinke, R. G.; Idzorek, R. D.

    1987-10-01

    A program of experimental testing and computer code development for predicting the effects of explosions in air-cleaning systems is being carried out for the Department of Energy. This work is a combined effort by the Los Alamos National Laboratory and New Mexico State University (NMSU). Los Alamos has the lead responsibility in the project and develops the computer codes; NMSU performs the experimental testing. The emphasis in the program is on obtaining experimental data to verify the analytical work. The primary benefit of this work will be the development of a verified computer code that safety analysts can use to analyze the effects of hypothetical explosions in nuclear plant air cleaning systems. The experimental data show the combined effects of explosions in air-cleaning systems that contain all of the important air-cleaning elements (blowers, dampers, filters, ductwork, and cells). A small experimental set-up consisting of multiple rooms, ductwork, a damper, a filter, and a blower was constructed. Explosions were simulated with a shock tube, hydrogen/air-filled gas balloons, and blasting caps. Analytical predictions were made using the EVENT84 and NF85 computer codes. The EVENT84 code predictions were in good agreement with the effects of the hydrogen/air explosions, but they did not model the blasting cap explosions adequately. NF85 predicted shock entrance to and within the experimental set-up very well. The NF85 code was not used to model the hydrogen/air or blasting cap explosions.

  6. Projection based image restoration, super-resolution and error correction codes

    Science.gov (United States)

    Bauer, Karl Gregory

    Super-resolution is the ability of a restoration algorithm to restore meaningful spatial frequency content beyond the diffraction limit of the imaging system. The Gerchberg-Papoulis (GP) algorithm is one of the most celebrated algorithms for super-resolution. The GP algorithm is conceptually simple and demonstrates the importance of using a priori information in the formation of the object estimate. In the first part of this dissertation the continuous GP algorithm is discussed in detail and shown to be a projection on convex sets algorithm. The discrete GP algorithm is shown to converge in the exactly-, over- and under-determined cases. A direct formula for the computation of the estimate at the kth iteration and at convergence is given. This analysis of the discrete GP algorithm sets the stage to connect super-resolution to error-correction codes. Reed-Solomon codes are used for error-correction in magnetic recording devices, compact disk players and by NASA for space communications. Reed-Solomon codes have a very simple description when analyzed with the Fourier transform. This signal processing approach to error-correction codes allows the error-correction problem to be compared with the super-resolution problem. The GP algorithm for super-resolution is shown to be equivalent to the correction of errors with a Reed-Solomon code over an erasure channel. The Restoration from Magnitude (RFM) problem seeks to recover a signal from the magnitude of the spectrum. This problem has applications to imaging through a turbulent atmosphere. The turbulent atmosphere causes localized changes in the index of refraction and introduces different phase delays in the data collected. Synthetic aperture radar (SAR) and hyperspectral imaging systems are capable of simultaneously recording multiple images of different polarizations or wavelengths. Each of these images will experience the same turbulent atmosphere and have a common phase distortion. A projection based restoration
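The GP loop described above is a projection-onto-convex-sets iteration; a minimal 1-D sketch (a toy setup of my own, not the dissertation's code) alternates between reimposing the measured low-frequency spectrum and reimposing the known spatial support:

```python
import numpy as np

def gerchberg_papoulis(known_spec, support, n, iters=400):
    """Alternate two convex projections: reimpose the measured
    (diffraction-limited) Fourier coefficients, then reimpose the
    known spatial support of the object."""
    x = np.zeros(n, dtype=complex)
    for _ in range(iters):
        X = np.fft.fft(x)
        for k, v in known_spec.items():   # data-consistency projection
            X[k] = v
        x = np.fft.ifft(X)
        x[~support] = 0                   # support projection
    return x.real

# Toy 1-D case: a 4-sample object, only the 7 lowest frequencies measured
n = 16
rng = np.random.default_rng(0)
x_true = np.zeros(n)
x_true[:4] = rng.standard_normal(4)
X_true = np.fft.fft(x_true)
known = {k: X_true[k] for k in (0, 1, 2, 3, 13, 14, 15)}
support = np.zeros(n, dtype=bool)
support[:4] = True
est = gerchberg_papoulis(known, support, n)
```

Because both constraint sets are convex and contain the true object, the distance of the estimate to the truth is non-increasing across iterations; the out-of-band spectrum is extrapolated purely from the support constraint.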

  7. Haloes gone MAD: The Halo-Finder Comparison Project

    CERN Document Server

    Knebe, Alexander; Muldrew, Stuart I; Pearce, Frazer R; Aragon-Calvo, Miguel Angel; Ascasibar, Yago; Behroozi, Peter S; Ceverino, Daniel; Colombi, Stephane; Diemand, Juerg; Dolag, Klaus; Falck, Bridget L; Fasel, Patricia; Gardner, Jeff; Gottloeber, Stefan; Hsu, Chung-Hsing; Iannuzzi, Francesca; Klypin, Anatoly; Lukic, Zarija; Maciejewski, Michal; McBride, Cameron; Neyrinck, Mark C; Planelles, Susana; Potter, Doug; Quilis, Vicent; Rasera, Yann; Read, Justin I; Ricker, Paul M; Roy, Fabrice; Springel, Volker; Stadel, Joachim; Stinson, Greg; Sutter, P M; Turchaninov, Victor; Tweed, Dylan; Yepes, Gustavo; Zemp, Marcel

    2011-01-01

    [abridged] We present a detailed comparison of fundamental dark matter halo properties retrieved by a substantial number of different halo finders. These codes span a wide range of techniques including friends-of-friends (FOF), spherical-overdensity (SO) and phase-space based algorithms. We further introduce a robust (and publicly available) suite of test scenarios that allows halo finder developers to compare the performance of their codes against those presented here. This set includes mock haloes containing various levels and distributions of substructure at a range of resolutions as well as a cosmological simulation of the large-scale structure of the universe. All the halo finding codes tested could successfully recover the spatial location of our mock haloes. They further returned lists of particles (potentially) belonging to the object that led to coinciding values for the maximum of the circular velocity profile and the radius where it is reached. All the finders based in configuration space struggled...
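Of the techniques compared, friends-of-friends is the simplest to state: particles closer than a linking length belong to the same group, and haloes are the connected components. A toy sketch of my own (naive O(N²) pair search with union-find; production finders use trees or grids):

```python
import itertools
import numpy as np

def friends_of_friends(pos, b):
    """Particles closer than linking length b join the same group;
    haloes are the resulting connected components."""
    n = len(pos)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i, j in itertools.combinations(range(n), 2):
        if np.linalg.norm(pos[i] - pos[j]) < b:
            parent[find(i)] = find(j)       # union the two groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two well-separated mock "haloes" in 2-D
pos = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0],
                [10.0, 0.0], [10.1, 0.0]])
haloes = friends_of_friends(pos, b=0.5)
```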

  8. Code Syntax-Comparison Algorithm Based on Type-Redefinition-Preprocessing and Rehash Classification

    Directory of Open Access Journals (Sweden)

    Baojiang Cui

    2011-08-01

    Code comparison technology plays an important role in software security protection and plagiarism detection. There are currently five main approaches to plagiarism detection: file-attribute-based, text-based, token-based, syntax-based and semantic-based. The first three approaches have their own limitations, while syntax-based techniques suffer from limited detection ability and low efficiency, so none of these approaches meets the requirements of large-scale software plagiarism detection. Building on our prior research, we propose an algorithm for type-redefinition plagiarism detection, which can detect simple type redefinition, repeating-pattern redefinition, and the redefinition of types with pointers. This paper also proposes a code syntax-comparison algorithm based on rehash classification, which enhances the node storage structure of the syntax tree and greatly improves efficiency.
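As a toy illustration of the normalization idea (my own sketch, not the authors' algorithm), mapping redefined type names back to their base types before fingerprinting makes a typedef-renaming evasion visible again:

```python
import hashlib

def normalize(tokens, type_aliases):
    """Undo type redefinition: map aliased type names back to base types."""
    return [type_aliases.get(t, t) for t in tokens]

def kgram_fingerprints(tokens, k=4):
    """Set of hashes of every k-token window of the stream."""
    return {hashlib.sha1(" ".join(tokens[i:i + k]).encode()).hexdigest()
            for i in range(len(tokens) - k + 1)}

def similarity(a, b):
    """Jaccard similarity of two fingerprint sets."""
    return len(a & b) / max(len(a | b), 1)

orig = ["int", "n", "=", "n", "+", "1", ";"]
copy = ["MyInt", "n", "=", "n", "+", "1", ";"]   # plagiarised via typedef MyInt

raw = similarity(kgram_fingerprints(orig), kgram_fingerprints(copy))
norm = similarity(kgram_fingerprints(orig),
                  kgram_fingerprints(normalize(copy, {"MyInt": "int"})))
```

Here `raw` stays below 1 because the `MyInt` token perturbs the window containing it, while `norm` recovers a perfect match once the alias is resolved.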

  9. Project Icarus: Nuclear Fusion Propulsion Concept Comparison

    Science.gov (United States)

    Stanic, M.

    Project Icarus will use nuclear fusion as the primary propulsion, since breakeven is expected within the next decade; fusion technology therefore offers confidence in further development and fairly high technological maturity by the time the Icarus mission would be plausible. More than two dozen different fusion approaches are currently being developed simultaneously around the world, and it is difficult to predict which of the concepts will be the most successful. This study estimated the current technological maturity and possible technological extrapolation of the fusion approaches for which appropriate data could be found. Figures of merit assessed include: current technological state, mass and volume estimates, possible gain values, main advantages and disadvantages of the concept, and an extrapolation of the current technological state over the next decade or two. The analysis suggests that Magnetic Confinement Fusion (MCF) concepts are not likely to deliver sufficient performance due to the size, mass, gain and large technological barriers of the concept. However, ICF and PJMIF did show potential for delivering the necessary performance, assuming appropriate technological advances. This paper is a submission of the Project Icarus Study Group.

  10. Comparison of protein coding gene contents of the fungal phyla Pezizomycotina and Saccharomycotina

    DEFF Research Database (Denmark)

    Arvas, Mikko; Kivioja, Teemu; Mitchell, Alex

    2007-01-01

    BACKGROUND: Several dozen fungi encompassing traditional model organisms, industrial production organisms and human and plant pathogens have been sequenced recently and their particular genomic features analysed in detail. In addition, comparative genomics has been used to analyse specific sub... ...wide comparison of protein coding gene content of Saccharomycotina and Pezizomycotina, which include industrially important yeasts and filamentous fungi, respectively. RESULTS: Our analysis shows that based on genome redundancy, the traditional model organisms Saccharomyces cerevisiae and Neurospora...

  11. The Frontier Fields Lens Modeling Comparison Project

    CERN Document Server

    Meneghetti, M; Coe, D; Contini, E; De Lucia, G; Giocoli, C; Acebron, A; Borgani, S; Bradac, M; Diego, J M; Hoag, A; Ishigaki, M; Johnson, T L; Jullo, E; Kawamata, R; Lam, D; Limousin, M; Liesenborgs, J; Oguri, M; Sebesta, K; Sharon, K; Williams, L L R; Zitrin, A

    2016-01-01

    Gravitational lensing by clusters of galaxies offers a powerful probe of their structure and mass distribution. Deriving a lens magnification map for a galaxy cluster is a classic inversion problem and many methods have been developed over the past two decades to solve it. Several research groups have developed techniques independently to map the predominantly dark matter distribution in cluster lenses. While these methods have all provided remarkably high precision mass maps, particularly with exquisite imaging data from the Hubble Space Telescope (HST), the reconstructions themselves have never been directly compared. In this paper, we report the results of comparing various independent lens modeling techniques employed by individual research groups in the community. Here we present for the first time a detailed and robust comparison of methodologies for fidelity, accuracy and precision. For this collaborative exercise, the lens modeling community was provided simulated cluster images -- of two clusters Are...

  12. Wind-induced transmission tower foundation loads. A field study-design code comparison

    Energy Technology Data Exchange (ETDEWEB)

    Savory, E. [Department of Mechanical and Materials Engineering, University of Western Ontario, London, Ont. (Canada); Parke, G.A.R.; Disney, P.; Toy, N. [School of Engineering, University of Surrey, Guildford, Surrey GU2 7XH (United Kingdom)

    2008-06-15

    This paper presents a comparison between the wind-induced foundation loads measured on a type L6 transmission line tower during a field study in the UK and those computed using the UK Code of Practice for lattice tower and transmission line design (BS8100). In this work, the Code provisions have been generalised to give the wind-induced strain in each of the tower legs immediately above the foundation as a function of wind direction and wind speed at the top of the tower. The complete data set from the field monitoring has been decomposed to provide a similar formulation for comparison purposes. The analysis shows excellent agreement between the Code calculations and the measured results, within the overall accuracy of the field data. This indicates that, at least for the tower type examined here, the existing design Code provides a reliable transformation of the local wind speed at the top of the tower into tension and compression loads on the foundations. (author)

  13. Comparisons of hadrontherapy-relevant data to nuclear interaction codes in the Geant4 toolkit

    Science.gov (United States)

    Braunn, B.; Boudard, A.; Colin, J.; Cugnon, J.; Cussol, D.; David, J. C.; Kaitaniemi, P.; Labalme, M.; Leray, S.; Mancusi, D.

    2013-03-01

    Comparisons between experimental data, INCL, and other nuclear models available in the Geant4 toolkit are presented. The data come from a fragmentation experiment performed at the GANIL facility, whose main purpose was to measure production rates and angular distributions of particles emitted in collisions of a 95 A MeV 12C beam with thick PMMA (plastic) targets. The latest version of the Intra Nuclear Cascade of Liege (INCL) code, extended to nucleus-nucleus collisions for ion beam therapy applications, is described. This code, as well as JQMD and the Geant4 binary cascade, has been compared with these hadrontherapy-oriented experimental data. The comparisons exhibit an overall qualitative agreement between the models and the experimental data. At a quantitative level, however, none of these three models manages to reproduce all the data precisely. The nucleus-nucleus extension of INCL, while not yet predictive enough for ion beam therapy applications, has nevertheless proven competitive with other nuclear collision codes.

  14. The AGORA High-Resolution Galaxy Simulations Comparison Project

    CERN Document Server

    Kim, Ji-hoon; Agertz, Oscar; Bryan, Greg L; Ceverino, Daniel; Christensen, Charlotte; Conroy, Charlie; Dekel, Avishai; Gnedin, Nickolay Y; Goldbaum, Nathan J; Guedes, Javiera; Hahn, Oliver; Hobbs, Alexander; Hopkins, Philip F; Hummels, Cameron B; Iannuzzi, Francesca; Keres, Dusan; Klypin, Anatoly; Kravtsov, Andrey V; Krumholz, Mark R; Kuhlen, Michael; Leitner, Samuel N; Madau, Piero; Mayer, Lucio; Moody, Christopher E; Nagamine, Kentaro; Norman, Michael L; Oñorbe, Jose; O'Shea, Brian W; Pillepich, Annalisa; Primack, Joel R; Quinn, Thomas; Read, Justin I; Robertson, Brant E; Rocha, Miguel; Rudd, Douglas H; Shen, Sijing; Smith, Britton D; Szalay, Alexander S; Teyssier, Romain; Thompson, Robert; Todoroki, Keita; Turk, Matthew J; Wadsley, James W; Wise, John H; Zolotov, Adi

    2014-01-01

    We introduce the AGORA project, a comprehensive numerical study of well-resolved galaxies within the LCDM cosmology. Cosmological hydrodynamic simulations with force resolutions of ~100 proper pc or better will be run with a variety of code platforms to follow the hierarchical growth, star formation history, morphological transformation, and the cycle of baryons in and out of 8 galaxies with halo masses M_vir ~= 1e10, 1e11, 1e12, and 1e13 Msun at z=0 and two different ("violent" and "quiescent") assembly histories. The numerical techniques and implementations used in this project include the smoothed particle hydrodynamics codes GADGET and GASOLINE, and the adaptive mesh refinement codes ART, ENZO, and RAMSES. The codes will share common initial conditions and common astrophysics packages including UV background, metal-dependent radiative cooling, metal and energy yields of supernovae, and stellar initial mass function. These are described in detail in the present paper. Subgrid star formation and feedback pr...

  15. The AGORA High-Resolution Galaxy Simulations Comparison Project

    OpenAIRE

    Abel, Tom; Agertz, Oscar; Bryan, Greg L.; Ceverino, Daniel; Christensen, Charlotte R.; Conroy, Charlie; Dekel, Avishai; Gnedin, Nickolay Yu; Goldbaum, Nathan J.; Hahn, Oliver; Hobbs, Alexander; Hopkins, Philip F.; Hummels, Cameron B.; Iannuzzi, Francesca; Klypin, Anatoly A.

    2013-01-01

    The Astrophysical Journal Supplement Series 210.1 (2014): 14 reproduced by permission of the AAS We introduce the Assembling Galaxies Of Resolved Anatomy (AGORA) project, a comprehensive numerical study of well-resolved galaxies within the ΛCDM cosmology. Cosmological hydrodynamic simulations with force resolutions of ∼100 proper pc or better will be run with a variety of code platforms to follow the hierarchical growth, star formation history, morphological transformation, and the cycle o...

  16. Performance Comparison of AVS and H.264/AVC Video Coding Standards

    Institute of Scientific and Technical Information of China (English)

    Xin-Fu Wang; De-Bin Zhao

    2006-01-01

    A new audio and video compression standard of China, known as advanced Audio Video coding Standard (AVS), is emerging. This standard provides a technical solution for many applications within the information industry such as digital broadcast, high-density laser-digital storage media, and so on. The basic part of AVS, AVS1-P2, targets standard definition (SD) and high definition (HD) format video compression, and aims to achieve similar coding efficiency as H.264/AVC but with lower computational complexity. In this paper, we first briefly describe the major coding tools in AVS1-P2, and then perform the coding efficiency comparison between AVS1-P2 Jizhun profile and H.264/AVC main profile. The experimental results show that the AVS1-P2 Jizhun profile has an average of 2.96% efficiency loss relative to H.264/AVC main profile in terms of bit-rate saving on HD progressive-scan sequences, and an average of 28.52% coding loss on interlace-scan sequences. Nevertheless, AVS1-P2 possesses a valuable feature of lower computational complexity.

  17. VVER-440 Ex-Core Neutron Transport Calculations by MCNP-5 Code and Comparison with Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Borodkin, Pavel; Khrennikov, Nikolay [Scientific and Engineering Centre for Nuclear and Radiation Safety (SEC NRS) Malaya Krasnoselskaya ul., 2/8, bld. 5, 107140 Moscow (Russian Federation)

    2008-07-01

    Ex-core neutron transport calculations are needed to evaluate radiation loading parameters (neutron fluence, fluence rate and spectra) on the in-vessel equipment, reactor pressure vessel (RPV) and support structures of VVER-type reactors. Because these parameters are used for reactor equipment lifetime assessment, the neutron transport calculations should be carried out with precise and reliable methods. For RPVs, especially those of first-generation VVER-440s, the neutron fluence plays a key role in the prediction of RPV lifetime. Most VVER ex-core neutron transport calculations are performed with deterministic and Monte Carlo methods. This paper deals with precise calculations for a Russian first-generation VVER-440 using the MCNP-5 code. The purpose of this work was to apply this code to expert calculations, to verify the results by comparison with deterministic calculations, and to validate them against measured neutron activation data. The deterministic discrete-ordinates code DORT, widely used for RPV neutron dosimetry and tested many times against experiments, was used for the comparison analyses. Ex-vessel neutron activation measurements at a VVER-440 NPP provided spatial (azimuthal and axial) and neutron-energy (different activation reactions) distribution data for experimental (E) validation of the calculated results. The calculational intercomparison (DORT vs. MCNP-5) and the comparison with measured values (MCNP-5 and DORT vs. E) showed agreement within 10-15% for different spatial points and reaction rates. The paper discusses the results and draws conclusions about the practical use of the MCNP-5 code for ex-core neutron transport calculations in expert analysis. (authors)

  18. Remote-Handled Low-Level Waste (RHLLW) Disposal Project Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    S.L. Austad, P.E.; L.E. Guillen, P.E.; C. W. McKnight, P.E.; D. S. Ferguson, P.E.

    2010-10-01

    The Remote-Handled Low-Level Waste Disposal Project addresses an anticipated shortfall in remote-handled LLW disposal capability following cessation of operations at the existing facility, which will continue until it is full or until it must be closed in preparation for final remediation of the Subsurface Disposal Area (approximately at the end of fiscal year 2015). Development of a new onsite disposal facility, the highest-ranked alternative, will provide the necessary remote-handled LLW disposal capability and will ensure continuity of operations that generate remote-handled LLW. This report documents the Code of Record for design of a new LLW disposal capability.

  19. Comparison of current state residential energy codes with the 1992 model energy code for one- and two-family dwellings; 1994

    Energy Technology Data Exchange (ETDEWEB)

    Klevgard, L.A.; Taylor, Z.T.; Lucas, R.G.

    1995-01-01

    This report is one in a series of documents describing research activities in support of the US Department of Energy (DOE) Building Energy Codes Program, which the Pacific Northwest Laboratory (PNL) leads for DOE. The goal of the program is to develop and support the adoption, implementation, and enforcement of Federal, State, and local energy codes for new buildings. The program's approach is to initiate and manage individual research, standards, and guidelines development efforts that are planned and conducted in cooperation with representatives from throughout the buildings community. Projects under way involve practicing architects and engineers, professional societies and code organizations, industry representatives, and researchers from the private sector and national laboratories. Research results and technical justifications for standards criteria are provided to standards development and model code organizations and to Federal, State, and local jurisdictions as a basis for updating their codes and standards. This effort helps ensure that building standards incorporate the latest research results to achieve maximum energy savings in new buildings, yet remain responsive to the needs of the affected professions, organizations, and jurisdictions. The implementation, deployment, and use of energy-efficient codes and standards are also supported. This report documents findings from an analysis conducted by PNL of the States' building codes to determine whether they meet or exceed the energy efficiency requirements of the 1992 MEC (CABO 1992a).

  20. Comparisons of the simulation results using different codes for ADS spallation target

    CERN Document Server

    Yu Hong Wei; Shen Qing Biao; Wan Jun Sheng; Zhao Zhi Xiang

    2002-01-01

    Calculations for a standard thick target were made using different codes. A thick Pb target, 60 cm long and 20 cm in diameter, bombarded with 800, 1000, 1500 and 2000 MeV proton beams, was simulated, and the yields and spectra of the emitted neutrons were studied. The spallation target was simulated with the SNSP, SHIELD, DCM/CEM (Dubna Cascade Model/Cascade Evaporation Model) and LAHET codes, and the simulation results were compared with experiments. The comparisons show good agreement between the experiments and the SNSP-simulated leakage neutron yield. The SHIELD-simulated leakage neutron spectra are in good agreement with the LAHET- and DCM/CEM-simulated leakage neutron spectra.

  1. Comparison of a Coupled Near and Far Wake Model With a Free Wake Vortex Code

    DEFF Research Database (Denmark)

    Pirrung, Georg; Riziotis, Vasilis; Aagaard Madsen, Helge

    2016-01-01

    This paper presents the integration of a near wake model for trailing vorticity, which is based on a prescribed wake lifting line model proposed by Beddoes, with a BEM-based far wake model and a 2D shed vorticity model. The resulting coupled aerodynamics model is validated against lifting surface... computations performed using a free wake panel code. The focus of the description of the aerodynamics model is on the numerical stability, the computation speed and the accuracy of unsteady simulations. To stabilize the near wake model, it has to be iterated to convergence, using a relaxation factor that has... induction modeling at slow time scales. Finally, the unsteady airfoil aerodynamics model is extended to provide the unsteady bound circulation for the near wake model and to improve the modeling of the unsteady behavior of cambered airfoils. The model comparison with results from a free wake panel code...

  2. Comparison of MACCS users calculations for the international comparison exercise on probabilistic accident consequence assessment code, October 1989--June 1993

    Energy Technology Data Exchange (ETDEWEB)

    Neymotin, L. [Brookhaven National Lab., Upton, NY (United States)

    1994-04-01

    Over the past several years, the OECD/NEA and CEC sponsored an international program intercomparing a group of six probabilistic consequence assessment (PCA) codes designed to simulate the health and economic consequences of releases of radioactive materials into the atmosphere following severe accidents at nuclear power plants (NPPs): ARANO (Finland), CONDOR (UK), COSYMA (CEC), LENA (Sweden), MACCS (USA), and OSCAAR (Japan). In parallel with this effort, two separate groups performed similar calculations using the MACCS and COSYMA codes. The results produced by the MACCS Users Group (Greece, Italy, Spain, and USA) calculations and their comparison are contained in the present report. Version 1.5.11.1 of the MACCS code was used for the calculations. Good agreement between the results of the four participating calculations was reached, with the exception of the ingestion pathway dose predictions. The scatter in those particular results is attributed mainly to the lack of a straightforward implementation of the specifications for agricultural production and countermeasure criteria provided for the exercise. The significantly smaller scatter in predictions of other consequences was successfully explained by differences in meteorological files and weather sampling, grids, rain distance intervals, dispersion model options, and population distributions.

  3. A comparison between the Monte Carlo radiation transport codes MCNP and MCBEND

    Energy Technology Data Exchange (ETDEWEB)

    Sawamura, Hidenori; Nishimura, Kazuya [Computer Software Development Co., Ltd., Tokyo (Japan)

    2001-01-01

    In Japan, almost all radiation analysts use the MCNP and MVP codes in their studies, but these codes lack automatic variance reduction. The MCBEND code, developed by UKAEA, provides automatic variance reduction and is more user friendly than other Monte Carlo radiation transport codes. Our company was the first to introduce the MCBEND code in Japan. We therefore compared the MCBEND and MCNP codes with respect to their functions and production capacity. (author)

  4. Comparison of the calculations of the stability properties of a specific stellarator equilibrium with different MHD stability codes

    Energy Technology Data Exchange (ETDEWEB)

    Nakamura, Y.; Matsumoto, T.; Wakatani, M. [Kyoto Univ. (Japan). Plasma Physics Lab.; Galkin, S.A.; Drozdov, V.V.; Martynov, A.A.; Poshekhonov, Yu.Yu. [Keldysh Institute of Applied Mathematics, Moscow (Russian Federation); Ichiguchi, K. [National Institute for Fusion Science, Nagoya (Japan); Garcia, L. [Universidad Carlos III de Madrid (Spain); Carreras, B.A. [Oak Ridge National Lab., TN (United States)] [and others

    1995-04-01

    A particular configuration of the LHD stellarator with an unusually flat pressure profile was chosen as a test case for comparing the MHD stability property predictions of different three-dimensional and averaged codes, for the purpose of code comparison and validation. In particular, two relatively localized instabilities, the fastest growing modes with toroidal mode numbers n = 2 and n = 3, were studied using several different codes; the good agreement found provides justification for using any of them for equilibria of the type considered.

  5. Multichannel Filtered-X Error Coded Affine Projection-Like Algorithm with Evolving Order

    Directory of Open Access Journals (Sweden)

    J. G. Avalos

    2017-01-01

    Affine projection (AP) algorithms are commonly used to implement active noise control (ANC) systems because they provide fast convergence. However, their high computational complexity can restrict their use in certain practical applications. The Error Coded Affine Projection-Like (ECAP-L) algorithm has been proposed to reduce the computational burden while maintaining the speed of AP, but no version of this algorithm has been derived for active noise control, for which the adaptive structures are very different from those of other configurations. In this paper, we introduce a version of the ECAP-L for single-channel and multichannel ANC systems. The proposed algorithm is implemented using the conventional filtered-x scheme, which incurs a lower computational cost than the modified filtered-x structure, especially for multichannel systems. Furthermore, we present an evolutionary method that dynamically decreases the projection order in order to reduce the dimensions of the matrix used in the algorithm’s computations. Experimental results demonstrate that the proposed algorithm yields a convergence speed and a final residual error similar to those of AP algorithms. Moreover, it achieves meaningful computational savings, leading to simpler hardware implementation of real-time ANC applications.
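For context, a plain (unmodified) affine projection update, which ECAP-L approximates at lower cost, solves a small P-by-P system at every step. The sketch below is a generic AP filter update applied to toy system identification; it is my own illustration, not the ECAP-L algorithm or its filtered-x ANC variant:

```python
import numpy as np

def ap_update(w, X, d, mu=0.5, delta=1e-6):
    """One affine projection (AP) step.
    X : (L, P) matrix whose P columns are recent input vectors
    d : (P,) desired responses for those inputs
    Each step solves a small, regularized P x P linear system."""
    e = d - X.T @ w                       # a-priori errors on the P constraints
    g = np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
    return w + mu * X @ g, e

# Toy identification of an unknown 4-tap filter from noiseless data
rng = np.random.default_rng(1)
L, P = 4, 2
w_true = rng.standard_normal(L)
w = np.zeros(L)
for _ in range(300):
    X = rng.standard_normal((L, P))
    d = X.T @ w_true                      # noiseless desired signal
    w, _ = ap_update(w, X, d)
```

A larger projection order P speeds convergence for correlated inputs at the price of the P x P solve, which is exactly the cost that the error coding and evolving projection order in the paper aim to reduce.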

  6. Comparison of ATF and TJ-II stellarator equilibria as computed by the 3-D VMEC and PIES codes

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, J.L.; Monticello, D.A.; Reiman, A.H. (Princeton Univ., NJ (United States). Plasma Physics Lab.); Salas, A.; Fraguas, A.L. (Association Euratom-Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas, Madrid (Spain)); Hirshman, S.P. (Oak Ridge National Lab., TN (United States))

    1992-01-01

    A comparison is made of results from the PIES code, which determines the equilibrium properties of three-dimensional toroidal configurations by direct integration along the magnetic field lines, with those from the VMEC code, which uses an energy minimization in a flux representation to determine the equilibrium configuration, for two devices: the ATF stellarator at Oak Ridge and the TJ-II heliac being built in Madrid. The results obtained from the two codes are in good agreement, providing additional validation for the codes.

  7. Remote-Handled Low-Level Waste Disposal Project Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    S.L. Austad, P.E.; L.E. Guillen, P.E.; C. W. McKnight, P.E.; D. S. Ferguson, P.E.

    2012-06-01

    The Remote-Handled Low-Level Waste (LLW) Disposal Project addresses an anticipated shortfall in remote-handled LLW disposal capability following cessation of operations at the existing facility, which will continue until it is full or until it must be closed in preparation for final remediation of the Subsurface Disposal Area (approximately at the end of Fiscal Year 2017). Development of a new onsite disposal facility will provide necessary remote-handled LLW disposal capability and will ensure continuity of operations that generate remote-handled LLW. This report documents the Code of Record for design of a new LLW disposal capability. The report is owned by the Design Authority, who can authorize revisions and exceptions. This report will be retained for the lifetime of the facility.

  8. Remote-Handled Low-Level Waste Disposal Project Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    S.L. Austad, P.E.; L.E. Guillen, P.E.; C. W. McKnight, P.E.; D. S. Ferguson, P.E.

    2012-04-01

    The Remote-Handled Low-Level Waste (LLW) Disposal Project addresses an anticipated shortfall in remote-handled LLW disposal capability following cessation of operations at the existing facility, which will continue until it is full or until it must be closed in preparation for final remediation of the Subsurface Disposal Area (approximately at the end of Fiscal Year 2017). Development of a new onsite disposal facility will provide necessary remote-handled LLW disposal capability and will ensure continuity of operations that generate remote-handled LLW. This report documents the Code of Record for design of a new LLW disposal capability. The report is owned by the Design Authority, who can authorize revisions and exceptions. This report will be retained for the lifetime of the facility.

  9. Remote-Handled Low-Level Waste Disposal Project Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    Austad, S. L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Guillen, L. E. [Idaho National Lab. (INL), Idaho Falls, ID (United States); McKnight, C. W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ferguson, D. S. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-04-01

    The Remote-Handled Low-Level Waste (LLW) Disposal Project addresses an anticipated shortfall in remote-handled LLW disposal capability following cessation of operations at the existing facility, which will continue until it is full or until it must be closed in preparation for final remediation of the Subsurface Disposal Area (approximately at the end of Fiscal Year 2017). Development of a new onsite disposal facility will provide necessary remote-handled LLW disposal capability and will ensure continuity of operations that generate remote-handled LLW. This report documents the Code of Record for design of a new LLW disposal capability. The report is owned by the Design Authority, who can authorize revisions and exceptions. This report will be retained for the lifetime of the facility.

  10. Remote-Handled Low-Level Waste Disposal Project Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    S.L. Austad, P.E.; L.E. Guillen, P.E.; C. W. McKnight, P.E.; D. S. Ferguson, P.E.

    2014-06-01

    The Remote-Handled Low-Level Waste (LLW) Disposal Project addresses an anticipated shortfall in remote-handled LLW disposal capability following cessation of operations at the existing facility, which will continue until it is full or until it must be closed in preparation for final remediation of the Subsurface Disposal Area (approximately at the end of Fiscal Year 2017). Development of a new onsite disposal facility will provide necessary remote-handled LLW disposal capability and will ensure continuity of operations that generate remote-handled LLW. This report documents the Code of Record for design of a new LLW disposal capability. The report is owned by the Design Authority, who can authorize revisions and exceptions. This report will be retained for the lifetime of the facility.

  11. Remote-Handled Low-Level Waste Disposal Project Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    S.L. Austad, P.E.; L.E. Guillen, P.E.; C. W. McKnight, P.E.; D. S. Ferguson, P.E.

    2011-04-01

    The Remote-Handled Low-Level Waste (LLW) Disposal Project addresses an anticipated shortfall in remote-handled LLW disposal capability following cessation of operations at the existing facility, which will continue until it is full or until it must be closed in preparation for final remediation of the Subsurface Disposal Area (approximately at the end of Fiscal Year 2017). Development of a new onsite disposal facility, the highest ranked alternative, will provide necessary remote-handled LLW disposal capability and will ensure continuity of operations that generate remote-handled LLW. This report documents the Code of Record for design of a new LLW disposal capability. The report is owned by the Design Authority, who can authorize revisions and exceptions. This report will be retained for the lifetime of the facility.

  12. Remote-Handled Low-Level Waste Disposal Project Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    S.L. Austad, P.E.; L.E. Guillen, P.E.; C. W. McKnight, P.E.; D. S. Ferguson, P.E.

    2011-01-01

    The Remote-Handled Low-Level Waste (LLW) Disposal Project addresses an anticipated shortfall in remote-handled LLW disposal capability following cessation of operations at the existing facility, which will continue until it is full or until it must be closed in preparation for final remediation of the Subsurface Disposal Area (approximately at the end of Fiscal Year 2017). Development of a new onsite disposal facility, the highest ranked alternative, will provide necessary remote-handled LLW disposal capability and will ensure continuity of operations that generate remote-handled LLW. This report documents the Code of Record for design of a new LLW disposal capability. The report is owned by the Design Authority, who can authorize revisions and exceptions. This report will be retained for the lifetime of the facility.

  13. Haloes gone MAD: The Halo-Finder Comparison Project

    Science.gov (United States)

    Knebe, Alexander; Knollmann, Steffen R.; Muldrew, Stuart I.; Pearce, Frazer R.; Aragon-Calvo, Miguel Angel; Ascasibar, Yago; Behroozi, Peter S.; Ceverino, Daniel; Colombi, Stephane; Diemand, Juerg; Dolag, Klaus; Falck, Bridget L.; Fasel, Patricia; Gardner, Jeff; Gottlöber, Stefan; Hsu, Chung-Hsing; Iannuzzi, Francesca; Klypin, Anatoly; Lukić, Zarija; Maciejewski, Michal; McBride, Cameron; Neyrinck, Mark C.; Planelles, Susana; Potter, Doug; Quilis, Vicent; Rasera, Yann; Read, Justin I.; Ricker, Paul M.; Roy, Fabrice; Springel, Volker; Stadel, Joachim; Stinson, Greg; Sutter, P. M.; Turchaninov, Victor; Tweed, Dylan; Yepes, Gustavo; Zemp, Marcel

    2011-08-01

    We present a detailed comparison of fundamental dark matter halo properties retrieved by a substantial number of different halo finders. These codes span a wide range of techniques including friends-of-friends, spherical-overdensity and phase-space-based algorithms. We further introduce a robust (and publicly available) suite of test scenarios that allow halo finder developers to compare the performance of their codes against those presented here. This set includes mock haloes containing various levels and distributions of substructure at a range of resolutions as well as a cosmological simulation of the large-scale structure of the universe. All the halo-finding codes tested could successfully recover the spatial location of our mock haloes. They further returned lists of particles (potentially) belonging to the object that led to coinciding values for the maximum of the circular velocity profile and the radius where it is reached. All the finders based in configuration space struggled to recover substructure that was located close to the centre of the host halo, and the radial dependence of the mass recovered varies from finder to finder. Those finders based in phase space could resolve central substructure although they had difficulty accurately recovering its properties. Through a resolution study we found that most of the finders could not reliably recover substructure containing fewer than 30-40 particles. Here, too, the phase-space finders excelled by resolving substructure down to 10-20 particles. By comparing the halo finders using a high-resolution cosmological volume, we found that they agree remarkably well on fundamental properties of astrophysical significance (e.g. mass, position, velocity and peak of the rotation curve). We further suggest utilizing the peak of the rotation curve, vmax, as a proxy for mass, given the arbitrariness in defining a proper halo edge.
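
The vmax proxy suggested above is straightforward to illustrate. This sketch assumes an NFW density profile with made-up parameters (rho_s, r_s) purely for demonstration; it is not tied to any halo in the comparison:

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def nfw_enclosed_mass(r, rho_s, r_s):
    """Mass enclosed within radius r (kpc) for an NFW profile."""
    x = r / r_s
    return 4.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))

r = np.linspace(0.1, 300.0, 3000)                     # radii in kpc
M_enc = nfw_enclosed_mass(r, rho_s=6.0e6, r_s=20.0)   # illustrative parameters
v_circ = np.sqrt(G * M_enc / r)                       # circular velocity, km/s

i_peak = np.argmax(v_circ)
print(f"vmax = {v_circ[i_peak]:.1f} km/s at r = {r[i_peak]:.1f} kpc")
```

For an NFW halo the peak of the rotation curve falls near r = 2.16 r_s regardless of the normalisation, so vmax is well defined without ever choosing a halo edge, which is exactly the appeal of this proxy.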

  14. Using GTO-Velo to Facilitate Communication and Sharing of Simulation Results in Support of the Geothermal Technologies Office Code Comparison Study

    Energy Technology Data Exchange (ETDEWEB)

    White, Signe K.; Purohit, Sumit; Boyd, Lauren W.

    2015-01-26

    The Geothermal Technologies Office Code Comparison Study (GTO-CCS) aims to support the DOE Geothermal Technologies Office in organizing and executing a model comparison activity. This project is directed at testing, diagnosing differences, and demonstrating modeling capabilities of a worldwide collection of numerical simulators for evaluating geothermal technologies. Teams of researchers are collaborating in this code comparison effort, and it is important to be able to share results in a forum where technical discussions can easily take place without requiring teams to travel to a common location. Pacific Northwest National Laboratory has developed an open-source, flexible framework called Velo that provides a knowledge management infrastructure and tools to support modeling and simulation for a variety of types of projects in a number of scientific domains. GTO-Velo is a customized version of the Velo Framework that is being used as the collaborative tool in support of the GTO-CCS project. Velo is designed around a novel integration of a collaborative Web-based environment and a scalable enterprise Content Management System (CMS). The underlying framework provides a flexible and unstructured data storage system that allows for easy upload of files in any format. Data files are organized in hierarchical folders, and each folder and each file has a corresponding wiki page for metadata. The user interacts with Velo through web-browser-based wiki technology, providing the benefit of familiarity and ease of use. High-level folders have been defined in GTO-Velo for the benchmark problem descriptions, descriptions of simulator/code capabilities, a project notebook, and folders for participating teams. Each team has a subfolder with write access limited only to the team members, where they can upload their simulation results. The GTO-CCS participants are charged with defining the benchmark problems for the study, and as each GTO-CCS Benchmark problem is

  15. Offshore code comparison collaboration continuation (OC4), phase I - Results of coupled simulations of an offshore wind turbine with jacket support structure

    DEFF Research Database (Denmark)

    Popko, Wojciech; Vorpahl, Fabian; Zuga, Adam

    2012-01-01

    In this paper, the exemplary results of the IEA Wind Task 30 "Offshore Code Comparison Collaboration Continuation" (OC4) Project - Phase I, focused on the coupled simulation of an offshore wind turbine (OWT) with a jacket support structure, are presented. The focus of this task has been the verif...... such as the buoyancy calculation and methods of accounting for additional masses (such as hydrodynamic added mass). Finally, recommendations concerning the modeling of the jacket are given. Copyright © 2012 by the International Society of Offshore and Polar Engineers (ISOPE)....

  16. The AGORA High-resolution Galaxy Simulations Comparison Project. II. Isolated Disk Test

    Science.gov (United States)

    Kim, Ji-hoon; Agertz, Oscar; Teyssier, Romain; Butler, Michael J.; Ceverino, Daniel; Choi, Jun-Hwan; Feldmann, Robert; Keller, Ben W.; Lupi, Alessandro; Quinn, Thomas; Revaz, Yves; Wallace, Spencer; Gnedin, Nickolay Y.; Leitner, Samuel N.; Shen, Sijing; Smith, Britton D.; Thompson, Robert; Turk, Matthew J.; Abel, Tom; Arraki, Kenza S.; Benincasa, Samantha M.; Chakrabarti, Sukanya; DeGraf, Colin; Dekel, Avishai; Goldbaum, Nathan J.; Hopkins, Philip F.; Hummels, Cameron B.; Klypin, Anatoly; Li, Hui; Madau, Piero; Mandelker, Nir; Mayer, Lucio; Nagamine, Kentaro; Nickerson, Sarah; O'Shea, Brian W.; Primack, Joel R.; Roca-Fàbrega, Santi; Semenov, Vadim; Shimizu, Ikkoh; Simpson, Christine M.; Todoroki, Keita; Wadsley, James W.; Wise, John H.; AGORA Collaboration

    2016-12-01

    Using an isolated Milky Way-mass galaxy simulation, we compare results from nine state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt-Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly formed stellar clump mass functions show more significant variation (difference by up to a factor of ˜3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low-density region, and between more diffusive and less diffusive schemes in the high-density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Our experiment reassures that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.

  17. Modern Code Reviews in Open-Source Projects: Which Problems Do They Fix?

    NARCIS (Netherlands)

    Beller, M.; Bacchelli, A.; Zaidman, A.E.; Juergens, E.

    2014-01-01

    Code review is the manual assessment of source code by humans, mainly intended to identify defects and quality problems. Modern Code Review (MCR), a lightweight variant of the code inspections investigated since the 1970s, prevails today both in industry and open-source software (OSS) systems. The

  18. GMMIP (v1.0) contribution to CMIP6: Global Monsoons Model Inter-comparison Project

    OpenAIRE

    Zhou, Tianjun; Turner, Andrew G.; Kinter, James L.; Qian, Yun; Chen, Xiaolong; Bo WU; Wang, Bin; Liu, Bo; Zou, Liwei; He, Bian

    2016-01-01

    The Global Monsoons Model Inter-comparison Project (GMMIP) has been endorsed by the panel of Coupled Model Inter-comparison Project (CMIP) as one of the participating model inter-comparison projects (MIPs) in the sixth phase of CMIP (CMIP6). The focus of GMMIP is on monsoon climatology, variability, prediction and projection, which is relevant to four of the “Grand Challenges” proposed by the World Climate Research Programme. At present, 21 international modeling groups are ...

  19. Systematic comparison of photoionised plasma codes with application to spectroscopic studies of AGN in X-rays

    CERN Document Server

    Mehdipour, M; Kallman, T

    2016-01-01

    Atomic data and plasma models play a crucial role in diagnosis and interpretation of astrophysical spectra, thus influencing our understanding of the universe. In this investigation we present a systematic comparison of the leading photoionisation codes to determine how much their intrinsic differences impact X-ray spectroscopic studies of hot plasmas in photoionisation equilibrium. We carry out our computations using the Cloudy, SPEX and XSTAR photoionisation codes, and compare their derived thermal and ionisation states for various ionising spectral energy distributions. We examine the resulting absorption-line spectra from these codes for the case of ionised outflows in active galactic nuclei. By comparing the ionic abundances as a function of ionisation parameter $\\xi$, we find that on average there is about 30% deviation between the codes in $\\xi$ where ionic abundances peak. For H-like to B-like sequence ions alone, this deviation in $\\xi$ is smaller at about 10% on average. The comparison of the absorp...
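
For reference, the ionisation parameter compared above is conventionally defined as xi = L_ion / (n_H * r^2), in erg cm s^-1, for AGN photoionised outflows. The numbers below are illustrative assumptions, not values from the study:

```python
import math

# xi = L_ion / (n_H * r**2), in erg cm s^-1.
L_ion = 1e44      # ionising luminosity (erg/s), illustrative
n_H   = 1e4       # hydrogen number density (cm^-3), illustrative
r     = 3.086e18  # distance from the ionising source (cm, about 1 pc), illustrative

xi = L_ion / (n_H * r**2)
log_xi = math.log10(xi)
print(f"log xi = {log_xi:.2f}")
```

A 30% deviation in xi between codes, as reported above, corresponds to a shift of only about 0.11 in log xi, which puts the quoted inter-code scatter in perspective.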

  20. Comparison of a laboratory spectrum of Eu-152 with results of simulation using the MCNP code

    Energy Technology Data Exchange (ETDEWEB)

    Rodenas, J. [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain); Gallardo, S. [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain)], E-mail: sergalbe@iqn.upv.es; Ortiz, J. [Laboratorio de Radiactividad Ambiental, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain)

    2007-09-21

    Detectors used for gamma spectrometry must be calibrated for each geometry considered in environmental radioactivity laboratories. This calibration is performed using a standard solution containing gamma emitter sources. Nevertheless, the efficiency curves obtained are periodically checked using a source such as {sup 152}Eu, which emits many gamma rays covering a wide energy range (20-1500 keV). {sup 152}Eu presents a problem because it has many peaks affected by True Coincidence Summing (TCS). Two experimental measurements were performed, placing the source (a Marinelli beaker) at 0 and 10 cm from the detector. Both spectra are simulated with the MCNP 4C code, which does not reproduce TCS. Therefore, the comparison between experimental and simulated peak net areas permits one to choose the most convenient peaks to check the efficiency curves of the detector.
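
Per peak, the efficiency check described above reduces to dividing the net peak area by the number of gammas expected from the source. All numbers here are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical full-energy-peak efficiency check for one Eu-152 line.
net_area  = 125_400   # net counts in the peak, illustrative
live_time = 3600.0    # live time of the measurement (s), illustrative
activity  = 5_000.0   # source activity at measurement time (Bq), illustrative
intensity = 0.2654    # gamma emission probability of the line, illustrative

# Efficiency = observed counts / gammas emitted at that energy.
efficiency = net_area / (activity * live_time * intensity)
print(f"full-energy-peak efficiency: {efficiency:.4f}")
```

Peaks strongly affected by TCS would show an experimental efficiency that deviates systematically from the simulated curve, which is how the comparison identifies the peaks safe to use.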

  1. Comparison of two numerical modelling codes for hydraulic and transport calculations in the near-field

    Energy Technology Data Exchange (ETDEWEB)

    Kalin, J., E-mail: jan.kalin@zag.s [Slovenian National Building and Civil Engineering Institute, Dimiceva 12, SI-1000 Ljubljana (Slovenia); Petkovsek, B., E-mail: borut.petkovsek@zag.s [Slovenian National Building and Civil Engineering Institute, Dimiceva 12, SI-1000 Ljubljana (Slovenia); Montarnal, Ph., E-mail: philippe.montarnal@cea.f [CEA/Saclay, DM2S/SFME/LSET, Gif-sur-Yvette, 91191 cedex (France); Genty, A., E-mail: alain.genty@cea.f [CEA/Saclay, DM2S/SFME/LSET, Gif-sur-Yvette, 91191 cedex (France); Deville, E., E-mail: estelle.deville@cea.f [CEA/Saclay, DM2S/SFME/LSET, Gif-sur-Yvette, 91191 cedex (France); Krivic, J., E-mail: jure.krivic@geo-zs.s [Geological Survey of Slovenia, Dimiceva 14, SI-1000 Ljubljana (Slovenia); Ratej, J., E-mail: joze.ratej@geo-zs.s [Geological Survey of Slovenia, Dimiceva 14, SI-1000 Ljubljana (Slovenia)

    2011-04-15

    In past years the Slovenian Performance Analysis/Safety Assessment team has performed many generic studies for the future Slovenian low- and intermediate-level waste repository, most recently a Special Safety Analysis for the Krsko site. The modelling approach was to split the problem into three parts: near-field (detailed model of the repository), far-field (i.e., geosphere) and biosphere. In the Special Safety Analysis the code used to perform the near-field calculations was Hydrus2D. Recently the team began a cooperation with the French Commissariat à l'Énergie Atomique/Saclay (CEA/Saclay) and, as part of this cooperation, began investigating the use of the Alliances numerical platform for near-field calculations in order to compare the overall approach and calculated results. The article presents the comparison between these two codes for a silo-type repository that was considered in the Special Safety Analysis. The physical layout and characteristics of the repository are presented, and a hydraulic and transport model of the repository is developed and implemented in Alliances. An analysis of sensitivity to mesh fineness and to simulation timestep has been performed and is also presented. The compared quantity is the output flux of radionuclides on the boundary of the model. Finally, the results from Hydrus2D and Alliances are compared and the differences and similarities are discussed.

  2. Integrated Codes for Estimating Environmental Accumulation and Individual Dose from Past Hanford Atmospheric Releases: Hanford Environmental Dose Reconstruction Project

    Energy Technology Data Exchange (ETDEWEB)

    Ikenberry, T. A.; Burnett, R. A.; Napier, B. A.; Reitz, N. A.; Shipler, D. B.

    1992-02-01

    Preliminary radiation doses were estimated and reported during Phase I of the Hanford Environmental Dose Reconstruction (HEDR) Project. As the project has progressed, additional information regarding the magnitude and timing of past radioactive releases has been developed, and the general scope of the required calculations has been enhanced. The overall HEDR computational model for computing doses attributable to atmospheric releases from Hanford Site operations is called HEDRIC (Hanford Environmental Dose Reconstruction Integrated Codes). It consists of four interrelated models: source term, atmospheric transport, environmental accumulation, and individual dose. The source term and atmospheric transport models are documented elsewhere. This report describes the initial implementation of the design specifications for the environmental accumulation model and computer code, called DESCARTES (Dynamic EStimates of Concentrations and Accumulated Radionuclides in Terrestrial Environments), and the individual dose model and computer code, called CIDER (Calculation of Individual Doses from Environmental Radionuclides). The computations required of these models and the design specifications for their codes were documented in Napier et al. (1992). Revisions to the original specifications and the basis for modeling decisions are explained. This report is not the final code documentation but gives the status of the model and code development to date. Final code documentation is scheduled to be completed in FY 1994 following additional code upgrades and refinements. The user's guide included in this report describes the operation of the environmental accumulation and individual dose codes and associated pre- and post-processor programs. A programmer's guide describes the logical structure of the programs and their input and output files.

  3. THEORETICAL AND NUMERICAL COMPARISON ON DOUBLE-PROJECTION METHODS FOR VARIATIONAL INEQUALITIES

    Institute of Scientific and Technical Information of China (English)

    WANG Yiju; SUN Wenyu

    2003-01-01

    Recently, double projection methods for solving variational inequalities have received much attention because they require fewer projections at each iteration. In this paper, we unify these double projection methods within two frameworks, which contain the existing double projection methods as special cases. On the basis of this unification, a theoretical and numerical comparison between these double projection methods is presented.
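
The prototype of the double projection methods unified above is the classical extragradient iteration, which performs two projections onto the feasible set per step. The sketch below uses an illustrative box constraint and an affine monotone operator; it is a generic example, not one of the paper's unified frameworks:

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def extragradient(F, x0, lo, hi, tau=0.1, iters=500):
    """Double-projection (extragradient) method for VI(F, C), C a box."""
    x = x0.copy()
    for _ in range(iters):
        y = project_box(x - tau * F(x), lo, hi)  # first (predictor) projection
        x = project_box(x - tau * F(y), lo, hi)  # second (corrector) projection
    return x

# Monotone affine operator F(x) = M x + q with a skew-symmetric part,
# so F is not the gradient of any function (a genuine VI, not optimization).
M = np.array([[1.0, 1.0], [-1.0, 1.0]])
q = np.array([-1.0, 0.0])
x_star = extragradient(lambda x: M @ x + q, np.zeros(2), lo=0.0, hi=2.0)
print(np.round(x_star, 4))
```

Here the solution of M x + q = 0 lies inside the box, so the VI solution is the interior point (0.5, 0.5); the predictor step is what lets the method converge for merely monotone operators where a single-projection iteration can cycle.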

  4. Performance Comparison of Latency for RSC-RSC and RS-RSC Concatenated Codes

    Directory of Open Access Journals (Sweden)

    Manish Kumar

    2013-09-01

    In this paper, we compare the latency of serially concatenated convolutional codes. In particular, we compare RSC-RSC concatenated codes using non-iterative concatenated Viterbi decoding to RS-RSC concatenated codes using a concatenation of Viterbi and Berlekamp-Massey decoding. We have also used puncturing to obtain different code rates and analyzed the effect of code rate on latency. On the basis of simulations, it is shown that the RSC-RSC code is better than RS-RSC codes for low-latency applications. It is also shown that a trade-off is needed between BER and latency for concatenated codes.
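
The effect of puncturing on code rate mentioned above can be computed directly. In this sketch the pattern convention (1 = transmit, 0 = drop, listed over one period of the mother code's output bits) is an assumption for illustration:

```python
from fractions import Fraction

def punctured_rate(k, n, pattern):
    """Effective rate of a rate k/n mother code after puncturing.

    `pattern` lists keep/drop flags (1 = transmit, 0 = drop) over one
    puncturing period of the mother code's output bits.
    """
    assert len(pattern) % n == 0, "pattern must cover whole output blocks"
    input_bits = k * (len(pattern) // n)  # info bits per puncturing period
    sent_bits = sum(pattern)              # coded bits actually transmitted
    return Fraction(input_bits, sent_bits)

print(punctured_rate(1, 2, [1, 1]))              # no puncturing: rate 1/2
print(punctured_rate(1, 2, [1, 1, 1, 0]))        # keep 3 of 4 bits: rate 2/3
print(punctured_rate(1, 2, [1, 1, 0, 1, 1, 0]))  # keep 4 of 6 bits: rate 3/4
```

Higher punctured rates send fewer coded bits per information bit, which shortens transmission latency at the cost of weaker error protection, the BER/latency trade-off the abstract refers to.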

  5. Joint research project WASA-BOSS: Further development and application of severe accident codes. Assessment and optimization of accident management measures. Project B: Accident analyses for pressurized water reactors with the application of the ATHLET-CD code; Verbundprojekt WASA-BOSS: Weiterentwicklung und Anwendung von Severe Accident Codes. Bewertung und Optimierung von Stoerfallmassnahmen. Teilprojekt B: Druckwasserreaktor-Stoerfallanalysen unter Verwendung des Severe-Accident-Codes ATHLET-CD

    Energy Technology Data Exchange (ETDEWEB)

    Jobst, Matthias; Kliem, Soeren; Kozmenkov, Yaroslav; Wilhelm, Polina

    2017-02-15

    Within the framework of the project an ATHLET-CD input deck for a generic German PWR of type KONVOI has been created. This input deck was applied to the simulation of severe accidents from the accident categories station blackout (SBO) and small-break loss-of-coolant accident (SBLOCA). The complete accident transient from the initial event at full power until the damage of the reactor pressure vessel (RPV) is covered, and all relevant severe accident phenomena are modelled: start of core heat-up, fission product release, melting of fuel and absorber material, oxidation and release of hydrogen, relocation of molten material inside the core, relocation to the lower plenum, and damage and failure of the RPV. The model has been applied to the analysis of preventive and mitigative accident management measures for SBO and SBLOCA transients. To this end, the measures primary-side depressurization (PSD), injection into the primary circuit by mobile pumps, and, for SBLOCA, delayed injection by the cold-leg hydro-accumulators have been investigated, and the assumptions and start criteria of these measures have been varied. The time evolutions of the transients and time margins for the initiation of additional measures have been assessed. An uncertainty and sensitivity study has been performed for the early phase of one SBO scenario with PSD (until the start of core melt). In addition, a code-to-code comparison between ATHLET-CD and the severe accident code MELCOR has been carried out.

  6. A system for environmental model coupling and code reuse: The Great Rivers Project

    Science.gov (United States)

    Eckman, B.; Rice, J.; Treinish, L.; Barford, C.

    2008-12-01

    As part of the Great Rivers Project, IBM is collaborating with The Nature Conservancy and the Center for Sustainability and the Global Environment (SAGE) at the University of Wisconsin, Madison to build a Modeling Framework and Decision Support System (DSS) designed to help policy makers and a variety of stakeholders (farmers, fish & wildlife managers, hydropower operators, et al.) to assess, come to consensus, and act on land use decisions representing effective compromises between human use and ecosystem preservation/restoration. Initially focused on Brazil's Paraguay-Parana, China's Yangtze, and the Mississippi Basin in the US, the DSS integrates data and models from a wide variety of environmental sectors, including water balance, water quality, carbon balance, crop production, hydropower, and biodiversity. In this presentation we focus on the modeling framework aspect of this project. In our approach to these and other environmental modeling projects, we see a flexible, extensible modeling framework infrastructure for defining and running multi-step analytic simulations as critical. In this framework, we divide monolithic models into atomic components with clearly defined semantics encoded via rich metadata representation. Once models and their semantics and composition rules have been registered with the system by their authors or other experts, non-expert users may construct simulations as workflows of these atomic model components. A model composition engine enforces rules/constraints for composing model components into simulations, to avoid the creation of Frankenmodels, models that execute but produce scientifically invalid results. A common software environment and common representations of data and models are required, as well as an adapter strategy for code written in e.g., Fortran or python, that still enables efficient simulation runs, including parallelization. Since each new simulation, as a new composition of model components, requires calibration

  7. The SCEC-USGS Dynamic Earthquake Rupture Code Comparison Exercise - Simulations of Large Earthquakes and Strong Ground Motions

    Science.gov (United States)

    Harris, R.

    2015-12-01

    I summarize the progress by the Southern California Earthquake Center (SCEC) and U.S. Geological Survey (USGS) Dynamic Rupture Code Comparison Group, that examines if the results produced by multiple researchers' earthquake simulation codes agree with each other when computing benchmark scenarios of dynamically propagating earthquake ruptures. These types of computer simulations have no analytical solutions with which to compare, so we use qualitative and quantitative inter-code comparisons to check if they are operating satisfactorily. To date we have tested the codes against benchmark exercises that incorporate a range of features, including single and multiple planar faults, single rough faults, slip-weakening, rate-state, and thermal pressurization friction, elastic and visco-plastic off-fault behavior, complete stress drops that lead to extreme ground motion, heterogeneous initial stresses, and heterogeneous material (rock) structure. Our goal is reproducibility, and we focus on the types of earthquake-simulation assumptions that have been or will be used in basic studies of earthquake physics, or in direct applications to specific earthquake hazard problems. Our group's goals are to make sure that when our earthquake-simulation codes simulate these types of earthquake scenarios along with the resulting simulated strong ground shaking, that the codes are operating as expected. For more introductory information about our group and our work, please see our group's overview papers, Harris et al., Seismological Research Letters, 2009, and Harris et al., Seismological Research Letters, 2011, along with our website, scecdata.usc.edu/cvws.

  8. A Comparison of Creativity in Project Groups in Science and Engineering Education in Denmark and China

    DEFF Research Database (Denmark)

    Zhou, Chunfang; Valero, Paola

    2015-01-01

    Different pedagogical strategies influence the development of creativity in project groups in science and engineering education. This study is a comparison between two cases: Problem-Based Learning (PBL) in Denmark and Project-Organized Learning (POL) in China.

  9. GENII (Generation II): The Hanford Environmental Radiation Dosimetry Software System: Volume 3, Code maintenance manual: Hanford Environmental Dosimetry Upgrade Project

    Energy Technology Data Exchange (ETDEWEB)

    Napier, B.A.; Peloquin, R.A.; Strenge, D.L.; Ramsdell, J.V.

    1988-09-01

    The Hanford Environmental Dosimetry Upgrade Project was undertaken to incorporate the internal dosimetry models recommended by the International Commission on Radiological Protection (ICRP) in updated versions of the environmental pathway analysis models used at Hanford. The resulting second generation of Hanford environmental dosimetry computer codes is compiled in the Hanford Environmental Dosimetry System (Generation II, or GENII). This coupled system of computer codes is intended for analysis of environmental contamination resulting from acute or chronic releases to, or initial contamination of, air, water, or soil, on through the calculation of radiation doses to individuals or populations. GENII is described in three volumes of documentation. This volume is a Code Maintenance Manual for the serious user, including code logic diagrams, global dictionary, worksheets to assist with hand calculations, and listings of the code and its associated data libraries. The first volume describes the theoretical considerations of the system. The second volume is a Users' Manual, providing code structure, users' instructions, required system configurations, and QA-related topics. 7 figs., 5 tabs.

  10. Optimum projection pattern generation for grey-level coded structured light illumination systems

    Science.gov (United States)

    Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben

    2017-04-01

    Structured light illumination (SLI) systems are well-established optical inspection techniques for noncontact 3D surface measurements. A common technique is multi-frequency sinusoidal SLI, which obtains the phase map at various fringe periods in order to estimate the absolute phase and, hence, the 3D surface information. Nevertheless, multi-frequency SLI systems employ multiple measurement planes (e.g. four phase-shifted frames) to obtain the phase at a given fringe period. It is therefore an age-old challenge to obtain the absolute surface information using fewer measurement frames. Grey-level (GL) coding techniques have been developed as an attempt to reduce the number of planes needed, because a spatio-temporal GL sequence employing p discrete grey-levels and m frames has the potential to unwrap up to p^m fringes. Nevertheless, one major disadvantage of GL-based SLI techniques is that errors often occur near the border of each stripe, because an ideal stepwise intensity change cannot be measured. If the step-change in intensity is a single discrete grey-level unit, this problem can usually be overcome by applying an appropriate threshold. However, severe errors occur if the intensity change at the border of the stripe exceeds several discrete grey-level units. In this work, an optimum GL-based technique is presented that generates a series of projection patterns with a minimal gradient in the intensity. It is shown that when using this technique, the errors near the border of the stripes can be significantly reduced. This improvement is achieved through the choice of generated patterns and does not involve additional hardware or special post-processing techniques. The performance of the method is validated using both simulations and experiments. The reported technique is generic, works with an arbitrary number of frames, and can employ an arbitrary number of grey-levels.
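
    The abstract notes that p grey-levels over m frames can unwrap up to p^m fringes, and that the optimum patterns minimize the intensity gradient between neighbouring stripes. The authors' own generation procedure is not reproduced here; as a hedged illustration, a reflected p-ary Gray code is one classical construction in which consecutive codewords differ in exactly one frame, and only by a single grey-level unit:

    ```python
    def pary_gray_code(m, p):
        """Reflected p-ary Gray code: all p**m codewords of m digits in
        {0..p-1}, ordered so that consecutive codewords differ in exactly
        one digit, and only by +/-1 (a unit grey-level step)."""
        codes = [[]]
        for _ in range(m):
            new = []
            for d in range(p):
                # alternate direction so block boundaries also step by one unit
                block = codes if d % 2 == 0 else codes[::-1]
                new.extend([d] + c for c in block)
            codes = new
        return codes
    ```

    For example, `pary_gray_code(3, 3)` yields 27 stripe codes (3 frames, 3 grey-levels), so 27 fringes could in principle be disambiguated with unit-step borders throughout.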

  11. Passive Wireless Hydrogen Sensors Using Orthogonal Frequency Coded Acoustic Wave Devices Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal describes the continued development of passive orthogonal frequency coded (OFC) surface acoustic wave (SAW) based hydrogen sensors for NASA application...

  12. Projections and neurochemical coding of motor neurones to the circular and longitudinal muscle of the guinea pig gastric corpus.

    Science.gov (United States)

    Michel, K; Reiche, D; Schemann, M

    2000-07-01

    The present study identified projection and neurochemical coding patterns of intrinsic circular (CMN) and longitudinal muscle motor neurones (LMN) in the guinea pig stomach by using the retrograde tracer DiI (1,1'-didodecyl-3,3,3',3'-tetramethylindocarbocya-nine perchlorate) in combination with the immunohistochemical demonstration of choline acetyltransferase (ChAT), enkephalin (ENK), neuropeptide Y (NPY), nitric oxide synthase (NOS) and substance P (SP). Populations of LMN and CMN had similar neurochemical coding and a clear polarity of projection. Taking all DiI-labeled cell bodies as 100%, ascending pathways exhibited the coding ChAT/- (CMN:14.7%/LMN: 18.3%), ChAT/ENK (15.7%/10.1%), ChAT/SP/+/-ENK (19.2%/16.4%), or ChAT/NPY (4.4%/7.6%); descending pathways had the coding NOS/- (13.8%/16.9%), NOS/NPY (9.9%/17%), NOS/ENK (4.4%/1.2%) or NOS/NPY/ENK (13.0%/5.5%). The relative contributions of these populations were not different between CMN and LMN. However, target-specific projection patterns were revealed: most LMN (82%) had longitudinal whereas most CMN (58%) had circumferential projection preferences. The results indicate that gastric circular and longitudinal muscle layers are innervated by ascending excitatory and descending inhibitory pathways in the myenteric plexus. The projection patterns of CMN and LMN were different and followed the orientation of the muscle layers. It is suggested that the specific muscle motor pathways in the gastric myenteric plexus coordinate the reflex-mediated phasic and tonic activity of gastric muscle layers.

  13. Comparison of computer codes for estimates of the symmetric coupled bunch instabilities growth times

    CERN Document Server

    Angal-Kalinin, Deepa

    2002-01-01

    The standard computer codes used for estimating the growth times of the symmetric coupled bunch instabilities are ZAP and BBI. The code Vlasov was earlier used for the LHC estimates of the coupled bunch instability growth times [1]. The results obtained by these three codes have been compared, and the options under which their results can be compared are discussed. The differences in the input and the output for these three codes are given for a typical case.

  14. Fast comparison of IS radar code sequences for lag profile inversion

    Directory of Open Access Journals (Sweden)

    M. S. Lehtinen

    2008-08-01

    A fast method for theoretically comparing the a posteriori variances produced by different phase code sequences in incoherent scatter radar (ISR) experiments is introduced. Alternating codes of types 1 and 2 are known to be optimal for selected range resolutions, but the code sets are inconveniently long for many purposes, such as ground clutter estimation, and in cases where coherent echoes from lower ionospheric layers are to be analyzed in addition to standard F-layer spectra.

    The method is used in practice for searching binary code quads that have estimation accuracy almost equal to that of much longer alternating code sets. Though the code sequences can consist of as few as four different transmission envelopes, the lag profile estimation variances are near to the theoretical minimum. Thus the short code sequence is equally good as a full cycle of alternating codes with the same pulse length and bit length. The short code groups cannot be directly decoded, but the decoding is done in connection with more computationally expensive lag profile inversion in data analysis.

    The actual code searches as well as the analysis and real data results from the found short code searches are explained in other papers sent to the same issue of this journal. We also discuss interesting subtle differences found between the different alternating codes by this method. We assume that thermal noise dominates the incoherent scatter signal.

  15. Game-Coding Workshops in New Zealand Public Libraries: Evaluation of a Pilot Project

    Science.gov (United States)

    Bolstad, Rachel

    2016-01-01

    This report evaluates a game coding workshop offered to young people and adults in seven public libraries around New Zealand. Participants were taken step by step through the process of creating their own simple 2D videogame, learning the basics of coding, computational thinking, and digital game design. The workshops were free and drew 426 people…

  16. Cross–Project Defect Prediction With Respect To Code Ownership Model: An Empirical Study

    Directory of Open Access Journals (Sweden)

    Marian Jureczko

    2015-06-01

    The paper presents an analysis of 83 versions of industrial, open-source and academic projects. We have empirically evaluated whether those project types constitute separate classes of projects with regard to defect prediction. Statistical tests proved that there exist significant differences between the models trained on the aforementioned project classes. This work makes the next step towards cross-project reusability of defect prediction models and facilitates their adoption, which has been very limited so far.

  17. The InterFrost benchmark of Thermo-Hydraulic codes for cold regions hydrology - first inter-comparison phase results

    Science.gov (United States)

    Grenier, Christophe; Rühaak, Wolfram

    2016-04-01

    Climate change impacts in permafrost regions have received considerable attention recently due to the pronounced warming trends experienced in recent decades and projected into the future. Large portions of these permafrost regions are characterized by surface water bodies (lakes, rivers) that interact with the surrounding permafrost, often generating taliks (unfrozen zones) within the permafrost that allow for hydrologic interactions between the surface water bodies and underlying aquifers, and thus influence the hydrologic response of a landscape to climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is required to understand and model the past and future evolution of such units (Kurylyk et al. 2014). However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, which can be partly attributed to the difficulty in verifying multi-dimensional results produced by numerical models. A benchmark exercise was initiated at the end of 2014, with participants from the USA, Canada and Europe representing 13 simulation codes. The benchmark exercises consist of several test cases inspired by existing literature (e.g. McKenzie et al., 2007) as well as new ones (Kurylyk et al. 2014; Grenier et al. in prep.; Rühaak et al. 2015). They range from simpler, purely thermal 1D cases to more complex, coupled 2D TH cases (benchmarks TH1, TH2, and TH3). Some experimental cases conducted in a cold room complement the validation approach. A web site hosted by LSCE (Laboratoire des Sciences du Climat et de l'Environnement) serves as an interaction platform for the participants and hosts the test case databases at the following address: https://wiki.lsce.ipsl.fr/interfrost. The results of the first stage of the benchmark exercise will be presented, focusing mainly on the inter-comparison of participant results for the coupled cases TH2 & TH3.

  18. Test results of a 40 kW Stirling engine and comparison with the NASA-Lewis computer code predictions

    Science.gov (United States)

    Allen, D.; Cairelli, J.

    1985-01-01

    A Stirling engine was tested without auxiliaries at NASA-Lewis. Three different regenerator configurations were tested with hydrogen. The test objectives were (1) to obtain steady-state and dynamic engine data, including indicated power, for validation of an existing computer model for this engine; and (2) to evaluate structurally the use of silicon carbide regenerators. This paper presents comparisons of the measured brake performance, indicated mean effective pressure, and cyclic pressure variations with those predicted by the code. The measured data tended to be lower than the computer code predictions. The silicon carbide foam regenerators appear to be structurally suitable, but the foam matrix tested severely reduced performance.

  19. Systematic comparison of photoionised plasma codes with application to spectroscopic studies of AGN in X-rays

    Science.gov (United States)

    Mehdipour, M.; Kaastra, J. S.; Kallman, T.

    2016-12-01

    Atomic data and plasma models play a crucial role in the diagnosis and interpretation of astrophysical spectra, thus influencing our understanding of the Universe. In this investigation we present a systematic comparison of the leading photoionisation codes to determine how much their intrinsic differences impact X-ray spectroscopic studies of hot plasmas in photoionisation equilibrium. We carry out our computations using the Cloudy, SPEX, and XSTAR photoionisation codes, and compare their derived thermal and ionisation states for various ionising spectral energy distributions. We examine the resulting absorption-line spectra from these codes for the case of ionised outflows in active galactic nuclei. By comparing the ionic abundances as a function of ionisation parameter ξ, we find that on average there is about 30% deviation between the codes in ξ where ionic abundances peak. For H-like to B-like sequence ions alone, this deviation in ξ is smaller at about 10% on average. The comparison of the absorption-line spectra in the X-ray band shows that there is on average about 30% deviation between the codes in the optical depth of the lines produced at log ξ 1 to 2, reducing to about 20% deviation at log ξ 3. We also simulate spectra of the ionised outflows with the current and upcoming high-resolution X-ray spectrometers, on board XMM-Newton, Chandra, Hitomi, and Athena. From these simulations we obtain the deviation on the best-fit model parameters, arising from the use of different photoionisation codes, which is about 10 to 40%. We compare the modelling uncertainties with the observational uncertainties from the simulations. The results highlight the importance of continuous development and enhancement of photoionisation codes for the upcoming era of X-ray astronomy with Athena.
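
    The ionization parameter ξ used throughout these comparisons is conventionally defined as ξ = L / (n_H r²), in erg cm s⁻¹, with L the ionizing luminosity, n_H the hydrogen density and r the distance from the source. A minimal helper, with illustrative values that are not taken from the paper:

    ```python
    import math

    def log_xi(L_ion, n_H, r):
        """log10 of the ionization parameter xi = L / (n_H * r^2), with
        L_ion the ionizing luminosity (erg/s), n_H the hydrogen density
        (cm^-3) and r the distance from the ionizing source (cm)."""
        return math.log10(L_ion / (n_H * r**2))

    # Illustrative outflow: L = 1e44 erg/s, n_H = 1e8 cm^-3, r = 1e18 cm
    lx = log_xi(1e44, 1e8, 1e18)  # -> log xi = 0
    ```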

  20. Systematic Comparison of Photoionized Plasma Codes with Application to Spectroscopic Studies of AGN in X-Rays

    Science.gov (United States)

    Mehdipour, M.; Kaastra, J. S.; Kallman, T.

    2016-01-01

    Atomic data and plasma models play a crucial role in the diagnosis and interpretation of astrophysical spectra, thus influencing our understanding of the Universe. In this investigation we present a systematic comparison of the leading photoionization codes to determine how much their intrinsic differences impact X-ray spectroscopic studies of hot plasmas in photoionization equilibrium. We carry out our computations using the Cloudy, SPEX, and XSTAR photoionization codes, and compare their derived thermal and ionization states for various ionizing spectral energy distributions. We examine the resulting absorption-line spectra from these codes for the case of ionized outflows in active galactic nuclei. By comparing the ionic abundances as a function of ionization parameter ξ, we find that on average there is about 30% deviation between the codes in ξ where ionic abundances peak. For H-like to B-like sequence ions alone, this deviation in ξ is smaller at about 10% on average. The comparison of the absorption-line spectra in the X-ray band shows that there is on average about 30% deviation between the codes in the optical depth of the lines produced at log ξ 1 to 2, reducing to about 20% deviation at log ξ 3. We also simulate spectra of the ionized outflows with the current and upcoming high-resolution X-ray spectrometers, on board XMM-Newton, Chandra, Hitomi, and Athena. From these simulations we obtain the deviation on the best-fit model parameters, arising from the use of different photoionization codes, which is about 10 to 40%. We compare the modeling uncertainties with the observational uncertainties from the simulations. The results highlight the importance of continuous development and enhancement of photoionization codes for the upcoming era of X-ray astronomy with Athena.

  1. Code-to-code comparison for analysing the steady-state heat transfer and natural circulation in an air-cooled RCCS using GAMMA+ and Flownex

    Energy Technology Data Exchange (ETDEWEB)

    Rousseau, P.G., E-mail: pgr@mtechindustrial.com [School of Mechanical and Nuclear Engineering, North-West University, Private Bag X 6001, Potchefstroom (South Africa); Toit, C.G. du [School of Mechanical and Nuclear Engineering, North-West University, Private Bag X 6001, Potchefstroom (South Africa); Jun, J.S.; Noh, J.M. [Korea Atomic Energy Research Institute, Daedeok-daero 989-111, Yuseong-gu, Daejeon (Korea, Republic of)

    2015-09-15

    Highlights: • The GAMMA+ and Flownex codes are used in the analyses of the air-cooled RCCS system. • Radiation heat transfer comprises the bulk of the total rate of heat transfer. • It is possible to obtain reverse flow through the RCCS standpipes. • It has been found that the results obtained with the two codes are in good agreement. • The RCCS remains functional for very high blockage ratios, thus supporting the safety case. - Abstract: The GAMMA+ and Flownex codes are both based on a one-dimensional flow network modelling approach and both can account for any complex network of different heat transfer phenomena occurring simultaneously. However, there are notable differences in some of the detail modelling aspects, such as the way in which the convection in the reactor cavity is represented. Despite this, it was found in the analyses of the air-cooled RCCS system that the results provided by the two codes compare very well if similar input values are used for the pressure drop coefficients, heat transfer coefficients and view factors. The results show that radiation heat transfer comprises the bulk of the total rate of heat transfer from the RPV surface. It is also shown that it is possible to obtain a stable and sustainable steady-state operational condition where the flow is in the reverse direction through the RCCS standpipes, resulting in excessively high values for the concrete wall temperature. It is therefore crucial in the design to ensure that such a flow reversal will not occur under any circumstances. In general the good comparison between the two codes provides confidence in the ability of both to correctly solve the fundamental conservation and heat transfer relations in an integrated manner for the complete RCCS system. Provided that appropriate input values are available, these codes can therefore be used effectively to evaluate the integrated performance of the system under various operating conditions. It is shown here that the RCCS
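
    As a rough, order-of-magnitude illustration of why radiation dominates the heat transfer from the RPV surface, grey-body radiation can be compared with natural convection. All temperatures and coefficients below are hypothetical, not taken from the study:

    ```python
    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

    def radiation_flux(T_hot, T_cold, emissivity=0.8):
        """Net grey-body radiative flux between two surfaces (W/m^2),
        ignoring view-factor geometry for this rough estimate."""
        return emissivity * SIGMA * (T_hot**4 - T_cold**4)

    def convection_flux(T_hot, T_cold, h=5.0):
        """Natural-convection flux with a typical coefficient h in W/(m^2 K)."""
        return h * (T_hot - T_cold)

    # Hypothetical RPV surface at 500 K facing a standpipe wall at 350 K:
    q_rad = radiation_flux(500.0, 350.0)   # ~2.2 kW/m^2
    q_conv = convection_flux(500.0, 350.0)  # ~0.75 kW/m^2
    ```

    Even with this crude model, the T⁴ dependence makes the radiative term roughly three times the convective one at these temperatures, consistent with the highlighted finding.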

  2. Comparisons on International Approaches of Business and Project Risk Management

    OpenAIRE

    Nadia Carmen ENE

    2005-01-01

    In this article we intend to present a comparative approach between three recognized international methodologies for risk management: RISKMAN, the Project Management Institute's PMBoK, and the Project Risk Analysis and Management Guide produced by the Association for Project Management.

  4. CODE STEM - Moon, Mars, and Beyond; DLESE-Powered On-Line Classroom Project

    Data.gov (United States)

    National Aeronautics and Space Administration — "CODE (COrps DEvelopment) STEM (Science, Technology, Engineering, and Math) - Moon, Mars and Beyond; DLESE-Powered On-Line Classroom" shares the excitement of...

  5. Comparison of results between the ballooning-modes codes BLOON and BALOON

    Energy Technology Data Exchange (ETDEWEB)

    Munro, J.K. Jr.

    1981-08-01

    Ballooning mode equation eigenvalues calculated by two different codes, BLOON (written at General Atomic) and BALOON (written at Oak Ridge National Laboratory), have been compared for a sequence of equilibria having a range of β values. The results agree for marginal stability only. Differences away from marginal stability may be due to differences in the coordinate systems used for the analysis in the two codes. Equilibria were generated using the ISLAND code of D. Stevens of New York University. Results of various convergence studies made with the codes are presented together with recommendations for their use.

  6. Comparison of seismic actions and structural design requirements in Chinese Code GB 50011 and International Standard ISO 3010

    Institute of Scientific and Technical Information of China (English)

    王亚勇

    2004-01-01

    This paper presents a comparison between the Chinese Code GB50011-2001 and the International Standard ISO3010: 2001(E), emphasizing the similarities and differences related to design requirements, seismic actions and analytical approaches. Similarities include: earthquake return period, conceptual design, site classification, structural strength and ductility requirements, deformation limits, response spectra, seismic analysis procedures, isolation and energy dissipation,and nonstructural elements. Differences exist in the following areas: seismic levels, earthquake loading, mode damping factors and structural control.

  7. Comparison of a 3-D GPU-Assisted Maxwell Code and Ray Tracing for Reflectometry on ITER

    Science.gov (United States)

    Gady, Sarah; Kubota, Shigeyuki; Johnson, Irena

    2015-11-01

    Electromagnetic wave propagation and scattering in magnetized plasmas are important diagnostics for high temperature plasmas. 1-D and 2-D full-wave codes are standard tools for measurements of the electron density profile and fluctuations; however, ray tracing results have shown that beam propagation in tokamak plasmas is inherently a 3-D problem. The GPU-Assisted Maxwell Code utilizes the FDTD (Finite-Difference Time-Domain) method for solving the Maxwell equations with the cold plasma approximation in a 3-D geometry. Parallel processing with GPGPU (General-Purpose computing on Graphics Processing Units) is used to accelerate the computation. Previously, we reported on initial comparisons of the code results to 1-D numerical and analytical solutions, where the size of the computational grid was limited by the on-board memory of the GPU. In the current study, this limitation is overcome by using domain decomposition and an additional GPU. As a practical application, this code is used to study the current design of the ITER Low Field Side Reflectometer (LFSR) for the Equatorial Port Plug 11 (EPP11). A detailed examination of Gaussian beam propagation in the ITER edge plasma will be presented, as well as comparisons with ray tracing. This work was made possible by funding from the Department of Energy for the Summer Undergraduate Laboratory Internship (SULI) program. This work is supported by the US DOE Contract No. DE-AC02-09CH11466 and DE-FG02-99-ER54527.
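
    The code described above is 3D, cold-plasma and GPU-accelerated; none of that is reproduced here. A minimal 1D vacuum Yee/FDTD update in normalized units (c = 1, Courant number 1) illustrates the core leapfrog scheme that the method is built on. Grid size and source parameters are arbitrary illustrative choices:

    ```python
    import numpy as np

    def fdtd_1d_vacuum(nx=200, nt=300):
        """Minimal 1D Yee FDTD in vacuum, normalized units (c = 1,
        dx = dt = 1 grid unit, the 'magic' time step in 1D). Injects a
        Gaussian pulse via a soft source and returns the final Ez field."""
        ez = np.zeros(nx)
        hy = np.zeros(nx)
        for n in range(nt):
            # H update (staggered half time step)
            hy[:-1] += ez[1:] - ez[:-1]
            # E update
            ez[1:] += hy[1:] - hy[:-1]
            # soft source: Gaussian pulse added at one grid point
            ez[20] += np.exp(-((n - 40) / 12.0) ** 2)
        return ez
    ```

    A real reflectometry simulation would add the cold-plasma current response, absorbing boundaries, and a 3D grid; the leapfrog update structure above is what the GPU parallelization accelerates.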

  8. Validation of the BISON 3D Fuel Performance Code: Temperature Comparisons for Concentrically and Eccentrically Located Fuel Pellets

    Energy Technology Data Exchange (ETDEWEB)

    J. D. Hales; D. M. Perez; R. L. Williamson; S. R. Novascone; B. W. Spencer

    2013-03-01

    BISON is a modern finite-element based nuclear fuel performance code that has been under development at the Idaho National Laboratory (USA) since 2009. The code is applicable to both steady and transient fuel behaviour and is used to analyse either 2D axisymmetric or 3D geometries. BISON has been applied to a variety of fuel forms including LWR fuel rods, TRISO-coated fuel particles, and metallic fuel in both rod and plate geometries. Code validation is currently in progress, principally by comparison to instrumented LWR fuel rods. Halden IFA experiments constitute a large percentage of the current BISON validation base. The validation emphasis here is centreline temperatures at the beginning of fuel life, with comparisons made to seven rods from the IFA-431 and 432 assemblies. The principal focus is IFA-431 Rod 4, which included concentric and eccentrically located fuel pellets. This experiment provides an opportunity to explore 3D thermomechanical behaviour and assess the 3D simulation capabilities of BISON. Analysis results agree with experimental results showing lower fuel centreline temperatures for eccentric fuel with the peak temperature shifted from the centreline. The comparison confirms with modern 3D analysis tools that the measured temperature difference between concentric and eccentric pellets is not an artefact and provides a quantitative explanation for the difference.
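
    For the concentric pellet, the classic steady-state conduction result relates the centreline-to-surface temperature rise to the linear heat rate, ΔT = q'/(4πk), independent of pellet radius. This textbook estimate, not BISON itself, is sketched below with hypothetical values (not Halden IFA data):

    ```python
    import math

    def centreline_temp(T_surface, linear_power, k_fuel):
        """Steady 1D radial conduction in a solid cylindrical pellet with
        uniform volumetric heating: T_centre - T_surface = q' / (4*pi*k),
        where q' is the linear heat rate (W/m) and k the fuel thermal
        conductivity (W/(m K)). Independent of pellet radius."""
        return T_surface + linear_power / (4.0 * math.pi * k_fuel)

    # Hypothetical case: 25 kW/m linear power, k = 3 W/(m K), 700 K surface
    t_centre = centreline_temp(700.0, 25000.0, 3.0)  # ~1363 K
    ```

    Eccentric pellets break the radial symmetry this formula assumes, which is precisely why the 3D capability discussed above is needed to reproduce the measured temperature shift.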

  9. The Santa Barbara Cluster Comparison Project: A Comparison of Cosmological Hydrodynamics Solutions

    Energy Technology Data Exchange (ETDEWEB)

    Frenk, C. S.; White, S. D. M.; Bode, P.; Bond, J. R.; Bryan, G. L.; Cen, R.; Couchman, H. M. P.; Evrard, A. E.; Gnedin, N.; Jenkins, A. (and others)

    1999-11-10

    We have simulated the formation of an X-ray cluster in a cold dark matter universe using 12 different codes. The codes span the range of numerical techniques and implementations currently in use, including smoothed particle hydrodynamics (SPH) and grid methods with fixed, deformable, or multilevel meshes. The goal of this comparison is to assess the reliability of cosmological gasdynamical simulations of clusters in the simplest astrophysically relevant case, that in which the gas is assumed to be nonradiative. We compare images of the cluster at different epochs, global properties such as mass, temperature and X-ray luminosity, and radial profiles of various dynamical and thermodynamical quantities. On the whole, the agreement among the various simulations is gratifying, although a number of discrepancies exist. Agreement is best for properties of the dark matter and worst for the total X-ray luminosity. Even in this case, simulations that adequately resolve the core radius of the gas distribution predict total X-ray luminosities that agree to within a factor of 2. Other quantities are reproduced to much higher accuracy. For example, the temperature and gas mass fraction within the virial radius agree to within about 10%, and the ratio of specific dark matter kinetic to gas thermal energies agree to within about 5%. Various factors, including differences in the internal timing of the simulations, contribute to the spread in calculated cluster properties. Based on the overall consistency of results, we discuss a number of general properties of the cluster we have modeled. (c) 1999 The American Astronomical Society.

  10. Child labor and multinational conduct : a comparison of international business and stakeholder codes

    NARCIS (Netherlands)

    Kolk, A.; van Tulder, R.

    2002-01-01

    Increasing attention to the issue of child labor has been reflected in codes of conduct that emerged in the past decade in particular. This paper examines the way in which multinationals, business associations, governmental and non-governmental organizations deal with child labor in their codes.

  12. Contrast and Comparison Between the Old and New Bar Code for Commodity Management Measures

    Institute of Scientific and Technical Information of China (English)

    Huang Xiaolin; Ma Jing

    2005-01-01

    With the development of the socialist market economy, the former Bar Code for Commodity Management Measures (hereafter the Old Measures), issued by the National Bureau of Quality and Technical Supervision, no longer meet the requirements of bar code management for commodities.

  14. Comparison of different computer platforms for running the Versatile Advection Code

    NARCIS (Netherlands)

    Toth, G.; Keppens, R.; Sloot, P.; Bubak, M.; Hertzberger, B.

    1998-01-01

    The Versatile Advection Code is a general tool for solving hydrodynamical and magnetohydrodynamical problems arising in astrophysics. We compare the performance of the code on different computer platforms, including work stations and vector and parallel supercomputers. Good parallel scaling can be a

  15. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Medley, S. S. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Liu, D. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Univ. of California, Irvine, CA (United States). Dept. of Physics and Astronomy; Gorelenkova, M. V. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Heidbrink, W. W. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Univ. of California, Irvine, CA (United States). Dept. of Physics and Astronomy; Stagner, L. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Univ. of California, Irvine, CA (United States). Dept. of Physics and Astronomy

    2016-01-12

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U plasmas). The legacy TRANSP code did not handle halo neutrals properly since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a 'beam-in-a-box' model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.
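
    The generation-tracking logic described above can be caricatured with a toy Monte Carlo: at each step a neutral either charge-exchanges (becoming the next halo generation), is ionized inside the box, or exits the box. The probabilities below are purely illustrative and are not taken from TRANSP or FIDAsim:

    ```python
    import random

    def track_halo_generations(n_neutrals=10000, p_cx=0.3, p_ion=0.4,
                               max_gen=20, seed=1):
        """Toy model of halo-neutral generation tracking: per step each
        neutral charge-exchanges with probability p_cx (spawning the next
        halo generation), is ionized with probability p_ion, or exits the
        box otherwise. Returns a histogram of final generation counts,
        index 0 being neutrals that never produced a halo."""
        rng = random.Random(seed)
        gens = [0] * (max_gen + 1)
        for _ in range(n_neutrals):
            g = 0
            while g < max_gen:
                u = rng.random()
                if u < p_cx:
                    g += 1          # charge exchange: next halo generation
                else:
                    break           # ionized inside the box, or exits it
            gens[g] += 1
        return gens
    ```

    With these illustrative probabilities the population falls off geometrically with generation number, which is why successive halo generations can be truncated after a few steps without losing much density.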

  16. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    Science.gov (United States)

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.

    2016-02-01

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly, since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint, as is actually the case. The 3D halo neutral code uses a 'beam-in-a-box' model that encompasses both injected beam neutrals and the resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produces first-generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked against the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and on the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have only a minor effect on the shape of the NPA energy spectrum. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of atomic physics database.
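
    The generation-by-generation bookkeeping described above can be sketched with a toy Monte Carlo in which each neutral either charge-exchanges into the next halo generation or is lost to ionization or box exit. This is only an illustration of the tracking logic; the per-generation probability below is an invented assumption, not a TRANSP or FIDAsim value.

```python
import numpy as np

# Toy sketch of halo-generation tracking (not the TRANSP/FIDAsim physics):
# each neutral in a generation either charge-exchanges and spawns a
# next-generation halo neutral, or is removed by ionization or box exit.
rng = np.random.default_rng(0)
P_CX = 0.45  # assumed per-generation charge-exchange probability

generations = []
parents = 10_000  # deposited beam neutrals that seed the first halo generation
while parents > 0 and len(generations) < 12:
    generations.append(parents)
    # survivors that charge-exchange again become the next halo generation;
    # the rest are ionized or leave the 'beam-in-a-box' region
    parents = int(np.sum(rng.random(parents) < P_CX))

print(generations[0], generations[-1])  # the halo chain decays geometrically
```

Because each generation is a random subset of its parent generation, the population is non-increasing, which is why the tracking loop terminates quickly in practice.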

  17. Comparison of nurse educators' and nursing students' descriptions of teaching codes of ethics.

    Science.gov (United States)

    Numminen, Olivia; Leino-Kilpi, Helena; van der Arend, Arie; Katajisto, Jouko

    2011-09-01

    This study analysed the teaching of nurses' codes of ethics in basic nursing education in Finland. A total of 183 educators and 214 students responded to a structured questionnaire. The data were analysed with SPSS. Teaching of nurses' codes was rather extensive, and the nurse-patient relationship was highlighted. Educators assessed their teaching as statistically significantly more extensive than students perceived it to be. The use of teaching and evaluation methods was conventional, but differences between the groups concerning the use of these methods were statistically significant. Students' knowledge of, and ability to apply, the codes was mediocre. Most educators and students assessed educators' knowledge of the codes as adequate for teaching; these educators also taught the codes more extensively, and their students perceived the teaching as more extensive. Otherwise, educators' and students' socio-demographic variables had little association with the teaching. Research should focus on the organization and effectiveness of ethics education, and on educators' competence.

  18. Project JADE. Comparison of repository systems. Executive summary of results

    Energy Technology Data Exchange (ETDEWEB)

    Sandstedt, H. [Scandiaconsult Sverige AB, Stockholm (Sweden); Pers, K.; Birgersson, Lars [Kemakta Konsult AB, Stockholm (Sweden); Ageskog, L. [SWECO VBB VIAK AB, Stockholm (Sweden); Munier, R. [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)

    2001-12-01

    KBS-3 has been the reference method for disposal of spent fuel in Sweden since 1984. Several other methods, such as WP-Cave, Very Deep Holes and Very Long Holes, have been evaluated and compared with KBS-3. Though these methods have been judged to have a high safety potential, KBS-3 has been shown to provide advantages in the combined judgement of 'long-term performance and safety', 'technology' and 'costs'. In the present study, different variants of the KBS-3 method have been analysed and compared with the reference concept KBS-3 V (V for vertical). The variants are KBS-3 H (H for horizontal) and MLH (medium long holes), with canisters in a horizontal position, singly or in a row respectively. The comparison has been carried out separately for the interim items 'technology', 'long-term performance and safety' and 'costs'. The outcomes of these comparisons have finally been combined in a ranking, which placed KBS-3 V at the top, followed by MLH and KBS-3 H. Vertical deposition of a single canister in one deposition hole, KBS-3 V, is robust, as gravity is used for lowering the canister and the bentonite into the deposition hole, and since each canister has its own barrier in the near field, which reduces the risk of interference between canisters. The drawback of MLH is the uncertainty about the emplacement technique, as well as the impact of weak rock and water leakage into a long deposition hole holding several canisters. The advantage is that a smaller volume of rock has to be excavated, which is positive regarding long-term performance and safety, environmental impact and costs. KBS-3 H does not have the same positive potential. The conclusion of the JADE study is that KBS-3 V should remain the reference concept, and that MLH should be studied further with the aim of clarifying the technical feasibility of emplacement and the means of handling water inflow. It is recommended that KBS

  19. Comparison of protein coding gene contents of the fungal phyla Pezizomycotina and Saccharomycotina

    Directory of Open Access Journals (Sweden)

    Ussery David

    2007-09-01

    Abstract. Background: Several dozen fungi, encompassing traditional model organisms, industrial production organisms, and human and plant pathogens, have been sequenced recently and their particular genomic features analysed in detail. In addition, comparative genomics has been used to analyse specific subgroups of fungi. Notably, analysis of the phylum Saccharomycotina has revealed major evolutionary events such as the recent genome duplication and subsequent gene loss. However, little has been done to gain a comprehensive comparative view of the fungal kingdom. We have carried out a computational genome-wide comparison of the protein coding gene content of Saccharomycotina and Pezizomycotina, which include industrially important yeasts and filamentous fungi, respectively. Results: Our analysis shows that, based on genome redundancy, the traditional model organisms Saccharomyces cerevisiae and Neurospora crassa are exceptional among fungi. This can be explained by the recent genome duplication in S. cerevisiae and the repeat-induced point mutation mechanism in N. crassa. Interestingly, in Pezizomycotina a subset of protein families related to plant biomass degradation and secondary metabolism are the only ones showing signs of recent expansion. In addition, Pezizomycotina have a wealth of phylum-specific, poorly characterised genes with a wide variety of predicted functions. These genes are well conserved in Pezizomycotina but show no signs of recent expansion. The genes found in all fungi except Saccharomycotina are slightly better characterised and predicted to encode mainly enzymes. The genes specific to Saccharomycotina are enriched in transcription- and mitochondrion-related functions. Especially the mitochondrial ribosomal proteins seem to have diverged from those of Pezizomycotina. In addition, we highlight several individual gene families with interesting phylogenetic distributions. Conclusion: Our analysis predicts that all Pezizomycotina unlike

  20. Magnetotelluric 3-D inversion—a review of two successful workshops on forward and inversion code testing and comparison

    Science.gov (United States)

    Miensopust, Marion P.; Queralt, Pilar; Jones, Alan G.; 3D MT modellers

    2013-06-01

    Over the last half decade the need for, and importance of, three-dimensional (3-D) modelling of magnetotelluric (MT) data have increased dramatically; various 3-D forward and inversion codes are in use, and some have become commonly available. Comparison of forward responses and inversion results is an important step for code testing and validation prior to 'production' use. The various codes use different mathematical approximations to the problem (finite differences, finite elements or integral equations), various orientations of the coordinate system, different sign conventions for the time dependence and various inversion strategies. Additionally, the obtained results are dependent on data analysis, selection and correction, as well as on the chosen mesh, inversion parameters and regularization adopted; therefore, careful and knowledge-based use of the codes is essential. In 2008 and 2011, during two workshops at the Dublin Institute for Advanced Studies, over 40 people from academia (scientists and students) and industry from around the world met to discuss 3-D MT inversion. These workshops brought together a mix of code writers as well as code users to assess the current status of 3-D modelling, to compare the results of different codes, and to discuss and think about future improvements and new aims in 3-D modelling. To test the numerical forward solutions, two 3-D models were designed to compare the responses obtained by different codes and/or users. Furthermore, inversion results of these two data sets and of two additional data sets obtained from unknown models (secret models) were also compared. In this manuscript the test models and data sets are described (supplementary files are available) and comparisons of the results are shown. Details regarding the data used, the forward and inversion parameters, as well as the computational power required are summarized for each case, and the main discussion points of the workshops are reviewed. In general, the responses

  1. Services provided by community pharmacies in Wayne County, Michigan: a comparison by ZIP code characteristics.

    Science.gov (United States)

    Erickson, Steven R; Workman, Paul

    2014-01-01

    To document the availability of selected pharmacy services and the out-of-pocket cost of medication throughout a diverse county in Michigan, and to assess possible associations between availability of services and price of medication and characteristics of residents of the ZIP codes in which the pharmacies were located. Cross-sectional telephone survey of pharmacies coupled with ZIP code-level census data. 503 pharmacies throughout the 63 ZIP codes of Wayne County, MI. The measures were the out-of-pocket cost for a 30 days' supply of levothyroxine 50 mcg and brand-name atorvastatin (Lipitor-Pfizer) 20 mg, availability of discount generic drug programs, home delivery of medications, hours of pharmacy operation, and availability of pharmacy-based immunization services. Census data aggregated at the ZIP code level included race, annual household income, age, and number of residents per pharmacy. The overall results per ZIP code showed that the average cost was $10.01 ± $2.29 for levothyroxine and $140.45 ± $14.70 for Lipitor. Per ZIP code, the mean (± SD) percentage of pharmacies offering discount generic drug programs was 66.9% ± 15.0%; home delivery of medications, 44.5% ± 22.7%; and influenza immunization, 46.7% ± 24.3%. The mean (± SD) hours of operation per pharmacy per ZIP code was 67.0 ± 25.2. ZIP codes with higher household income, as well as a higher percentage of white residents, had lower levothyroxine prices, a greater percentage of pharmacies offering discount generic drug programs, more hours of operation per week, and more pharmacy-based immunization services. The cost of Lipitor was not associated with any ZIP code characteristic. Disparities in the cost of generic levothyroxine and in the availability of services such as discount generic drug programs, hours of operation, and pharmacy-based immunization services are evident based on race and household income within this diverse metropolitan county.

  2. Sharing code

    OpenAIRE

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and the Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks features aimed specifically at researchers. In comparison, OSF offers a one-stop solution for researchers, but much of its functionality is still under development. I conclude by listing alternative, lesser-known tools for sharing code and materials.

  3. Projection on Proper elements for code control: Verification, numerical convergence, and reduced models. Application to plasma turbulence simulations

    Science.gov (United States)

    Cartier-Michaud, T.; Ghendrih, P.; Sarazin, Y.; Abiteboul, J.; Bufferand, H.; Dif-Pradalier, G.; Garbet, X.; Grandgirard, V.; Latu, G.; Norscini, C.; Passeron, C.; Tamain, P.

    2016-02-01

    Projection on Proper elements (PoPe) is a novel method of code control dedicated to (1) checking the correct implementation of models, (2) determining the convergence of numerical methods, and (3) characterizing the residual errors of any given solution, all at very low cost. The basic idea is to establish a bijection between a simulation and the set of equations that generates it. Recovering the equations is direct and relies on a statistical measure of the weight of the various operators. The method can be used in any number of dimensions and in any regime, including chaotic ones. It also provides a procedure for designing reduced models and quantifying their cost-benefit ratio. PoPe is applied to a kinetic code and a fluid code of plasma turbulence.
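
    The PoPe idea can be illustrated with a minimal sketch: generate data from a known 1D model equation and recover the operator weights by least-squares projection of the measured time derivative onto candidate operators. The model equation, grid sizes and coefficients below are invented for this example and are not taken from the paper.

```python
import numpy as np

# Toy PoPe-style identification: data are generated from
#   du/dt = c1 * d2u/dx2 + c2 * u
# and the weights c1, c2 are recovered by projecting the measured time
# derivative onto the candidate operators (least squares).
rng = np.random.default_rng(0)
nx, nt, dx, dt = 128, 400, 0.1, 1e-3
c1_true, c2_true = 0.5, -0.2

x = np.arange(nx) * dx
u = np.exp(-((x - nx * dx / 2) ** 2)) + 0.01 * rng.standard_normal(nx)

def d2dx2(f):
    # periodic second derivative, second-order finite differences
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

snapshots = [u.copy()]
for _ in range(nt):
    u = u + dt * (c1_true * d2dx2(u) + c2_true * u)  # explicit Euler step
    snapshots.append(u.copy())
U = np.array(snapshots)

# measured time derivative (centred differences) and candidate operators
dudt = ((U[2:] - U[:-2]) / (2 * dt)).ravel()
ops = np.column_stack([
    np.apply_along_axis(d2dx2, 1, U[1:-1]).ravel(),
    U[1:-1].ravel(),
])

(c1_est, c2_est), *_ = np.linalg.lstsq(ops, dudt, rcond=None)
print(round(c1_est, 2), round(c2_est, 2))  # recovers roughly 0.5 and -0.2
```

The weights of any operator absent from the true model would come out near zero, which is how the method flags implementation errors; the residual of the least-squares fit plays the role of PoPe's error characterization.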

  4. What Are Those Checkerboard Things?: How QR Codes Can Enrich Student Projects

    Science.gov (United States)

    Tucker, Al

    2011-01-01

    Students enrolled in commercial arts program design and publish their school's yearbook. For the 2010-2011 school year, the students applied Quick Response (QR) code technology to include links to events that occurred after the yearbook's print deadline, including graduation. The technology has many applications in the school setting, and the…

  5. Energy Efficiency Pilot Projects in Jaipur: Testing the Energy Conservation Building Code

    Energy Technology Data Exchange (ETDEWEB)

    Evans, Meredydd; Mathur, Jyotirmay; Yu, Sha

    2014-03-26

    The Malaviya National Institute of Technology (MNIT) in Jaipur, India is constructing two new buildings on its campus that allow it to test implementation of the Energy Conservation Building Code (ECBC), which Rajasthan made mandatory in 2011. PNNL has been working with MNIT to document progress on ECBC implementation in these buildings.

  6. Postprocessing of interframe coded images based on convex projection and regularization

    Science.gov (United States)

    Joung, Shichang; Kim, Sungjin; Paik, Joon-Ki

    2000-04-01

    In order to reduce blocking artifacts in inter-frame coded images, we propose a new image restoration algorithm that directly processes differential images before reconstruction. We note that blocking artifacts in inter-frame coded images are caused by both the 8 x 8 DCT and 16 x 16 macroblock-based motion compensation, while those of intra-coded images are caused by the 8 x 8 DCT only. Based on this observation, we propose a new degradation model for differential images and a corresponding restoration algorithm that utilizes additional constraints and convex sets for discontinuities inside blocks. The proposed restoration algorithm is a modified version of standard regularization that incorporates spatially adaptive lowpass filtering, taking edge directions into account by utilizing a subset of the DCT coefficients. Most video coding standards adopt a hybrid structure of block-based motion compensation and the block discrete cosine transform (BDCT). For this reason, blocking artifacts occur both on block boundaries and in block interiors. For more complete removal of both kinds of blocking artifacts, the restored differential image must satisfy two constraints, covering directional discontinuities on block boundaries and in block interiors. These constraints are used to define convex sets for restoring differential images.
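
    A minimal 1D sketch of the convex-projection idea is given below. It is illustrative only: the block size, quantization step, and the simple boundary-averaging smoothness projection are assumptions for this example, not the paper's operators. The sketch alternates a smoothness projection at 8-sample block boundaries with a projection onto the quantization-constraint set of each block's DCT coefficients.

```python
import numpy as np

B, Q = 8, 8.0  # assumed block size and quantization step

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are basis vectors)."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= np.sqrt(1.0 / n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

D = dct_matrix(B)

def block_dct(x):   # forward DCT applied to each 8-sample block
    return (D @ x.reshape(-1, B).T).T.reshape(-1)

def block_idct(c):  # inverse DCT applied to each 8-sample block
    return (D.T @ c.reshape(-1, B).T).T.reshape(-1)

rng = np.random.default_rng(1)
signal = np.cumsum(rng.standard_normal(64))   # smooth-ish test signal
q = np.round(block_dct(signal) / Q)           # quantized DCT coefficients
decoded = block_idct(q * Q)                   # blocky decoded signal

x = decoded.copy()
for _ in range(10):
    # (1) smoothness projection: remove jumps at block boundaries
    for i in range(B, len(x), B):
        x[i - 1] = x[i] = 0.5 * (x[i - 1] + x[i])
    # (2) quantization constraint: coefficients stay in [(q-0.5)Q, (q+0.5)Q]
    c = np.clip(block_dct(x), (q - 0.5) * Q, (q + 0.5) * Q)
    x = block_idct(c)

jump = lambda s: np.abs(np.diff(s)[B - 1::B]).mean()
print(jump(x) < jump(decoded))  # boundary discontinuities are reduced
```

The quantization-constraint projection is what keeps the smoothed signal consistent with the transmitted bitstream, which is the essential property of POCS-style postprocessing.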

  7. Comparison study of EMG signals compression by methods transform using vector quantization, SPIHT and arithmetic coding.

    Science.gov (United States)

    Ntsama, Eloundou Pascal; Colince, Welba; Ele, Pierre

    2016-01-01

    In this article we present a comparative study of a new compression approach based on the discrete cosine transform (DCT) and the discrete wavelet transform (DWT), seeking the transform best suited to vector quantization for compressing EMG signals. To do this, we first combined vector quantization with the DCT, and then vector quantization with the DWT. The coding phase uses SPIHT (set partitioning in hierarchical trees) coding combined with arithmetic coding. The method is demonstrated and evaluated on actual EMG data. Objective performance evaluation metrics are presented: compression factor, percentage root-mean-square difference, and signal-to-noise ratio. The results show that the DWT-based method is more efficient than the DCT-based method.
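
    The objective metrics named above are easy to state precisely. The sketch below computes them for a synthetic signal and a stand-in reconstruction; the "compression" is faked by adding small noise, since the point here is the metric definitions, not the codec itself.

```python
import numpy as np

def prd(x, y):
    """Percentage root-mean-square difference between original x and reconstruction y."""
    return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

def snr_db(x, y):
    """Signal-to-noise ratio of the reconstruction, in dB."""
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - y) ** 2))

def compression_factor(original_bits, compressed_bits):
    """Compression factor: original size over compressed size."""
    return original_bits / compressed_bits

rng = np.random.default_rng(2)
emg = rng.standard_normal(1000)                  # synthetic stand-in signal
recon = emg + 0.05 * rng.standard_normal(1000)   # stand-in reconstruction

print(round(prd(emg, recon), 1), round(snr_db(emg, recon), 1))
# PRD of roughly 5% and SNR of roughly 26 dB for 5% additive noise
```

Note the inverse relationship: SNR in dB equals 20·log10(100/PRD), so a lower PRD directly implies a higher SNR.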

  8. Polar Codes

    Science.gov (United States)

    2014-12-01

    Polar codes were introduced by E. Arikan in [1]. This report describes the results of the project "More reliable wireless..." and covers forward error correction (FEC) and the capacity of the binary symmetric channel (BSC), the AWGN channel, and QPSK Gaussian channels. Released under authority of C. A. Wilgenbusch, Head, ISR Division.

  9. Numerical Prediction of the Performance of Integrated Planar Solid-Oxide Fuel Cells, with Comparisons of Results from Several Codes

    Energy Technology Data Exchange (ETDEWEB)

    G. L. Hawkes; J. E. O'Brien; B. A. Haberman; A. J. Marquis; C. M. Baca; D. Tripepi; P. Costamagna

    2008-06-01

    A numerical study of the thermal and electrochemical performance of a single-tube Integrated Planar Solid Oxide Fuel Cell (IP-SOFC) has been performed. Results obtained from two finite-volume computational fluid dynamics (CFD) codes, FLUENT and SOHAB, and from a two-dimensional in-house finite-volume GENOA model are presented and compared. Each tool uses physical and geometric models of differing complexity, and comparisons are made to assess their relative merits. Several single-tube simulations were run using each code over a range of operating conditions. The results include polarization curves and distributions of local current density, composition and temperature. Comparisons of these results are discussed, along with their relationship to the respective embedded phenomenological models for activation losses, fluid flow and mass transport in porous media. In general, agreement between the codes was within 15% for overall parameters such as operating voltage and maximum temperature. The CFD results clearly show the effects of internal structure on the distributions of gas flows and related quantities within the electrochemical cells.

  10. A Comparison of Athletic Movement Among Talent-Identified Juniors From Different Football Codes in Australia: Implications for Talent Development.

    Science.gov (United States)

    Woods, Carl T; Keller, Brad S; McKeown, Ian; Robertson, Sam

    2016-09-01

    Woods, CT, Keller, BS, McKeown, I, and Robertson, S. A comparison of athletic movement among talent-identified juniors from different football codes in Australia: implications for talent development. J Strength Cond Res 30(9): 2440-2445, 2016. This study aimed to compare the athletic movement skill of talent-identified (TID) junior Australian Rules football (ARF) and soccer players. The athletic movement skill of 17 TID junior ARF players (17.5-18.3 years) was compared against that of 17 TID junior soccer players (17.9-18.7 years). Players in both groups were members of an elite junior talent development program within their respective football codes. All players performed an athletic movement assessment that included an overhead squat, a double lunge, a single-leg Romanian deadlift (both movements performed on right and left legs), a push-up, and a chin-up. Each movement was scored across 3 essential assessment criteria using a 3-point scale. The total score for each movement (maximum of 9) and the overall total score (maximum of 63) were used as the criterion variables for analysis. A multivariate analysis of variance tested the main effect of football code (2 levels) on the criterion variables, and a 1-way analysis of variance identified where differences occurred. A significant effect was noted, with the TID junior ARF players outscoring their soccer counterparts on the overhead squat and push-up. No other criteria differed significantly according to the main effect. Practitioners should be aware that specific sporting requirements may produce slight differences in athletic movement skill among TID juniors from different football codes. However, given the low athletic movement skill noted in both football codes, developmental coaches should address the underlying movement skill capabilities of juniors when prescribing physical training in both codes.
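
    The one-way ANOVA step reported above can be computed by hand from group sums of squares. The sketch below uses synthetic stand-in scores (not the study's data), keeping only the group sizes of n = 17 per code from the abstract.

```python
import numpy as np

# Hedged sketch of a one-way ANOVA F statistic for two groups.
# Scores are synthetic; only n = 17 per group matches the abstract.
rng = np.random.default_rng(7)
arf = rng.normal(48, 5, 17)     # hypothetical ARF total scores (max 63)
soccer = rng.normal(43, 5, 17)  # hypothetical soccer total scores

groups = [arf, soccer]
grand_mean = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_between = len(groups) - 1                       # 2 codes -> 1
df_within = sum(len(g) for g in groups) - len(groups)  # 34 - 2 -> 32
F = (ss_between / df_between) / (ss_within / df_within)
print(df_between, df_within, round(F, 2))
```

With two groups the F statistic is simply the square of the two-sample t statistic, which is why a follow-up ANOVA can localize a MANOVA effect criterion by criterion.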

  11. Wavelet Kernels on a DSP: A Comparison between Lifting and Filter Banks for Image Coding

    Science.gov (United States)

    Gnavi, Stefano; Penna, Barbara; Grangetto, Marco; Magli, Enrico; Olmo, Gabriella

    2002-12-01

    We develop wavelet engines on a digital signal processor (DSP) platform, the target application being image and intraframe video compression by means of the forthcoming JPEG2000 and Motion-JPEG2000 standards. We describe two implementations, based on the lifting scheme and the filter bank scheme, respectively, and we present experimental results on code profiling. In particular, we address the following problems: (1) evaluating the execution speed of a wavelet engine on a modern DSP; (2) comparing the actual execution speed of the lifting scheme and the filter bank scheme with the theoretical results; (3) using the on-board direct memory access (DMA) to optimize the execution speed where possible. The results make it possible to assess the performance of a modern DSP in the image coding task, as well as to compare the lifting and filter bank performance in a realistic application scenario. Finally, guidelines for optimizing code efficiency are provided by investigating the possible use of the on-board DMA.
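
    The lifting-versus-filter-bank comparison can be made concrete with the simplest wavelet. Below, one level of the Haar transform is computed both ways; this is a generic sketch (not the JPEG2000 filters profiled in the paper), showing that lifting reaches the same subbands with cheaper, in-place arithmetic.

```python
import numpy as np

# One level of the Haar wavelet computed two ways: as a two-channel
# filter bank and as a lifting scheme (split, predict, update, scale).
x = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0])

# Filter-bank form: filter with lowpass/highpass pairs, then downsample
lo = (x[0::2] + x[1::2]) / np.sqrt(2)
hi = (x[0::2] - x[1::2]) / np.sqrt(2)

# Lifting form: operates in place on the even/odd polyphase components
even, odd = x[0::2].copy(), x[1::2].copy()
odd = odd - even              # predict: detail = odd - even
even = even + odd / 2         # update: approximation = mean of the pair
lo_lift = even * np.sqrt(2)   # scaling to match the orthonormal filters
hi_lift = -odd / np.sqrt(2)

print(np.allclose(lo, lo_lift), np.allclose(hi, hi_lift))  # True True
```

The lifting factorization halves the number of multiplications for longer filters and allows in-place computation, which is exactly the property the DSP profiling in the paper measures.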

  12. Nonlinear Simulation of Alfven Eigenmodes driven by Energetic Particles: Comparison between HMGC and TAEFL Codes

    Science.gov (United States)

    Bierwage, Andreas; Spong, Donald A.

    2009-05-01

    The Hybrid MHD-Gyrokinetic Code (HMGC) [1] and the gyrofluid code TAEFL [2,3] are used for nonlinear simulation of Alfven eigenmodes in tokamak plasmas. We compare results obtained in two cases: (I) a case designed for a cross-code benchmark of TAE excitation; (II) a case based on dedicated DIII-D shot #132707, where RSAE and TAE activity is observed. Differences between the numerical simulation results are discussed and future directions are outlined. [1] S. Briguglio, G. Vlad, F. Zonca and C. Kar, Phys. Plasmas 2 (1995) 3711. [2] D.A. Spong, B.A. Carreras and C.L. Hedrick, Phys. Fluids B 4 (1992) 3316. [3] D.A. Spong, B.A. Carreras and C.L. Hedrick, Phys. Plasmas 1 (1994) 1503.

  13. A comparison of cosmological Boltzmann codes: are we ready for high precision cosmology?

    CERN Document Server

    Seljak, U; White, M; Zaldarriaga, M

    2003-01-01

    We compare three independent cosmological linear perturbation theory codes to assess the level of agreement between them and to improve upon it by investigating the sources of discrepancy. By eliminating the major sources of numerical instability, the final level of agreement between the codes was improved by an order of magnitude. The relative error is now below 0.1% for the dark matter power spectrum. For the cosmic microwave background anisotropies the agreement is below the sampling variance up to l=3000, with close to 0.1% accuracy reached over most of this range of scales. The same level of agreement is also achieved for the polarization spectrum and the temperature-polarization cross-spectrum. Linear perturbation theory codes are thus well prepared for the present and upcoming high-precision cosmological observations.
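
    The headline figure of such comparisons, sub-0.1% relative error between codes, corresponds to a simple pointwise metric. The sketch below applies it to two synthetic spectra; the functional forms and the 0.05% wiggle are invented for illustration, not outputs of the compared codes.

```python
import numpy as np

# Pointwise relative error between power spectra from two codes.
k = np.logspace(-3, 1, 200)                       # wavenumbers (synthetic)
pk_code_a = 1e4 * k / (1 + (k / 0.02) ** 3)       # toy spectrum, code A
pk_code_b = pk_code_a * (1 + 5e-4 * np.sin(10 * np.log(k)))  # code B: 0.05% wiggle

rel_err = np.abs(pk_code_b / pk_code_a - 1)
print(rel_err.max() < 1e-3)  # agreement at the 0.1% level
```

Quoting the maximum of this ratio over the full range of scales, rather than an average, is the stricter convention used when claiming percent-level code agreement.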

  14. Comparison of Codes and Neutronics Data Used in the United States and Russia for the TOPAZ-2 Nuclear Safety Assessment

    Science.gov (United States)

    Glushkov, Y. S.; Ponomarov-Stepnoy, N. N.; Kompaniets, G. V.; Gomin, Y. A.; Mayorov, L. V.; Lobyntsev, V. A.; Polyakov, D. N.; Sapir, Joe; Pelowitz, Denise; Streetman, J. Robert

    1994-07-01

    The TOPAZ-2 reactor system is a heterogeneous epithermal system fueled with highly enriched uranium oxide fuel, cooled by a sodium-potassium liquid metal (NaK), using a zirconium hydride moderator, with 37 thermionic fuel elements (TFEs) built into the core. The core is surrounded by a radial beryllium reflector which contains rotating regulating drums with moderating segments. An important problem is guaranteeing nuclear safety in the event of the TOPAZ-2 reactor accidentally falling into water, which increases the reactivity of the reactor. It has turned out to be necessary to use the Monte Carlo method for neutronics calculations of such a complex reactor. In the United States (U.S.) and Russia, different codes based on the Monte Carlo method are used for these calculations: the MCNP code in the U.S., and the MCU-2 code in Russia. The goal of this work is a comparison of the codes and neutronics data used in the U.S. and Russia as the basis of the TOPAZ-2 nuclear safety assessment. With this goal, a joint computer benchmark model of the TOPAZ-2 reactor was developed, and calculations of a series of variants, differing in the presence or absence of water in the reactor cavities and behind the radial reflector, in the position of the regulating drums, in the presence of the radial reflector, etc., were done independently by specialists in both the U.S. and Russia. Along with the reactor calculations, calculations were also done of the cells of the core using the MCNP code (U.S.) and the MCU-2 code (Russia). The work made it possible to compare the MCNP and MCU-2 codes, which gave somewhat different results both for the absolute values of Keff and for reactivity effects. A detailed analysis of the reasons for the discrepancies remains to be conducted; for this it is necessary to exchange the neutronics data used for TOPAZ-2 reactor calculations in the U.S. and Russia.
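
    Differences in Keff between two codes are conventionally expressed as a reactivity difference in pcm (per cent mille, 1e-5). The sketch below shows the arithmetic with invented Keff values; the report quoted above does not give numerical results here.

```python
# Comparing Keff from two codes as a reactivity difference in pcm.
# The Keff values are illustrative, not taken from the TOPAZ-2 report.

def reactivity(k_eff):
    """Reactivity rho = (k - 1) / k."""
    return (k_eff - 1.0) / k_eff

k_code_a, k_code_b = 1.00512, 1.00374  # hypothetical MCNP / MCU-2 style values
delta_pcm = (reactivity(k_code_a) - reactivity(k_code_b)) * 1e5
print(round(delta_pcm, 1))  # about 137 pcm for this invented pair
```

Working in reactivity rather than raw Keff makes discrepancies directly comparable to reactivity effects such as water flooding or drum rotation.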

  15. Accuracy comparison among different machine learning techniques for detecting malicious codes

    Science.gov (United States)

    Narang, Komal

    2016-03-01

    In this paper, a machine learning based model for malware detection is proposed. It can detect newly released malware, i.e. zero-day attacks, by analyzing operation codes on the Android operating system. The accuracy of Naïve Bayes, Support Vector Machine (SVM) and neural network classifiers for detecting malicious code has been compared for the proposed model. In the experiment, 400 benign files, 100 system files and 500 malicious files were used to construct the model. The model yields its best accuracy, 88.9%, when a neural network is used as the classifier, achieving 95% sensitivity and 82.8% specificity.
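
    The reported figures can be reproduced from a confusion matrix. The counts below are chosen to be consistent with the abstract's 500 malicious and 500 non-malicious (benign plus system) files and its reported rates, but they are illustrative, not taken from the paper.

```python
# Accuracy, sensitivity (recall on the malicious class) and specificity
# (recall on the benign class) from a confusion matrix. Counts are
# invented to match the abstract's reported rates.

def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # malicious files correctly flagged
    specificity = tn / (tn + fp)   # benign files correctly passed
    return accuracy, sensitivity, specificity

acc, sens, spec = metrics(tp=475, tn=414, fp=86, fn=25)
print(round(acc, 3), round(sens, 3), round(spec, 3))  # 0.889 0.95 0.828
```

Reporting sensitivity and specificity alongside accuracy matters here because, with a balanced 500/500 split, accuracy alone would hide the asymmetry between missed malware and false alarms.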

  16. A comparison of shuttle vernier engine plume contamination with CONTAM 3.4 code predictions

    Science.gov (United States)

    Maag, Carl R.; Jones, Thomas M.; Rao, Shankar M.; Linder, W. Kelly

    1992-01-01

    In 1985, using the CONTAM 3.2 code, it was predicted that the shuttle Primary Reaction Control System (PRCS) and Vernier Reaction Control System (VRCS) engines could be potential contamination sources for sensitive surfaces located within the shuttle payload bay. Spaceflight test data on these engines are quite limited. Shuttle mission STS-32, the Long Duration Exposure Facility retrieval mission, was instrumented with an experiment that provided the design engineer with evidence that contaminant species from the VRCS engines can enter the payload bay. More recently, the latest version of the analysis code, CONTAM 3.4, has been used to re-examine the contamination potential of these engines.

  17. [Between law and psychiatry: homosexuality in the project of the Swiss penal code (1918)].

    Science.gov (United States)

    Delessert, Thierry

    2005-01-01

    In 1942 the Swiss penal code depenalised homosexual acts between consenting adults under certain conditions. The genesis of the penal article shows that it was drafted before the First World War and bears the marks of the forensic theories of the turn of the century. Both through direct contacts and through the authority of its eminent figures, Swiss psychiatry exerted an unquestionable influence on the depenalisation. The conceptualisation of homosexuality was also strongly influenced by German psychiatric theories and was discussed in reference to Germanic law. Through the penal article, Swiss lawyers and psychiatrists linked the homosexual question with the determination of the irresponsibility of criminal mental patients and with degeneracy.

  18. CATHARE-3: A new system code for thermal-hydraulics in the context of the NEPTUNE project

    Energy Technology Data Exchange (ETDEWEB)

    Emonot, P., E-mail: philippe.emonot@cea.fr [CEA DEN/DER/SSTH, 17 rue des Martyrs, 38054 Grenoble Cedex 9 (France); Souyri, A., E-mail: annick.souyri@edf.fr [EDF R and D/MFEE, 6 Quai Watier, 78401 Chatou Cedex (France); Gandrille, J.L., E-mail: jeanluc.gandrille@areva.com [AREVA-NP, Tour Areva, 92084 Paris La Defense Cedex (France); Barre, F., E-mail: francois.barre@irsn.fr [IRSN DPAM, BP 3, 13115 Saint-Paul-Lez-Durance Cedex (France)

    2011-11-15

    After a thorough analysis of industrial needs and of the limitations of current simulation tools, EDF and CEA (Commissariat a l'Energie Atomique) launched the NEPTUNE Project in 2001 with the support of AREVA-NP and IRSN. The NEPTUNE activities include software development, research in physical modeling and numerical methods, development of advanced instrumentation techniques, and new experimental programs. Four different simulation scales were addressed: DNS (Direct Numerical Simulation), CFD in open medium (Computational Fluid Dynamics), component (subchannel-type analysis) and system (reactor modeling) scales. In 2006 CEA, EDF, AREVA-NP and IRSN defined the strategy for the system scale of NEPTUNE and the CATHARE-3 development was launched. The main objectives are: advanced physical modeling of two-phase flows, mainly by using multi-field and turbulence models; improved 3D modeling through the use of fine and non-conforming structured meshes; generalized coupling capabilities with other thermal-hydraulic scales and with other disciplines (core physics, structural mechanics, ...); extension of the applicability to new Gen IV reactors (sodium-cooled fast breeder reactors, gas-cooled reactors, supercritical light water reactors); and a true object-oriented code architecture. At the same time, CATHARE-3 is in continuity with the CATHARE-2 code, which is the current industrial version of CATHARE, used internationally for nuclear power plant safety analysis, in simulators and in coupled simulation tools. The road map of these two codes will allow a smooth transition from CATHARE-2 to CATHARE-3 for all users. This paper gives an overview of the choices made for the development of CATHARE-3, including new physical models, validation strategy and experimental programs, numerical improvements, enhanced coupling capability and software architecture evolution. The current status of the project as well as the

  19. [The Austrian penal code and the Codex Ur-nammu--a comparison from the forensic medicine viewpoint].

    Science.gov (United States)

    Feenstra, O; Roll, P; Seybold, I

    1991-01-01

    A comparison between the Mesopotamian law (Codex Ur-nammu) and the Austrian Penal Code reveals the far-sightedness of the founder of the 3rd dynasty of Ur, called Ur-nammu. It is remarkable that in those remote times (3rd millennium B.C.) bodily injuries were already settled by simple monetary penalties. The Codex Ur-nammu therefore represents an exceptional piece of work not only from the historical point of view but also from that of legal medicine.

  20. Test results of a 40-kW Stirling engine and comparison with the NASA Lewis computer code predictions

    Science.gov (United States)

    Allen, David J.; Cairelli, James E.

    1988-01-01

    A Stirling engine was tested without auxiliaries at NASA Lewis. Three different regenerator configurations were tested with hydrogen. The test objectives were: (1) to obtain steady-state and dynamic engine data, including indicated power, for validation of an existing computer model for this engine; and (2) to evaluate structurally the use of silicon carbide regenerators. This paper presents comparisons of the measured brake performance, indicated mean effective pressure, and cyclic pressure variations with those predicted by the code. The silicon carbide foam regenerators appear to be structurally suitable, but the foam matrix showed severely reduced performance.

  1. Developing Qualitative Coding Frameworks for Educational Research: Immigration, Education and the Children Crossing Borders Project

    Science.gov (United States)

    Adair, Jennifer Keys; Pastori, Giulia

    2011-01-01

    The Children Crossing Borders (CCB) study is a polyvocal, multi-sited project on immigration and early childhood education and care in five countries: Italy, Germany, France, England and the USA. The complicated nature of the data pushed us as a group to expand our methodological resources to not only organize the data but also to make it…

  2. Bringing Fenton Hill into the Digital Age: Data Conversion in Support of the Geothermal Technologies Office Code Comparison Study Challenge Problems

    Energy Technology Data Exchange (ETDEWEB)

    White, Signe K.; Kelkar, Sharad M.; Brown, Don W.

    2016-03-01

    The Geothermal Technologies Office Code Comparison Study (GTO-CCS) was established by the U.S. Department of Energy to facilitate collaboration among members of the geothermal modeling community and to evaluate and improve upon the ability of existing codes to simulate thermal, hydrological, mechanical, and chemical processes associated with complex enhanced geothermal systems (EGS). The first stage of the project, which has been completed, involved comparing simulations for seven benchmark problems that were primarily designed using well-prescribed, simplified data sets. In the second stage, the participating teams are tackling two challenge problems based on the EGS research conducted in hot dry rock (HDR) at Fenton Hill, near Los Alamos, New Mexico. The Fenton Hill project, conducted by Los Alamos National Laboratory (LANL) from 1970 to 1995, was the world’s first HDR demonstration project. One of the criteria for selecting this experiment as the basis for the challenge problems was the amount and availability of data for generating model inputs. The Fenton Hill HDR system consisted of two reservoirs – an earlier Phase I reservoir tested from 1974 to 1981 and a deeper Phase II reservoir tested from 1980 to 1995. Detailed accounts of both phases of the HDR project have been presented in a number of books and reports, including a recently published summary of the lessons learned and a final report with a chronological description of the Fenton Hill project, prepared by LANL. Project documents and records have been archived and made public through the National Geothermal Data System (NGDS). Some of the data acquired from Phase II are available in electronic format readable on modern computers. These include the microseismic data from some of the important experiments (e.g. the massive hydraulic fracturing test conducted in 1983) and the injection/production wellhead data from the circulation tests conducted between 1992 and 1995. However, much of the data collected

  3. [Scar or recurrence--comparison of MRI and color-coded ultrasound with echo signal amplifiers].

    Science.gov (United States)

    Aichinger, U; Schulz-Wendtland, R; Krämer, S; Lell, M; Bautz, W

    2002-11-01

    MRI is the most reliable method to differentiate scar and recurrent carcinoma of the breast after surgical treatment. This study compares MRI and color-coded ultrasound with and without echo signal amplifier (ESA). Forty-two patients with suspected recurrent tumors were enrolled in this prospective study, 38 patients after breast-conserving therapy and 4 after mastectomy. All patients had a clinical examination, mammography (n = 38), real-time ultrasound (US), color-coded ultrasound without and with ESA (Levovist(R), Schering, Berlin), and dynamic MRI. The criteria used for duplex ultrasound were tumor vascularisation and flow pattern. The results were compared with histologic findings or the results of follow-up examinations over at least 12 months. The detection of penetrating or central vessels proved to be an accurate sign of malignancy in duplex ultrasound. With the application of ESA, additional vessels were detected within the lesions, increasing the diagnostic accuracy (83 % with ESA versus 79 % without ESA). The sensitivity of color-coded ultrasound improved from 64 % to 86 % with echo signal amplifier. The specificity was 86 % without and 82 % with echo signal amplifier. MRI was found to have a sensitivity of 100 % and a specificity of 82 %. The same 5 lesions were false positive on MRI and on color-coded US after Levovist(R). No lesion without signs of vascularity within or in its vicinity was malignant. Color-coded ultrasound seems to be a promising method for the differentiation between scar and recurrence. Lesions with penetrating or central vessels have a high probability of being malignant, whereas lesions without any signs of vascularity inside or nearby have a high probability of being benign. An advantage of contrast-enhanced US is its ubiquitous availability.

  4. A Comparison of Natural Language Processing Methods for Automated Coding of Motivational Interviewing.

    Science.gov (United States)

    Tanana, Michael; Hallgren, Kevin A; Imel, Zac E; Atkins, David C; Srikumar, Vivek

    2016-06-01

    Motivational interviewing (MI) is an efficacious treatment for substance use disorders and other problem behaviors. Studies on MI fidelity and mechanisms of change typically use human raters to code therapy sessions, which requires considerable time, training, and financial costs. Natural language processing techniques have recently been utilized for coding MI sessions using machine learning techniques, rather than human coders, and preliminary results have suggested these methods hold promise. The current study extends this previous work by introducing two natural language processing models for automatically coding MI sessions via computer. The two models differ in the way they semantically represent session content, utilizing either (1) simple discrete sentence features (the DSF model) or (2) more complex recursive neural networks (the RNN model). Utterance- and session-level predictions from these models were compared to ratings provided by human coders using a large sample of MI sessions (N=341 sessions; 78,977 clinician and client talk turns) from 6 MI studies. Results show that the DSF model generally had slightly better performance than the RNN model. The DSF model had "good" or higher utterance-level agreement with human coders (Cohen's kappa>0.60) for open and closed questions, affirm, giving information, and follow/neutral (all therapist codes); considerably higher agreement was obtained for session-level indices, and many estimates were competitive with human-to-human agreement. However, there was poor agreement for client change talk, client sustain talk, and therapist MI-inconsistent behaviors. Natural language processing methods provide accurate representations of human-derived behavioral codes and could offer substantial improvements to the efficiency and scale at which MI mechanisms-of-change research and fidelity monitoring are conducted.
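    The utterance-level agreement statistic used above, Cohen's kappa, is standard and easy to sketch. The implementation below is generic, not the study's actual evaluation pipeline, and the utterance labels are made up for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if raters labeled independently at their marginal rates
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical human vs. machine codes for six utterances
human   = ["open_q", "affirm", "open_q", "closed_q", "affirm", "open_q"]
machine = ["open_q", "affirm", "closed_q", "closed_q", "affirm", "open_q"]
print(round(cohens_kappa(human, machine), 3))  # → 0.75
```

    A kappa of 0.75 here sits above the paper's "good" threshold of 0.60 despite one of six utterances disagreeing, because kappa discounts the agreement expected by chance.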

  5. Comparison of different methods used in integral codes to model coagulation of aerosols

    Science.gov (United States)

    Beketov, A. I.; Sorokin, A. A.; Alipchenkov, V. M.; Mosunova, N. A.

    2013-09-01

    The methods for calculating coagulation of particles in the carrying phase that are used in the integral codes SOCRAT, ASTEC, and MELCOR, as well as the Hounslow and Jacobson methods used to model aerosol processes in the chemical industry and in atmospheric investigations are compared on test problems and against experimental results in terms of their effectiveness and accuracy. It is shown that all methods are characterized by a significant error in modeling the distribution function for micrometer particles if calculations are performed using rather "coarse" spectra of particle sizes, namely, when the ratio of the volumes of particles from neighboring fractions is equal to or greater than two. With reference to the problems considered, the Hounslow method and the method applied in the aerosol module used in the ASTEC code are the most efficient ones for carrying out calculations.
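    The kind of check such comparisons rest on can be reproduced in miniature: for a size-independent (constant) coagulation kernel K, the total number density obeys the analytic law N(t) = N0 / (1 + K·N0·t/2), against which a discrete Smoluchowski solver can be verified. The sketch below uses an explicit Euler step and illustrative parameter values; it is not taken from any of the codes compared:

```python
def coagulation_step(n, K, dt):
    """One explicit-Euler step of the discrete Smoluchowski equation with a
    constant kernel K; n[k] is the number density of clusters of size k+1."""
    M = len(n)
    total = sum(n)
    new = []
    for k in range(M):
        # Gain: coalescence of sizes (i+1) + (k-i) = k+1
        gain = 0.5 * sum(n[i] * n[k - 1 - i] for i in range(k))
        # Loss: collisions of size k+1 with any other cluster
        loss = n[k] * total
        new.append(n[k] + dt * K * (gain - loss))
    return new

K, N0, dt, steps = 1.0e-3, 1.0e3, 1.0e-4, 1000
n = [0.0] * 64
n[0] = N0  # monodisperse initial condition
for _ in range(steps):
    n = coagulation_step(n, K, dt)

t = steps * dt
analytic = N0 / (1.0 + 0.5 * K * N0 * t)
print(sum(n), analytic)
```

    The numerical total should track the analytic decay closely here; the grid-coarseness error discussed in the abstract appears only once the linear size grid is replaced by a geometric one with volume ratios of two or more between neighboring sections.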

  6. A Radiation-Hydrodynamics Code Comparison for Laser-Produced Plasmas: FLASH versus HYDRA and the Results of Validation Experiments

    CERN Document Server

    Orban, Chris; Chawla, Sugreev; Wilks, Scott C; Lamb, Donald Q

    2013-01-01

    The potential for laser-produced plasmas to yield fundamental insights into high energy density physics (HEDP) and deliver other useful applications can sometimes be frustrated by uncertainties in modeling the properties and expansion of these plasmas using radiation-hydrodynamics codes. In an effort to overcome this and to corroborate the accuracy of the HEDP capabilities recently added to the publicly available FLASH radiation-hydrodynamics code, we present detailed comparisons of FLASH results to new and previously published results from the HYDRA code used extensively at Lawrence Livermore National Laboratory. We focus on two very different problems of interest: (1) an Aluminum slab irradiated by 15.3 and 76.7 mJ of "pre-pulse" laser energy and (2) a mm-long triangular groove cut in an Aluminum target irradiated by a rectangular laser beam. Because this latter problem bears a resemblance to astrophysical jets, Grava et al., Phys. Rev. E, 78, (2008) performed this experiment and compared detailed x-ray int...

  7. Elastic-plastic analysis of the PVRC burst disk tests with comparison to the ASME code -- Primary stress limits

    Energy Technology Data Exchange (ETDEWEB)

    Jones, D.P.; Holliday, J.E.

    1999-02-01

    This paper provides a comparison between finite element analysis results and test data from the Pressure Vessel Research Council (PVRC) burst disk program. Testing sponsored by the PVRC over 20 years ago was done by pressurizing circular flat disks made from three different materials until failure by bursting. The purpose of this re-analysis is to investigate the use of finite element analysis (FEA) to assess the primary stress limits of the ASME Boiler and Pressure Vessel Code (1998) and to qualify the use of elastic-plastic (EP-FEA) for limit load calculations. The three materials tested represent the range of strength and ductility found in modern pressure vessel construction and include a low strength high ductility material, a medium strength medium ductility material, and a high strength low ductility low alloy material. Results of elastic and EP-FEA are compared to test data. Stresses from the elastic analyses are linearized for comparison of Code primary stress limits to test results. Elastic-plastic analyses are done using both best-estimate and elastic-perfectly plastic (EPP) stress-strain curves. Both large strain-large displacement (LSLD) and small strain-small displacement (SSSD) assumptions are used with the EP-FEA. Analysis results are compared to test results to evaluate the various analysis methods, models, and assumptions as applied to the bursting of thin disks.

  8. A multigroup radiation diffusion test problem: Comparison of code results with analytic solution

    Energy Technology Data Exchange (ETDEWEB)

    Shestakov, A I; Harte, J A; Bolstad, J H; Offner, S R

    2006-12-21

    We consider a 1D, slab-symmetric test problem for the multigroup radiation diffusion and matter energy balance equations. The test simulates diffusion of energy from a hot central region. Opacities vary with the cube of the frequency and radiation emission is given by a Wien spectrum. We compare results from two LLNL codes, Raptor and Lasnex, with tabular data that define the analytic solution.

  9. Applications of the 3-dim ICRH global wave code FISIC and comparison with other models

    Energy Technology Data Exchange (ETDEWEB)

    Kruecken, T.; Brambilla, M. (Association Euratom-Max-Planck-Institut fuer Plasmaphysik, Garching (Germany, F.R.))

    1989-02-01

    Numerical simulations of two ICRF heating experiments in ASDEX are presented, using the FISIC code to solve the integrodifferential wave equations in the finite Larmor radius (FLR) approximation, and comparing the results with those of ray-tracing models. The different models show good agreement on the whole; we can, however, identify a few interesting toroidal effects, in particular on the efficiency of mode conversion and on the propagation of ion Bernstein waves. (author).

  10. Microscopic Diffusion in Stellar Evolution Codes: First Comparison results of ESTA-Task 3

    CERN Document Server

    Lebreton, Y; Christensen-Dalsgaard, J; Théado, S; Hui-Bon-Hoa, A; Monteiro, M J P F G; Degl'Innocenti, S; Marconi, M; Morel, P; Moroni, P G P; Weiss, A

    2007-01-01

    We present recent work undertaken by the Evolution and Seismic Tools Activity (ESTA) team of the CoRoT Seismology Working Group. The new ESTA-Task 3 aims at testing, comparing and optimising stellar evolution codes which include microscopic diffusion of the chemical elements resulting from pressure, temperature and concentration gradients. The results already obtained are globally satisfactory, but some differences between the different numerical tools appear that require further investigations.

  11. Comparison of thick-target (alpha,n) yield calculation codes

    Science.gov (United States)

    Fernandes, Ana C.; Kling, Andreas; Vlaskin, Gennadiy N.

    2017-09-01

    Neutron production yields and energy distributions from (α,n) reactions in light elements were calculated using three different codes (SOURCES, NEDIS and USD) and compared with the existing experimental data in the 3.5-10 MeV alpha energy range. SOURCES and NEDIS display an agreement between calculated and measured yields in the decay series of 235U, 238U and 232Th within ±10% for most materials. The discrepancy increases with alpha energy, but an agreement of ±20% still applies to all materials with reliable elemental production yields (the few exceptions are identified). The calculated neutron energy distributions describe the experimental data, with NEDIS retrieving the detailed features very well. USD generally underestimates the measured yields, in particular for compounds with heavy elements and/or at high alpha energies. The energy distributions exhibit sharp peaks that do not match the observations. These findings may be caused by a poor accounting of the alpha particle energy loss by the code. A large variability was found among the calculated neutron production yields for alphas from Sm decay; the lack of yield measurements for low-energy (<2 MeV) alphas does not allow conclusions on the codes' accuracy in this energy region.

  12. Wavelet Kernels on a DSP: A Comparison between Lifting and Filter Banks for Image Coding

    Directory of Open Access Journals (Sweden)

    Gnavi Stefano

    2002-01-01

    We develop wavelet engines on a digital signal processor (DSP) platform, the target application being image and intraframe video compression by means of the forthcoming JPEG2000 and Motion-JPEG2000 standards. We describe two implementations, based on the lifting scheme and the filter bank scheme, respectively, and we present experimental results on code profiling. In particular, we address the following problems: (1) evaluating the execution speed of a wavelet engine on a modern DSP; (2) comparing the actual execution speed of the lifting scheme and the filter bank scheme with the theoretical results; (3) using the on-board direct memory access (DMA) to possibly optimize the execution speed. The results allow us to assess the performance of a modern DSP in the image coding task, as well as to compare the lifting and filter bank performance in a realistic application scenario. Finally, guidelines for optimizing code efficiency are provided by investigating the possible use of the on-board DMA.
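    The two schemes being compared can be illustrated with the simplest wavelet, the Haar transform, computed once as a two-tap filter bank (filter and downsample) and once as a lifting factorization (split, predict, update). This is a toy sketch with an unnormalized Haar pair, not the JPEG2000 5/3 or 9/7 kernels; both forms must produce identical subbands:

```python
def haar_filter_bank(x):
    """Filter-bank form: 2-tap lowpass/highpass filters followed by downsampling."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def haar_lifting(x):
    """Lifting form: split into even/odd samples, predict odd from even, update even."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(even, odd)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    # Rescale so both factorizations produce the same subbands
    detail = [-d / 2 for d in detail]
    return approx, detail

x = [4.0, 2.0, 5.0, 5.0, 7.0, 1.0, 0.0, 2.0]
print(haar_filter_bank(x))
print(haar_lifting(x))
```

    The speed argument for lifting is visible even here: the lifting form reuses the predicted differences when updating the evens, roughly halving the arithmetic relative to running two independent filters.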

  13. Comparison of MPEG-2 and AVC coding on synthetic test materials

    Science.gov (United States)

    Fenimore, Charles; Roberts, John

    2006-08-01

    The resources for evaluation of moving imagery coding include a variety of subjective and objective methods for quality measurement. These are applied to a variety of imagery, ranging from synthetically-generated to live capture. NIST has created a family of synthetic motion imagery (MI) materials providing image elements such as moving spirals, blocks, text, and spinning wheels. Through the addition of a colored noise background, the materials support the generation of graded levels of MI coding impairments such as image blocking and mosquito noise, impairments that are found in imagery coded with Motion Pictures Expert Group (MPEG) and similar codecs. For typical available synthetic imagery, human viewers respond unfavorably to repeated viewings; so in this case, the use of objective (computed) metrics for evaluation of quality is preferred. Three such quality metrics are described: a standard peak-signal-to-noise measure, a new metric of edge-blurring, and another of added-edge-energy. As applied to the NIST synthetic clips, the metrics confirm an approximate doubling [1] of compression efficiency between two commercial codecs, one an implementation of AVC/H.264 and the other of MPEG-2.
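    The first of the three metrics, peak signal-to-noise ratio, is standard and can be sketched directly (an 8-bit peak of 255 is assumed; the edge-blurring and added-edge-energy metrics were NIST-specific and are not reproduced here):

```python
import math

def psnr(reference, degraded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images,
    flattened here to 1-D sequences of pixel values for simplicity."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, degraded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak * peak / mse)

# Hypothetical 4-pixel reference and degraded signals
ref = [50, 100, 150, 200]
deg = [52, 98, 149, 205]
print(round(psnr(ref, deg), 2))
```

    A doubling of compression efficiency, as reported for AVC/H.264 over MPEG-2, means reaching the same PSNR (and edge-metric scores) at roughly half the bitrate.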

  14. Multi-Variety Code-Switching in Conversation 903 of the Køge Project

    Directory of Open Access Journals (Sweden)

    Jens Normann Jørgensen

    2004-01-01

    This article documents some of the ways in which the languages, or varieties, are taken into possession by the young speakers and made their own. It is illustrated how they play with language, in particular switches between codes, both as contributions to social negotiations and as pure performance. The material comes from a group conversation between four male bilingual students in the last grade of the Danish public school system. The young people have Turkish as their mother tongue, and Danish is their L2. By grade 9, they have had several years of experience with English, and almost all of the students have had two years of German. The conversation is a part of the Køge material (see Turan 1999). The four boys were asked to create a collage or a picture series with free post cards and glue them on a large piece of cardboard. The theme of the collage was to be “My worst nightmare”. The conversation lasts about half an hour, and all four boys participate actively. The conversation has been transcribed according to the CHILDES conventions (MacWhinney 1995), but the transcripts have been simplified slightly for the excerpts given in the article. In the excerpts, Turkish is italicized. Lines beginning with %eng give translations into English; lines beginning with %com give background information or comments on the transcript.

  15. Comparison of non-coding RNAs in human and canine cancer

    Directory of Open Access Journals (Sweden)

    Siegfried eWagner

    2013-04-01

    The discovery of post-transcriptional gene silencing by small non-protein-coding RNAs is considered a major breakthrough in biology. In the last decade we have only started to realize the biological function and complexity of gene regulation by small non-coding RNAs. Post-transcriptional gene silencing (PTGS) is a conserved phenomenon which has been observed in various species such as fungi, worms, plants and mammals. Micro RNAs (miRNAs) and small interfering RNAs (siRNAs) are two gene silencing mediators constituting an evolutionarily conserved class of non-coding RNAs regulating many biological processes in eukaryotes. As these small RNAs appear to regulate gene expression at the translational and transcriptional level, it is not surprising that during the last decade many human diseases, among them Alzheimer's disease, cardiovascular diseases and various cancer types, were associated with deregulated miRNA expression. Consequently, small RNAs are considered to hold great promise as therapeutic agents. However, despite the enormous therapeutic potential, many questions remain unanswered. A major critical point when evaluating novel therapeutic approaches is the transfer of in vitro settings to an in vivo model. Classical animal models rely on laboratory-kept animals under artificial conditions, often lacking an intact immune system. Model organisms with spontaneously occurring tumors, such as dogs, provide the possibility to evaluate therapeutic agents under the surveillance of an intact immune system, thereby providing an authentic tumor-reaction scenario. Considering the genomic similarity between canines and humans and the advantages of the dog as a cancer model system for human neoplasias, analyses of the complex role of small RNAs in canine tumor development could be of major value for both species.
    Herein we discuss comparatively the role of miRNAs in human and canine cancer development and highlight the potential and advantages of the model

  16. Comparison of metrics obtained with analytic perturbation theory and a numerical code

    CERN Document Server

    Cuchí, Javier E; Ruiz, Eduardo

    2012-01-01

    We compare metrics obtained through analytic perturbation theory with their numerical counterparts. The analytic solutions are computed with the CMMR post-Minkowskian and slow-rotation approximation due to Cabezas et al. (2007) for an asymptotically flat stationary spacetime containing a rotating perfect fluid compact source. The same spacetime is studied with the AKM numerical multi-domain spectral code (Ansorg et al., 2002, 2003). We then study their differences inside the source, near infinity and on the matching surface, or equivalently, the global character of the analytic perturbation scheme.

  17. Integrating model of the Project Independence Evaluation System. Volume V. Code documentation

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, M L; Allen, B J; Lutz, M S; Gale, J E; O'Hara, N E; Wood, R K

    1978-07-01

    This volume is a description of the Project Independence Evaluation System as a computer system. It is intended for readers wanting a basic understanding of the computer implementation of PIES rather than an understanding of the modeling methodology. It can assist those who wish to run PIES on the EIA computer facility or to use PIES on their own facilities, or to analyze the PIES computer processing. The document contains: an overview of the computer implementation; a description of the data and naming conventions used in PIES; a functional description of PIES data processing; PIES hardware and software requirements; and an operational description of the PIES processing flow. This overview defines the scope of PIES in this report and thus governs the computer system descriptions that follow. It also provides an historical view of the development of PIES.

  18. Comparison of Scientific Research Projects of Education Faculties

    Science.gov (United States)

    Altunay, Esen; Tonbul, Yilmaz

    2015-01-01

    Many studies indicate that knowledge and knowledge production are the main predictors of social development, welfare and the ability to face the future with confidence. It could be argued that knowledge production is mainly carried out by universities. This study compares 1266 scientific research projects (SRPs) completed by faculties of education…

  20. A 2D and 3D Code Comparison of Turbulent Mixing in Spherical Implosions

    Science.gov (United States)

    Flaig, Markus; Thornber, Ben; Grieves, Brian; Youngs, David; Williams, Robin; Clark, Dan; Weber, Chris

    2016-10-01

    Turbulent mixing due to Richtmyer-Meshkov and Rayleigh-Taylor instabilities has proven to be a major obstacle on the way to achieving ignition in inertial confinement fusion (ICF) implosions. Numerical simulations are an important tool for understanding the mixing process; however, the results of such simulations depend on the choice of grid geometry and the numerical scheme used. In order to clarify this issue, we compare the simulation codes FLASH, TURMOIL, HYDRA, MIRANDA and FLAMENCO for the problem of the growth of single- and multi-mode perturbations on the inner interface of a dense imploding shell. We consider two setups: a single-shock setup with a convergence ratio of 4, as well as a higher-convergence multi-shock setup that mimics a typical NIF mixcap experiment. We employ both single-mode and ICF-like broadband perturbations. We find good agreement between all codes concerning the evolution of the mix layer width; however, there are differences in the small-scale mixing. We also develop a Bell-Plesset model that is able to predict the mix layer width and find excellent agreement with the simulation results. This work was supported by resources provided by the Pawsey Supercomputing Centre with funding from the Australian Government.

  1. Comparison of methods for auto-coding causation of injury narratives.

    Science.gov (United States)

    Bertke, S J; Meyers, A R; Wurzelbacher, S J; Measure, A; Lampl, M P; Robins, D

    2016-03-01

    Manually reading free-text narratives in large databases to identify the cause of an injury can be very time consuming, and recently there has been much work in automating this process. In particular, variations of the naïve Bayes model have been used to successfully auto-code free-text narratives describing the event/exposure leading to the injury of a workers' compensation claim. This paper compares the naïve Bayes model with an alternative logistic model and finds that the logistic model outperforms the naïve Bayes model. Further modest improvements were found through the addition of sequences of keywords in the models, as opposed to consideration of only single keywords. The programs and weights used in this paper are available upon request to researchers without a training set who wish to automatically assign event codes to large data sets of text narratives. The utility of sharing this program was tested on an outside set of injury narratives provided by the Bureau of Labor Statistics, with promising results.
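    A minimal version of the naïve Bayes baseline (a multinomial model over bag-of-words features with add-one smoothing) can be sketched as follows. The narratives and event codes are invented for illustration, and a real system would add the keyword sequences (n-grams) the paper recommends:

```python
import math
from collections import Counter, defaultdict

def train_nb(narratives, labels):
    """Fit a multinomial naive Bayes model: per-class word counts plus priors."""
    word_counts = defaultdict(Counter)
    label_counts = Counter(labels)
    vocab = set()
    for text, label in zip(narratives, labels):
        words = text.lower().split()
        word_counts[label].update(words)
        vocab.update(words)
    return word_counts, label_counts, vocab

def predict_nb(model, text):
    """Return the event code with the highest smoothed log posterior."""
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, -math.inf
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            # Add-one (Laplace) smoothed word likelihood
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical injury narratives with event codes
texts = ["slipped on wet floor", "fell from ladder", "cut hand on blade",
         "slipped on ice outside", "laceration from box cutter"]
codes = ["fall", "fall", "cut", "fall", "cut"]
model = train_nb(texts, codes)
print(predict_nb(model, "worker slipped on oily floor"))  # → fall
```

    The logistic alternative the paper favors replaces these fixed count-based likelihoods with discriminatively learned per-keyword weights.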

  2. Material properties of Grade 91 steel at elevated temperature and their comparison with a design code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyeong Yeon; Kim, Woo Gon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Lee, Han Sang; Kim, Yun Jae [Korea Univ., Seoul (Korea, Republic of)

    2013-10-15

    In this study, the material properties of tensile strength, creep properties, and creep crack growth model for Gr.91 steel at elevated temperature were obtained from material tests at KAERI, and the test data were compared with those of the French elevated temperature design code, RCC-MRx. The conservatism of the material properties in the French design code is highlighted. Mod.9Cr-1Mo (ASME Grade 91; Gr.91) steel is widely adopted as candidate material for Generation IV nuclear systems as well as for advanced thermal plants. In a Gen IV sodium-cooled fast reactor of the PGSFR (Prototype Gen IV Sodium-cooled Fast Reactor) being developed by KAERI (Korea Atomic Energy Research Institute), Gr.91 steel is selected as the material for the steam generator, secondary piping, and decay heat exchangers. However, as this material has a relatively shorter history of usage in an actual plant than austenitic stainless steel, there are still many issues to be addressed including the long-term creep rupture life extrapolation and ratcheting behavior with cyclic softening characteristics.

  3. Code of Conduct for wind-power projects - Phases 1 and 2; Code of Conduct fuer windkraftprojekte. Phase 1 und 2 - Systemanalyse, Lessons Learned und Bewertung bestehender Instrumente

    Energy Technology Data Exchange (ETDEWEB)

    Strub, P. [Pierre Strub, freischaffender Berater, Binningen (Switzerland); Ziegler, Ch. [Inter Act, Basel (Switzerland)

    2008-08-15

    This paper discusses the results of the first two phases of a project concerning wind-power projects. The paper deals with the results of a system analysis, takes a look at lessons learned and presents an appraisal of existing instruments. A system analysis of wind-power projects is presented with emphasis on social factors and the role of stakeholders. The success factors concerning social acceptance of wind-power projects and their special characteristics are discussed, and lessons learned are examined. Instruments for the sustainable implementation of projects are looked at, in particular with a focus on social acceptance.

  4. Are Psychotherapeutic Changes Predictable? Comparison of a Chicago Counseling Center Project with a Penn Psychotherapy Project.

    Science.gov (United States)

    Luborsky, Lester; And Others

    1979-01-01

    Compared studies predicting outcomes of psychotherapy. Level of prediction success in both projects was modest. Particularly for the rated benefits score, the profile of variables showed similar levels of success between the projects. Successful predictions were based on adequacy of personality functioning, match on marital status, and length of…

  5. Application of Projection Pursuit Evaluation Model Based on Real-Coded Accelerating Genetic Algorithm in Evaluating Wetland Soil Quality Variations in the Sanjiang Plain,China

    Institute of Scientific and Technical Information of China (English)

    FU QIANG; XIE YONGGANG; WEI ZIMIN

    2003-01-01

    A new technique of dimension reduction named projection pursuit is applied to model and evaluate wetland soil quality variations in the Sanjiang Plain, Heilongjiang Province, China. By adopting the improved real-coded accelerating genetic algorithm (RAGA), the projection direction is optimized and multi-dimensional indexes are converted into a low-dimensional space. Classification of wetland soils and evaluation of wetland soil quality variations are realized by pursuing the optimum projection direction and projection function value. By adopting this new method, possible human interference can be avoided and sound results can be achieved in researching quality changes and classification of wetland soils.
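    The core of projection pursuit, searching for a unit direction whose one-dimensional projection maximizes a projection index, can be sketched with a seeded random search standing in for the accelerating genetic algorithm (RAGA itself encodes candidate directions as real-valued chromosomes and evolves them). The data and the dispersion index below are illustrative only:

```python
import math
import random

def project(data, direction):
    """Project each multi-dimensional sample onto a unit direction."""
    return [sum(x * d for x, d in zip(row, direction)) for row in data]

def index_std(z):
    """A simple projection index: standard deviation of the projected values."""
    m = sum(z) / len(z)
    return math.sqrt(sum((v - m) ** 2 for v in z) / len(z))

def pursue(data, trials=2000, seed=42):
    """Random search over unit directions (a stand-in for RAGA optimization)."""
    rng = random.Random(seed)
    dim = len(data[0])
    best_dir, best_idx = None, -1.0
    for _ in range(trials):
        d = [rng.gauss(0, 1) for _ in range(dim)]
        norm = math.sqrt(sum(v * v for v in d))
        d = [v / norm for v in d]  # keep candidates on the unit sphere
        idx = index_std(project(data, d))
        if idx > best_idx:
            best_dir, best_idx = d, idx
    return best_dir, best_idx

# Toy 3-D "soil indicator" data with variation concentrated along the first axis
data = [[i, 0.1 * i, 0.01 * i] for i in range(10)]
direction, value = pursue(data)
print(direction, value)
```

    The recovered direction should align closely with the first axis, where the toy data's variation lives; ranking samples by their projection value is then what supports the classification step described in the abstract.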

  6. Comparison of IAEA TECDOC 717 technical basis with consensus codes and standards

    Energy Technology Data Exchange (ETDEWEB)

    Nickell, R.E. [Applied Science and Technology, Inc., Poway, CA (United States); Saegusa, T.; Ito, C. [Central Research Inst. of Electric Power Industry, Abiko, Chiba (Japan). Civil Engineering Lab.; Sorenson, K.B. [Sandia National Labs., Albuquerque, NM (United States)

    1995-09-01

    The original IAEA TECDOC 717 prepared at the consensus Consultants Service Meetings contained specific guidance with respect to the application of linear-elastic fracture mechanics principles to the evaluation of potential non-ductile failure for radioactive material shipping package containment boundaries. No specific guidance was provided with respect to elastic-plastic fracture mechanics procedures, due to a lack of consensus. This paper proposes that the inclusion of three alternative elastic-plastic evaluation approaches may provide the basis for consensus guidance to be added to a revised TECDOC. These three alternatives have been incorporated into consensus ASME Code non-mandatory appendices, and are widely accepted in combination. One of the three alternatives, an applied J-integral/crack resistance curve approach, is examined in some detail. (author).

  7. Comparison of Geant4-DNA simulation of S-values with other Monte Carlo codes

    Energy Technology Data Exchange (ETDEWEB)

    André, T. [Université Bordeaux 1, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Morini, F. [Research Group of Theoretical Chemistry and Molecular Modelling, Hasselt University, Agoralaan Gebouw D, B-3590 Diepenbeek (Belgium); Karamitros, M. [Université Bordeaux 1, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, INCIA, UMR 5287, F-33400 Talence (France); Delorme, R. [LPSC, Université Joseph Fourier Grenoble 1, CNRS/IN2P3, Grenoble INP, 38026 Grenoble (France); CEA, LIST, F-91191 Gif-sur-Yvette (France); Le Loirec, C. [CEA, LIST, F-91191 Gif-sur-Yvette (France); Campos, L. [Departamento de Física, Universidade Federal de Sergipe, São Cristóvão (Brazil); Champion, C. [Université Bordeaux 1, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Groetz, J.-E.; Fromm, M. [Université de Franche-Comté, Laboratoire Chrono-Environnement, UMR CNRS 6249, Besançon (France); Bordage, M.-C. [Laboratoire Plasmas et Conversion d’Énergie, UMR 5213 CNRS-INPT-UPS, Université Paul Sabatier, Toulouse (France); Perrot, Y. [Laboratoire de Physique Corpusculaire, UMR 6533, Aubière (France); Barberet, Ph. [Université Bordeaux 1, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); and others

    2014-01-15

    Monte Carlo simulations of S-values have been carried out with the Geant4-DNA extension of the Geant4 toolkit. The S-values have been simulated for monoenergetic electrons with energies ranging from 0.1 keV up to 20 keV, in liquid water spheres (for four radii, chosen between 10 nm and 1 μm), and for electrons emitted by five isotopes of iodine (131, 132, 133, 134 and 135), in liquid water spheres of varying radius (from 15 μm up to 250 μm). The results have been compared to those obtained from other Monte Carlo codes and from other published data. The use of the Kolmogorov–Smirnov test allowed us to confirm the statistical compatibility of all simulation results.
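    The compatibility check mentioned above can be sketched with a minimal two-sample Kolmogorov–Smirnov statistic (the maximum distance between empirical CDFs); the S-value lists below are hypothetical placeholders, not data from the paper.

```python
def ks_two_sample(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical distance
    between the two empirical cumulative distribution functions."""
    a, b = sorted(a), sorted(b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        v = min(a[i], b[j])
        while i < na and a[i] == v:   # step both ECDFs past ties together
            i += 1
        while j < nb and b[j] == v:
            j += 1
        d = max(d, abs(i / na - j / nb))
    return d

# Hypothetical normalized S-values from two codes for the same sphere radii
code_a = [0.98, 0.99, 1.01, 1.02, 1.05]
code_b = [0.97, 1.00, 1.00, 1.02, 1.06]
print(ks_two_sample(code_a, code_b))  # → 0.2
```

    A small statistic relative to the KS critical value at the chosen significance level is what "statistical compatibility" means operationally.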

  8. Comparison of 2006 IECC and 2009 IECC Commercial Energy Code Requirements for Kansas City, MO

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Yunzhi; Gowri, Krishnan

    2011-03-22

    This report summarizes code requirements and energy savings of commercial buildings in climate zone 4 built to the 2009 IECC when compared to the 2006 IECC. In general, the 2009 IECC has higher insulation requirements for exterior walls, roofs, and windows, and higher efficiency requirements for HVAC equipment (HVAC equipment efficiency requirements are governed by the National Appliance Energy Conservation Act of 1987 (NAECA), and are applicable irrespective of the IECC version adopted). The energy analysis results show that residential and nonresidential commercial buildings meeting the 2009 IECC requirements save between 6.1% and 9.0% site energy, and between 6.4% and 7.7% energy cost, when compared to the 2006 IECC. The analysis also shows that semiheated buildings have energy and cost savings of 3.9% and 5.6%, respectively.

  9. A comparison of approaches for finding minimum identifying codes on graphs

    Science.gov (United States)

    Horan, Victoria; Adachi, Steve; Bak, Stanley

    2016-05-01

    In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard, and their computational complexity makes a standard brute-force approach on a typical computer impractical. One sample problem explored is that of finding a minimum identifying code. To work around the computational issues, a variety of methods are explored: a parallel computing approach using MATLAB, an adiabatic quantum optimization approach using a D-Wave quantum annealing processor, and lastly satisfiability modulo theories (SMT) and corresponding SMT solvers. Each of these methods requires the problem to be formulated in a unique manner. In this paper, we address the challenges of computing solutions to this NP-hard problem with respect to each of these methods.
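    For intuition, a minimum identifying code can be found by plain brute force on very small graphs (the methods surveyed above exist precisely because this does not scale); the 4-cycle below is an illustrative example, not an instance from the paper.

```python
from itertools import combinations

def min_identifying_code(adj):
    """Smallest vertex set C such that every closed neighbourhood N[v] meets C
    in a nonempty set, and these sets are pairwise distinct (an identifying code)."""
    verts = sorted(adj)
    closed = {v: frozenset(adj[v]) | {v} for v in verts}
    for k in range(1, len(verts) + 1):
        for cand in combinations(verts, k):
            c = set(cand)
            ids = [closed[v] & c for v in verts]
            if all(ids) and len(set(ids)) == len(verts):
                return c
    return None  # twin vertices present: no identifying code exists

# A 4-cycle 0-1-2-3-0; its minimum identifying code has size 3
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(min_identifying_code(cycle4))  # → {0, 1, 2}
```

    The exponential sweep over all vertex subsets is exactly the cost that the parallel, quantum-annealing and SMT formulations try to sidestep.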

  10. Parametric studies of radiolytic oxidation of iodide solutions with and without paint: comparison with code calculations

    Energy Technology Data Exchange (ETDEWEB)

    Poletiko, C.; Hueber, C. [Inst. de Protection et de Surete Nucleaire, C.E. Cadarache, St. Paul-lez-Durance (France); Fabre, B. [CISI, C.E. Cadarache, St. Paul-lez-Durance (France)

    1996-12-01

    In the case of a severe nuclear accident, radioactive material may be released into the environment. Among the fission products involved are the highly volatile iodine isotopes. However, their chemical forms are not well known, owing to the presence in the containment of different species with which iodine may rapidly react to form aerosols, molecular iodine, hydroiodic acid and iodo-organics. Tentative explanations of the different mechanisms were explored through bench-scale tests. A series of tests was performed at AEA Harwell (GB) to study parameters such as pH, dose rate, concentration, gas flow rate and temperature in relation to molecular iodine production under dynamic conditions. Another set of tests was performed at AECL Whiteshell (CA) to study the behaviour of painted coupons, standing in the gas phase, the liquid phase or both, with iodine compounds under radiation. The purpose of our paper is to synthesize the data and compare the results to IODE code calculations. Some parameters of the code were studied to best fit the experimental results. A law for the reverse reaction of iodide radiolytic oxidation has been proposed as a function of pH, concentration and gas flow rate. This law does not apply for dose rate variations. For the study of painted coupons, it has been pointed out that molecular iodine tends to be adsorbed or chemically absorbed on the surface in the gas phase, but the mechanism should be more sophisticated in the aqueous phase. The iodo-organics present in the liquid phase tend to be partly or totally destroyed by oxidation under radiation (depending upon the dose delivered). These points are discussed. (author) 18 figs., 3 tabs., 15 refs.

  11. Helium ions at the heidelberg ion beam therapy center: comparisons between FLUKA Monte Carlo code predictions and dosimetric measurements

    Science.gov (United States)

    Tessonnier, T.; Mairani, A.; Brons, S.; Sala, P.; Cerutti, F.; Ferrari, A.; Haberer, T.; Debus, J.; Parodi, K.

    2017-08-01

    In the field of particle therapy, helium ion beams could offer an alternative for radiotherapy treatments, owing to their interesting physical and biological properties, intermediate between protons and carbon ions. We present in this work the comparisons and validations of the Monte Carlo FLUKA code against in-depth dosimetric measurements acquired at the Heidelberg Ion Beam Therapy Center (HIT). Depth dose distributions in water with and without ripple filter, lateral profiles at different depths in water and a spread-out Bragg peak were investigated. After experimentally-driven tuning of the less known initial beam characteristics in vacuum (beam lateral size and momentum spread) and simulation parameters (water ionization potential), comparisons of depth dose distributions were performed between simulations and measurements, which showed overall good agreement, with range differences below 0.1 mm and dose-weighted average dose-differences below 2.3% throughout the entire energy range. Comparisons of lateral dose profiles showed differences in full-width-half-maximum lower than 0.7 mm. Measurements of the spread-out Bragg peak indicated differences with simulations below 1% in the high dose regions and 3% in all other regions, with a range difference less than 0.5 mm. Despite the promising results, some discrepancies between simulations and measurements were observed, particularly at high energies. These differences were attributed to an underestimation of dose contributions from secondary particles at large angles, as seen in a triple Gaussian parametrization of the lateral profiles along the depth. However, the results allowed us to validate FLUKA simulations against measurements, confirming its suitability for 4He ion beam modeling in preparation of clinical establishment at HIT. Future activities building on this work will include treatment plan comparisons using validated biological models between proton and helium ions, either within a Monte Carlo

  12. Photon attenuation coefficients of Heavy-Metal Oxide glasses by MCNP code, XCOM program and experimental data: A comparison study

    Science.gov (United States)

    El-Khayatt, A. M.; Ali, A. M.; Singh, Vishwanath P.

    2014-01-01

    The mass attenuation coefficients, μ/ρ, total interaction cross-sections, σt, and mean free paths (MFP) of some Heavy Metal Oxide (HMO) glasses, with potential applications as gamma ray shielding materials, have been investigated using the MCNP-4C code. Appreciable variations are noted for all parameters on changing the photon energy and the chemical composition of the HMO glasses. The numerically simulated parameters are compared with experimental data wherever possible. Comparisons are also made with predictions from the XCOM program in the energy region from 1 keV to 100 MeV. The good agreement observed indicates that the chosen Monte Carlo method may be employed for additional calculations on the photon attenuation characteristics of different glass systems, a capability particularly useful in cases where no analogous experimental data exist.
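    The quantities compared in the abstract are related by simple formulas: the linear attenuation coefficient is μ = (μ/ρ)·ρ and the mean free path is MFP = 1/μ; an effective total cross-section per formula unit follows from σ = (μ/ρ)·M/N_A. A sketch with hypothetical glass parameters (not values from the paper):

```python
AVOGADRO = 6.02214076e23  # mol^-1

def attenuation_summary(mass_att_cm2_per_g, density_g_per_cm3, molar_mass_g_per_mol):
    """Derive the linear attenuation coefficient, mean free path and an
    effective total cross-section from the mass attenuation coefficient."""
    mu = mass_att_cm2_per_g * density_g_per_cm3                      # cm^-1
    mfp = 1.0 / mu                                                   # cm
    sigma_t = mass_att_cm2_per_g * molar_mass_g_per_mol / AVOGADRO   # cm^2
    return mu, mfp, sigma_t

# Hypothetical HMO glass: μ/ρ = 0.06 cm²/g at ~1 MeV, ρ = 5.5 g/cm³, M = 200 g/mol
mu, mfp, sigma = attenuation_summary(0.06, 5.5, 200.0)
print(f"mu = {mu:.3f} cm^-1, MFP = {mfp:.2f} cm, sigma_t = {sigma:.3e} cm^2")
```

    Tabulations such as XCOM report μ/ρ; the density of the particular glass composition is what converts that into a shielding-relevant mean free path.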

  13. Comparison Based Analysis of Different Cryptographic and Encryption Techniques Using Message Authentication Code (MAC) in Wireless Sensor Networks (WSN)

    CERN Document Server

    Rehman, Sadaqat Ur; Ahmad, Basharat; Yahya, Khawaja Muhammad; Ullah, Anees; Rehman, Obaid Ur

    2012-01-01

    Wireless Sensor Networks (WSN) are becoming popular day by day; however, one of the main issues in WSNs is their limited resources. These resources must be taken into account when creating a Message Authentication Code (MAC), keeping in mind the feasibility of the technique used for the sensor network at hand. This research work investigates different cryptographic techniques, such as symmetric key cryptography and asymmetric key cryptography. Furthermore, it compares different encryption techniques, such as stream ciphers (RC4), block ciphers (RC2, RC5, RC6, etc.) and hashing techniques (MD2, MD4, MD5, SHA, SHA1, etc.). The result of our work provides efficient techniques for communicating devices, selected using different comparison metrics, i.e. energy consumption, processing time, memory and expense, that satisfy both the security and the restricted resources of the WSN environment when creating a MAC.
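    As an illustration of constructing a MAC with different underlying hash functions, here is a hedged sketch using Python's standard hmac and hashlib modules (restricted to the digests hashlib provides; the key and message are made up):

```python
import hmac, hashlib, time

key = b"sensor-node-shared-key"            # hypothetical pre-shared key
message = b"temp=21.4;node=17;seq=0042"    # hypothetical sensor reading

# Compare tag size and computation time for several HMAC constructions.
for name in ("md5", "sha1", "sha256"):
    t0 = time.perf_counter()
    tag = hmac.new(key, message, getattr(hashlib, name)).hexdigest()
    dt = (time.perf_counter() - t0) * 1e6
    print(f"HMAC-{name.upper():6s} tag bytes: {len(tag) // 2:2d}  time: {dt:6.1f} us")

# A receiver verifies with a constant-time comparison to avoid timing leaks.
expected = hmac.new(key, message, hashlib.sha256).digest()
assert hmac.compare_digest(expected, hmac.new(key, message, hashlib.sha256).digest())
```

    Shorter tags (MD5: 16 bytes) cost less radio energy per packet than SHA-256 (32 bytes), which is exactly the security-versus-resources trade-off the survey weighs.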

  14. Comparison Based Analysis of Different Cryptographic and Encryption Techniques Using Message Authentication Code (MAC) in Wireless Sensor Networks (WSN)

    Directory of Open Access Journals (Sweden)

    Sadaqat Ur Rehman

    2012-01-01

    Full Text Available Wireless Sensor Networks (WSN) are becoming popular day by day; however, one of the main issues in WSNs is their limited resources. These resources must be taken into account when creating a Message Authentication Code (MAC), and a technique must be chosen that is feasible for sensor networks. This research work investigates different cryptographic techniques, such as symmetric key cryptography and asymmetric key cryptography; furthermore, it compares different encryption techniques, such as stream ciphers (RC4), block ciphers (RC2, RC5, RC6, etc.) and hashing techniques (MD2, MD4, MD5, SHA, SHA1, etc.). The result of our work provides efficient techniques for the communicator, selected using different comparison metrics, i.e. energy consumption, processing time, memory and expense, that satisfy both the security and the restricted resources of the WSN environment when creating a MAC.

  15. Comparison between two computer codes for PIXE studies applied to trace element analysis in amniotic fluid

    Science.gov (United States)

    Gertner, I.; Heber, O.; Zajfman, J.; Zajfman, D.; Rosner, B.

    1989-01-01

    Two different methods of analysis applicable for PIXE data are introduced and compared. In the first method Gaussian shaped peaks are fitted to the X-ray spectrum, and the complete analysis can be done on a microcomputer. The second is based on the Bayesian deconvolution method for simultaneous peak fitting and has to be carried out on a larger IBM computer. The advantage of the second method becomes evident for regions of poor statistics or where many overlapping peaks occur in the spectrum. The comparisons between the methods made on PIXE measurements obtained from 55 amniotic fluid samples gave satisfactory agreement.

  16. Study of X-ray photoionized Fe plasma and comparisons with astrophysical modeling codes

    Energy Technology Data Exchange (ETDEWEB)

    Foord, M E; Heeter, R F; Chung, H; vanHoof, P M; Bailey, J E; Cuneo, M E; Liedahl, D A; Fournier, K B; Jonauskas, V; Kisielius, R; Ramsbottom, C; Springer, P T; Keenan, K P; Rose, S J; Goldstein, W H

    2005-04-29

    The charge state distributions of Fe, Na and F are determined in a photoionized laboratory plasma using high resolution x-ray spectroscopy. Independent measurements of the density and radiation flux indicate the ionization parameter ζ in the plasma reaches values ζ = 20-25 erg cm s⁻¹ under near steady-state conditions. A curve-of-growth analysis, which includes the effects of velocity gradients in a one-dimensional expanding plasma, fits the observed line opacities. Absorption lines are tabulated in the wavelength region 8-17 Å. Initial comparisons with a number of astrophysical x-ray photoionization models show reasonable agreement.

  17. The Aspen--Amsterdam Void Finder Comparison Project

    CERN Document Server

    Colberg, Joerg M; Foster, Caroline; Platen, Erwin; Brunino, Riccardo; Neyrinck, Mark; Basilakos, Spyros; Fairall, Anthony; Feldman, Hume; Gottloeber, Stefan; Hahn, Oliver; Hoyle, Fiona; Mueller, Volker; Nelson, Lorne; Plionis, Manolis; Porciaini, Cristiano; Shandarin, Sergei; Vogeley, Michael S; van de Weygaert, Rien

    2008-01-01

    Despite a history that dates back at least a quarter of a century, studies of voids in the large-scale structure of the Universe are bedevilled by a major problem: there exist a large number of quite different void-finding algorithms, a fact that has so far prevented groups from comparing their results without worrying about whether such a comparison in fact makes sense. Because of the recent increased interest in voids, both in very large galaxy surveys and in detailed simulations of cosmic structure formation, this situation is very unfortunate. We here present the first systematic comparison study of thirteen different void finders constructed using particles, haloes, and semi-analytical model galaxies extracted from a subvolume of the Millennium simulation. The study includes many groups that have studied voids over the past decade. We show their results and discuss their differences and agreements. As it turns out, the basic results of the various methods agree very well with each other in that they...

  18. Increasing the Value of Research: A Comparison of the Literature on Critical Success Factors for Projects, IT Projects and Enterprise Resource Planning Projects

    Directory of Open Access Journals (Sweden)

    Annie Maddison Warren

    2016-11-01

    Full Text Available Since the beginning of modern project management in the 1960s, academic researchers have sought to identify a definitive list of Critical Success Factors (CSFs, the key things that project managers must get right in order to deliver a successful product. With the advent of Information Technology (IT projects and, more recently, projects to deliver Enterprise Resource Planning (ERP systems, attention has turned to identifying definitive lists of CSFs for these more specific project types. The purpose of this paper is to take stock of this research effort by examining how thinking about each type of project has evolved over time, before producing a consolidated list of CSFs for each as a basis for comparison. This process reveals a high degree of similarity, leading to the conclusion that the goal of identifying a generic list of CSFs for project management has been achieved. Therefore, rather than continuing to describe lists of CSFs, researchers could increase the value of their contribution by taking a step forward and focusing on why, despite this apparent knowledge of how to ensure their success, ERP projects continue to fail.

  19. Description of Transport Codes for Space Radiation Shielding

    Science.gov (United States)

    Kim, Myung-Hee Y.; Wilson, John W.; Cucinotta, Francis A.

    2011-01-01

    This slide presentation describes transport codes and their use for studying and designing space radiation shielding. When combined with risk projection models, radiation transport codes serve as the main tool for studying radiation and designing shielding. There are three criteria for assessing the accuracy of transport codes: (1) ground-based studies with defined beams and material layouts, (2) inter-comparison of transport code results for matched boundary conditions, and (3) comparisons to flight measurements. NASA's HZETRN/QMSFRG meets these three criteria to a very high degree.

  20. Comparison of different quantization strategies for subband coding of medical images

    Science.gov (United States)

    Castagno, Roberto; Lancini, Rosa C.; Egger, Olivier

    1996-04-01

    In this paper different methods for the quantization of wavelet transform coefficients are compared in view of medical imaging applications. The goal is to provide users with a comprehensive and application-oriented review of these techniques. The performance of four quantization methods (namely standard scalar quantization, embedded zerotree, variable dimension vector quantization and pyramid vector quantization) is compared with regard to their application in the field of medical imaging. In addition to the standard rate-distortion criterion, we took into account the possibility of bitrate control, the feasibility of real-time implementation, and the genericity (for use in non-dedicated multimedia environments) of each approach. In addition, the diagnostic reliability of the decompressed images has been assessed during a viewing session with the help of a specialist. Classical scalar quantization methods are briefly reviewed. As a result, it is shown that despite the relatively simple design of the optimum quantizers, their performance in terms of the rate-distortion tradeoff is quite poor. For high quality subband coding, it is of major importance to exploit the existing zero-correlation across subbands, as proposed with the embedded zerotree wavelet (EZW) algorithm. In this paper an improved EZW algorithm is used, termed the embedded zerotree lossless (EZL) algorithm -- due to the importance of lossless compression in medical imaging applications -- which has the additional possibility of producing an embedded lossless bitstream. VQ-based methods take advantage of the statistical properties of a block or vector of data values, yielding good quality reconstructed images at the same bitrates. In this paper, we take into account two classes of VQ methods: random quantizers (VQ) and geometric quantizers (PVQ). 
Algorithms belonging to the first group (the most widely known being that developed by Linde-Buzo-Gray) suffer from the common drawback of requiring a

  1. Binary neutron-star mergers with Whisky and SACRA: First quantitative comparison of results from independent general-relativistic hydrodynamics codes

    Science.gov (United States)

    Baiotti, Luca; Shibata, Masaru; Yamamoto, Tetsuro

    2010-09-01

    We present the first quantitative comparison of two independent general-relativistic hydrodynamics codes, the whisky code and the sacra code. We compare the output of simulations starting from the same initial data and carried out with the configuration (numerical methods, grid setup, resolution, gauges) which for each code has been found to give consistent and sufficiently accurate results, in particular, in terms of cleanness of gravitational waveforms. We focus on the quantities that should be conserved during the evolution (rest mass, total mass energy, and total angular momentum) and on the gravitational-wave amplitude and frequency. We find that the results produced by the two codes agree at a reasonable level, with variations in the different quantities but always at better than about 10%.

  2. Binary neutron-star mergers with Whisky and SACRA: First quantitative comparison of results from independent general-relativistic hydrodynamics codes

    CERN Document Server

    Baiotti, Luca; Yamamoto, Tetsuro

    2010-01-01

    We present the first quantitative comparison of two independent general-relativistic hydrodynamics codes, the Whisky code and the SACRA code. We compare the output of simulations starting from the same initial data and carried out with the configuration (numerical methods, grid setup, resolution, gauges) which for each code has been found to give consistent and sufficiently accurate results, in particular in terms of cleanness of gravitational waveforms. We focus on the quantities that should be conserved during the evolution (rest mass, total mass energy, and total angular momentum) and on the gravitational-wave amplitude and frequency. We find that the results produced by the two codes agree at a reasonable level, with variations in the different quantities but always at better than about 10%.

  3. The European JASMIN Project for the Development of a New Safety Simulation Code, ASTEC-Na, for Na-cooled Fast Neutron Reactors

    OpenAIRE

    GIRAULT N.; VAN DORSSELAERE J.p.; Jacq, F.; BRILLANT G.; KISSANE Martin; BANDINI, G; Buck,M.; CHAMPIGNY J.; Hering, W; Perez-Martin, S.; Herranz, L; RAISON Philippe; Reinke, N; TUCEK Kamil; VERWAERDE D.

    2012-01-01

    The 4-year JASMIN collaborative project, involving 9 organizations, was launched by IRSN at the end of 2011 within the 7th European R&D Framework Programme on the enhancement of the safety of Na-cooled Fast Neutron Reactors (SFR) for a higher resistance to severe accidents. The project aims at developing a new European simulation code, ASTEC-Na, with a modern architecture, sufficiently flexible to account for innovative reactor designs and eventually new types of fuel and claddings, and accounting for resul...

  4. Comparison of Wavelet Filters in Image Coding and Denoising using Embedded Zerotree Wavelet Algorithm

    Directory of Open Access Journals (Sweden)

    V. Elamaran

    2012-12-01

    Full Text Available In this study, we present the Embedded Zerotree Wavelet (EZW) algorithm to compress the image using different wavelet filters, such as Biorthogonal, Coiflets, Daubechies, Symlets and Reverse Biorthogonal, and to remove noise by setting an appropriate threshold value while decoding. Compression methods are important in telemedicine applications, reducing the number of bits per pixel needed to adequately represent the image. Data storage requirements are reduced and transmission efficiency is improved by compressing the image. The EZW algorithm is an effective and computationally efficient technique in image coding. Obtaining the best image quality for a given bit rate, and accomplishing this task in an embedded fashion, are the two problems addressed by the EZW algorithm. Techniques to decompose the image using wavelets have gained a great deal of popularity in recent years. Apart from very good compression performance, the EZW algorithm has the property that the bitstream can be truncated at any point and still be decoded to an image of good quality. All the standard wavelet filters are used, and the results are compared with different thresholds in the encoding section. Bit rate versus PSNR simulation results are obtained for the 256x256 Barbara image with different wavelet filters. They show that the Daubechies wavelet filters involve greater computational overhead but produce better results: even fine details, i.e. higher-frequency components, are captured by them that are missed by the other families of wavelet filters.
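    The zerotree machinery itself is involved, but the underlying idea — transform, discard small detail coefficients, reconstruct — can be sketched with a one-level 1-D Haar transform; the signal and threshold below are illustrative, not from the study.

```python
import math

def haar_step(x):
    """One level of the 1-D Haar transform: pairwise averages, then details."""
    s = 1 / math.sqrt(2)
    avg = [(a + b) * s for a, b in zip(x[0::2], x[1::2])]
    det = [(a - b) * s for a, b in zip(x[0::2], x[1::2])]
    return avg + det

def inv_haar_step(y):
    """Inverse of haar_step (perfect reconstruction without thresholding)."""
    h = len(y) // 2
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(y[:h], y[h:]):
        out += [(a + d) * s, (a - d) * s]
    return out

def hard_threshold(coeffs, t):
    """Zero out detail coefficients below t, as a denoising decoder would."""
    h = len(coeffs) // 2
    return coeffs[:h] + [c if abs(c) > t else 0.0 for c in coeffs[h:]]

signal = [4.0, 4.2, 8.0, 7.9, 1.0, 1.1, 5.0, 5.2]
rec = inv_haar_step(hard_threshold(haar_step(signal), t=0.5))
print(rec)  # small pairwise fluctuations are smoothed away
```

    EZW adds to this picture an efficient encoding of *where* the surviving coefficients sit across subbands, which is what makes the bitstream embedded and truncatable.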

  5. Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.

    Science.gov (United States)

    Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo

    2015-08-01

    Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.
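    To make the comparison concrete, here is a minimal pure-Python sketch of orthogonal matching pursuit, one of the SC algorithms highlighted above; the dictionary and sparse signal are made-up toy data, not from the brief.

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick the atom (column of D) most
    correlated with the residual, then re-fit all picked atoms by least squares."""
    m, n = len(D), len(D[0])
    support, residual = [], y[:]
    for _ in range(k):
        corr = [abs(sum(D[i][j] * residual[i] for i in range(m))) for j in range(n)]
        support.append(max(range(n), key=corr.__getitem__))
        cols = [[D[i][j] for i in range(m)] for j in support]
        gram = [[sum(a * b for a, b in zip(c1, c2)) for c2 in cols] for c1 in cols]
        rhs = [sum(a * b for a, b in zip(c1, y)) for c1 in cols]
        x = solve(gram, rhs)  # least squares via the normal equations
        residual = [y[i] - sum(x[t] * cols[t][i] for t in range(len(cols))) for i in range(m)]
    return dict(zip(support, x))

# Toy dictionary with unit-norm columns (atoms); y = 2*atom0 + 0.5*atom2
D = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5],
     [0.0, 0.0, 0.5 ** 0.5]]
y = [2 * D[i][0] + 0.5 * D[i][2] for i in range(3)]
coef = omp(D, y, 2)
print(coef)
```

    The greedy selection plus a small least-squares re-fit per step is why OMP can be orders of magnitude faster than solving a full l1-regularized program.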

  6. Comparison of small molecules and oligonucleotides that target a toxic, non-coding RNA.

    Science.gov (United States)

    Costales, Matthew G; Rzuczek, Suzanne G; Disney, Matthew D

    2016-06-01

    Potential RNA targets for chemical probes and therapeutic modalities are pervasive in the transcriptome. Oligonucleotide-based therapeutics are commonly used to target RNA sequence. Small molecules are emerging as a modality to target RNA structures selectively, but their development is still in its infancy. In this work, we compare the activity of oligonucleotides and several classes of small molecules that target the non-coding r(CCUG) repeat expansion (r(CCUG)(exp)) that causes myotonic dystrophy type 2 (DM2), an incurable disease that is the second-most common cause of adult onset muscular dystrophy. Small molecule types investigated include monomers, dimers, and multivalent compounds synthesized on-site by using RNA-templated click chemistry. Oligonucleotides investigated include phosphorothioates that cleave their target and vivo-morpholinos that modulate target RNA activity via binding. We show that compounds assembled on-site that recognize structure have the highest potencies amongst small molecules and are similar in potency to a vivo-morpholino modified oligonucleotide that targets sequence. These studies are likely to impact the design of therapeutic modalities targeting other repeats expansions that cause fragile X syndrome and amyotrophic lateral sclerosis, for example.

  7. Intercode Advanced Fuels and Cladding Comparison Using BISON, FRAPCON, and FEMAXI Fuel Performance Codes

    Science.gov (United States)

    Rice, Aaren

    As part of the Department of Energy's Accident Tolerant Fuels (ATF) campaign, new cladding designs and fuel types are being studied in order to help make nuclear energy a safer and more affordable source for power. This study focuses on the implementation and analysis of the SiC cladding and UN, UC, and U3Si2 fuels into three specific nuclear fuel performance codes: BISON, FRAPCON, and FEMAXI. These fuels boast a higher thermal conductivity and uranium density than traditional UO2 fuel which could help lead to longer times in a reactor environment. The SiC cladding has been studied for its reduced production of hydrogen gas during an accident scenario, however the SiC cladding is a known brittle and unyielding material that may fracture during PCMI (Pellet Cladding Mechanical Interaction). This work focuses on steady-state operation with advanced fuel and cladding combinations. By implementing and performing analysis work with these materials, it is possible to better understand some of the mechanical interactions that could be seen as limiting factors. In addition to the analysis of the materials themselves, a further analysis is done on the effects of using a fuel creep model in combination with the SiC cladding. While fuel creep is commonly ignored in the traditional UO2 fuel and Zircaloy cladding systems, fuel creep can be a significant factor in PCMI with SiC.

  8. PN code acquisition algorithm in DS-UWB system based on threshold comparison criterion

    Institute of Scientific and Technical Information of China (English)

    Qi Lina; Gan Zongliang; Zhu Hongbo

    2009-01-01

    The direct sequence ultra-wideband (DS-UWB) is a promising technology for short-range wireless communications. The UWB signal is a stream of ultra-short pulses with very low power density, and the great potential of DS-UWB depends critically on the success of timing acquisition. A rapid algorithm for reducing the coarse acquisition time of pseudo-noise (PN) sequences is proposed. The algorithm utilizes an auxiliary sequence and a binary search strategy based on the threshold comparison criterion. Both theoretical analysis and simulation tests show that, with the proposed search strategy and simple operations over the symbol duration at the receiver, the proposed algorithm can considerably reduce the acquisition time while maintaining the PN sequence acquisition probability in the DS-UWB system over a dense multipath environment.
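    The threshold-comparison step can be sketched as a plain serial search over code offsets (the baseline that faster strategies improve on); the 31-chip m-sequence, delay, noise level and threshold below are illustrative assumptions, not parameters from the paper.

```python
import random

def lfsr_msequence(taps, nbits):
    """Maximal-length ±1 sequence from a Fibonacci LFSR (period 2^nbits - 1)."""
    state = [1] * nbits
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(1 if state[-1] else -1)
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

def acquire(pn, received, threshold):
    """Serial-search acquisition: slide the local code over the received chips
    and declare lock at the first offset whose normalized correlation
    exceeds the threshold."""
    n = len(pn)
    for offset in range(n):
        corr = sum(pn[(i + offset) % n] * received[i] for i in range(n)) / n
        if corr > threshold:
            return offset
    return None

pn = lfsr_msequence((5, 2), 5)   # 31-chip m-sequence (primitive degree-5 taps)
true_delay = 11
random.seed(7)
received = [pn[(i + true_delay) % 31] + random.gauss(0, 0.2) for i in range(31)]
print(acquire(pn, received, threshold=0.6))
```

    An m-sequence's off-peak autocorrelation is only -1/31, so the correct offset stands out sharply above the threshold even in noise; binary-search-style strategies aim to reach it in fewer correlation tests than this linear sweep.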

  9. FY05 LDRD Final Report Molecular Radiation Biodosimetry LDRD Project Tracking Code: 04-ERD-076

    Energy Technology Data Exchange (ETDEWEB)

    Jones, I M; A.Coleman, M; Lehmann, J; Manohar, C F; Marchetti, F; Mariella, R; Miles, R; Nelson, D O; Wyrobek, A J

    2006-02-03

    been, these methods are not suitable. The best current option for triage radiation biodosimetry is self-report of time to onset of emesis after the event, a biomarker that is subject to many false positives. The premise of this project is that greatly improved radiation dosimetry can be achieved by research and development directed toward detection of molecular changes induced by radiation in cells or other biological materials. Basic research on the responses of cells to radiation at the molecular level, particularly of messenger RNA and proteins, has identified biomolecules whose levels increase (or decrease) as part of cellular responses to radiation. Concerted efforts to identify markers useful for triage and clinical applications have not been reported as yet. Such studies would scan responses over a broad range of doses, below, at and above the threshold of clinical significance in the first weeks after exposure, and would collect global proteome and/or transcriptome information on all tissue samples accessible to either first responders or clinicians. For triage, the goal is to identify those needing medical treatment. Treatment will be guided by refined dosimetry. Achieving this goal entails determining whether radiation exposure was below or above the threshold of concern, using one sample collected within days of an event, with simple devices that first responders either use or distribute for self-testing. For the clinic, better resolution of dose and tissue damage is needed to determine the nature and time sensitivity of therapy, but multiple sampling times may be acceptable and clinical staff and equipment can be utilized. Two complementary areas of research and development are needed once candidate biomarkers are identified: validation of the biomarker responses and validation of devices/instrumentation for detection of responses. Validation of biomarkers per se is confirmation that the dose, time, and tissue specific responses meet the reporting

  10. A Comparison Between GATE and MCNPX Monte Carlo Codes in Simulation of Medical Linear Accelerator

    Science.gov (United States)

    Sadoughi, Hamid-Reza; Nasseri, Shahrokh; Momennezhad, Mahdi; Sadeghi, Hamid-Reza; Bahreyni-Toosi, Mohammad-Hossein

    2014-01-01

    Radiotherapy dose calculations can be evaluated by Monte Carlo (MC) simulations with acceptable accuracy for dose prediction in complicated treatment plans. In this work, the Standard, Livermore and Penelope electromagnetic (EM) physics packages of GEANT4 application for tomographic emission (GATE) 6.1 were compared against Monte Carlo N-Particle eXtended (MCNPX) 2.6 in the simulation of a 6 MV photon linac. To do this, similar geometry was used for the two codes. The reference values of percentage depth dose (PDD) and beam profiles were obtained using a 6 MV Elekta Compact linear accelerator, a Scanditronix water phantom and diode detectors. No significant deviations were found in PDD, dose profile, energy spectrum, radial mean energy and photon radial distribution, as calculated by the Standard and Livermore EM models and MCNPX, respectively. Nevertheless, the Penelope model showed an extreme difference. Statistical uncertainty in all the simulations was MCNPX, Standard, Livermore and Penelope models, respectively. Differences between spectra in various regions, in radial mean energy and in photon radial distribution were due to different cross-section and stopping-power data, and to the fact that MCNPX and the three EM models do not simulate the physics processes identically. For example, in the Standard model, the photoelectron direction was sampled from the Gavrila-Sauter distribution, but the photoelectron moved in the same direction as the incident photons in the photoelectric process of the Livermore and Penelope models. Using the same primary electron beam, the Standard and Livermore EM models of GATE and MCNPX showed similar output, but re-tuning of the primary electron beam is needed for the Penelope model. PMID:24696804

  11. Project JADE. Comparison of technologies; Projekt JADE Jaemfoerelse av teknik

    Energy Technology Data Exchange (ETDEWEB)

    Sandstedt, Haakan [Scandiaconsult Sverige AB, Stockholm (Sweden); Munier, Raymond [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)

    2001-08-01

    This report presents a comparison of the technical aspects of three disposal methods, all of which are variations of the KBS-3 method: - KBS-3 V, Vertical disposal - KBS-3 H, Horizontal disposal - MLH, Medium Long Holes. The comparison is based on the criteria listed below. Most weight has been given to those criteria influencing the long-term function and safety of the repository that are difficult to alter. Such criteria can only be altered by adopting different technical designs: Technical feasibility; Geological investigations; Design; Construction; Deposition; Environmental impact; Human intrusion after sealing. It is practically possible to carry out all of the disposal methods. KBS-3 V has been studied most completely and therefore has been ranked before the other two methods with respect to 'Technical feasibility'. In principle, the methods are based on the same repository layout and disposal depth, therefore there are no conclusive differences between the methods with respect to 'Geological investigations' and 'Design'. As the disposal tunnels and disposal holes have the same form in the KBS-3 V and KBS-3 H facilities, well-tested excavation methods will be adopted during the construction phase for these two alternatives. Machines suitable for boring the long, horizontal disposal holes of the MLH alternative are available on the market, but the technique must be developed further. Therefore, MLH is currently ranked after KBS-3 V and KBS-3 H with respect to 'Construction'. However, the present degree of technical development reached for KBS-3 V and H could also be achieved for the MLH alternative with a moderate amount of development work. With the current design, the bentonite barrier in KBS-3 V will have a higher density and therefore a lower conductivity than in the other alternatives. A clear advantage for KBS-3 V and H is that the canisters are disposed individually in deposition holes. Every disposal procedure will

  12. Development of a Top-View Numeric Coding Teaching-Learning Trajectory within an Elementary Grades 3-D Visualization Design Research Project

    Science.gov (United States)

    Sack, Jacqueline J.

    2013-01-01

    This article explicates the development of top-view numeric coding of 3-D cube structures within a design research project focused on 3-D visualization skills for elementary grades children. It describes children's conceptual development of 3-D cube structures using concrete models, conventional 2-D pictures and abstract top-view numeric…

  14. Weld-induced residual stresses in a prototype dragline cluster and comparison with design codes

    Energy Technology Data Exchange (ETDEWEB)

    Joshi, S.; Semetay, C.; Price, J.W.H.; Nied, H.F. [Concordia University, Montreal, PQ (Canada). Dept. of Mechanical & Industrial Engineering

    2010-02-15

    The Australian coal mining industry employs a large fleet of thin-walled Circular Hollow Section (CHS) welded draglines built of several clusters along the length of the main boom, which are often very heavily overlapped with co-eccentric multiple tubular structures. Heat treatment processes for relieving thermally generated weld-induced residual stresses are usually not employed owing to the high costs and potential dragline downtime. However, it is estimated that these weld-induced residual stresses are usually within a tolerable range and are not the major motivating factor in the initiation and propagation of fatigue-induced cracking. This paper presents the simulation of welding-induced residual stresses in a CHS T-joint, which would form the first of the four lacings welded on to the main chord of a typical mining dragline cluster. The paper compares numerically generated residual stresses during the welding process in a single weld pass with the approach used in two Standards: (i) R6-Revision 4, Assessment of the Integrity of Structures Containing Defects and (ii) American Petroleum Institute API 579-1/ASME FFS-1 2007. The comparison attests to the observation that while residual stresses in the fused area at some points could be higher than the yield stress, they are generally not capable of inducing cracks in their own right.

  15. Progress in developing the ASPECT Mantle Convection Code - New Features, Benchmark Comparisons and Applications

    Science.gov (United States)

    Dannberg, Juliane; Bangerth, Wolfgang; Sobolev, Stephan

    2014-05-01

    Since there is no direct access to the deep Earth, numerical simulations are an indispensable tool for exploring processes in the Earth's mantle. Results of these models can be compared to surface observations and, combined with constraints from seismology and geochemistry, have provided insight into a broad range of geoscientific problems. In this contribution we present results obtained from a next-generation finite-element code called ASPECT (Advanced Solver for Problems in Earth's ConvecTion), which is especially suited for modeling thermo-chemical convection due to its use of many modern numerical techniques: fully adaptive meshes, accurate discretizations, a nonlinear artificial diffusion method to stabilize the advection equation, an efficient solution strategy based on a block triangular preconditioner utilizing an algebraic multigrid, parallelization of all of the steps above and finally its modular and easily extensible implementation. In particular the latter features make it a very versatile tool, applicable also to lithosphere models. The equations are implemented in the form of the Anelastic Liquid Approximation with temperature, pressure, composition and strain-rate dependent material properties, including associated non-linear solvers. We will compare computations with ASPECT to common benchmarks in the geodynamics community, such as the Rayleigh-Taylor instability (van Keken et al., 1997), and demonstrate recently implemented features such as a melting model with temperature, pressure and composition dependent melt fraction and latent heat. Moreover, we elaborate on a number of features currently under development by the community, such as free surfaces, porous flow and elasticity. In addition, we show examples of how ASPECT is applied to develop sophisticated simulations of typical geodynamic problems. These include 3D models of thermo-chemical plumes incorporating phase transitions (including melting) with the accompanying density changes, Clapeyron

  16. Comparison of AMOS computer code wakefield real part impedances with analytic results

    Energy Technology Data Exchange (ETDEWEB)

    Mayhall, D J; Nelson, S D

    2000-11-30

    We have performed eleven AMOS (Azimuthal Mode Simulator) [1] code runs with a simple, right circular cylindrical accelerating cavity inserted into a circular, cylindrical, lossless beam pipe to calculate the real part of the n = 1 (dipole) transverse wakefield impedance of this structure. We have compared this wakefield impedance, in units of ohms/m (Ω/m), over the frequency range 0-1 GHz to analytic predictions from Equation (2.3.8) of Briggs et al [2]. The results from Equation (2.3.8) were converted from the CGS units of statohms to the MKS units of ohms (Ω) and then multiplied by (2πf)/c = ω/c = 2π/λ, where f is the frequency in Hz, c is the speed of light in vacuum in m/sec, ω is the angular frequency in radians/sec, and λ is the wavelength in m. The dipole transverse wakefield impedance written to file by AMOS must be multiplied by c/ω to convert it from units of Ω/m to units of Ω. The agreement between the AMOS runs and the analytic predictions is excellent for computational grids with square cells (dz = dr) and good for grids with rectangular cells (dz < dr). The quantity dz is the fixed-size axial grid spacing, and dr is the fixed-size radial grid spacing. We have also performed one AMOS run for the same geometry to calculate the real part of the n = 0 (monopole) longitudinal wakefield impedance of this structure. We have compared this wakefield impedance in units of Ω with analytic predictions from Equation (1.4.8) of Briggs et al [2], converted to the MKS units of Ω. The agreement between the two results is excellent in this case. For the monopole longitudinal wakefield impedance written to file by AMOS, no conversion is needed to obtain units of Ω. In each case, the computer calculations were carried out to 50 nsec of simulation time.
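
    The unit bookkeeping described above is easy to get wrong, so here is a minimal sketch of the two conversions (statohm to ohm, and Ω/m to Ω via ω/c); the constant and function names are my own, not from the paper:

```python
import math

C = 299_792_458.0                 # speed of light in vacuum, m/s
STATOHM_TO_OHM = 8.987551787e11   # 1 statohm in ohms (CGS -> MKS)

def transverse_impedance_ohm_per_m(z_statohm: float, f_hz: float) -> float:
    """Convert an analytic CGS transverse impedance (statohm) to MKS ohm/m:
    convert units, then multiply by omega/c = 2*pi*f/c = 2*pi/lambda."""
    z_ohm = z_statohm * STATOHM_TO_OHM
    return z_ohm * (2.0 * math.pi * f_hz) / C

def amos_transverse_impedance_ohm(z_ohm_per_m: float, f_hz: float) -> float:
    """Convert an AMOS dipole impedance from ohm/m to ohm (multiply by c/omega)."""
    return z_ohm_per_m * C / (2.0 * math.pi * f_hz)
```

    The two functions are inverses, which makes the round trip a convenient sanity check on the conversion factors.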

  17. A Comparison of Project Delivery Systems on United States Federal Construction Projects.

    Science.gov (United States)

    2007-11-02

    of federal projects. Quality measures were based on evaluation criteria measured through post-occupancy surveys conducted by some federal agencies... However, both the Navy’s Post Occupancy Evaluation (POE) program (Naval Facilities Engineering Command, 1995) and current quality research (Corbett, 1997)... Washington, D.C. Naval Facilities and Engineering Command. 1995. Post Occupancy Evaluation Program Draft Survey. Norfolk, VA. Neter, John

  18. Population coding of visual space: comparison of spatial representations in the dorsal and ventral pathways

    Directory of Open Access Journals (Sweden)

    Anne B Sereno

    2011-02-01

    Although the representation of space is as fundamental to visual processing as the representation of shape, it has received relatively little attention from neurophysiological investigations. In this study we characterize representations of space within visual cortex, and examine how they differ in a first direct comparison between dorsal and ventral subdivisions of the visual pathways. Neural activities were recorded in anterior inferotemporal cortex (AIT) and lateral intraparietal cortex (LIP) of awake behaving monkeys, structures associated with the ventral and dorsal visual pathways respectively, as a stimulus was presented at different locations within the visual field. In spatially selective cells, we find greater modulation of cell responses in LIP with changes in stimulus position. Further, using a novel population-based statistical approach (namely, multidimensional scaling), we recover the spatial map implicit within the activities of neural populations, allowing us to quantitatively compare the geometry of neural space with physical space. We show that a population of spatially selective LIP neurons, despite having large receptive fields, is able to almost perfectly reconstruct stimulus locations within a low-dimensional representation. In contrast, a population of AIT neurons, despite each cell being spatially selective, provides less accurate low-dimensional reconstructions of stimulus locations. It produces instead only a topologically (categorically) correct rendition of space, which nevertheless might be critical for object and scene recognition. Furthermore, we found that the spatial representation recovered from population activity shows greater translation invariance in LIP than in AIT. We suggest that LIP spatial representations may be dimensionally isomorphic with 3D physical space, while in AIT spatial representations may reflect a more categorical representation of space (e.g., next to or above).
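
    The population-based reconstruction described here can be illustrated with classical (Torgerson) multidimensional scaling applied to simulated Gaussian-tuned neurons. Everything below (grid size, tuning width, neuron count) is an illustrative assumption, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stimulus locations on a 4x4 grid of the "visual field" (arbitrary units).
xs, ys = np.meshgrid(np.arange(4.0), np.arange(4.0))
stimuli = np.column_stack([xs.ravel(), ys.ravel()])        # (16, 2)

# Hypothetical population of 50 neurons with broad Gaussian receptive fields.
centers = rng.uniform(-1, 5, size=(50, 2))
width = 2.5
d2 = ((stimuli[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
responses = np.exp(-d2 / (2 * width**2))                   # (16 stimuli, 50 neurons)

# Classical MDS on pairwise population-response distances.
D = np.linalg.norm(responses[:, None, :] - responses[None, :, :], axis=-1)
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D**2) @ J                                   # double centering
w, V = np.linalg.eigh(B)
coords = V[:, -2:] * np.sqrt(np.clip(w[-2:], 0, None))      # top-2 dimensions

# The recovered geometry should mirror physical stimulus geometry.
true_d = np.linalg.norm(stimuli[:, None] - stimuli[None, :], axis=-1)
iu = np.triu_indices(n, 1)
rec_d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
r = np.corrcoef(true_d[iu], rec_d[iu])[0, 1]
print(f"distance correlation: {r:.2f}")
```

    With smooth, broad tuning the recovered low-dimensional map correlates strongly with the physical grid, mirroring the LIP result; narrower, patchier tuning degrades the reconstruction toward a merely topological rendition, as reported for AIT.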

  19. Comparison of Particle Flow Code and Smoothed Particle Hydrodynamics Modelling of Landslide Run outs

    Science.gov (United States)

    Preh, A.; Poisel, R.; Hungr, O.

    2009-04-01

    In most continuum mechanics methods for modelling the run out of landslides, the moving mass is divided into a number of elements, the velocities of which can be established by numerical integration of Newton's second law (Lagrangian solution). The methods are based on fluid mechanics, modelling the movements of an equivalent fluid. In 2004, McDougall and Hungr presented a three-dimensional numerical model for rapid landslides, e.g. debris flows and rock avalanches, called DAN3D. The method is based on the previous work of Hungr (1995) and uses an integrated two-dimensional Lagrangian solution and the meshless Smoothed Particle Hydrodynamics (SPH) principle to maintain continuity. DAN3D has an open rheological kernel, allowing the use of frictional (with constant pore-pressure ratio) and Voellmy rheologies, and offers the possibility to change material rheology along the path. Discontinuum (granular) mechanics methods model the run out mass as an assembly of particles moving down a surface. Each particle is followed exactly as it moves and interacts with the surface and with its neighbours. Every particle is checked for contacts with every other particle in every time step, using a special cell logic for contact detection in order to reduce the computational effort. The Discrete Element code PFC3D was adapted in order to make discontinuum mechanics models of run outs possible. The Punta Thurwieser Rock Avalanche and the Frank Slide were modelled by DAN3D as well as by PFC3D. The simulations showed that the parameters necessary to obtain results coinciding with observations in nature are completely different for the two approaches. The maximum velocity distributions from DAN3D reveal that areas of different maximum flow velocity lie next to each other in the Punta Thurwieser run out, whereas the maximum flow velocity is almost constant over the width of the run out for the Frank Slide. Some 30 percent of the total kinetic energy is rotational kinetic energy in

  20. The InterFrost benchmark of Thermo-Hydraulic codes for cold regions hydrology - first inter-comparison results

    Science.gov (United States)

    Grenier, Christophe; Roux, Nicolas; Anbergen, Hauke; Collier, Nathaniel; Costard, Francois; Ferrry, Michel; Frampton, Andrew; Frederick, Jennifer; Holmen, Johan; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Orgogozo, Laurent; Rivière, Agnès; Rühaak, Wolfram; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik

    2015-04-01

    The impacts of climate change in boreal regions have received considerable attention recently due to the warming trends that have been experienced in recent decades and are expected to intensify in the future. Large portions of these regions, corresponding to permafrost areas, are covered by water bodies (lakes, rivers) that interact with the surrounding permafrost. For example, the thermal state of the surrounding soil influences the energy and water budget of the surface water bodies. Also, these water bodies generate taliks (unfrozen zones below them) that disturb the thermal regime of the permafrost and may play a key role in the context of climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is required to understand and model the past and future evolution of landscapes, rivers, lakes and associated groundwater systems in a changing climate. However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, and this lack can be partly attributed to the difficulty of verifying multi-dimensional results produced by numerical models. Numerical approaches can only be validated against analytical solutions for a purely thermal 1D equation with phase change (e.g. Neumann, Lunardini). When it comes to the coupled TH system (coupling two highly non-linear equations), the only possible approach is to compare the results from different codes on provided test cases and/or to use controlled experiments for validation. Such inter-code comparisons can propel discussions on how to improve code performance. A benchmark exercise was initiated in 2014 with a kick-off meeting in Paris in November. Participants from the USA, Canada, Germany, Sweden and France convened, representing altogether 13 simulation codes. The benchmark exercises consist of several test cases inspired by the existing literature (e.g. McKenzie et al., 2007) as well as new ones. They
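
    As the abstract notes, verification of the purely thermal 1D case rests on analytical solutions such as Neumann's. A sketch of the one-phase Neumann (Stefan) solution, where the phase-change front advances as X(t) = 2λ√(αt) and λ solves a transcendental equation in the Stefan number; function names are my own:

```python
import math

def stefan_lambda(stefan_number: float) -> float:
    """Solve lam * exp(lam**2) * erf(lam) = St / sqrt(pi) by bisection
    (one-phase Neumann solution for a freezing/thawing front)."""
    f = lambda lam: lam * math.exp(lam**2) * math.erf(lam) - stefan_number / math.sqrt(math.pi)
    lo, hi = 1e-9, 5.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def front_position(t_s: float, alpha_m2_s: float, stefan_number: float) -> float:
    """Phase-change front depth X(t) = 2 * lambda * sqrt(alpha * t)."""
    return 2.0 * stefan_lambda(stefan_number) * math.sqrt(alpha_m2_s * t_s)
```

    For small Stefan numbers λ ≈ √(St/2), and the front grows with the square root of time, both of which make convenient checks against a numerical solver.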

  1. Comparison of WDM/Pulse-Position-Modulation (WDM/PPM) with Code/Pulse-Position-Swapping (C/PPS) Based on Wavelength/Time Codes

    Energy Technology Data Exchange (ETDEWEB)

    Mendez, A J; Hernandez, V J; Gagliardi, R M; Bennett, C V

    2009-06-19

    Pulse position modulation (PPM) signaling is favored in intensity modulated/direct detection (IM/DD) systems that have average power limitations. Combining PPM with WDM over a fiber link (WDM/PPM) enables multiple accessing and increases the link's throughput. Electronic bandwidth and synchronization advantages are further gained by mapping the time slots of PPM onto a code space, or code/pulse-position-swapping (C/PPS). The property of multiple bits per symbol typical of PPM can be combined with multiple accessing by using wavelength/time [W/T] codes in C/PPS. This paper compares the performance of WDM/PPM and C/PPS for equal wavelengths and bandwidth.
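
    The "multiple bits per symbol" property of PPM can be sketched as follows (a toy mapping only; the paper's C/PPS scheme additionally swaps each slot for a wavelength/time code, which is not modeled here):

```python
def ppm_encode(bits: str) -> list[int]:
    """Map a k-bit word to a PPM frame of 2**k slots with a single pulse
    whose position encodes the word (multiple bits per symbol)."""
    slots = [0] * (2 ** len(bits))
    slots[int(bits, 2)] = 1
    return slots

def ppm_decode(slots: list[int]) -> str:
    """Recover the k-bit word from the pulse position."""
    pos = slots.index(1)
    k = (len(slots) - 1).bit_length()
    return format(pos, f"0{k}b")
```

    For example, the 3-bit word "101" occupies slot 5 of an 8-slot frame, so a single pulse carries three bits, which is the average-power advantage the abstract refers to.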

  2. Los Alamos and Lawrence Livermore National Laboratories Code-to-Code Comparison of Inter Lab Test Problem 1 for Asteroid Impact Hazard Mitigation

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Robert P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Miller, Paul [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Howley, Kirsten [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ferguson, Jim Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gisler, Galen Ross [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Plesko, Catherine Suzanne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Managan, Rob [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Owen, Mike [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wasem, Joseph [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bruck-Syal, Megan [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-15

    The NNSA Laboratories have entered into an interagency collaboration with the National Aeronautics and Space Administration (NASA) to explore strategies for prevention of Earth impacts by asteroids. Assessment of such strategies relies upon use of sophisticated multi-physics simulation codes. This document describes the task of verifying and cross-validating, between Lawrence Livermore National Laboratory (LLNL) and Los Alamos National Laboratory (LANL), modeling capabilities and methods to be employed as part of the NNSA-NASA collaboration. The approach has been to develop a set of test problems and then to compare and contrast results obtained by use of a suite of codes, including MCNP, RAGE, Mercury, Ares, and Spheral. This document provides a short description of the codes, an overview of the idealized test problems, and discussion of the results for deflection by kinetic impactors and stand-off nuclear explosions.

  3. Construction of TH code development and validation environment

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyungjun; Kim, Hee-Kyung; Bae, Kyoo-Hwan [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    In this paper, each component of the code development and validation system, i.e. IVS and Mercurial, is introduced, and Redmine, the integrated platform for IVS and Mercurial, is explained. An integrated TH code validation system (IVS) and a code development and management environment have been constructed. Code validation is achieved by comparing results with corresponding experiments. The development of a thermal-hydraulic (TH) system code for a nuclear reactor requires much time and effort, as does its validation and verification (V and V). Previously, the TASS/SMR-S code (hereafter TASS) for SMART was developed by KAERI through a V and V process. During code development, version control of the source code is of great importance. Also, during the V and V process, a way is needed to reduce the repeated labor- and time-consuming work of running the code before releasing a new version of the TH code. Therefore, an integrated platform for the TH code development and validation environment has been constructed. Redmine, a project management and issue tracking system, was selected as the platform, Mercurial (hg) is used for source version control, and IVS (Integrated Validation System) for TASS was constructed as a prototype for automated V and V. IVS is useful before releasing a new code version: the code developer can easily validate code results using IVS, and even during development IVS can be used to validate code modifications. Using Redmine and Mercurial, users and developers can use IVS results more effectively.
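
    The comparison-with-experiment step that such an automated validation system performs can be sketched as a simple tolerance check (the function names and the 5% tolerance are illustrative, not taken from the paper):

```python
def within_tolerance(calculated, measured, rel_tol=0.05):
    """Pass/fail core of an automated validation run: every calculated
    value must lie within rel_tol of its corresponding measurement."""
    return all(abs(c - m) <= rel_tol * abs(m) for c, m in zip(calculated, measured))

def regression_report(cases):
    """cases: iterable of (name, calculated, measured) validation cases.
    Returns a pass/fail map, the kind of summary a CI hook could post."""
    return {name: within_tolerance(calc, meas) for name, calc, meas in cases}
```

    Hooking such a check into the version-control workflow (e.g. on each Mercurial commit) is what turns a validation suite into the kind of automated V and V prototype the abstract describes.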

  4. Neutron flux distribution inside the cylindrical core of minor excess of reactivity in the IPEN/MB-01 reactor and comparison with CITATION code and MCNP-5 code

    Energy Technology Data Exchange (ETDEWEB)

    Aredes, Vitor Ottoni; Bitelli, Ulysses d' Utra; Mura, Luiz Ernesto C.; Santos, Diogo Feliciano dos; Lima, Ana Cecilia de Souza, E-mail: ubitelli@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    This study aims to determine the distribution of thermal neutron flux in the IPEN/MB-01 nuclear reactor assembled in a cylindrical core configuration of minor excess of reactivity with 568 fuel rods (28 fuel rods in diameter). The thermal neutron flux at the irradiation positions is obtained by the reaction-rate method using gold foils. The experiment consists of inserting gold activation foils, with and without cadmium covers (cadmium boxes 0.0502 cm thick), at several positions throughout the active core. After irradiation, the activity induced by nuclear reaction rates in the gold foils is assessed by gamma-ray spectrometry using a high-purity germanium (HPGe) detector. Experimental results are compared to those derived from calculations performed using the three-dimensional diffusion code CITATION and the MCNP-5 code with a proper nuclear data library. The calculated neutron flux shows good agreement with experimental values in regions where the neutron flux is only slightly disturbed, while in the neutron reflector region and near the control rods diffusion theory is not very precise. The average value of thermal neutron flux obtained experimentally differs from the values calculated by the CITATION code and the MCNP-5 code by 1.18% and 0.84%, respectively, at a nuclear power level of 74.65 watts ± 3.28%. The average measured value of thermal neutron flux is 4.10 × 10{sup 8} n/cm{sup 2}s ± 5.25%. (author)
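
    The reaction-rate method described above reduces, in outline, to correcting measured activities to saturation and applying the cadmium difference; the numbers used in the sketch below, including the cross-section value, are illustrative assumptions only:

```python
import math

def saturation_activity(activity_bq: float, lam: float,
                        t_irr_s: float, t_decay_s: float) -> float:
    """Correct a measured foil activity back to saturation:
    A_sat = A / ((1 - exp(-lam*t_irr)) * exp(-lam*t_decay))."""
    return activity_bq / ((1.0 - math.exp(-lam * t_irr_s)) * math.exp(-lam * t_decay_s))

def thermal_flux_cm2_s(a_sat_bare: float, a_sat_cd: float,
                       n_atoms: float, sigma_th_cm2: float) -> float:
    """Cadmium-difference method: the thermal reaction rate is the bare-foil
    rate minus the Cd-covered (epithermal) rate; flux = R_th / (N * sigma)."""
    return (a_sat_bare - a_sat_cd) / (n_atoms * sigma_th_cm2)
```

    The Cd-covered foil sees only the epithermal component, so subtracting its saturation activity from the bare foil's isolates the thermal reaction rate at each core position.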

  5. In-Depth Analysis of Simulation Engine Codes for Comparison with DOE's Roof Savings Calculator and Measured Data

    Energy Technology Data Exchange (ETDEWEB)

    New, Joshua Ryan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Levinson, Ronnen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Huang, Yu [White Box Technologies, Salt Lake City, UT (United States); Sanyal, Jibonananda [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Miller, William A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Mellot, Joe [The Garland Company, Cleveland, OH (United States); Childs, Kenneth W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kriner, Scott [Green Metal Consulting, Inc., Macungie, PA (United States)

    2014-06-01

    The Roof Savings Calculator (RSC) was developed through collaborations among Oak Ridge National Laboratory (ORNL), White Box Technologies, Lawrence Berkeley National Laboratory (LBNL), and the Environmental Protection Agency in the context of a California Energy Commission Public Interest Energy Research project to make cool-color roofing materials a market reality. The RSC website and a simulation engine validated against demonstration homes were developed to replace the liberal DOE Cool Roof Calculator and the conservative EPA Energy Star Roofing Calculator, which reported different roof savings estimates. A preliminary analysis arrived at a tentative explanation for why RSC results differed from previous LBNL studies and provided guidance for future analysis in the comparison of four simulation programs (doe2attic, DOE-2.1E, EnergyPlus, and MicroPas), including heat exchange between the attic surfaces (principally the roof and ceiling) and the resulting heat flows through the ceiling to the building below. The results were consolidated in an ORNL technical report, ORNL/TM-2013/501. This report is an in-depth inter-comparison of four programs with detailed measured data from an experimental facility operated by ORNL in South Carolina in which different segments of the attic had different roof and attic systems.

  6. Development of an interface between MCNP and ORIGEN codes for calculations of fuel evolution in nuclear systems. Initial project; Desenvolvimento de uma interface entre os codigos MCNP e ORIGEN para calculos de evolucao de combustiveis em sistemas nucleares. Projeto inicial

    Energy Technology Data Exchange (ETDEWEB)

    Campolina, Daniel de Almeida Magalhaes

    2009-07-01

    In many situations in the study of nuclear systems, it is necessary to know the detailed particle flux in a geometry. Deterministic 1-D and 2-D methods are not suitable for representing configurations with strongly 3-D behavior, for example cores where the neutron flux varies considerably in space, and Monte Carlo analysis is necessary. The majority of Monte Carlo transport codes perform simulations that are static in time with respect to fuel isotopic composition. This work is an initial project to incorporate a depletion capability into the MCNP code by means of a connection with the ORIGEN2.1 burnup code. The method used to develop the program followed the methodology of other programs written for the same purpose. Essentially, MCNP data libraries are used to generate one-group microscopic cross sections that override the default ORIGEN libraries. To verify the part implemented so far, comparisons with MCNPX (version 2.6.0) results were made. The neutron flux and criticality value of the core agree, especially at the beginning of burnup, when the influence of fission products is not very considerable. The small difference encountered was probably caused by the difference in the number of isotopes considered in the transport models (89 MCNPX x 25 GB). The next step of this work is to adapt MCNP version 4C to work with more memory than its standard value (4 MB), in order to allow a greater number of isotopes in the transport model. (author)
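
    The core of such a transport-depletion coupling alternates a transport solve with a one-group depletion step. A minimal sketch, in which the `transport` stub stands in for an MCNP run and is purely hypothetical:

```python
import math

def deplete(n0: float, sigma_cm2: float, flux: float, t_s: float) -> float:
    """One-group point depletion, N(t) = N0 * exp(-sigma * phi * t);
    sigma is the one-group cross section the transport tallies supply."""
    return n0 * math.exp(-sigma_cm2 * flux * t_s)

def burnup_steps(n0, sigma_cm2, steps, dt_s, transport=lambda n: 3e13):
    """Alternate a (stubbed) transport solve with depletion steps, as in an
    MCNP <-> ORIGEN coupling loop; `transport` would really rerun MCNP on
    the updated composition and return the new flux."""
    n = n0
    history = [n]
    for _ in range(steps):
        flux = transport(n)          # hypothetical stand-in for an MCNP run
        n = deplete(n, sigma_cm2, flux, dt_s)
        history.append(n)
    return history
```

    A real coupling would track the full Bateman chains (ORIGEN's job) and collapse spectrum-weighted cross sections from the MCNP tallies rather than using a single nuclide and a fixed flux.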

  7. Comparison of a Ring On-Chip Network and a Code-Division Multiple-Access On-Chip Network

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2007-01-01

    Two network-on-chip (NoC) designs are examined and compared in this paper. One design applies a bidirectional ring connection scheme, while the other design applies a code-division multiple-access (CDMA) connection scheme. Both of the designs apply globally asynchronous locally synchronous (GALS) scheme in order to deal with the issue of transferring data in a multiple-clock-domain environment of an on-chip system. The two NoC designs are compared with each other by their network structures, data transfer principles, network node structures, and their asynchronous designs. Both the synchronous and the asynchronous designs of the two on-chip networks are realized using a hardware-description language (HDL) in order to make the entire designs suit the commonly used synchronous design tools and flow. The performance estimation and comparison of the two NoC designs which are based on the HDL realizations are addressed. By comparing the two NoC designs, the advantages and disadvantages of applying direct connection and CDMA connection schemes in an on-chip communication network are discussed.
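
    The CDMA connection scheme relies on orthogonal spreading codes so that several nodes can share one medium. A minimal illustration with Walsh-Hadamard codes (my own toy model, not the paper's actual encoder):

```python
import numpy as np

def walsh(n: int) -> np.ndarray:
    """Hadamard-ordered Walsh codes of length 2**n (mutually orthogonal rows)."""
    h = np.array([[1]])
    for _ in range(n):
        h = np.block([[h, h], [h, -h]])
    return h

codes = walsh(3)                      # 8 orthogonal length-8 codes
data = np.array([1, -1, 1])           # one bit per node, BPSK-style

# Nodes 0..2 spread their bits with their own codes and share the wire by summation.
shared = sum(b * codes[i] for i, b in enumerate(data))

# Each receiver despreads by correlating the shared signal with its own code.
recovered = np.array([shared @ codes[i] / len(codes[i]) for i in range(3)])
```

    Because the rows are orthogonal, each correlation cancels the other nodes' contributions exactly, which is what lets a CDMA NoC carry several logical channels over one physical interconnect.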

  8. Utilizing Strategic Project Management Processes and the NATO Code of Best Practice to Improve Management of Experimentation Events

    Science.gov (United States)

    2009-06-01

    information have been suggested as being resources of projects (PMBOK, 2002). Therefore, they are one of the key aspects of strategic project management... Assessment, Revision 2002.” DoD CCRP Press, Washington, DC. PMBOK - Project Management Body of Knowledge, Project Management Institute Press, 2002. Tolk, A

  9. The BOUT Project; Validation and Benchmark of BOUT Code and Experimental Diagnostic Tools for Fusion Boundary Turbulence

    Institute of Scientific and Technical Information of China (English)

    徐学桥

    2001-01-01

    A boundary plasma turbulence code, BOUT, is presented. Preliminary encouraging results have been obtained in comparisons with probe measurements for a typical Ohmic discharge in the HT-7 tokamak. The validation and benchmarking of the BOUT code and of experimental diagnostic tools for fusion boundary plasma turbulence are proposed.

  10. GMMIP (v1.0) contribution to CMIP6: Global Monsoons Model Inter-comparison Project

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Tianjun; Turner, Andrew G.; Kinter, James L.; Wang, Bin; Qian, Yun; Chen, Xiaolong; Wu, Bo; Wang, Bin; Liu, Bo; Zou, Liwei; He, Bian

    2016-10-10

    The Global Monsoons Model Inter-comparison Project (GMMIP) has been endorsed by the panel of the Coupled Model Inter-comparison Project (CMIP) as one of the participating model inter-comparison projects (MIPs) in the sixth phase of CMIP (CMIP6). The focus of GMMIP is on monsoon climatology, variability, prediction and projection, which is relevant to four of the “Grand Challenges” proposed by the World Climate Research Programme. At present, 21 international modeling groups are committed to joining GMMIP. This overview paper introduces the motivation behind GMMIP and the scientific questions it intends to answer. Three tiers of experiments, of decreasing priority, are designed to examine (a) model skill in simulating the climatology and interannual-to-multidecadal variability of global monsoons forced by the sea surface temperature during the historical climate period; (b) the roles of the Interdecadal Pacific Oscillation and the Atlantic Multidecadal Oscillation in driving variations of the global and regional monsoons; and (c) the effects of large orographic terrain on the establishment of the monsoons. The outputs of the CMIP6 Diagnostic, Evaluation and Characterization of Klima (DECK) experiments, the “historical” simulation and the endorsed MIPs will also be used in the diagnostic analysis of GMMIP to give a comprehensive understanding of the roles played by different external forcings, potential improvements in the simulation of monsoon rainfall at high resolution and reproducibility at decadal timescales. The implementation of GMMIP will improve our understanding of the fundamental physics of changes in the global and regional monsoons over the past 140 years and ultimately benefit monsoon prediction and projection in the current century.

  11. GMMIP (v1.0) contribution to CMIP6: Global Monsoons Model Inter-comparison Project

    Science.gov (United States)

    Zhou, Tianjun; Turner, Andrew G.; Kinter, James L.; Wang, Bin; Qian, Yun; Chen, Xiaolong; Wu, Bo; Wang, Bin; Liu, Bo; Zou, Liwei; He, Bian

    2016-10-01

    The Global Monsoons Model Inter-comparison Project (GMMIP) has been endorsed by the panel of the Coupled Model Inter-comparison Project (CMIP) as one of the participating model inter-comparison projects (MIPs) in the sixth phase of CMIP (CMIP6). The focus of GMMIP is on monsoon climatology, variability, prediction and projection, which is relevant to four of the "Grand Challenges" proposed by the World Climate Research Programme. At present, 21 international modeling groups are committed to joining GMMIP. This overview paper introduces the motivation behind GMMIP and the scientific questions it intends to answer. Three tiers of experiments, of decreasing priority, are designed to examine (a) model skill in simulating the climatology and interannual-to-multidecadal variability of global monsoons forced by the sea surface temperature during the historical climate period; (b) the roles of the Interdecadal Pacific Oscillation and the Atlantic Multidecadal Oscillation in driving variations of the global and regional monsoons; and (c) the effects of large orographic terrain on the establishment of the monsoons. The outputs of the CMIP6 Diagnostic, Evaluation and Characterization of Klima (DECK) experiments, the "historical" simulation and the endorsed MIPs will also be used in the diagnostic analysis of GMMIP to give a comprehensive understanding of the roles played by different external forcings, potential improvements in the simulation of monsoon rainfall at high resolution and reproducibility at decadal timescales. The implementation of GMMIP will improve our understanding of the fundamental physics of changes in the global and regional monsoons over the past 140 years and ultimately benefit monsoon prediction and projection in the current century.

  12. Effective dose equivalent and effective dose: comparison for common projections in oral and maxillofacial radiology.

    Science.gov (United States)

    Gibbs, S J

    2000-10-01

    Effective dose equivalents (H(E)) and effective doses (E) for radiographic projections common in dentistry, calculated from the same organ dose distributions, are presented to determine whether the 2 quantities can be directly compared. Doses to all organs and tissues in the head, neck, trunk, and proximal extremities were determined for each projection (intraoral full-mouth radiographic survey, panoramic, cephalometric, temporomandibular tomograms, and submentovertex view) by computer simulation with Monte Carlo methods. H(E) and E were calculated from these complete distributions and by methods prescribed by the International Commission on Radiological Protection (ICRP). H(E) and E computed from complete dose distributions were found comparable within a few percentage points. However, those computed by strict application of ICRP methods were not. For radiographic projections with highly localized dose distributions, such as those common in dentistry, direct comparison of H(E) and E may not be meaningful, unless both computation algorithms are known.
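Both H(E) and E are weighted sums of organ equivalent doses, E = Σ_T w_T H_T, differing in the tissue weighting factors w_T prescribed by the relevant ICRP recommendation. A minimal sketch of such a computation, using hypothetical organ doses and illustrative weighting factors (not the actual ICRP values or the paper's data):

```python
# Hypothetical organ equivalent doses (mSv) from a dental projection
organ_dose = {"thyroid": 0.15, "salivary_glands": 0.60,
              "bone_marrow": 0.04, "brain": 0.02}

# Illustrative tissue weighting factors w_T (the real ICRP sets differ)
w = {"thyroid": 0.04, "salivary_glands": 0.01,
     "bone_marrow": 0.12, "brain": 0.01}

# Effective dose as the weighted sum over tissues, in mSv
E = sum(w[t] * organ_dose[t] for t in organ_dose)
```

For the highly localized dose distributions typical of dental radiography, the result is dominated by the few head-and-neck tissues receiving most of the dose, which is why the choice of weighting scheme matters so much.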

  13. Near Shannon Limit Low Peak Mean To Envelope Power Ratio (PMEPR) Turbo Block Coded OFDM for Space Communications Project

    Data.gov (United States)

    National Aeronautics and Space Administration — It is proposed to study and develop an innovative Turbo-block coded modulation scheme suitable for Orthogonal Frequency Division Modulation (OFDM) system. The new...

  14. Challenges implementing bar-coded medication administration in the emergency room in comparison to medical surgical units.

    Science.gov (United States)

    Glover, Nancy

    2013-03-01

    Bar-coded medication administration has been successfully implemented and utilized to decrease medication errors at a number of hospitals in recent years. The purpose of this article was to discuss the varying success in utilization of bar-coded medication administration on medical-surgical units and in the emergency department. Utilization reports were analyzed to better understand the challenges between the units. Many factors negatively impacted utilization in the emergency department, including the inability to use bar-coded medication administration for verbal orders or to document medications distributed by the prescribing providers, unique aspects of emergency department nursing workflow, additional steps to chart when using bar-coded medication administration, and alert fatigue. Hardware problems affected all users. Bar-coded medication administration in its current form is more suitable for use on medical-surgical floors than in the emergency department. New solutions should be developed for bar-coded medication administration in the emergency department, keeping in mind requirements to chart medications when there is no order in the system, document medications distributed by prescribing providers, adapt to unpredictable nursing workflow, minimize steps to chart with bar-coded medication administration, limit alerts to those that are clinically meaningful, and choose reliable hardware with adequate bar-code scanning capability.

  15. Comparison of Analytical and Measured Performance Results on Network Coding in IEEE 802.11 Ad-Hoc Networks

    DEFF Research Database (Denmark)

    Zhao, Fang; Médard, Muriel; Hundebøll, Martin

    2012-01-01

    Network coding is a promising technology that has been shown to improve throughput in wireless mesh networks. In this paper, we compare the analytical and experimental performance of COPE-style network coding in IEEE 802.11 ad-hoc networks. In the experiments, we use a lightweight scheme called...
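COPE-style network coding lets a relay combine packets from crossing flows so that one broadcast replaces two unicast transmissions. A minimal two-flow sketch of the XOR coding step (illustrative only, not the paper's experimental setup):

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length packets byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

# The relay holds one packet from each of two crossing flows
pkt_from_a = b"hello"   # sent by node A, destined for node B
pkt_from_b = b"world"   # sent by node B, destined for node A

# Instead of two transmissions, the relay broadcasts a single coded packet
coded = xor_packets(pkt_from_a, pkt_from_b)

# Each destination decodes with the packet it already knows (its own)
decoded_at_a = xor_packets(coded, pkt_from_a)   # node A recovers B's packet
decoded_at_b = xor_packets(coded, pkt_from_b)   # node B recovers A's packet
```

The throughput gain comes from this substitution of one broadcast for two unicasts whenever the relay can find packets whose intended next hops already know the other packets in the XOR.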

  16. Development and comparison of metrics for evaluating climate models and estimation of projection uncertainty

    Science.gov (United States)

    Ring, Christoph; Pollinger, Felix; Kaspar-Ott, Irena; Hertig, Elke; Jacobeit, Jucundus; Paeth, Heiko

    2017-04-01

    The COMEPRO project (Comparison of Metrics for Probabilistic Climate Change Projections of Mediterranean Precipitation), funded by the Deutsche Forschungsgemeinschaft (DFG), is dedicated to the development of new evaluation metrics for state-of-the-art climate models. Further, we analyze the implications for probabilistic projections of climate change. This study focuses on the results of 4-field matrix metrics, comparing six different approaches. We evaluate 24 models of the Coupled Model Intercomparison Project Phase 3 (CMIP3), 40 of CMIP5 and 18 of the Coordinated Regional Downscaling Experiment (CORDEX). In addition to annual and seasonal precipitation, the mean temperature is analysed. We consider both the 50-year trend and the climatological mean for the second half of the 20th century. For the probabilistic projections of climate change, the A1b and A2 (CMIP3) and RCP4.5 and RCP8.5 (CMIP5, CORDEX) scenarios are used. The eight main study areas are located in the Mediterranean; however, we apply our metrics to globally distributed regions as well. The metrics show high simulation quality of the temperature trend, and of both the precipitation and temperature means, for most climate models and study areas. In addition, we find high potential for model weighting to reduce uncertainty. These results are in line with other accepted evaluation metrics and studies. The comparison of the different 4-field approaches reveals high correlations for most metrics. The results of the metric-weighted probability density functions of climate change are heterogeneous: we find both increases and decreases of uncertainty for different regions and seasons. The analysis of the global study areas is consistent with that of the regional study areas in the Mediterranean.
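The "4-field matrix" metrics mentioned above are built on 2x2 contingency tables counting how often model and observation agree on an event. As one illustrative example of such a metric (not necessarily among the six approaches compared in COMEPRO), the Heidke skill score can be computed as:

```python
def heidke_skill_score(a, b, c, d):
    """Skill from a 2x2 ('4-field') contingency table.

    a = hits, b = false alarms, c = misses, d = correct negatives.
    Returns 1 for a perfect forecast, 0 for chance-level skill.
    """
    n = a + b + c + d
    # Number of correct classifications expected by chance
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n
    return (a + d - expected) / (n - expected)
```

Scores of this kind can then serve directly as model weights when building the metric-weighted probability density functions of climate change described in the abstract.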

  17. Research Report on Feasibility Study of Building a Qt GUI Testing Tool, AX Program Code Group Computer Science R&D Project

    Energy Technology Data Exchange (ETDEWEB)

    Grover, B T

    2003-05-05

    The main goal of this project was to determine if a tool could be built to test Qt. In determining the feasibility of building a tool, the following requirements needed to be researched: (1) determine if the underlying Qt signal/slot architecture could be leveraged; (2) research how much impact implementing such a tool would have on existing code, i.e., how much extra code would need to be inserted to use the tool; (3) determine, with the above information, if a tool could be built. With the above steps completed, a more educated decision could be made on the possibility of building a tool. This project was divided into two main steps. The first step was to understand the underlying Qt source code much better. The second step was to build a small prototype that I could use to test ideas. The first step was actually much shorter than I had originally anticipated: understanding the underlying architecture of Qt only took about two weeks. After studying the architecture of Qt and working with the support people at Trolltech, the company that develops Qt, I found a way to test Qt. This project was very successful; I accomplished everything I intended to do. I learned and understood the inner workings of the Qt library well enough that I could build a simple tool that could leverage some of the information in Qt to test the GUI. I was also able to find a commercially available tool to test Qt GUIs. These two things were the main goals of this project, and therefore I consider it a success. In fact, I was able to progress further with my prototype testing than I had originally planned.

  18. Comparison between Unknown Input Estimation of a System Using Projection Operator Approach and Generalized Matrix Inverse Method

    Directory of Open Access Journals (Sweden)

    Ashis De

    2014-01-01

    In this paper, a detailed comparison is presented between the estimates of the unknown inputs of a linear time-invariant system obtained using the projection operator approach and those obtained using the generalized matrix inverse method. The full-order observer constructed using the projection operator approach has been extended and implemented for this purpose.
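When the input distribution matrix has full column rank and the state is perfectly measured, the generalized-matrix-inverse estimate reduces to applying the Moore-Penrose pseudoinverse to the state increment. A minimal sketch of that special case (the paper's full-order observer handles the general setting where the state itself must be estimated):

```python
import numpy as np

# Discrete LTI system x[k+1] = A x[k] + B u[k] with an unknown input u
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])

rng = np.random.default_rng(0)
u_true = rng.normal(size=(50, 1))
x = np.zeros((51, 2))
for k in range(50):
    x[k + 1] = A @ x[k] + B @ u_true[k]

# Generalized-inverse estimate: u[k] = B^+ (x[k+1] - A x[k])
B_pinv = np.linalg.pinv(B)
u_est = np.array([B_pinv @ (x[k + 1] - A @ x[k]) for k in range(50)])
```

With exact state measurements the reconstruction is exact; the interesting comparisons in the paper arise once the state must be recovered by an observer as well.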

  19. PoPe (Projection on Proper elements) for code control: verification, numerical convergence and reduced models. Application to plasma turbulence simulations

    CERN Document Server

    Cartier-Michaud, T; Sarazin, Y; Abiteboul, J; Bufferand, H; Dif-Pradalier, G; Garbet, X; Grandgirard, V; Latu, G; Norscini, C; Passeron, C; Tamain, P

    2015-01-01

    The Projection on Proper elements (PoPe) is a novel method of code control dedicated to (1) checking the correct implementation of models, (2) determining the convergence of numerical methods and (3) characterizing the residual errors of any given solution at very low cost. The basic idea is to establish a bijection between a simulation and the set of equations that generates it. Recovering the equations is direct and relies on a statistical measure of the weight of the various operators. The method can be used in any number of dimensions and in any regime, including chaotic ones. It also provides a procedure for designing reduced models and quantifying the ratio of costs to benefits. PoPe is applied to a kinetic code and a fluid code of plasma turbulence.
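The core of the PoPe idea, recovering the generating equation from a simulation's output via the weights of candidate operators, can be illustrated on a toy ODE: regress the discrete time derivative of the output onto a library of operators and check that the recovered weights match the implemented model. A minimal sketch (not the PoPe code itself):

```python
import numpy as np

# "Code under test": explicit Euler integration of du/dt = a*u + b*u**2
a_true, b_true = -1.0, 0.5
dt, n = 1e-3, 4000
u = np.empty(n)
u[0] = 0.1
for k in range(n - 1):
    u[k + 1] = u[k] + dt * (a_true * u[k] + b_true * u[k] ** 2)

# Projection step: regress the discrete derivative of the output
# onto a library of candidate operators and read off their weights
dudt = (u[1:] - u[:-1]) / dt
library = np.column_stack([u[:-1], u[:-1] ** 2])
coeffs, *_ = np.linalg.lstsq(library, dudt, rcond=None)
```

If the implementation is correct, the recovered weights reproduce the intended coefficients and the regression residual characterizes the numerical error; a missing or mis-coded term would show up as a wrong weight, which is exactly the verification PoPe performs on the turbulence codes.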

  20. Using paleo-climate comparisons to constrain future projections in CMIP5

    Directory of Open Access Journals (Sweden)

    G. A. Schmidt

    2013-02-01

    We present a description of the theoretical framework and "best practice" for using the paleo-climate model component of the Coupled Model Intercomparison Project Phase 5 (CMIP5) to constrain future projections of climate using the same models. The constraints arise from measures of skill in hindcasting paleo-climate changes from the present over three periods: the Last Glacial Maximum (LGM; 21 thousand years before present, ka), the mid-Holocene (MH; 6 ka) and the Last Millennium (LM; 850–1850 CE). The skill measures may be used to validate robust patterns of climate change across scenarios or to distinguish between models that have differing outcomes in future scenarios. We find that the multi-model ensemble of paleo-simulations is adequate for addressing at least some of these issues. For example, selected benchmarks for the LGM and MH are correlated with the rank of future projections of precipitation/temperature or sea ice extent, indicating that models that produce the best agreement with paleoclimate information give demonstrably different future results than the rest of the models. We also find that some comparisons, for instance those associated with model variability, are strongly dependent on uncertain forcing time series or show time-dependent behaviour, making direct inferences for the future problematic. Overall, we demonstrate that there is a strong potential for the paleo-climate simulations to help inform the future projections, and we urge all the modeling groups to complete this subset of the CMIP5 runs.

  1. Preliminary comparison of the conventional and quasi-snowflake divertor configurations with the 2D code EDGE2D/EIRENE in the FAST tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Viola, B.; Maddaluno, G.; Pericoli Ridolfini, V. [EURATOM-ENEA Association, C.R. Frascati, Via E. Fermi 45, 00044 Frascati (Rome) (Italy); Corrigan, G.; Harting, D. [Culham Centre of Fusion Energy, EURATOM-Association, Abingdon (United Kingdom); Mattia, M. [Dipartimento di Informatica, Sistemi e Produzione, Universita di Roma, Tor Vergata, Via del Politecnico, 00133 Roma (Italy); Zagorski, R. [Institute of Plasma Physics and Laser Microfusion-EURATOM Association, 01-497 Warsaw (Poland)

    2014-06-15

    The new magnetic configurations proposed for tokamak divertors, the snowflake and the super-X, intended to mitigate the problem of power exhaust in reactors, have clearly demonstrated the need for accurate and reliable modeling of the physics governing the interaction with the plates. The initial effort undertaken jointly by ENEA and IPPLM focused on exploiting a simple and versatile modeling tool, the 2D TECXY code, to obtain a preliminary comparison between the conventional and snowflake configurations for the proposed new device FAST, which should realize an edge plasma with properties quite close to those of a reactor. The very interesting features found for the snowflake, namely a power load mitigation much larger than expected directly from the change of the magnetic topology, further motivated us to check these results with the more sophisticated computational tool EDGE2D coupled with the neutral code module EIRENE. After preparatory work to adapt this code combination to deal with non-conventional single-null equilibria, and in particular with second-order nulls in the poloidal field generated in the snowflake configuration, in this paper we describe the first activity to compare these codes and discuss the first results obtained for FAST. The outcome of these EDGE2D runs is in qualitative agreement with that of TECXY, confirming the potential benefit obtainable from a snowflake configuration. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  2. Coding the direct/indirect pathways by D1 and D2 receptors is not valid for accumbens projections.

    Science.gov (United States)

    Kupchik, Yonatan M; Brown, Robyn M; Heinsbroek, Jasper A; Lobo, Mary Kay; Schwartz, Danielle J; Kalivas, Peter W

    2015-09-01

    It is widely accepted that D1 dopamine receptor-expressing striatal neurons convey their information directly to the output nuclei of the basal ganglia, whereas D2-expressing neurons do so indirectly via pallidal neurons. Combining optogenetics and electrophysiology, we found that this architecture does not apply to mouse nucleus accumbens projections to the ventral pallidum. Thus, current thinking attributing D1 and D2 selectivity to accumbens projections akin to dorsal striatal pathways needs to be reconsidered.

  3. Comparison of spectra for validation of Penelope code for the energy range used in mammography; Comparacao de espectros para validacao do codigo PENELOPE para faixa de energia usada em mamografia

    Energy Technology Data Exchange (ETDEWEB)

    Albuquerque, M.A.G.; Ferreira, N.M.P.D., E-mail: malbuqueque@hotmail.co [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil); Pires, E.; Ganizeu, M.D.; Almeida, C.E. de, E-mail: marianogd@uol.com.b [Universidade do Estado do Rio de Janeiro (UERJ), RJ (Brazil); Prizio, R.; Peixoto, J.G., E-mail: guilherm@ird.gov.b [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    The spectra simulated with the Penelope code were compared with spectra obtained experimentally using a silicon PIN photodiode detector, and with spectra calculated by the IPEN code. The comparison showed a concordance of 93.3%, making the simulated spectra an option for the study of X-ray spectroscopy in the voltage range used in mammography.

  4. FY08 LDRD Final Report A New Method for Wave Propagation in Elastic Media LDRD Project Tracking Code: 05-ERD-079

    Energy Technology Data Exchange (ETDEWEB)

    Petersson, A

    2009-01-29

    The LDRD project 'A New Method for Wave Propagation in Elastic Media' developed several improvements to the traditional finite difference technique for seismic wave propagation, including a summation-by-parts discretization which is provably stable for arbitrary heterogeneous materials, an accurate treatment of non-planar topography, local mesh refinement, and stable outflow boundary conditions. This project also implemented these techniques in a parallel open source computer code called WPP, and participated in several seismic modeling efforts to simulate ground motion due to earthquakes in Northern California. This research has been documented in six individual publications which are summarized in this report. Of these publications, four are published refereed journal articles, one is an accepted refereed journal article which has not yet been published, and one is a non-refereed software manual. The report concludes with a discussion of future research directions and exit plan.
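The summation-by-parts (SBP) discretization mentioned above mimics integration by parts at the discrete level, which is what yields provable stability for arbitrary heterogeneous materials. A minimal sketch of the classical second-order SBP first-derivative operator and a numerical check of the SBP identity (illustrative, not WPP's actual operators):

```python
import numpy as np

n, h = 11, 0.1  # grid points and spacing

# Second-order SBP first derivative: D = H^{-1} Q, where H is a diagonal
# quadrature (norm) matrix and Q + Q^T = diag(-1, 0, ..., 0, 1)
H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])
Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5
D = np.linalg.solve(H, Q)

# Discrete integration by parts:
#   u^T H (D v) + (D u)^T H v = u_N v_N - u_0 v_0
u = np.sin(np.linspace(0.0, 1.0, n))
v = np.cos(np.linspace(0.0, 1.0, n))
lhs = u @ H @ (D @ v) + (D @ u) @ H @ v
rhs = u[-1] * v[-1] - u[0] * v[0]
```

Because the identity holds exactly (to rounding), energy estimates for the continuous wave equation carry over to the discretization, which is the stability mechanism the project exploited.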

  5. Comparison of Chinese and Indian Design Requirements for Water Treatment Systems in Power Plant Codes

    Institute of Scientific and Technical Information of China (English)

    徐淑姣

    2015-01-01

    The Chinese power design industry is increasingly involved in Indian power generation projects. This paper compares the design requirements for water treatment systems in the Chinese and Indian power plant codes and summarizes the similarities and differences between the two countries' engineering design standards, providing a reference for the water treatment system design of Indian power plant projects subsequently undertaken by Chinese institutes.

  6. Line-Shape Code Comparison through Modeling and Fitting of Experimental Spectra of the C ii 723-nm Line Emitted by the Ablation Cloud of a Carbon Pellet

    Directory of Open Access Journals (Sweden)

    Mohammed Koubiti

    2014-07-01

    Various line-shape modeling codes are compared to each other through the profile of the C ii 723-nm line for typical plasma conditions encountered in the ablation clouds of carbon pellets injected in magnetic fusion devices. Calculations were performed for a single electron density of 10^17 cm^-3 and two plasma temperatures (T = 2 and 4 eV). Ion and electron temperatures were assumed to be equal (Te = Ti = T). The magnetic field, B, was set equal either to zero or to 4 T. Comparisons between the line-shape modeling codes and two experimental spectra of the C ii 723-nm line, measured perpendicularly to the B-field in the Large Helical Device (LHD) using linear polarizers, are also discussed.

  7. Spread codes and spread decoding in network coding

    OpenAIRE

    Manganiello, F; Gorla, E.; Rosenthal, J.

    2008-01-01

    In this paper we introduce the class of spread codes for use in random network coding. Spread codes are based on the construction of spreads in finite projective geometry. The major contribution of the paper is an efficient decoding algorithm for spread codes, correcting errors up to half the minimum distance.

  8. CSTminer: a web tool for the identification of coding and noncoding conserved sequence tags through cross-species genome comparison.

    Science.gov (United States)

    Castrignanò, Tiziana; Canali, Alessandro; Grillo, Giorgio; Liuni, Sabino; Mignone, Flavio; Pesole, Graziano

    2004-07-01

    The identification and characterization of genome tracts that are highly conserved across species during evolution may contribute significantly to the functional annotation of whole-genome sequences. Indeed, such sequences are likely to correspond to known or unknown coding exons or regulatory motifs. Here, we present a web server implementing a previously developed algorithm that, by comparing user-submitted genome sequences, is able to identify statistically significant conserved blocks and assess their coding or noncoding nature through the measure of a coding potential score. The web tool, available at http://www.caspur.it/CSTminer/, is dynamically interconnected with the Ensembl genome resources and produces a graphical output showing a map of detected conserved sequences and annotated gene features.

  9. HEFF: A user's manual and guide for the HEFF code for thermal-mechanical analysis using the boundary-element method; Version 4.1: Yucca Mountain Site Characterization Project

    Energy Technology Data Exchange (ETDEWEB)

    St. John, C.M.; Sanjeevan, K. [Agapito (J.F.T.) and Associates, Inc., Grand Junction, CO (United States)

    1991-12-01

    The HEFF code combines a simple boundary-element method of stress analysis with closed-form solutions for constant or exponentially decaying heat sources in an infinite elastic body to obtain an approximate method for the analysis of underground excavations in a rock mass with heat generation. This manual describes the theoretical basis for the code, the code structure, model preparation, and the steps taken to assure that the code correctly performs its intended functions. The material contained within the report addresses the Software Quality Assurance Requirements for the Yucca Mountain Site Characterization Project. 13 refs., 26 figs., 14 tabs.

  10. Offshore code comparison collaboration continuation within IEA Wind Task 30: Phase II results regarding a floating semisubmersible wind system

    DEFF Research Database (Denmark)

    Robertson, Amy; Jonkman, Jason M.; Vorpahl, Fabian

    2014-01-01

    in a greater understanding of offshore floating wind turbine dynamics and modeling techniques, and better knowledge of the validity of various approximations. The lessons learned from this exercise have improved the participants’ codes, thus improving the standard of offshore wind turbine modeling....

  11. Comparison of TG-43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes.

    Science.gov (United States)

    Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S

    2016-03-01

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies in some of the cross-section files have led to errors in ^125I and ^103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources calculated with three different versions of the MCNP code: MCNP4C, MCNP5, and MCNPX. In these simulations, for each source type the source and phantom geometries, as well as the number of photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as ^125I and ^103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and the other codes at a distance of 6 cm for ^103Pd and 10 cm for ^125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for ^192Ir and less than 1.2% for ^137Cs between the three codes. PACS number(s): 87.56.bg.

  12. The aeroelastic code FLEXLAST

    Energy Technology Data Exchange (ETDEWEB)

    Visser, B. [Stork Product Eng., Amsterdam (Netherlands)

    1996-09-01

    To support the discussion on aeroelastic codes, a description of the code FLEXLAST was given and experiences within benchmarks and measurement programmes were summarized. The code FLEXLAST has been developed since 1982 at Stork Product Engineering (SPE). Since 1992 FLEXLAST has been used by Dutch industries for wind turbine and rotor design. Based on the comparison with measurements, it can be concluded that the main shortcomings of wind turbine modelling lie in the field of aerodynamics, wind field and wake modelling. (au)

  13. A first assessment of the NEPTUNE_CFD code: Instabilities in a stratified flow comparison between the VOF method and a two-field approach

    Energy Technology Data Exchange (ETDEWEB)

    Bartosiewicz, Yann [Universite Catholique de Louvain (UCL), Faculty of Applied Sciences, Mechanical Engineering Department, TERM Division, Place du Levant 2, 1348 Louvain-la-Neuve (Belgium)], E-mail: yann.bartosiewicz@uclouvain.be; Lavieville, Jerome [Universite Catholique de Louvain (UCL), Faculty of Applied Sciences, Mechanical Engineering Department, TERM Division, Place du Levant 2, 1348 Louvain-la-Neuve (Belgium); Seynhaeve, Jean-Marie [Universite Catholique de Louvain (UCL), Faculty of Applied Sciences, Mechanical Engineering Department, TERM Division, Place du Levant 2, 1348 Louvain-la-Neuve (Belgium)], E-mail: jm.seynhaeve@uclouvain.be

    2008-04-15

    This paper presents some results concerning a first benchmark for the new European research code for thermal hydraulics computations, NEPTUNE_CFD. The benchmark relies on the Thorpe experiment to model the occurrence of instabilities in a stratified two-phase flow. The first part of this work was to create a numerical trial case with the VOF approach. The results, in terms of the time of onset of the instability, critical wave-number and wave phase speed, are rather good compared to linear inviscid theory and experimental data. Additional numerical tests showed the effect of the surface tension and density ratio on the growth dynamics of the instability and the structure of the waves. In the second part, a code-to-code (VOF/multi-field) comparison is performed for a case with zero surface tension. The results showed some discrepancies in terms of wave amplitudes and growth rates, and a time shift in the global dynamics. Afterward, two surface tension formulations are proposed in the multi-field approach. Both formulations provided similar results. The time of onset of the instability, the most amplified wave-number and its amplitude were in rather good agreement with the linear analysis and the VOF results. However, the time-shifted dynamics was still observed.

  14. Analysis of the quench propagation along Nb3Sn Rutherford cables with the THELMA code. Part II: Model predictions and comparison with experimental results

    Science.gov (United States)

    Manfreda, G.; Bellina, F.; Bajas, H.; Perez, J. C.

    2016-12-01

    To improve the technology of the new generation of accelerator magnets, prototypes are being manufactured and tested in several laboratories. In parallel, many numerical analyses are being carried out to predict the magnets behaviour and interpret the experimental results. This paper focuses on the quench propagation velocity, which is a crucial parameter as regards the energy dissipation along the magnet conductor. The THELMA code, originally developed for cable-in-conduit conductors for fusion magnets, has been used to study such quench propagation. To this purpose, new code modules have been added to describe the Rutherford cable geometry, the material non-linear thermal properties and to describe the thermal conduction problem in transient regime. THELMA can describe the Rutherford cable at the strand level, modelling both the electrical and thermal contact resistances between strands and enabling the analysis of the effects of local hot spots and quench heaters. This paper describes the model application to a sample of Short Model Coil tested at CERN: a comparison is made between the experimental results and the model prediction, showing a good agreement. A comparison is also made with the prediction of the most common analytical models, which give large inaccuracies when dealing with low n-index cables like Nb3Sn cables.

  15. Simulation and Analysis of Small Break LOCA for AP1000 Using RELAP5-MV and Its Comparison with NOTRUMP Code

    Directory of Open Access Journals (Sweden)

    Eltayeb Yousif

    2017-01-01

    Many reactor safety simulation codes for nuclear power plants (NPPs) have been developed. However, it is very important to evaluate these codes by testing different accident scenarios in actual plant conditions. In reactor analysis, the small break loss of coolant accident (SBLOCA) is an important safety issue. The RELAP5-MV Visualized Modularization software is recognized as one of the best-estimate transient simulation programs for light water reactors (LWRs). RELAP5-MV has new options for improved modeling methods and interactive graphics display. Though RELAP5-MV incorporates the same models as RELAP5/MOD 4.0, it differs significantly in the interface for preparing the input deck. In this paper, RELAP5-MV is applied to the transient analysis of the variation of the thermal-hydraulic parameters in the primary loop under a SBLOCA in an AP1000 NPP. The upper limit of SBLOCA (10 inches) is simulated in the cold leg of the reactor, and the calculations were performed up to a transient time of 450,000.0 s. The results obtained from RELAP5-MV are in good agreement with those of the NOTRUMP code obtained by Westinghouse when compared under the same conditions. It can be easily inferred that RELAP5-MV, in a similar manner to RELAP5/MOD4.0, is suitable for simulating a SBLOCA scenario.

  16. Comparisons between Arabidopsis thaliana and Drosophila melanogaster in relation to Coding and Noncoding Sequence Length and Gene Expression

    Directory of Open Access Journals (Sweden)

    Rachel Caldwell

    2015-01-01

    Full Text Available There is a continuing interest in the analysis of gene architecture and gene expression to determine what relationship may exist between them. Advances in high-quality sequencing technologies and large-scale resource datasets have increased the understanding of such relationships through cross-referencing of expression data against large genomic datasets. Although a negative correlation between expression level and gene (especially transcript) length has been generally accepted, some conflicting results have arisen in the literature concerning the impacts of different regions of genes, and the underlying reason is not well understood. This research applies quantile regression techniques to the statistical analysis of coding and noncoding sequence length and gene expression data in the plant Arabidopsis thaliana and the fruit fly Drosophila melanogaster, to determine whether a relationship exists and whether there is any variation or similarity between these species. The quantile regression analysis found that the correlations between coding sequence length and gene expression varied, and similarities emerged for noncoding sequence length (5′ and 3′ UTRs) between the animal and plant species. In conclusion, the information described in this study provides the basis for further exploration into gene regulation with regard to coding and noncoding sequence length.
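
As a minimal illustration of the quantile-regression idea used in the analysis: fitting at level tau amounts to minimizing the pinball (check) loss, whose constant-prediction minimizer is the empirical tau-quantile. The data and names below are invented for the example:

```python
def pinball_loss(y, q, tau):
    """Average pinball (quantile) loss of the constant prediction q at level tau."""
    return sum((tau * (v - q)) if v >= q else ((tau - 1) * (v - q))
               for v in y) / len(y)

def best_constant(y, tau, candidates):
    """Candidate constant minimizing the pinball loss; over a fine grid this
    approaches the empirical tau-quantile of y."""
    return min(candidates, key=lambda q: pinball_loss(y, q, tau))

# Toy 'expression level' data: at tau = 0.5 the minimizer is the median.
data = [1.0, 2.0, 2.5, 3.0, 10.0]
q50 = best_constant(data, 0.5, data)
```

Full quantile regression replaces the constant with a linear function of the covariate (here, sequence length), but the loss being minimized is the same.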

  17. Practical implications of procedures developed in IDEA project--comparison with traditional methods.

    Science.gov (United States)

    Andrasi, A; Bouvier, C; Brandl, A; de Carlan, L; Fischer, H; Franck, D; Höllriegl, V; Li, W B; Oeh, U; Ritt, J; Roth, P; Schlagbauer, M; Schmitzer, Ch; Wahl, W; Zombori, P

    2007-01-01

    The IDEA project aimed to improve the assessment of incorporated radionuclides through the development of more reliable and possibly faster in vivo and bioassay monitoring techniques, and to make use of such enhancements for improvements in routine monitoring. For the direct in vivo monitoring technique, the optimum choice of detectors for different monitoring tasks has been investigated in terms of material, size and background, in order to increase counting efficiency and reduce background. Detailed studies have been performed to investigate the manifold advantageous applications and capabilities of numerical simulation methods for the calibration and optimisation of in vivo counting systems. This calibration method can be applied with particular advantage to the measurement of low-energy photon-emitting radionuclides, where individual variability is a significant source of uncertainty. In bioassay measurements, the use of inductively coupled plasma mass spectrometry (ICP-MS) can considerably improve both the measurement speed and the lower limit of detection currently achievable with alpha spectrometry for long-lived radionuclides. The work carried out in this project provided detailed guidelines for optimum performance of the ICP-MS technique, applied mainly to the determination of uranium and thorium nuclides in urine, including the sampling procedure, the operational parameters of the instruments and the interpretation of the measured data. The paper demonstrates the main advantages of the investigated techniques in comparison with the performance of methods commonly applied in routine monitoring practice.

  18. Identifying personal microbiomes using metagenomic codes.

    Science.gov (United States)

    Franzosa, Eric A; Huang, Katherine; Meadow, James F; Gevers, Dirk; Lemon, Katherine P; Bohannan, Brendan J M; Huttenhower, Curtis

    2015-06-02

    Community composition within the human microbiome varies across individuals, but it remains unknown if this variation is sufficient to uniquely identify individuals within large populations or stable enough to identify them over time. We investigated this by developing a hitting set-based coding algorithm and applying it to the Human Microbiome Project population. Our approach defined body site-specific metagenomic codes: sets of microbial taxa or genes prioritized to uniquely and stably identify individuals. Codes capturing strain variation in clade-specific marker genes were able to distinguish among hundreds of individuals at an initial sampling time point. In comparisons with follow-up samples collected 30-300 d later, ∼30% of individuals could still be uniquely pinpointed using metagenomic codes from a typical body site; coincidental (false positive) matches were rare. Codes based on the gut microbiome were exceptionally stable and pinpointed >80% of individuals. The failure of a code to match its owner at a later time point was largely explained by the loss of specific microbial strains (at current limits of detection) and was only weakly associated with the length of the sampling interval. In addition to highlighting patterns of temporal variation in the ecology of the human microbiome, this work demonstrates the feasibility of microbiome-based identifiability, a result with important ethical implications for microbiome study design. The datasets and code used in this work are available for download from huttenhower.sph.harvard.edu/idability.
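
The hitting-set-based coding idea can be sketched greedily: keep adding features carried by the target individual until no other individual carries the whole feature set. This toy version (with invented taxa labels) is only a sketch of the published algorithm, which additionally prioritizes features for temporal stability:

```python
def metagenomic_code(target, population, max_size=10):
    """Greedy hitting-set-style 'code' for one individual.

    population -- dict mapping individual name -> set of observed features
    Returns a set of features that only the target carries in full,
    or None if no such code of at most max_size features is found.
    """
    others = [feats for name, feats in population.items() if name != target]
    code, remaining = set(), list(others)
    for _ in range(max_size):
        # individuals still indistinguishable: they carry the whole code so far
        remaining = [f for f in remaining if code <= f]
        if not remaining:
            break
        # pick the target feature shared by the fewest confusable individuals
        feature = min(population[target] - code,
                      key=lambda x: sum(x in f for f in remaining),
                      default=None)
        if feature is None:
            break
        code.add(feature)
    remaining = [f for f in remaining if code <= f]
    return code if not remaining else None

# Toy population of marker-gene sets (hypothetical taxa labels):
pop = {"A": {"t1", "t2", "t3"}, "B": {"t1", "t2"}, "C": {"t2", "t4"}}
code_A = metagenomic_code("A", pop)
```

In this toy case the single feature "t3" suffices, since no other individual carries it.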

  19. MELCOR Accident Consequence Code System (MACCS)

    Energy Technology Data Exchange (ETDEWEB)

    Jow, H.N.; Sprung, J.L.; Ritchie, L.T. (Sandia National Labs., Albuquerque, NM (USA)); Rollstin, J.A. (GRAM, Inc., Albuquerque, NM (USA)); Chanin, D.I. (Technadyne Engineering Consultants, Inc., Albuquerque, NM (USA))

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previously used CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. Volume I, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems. Volume II, the Model Description, describes the underlying models that are implemented in the code, and Volume III, the Programmer's Reference Manual, describes the code's structure and database management. 59 refs., 14 figs., 15 tabs.

  20. MELCOR Accident Consequence Code System (MACCS)

    Energy Technology Data Exchange (ETDEWEB)

    Rollstin, J.A. (GRAM, Inc., Albuquerque, NM (USA)); Chanin, D.I. (Technadyne Engineering Consultants, Inc., Albuquerque, NM (USA)); Jow, H.N. (Sandia National Labs., Albuquerque, NM (USA))

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previously used CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projections, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. Volume I, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems. Volume II, the Model Description, describes the underlying models that are implemented in the code, and Volume III, the Programmer's Reference Manual, describes the code's structure and database management.

  1. Expert system interaction with existing analysis codes

    Energy Technology Data Exchange (ETDEWEB)

    Ransom, V.H.; Fink, R.K.; Bertch, W.J.; Callow, R.A.

    1986-01-01

    Coupling expert systems with existing engineering analysis codes is a promising area in the field of artificial intelligence. The added intelligence can provide for easier and less costly use of the code and also reduce the potential for code misuse. This paper will discuss the methods available to allow interaction between an expert system and a large analysis code running on a mainframe. Concluding remarks will identify potential areas of expert system application, with specific areas that are being considered in a current research program. The difficulty of interaction between an analysis code and an expert system is due to the incompatibility between the FORTRAN environment used for the analysis code and the AI environment used for the expert system. Three methods, excluding file transfer techniques, are discussed to help overcome this incompatibility. The first method is linking the FORTRAN routines to the LISP environment on the same computer. Various LISP dialects available on mainframes and their interlanguage communication capabilities are discussed. The second method involves network interaction between a LISP machine and a mainframe computer. Comparisons between the linking method and networking are noted. The third method involves the use of an expert system tool that is compatible with a FORTRAN environment. Several available tools are discussed. With the interaction methods identified, several potential application areas are considered. The selection of the specific areas that will be developed for the pilot project and applied to a thermal-hydraulic energy analysis code is noted.

  2. Description of premixing with the MC3D code including molten jet behavior modeling. Comparison with FARO experimental results

    Energy Technology Data Exchange (ETDEWEB)

    Berthoud, G.; Crecy, F. de; Meignen, R.; Valette, M. [CEA-G, DRN/DTP/SMTH, 17 rue des Martyrs, 38054 Grenoble Cedex 9 (France)

    1998-01-01

    The premixing phase of a molten fuel-coolant interaction is studied by means of mechanistic multidimensional calculations. Besides water and steam, the corium droplet flow and the continuous corium jet flow are calculated independently. The 4-field MC3D code and a detailed hot jet fragmentation model are presented. MC3D calculations are compared with the FARO L14 experiment results and are found to give satisfactory results; the heat transfer and jet fragmentation models are still to be improved to better predict the final debris size. (author)

  3. Parameter calculation tool for the application of radiological dose projection codes; Herramienta de calculo de parametros para la aplicacion de codigos de proyeccion de dosis radiologicas

    Energy Technology Data Exchange (ETDEWEB)

    Galindo G, I. F.; Vergara del C, J. A.; Galvan A, S. J. [Instituto Nacional de Electricidad y Energias Limpias, Reforma 113, Col. Palmira, 62490 Cuernavaca, Morelos (Mexico); Tijerina S, F., E-mail: francisco.tijerina@cfe.gob.mx [CFE, Central Nucleoelectrica Laguna Verde, Carretera Federal Cardel-Nautla Km 42.5, 91476 Municipio Alto Lucero, Veracruz (Mexico)

    2016-09-15

    The use of specialized codes to estimate the projected radiological dose for a postulated emergency event at a nuclear power plant requires that certain plant data be available according to the event being simulated. The calculation of the possible radiological release is the critical activity for carrying out the emergency actions. However, not all of the required plant data are obtained directly from the plant; some need to be calculated. In this paper we present a computational tool that calculates the plant data required to use the radiological dose estimation codes. The tool provides the required information when there is an emergency gas venting event from the primary containment atmosphere, whether wet well or dry well, and also calculates the time in which the spent fuel would be uncovered in the event of a water leak through some of the walls or the floor of the pool. The tool implements mathematical models for the processes involved, such as compressible flow in pipes, both with area change and for constant area, taking into account the effects of friction, and, for the case of the spent fuel pool, hydraulic models to calculate the time in which a container is emptied. The models implemented in the tool are validated with data from the literature for simulated cases, and the results from the tool are very similar to those of the references. This tool will also support the use of the radiological dose estimation codes in postulated emergency cases, so that the actions to be taken can be determined adequately and efficiently, in a way that minimizes the impact. (Author)
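
For the spent-fuel-pool part, a Torricelli-type orifice model gives the flavour of such a drain-time calculation; the geometry below is hypothetical and not taken from the paper:

```python
import math

def time_to_level(A_pool, a_leak, h0, h1, g=9.81):
    """Torricelli-type estimate of the time for the water level to fall from
    h0 to h1 [m] through an opening of area a_leak [m^2] in a pool of surface
    area A_pool [m^2] (no inflow, ideal orifice, no discharge-coefficient).

    Integrating A dh/dt = -a * sqrt(2 g h) gives
        t = (A/a) * sqrt(2/g) * (sqrt(h0) - sqrt(h1)).
    """
    if h1 > h0:
        raise ValueError("h1 must not exceed h0")
    return (A_pool / a_leak) * math.sqrt(2.0 / g) * (math.sqrt(h0) - math.sqrt(h1))

# Hypothetical pool: 100 m^2 surface, 1 cm^2 crack, level falling from 12 m
# to the (assumed) top of the stored fuel at 4 m:
t = time_to_level(A_pool=100.0, a_leak=1e-4, h0=12.0, h1=4.0)
```

A real tool would add a discharge coefficient and any makeup-water inflow, but the scaling with leak area and level is the same.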

  4. Gamma Knife Simulation Using the MCNP4C Code and the Zubal Phantom and Comparison with Experimental Data

    Directory of Open Access Journals (Sweden)

    Somayeh Gholami

    2010-06-01

    Full Text Available Introduction: Gamma Knife is an instrument specially designed for treating brain disorders. In the Gamma Knife, 201 narrow beams from cobalt-60 sources intersect at an isocenter point to treat brain tumors. The tumor is placed at the isocenter and is treated by the emitted gamma rays. Therefore, there is a high dose at this point and a low dose is delivered to the normal tissue surrounding the tumor. Material and Method: In the current work, the MCNP simulation code was used to simulate the Gamma Knife. The calculated values were compared with the experimental ones and with previous works. The dose distribution was compared for different collimators in a water phantom and in the Zubal brain-equivalent phantom. The dose profiles were obtained along the x, y and z axes. Result: The evaluation of the developed code was performed using experimental data, and a good agreement was found between our simulation and the experimental data. Discussion: Our results showed that the skull bone makes a high contribution to both the scattered and the absorbed dose. In other words, inserting the exact materials of the brain and other organs of the head into the digital phantom improves the quality of treatment planning. This work concerns the measurement of absorbed dose and the improvement of the treatment planning procedure in Gamma Knife radiosurgery of the brain.

  5. Excitation functions of proton induced reactions on {sup nat}Os up to 65 MeV: Experiments and comparison with results from theoretical codes

    Energy Technology Data Exchange (ETDEWEB)

    Hermanne, A.; Adam Rebeles, R. [Cyclotron Laboratory, Vrije Universiteit Brussel, Brussels 1090 (Belgium); Tárkányi, F.; Takács, S. [Institute of Nuclear Research, Hungarian Academy of Science, 4026 Debrecen (Hungary)

    2015-02-15

    Activation of thin {sup nat}Os targets, electrodeposited on Ni backings, was investigated for the first time in stacked-foil irradiations with 65 MeV and 34 MeV proton beams. Assessment of the produced radionuclides by high-resolution gamma-ray spectroscopy yielded excitation functions for the formation of {sup 184,} {sup 185,} {sup 186m,m+g,} {sup 187m+g,} {sup 188m+g,} {sup 189m2+m1+g,} {sup 190m2,m1+g,} {sup 192m1+g}Ir and {sup 185cum,} {sup 191m+g}Os, {sup 183m+g}Re. Where available, comparisons were made with the reaction cross sections obtained in two earlier studies on enriched {sup 192}Os. Reduced uncertainty on the cross sections is obtained by simultaneous remeasurement of the {sup 27}Al(p,x){sup 22,24}Na, {sup nat}Ni(p,x){sup 57}Ni and {sup nat}Ti(p,x){sup 48}V monitor reactions over wide relevant energy ranges. The monitoring was confirmed by assessing the excitation functions of {sup 61}Cu, {sup 56}Ni, {sup 55,56,57,58}Co and {sup 52}Mn induced in the Ni backings and comparing them with a recent compilation for most of these radionuclides. Contributing reactions and overall cross sections are discussed and were evaluated in comparison with the results of the theoretical code TALYS 1.6 (values from the on-line library TENDL-2013).
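
The link between a measured cross section and the produced activity follows the standard thin-target activation equation; the sketch below uses purely illustrative numbers, not values from this measurement:

```python
import math

def eob_activity(n_areal, beam_particles_per_s, sigma_mb, half_life_h, t_irr_h):
    """End-of-bombardment activity [Bq] of a thin target:
        A = N * I * sigma * (1 - exp(-lambda * t_irr))

    n_areal  -- target nuclei per cm^2
    sigma_mb -- cross section in millibarn (1 mb = 1e-27 cm^2)
    """
    lam = math.log(2) / (half_life_h * 3600.0)
    production_rate = n_areal * beam_particles_per_s * sigma_mb * 1e-27
    return production_rate * (1.0 - math.exp(-lam * t_irr_h * 3600.0))

# Illustrative values only: ~1 uA proton beam (6.25e12 p/s), 100 mb cross
# section, 13 h half-life, 1 h irradiation.
A = eob_activity(n_areal=1e20, beam_particles_per_s=6.25e12,
                 sigma_mb=100.0, half_life_h=13.0, t_irr_h=1.0)
```

For long irradiations the bracket saturates at 1, so the activity approaches the constant production rate.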

  6. Project of decree relative to the licensing and statement system of nuclear activities and to their control and bearing various modifications of the public health code and working code; Projet de decret relatif au regime d'autorisation et de declaration des activites nucleaires et a leur controle et portant diverses modifications du code de la sante publique et du code du travail

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-07-01

    This decree concerns the control of high-level sealed radioactive sources and orphan sources. Its objectives are to introduce administrative simplification, especially in the licensing and declaration system for radiation sources, to reinforce the control measures planned by the public health code and by the employment code, and to clarify and complement the wording of several already existing provisions. (N.C.)

  7. Simulation approaches for the two-phase flow in saline repositories using the code TOUGH2-GRS. Report in the frame of the project ZIESEL. Two-phase flow in a saline repository using the example ERAM; Ansaetze zur Simulation der Zweiphasenstroemung in salinaren Endlagern mit dem Code TOUGH2-GRS. Bericht im Vorhaben ZIESEL. Zweiphasenfluss in einem salinaren Endlager am Beispiel des ERAM

    Energy Technology Data Exchange (ETDEWEB)

    Navarro, Martin; Fischer, Heidemarie; Seher, Holger; Weyand, Torben

    2016-10-15

    The simulation approaches for the two-phase flow in saline repositories using the code TOUGH2-GRS cover the following issues: simulation of gravitational flows in horizontal galleries without vertical discretization, a homogenization approach for the simulation of the two-phase flow in converging, partly backfilled galleries, qualification of the convergence approach implemented by GRS in the code TOUGH2-GRS, discretization effects during the replacement of liquid by gas, and consequences for the system analyses in the frame of the project ZIESEL.

  8. WITHDRAWAL OF PREVIOUS COMPLAINT. A COMPARISON OF THE OLD AND THE NEW CRIMINAL CODE. PROBLEMS OF COMPARATIVE LAW

    Directory of Open Access Journals (Sweden)

    Alin Sorin NICOLESCU

    2015-07-01

    Full Text Available In criminal law the previous complaint has a double legal valence, material and procedural in nature, constituting a condition for criminal liability, but also a functional condition in cases expressly and limitatively provided by law, a consequence of the conditioning of the criminal sanction. For certain offenses, criminal law conditions the initiation of criminal proceedings on the introduction of a previous complaint by the injured party, its absence constituting a cause that removes criminal liability. From the perspective of material criminal law, the conditioning of the existence of the previous complaint, its lack and its withdrawal are regulated by art. 157 and 158 of the New Penal Code, with significant changes in relation to the old regulation of the institution. In procedural terms, the previous complaint is regulated in art. 295-298 of the New Code of Criminal Procedure. Regarding the withdrawal of the previous complaint, in the case of offenses for which the initiation of criminal proceedings is subject to the existence of such a complaint, we note that in the current Criminal Code this legal institution is regulated separately, representing both a cause for the removal of criminal liability and a cause that precludes criminal action. This unilateral act of will of the injured party, the withdrawal of the previous complaint, may be exercised only under certain conditions, namely: it can only be promoted in the case of the offenses for which the initiation of criminal proceedings is subject to the introduction of a previous complaint; it is made exclusively by the right holder, by legal representatives or with the consent of the persons required by law for persons lacking legal capacity or having limited legal capacity; it must intervene before a final judgment is given; and it must represent an express and explicit manifestation.
A novelty is represented by the possibility of withdrawing the previous complaint if the prosecution was driven ex officio, although for

  9. Monte Carlo determination of the conversion coefficients Hp(3)/Ka in a right cylinder phantom with 'PENELOPE' code. Comparison with 'MCNP' simulations.

    Science.gov (United States)

    Daures, J; Gouriou, J; Bordy, J M

    2011-03-01

    This work has been performed within the frame of the European Union ORAMED project (Optimisation of RAdiation protection for MEDical staff). The main goal of the project is to improve the standards of protection for medical staff in procedures resulting in potentially high exposures and to develop methodologies for better assessing, and for reducing, the exposures of medical staff. Work Package WP2 is involved in the development of practical eye-lens dosimetry in interventional radiology. This study is complementary to the part of the ENEA report concerning the calculation, with the MCNP-4C code, of the conversion factors related to the operational quantity H(p)(3). In this study, a set of energy- and angular-dependent conversion coefficients (H(p)(3)/K(a)), in the newly proposed cylindrical phantom made of ICRU tissue, have been calculated with the Monte Carlo codes PENELOPE and MCNP5. The H(p)(3) values have been determined in terms of absorbed dose, according to the definition of this quantity, and also with the kerma approximation, as formerly reported in ICRU reports. At low photon energies (up to 1 MeV), the results obtained with the two methods are consistent. Nevertheless, large differences appear at higher energies. This is mainly due to the lack of electronic equilibrium, especially for small-angle incidences. The values of the conversion coefficients obtained with the MCNP-4C code published by ENEA agree quite well with the kerma-approximation calculations obtained with PENELOPE. We also performed the same calculations with the code MCNP5 with two types of tallies: F6 for the kerma approximation and *F8 for estimating the absorbed dose, which is deposited by secondary electrons. The PENELOPE and MCNP5 results agree for the kerma approximation and for the absorbed-dose calculation of H(p)(3), and prove that, for photon energies larger than 1 MeV, the transport of the secondary electrons has to be taken into account.
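
The basic Monte Carlo ingredient behind all of these calculations is the sampling of exponential photon free paths. This minimal sketch checks a narrow-beam transmission estimate against the analytic attenuation law; the attenuation coefficient is illustrative, not an ORAMED value:

```python
import math
import random

def transmission_mc(mu_cm, depth_cm, n=200_000, seed=1):
    """Monte Carlo estimate of the fraction of normally incident photons
    reaching a given depth without interacting. Free paths are sampled from
    the exponential distribution with mean 1/mu."""
    rng = random.Random(seed)
    reached = sum(1 for _ in range(n)
                  # 1 - random() lies in (0, 1], so the log is always defined
                  if -math.log(1.0 - rng.random()) / mu_cm >= depth_cm)
    return reached / n

# Narrow-beam check at 3 mm depth (the H_p(3) reference depth) in a
# tissue-like material with an assumed mu of 0.1 /cm:
est = transmission_mc(mu_cm=0.1, depth_cm=0.3)
analytic = math.exp(-0.1 * 0.3)
```

A real H(p)(3) calculation must of course also transport the secondary electrons, which is exactly the point the abstract makes for energies above 1 MeV.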

  10. The challenges of evaluating and comparing projects – An empirical study of designing a comparison framework

    DEFF Research Database (Denmark)

    Svejvig, Per; Hedegaard, Flemming

    2016-01-01

    Project Half Double is an industry-driven initiative with the purpose to develop a new and radical project paradigm to increase the competitiveness of the Danish industry. The research part of Project Half Double will assess the degree to which the new project paradigm is more successful than tra...... organizations lack the project maturity to take advantage of the frameworks....

  11. Accuracy of the electron transport in mcnp5 and its suitability for ionization chamber response simulations: A comparison with the egsnrc and penelope codes

    Energy Technology Data Exchange (ETDEWEB)

    Koivunoro, Hanna; Siiskonen, Teemu; Kotiluoto, Petri; Auterinen, Iiro; Hippelaeinen, Eero; Savolainen, Sauli [Department of Physics, University of Helsinki, P.O. Box 64, FI-00014 Helsinki University (Finland) and Department of Oncology, Helsinki University Central Hospital, FI-00029 HUS (Finland); STUK-Radiation and Nuclear Safety Authority, P.O. Box 14, FI-00881 Helsinki (Finland); VTT Technical Research Centre of Finland, P.O. Box 1000, FI-02044 VTT (Finland); Department of Physics, University of Helsinki, P.O. Box 64, FI-00014 Helsinki University (Finland); HUS Medical Imaging Centre, Helsinki University Central Hospital, FI-00029 HUS (Finland)

    2012-03-15

    Purpose: In this work, the accuracy of the mcnp5 code in electron transport calculations and its suitability for ionization chamber (IC) response simulations in photon beams are studied in comparison with the egsnrc and penelope codes. Methods: The electron transport is studied by comparing the depth-dose distributions in a water phantom subdivided into thin layers, using incident energies of 0.05, 0.1, 1, and 10 MeV for broad parallel electron beams. The IC response simulations are studied in a water phantom with three dosimetric gas materials (air, argon, and methane-based tissue-equivalent gas) for photon beams ({sup 60}Co source, 6 MV medical linear accelerator, and mono-energetic 2 MeV photon source). Two optional electron transport models of mcnp5 are evaluated: the ITS-based electron energy indexing (mcnp5{sub ITS}) and the new detailed electron energy-loss straggling logic (mcnp5{sub new}). The dependency on the electron substep length (ESTEP parameter) in mcnp5 is investigated as well. Results: For the electron beam studies, large discrepancies (>3%) are observed between the mcnp5 dose distributions and the reference codes at 1 MeV and lower energies. The discrepancy is especially notable for the 0.1 and 0.05 MeV electron beams. The boundary crossing artifacts, which are well known for mcnp5{sub ITS}, are observed for mcnp5{sub new} only at the 0.1 and 0.05 MeV beam energies. If the excessive boundary crossing is eliminated by using single scoring cells, mcnp5{sub ITS} provides dose distributions that agree better with the reference codes than mcnp5{sub new}. The mcnp5 dose estimates for the gas cavity agree within 1% with the reference codes if mcnp5{sub ITS} is applied, or if the electron substep length is set adequately for the gas in the cavity when using mcnp5{sub new}. The mcnp5{sub new} results are found to be highly dependent on the chosen electron substep length and may lead to an underestimation of the absorbed dose of up to 15%. Conclusions: Since the mcnp5 electron

  12. Comparison of direct and quasi-static methods for neutron kinetic calculations with the EDF R and D COCAGNE code

    Energy Technology Data Exchange (ETDEWEB)

    Girardi, E.; Guerin, P. [Electricite de France - RandD, 1 av. du General de Gaulle, 92141, Clamart (France); Dulla, S.; Nervo, M.; Ravetto, P. [Dipartimento di Energetica, Politecnico di Torino, 24, c.so Duca degli Abruzzi, 10129, Torino (Italy)

    2012-07-01

    Quasi-Static (QS) methods are quite popular in the reactor physics community and exhibit two main advantages. First, these methods overcome both the limits of the Point Kinetic (PK) approach and the computational cost of the direct discretization of the time-dependent neutron transport equation. Second, QS methods can be implemented in such a way that they can be easily coupled to very different external spatial solvers. In this paper, the results of the coupling between the QS methods developed by Politecnico di Torino and the EDF R and D core code COCAGNE are presented. The goal of these activities is to evaluate the performance of QS methods (in terms of computational cost and precision) with respect to the direct kinetic solver (e.g. the {theta}-scheme) already available in COCAGNE. Additionally, they allow an extensive cross-validation of different kinetic models (QS and direct methods) to be performed. (authors)
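
The Point Kinetic model that QS methods generalize can be written down in a few lines. This one-delayed-group explicit-Euler sketch (with textbook parameter values, unrelated to COCAGNE) shows the expected response to small reactivity steps:

```python
def point_kinetics(rho, beta=0.0065, Lambda=1e-4, lam=0.08,
                   t_end=1.0, dt=1e-4):
    """One-delayed-group point-kinetics equations, explicit Euler:
        dn/dt = (rho - beta)/Lambda * n + lam * C
        dC/dt = beta/Lambda * n - lam * C
    Starts from the equilibrium (critical) state with n = 1.
    Returns the relative power n at t_end."""
    n, C = 1.0, beta / (Lambda * lam)   # equilibrium precursor concentration
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lambda) * n + lam * C
        dC = (beta / Lambda) * n - lam * C
        n += dt * dn
        C += dt * dC
    return n

# A +10 pcm step should raise the power; a -10 pcm step should lower it.
n_up = point_kinetics(rho=1e-4)
n_down = point_kinetics(rho=-1e-4)
```

QS methods factorize the flux into such a slowly varying amplitude and a shape function updated by the spatial solver, which is what makes coupling to an external code like COCAGNE natural.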

  13. Underwater Photogrammetry, Coded Target and Plenoptic Technology: a Set of Tools for Monitoring Red Coral in Mediterranean Sea in the Framework of the "perfect" Project

    Science.gov (United States)

    Drap, P.; Royer, J. P.; Nawaf, M. M.; Saccone, M.; Merad, D.; López-Sanz, À.; Ledoux, J. B.; Garrabou, J.

    2017-02-01

    PErfECT "Photogrammetry, gEnetic, Ecology for red coral ConservaTion" is a project led by the Laboratoire des Sciences de l'Information et des Systèmes (LSIS - UMR 7296 CNRS) of Aix-Marseille University (France) in collaboration with the Spanish National Agency for Scientific Research (CSIC, Spain). The main objective of the project is to develop innovative tools for the conservation of the Mediterranean red coral, Corallium rubrum. PErfECT was funded by the Total Foundation. The adaptation of digital photogrammetric techniques for underwater use has increased rapidly in recent years; in fact, these techniques are particularly well suited to underwater environments. PErfECT developed different photogrammetry tools to enhance the red coral population surveys based on: (i) automatic orientation using coded quadrats, (ii) the use of NPR (Non-Photorealistic Rendering) techniques, (iii) the calculation of distances between colonies within local populations, and finally (iv) the use of plenoptic approaches in underwater conditions.

  14. Comparison study of the thermal mechanical performance of fuel rods during BWR fuel preconditioning operations using the computer codes FUELSIM and FEMAXI-V

    Energy Technology Data Exchange (ETDEWEB)

    Pantoja C, R. [IPN, Escuela Superior de Fisica y Matematicas, Departamento de Ingenieria Nuclear, Av. Instituto Politecnico Nacional s/n, Col. San Pedro Zacatenco, 07738 Mexico D. F. (Mexico); Ortiz V, J.; Castillo D, R., E-mail: rafael.pantoja10@yahoo.com.m [ININ, Departamento de Sistemas Nucleares, Carretera Mexico-Toluca s/n, Ocoyoacac 52750, Estado de Mexico (Mexico)

    2010-10-15

    The safety of nuclear power plants requires monitoring those parameters that have a direct or indirect effect on safety. The thermal limits are values set for the parameters considered to have most impact on the safe operation of a nuclear power reactor. Monitoring some thermal limits requires the thermal-mechanical analysis of the rods containing the nuclear fuel. The thermal-mechanical behaviour of a fuel rod under irradiation is a complex process in which a great number of interrelated physical and chemical phenomena are involved, so the analysis of fuel rod performance in the core of a nuclear power reactor is generally accomplished using computer codes, which integrate several of the phenomena expected to occur during the lifetime of the fuel rod in the core. In the operation of a nuclear power reactor, pre-conditioning simulations are necessary to determine in advance limit values for the power that can be generated in a fuel rod during any power ramp, and mainly during reactor startup, thus avoiding any rod damage. In this work, a first analysis of the thermal-mechanical performance of typical fuel rods used in BWR-type nuclear reactors is performed. The study includes two types of fuel rods, one from a fuel assembly design with an 8 x 8 array and the other from a 10 x 10 fuel assembly design, and compares the thermal-mechanical performance of the two rod designs. The performance simulations were carried out with the code FUELSIM and compared against results previously obtained from similar simulations with the code FEMAXI-V. (Author)
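
The kind of quantity such thermal-mechanical codes evaluate can be illustrated with the classical steady-state conduction result for a cylindrical pellet; the numbers below are generic BWR-like values, not results from FUELSIM or FEMAXI-V:

```python
import math

def centerline_temperature(q_lin, k_fuel, T_surface):
    """Steady-state centerline temperature of a cylindrical fuel pellet with
    uniform heat generation and constant conductivity:
        T_c = T_s + q' / (4 * pi * k)
    q_lin  -- linear heat rate [W/m]
    k_fuel -- fuel thermal conductivity [W/(m K)]"""
    return T_surface + q_lin / (4.0 * math.pi * k_fuel)

# Illustrative BWR-like numbers: 20 kW/m linear power, UO2 conductivity
# ~3 W/(m K), pellet surface at 700 K.
T_c = centerline_temperature(q_lin=20_000.0, k_fuel=3.0, T_surface=700.0)
```

Production codes go far beyond this single formula (burnup-dependent conductivity, gap conductance, fission-gas release, pellet-cladding interaction), but the pre-conditioning power limits ultimately bound quantities of this kind.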

  15. A Cross-species Comparison of Facial Morphology and Movement in Humans and Chimpanzees Using the Facial Action Coding System (FACS).

    Science.gov (United States)

    Vick, Sarah-Jane; Waller, Bridget M; Parr, Lisa A; Smith Pasqualini, Marcia C; Bard, Kim A

    2007-03-01

    A comparative perspective has remained central to the study of human facial expressions since Darwin's [(1872/1998). The expression of the emotions in man and animals (3rd ed.). New York: Oxford University Press] insightful observations on the presence and significance of cross-species continuities and species-unique phenomena. However, cross-species comparisons are often difficult to draw due to methodological limitations. We report the application of a common methodology, the Facial Action Coding System (FACS) to examine facial movement across two species of hominoids, namely humans and chimpanzees. FACS [Ekman & Friesen (1978). Facial action coding system. CA: Consulting Psychology Press] has been employed to identify the repertoire of human facial movements. We demonstrate that FACS can be applied to other species, but highlight that any modifications must be based on both underlying anatomy and detailed observational analysis of movements. Here we describe the ChimpFACS and use it to compare the repertoire of facial movement in chimpanzees and humans. While the underlying mimetic musculature shows minimal differences, important differences in facial morphology impact upon the identification and detection of related surface appearance changes across these two species.

  16. Validation and Comparison of Carbon Sequestration Project Cost Models with Project Cost Data Obtained from the Southwest Partnership

    Energy Technology Data Exchange (ETDEWEB)

    Robert Lee; Reid Grigg; Brian McPherson

    2011-04-15

    Obtaining formal quotes and engineering conceptual designs for carbon dioxide (CO{sub 2}) sequestration sites and facilities is costly and time-consuming. Frequently, when looking at potential locations, managers, engineers and scientists are confronted with multiple options, but do not have the expertise or the information required to quickly obtain a general estimate of what the costs will be without employing an engineering firm. Several models for carbon compression, transport and/or injection have been published that are designed to aid in determining the cost of sequestration projects. A number of these models are used in this study, including models by J. Ogden, MIT's Carbon Capture and Sequestration Technologies Program Model, the Environmental Protection Agency and others. This report uses the information and data available from several projects either completed, in progress, or conceptualized by the Southwest Regional Partnership on Carbon Sequestration (SWP) to determine the best approach to estimating a project's cost. The data presented highlight calculated versus actual costs: the results obtained by applying each of the models to the individual projects are compared with the actual costs. The report also offers methods to systematically apply the models to future projects of a similar scale. Lastly, the cost risks associated with a project of this scope are discussed, along with ways that have been and could be used to mitigate these risks.
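
    As a purely hypothetical illustration of the comparison step described above, the sketch below scores several cost-model estimates against a project's actual cost by signed percent deviation. The model names and dollar figures are invented placeholders, not SWP data or results from the report.

```python
# Hypothetical sketch: score cost-model estimates against an actual cost.
# All model names and dollar figures below are invented placeholders.

def percent_error(estimate, actual):
    """Signed deviation of a model estimate from the actual cost, in percent."""
    return 100.0 * (estimate - actual) / actual

actual_cost = 4_000_000   # USD, placeholder
estimates = {             # USD, placeholders for three hypothetical cost models
    "model_a": 3_600_000,
    "model_b": 4_300_000,
    "model_c": 5_000_000,
}

errors = {name: percent_error(est, actual_cost) for name, est in estimates.items()}
closest = min(errors, key=lambda name: abs(errors[name]))  # smallest |deviation|
```

    With these placeholder figures, model_a underestimates by 10%, model_b overestimates by 7.5%, and model_b is the closest fit.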

  17. Measuring Student Career Interest within the Context of Technology-Enhanced STEM Projects: A Cross-Project Comparison Study Based on the Career Interest Questionnaire

    Science.gov (United States)

    Peterman, Karen; Kermish-Allen, Ruth; Knezek, Gerald; Christensen, Rhonda; Tyler-Wood, Tandra

    2016-03-01

    This article describes Energy for ME and Going Green! Middle Schoolers Out to Save the World, two Science, Technology, Engineering, and Mathematics (STEM) education programs with the common goal of improving students' attitudes about scientific careers. The authors represent two project teams, each with funding from the National Science Foundation's ITEST program. Using different approaches and technology, both projects challenged students to use electricity monitoring system data to create action plans for conserving energy in their homes and communities. The impact of each project on students' career interests was assessed via a multi-method evaluation that included the Career Interest Questionnaire (CIQ), a measure that was validated within the context of ITEST projects and has since become one of the instruments used most commonly across the ITEST community. This article explores the extent to which the CIQ can be used to document the effects of technology-enhanced STEM educational experiences on students' career attitudes and intentions in different environments. The results indicate that the CIQ, and the Intent subscale in particular, served as significant predictors of students' self-reported STEM career aspirations across project contexts. Results from each project also demonstrated content gains by students and demonstrated the impact of project participation and gender on student outcomes. The authors conclude that the CIQ is a useful tool for providing empirical evidence to document the impact of technology-enhanced science education programs, particularly with regard to intent to pursue a STEM career. The need for additional cross-project comparison studies is also discussed.

  19. Quantification of Small Non-Coding RNAs Allows an Accurate Comparison of miRNA Expression Profiles

    Directory of Open Access Journals (Sweden)

    Andrea Masotti

    2009-01-01

    Full Text Available MicroRNAs (miRNAs) are highly conserved ∼22-mer RNA molecules, encoded by plants and animals, that regulate the expression of genes by binding to the 3′-UTR of specific target mRNAs. The amount of miRNA in a total RNA sample depends on the recovery efficiency, which may be significantly affected by the purification method employed. Traditional approaches may be inefficient at recovering small RNAs, and common spectrophotometric determination is not adequate to selectively quantify these low molecular weight (LMW) species in total RNA samples. Here, we describe the use of qualitative and quantitative lab-on-a-chip tools for the analysis of these LMW RNA species. Our data emphasize the close correlation of LMW RNAs with the expression levels of some miRNAs. We therefore applied our results to the comparison of miRNA expression profiles in different tissues. Finally, the methods used in this paper allowed us to analyze the efficiency of extraction protocols, to study the small (but significant) differences among various preparations, and to allow a proper comparison of miRNA expression profiles in various tissues.

  20. SUPPLEMENTARY COMPARISON Final report on EUROMET.L-S19 (EUROMET Project 910): Comparison of squareness measurements

    Science.gov (United States)

    Mokros, Jiri

    2010-01-01

    A bilateral comparison has been undertaken between two European national metrology institutes (NMIs) in the subject field of squareness measurement. Specifically, both NMIs made measurements of a cylindrical squareness standard, made of steel. Both participants made measurements of the internal angles at several positions around the circumference, as well as measurement of the corresponding straightness profiles. The report lists the results obtained by both participants, as well as an analysis of the results and their uncertainties. The results are in good agreement with one another. This bilateral comparison followed the same technical protocol as a previous supplementary comparison (EUROMET.L-S10). Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by EURAMET, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).

  1. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  2. Evaluation of Computational Fluids Dynamics (CFD) code Open FOAM in the study of the pressurized thermal stress of PWR reactors. Comparison with the commercial code Ansys-CFX; Evaluacion del codigo de Dinamica de Fluidos Computacional (CFD) Open FOAM en el estudio del estres termico presurizado de los reactores PWR. Comparacion con el codigo comercial Ansys-CFX

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, M.; Barrachina, T.; Miro, R.; Verdu Martin, G.; Chiva, S.

    2012-07-01

    This work evaluates the potential of the OpenFOAM code for the simulation of typical fluid flows in PWR reactors, in particular for the study of pressurized thermal stress. Test T1-1 of the OECD ROSA project has been simulated, with the objective of evaluating the performance of the OpenFOAM code and of the turbulence models it implements to capture the effect of buoyancy forces in the case study.

  3. Electron impact excitation of N IV: calculations with the DARC code and a comparison with ICFT results

    CERN Document Server

    Aggarwal, K M; Lawson, K D

    2016-01-01

    There have been discussions in the recent literature regarding the accuracy of the available electron impact excitation rates (equivalently effective collision strengths $\Upsilon$) for transitions in Be-like ions. In the present paper we demonstrate, once again, that earlier results for $\Upsilon$ are indeed overestimated (by up to four orders of magnitude) for over 40% of transitions and over a wide range of temperatures. To do this we have performed two sets of calculations for N IV, with two different model sizes consisting of 166 and 238 fine-structure energy levels. As in our previous work, for the determination of atomic structure the GRASP (General-purpose Relativistic Atomic Structure Package) is adopted, and for the scattering calculations (the standard and parallelised versions of) the Dirac Atomic R-matrix Code (DARC) are employed. Calculations for collision strengths and effective collision strengths have been performed over a wide range of energy (up to 45 Ryd) and temperature (up to 2.0...

  4. Comparison of dose estimates using the buildup-factor method and a Baryon transport code (BRYNTRN) with Monte Carlo results

    Science.gov (United States)

    Shinn, Judy L.; Wilson, John W.; Nealy, John E.; Cucinotta, Francis A.

    1990-01-01

    Continuing efforts toward validating the buildup factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are the candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for cases with various thicknesses of shields and various types of proton spectra. The results are found to be in reasonable agreement but with some overestimation by the buildup factor method when the effect of neutron production in the shield is significant. Future improvement to include neutron coupling in the buildup factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual components of doses, such as those from the secondaries and heavy particle recoils, is obtained between BRYNTRN and Monte Carlo results.
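
    To make the buildup-factor idea concrete: the dose behind a shield is the uncollided, exponentially attenuated dose multiplied by a buildup factor that accounts for secondary-particle contributions. The sketch below uses a simple linear buildup form B = 1 + k·μx as a hypothetical placeholder; it is not the parameterization developed in the paper.

```python
import math

# Point-kernel sketch of the buildup-factor method. The linear buildup
# form B = 1 + k*mu_x is a hypothetical placeholder, not the paper's model.
def shielded_dose(d0, mu_x, k=1.0):
    """Dose after mu_x mean free paths of shielding, with linear buildup.

    d0   -- unshielded dose
    mu_x -- shield thickness in mean free paths (attenuation coeff. x depth)
    k    -- buildup slope (placeholder value)
    """
    buildup = 1.0 + k * mu_x          # secondary-particle contribution
    return d0 * buildup * math.exp(-mu_x)  # buildup times uncollided dose
```

    For example, at two mean free paths the buildup factor triples the uncollided dose in this toy form, while the total dose still falls well below the unshielded value.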

  5. Extension of radiative transfer code MOMO, matrix-operator model to the thermal infrared - Clear air validation by comparison to RTTOV and application to CALIPSO-IIR

    Science.gov (United States)

    Doppler, Lionel; Carbajal-Henken, Cintia; Pelon, Jacques; Ravetta, François; Fischer, Jürgen

    2014-09-01

    The 1-D radiative transfer code Matrix-Operator Model (MOMO) has been extended from the [0.2-3.65 μm] band to the whole [0.2-100 μm] spectrum. MOMO can now be used for the computation of a full range of radiation budgets (shortwave and longwave). This extension to the longwave part of the electromagnetic spectrum required consideration of radiative transfer processes specific to the thermal infrared: the spectroscopy of the water vapor self- and foreign-continuum absorption at 12 μm and the emission of radiation by gases, aerosol, clouds and the surface. MOMO's spectroscopy module, Coefficient of Gas Absorption (CGASA), has been developed for the computation of gas extinction coefficients, considering continua and spectral line absorption. The spectral dependences of the gas emission/absorption coefficients and of Planck's function are treated using a k-distribution. The emission of radiation is implemented in the adding-doubling process of the matrix operator method using Schwarzschild's approach to the radiative transfer equation (a purely absorbing/emitting medium, i.e. without scattering). Within a layer, the Planck function is assumed to have an exponential dependence on the optical depth. In this paper, validation tests are presented for clear air case studies: comparisons to the analytical solution of a monochromatic Schwarzschild case without scattering show an error of less than 0.07% for a realistic atmosphere with an optical depth and a blackbody temperature that decrease linearly with altitude. Comparisons to the radiative transfer code RTTOV are presented for simulations of top-of-atmosphere brightness temperature for channels of the space-borne instrument MODIS. Results show an agreement varying from 0.1 K to less than 1 K depending on the channel. Finally, MOMO results are compared to CALIPSO Infrared Imager Radiometer (IIR) measurements for clear air cases. A good agreement was found between computed and observed radiances: biases are smaller than 0.5 K.
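
    For reference, Schwarzschild's approach amounts to dropping the scattering source term, so the monochromatic radiative transfer equation and its formal solution reduce to the textbook form below (written for the upwelling intensity at zenith-angle cosine $\mu$; this is the standard statement, not a formula quoted from the paper):

```latex
% Schwarzschild equation: purely absorbing/emitting medium, no scattering
\mu \frac{dI(\tau,\mu)}{d\tau} = I(\tau,\mu) - B\bigl(T(\tau)\bigr)

% Formal solution for the intensity emerging at the top (\tau = 0)
% of a layer of total optical depth \tau^{*}:
I(0,\mu) = I(\tau^{*},\mu)\, e^{-\tau^{*}/\mu}
         + \int_{0}^{\tau^{*}} B\bigl(T(\tau)\bigr)\, e^{-\tau/\mu}\, \frac{d\tau}{\mu}
```

    Assuming, as MOMO does, that $B$ varies exponentially with optical depth within a layer allows this integral to be evaluated in closed form layer by layer.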

  6. Cirrus Parcel Model Comparison Project. Phase 1: The Critical Components to Simulate Cirrus Initiation Explicitly.

    Science.gov (United States)

    Lin, Ruei-Fong; O'C. Starr, David; Demott, Paul J.; Cotton, Richard; Sassen, Kenneth; Jensen, Eric; Kärcher, Bernd; Liu, Xiaohong

    2002-08-01

    The Cirrus Parcel Model Comparison Project, a project of the GCSS [Global Energy and Water Cycle Experiment (GEWEX) Cloud System Studies] Working Group on Cirrus Cloud Systems, involves the systematic comparison of current models of ice crystal nucleation and growth for specified, typical, cirrus cloud environments. In Phase 1 of the project reported here, simulated cirrus cloud microphysical properties from seven models are compared for 'warm' (-40°C) and 'cold' (-60°C) cirrus, each subject to updrafts of 0.04, 0.2, and 1 m s⁻¹. The models employ explicit microphysical schemes wherein the size distribution of each class of particles (aerosols and ice crystals) is resolved into bins or the evolution of each individual particle is traced. Simulations are made including both homogeneous and heterogeneous ice nucleation mechanisms (all-mode simulations). A single initial aerosol population of sulfuric acid particles is prescribed for all simulations. Heterogeneous nucleation is disabled for a second parallel set of simulations in order to isolate the treatment of the homogeneous freezing (of haze droplets) nucleation process. Analysis of these latter simulations is the primary focus of this paper. Qualitative agreement is found for the homogeneous-nucleation-only simulations; for example, the number density of nucleated ice crystals increases with the strength of the prescribed updraft. However, significant quantitative differences are found. Detailed analysis reveals that the homogeneous nucleation rate, haze particle solution concentration, and water vapor uptake rate by ice crystal growth (particularly as controlled by the deposition coefficient) are critical components that lead to differences in the predicted microphysics. Systematic differences exist between results based on a modified classical theory approach and models using an effective freezing temperature approach to the treatment of nucleation. Each method is constrained by critical freezing data from

  7. On conditions and parameters important to model sensitivity for unsaturated flow through layered, fractured tuff; Results of analyses for HYDROCOIN [Hydrologic Code Intercomparison Project] Level 3 Case 2: Yucca Mountain Project

    Energy Technology Data Exchange (ETDEWEB)

    Prindle, R.W.; Hopkins, P.L.

    1990-10-01

    The Hydrologic Code Intercomparison Project (HYDROCOIN) was formed to evaluate hydrogeologic models and computer codes and their use in performance assessment for high-level radioactive-waste repositories. This report describes the results of a study for HYDROCOIN of model sensitivity for isothermal, unsaturated flow through layered, fractured tuffs. We investigated both the types of flow behavior that dominate the performance measures and the conditions and model parameters that control flow behavior. We also examined the effect of different conceptual models and modeling approaches on our understanding of system behavior. The analyses included single- and multiple-parameter variations about base cases in one-dimensional steady and transient flow and in two-dimensional steady flow. The flow behavior is complex even for the highly simplified and constrained system modeled here. The response of the performance measures is both nonlinear and nonmonotonic. System behavior is dominated by abrupt transitions from matrix to fracture flow and by lateral diversion of flow. The observed behaviors are strongly influenced by the imposed boundary conditions and model constraints. Applied flux plays a critical role in determining the flow type but interacts strongly with the composite-conductivity curves of individual hydrologic units and with the stratigraphy. One-dimensional modeling yields conservative estimates of distributions of groundwater travel time only under very limited conditions. This study demonstrates that it is wrong to equate the shortest possible water-travel path with the fastest path from the repository to the water table. 20 refs., 234 figs., 10 tabs.

  8. The fast code

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)

    1996-09-01

    The FAST Code, which is capable of determining structural loads on a flexible, teetering, horizontal-axis wind turbine, is described, and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included, as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth-averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)
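
    The rainflow counting used in such load comparisons can be sketched with a simplified three-point algorithm: reduce the load history to its turning points, then repeatedly close and count any inner cycle whose range is not larger than the range that follows it. The stdlib-only sketch below illustrates the general technique; it is not the counting code used in the FAST study.

```python
def rainflow_ranges(series):
    """Simplified three-point rainflow count of a load history.

    Returns (full_cycle_ranges, residual_half_cycle_ranges).
    """
    # 1. Reduce the history to its turning points (local extrema).
    turning = [series[0]]
    for value in series[1:]:
        if len(turning) >= 2 and (turning[-1] - turning[-2]) * (value - turning[-1]) > 0:
            turning[-1] = value        # still heading the same way: extend
        elif value != turning[-1]:
            turning.append(value)      # direction reversed: new turning point

    # 2. Stack-based extraction: close the inner cycle whenever its
    #    range Y is not larger than the following range X.
    full, stack = [], []
    for point in turning:
        stack.append(point)
        while len(stack) >= 3:
            x_range = abs(stack[-1] - stack[-2])
            y_range = abs(stack[-2] - stack[-3])
            if x_range < y_range:
                break
            full.append(y_range)       # count Y as one full cycle
            del stack[-3:-1]           # remove the two points that formed it

    # 3. Whatever remains on the stack is counted as half cycles.
    half = [abs(b - a) for a, b in zip(stack, stack[1:])]
    return full, half
```

    For the load history [0, 5, 1, 4, 2, 6] this yields full-cycle ranges [2, 4] plus a residual half cycle of range 6.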

  9. Projected Climate Impacts to South African Maize and Wheat Production in 2055: A Comparison of Empirical and Mechanistic Modeling Approaches

    Science.gov (United States)

    Estes, Lyndon D.; Beukes, Hein; Bradley, Bethany A.; Debats, Stephanie R.; Oppenheimer, Michael; Ruane, Alex C.; Schulze, Roland; Tadross, Mark

    2013-01-01

    Crop model-specific biases are a key uncertainty affecting our understanding of climate change impacts to agriculture. There is increasing research focus on intermodel variation, but comparisons between mechanistic models (MMs) and empirical models (EMs) are rare despite both being used widely in this field. We combined MMs and EMs to project future (2055) changes in the potential distribution (suitability) and productivity of maize and spring wheat in South Africa under 18 downscaled climate scenarios (9 models run under 2 emissions scenarios). EMs projected larger yield losses or smaller gains than MMs. The EMs' median-projected maize and wheat yield changes were -3.6% and 6.2%, respectively, compared to 6.5% and 15.2% for the MM. The EM projected a 10% reduction in the potential maize growing area, where the MM projected a 9% gain. Both models showed increases in the potential spring wheat production region (EM = 48%, MM = 20%), but these results were more equivocal because both models (particularly the EM) substantially overestimated the extent of current suitability. The substantial water-use efficiency gains simulated by the MMs under elevated CO2 accounted for much of the EM-MM difference, but EMs may have more accurately represented crop temperature sensitivities. Our results align with earlier studies showing that EMs may show larger climate change losses than MMs. Crop forecasting efforts should expand to include EM-MM comparisons to provide a fuller picture of crop-climate response uncertainties.

  10. Comparison of GAS5 Long non-coding RNA Expression and NEAT1 in Breast Cancer Patients and Healthy People

    Directory of Open Access Journals (Sweden)

    A Arshi

    2016-06-01

    Full Text Available Background & aim: Breast cancer accounts for 10% of all cancers in the world, and about 30 percent of cancers in women are breast cancers. Long non-coding RNAs (lncRNAs) are a newly recognized group of genes in the human genome, transcribed from large parts of the eukaryotic genome, that play an important role in the regulation of different biological processes. The aim of the present study was to compare the expression levels of the lncRNAs GAS5 and NEAT1 in normal and neoplastic samples from breast cancer patients by RT-qPCR. Methods: In the present case-control study, 40 tumor samples from patients with breast cancer and 40 non-tumor samples were collected under the direct supervision of a specialist pathologist on the basis of clinical presentation and laboratory findings. After extracting RNA from normal and tumor tissues, cDNA was synthesized according to the protocol and RT-qPCR was performed with the SYBR®Premix Ex TaqTM II kit. Expression levels of the lncRNA genes GAS5 and NEAT1 were calculated using the ΔΔCT method. Data were analyzed using the t-test. Results: The RT-qPCR results indicated that the relative expression level of the GAS5 lncRNA gene was decreased in tumor samples compared to normal samples, whereas the mean relative expression level of the NEAT1 lncRNA gene was increased in tumor samples. The change in expression was about 1.5-fold for GAS5 and about 2-fold for NEAT1. Conclusion: Consistent with previous reports, these lncRNAs act as tumor suppressors in breast cancer and show differential expression in tumor and normal tissues, so they could be used as biomarkers for cancer diagnosis. Moreover, the differential expression of these lncRNAs across breast cancer subtypes underlines the importance of these molecules as biomarkers for cancer diagnosis and prognosis.
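
    The ΔΔCT calculation referenced above reduces to a few lines: normalize each sample's target-gene Ct to a reference gene, difference the normalized values, and exponentiate. The sketch below uses invented example Ct values, not the study's measurements, to show how a fold change of 0.5 (halved expression, as reported for GAS5-like down-regulation) would arise.

```python
# Relative quantification by the 2^-ddCt method.
# All Ct values below are invented examples, not measurements from the study.

def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Expression of a target gene in a case sample relative to a control
    sample, each normalized to a reference (housekeeping) gene."""
    d_ct_case = ct_target_case - ct_ref_case  # normalize case to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # normalize control likewise
    dd_ct = d_ct_case - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Target amplifies one cycle later (relative to the reference gene) in the
# tumor sample than in the normal sample -> expression halved.
ratio = fold_change(25.0, 20.0, 24.0, 20.0)
```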

  11. Multimodel simulations of carbon monoxide: Comparison with observations and projected near-future changes

    Science.gov (United States)

    Shindell, D. T.; Faluvegi, G.; Stevenson, D. S.; Krol, M. C.; Emmons, L. K.; Lamarque, J.-F.; PéTron, G.; Dentener, F. J.; Ellingsen, K.; Schultz, M. G.; Wild, O.; Amann, M.; Atherton, C. S.; Bergmann, D. J.; Bey, I.; Butler, T.; Cofala, J.; Collins, W. J.; Derwent, R. G.; Doherty, R. M.; Drevet, J.; Eskes, H. J.; Fiore, A. M.; Gauss, M.; Hauglustaine, D. A.; Horowitz, L. W.; Isaksen, I. S. A.; Lawrence, M. G.; Montanaro, V.; Müller, J.-F.; Pitari, G.; Prather, M. J.; Pyle, J. A.; Rast, S.; Rodriguez, J. M.; Sanderson, M. G.; Savage, N. H.; Strahan, S. E.; Sudo, K.; Szopa, S.; Unger, N.; van Noije, T. P. C.; Zeng, G.

    2006-10-01

    We analyze present-day and future carbon monoxide (CO) simulations in 26 state-of-the-art atmospheric chemistry models run to study future air quality and climate change. In comparison with near-global satellite observations from the MOPITT instrument and local surface measurements, the models show large underestimates of Northern Hemisphere (NH) extratropical CO, while typically performing reasonably well elsewhere. The results suggest that year-round emissions, probably from fossil fuel burning in east Asia and seasonal biomass burning emissions in south-central Africa, are greatly underestimated in current inventories such as IIASA and EDGAR3.2. Variability among models is large, likely resulting primarily from intermodel differences in representations and emissions of nonmethane volatile organic compounds (NMVOCs) and in hydrologic cycles, which affect OH and soluble hydrocarbon intermediates. Global mean projections of the 2030 CO response to emissions changes are quite robust. Global mean midtropospheric (500 hPa) CO increases by 12.6 ± 3.5 ppbv (16%) for the high-emissions (A2) scenario, by 1.7 ± 1.8 ppbv (2%) for the midrange (CLE) scenario, and decreases by 8.1 ± 2.3 ppbv (11%) for the low-emissions (MFR) scenario. Projected 2030 climate changes decrease global 500 hPa CO by 1.4 ± 1.4 ppbv. Local changes can be much larger. In response to climate change, substantial effects are seen in the tropics, but intermodel variability is quite large. The regional CO responses to emissions changes are robust across models, however. These range from decreases of 10-20 ppbv over much of the industrialized NH for the CLE scenario to CO increases worldwide and year-round under A2, with the largest changes over central Africa (20-30 ppbv), southern Brazil (20-35 ppbv) and south and east Asia (30-70 ppbv). The trajectory of future emissions thus has the potential to profoundly affect air quality over most of the world's populated areas.

  12. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
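
    Unique decipherability itself, the property this paper relaxes, is decidable for finite codes by the classical Sardinas-Patterson procedure: iterate dangling suffixes until a codeword appears (ambiguous) or the suffix sets cycle or empty out (UD). A compact stdlib sketch of that standard algorithm, for illustration (not code from the paper):

```python
def _residuals(xs, ys):
    """Nonempty words w such that x . w = y for some x in xs, y in ys."""
    return {y[len(x):] for x in xs for y in ys
            if y.startswith(x) and len(y) > len(x)}

def is_uniquely_decipherable(code):
    """Sardinas-Patterson test: True iff every message over the code has
    at most one factorization into codewords."""
    words = set(code)
    s = _residuals(words, words)       # dangling suffixes between codewords
    seen = set()
    while s and not (s & words):       # a codeword among the suffixes => ambiguous
        key = frozenset(s)
        if key in seen:                # suffix sets cycle without a codeword: UD
            return True
        seen.add(key)
        s = _residuals(words, s) | _residuals(s, words)
    return not s                       # emptied out: UD; hit a codeword: not UD
```

    For example, {0, 10, 110} is a prefix code and passes, while {0, 01, 10} fails because the message 010 parses both as 0·10 and as 01·0.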

  13. SLUDGE TREATMENT PROJECT COST COMPARISON BETWEEN HYDRAULIC LOADING AND SMALL CANISTER LOADING CONCEPTS

    Energy Technology Data Exchange (ETDEWEB)

    GEUTHER J; CONRAD EA; RHOADARMER D

    2009-08-24

    The Sludge Treatment Project (STP) is considering two different concepts for the retrieval, loading, transport and interim storage of the K Basin sludge. The two design concepts under consideration are: (1) Hydraulic Loading Concept - In the hydraulic loading concept, the sludge is retrieved from the Engineered Containers directly into the Sludge Transport and Storage Container (STSC) while located in the STS cask in the modified KW Basin Annex. The sludge is loaded via a series of transfer, settle, decant, and filtration return steps until the STSC sludge transportation limits are met. The STSC is then transported to T Plant and placed in storage arrays in the T Plant canyon cells for interim storage. (2) Small Canister Concept - In the small canister concept, the sludge is transferred from the Engineered Containers (ECs) into a settling vessel. After settling and decanting, the sludge is loaded underwater into small canisters. The small canisters are then transferred to the existing Fuel Transport System (FTS) where they are loaded underwater into the FTS Shielded Transfer Cask (STC). The STC is raised from the basin and placed into the Cask Transfer Overpack (CTO), loaded onto the trailer in the KW Basin Annex for transport to T Plant. At T Plant, the CTO is removed from the transport trailer and placed on the canyon deck. The CTO and STC are opened and the small canisters are removed using the canyon crane and placed into an STSC. The STSC is closed, and placed in storage arrays in the T Plant canyon cells for interim storage. The purpose of the cost estimate is to provide a comparison of the two concepts described.

  14. Project JADE. Long-term function and safety. Comparison of repository systems

    Energy Technology Data Exchange (ETDEWEB)

    Birgersson, Lars; Pers, K.; Wiborgh, M. [Kemakta Konsult AB, Stockholm (Sweden)

    2001-12-01

    A comparison of the KBS-3 V(ertical deposition), KBS-3 H(orizontal deposition) and MLH repository systems with regard to long-term repository performance and radionuclide migration is presented in the report. Several differences between the repository systems have been identified. The differences are mainly related to the distance between canister and backfilled tunnels, the excavated rock volumes, and the orientation of the deposition holes. The overall conclusion is that the differences are in general quite small with regard to repository function and safety. None of the differences are of such importance for long-term repository performance and radionuclide migration that they would rule out any of the repository systems. The differences between the two KBS-3 systems are small. Based on this study, there is no reason to change from the reference system KBS-3 V to KBS-3 H. MLH has the potential to be a very robust system, especially in a long-term perspective. However, the MLH system will require extensive research, development and analysis before the same confidence can be placed in it as in the reference repository system, KBS-3 V. Although the MLH and KBS-3 H systems are in some ways favourable compared to the reference system KBS-3 V, the overall conclusion is that the KBS-3 V system is still a very attractive system. A major advantage of KBS-3 V is that it is by far the most investigated and developed system. The JADE project was initiated in 1996, and the main part of the study was carried out during 1997 and 1998. The JADE study is consequently based on presumptions that were valid a few years ago. Some of these presumptions have been modified since then. The new presumptions are, however, not judged to change the overall conclusions.

  15. Multidetector CT evaluation of central airways stenoses: Comparison of virtual bronchoscopy, minimal-intensity projection, and multiplanar reformatted images

    OpenAIRE

    Sundarakumar, Dinesh K; Bhalla, Ashu S; Raju Sharma; Smriti Hari; Randeep Guleria; Khilnani, Gopi C.

    2011-01-01

    Aims: To evaluate the diagnostic utility of virtual bronchoscopy, multiplanar reformatted images, and minimal-intensity projection in assessing airway stenoses. Settings and Design: It was a prospective study involving 150 patients with symptoms of major airway disease. Materials and Methods: Fifty-six patients were selected for analysis based on the detection of major airway lesions on fiber-optic bronchoscopy (FB) or routine axial images. Comparisons were made between axial images, virtual ...

  16. SUPPLEMENTARY COMPARISON Final report on EUROMET.L-S18 (EUROMET Project 905): Comparison of squareness measurements

    Science.gov (United States)

    Mokros, Jiri

    2010-01-01

    A bilateral comparison has been undertaken between two European national metrology institutes (NMIs) in the subject field of squareness measurement. Specifically, both NMIs made measurements of a rectangular squareness standard, made of granite. Both participants made measurements of the internal angle at a defined position along the measuring faces, as well as measurement of the corresponding straightness profiles on the two adjoining sides. The report lists the results obtained by both participants, as well as an analysis of the results and their uncertainties. The results are in good agreement with one another. This bilateral comparison followed the same technical protocol as a previous supplementary comparison (EUROMET.L-S10). Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by EURAMET, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
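Key and supplementary comparisons of this kind are typically summarized with the normalized error statistic En, which weighs the difference between two laboratories' results against their combined expanded uncertainties. A minimal sketch with hypothetical numbers (not the actual results of this comparison):

```python
import math

def en_number(x1, u1, x2, u2, k=2):
    """Normalized error between two results x1, x2 with standard
    uncertainties u1, u2, using expanded uncertainties U = k*u.
    |En| <= 1 is conventionally read as agreement."""
    return (x1 - x2) / math.sqrt((k * u1) ** 2 + (k * u2) ** 2)

# Hypothetical squareness deviations (microradians) reported by two NMIs
en = en_number(x1=2.1, u1=0.4, x2=1.8, u2=0.5)
print(f"En = {en:.2f} -> {'agreement' if abs(en) <= 1 else 'discrepant'}")
```

A statement such as "the results are in good agreement" in a final report corresponds to |En| comfortably below 1 for the measurand compared.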

  17. Flood risk assessment in France: comparison of extreme flood estimation methods (EXTRAFLO project, Task 7)

    Science.gov (United States)

    Garavaglia, F.; Paquet, E.; Lang, M.; Renard, B.; Arnaud, P.; Aubert, Y.; Carre, J.

    2013-12-01

    In flood risk assessment, methods can be divided into two families: deterministic methods and probabilistic methods. In the French hydrologic community, probabilistic methods have historically been preferred to deterministic ones. A French research project named EXTRAFLO (RiskNat Program of the French National Research Agency, https://extraflo.cemagref.fr) currently addresses design values for extreme rainfall and floods. The objective of this project is to carry out a comparison of the main methods used in France for estimating extreme values of rainfall and floods, in order to obtain a better grasp of their respective fields of application. In this framework, we present the results of Task 7 of the EXTRAFLO project. Focusing on French watersheds, we compare the main extreme flood estimation methods used in the French context: (i) standard flood frequency analysis (Gumbel and GEV distributions), (ii) regional flood frequency analysis (regional Gumbel and GEV distributions), (iii) local and regional flood frequency analysis improved by historical information (Naulet et al., 2005), (iv) simplified probabilistic methods based on rainfall information (i.e. the Gradex method (CFGB, 1994), the Agregee method (Margoum, 1992) and the Speed method (Cayla, 1995)), (v) flood frequency analysis by a continuous simulation approach based on rainfall information (i.e. the Schadex method (Paquet et al., 2013; Garavaglia et al., 2010) and the Shyreg method (Lavabre et al., 2003)) and (vi) a multifractal approach. The main result of this comparative study is that probabilistic methods based on additional information (i.e. regional, historical and rainfall information) provide better estimations than standard flood frequency analysis. Another interesting result is that the differences between the extreme flood quantile estimations of the compared methods increase with return period, remaining relatively moderate up to 100-year return levels. Results and discussions are illustrated throughout with the example
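The standard flood frequency analysis listed under (i) fits an extreme-value distribution to a series of annual maxima and reads design floods off its quantiles. A minimal Gumbel sketch on synthetic data, using a method-of-moments fit (the discharge series and parameter values are hypothetical, not from the EXTRAFLO catchments):

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical annual-maximum discharges (m3/s); a real study uses a gauged series.
annual_maxima = 500.0 - 120.0 * np.log(-np.log(rng.uniform(size=60)))

# Method-of-moments Gumbel fit:
#   scale = s * sqrt(6) / pi,   loc = mean - gamma * scale
EULER_GAMMA = 0.5772156649
scale = annual_maxima.std(ddof=1) * np.sqrt(6.0) / np.pi
loc = annual_maxima.mean() - EULER_GAMMA * scale

def return_level(T):
    """Discharge exceeded on average once every T years (Gumbel quantile):
    x_T = loc - scale * ln(-ln(1 - 1/T))."""
    return loc - scale * np.log(-np.log(1.0 - 1.0 / T))

for T in (10, 100, 1000):
    print(f"{T:>4}-year flood: {return_level(T):7.1f} m3/s")
```

As the abstract observes, quantile estimates from different methods diverge as the return period grows; the 1000-year level here extrapolates far beyond the 60-year record, which is precisely why the regional, historical and rainfall-based variants exist.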

  18. Analysis and Comparison of Electric Drive System Projects of Military Tracked Vehicles

    Institute of Scientific and Technical Information of China (English)

    LIAO Zi-li; MA Xiao-jun; ZHAO Yu-hui; ZANG Ke-mao

    2007-01-01

    The electric drive system characteristics of different projects for tracked vehicles are analyzed. For the two most typical projects, the power, torque, rotating speed and other parameters of the drive motor are determined under the condition of ensuring adequate steering performance of the tracked vehicles. General opinions on the two projects are put forward and conclusions are drawn.

  19. In silico Coding Sequence Analysis of Walnut GAI and PIP2 Genes and Comparison with Different Plant Species

    Directory of Open Access Journals (Sweden)

    Mahdi Mohseniazar

    2017-02-01

    done with MEGA from aligned sequences. The motifs of the protein sequences were found using the T-COFFEE program at http://www.ebi.ac.uk/Tools/msa/tcoffee/. The Neighbor-Joining (NJ) method was used to design the phylogenetic tree. Exons and introns in the mRNA sequences were predicted via http://genes.mit.edu/GENSCAN.html. The secondary structure of the proteins was predicted online with PSIPRED at http://bioinf.cs.ucl.ac.uk/psipred/. Prediction of the 3D model of each protein was performed using 3D alignment of protein structures with BLASTp and the PDB database as a source. Also, targeting prediction of the proteins was done online with TargetP at http://www.cbs.dtu.dk/services/TargetP/. Results and discussion: In a phylogenetic investigation across 17 different species, the walnut species clustered evolutionarily with the dicotyledonous and woody plants for both the GAI and PIP2 genes and protein sequences. Multiple alignments and investigation of the conserved sequences of these genes in plants revealed that, despite differences in cDNA length, there were strong similarities in the conserved regions and in secondary and tertiary structure. Protein analysis of the GAI gene family showed that the DELLA, TVHYNP, VHIID, RKVATYFGEALARR, AVNSVFELH, RVER, and SAW domains were conserved in these proteins. In the secondary structure of the proteins, β-sheets and α-helixes were identified by the PSIPRED software for both the GAI and PIP2 proteins. The GAI protein had 9 β-sheets and 15 α-helixes in its structure, and the PIP2 protein had 2 β-sheets (at 180-188 and 248-253) and 8 α-helixes. In a comparison of 3D structures, the walnut PIP2 protein was very similar to chain A of the PIP2 protein of spinach (Spinacia oleracea), and the GAI protein of walnut was similar to the B-subunit of the Arabidopsis GAI protein with 48% similarity. The length of the GAI protein varied among species from 636 aa in Malus baccata var. xiaojinensis to 336 aa in Physcomitrella patens. In walnut, the lengths of the GAI and PIP2 proteins were 613 aa and

  20. Ready, steady… Code!

    CERN Multimedia

    Anaïs Schaeffer

    2013-01-01

    This summer, CERN took part in the Google Summer of Code programme for the third year in succession. Open to students from all over the world, this programme leads to very successful collaborations for open source software projects.   Image: GSoC 2013. Google Summer of Code (GSoC) is a global programme that offers student developers grants to write code for open-source software projects. Since its creation in 2005, the programme has brought together some 6,000 students from over 100 countries worldwide. The students selected by Google are paired with a mentor from one of the participating projects, which can be led by institutes, organisations, companies, etc. This year, CERN PH Department’s SFT (Software Development for Experiments) Group took part in the GSoC programme for the third time, submitting 15 open-source projects. “Once published on the Google Summer of Code website (in April), the projects are open to applications,” says Jakob Blomer, one of the o...

  1. MELCOR Accident Consequence Code System (MACCS)

    Energy Technology Data Exchange (ETDEWEB)

    Chanin, D.I. (Technadyne Engineering Consultants, Inc., Albuquerque, NM (USA)); Sprung, J.L.; Ritchie, L.T.; Jow, Hong-Nian (Sandia National Labs., Albuquerque, NM (USA))

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previous CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. This document, Volume 1, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems.

  2. Comparison and Evaluation of the Effects of Rib and Lung Inhomogeneities on Lung Dose in Breast Brachytherapy using a Treatment Planning System and the MCNPX Code

    Directory of Open Access Journals (Sweden)

    Hossein Salehi Yazdi

    2010-09-01

    Introduction: This study investigates to what extent the dose to lung tissue computed by a commercially available treatment planning system (TPS) for 192Ir high-dose-rate breast brachytherapy is accurate in view of tissue inhomogeneities and the presence of ribs. Materials and Methods: A CT scan of the breast was used to construct a patient-equivalent phantom in the clinical treatment planning system. An implant involving 13 plastic catheters and 383 programmed source dwell positions was simulated using the MCNPX code. Results: The results were compared with those of the commercial TPS in the form of isodoses and cumulative dose-volume histograms in breast, lung and ribs. The comparison of the Monte Carlo results and the TPS calculations showed that isodoses greater than 62% in the breast, located rather close to the implant or away from the breast curvature surface and the lung boundary, were in good agreement. The TPS calculations, however, overestimated the dose in the lung for lower isodose contours and for points lying near the breast-air boundary and relatively far from the implant. Discussion and Conclusions: Taking the ribs into account and entering the actual data for breast, rib and lung revealed an average overestimation of the lung dose in the TPS calculations.

  3. Comparison between 1D and 1 1/2D Eulerian Vlasov codes for the numerical simulation of stimulated Raman scattering

    Science.gov (United States)

    Ghizzo, A.; Bertrand, P.; Lebas, J.; Shoucri, M.; Johnston, T.; Fijalkow, E.; Feix, M. R.

    1992-10-01

    The present 1 1/2D relativistic Euler-Vlasov code has been used to check the validity of a hydrodynamic description used in a 1D version of the Vlasov code. By these means, detailed numerical results can be compared; good agreement furnishes full support for the 1D electromagnetic Vlasov code, which runs faster than the 1 1/2D code. The results obtained assume a nonrelativistic v(y) velocity.

  4. Comparison of Latin-American Seismic Design Codes with Chinese and US Seismic Design Codes

    Institute of Scientific and Technical Information of China (English)

    任广杰; 邱兆山

    2013-01-01

    Among the Latin-American countries, the seismic design codes of Cuba, Colombia and Mexico (MOC) are quite typical. This paper introduces their methods for establishing the spectral acceleration of design earthquakes and compares these methods with the relevant contents of the Chinese and US codes. The degree of simplification in the Cuban code is close to that of the Chinese code and the 1997 UBC code; the Colombian code has adopted concepts from US codes of different periods; and the Mexican MOC code (manual) is applied in a very particular form. Other relevant points of comparison are also summarized, helping the reader understand the relevant design codes of Latin-American countries.

  5. Holographic codes

    CERN Document Server

    Latorre, Jose I

    2015-01-01

    There exists a remarkable four-qutrit state that carries absolute maximal entanglement in all its partitions. Employing this state, we construct a tensor network that delivers a holographic many-body state, the H-code, where the physical properties of the boundary determine those of the bulk. This H-code is made of an even superposition of states whose relative Hamming distances are exponentially large with the size of the boundary. This property makes H-codes natural states for a quantum memory. H-codes exist on tori of definite sizes and are classified into three different sectors characterized by the sum of their qutrits on cycles wrapped through the boundaries of the system. We construct a parent Hamiltonian for the H-code, which is highly non-local, and finally we compute the topological entanglement entropy of the H-code.

  6. Making Contact: The NAMES Project in Comparison to the Vietnam Memorial.

    Science.gov (United States)

    Jensen, Marvin D.

    Several principles of gestalt therapy are applied in an analysis of the similarities between the Vietnam Veterans Memorial and The NAMES Project Quilt. The NAMES Project Quilt memorializes people who have died of Acquired Immune Deficiency Syndrome (AIDS). The creators of the two memorials engaged in the initial searches for "whole"…

  7. Hotspots of uncertainty in land use and land cover change projections: a global scale model comparison

    NARCIS (Netherlands)

    Prestele, Reinhard; Alexander, Peter; Rounsevell, Mark; Arneth, Almut; Calvin, Katherine; Doelman, Jonathan; Eitelberg, David; Engström, Kerstin; Fujimori, Shinichiro; Hasegawa, Tomoko; Havlik, Petr; Humpenöder, Florian; Jain, Atul K.; Krisztin, Tamás; Kyle, Page; Meiyappan, Prasanth; Popp, Alexander; Sands, Ronald D.; Schaldach, Rüdiger; Schüngel, Jan; Stehfest, Elke; Tabeau, Andrzej; Meijl, van Hans; Vliet, van Jasper; Verburg, Peter H.

    2016-01-01

    Model-based global projections of future land use and land cover (LULC) change are frequently used in environmental assessments to study the impact of LULC change on environmental services and to provide decision support for policy. These projections are characterized by a high uncertainty in terms

  8. A Comparison on the Implementation Approaches for the e-Bario and e-Bedian Projects

    Science.gov (United States)

    2004-06-01

    spurred Universiti Malaysia Sarawak (Unimas) to conduct similar research projects to introduce ICT to remote communities in Sarawak, Malaysia. The... Unimas’ research project, as they are both extremely remote, particularly from mainstream development. Both of the villages do not have government...Picking up on this cue, the researchers from Unimas decided to embark on ICT and Internet connection beginning with the school. The physical

  9. A comparison of the aquatic impacts of large hydro and small hydro projects

    Science.gov (United States)

    Taylor, Lara A.

    The expansion of small hydro development in British Columbia has raised concerns surrounding the effects of these projects, and the provincial government's decision to proceed with Site C has brought attention to the impacts of large hydro. Together, these decisions highlight that there are impacts associated with all energy development. My study examines the aquatic effects of large and small hydro projects using two case study sites: Site C and the Upper Harrison Water Power Project. I first determine the aquatic effects of each of the case study sites. Next, I use existing literature and benefits transfer to determine the monetary value of these effects. My results suggest that, with mitigation, small hydro projects have less of an effect on the environment than a large hydro project per unit of electricity. I also describe the implications of my study in the context of current British Columbia energy policy. Keywords: hydropower; aquatic effects. Subject Terms: environmental impact assessment; benefits transfer.

  10. The FORTRAN NALAP code adapted to a microcomputer compiler

    Energy Technology Data Exchange (ETDEWEB)

    Lobo, Paulo David de Castro; Borges, Eduardo Madeira; Braz Filho, Francisco Antonio; Guimaraes, Lamartine Nogueira Frutuoso, E-mail: plobo.a@uol.com.b, E-mail: eduardo@ieav.cta.b, E-mail: fbraz@ieav.cta.b, E-mail: guimarae@ieav.cta.b [Instituto de Estudos Avancados (IEAv/CTA), Sao Jose dos Campos, SP (Brazil)

    2010-07-01

    The Nuclear Energy Division of the Institute for Advanced Studies (IEAv) is conducting the TERRA (TEcnologia de Reatores Rapidos Avancados, Technology for Advanced Fast Reactors) project, aimed at a space reactor application. In this work, in support of the TERRA project, the NALAP code adapted to a microcomputer compiler called Compaq Visual Fortran (Version 6.6) is presented. This code, adapted from the light water reactor transient code RELAP 3B, simulates thermal-hydraulic responses for sodium cooled fast reactors. The strategy to run the code on a PC was divided into several steps, mainly to remove unnecessary routines, to eliminate old statements, to introduce new ones and also to include extended precision mode. The source program was able to solve three sample cases under protected transient conditions suggested in the literature: normal reactor shutdown, with a delay of 200 ms to start the control rod movement and a delay of 500 ms to stop the pumps; reactor scram after a loss-of-flow transient; and protected overpower transients. Comparisons were made with results from the time when the NALAP code was acquired by the IEAv, back in the 1980s. All the responses for these three simulations reproduced the calculations performed with the CDC compiler in 1985. Further modifications will include the use of gas as coolant for the nuclear reactor to allow a Closed Brayton Cycle Loop - CBCL - to be used as a heat/electric converter. (author)

  11. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  12. The KIDTALK Behavior and Language Code: Manual and Coding Protocol.

    Science.gov (United States)

    Delaney, Elizabeth M.; Ezell, Sara S.; Solomon, Ned A.; Hancock, Terry B.; Kaiser, Ann P.

    Developed as part of the Milieu Language Teaching Project at the John F. Kennedy Center at Vanderbilt University in Nashville, Tennessee, this KIDTALK Behavior-Language Coding Protocol and manual measures behavior occurring during adult-child interactions. The manual is divided into 5 distinct sections: (1) the adult behavior codes describe…

  13. Comparison of a Label-Free Quantitative Proteomic Method Based on Peptide Ion Current Area to the Isotope Coded Affinity Tag Method

    Directory of Open Access Journals (Sweden)

    Young Ah Goo

    2008-01-01

    Recently, several research groups have published methods for the determination of proteomic expression profiling by mass spectrometry without the use of exogenously added stable isotopes or stable isotope dilution theory. These so-called label-free methods have the advantage of allowing data on each sample to be acquired independently from all other samples, to which they can later be compared in silico for the purpose of measuring changes in protein expression between various biological states. We developed label-free software based on direct measurement of the peptide ion current area (PICA) and compared it to two other methods: a simpler label-free method known as spectral counting, and the isotope coded affinity tag (ICAT) method. Data analysis by these methods of a standard mixture containing proteins of known, but varying, concentrations showed that they performed similarly, with a mean squared error of 0.09. Additionally, complex bacterial protein mixtures spiked with known concentrations of standard proteins were analyzed using the PICA label-free method. These results indicated that the PICA method detected all levels of standard spiked proteins at the 90% confidence level in this complex biological sample. This finding confirms that label-free methods based on direct measurement of the area under a single ion current trace perform as well as the standard ICAT method. Given that label-free methods allow experimental designs well beyond pair-wise comparison, label-free methods such as our PICA method are well suited for proteomic expression profiling of large numbers of samples, as is needed in clinical analysis.
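The quantity at the heart of the PICA approach is the area under a peptide's extracted ion current trace, with relative expression taken as the ratio of such areas between samples. A hedged sketch on synthetic chromatograms (peak shapes, times and intensities are invented for illustration; this is not the authors' software):

```python
import numpy as np

def ion_current_area(times, intensities):
    """Area under an extracted ion chromatogram (trapezoidal rule),
    used here as a label-free peptide abundance measure."""
    return float(np.sum((intensities[1:] + intensities[:-1]) * np.diff(times)) / 2.0)

# Hypothetical ion current traces of one peptide in two biological states
t = np.linspace(0.0, 30.0, 61)                      # retention time, seconds
state_a = 1.0e5 * np.exp(-((t - 15.0) / 3.0) ** 2)  # Gaussian elution peak
state_b = 2.0e5 * np.exp(-((t - 15.2) / 3.0) ** 2)  # ~2x more abundant, slight shift

ratio = ion_current_area(t, state_b) / ion_current_area(t, state_a)
print(f"expression ratio B/A = {ratio:.2f}")        # close to 2
```

The area is insensitive to the small retention-time shift between runs, which is one reason peak area is a more robust abundance measure than peak height.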

  14. The Sim-SEQ Project: Comparison of Selected Flow Models for the S-3 Site

    Energy Technology Data Exchange (ETDEWEB)

    Mukhopadhyay, Sumit; Doughty, Christine A.; Bacon, Diana H.; Li, Jun; Wei, Lingli; Yamamoto, Hajime; Gasda, Sarah E.; Hosseini, Seyyed; Nicot, Jean-Philippe; Birkholzer, Jens

    2015-05-23

    Sim-SEQ is an international initiative on model comparison for geologic carbon sequestration, with the objective of understanding and, if possible, quantifying model uncertainties. Model comparison efforts in Sim-SEQ are at present focused on one specific field test site, hereafter referred to as the Sim-SEQ Study site (or S-3 site). Within Sim-SEQ, different modeling teams are developing conceptual models of CO2 injection at the S-3 site. In this paper, we select five flow models of the S-3 site and provide a qualitative comparison of their attributes and predictions. These models are based on five different simulators or modeling approaches: TOUGH2/EOS7C, STOMP-CO2e, MoReS, TOUGH2-MP/ECO2N, and VESA. In addition to model-to-model comparison, we perform a limited model-to-data comparison and illustrate how model choices impact model predictions. We conclude the paper by making recommendations for model refinement that are likely to result in less uncertainty in model predictions.

  15. Validation and comparison of two-phase flow modeling capabilities of CFD, sub-channel and system codes by means of post-test calculations of BFBT transient tests

    Energy Technology Data Exchange (ETDEWEB)

    Jaeger, Wadim; Manes, Jorge Perez; Imke, Uwe; Escalante, Javier Jimenez; Espinoza, Victor Sanchez, E-mail: victor.sanchez@kit.edu

    2013-10-15

    Highlights: • Simulation of BFBT turbine and pump transients at multiple scales. • CFD, sub-channel and system codes are used for the comparative study. • Heat transfer models are compared to identify differences between the code predictions. • All three scales predict results in good agreement with the experiment. • Subcooled boiling models are identified as a field for future research. -- Abstract: The Institute for Neutron Physics and Reactor Technology (INR) at the Karlsruhe Institute of Technology (KIT) is involved in the validation and qualification of modern thermal-hydraulic simulation tools at various scales. In the present paper, the prediction capabilities of four codes from three different scales - NEPTUNE_CFD as a fine-mesh computational fluid dynamics code, SUBCHANFLOW and COBRA-TF as sub-channel codes and TRACE as a system code - are assessed with respect to their two-phase flow modeling capabilities. The subject of the investigations is the well-known and widely used database provided within the NUPEC BFBT benchmark related to BWRs. Void fraction measurements simulating a turbine trip and a re-circulation pump trip are provided at several axial levels of the bundle. The prediction capabilities of the codes for transient conditions with various combinations of boundary conditions are validated by comparing the code predictions with the experimental data. In addition, the physical models of the different codes are described and compared to each other in order to explain the different results and to identify areas for further improvement.

  16. A Study on Ways of Lossless Image Compression and Coding and Relevant Comparisons

    Institute of Scientific and Technical Information of China (English)

    冉晓娟

    2014-01-01

    This paper studies the principles of three lossless image compression methods, namely run-length coding, LZW coding and Huffman coding, and analyzes them comparatively, which helps in selecting a suitable compression coding method for different types of images.
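As a rough illustration of two of the three methods compared (LZW is omitted for brevity), the sketch below run-length encodes a row of pixels and builds a Huffman code for the same symbols; the pixel row is hypothetical:

```python
import heapq
from collections import Counter

def run_length_encode(data):
    """(symbol, run length) pairs - effective on long constant runs."""
    runs = []
    for s in data:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    return [(s, n) for s, n in runs]

def huffman_code(data):
    """Map each symbol to a prefix-free bit string via a Huffman tree."""
    heap = [[w, i, {sym: ""}] for i, (sym, w) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    tie = len(heap)                  # unique tiebreaker so dicts are never compared
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tie, merged])
        tie += 1
    return heap[0][2]

row = "AAAAAABBBCCD"
print(run_length_encode(row))        # [('A', 6), ('B', 3), ('C', 2), ('D', 1)]
codes = huffman_code(row)
bits = sum(len(codes[s]) for s in row)
print(bits, "bits vs", 8 * len(row), "bits for raw 8-bit symbols")
```

Both representations are lossless: the original row is exactly recoverable, which is the defining constraint the paper's comparison operates under.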

  17. A class of Sudan-decodable codes

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Refslund

    2000-01-01

    In this article, Sudan's algorithm is modified into an efficient method to list-decode a class of codes which can be seen as a generalization of Reed-Solomon codes. The algorithm is specialized into a very efficient method for unique decoding. The code construction can be generalized based on algebraic-geometry codes, and the decoding algorithms are generalized accordingly. Comparisons with Reed-Solomon and Hermitian codes are made.

  18. Inter-comparison of statistical downscaling methods for projection of extreme flow indices across Europe

    DEFF Research Database (Denmark)

    Hundecha, Yeshewatesfa; Sunyer Pinya, Maria Antonia; Lawrence, Deborah;

    2016-01-01

    The effect of methods of statistical downscaling of daily precipitation on changes in extreme flow indices under a plausible future climate change scenario was investigated in 11 catchments selected from 9 countries in different parts of Europe. The catchments vary from 67 to 6171 km2 in size ... flow indices in most of the catchments. The catchments where the extremes are expected to increase have a rainfall-dominated flood regime. In these catchments, the downscaling methods also project an increase in the extreme precipitation in the seasons when the extreme flows occur. In catchments where the flooding is mainly caused by spring/summer snowmelt, the downscaling methods project a decrease in the extreme flows in three of the four catchments considered. A major portion of the variability in the projected changes in the extreme flow indices is attributable to the variability of the climate model ...

  19. Hotspots of uncertainty in land-use and land-cover change projections: a global-scale model comparison

    Energy Technology Data Exchange (ETDEWEB)

    Prestele, Reinhard [Environmental Geography Group, Department of Earth Sciences, Vrije Universiteit Amsterdam, De Boelelaan 1087 1081 HV Amsterdam The Netherlands; Alexander, Peter [School of GeoSciences, University of Edinburgh, Drummond Street Edinburgh EH89XP UK; Rounsevell, Mark D. A. [School of GeoSciences, University of Edinburgh, Drummond Street Edinburgh EH89XP UK; Arneth, Almut [Department Atmospheric Environmental Research (IMK-IFU), Karlsruhe Institute of Technology, Kreuzeckbahnstr. 19 82467 Garmisch-Partenkirchen Germany; Calvin, Katherine [Joint Global Change Research Institute, Pacific Northwest National Laboratory, College Park MD 20740 USA; Doelman, Jonathan [PBL Netherlands Environmental Assessment Agency, P.O. Box 303 3720 AH Bilthoven The Netherlands; Eitelberg, David A. [Environmental Geography Group, Department of Earth Sciences, Vrije Universiteit Amsterdam, De Boelelaan 1087 1081 HV Amsterdam The Netherlands; Engström, Kerstin [Department of Geography and Ecosystem Science, Lund University, Sölvegatan 12 Lund Sweden; Fujimori, Shinichiro [Center for Social and Environmental Systems Research, National Institute for Environmental Studies, 16-2 Onogawa Tsukuba Ibaraki 305-8506 Japan; Hasegawa, Tomoko [Center for Social and Environmental Systems Research, National Institute for Environmental Studies, 16-2 Onogawa Tsukuba Ibaraki 305-8506 Japan; Havlik, Petr [Ecosystem Services and Management Program, International Institute for Applied Systems Analysis, A-2361 Laxenburg Austria; Humpenöder, Florian [Potsdam Institute for Climate Impact Research (PIK), P.O. Box 60 12 03 14412 Potsdam Germany; Jain, Atul K. 
[Department of Atmospheric Sciences, University of Illinois, Urbana IL 61801 USA; Krisztin, Tamás [Ecosystem Services and Management Program, International Institute for Applied Systems Analysis, A-2361 Laxenburg Austria; Kyle, Page [Joint Global Change Research Institute, Pacific Northwest National Laboratory, College Park MD 20740 USA; Meiyappan, Prasanth [Department of Atmospheric Sciences, University of Illinois, Urbana IL 61801 USA; Popp, Alexander [Potsdam Institute for Climate Impact Research (PIK), P.O. Box 60 12 03 14412 Potsdam Germany; Sands, Ronald D. [Resource and Rural Economics Division, Economic Research Service, US Department of Agriculture, Washington DC 20250 USA; Schaldach, Rüdiger [Center for Environmental Systems Research, University of Kassel, Wilhelmshöher Allee 47 D-34109 Kassel Germany; Schüngel, Jan [Center for Environmental Systems Research, University of Kassel, Wilhelmshöher Allee 47 D-34109 Kassel Germany; Stehfest, Elke [PBL Netherlands Environmental Assessment Agency, P.O. Box 303 3720 AH Bilthoven The Netherlands; Tabeau, Andrzej [LEI, Wageningen University and Research Centre, P.O. Box 29703 2502 LS The Hague The Netherlands; Van Meijl, Hans [LEI, Wageningen University and Research Centre, P.O. Box 29703 2502 LS The Hague The Netherlands; Van Vliet, Jasper [Environmental Geography Group, Department of Earth Sciences, Vrije Universiteit Amsterdam, De Boelelaan 1087 1081 HV Amsterdam The Netherlands; Verburg, Peter H. [Environmental Geography Group, Department of Earth Sciences, Vrije Universiteit Amsterdam, De Boelelaan 1087 1081 HV Amsterdam The Netherlands; Swiss Federal Research Institute WSL, Zürcherstrasse 111 CH-8903 Birmensdorf Switzerland

    2016-06-08

    Model-based global projections of future land use and land cover (LULC) change are frequently used in environmental assessments to study the impact of LULC change on environmental services and to provide decision support for policy. These projections are characterized by a high uncertainty in terms of quantity and allocation of projected changes, which can severely impact the results of environmental assessments. In this study, we identify hotspots of uncertainty, based on 43 simulations from 11 global-scale LULC change models representing a wide range of assumptions of future biophysical and socio-economic conditions. We attribute components of uncertainty to input data, model structure, scenario storyline and a residual term, based on a regression analysis and analysis of variance. From this diverse set of models and scenarios we find that the uncertainty varies, depending on the region and the LULC type under consideration. Hotspots of uncertainty appear mainly at the edges of globally important biomes (e.g. boreal and tropical forests). Our results indicate that an important source of uncertainty in forest and pasture areas originates from different input data applied in the models. Cropland, in contrast, is more consistent among the starting conditions, while variation in the projections gradually increases over time due to diverse scenario assumptions and different modeling approaches. Comparisons at the grid cell level indicate that disagreement is mainly related to LULC type definitions and the individual model allocation schemes. We conclude that improving the quality and consistency of observational data utilized in the modeling process as well as improving the allocation mechanisms of LULC change models remain important challenges. Current LULC representation in environmental assessments might miss the uncertainty arising from the diversity of LULC change modeling approaches and many studies ignore the uncertainty in LULC projections in assessments of

  20. Hotspots of uncertainty in land-use and land-cover change projections: a global-scale model comparison.

    Science.gov (United States)

    Prestele, Reinhard; Alexander, Peter; Rounsevell, Mark D A; Arneth, Almut; Calvin, Katherine; Doelman, Jonathan; Eitelberg, David A; Engström, Kerstin; Fujimori, Shinichiro; Hasegawa, Tomoko; Havlik, Petr; Humpenöder, Florian; Jain, Atul K; Krisztin, Tamás; Kyle, Page; Meiyappan, Prasanth; Popp, Alexander; Sands, Ronald D; Schaldach, Rüdiger; Schüngel, Jan; Stehfest, Elke; Tabeau, Andrzej; Van Meijl, Hans; Van Vliet, Jasper; Verburg, Peter H

    2016-12-01

    Model-based global projections of future land-use and land-cover (LULC) change are frequently used in environmental assessments to study the impact of LULC change on environmental services and to provide decision support for policy. These projections are characterized by a high uncertainty in terms of quantity and allocation of projected changes, which can severely impact the results of environmental assessments. In this study, we identify hotspots of uncertainty, based on 43 simulations from 11 global-scale LULC change models representing a wide range of assumptions of future biophysical and socioeconomic conditions. We attribute components of uncertainty to input data, model structure, scenario storyline and a residual term, based on a regression analysis and analysis of variance. From this diverse set of models and scenarios, we find that the uncertainty varies, depending on the region and the LULC type under consideration. Hotspots of uncertainty appear mainly at the edges of globally important biomes (e.g., boreal and tropical forests). Our results indicate that an important source of uncertainty in forest and pasture areas originates from different input data applied in the models. Cropland, in contrast, is more consistent among the starting conditions, while variation in the projections gradually increases over time due to diverse scenario assumptions and different modeling approaches. Comparisons at the grid cell level indicate that disagreement is mainly related to LULC type definitions and the individual model allocation schemes. We conclude that improving the quality and consistency of observational data utilized in the modeling process and improving the allocation mechanisms of LULC change models remain important challenges. Current LULC representation in environmental assessments might miss the uncertainty arising from the diversity of LULC change modeling approaches, and many studies ignore the uncertainty in LULC projections in assessments of LULC
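The regression/analysis-of-variance attribution described above can be sketched as a toy variance decomposition: how much of the spread in a projected quantity is explained by which model was used versus which scenario storyline. All numbers and the labels (`modelA`, `SSP1`, etc.) below are invented for illustration, not the study's data:

```python
# Toy variance decomposition: how much of the spread in projected cropland
# area comes from the choice of model vs. the scenario storyline?
# All values and labels are invented for illustration.
import statistics

# Projected cropland area (Mha), indexed by (model, scenario).
projections = {
    ("modelA", "SSP1"): 1500, ("modelA", "SSP3"): 1650,
    ("modelB", "SSP1"): 1480, ("modelB", "SSP3"): 1700,
    ("modelC", "SSP1"): 1560, ("modelC", "SSP3"): 1620,
}

values = list(projections.values())
grand_mean = statistics.mean(values)
ss_total = sum((v - grand_mean) ** 2 for v in values)

def ss_factor(index):
    """Between-group sum of squares for one factor (0 = model, 1 = scenario)."""
    groups = {}
    for key, v in projections.items():
        groups.setdefault(key[index], []).append(v)
    return sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
               for g in groups.values())

share_model = ss_factor(0) / ss_total       # model-structure share
share_scenario = ss_factor(1) / ss_total    # scenario-storyline share
```

In this toy setup the scenario dominates the variance; the residual (interaction) term absorbs the rest, mirroring the attribution scheme the abstract describes.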

  1. Multimodel simulations of carbon monoxide: comparison with observations and projected near-future changes

    NARCIS (Netherlands)

    Shindell, D.T.; Krol, M.C.|info:eu-repo/dai/nl/078760410

    2006-01-01

We analyze present-day and future carbon monoxide (CO) simulations in 26 state-of-the-art atmospheric chemistry models run to study future air quality and climate change. In comparison with near-global satellite observations from the MOPITT instrument and local surface measurements, the models show

  2. Multimodel simulations of carbon monoxide: Comparison with observations and projected near-future changes

    NARCIS (Netherlands)

    Shindell, D.T.; Faluvegi, G.; Stevenson, D.S.; Krol, M.C.; Emmons, L.K.; Lamarque, J.F.; Petron, G.; Dentener, F.J.; Ellingsen, K.; Schultz, M.G.; Wild, O.; Amann, M.; Atherton, C.S.; Bergmann, D.J.; Bey, I.; Butler, T.; Cofala, J.; Collins, W.J.; Derwent, R.G.; Doherty, R.M.; Drevet, J.; Eskes, H.J.; Fiore, A.M.; Gauss, M.; Hauglustaine, D.A.; Horowitz, L.W.; Isaksen, I.S.A.; Lawrence, M.G.; Montanaro, V.; Muller, J.F.; Pitari, G.; Prather, M.J.; Pyle, J.A.; Rast, S.; Rodriguez, J.M.; Sanderson, M.G.; Savage, N.H.; Strahan, S.E.; Sudo, K.; Szopa, S.; Unger, N.; Noije, van T.P.C.; Zeng, G.

    2006-01-01

    We analyze present-day and future carbon monoxide (CO) simulations in 26 state-of-the-art atmospheric chemistry models run to study future air quality and climate change. In comparison with near-global satellite observations from the MOPITT instrument and local surface measurements, the models show

  3. Inter-comparison of statistical downscaling methods for projection of extreme flow indices across Europe

    Science.gov (United States)

    Hundecha, Yeshewatesfa; Sunyer, Maria A.; Lawrence, Deborah; Madsen, Henrik; Willems, Patrick; Bürger, Gerd; Kriaučiūnienė, Jurate; Loukas, Athanasios; Martinkova, Marta; Osuch, Marzena; Vasiliades, Lampros; von Christierson, Birgitte; Vormoor, Klaus; Yücel, Ismail

    2016-10-01

    The effect of methods of statistical downscaling of daily precipitation on changes in extreme flow indices under a plausible future climate change scenario was investigated in 11 catchments selected from 9 countries in different parts of Europe. The catchments vary from 67 to 6171 km2 in size and cover different climate zones. 15 regional climate model outputs and 8 different statistical downscaling methods, which are broadly categorized as change factor and bias correction based methods, were used for the comparative analyses. Different hydrological models were implemented in different catchments to simulate daily runoff. A set of flood indices were derived from daily flows and their changes have been evaluated by comparing their values derived from simulations corresponding to the current and future climate. Most of the implemented downscaling methods project an increase in the extreme flow indices in most of the catchments. The catchments where the extremes are expected to increase have a rainfall-dominated flood regime. In these catchments, the downscaling methods also project an increase in the extreme precipitation in the seasons when the extreme flows occur. In catchments where the flooding is mainly caused by spring/summer snowmelt, the downscaling methods project a decrease in the extreme flows in three of the four catchments considered. A major portion of the variability in the projected changes in the extreme flow indices is attributable to the variability of the climate model ensemble, although the statistical downscaling methods contribute 35-60% of the total variance.
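The simplest member of the change-factor family of downscaling methods mentioned above scales the observed series by the ratio of the climate model's future mean to its control-period mean. A minimal sketch with invented precipitation values (not one of the eight methods of the study):

```python
# Multiplicative change-factor downscaling (simplest form): scale the
# observed series by the ratio of RCM future mean to RCM control mean.
def change_factor(obs, rcm_control, rcm_future):
    cf = (sum(rcm_future) / len(rcm_future)) / (sum(rcm_control) / len(rcm_control))
    return [cf * p for p in obs]

# Invented daily precipitation values (mm).
obs = [0.0, 5.2, 12.1, 0.3, 8.8]           # station observations
rcm_control = [1.0, 4.0, 10.0, 0.5, 6.5]   # RCM, current climate
rcm_future = [1.2, 4.6, 12.0, 0.7, 7.9]    # RCM, future scenario

scaled = change_factor(obs, rcm_control, rcm_future)
```

Because the observed series is only rescaled, the temporal structure of the extremes is inherited from the observations, which is one reason change-factor and bias-correction methods can project different changes in flood indices.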

  4. Comparison of the Effect of Iterative Reconstruction versus Filtered Back Projection on Cardiac CT Postprocessing

    NARCIS (Netherlands)

    Spears, J. Reid; Schoepf, U. Joseph; Henzler, Thomas; Joshi, Gayatri; Moscariello, Antonio; Vliegenthart, Rozemarijn; Cho, Young Jun; Apfaltrer, Paul; Rowe, Garrett; Weininger, Markus; Ebersberger, Ullrich

    2014-01-01

    Rationale and Objectives: To investigate the impact of iterative reconstruction in image space (IRIS) on image noise, image quality (IQ), and postprocessing at coronary computed tomography angiography (cCTA) compared to traditional filtered back-projection (FBP). Materials and Methods: The cCTA resu

  5. A Comparison of Critical Chain Project Management (CCPM) Buffer Sizing Techniques

    Science.gov (United States)

    2007-11-02

    which network characterization was used. Besides the two works in the preceding paragraph, Brucker, Drexl, Mohring, Neumann, and Pesch (1999) and...analysis. Simulation 1991; 57(4): 245- 255 Brucker P, Drexl A, Mohring R, Neumann K, Pesch E. Resource-constrained project scheduling: Notation

  7. LiveCode mobile development

    CERN Document Server

    Lavieri, Edward D

    2013-01-01

    A practical guide written in a tutorial style, "LiveCode Mobile Development Hotshot" walks you step-by-step through 10 individual projects. Every project is divided into subtasks to make learning more organized and easy to follow, with explanations, diagrams, screenshots, and downloadable material. This book is great for anyone who wants to develop mobile applications using LiveCode. You should be familiar with LiveCode and have access to a smartphone. You are not expected to know how to create graphics or audio clips.

  8. Radiative transfer codes for atmospheric correction and aerosol retrieval: intercomparison study.

    Science.gov (United States)

    Kotchenova, Svetlana Y; Vermote, Eric F; Levy, Robert; Lyapustin, Alexei

    2008-05-01

    Results are summarized for a scientific project devoted to the comparison of four atmospheric radiative transfer codes incorporated into different satellite data processing algorithms, namely, 6SV1.1 (second simulation of a satellite signal in the solar spectrum, vector, version 1.1), RT3 (radiative transfer), MODTRAN (moderate resolution atmospheric transmittance and radiance code), and SHARM (spherical harmonics). The performance of the codes is tested against well-known benchmarks, such as Coulson's tabulated values and a Monte Carlo code. The influence of revealed differences on aerosol optical thickness and surface reflectance retrieval is estimated theoretically by using a simple mathematical approach. All information about the project can be found at http://rtcodes.ltdri.org.

  9. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long haul transmissions which use repeaters to compensate for the loss in signal strength on transmission links also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the
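One classical waveform-coding primitive of the kind the abstract alludes to is logarithmic companding: μ-law (used in North American digital telephony, per ITU-T G.711) compresses the sample amplitude before uniform quantization so that quiet passages get finer resolution. A minimal sketch of the compress/quantize/expand round trip:

```python
# Mu-law companding sketch: compress, quantize to 8 bits, expand.
import math

MU = 255  # mu-law parameter used in G.711-style companding

def mu_law_compress(x):
    """Compress a sample in [-1, 1] logarithmically (mu-law)."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Inverse of mu_law_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def codec(x, bits=8):
    """Quantize the compressed value uniformly, then reconstruct."""
    levels = 2 ** (bits - 1)
    q = round(mu_law_compress(x) * levels) / levels
    return mu_law_expand(q)
```

With 8 bits the reconstruction error stays small relative to the signal amplitude across the whole range, which is the point of companding over plain uniform quantization.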

  10. The Durham elementary particle data bank, data analysis and model comparison project

    CERN Document Server

    Gault, F D; Chadwick, B; Cooper, C S

    1973-01-01

    In comparing theoretical models with data the first problem encountered is the retrieval of data from published sources and its storage in such a way as to make it readily accessible. The second is to implement programs which check the data for consistency and any recurring characteristics. The third is to produce an efficiently written program to compare the theoretical model with the data, the program being written as a function of N parameters so that it can be coupled to a minimizing program such as CERN MINUIT. Between each step, interface, retrieval and storage problems arise. The authors discuss the organization of the Durham project, the Data Bank software and retrieval techniques and the integration of model independent software with the Data Bank. This project is being carried out on the Northumbrian Universities Multiple Access Computer (NUMAC) an IBM 360 /67 operating under the Michigan Terminal System (MTS). (10 refs).

  11. Comparison of cancer risks projected from animal bioassays to epidemiologic studies of acrylonitrile-exposed workers.

    Science.gov (United States)

    Ward, C E; Starr, T B

    1993-10-01

    Bioassay findings have demonstrated that acrylonitrile (ACN) is a rodent carcinogen, but the available epidemiologic evidence provides little support for the human carcinogenicity of ACN. This discordance between laboratory animal and human study findings is explored by determining post hoc the statistical power of 11 epidemiologic studies of ACN-exposed workers to detect the all-site and brain cancer excesses that are projected from rodent drinking water bioassay data. At reasonable estimates of the level and duration of exposures among the occupational cohorts, a majority of the human studies had sufficient power (> 80%) to detect the projected excesses, yet such responses were consistently absent. We conclude, subject to certain caveats, that the upper bound estimate of ACN's inhalation cancer potency of 1.5 × 10⁻⁴ per ppm is too high to be consistent with the human ACN experience.
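A post hoc power calculation of the kind described above can be sketched for a single cohort: given an expected background number of cancer cases and a projected relative risk, compute the probability that a one-sided Poisson test at α = 0.05 would detect the excess. The inputs below are hypothetical, and the Poisson tail is summed directly so the sketch needs only the standard library:

```python
# Post hoc power of a one-sided Poisson test for a cancer excess.
# Inputs are hypothetical, not taken from the 11 studies.
import math

def poisson_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu), summed term by term."""
    if k < 0:
        return 0.0
    term = math.exp(-mu)
    total = term
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def detection_power(expected_background, relative_risk, alpha=0.05):
    """Power of a one-sided Poisson test for an excess over background."""
    # Critical count: smallest c with P(X >= c | background) <= alpha.
    c = 0
    while 1.0 - poisson_cdf(c - 1, expected_background) > alpha:
        c += 1
    # Power: probability of reaching c when the true rate is elevated.
    return 1.0 - poisson_cdf(c - 1, expected_background * relative_risk)

# E.g. 20 expected background cases and a projected doubling of risk:
power = detection_power(20, 2.0)
```

With these hypothetical numbers the power comfortably exceeds the 80% threshold the abstract uses, illustrating why a consistent absence of excesses across such cohorts is informative.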

  12. Validation and intercomparison of Persistent Scatterers Interferometry: PSIC4 project results

    NARCIS (Netherlands)

    Raucoules, D.; Bourgine, B.; Michele, M. de; Le Cozannet, G.; Closset, L.; Bremmer, C.; Veldkamp, H.; Tragheim, D.; Bateson, L.; Crosetto, M.; Agudo, M.; Engdahl, M.

    2009-01-01

    This article presents the main results of the Persistent Scatterer Interferometry Codes Cross Comparison and Certification for long term differential interferometry (PSIC4) project. The project was based on the validation of the PSI (Persistent Scatterer Interferometry) data with respect to levellin

  13. Comparison of Profitableness between Tobacco and Some Products in Scope of Alternative Product Project

    Directory of Open Access Journals (Sweden)

    H. Sivuk

    2009-09-01

    Full Text Available Tobacco contributes substantially to the Turkish economy, mainly through export and tax revenues. Restriction policies on tobacco planting areas have been applied under the "Tobacco Law" enacted in 2002. Accordingly, the Alternative Product Project, in place since 2002, has offered incentives to producers who give up tobacco production in favour of other crops. Adoption by producers in the 11 provinces covered in 2002-2007 was found to be insufficient. Most of the producers who used the project support after abandoning tobacco production turned to cereal cultivation; this is a clear indicator of producers' indecision about which product to choose instead of tobacco. From 2008 the project will continue for 3 more years, with changed support amounts and a scope limited to 9 provinces. This study is intended to guide producers who are seeking alternative products to tobacco. To this end, the profitability of wheat, aspir (safflower) and canola as alternatives to tobacco has been analyzed, using Absolute Profit and Relative Profit in the analysis phase. The study found that the best alternative to tobacco is canola.

  14. COMPARISONS BETWEEN AND COMBINATIONS OF DIFFERENT APPROACHES TO ACCELERATE ENGINEERING PROJECTS

    Directory of Open Access Journals (Sweden)

    H. Steyn

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: In this article, traditional project management methods such as PERT and CPM, as well as fast-tracking and systems approaches, viz. concurrent engineering and critical chain, are reviewed with specific reference to their contribution to reducing the duration of the execution phase of engineering projects. Each of these techniques has some role to play in the acceleration of project execution. Combinations of approaches are evaluated by considering the potential of sets consisting of two different approaches each. While the PERT and CPM approaches have been combined for many years in a technique called PERT/CPM, new combinations of approaches are discussed. Certain assumptions that are inherent to PERT, and that are often wrong, are not made by the critical chain approach.

    AFRIKAANSE OPSOMMING: In hierdie artikel word tradisionele projekbestuurbenaderings soos PERT en CPM asook projekversnelling en stelselbenaderings, naamlik gelyktydige ingenieurswese, en kritiekeketting-ondersoek met betrekking tot die bydrae wat elk tot die versnelling van die uitvoeringsfase van ingenieursprojekte kan lewer. Elk van hierdie benaderings kan ‘n spesifieke bydrae tot die versnelling van projekte lewer. Kombinasies, elk bestaande uit twee verskillende benaderings, word geëvalueer. Terwyl PERT en CPM reeds baie jare lank in kombinasie gebruik word, word nuwe kombinasies ook hier bespreek. Sekere aannames inherent aan die PERT-benadering is dikwels foutief. Hierdie aannames word nie deur die kritieke-ketting-benadering gemaak nie.
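The CPM component of the combinations discussed above reduces to a forward pass over the activity network: the project duration is the longest path of activity durations. A minimal sketch with an invented four-activity network (not from the article):

```python
# Minimal CPM forward pass: earliest finish times and project duration.
# Toy network: activity -> (duration, predecessors). Values are invented.
activities = {
    "design":  (4, []),
    "procure": (6, ["design"]),
    "build":   (5, ["design"]),
    "test":    (3, ["procure", "build"]),
}

def earliest_finish(activities):
    """Recursive forward pass; memoized in the ef dict."""
    ef = {}
    def finish(a):
        if a not in ef:
            dur, preds = activities[a]
            ef[a] = dur + max((finish(p) for p in preds), default=0)
        return ef[a]
    for a in activities:
        finish(a)
    return ef

ef = earliest_finish(activities)
project_duration = max(ef.values())  # longest path: design -> procure -> test
```

Critical chain starts from the same network but replaces padded activity estimates with aggressive ones plus a shared project buffer, which is where the two approaches diverge.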

  15. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    ; alternatives to mainstream development, from performances of the live-coding scene to the organizational forms of commons-based peer production; the democratic promise of social media and their paradoxical role in suppressing political expression; and the market’s emptying out of possibilities for free...... development, Speaking Code unfolds an argument to undermine the distinctions between criticism and practice, and to emphasize the aesthetic and political aspects of software studies. Not reducible to its functional aspects, program code mirrors the instability inherent in the relationship of speech...... expression in the public realm. The book’s line of argument defends language against its invasion by economics, arguing that speech continues to underscore the human condition, however paradoxical this may seem in an era of pervasive computing....

  16. Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel

    Science.gov (United States)

    Massey, J. L.

    1976-01-01

    Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
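For illustration, a rate-1/2 binary convolutional encoder of the kind compared above can be written in a few lines. The generator polynomials below are the well-known constraint-length-7 pair (171, 133 octal), chosen only because they are widely documented; the KE = 24 codes evaluated in the report use longer registers:

```python
# Rate-1/2 binary convolutional encoder (constraint length 7, generators
# 171/133 octal for illustration; the report's KE = 24 codes differ).
G1, G2 = 0o171, 0o133
K = 7  # constraint length

def conv_encode(bits):
    """Return the two output bits per input bit, as a flat list."""
    state = 0
    out = []
    for b in bits:
        # Shift the new bit into the encoder register.
        state = ((state << 1) | b) & ((1 << K) - 1)
        # Each output bit is the parity of the register masked by a generator.
        out.append(bin(state & G1).count("1") % 2)
        out.append(bin(state & G2).count("1") % 2)
    return out
```

Sequential decoding of such a code searches the tree of possible input sequences, and its computational load depends strongly on the code's distance profile, which is why the report compares codes on both computation and error probability.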

  17. Comparison of THALES and VIPRE-01 Subchannel Codes for Loss of Flow and Single Reactor Coolant Pump Rotor Seizure Accidents using Lumped Channel APR1400 Geometry

    Energy Technology Data Exchange (ETDEWEB)

    Oezdemir, Erdal; Moon, Kang Hoon; Oh, Seung Jong [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of); Kim, Yongdeog [KHNP-CRI, Daejeon (Korea, Republic of)

    2014-10-15

    Subchannel analysis plays an important role in evaluating safety-critical parameters such as minimum departure from nucleate boiling ratio (MDNBR), peak clad temperature and fuel centerline temperature. In this study, two different subchannel codes, VIPRE-01 (Versatile Internals and Component Program for Reactors: EPRI) and THALES (Thermal Hydraulic AnaLyzer for Enhanced Simulation of core), are examined. Two transient cases for which the MDNBR result plays an important role are selected for analysis with the THALES and VIPRE-01 subchannel codes. To obtain comparable results, the same core geometry, fuel parameters, correlations and models are selected for each code. The MDNBR results from the simulations by both codes agree with each other with negligible difference, whereas simulations conducted with the conduction model enabled in VIPRE-01 show a significant difference from the results of THALES.
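The figure of merit at the centre of such comparisons, the DNBR, is the ratio of the critical heat flux predicted by a correlation to the local heat flux in each node, and MDNBR is its minimum over the core. A schematic scan with invented flux values (no actual CHF correlation is implemented here):

```python
# Schematic MDNBR scan: DNBR = critical heat flux / local heat flux per
# axial node; MDNBR is the minimum over all nodes. Values are invented.
def mdnbr(chf, local_flux):
    dnbrs = [c / q for c, q in zip(chf, local_flux)]
    low = min(dnbrs)
    return low, dnbrs.index(low)

chf = [3.2e6, 3.0e6, 2.8e6, 2.9e6]         # W/m^2, from a CHF correlation
local_flux = [1.1e6, 1.6e6, 2.0e6, 1.4e6]  # W/m^2, axial power profile

value, node = mdnbr(chf, local_flux)  # minimum occurs at the hottest node
```

A conduction model changes the predicted local heat flux distribution, which is exactly why enabling it in one code but not the other moves the MDNBR, as the abstract reports.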

  18. Comparison of ICD code-based diagnosis of obesity with measured obesity in children and the implications for health care cost estimates

    OpenAIRE

    Kuhle Stefan; Kirk Sara FL; Ohinmaa Arto; Veugelers Paul J

    2011-01-01

    Abstract Background Administrative health databases are a valuable research tool to assess health care utilization at the population level. However, their use in obesity research is limited due to the lack of data on body weight. A potential workaround is to use the ICD code of obesity to identify obese individuals. The objective of the current study was to investigate the sensitivity and specificity of an ICD code-based diagnosis of obesity from administrative health data relative to the gold s...
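Sensitivity and specificity of a code-based diagnosis against a measured gold standard come straight from the 2×2 table. A sketch with invented counts (not the study's data):

```python
# Sensitivity/specificity of an ICD-coded diagnosis vs. a measured gold
# standard, from a 2x2 table. Counts are invented for illustration.
def diagnostics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # coded positive among truly obese
    specificity = tn / (tn + fp)   # coded negative among non-obese
    return sensitivity, specificity

sens, spec = diagnostics(tp=30, fp=10, fn=170, tn=790)
# A low sensitivity like this is why code-based prevalence estimates
# tend to understate measured obesity, biasing cost estimates downward.
```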

  19. The NIMROD Code

    Science.gov (United States)

    Schnack, D. D.; Glasser, A. H.

    1996-11-01

    NIMROD is a new code system that is being developed for the analysis of modern fusion experiments. It is being designed from the beginning to make the maximum use of massively parallel computer architectures and computer graphics. The NIMROD physics kernel solves the three-dimensional, time-dependent two-fluid equations with neo-classical effects in toroidal geometry of arbitrary poloidal cross section. The NIMROD system also includes a pre-processor, a grid generator, and a post processor. User interaction with NIMROD is facilitated by a modern graphical user interface (GUI). The NIMROD project is using Quality Function Deployment (QFD) team management techniques to minimize re-engineering and reduce code development time. This paper gives an overview of the NIMROD project. Operation of the GUI is demonstrated, and the first results from the physics kernel are given.

  20. Comparison of 3 absolute gravimeters based on different methods for the e-MASS project

    CERN Document Server

    Louchet-Chauvet, A; Bodart, Q; Landragin, A; Santos, F Pereira Dos; Baumann, H; D'Agostino, G; Origlia, C

    2010-01-01

    We report on the comparison between three absolute gravimeters that took place in April 2010 at Laboratoire National de Métrologie et d'Essais. The three instruments (FG5#209 from METAS, Switzerland, IMGC-02 from INRIM, Italy, and CAG from LNE-SYRTE, France) rely on different methods: optical and atomic interferometry. We discuss their differences as well as their similarities. We compare their measurements of the gravitational acceleration in 4 points of the same pillar, in the perspective of an absolute determination of g for a watt balance experiment.

  1. Application of QR Two-dimensional Code in International Project Logistics%QR二维码在国际工程项目物流的应用初探

    Institute of Scientific and Technical Information of China (English)

    朱艳平

    2016-01-01

    Aiming at the problems encountered in international engineering projects regarding the tracking and management of logistics information, this paper discusses the requirements analysis and then establishes the element design, generation, and usage methods and processes of the QR two-dimensional code in international engineering project logistics management. Finally, based on an actual project case, this paper proposes operational plans and suggestions for the application of two-dimensional bar code technology in international project logistics implementation and management.%针对国际工程项目在物流信息化追踪和管理中遇到的问题,文中论述了需求分析,构建了应用于国际工程项目物流的QR二维码的要素设计、生成及使用方法和流程,结合参与管理项目的实际案例,为二维码技术在国际工程项目物流实施和管理中的应用提出了具有可操作性的方案和建议。
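A sketch of the kind of element design the paper describes: a compact, machine-parsable payload for one package label, with a truncated hash for integrity checking at scan time. All field names are invented for illustration; an off-the-shelf library (e.g. the Python `qrcode` package) would render the resulting string as the actual QR image:

```python
# Hypothetical QR payload design for a project-logistics package label:
# compact JSON body plus a short SHA-256 digest for integrity.
import hashlib
import json

def make_label_payload(project_id, package_no, po_number, weight_kg):
    """Build the string a package-label QR code would encode."""
    record = {
        "prj": project_id, "pkg": package_no,
        "po": po_number, "kg": weight_kg,
    }
    body = json.dumps(record, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(body.encode()).hexdigest()[:8]
    return f"{body}|{digest}"

def verify_label_payload(payload):
    """Check that a scanned payload has not been altered."""
    body, _, digest = payload.rpartition("|")
    return hashlib.sha256(body.encode()).hexdigest()[:8] == digest
```

Keeping the payload short matters because QR error-correction overhead grows with content length, and site scanning conditions for logistics labels are often poor.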

  2. COMPARISON OF THREE METHODS TO PROJECT FUTURE BASELINE CARBON EMISSIONS IN TEMPERATE RAINFOREST, CURINANCO, CHILE

    Energy Technology Data Exchange (ETDEWEB)

    Patrick Gonzalez; Antonio Lara; Jorge Gayoso; Eduardo Neira; Patricio Romero; Leonardo Sotomayor

    2005-07-14

    Deforestation of temperate rainforests in Chile has decreased the provision of ecosystem services, including watershed protection, biodiversity conservation, and carbon sequestration. Forest conservation can restore those ecosystem services. Greenhouse gas policies that offer financing for the carbon emissions avoided by preventing deforestation require a projection of future baseline carbon emissions for an area if no forest conservation occurs. For a proposed 570 km² conservation area in temperate rainforest around the rural community of Curinanco, Chile, we compared three methods to project future baseline carbon emissions: extrapolation from Landsat observations, Geomod, and Forest Restoration Carbon Analysis (FRCA). Analyses of forest inventory and Landsat remote sensing data show 1986-1999 net deforestation of 1900 ha in the analysis area, proceeding at a rate of 0.0003 y⁻¹. The gross rate of loss of closed natural forest was 0.042 y⁻¹. In the period 1986-1999, closed natural forest decreased from 20,000 ha to 11,000 ha, with timber companies clearing natural forest to establish plantations of non-native species. Analyses of previous field measurements of species-specific forest biomass, tree allometry, and the carbon content of vegetation show that the dominant native forest type, broadleaf evergreen (bosque siempreverde), contains 370 ± 170 t ha⁻¹ carbon, compared to the carbon density of non-native Pinus radiata plantations of 240 ± 60 t ha⁻¹. The 1986-1999 conversion of closed broadleaf evergreen forest to open broadleaf evergreen forest, Pinus radiata plantations, shrublands, grasslands, urban areas, and bare ground decreased the carbon density from 370 ± 170 t ha⁻¹ carbon to an average of 100 t ha⁻¹ (maximum 160 t ha⁻¹, minimum 50 t ha⁻¹). Consequently, the conversion released 1.1 million t carbon. These analyses of forest inventory and Landsat remote sensing data provided the data to

  3. nIFTy Cosmology: Galaxy/halo mock catalogue comparison project on clustering statistics

    CERN Document Server

    Chuang, Chia-Hsun; Prada, Francisco; Munari, Emiliano; Avila, Santiago; Izard, Albert; Kitaura, Francisco-Shu; Manera, Marc; Monaco, Pierluigi; Murray, Steven; Knebe, Alexander; Scoccola, Claudia G; Yepes, Gustavo; Garcia-Bellido, Juan; Marin, Felipe A; Muller, Volker; Skibba, Ramin; Crocce, Martin; Fosalba, Pablo; Gottlober, Stefan; Klypin, Anatoly A; Power, Chris; Tao, Charling; Turchaninov, Victor

    2014-01-01

    We present a comparison of major methodologies of fast generating mock halo or galaxy catalogues. The comparison is done for two-point and the three-point clustering statistics. The reference catalogues are drawn from the BigMultiDark N-body simulation. Both friends-of-friends (including distinct halos only) and spherical overdensity (including distinct halos and subhalos) catalogues have been used with the typical number density of large-volume galaxy surveys. We demonstrate that a proper biasing model is essential for reproducing the power spectrum at quasilinear and even smaller scales. With respect to various clustering statistics a methodology based on perturbation theory and a realistic biasing model leads to very good agreement with N-body simulations. However, for the quadrupole of the correlation function or the power spectrum, only the method based on semi-N-body simulation could reach high accuracy (1% level) at small scales, i.e., k > 0.15 h/Mpc. For those methods that only produce distinct haloes, a ...
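The two-point statistics at the heart of the comparison can be illustrated with a minimal pair-count estimator of the correlation function; the toy below works in a 1D periodic box with the natural estimator DD/RR − 1, whereas real analyses use 3D estimators such as Landy-Szalay on much larger catalogues:

```python
# Toy 1D two-point correlation function via pair counts in a periodic box.
# xi(r) = DD(r)/RR(r) - 1, with RR taken analytically for a uniform field.
def xi(positions, box, bin_edges):
    n = len(positions)
    counts = [0] * (len(bin_edges) - 1)
    for i in range(n):
        for j in range(i + 1, n):
            d = abs(positions[i] - positions[j])
            d = min(d, box - d)  # periodic wrap (valid for d < box/2)
            for k in range(len(counts)):
                if bin_edges[k] <= d < bin_edges[k + 1]:
                    counts[k] += 1
                    break
    n_pairs = n * (n - 1) / 2
    # For a uniform periodic box, P(sep in [r, r+dr]) = 2*dr/box.
    return [c / (n_pairs * 2 * (bin_edges[k + 1] - bin_edges[k]) / box) - 1
            for k, c in enumerate(counts)]

# Three tight pairs of points: strong clustering at small separations.
r = xi([0.0, 0.05, 3.0, 3.05, 6.0, 6.05], 10.0, [0.0, 0.1, 1.0])
```

A positive value in the first bin and −1 in the empty bin is the expected signature: excess pairs at small separations, none at intermediate ones.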

  4. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  5. Inter-comparison of statistical downscaling methods for projection of extreme precipitation in Europe

    DEFF Research Database (Denmark)

    Sunyer Pinya, Maria Antonia; Hundecha, Y.; Lawrence, D.;

    2015-01-01

    Information on extreme precipitation for future climate is needed to assess the changes in the frequency and intensity of flooding. The primary source of information in climate change impact studies is climate model projections. However, due to the coarse resolution and biases of these models...... be drawn regarding the differences between CFs and BC methods. The performance of the BC methods during the control period also depends on the catchment, but in most cases they represent an improvement compared to RCM outputs. Analysis of the variance in the ensemble of RCMs and SDMs indicates...

  6. Comparison of LASER and LED illumination for fiber optic fringe projection

    Science.gov (United States)

    Matthias, Steffen; Kästner, Markus; Reithmeier, Eduard

    2016-04-01

    The inspection of functional elements is a crucial part of modern production cycles. However, with higher integration of production machinery and products, the accessibility for measurement systems is more and more limited. A solution for this problem can be found in endoscopy techniques, which are able to transport the image information for optical measurement methods. In this paper, an optical inspection system based on the fringe projection profilometry technique is presented. The fiber-optic fringe projection system uses two high-resolution image fibers to connect a compact sensor head to the pattern generation and camera unit. In order to keep inspection times low, the system is developed with particular focus on fast projection times. This can be achieved by using a digital micro-mirror device, which is capable of projecting grey-scale patterns at a rate of more than 10 images per second. However, due to the low numerical aperture of the optical fibers, a limiting factor for the pattern rate is the illumination path of the pattern generator. Two different designs of the illumination path are presented, which are based on a LASER light source as well as a LED light source. Due to low beam divergence and high intensities LASERs are well suited for fiber coupling. Unfortunately, the coherent property of the light has negative effects in certain measurement applications, as interference patterns, the so called speckle, appear on rough surfaces. Although speckle reducing methods are employed in the LASER beam path, the emergence of interference cannot be prevented completely. As an alternative, an illumination path based on a LED light source is demonstrated. To compare the effects of the speckle, based on measurements on a planar calibration standard both designs are compared in terms of phase noise, which is directly related to the noise in the reconstructed 3-D point data. Additionally, optical power measurements of both methods are compared to give an
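
The measurement principle behind such fringe projection systems is phase-shifting profilometry, where phase noise in the fringe images propagates directly into the reconstructed 3-D points. As an illustration only (a textbook four-step algorithm with synthetic values, not the authors' implementation):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images, each shifted by 90 degrees."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic fringe intensities I_k = A + B*cos(phi + k*pi/2), k = 0..3
phi_true = np.linspace(0.0, np.pi / 3, 5)   # sample phase values (rad)
A, B = 0.5, 0.4                             # offset and modulation
frames = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_est = four_step_phase(*frames)          # recovers phi_true
```

Intensity noise (e.g. from residual speckle under laser illumination) perturbs the arctangent arguments, which is why the paper compares the two light sources in terms of phase noise.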

  7. COMPARISON BETWEEN CHINESE CODE AND AMERICAN CODE IN SHEAR-TORSION STRENGTH OF RC MEMBERS

    Institute of Scientific and Technical Information of China (English)

    鲁懿虬; 黄靓

    2012-01-01

    Shear-torsion strength calculation methods and results for RC members under the Chinese code (GB 50010-2002) are compared with those under the American code (ACI 318-08), and the reliability levels of the two codes are analysed. The results show that the trilinear model used in the Chinese code to account for the interaction between the concrete shear and torsion capacities expands the shear-torsion envelope, overestimating the concrete contribution and reducing the safety margin. The American code treats this interaction more conservatively, so its concrete-contribution design is safer; it also requires more reinforcement, which essentially satisfies the strength demand of the quarter-circle interaction model, whereas reinforcement calculated by the Chinese code does not. The reliability index of the American code is higher than that of the Chinese code, and the reliability level of the Chinese code does not meet the requirements of the Unified Standard for Reliability Design of Building Structures (GB 50068-2001).

  8. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe...... the codes succinctly using Gröbner bases....

  9. Development and comparison of projection and image space 3D nodule insertion techniques

    Science.gov (United States)

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Samei, Ehsan

    2016-04-01

    This study aimed to develop and compare two methods of inserting computerized virtual lesions into CT datasets. 24 physical (synthetic) nodules of three sizes and four morphologies were inserted into an anthropomorphic chest phantom (LUNGMAN, Kyoto Kagaku). The phantom was scanned (Somatom Definition Flash, Siemens Healthcare) with and without nodules present, and images were reconstructed with filtered back projection and iterative reconstruction (SAFIRE) at 0.6 mm slice thickness using a standard thoracic CT protocol at multiple dose settings. Virtual 3D CAD models based on the physical nodules were virtually inserted (accounting for the system MTF) into the nodule-free CT data using two techniques: projection-based and image-based insertion. Nodule volumes were estimated using a commercial segmentation tool (iNtuition, TeraRecon, Inc.). Differences were tested using paired t-tests and R² goodness of fit between the virtually and physically inserted nodules. Both insertion techniques resulted in nodule volumes very similar to the real nodules (<3% difference), and in most cases the differences were not statistically significant. Also, R² values were all >0.97 for both insertion techniques. These data imply that the techniques can confidently be used to insert virtual nodules into CT datasets, and can be instrumental in building hybrid CT datasets composed of patient images with virtually inserted nodules.
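
The volume comparison described (paired t-tests and R² goodness of fit between physically and virtually inserted nodules) can be sketched with synthetic numbers. The data below are invented for illustration, and the paired t-statistic is computed directly rather than with a stats package:

```python
import numpy as np

def paired_t_stat(a, b):
    """Paired t-statistic: mean difference over its standard error."""
    d = a - b
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

def r_squared(x, y):
    """Squared Pearson correlation, a goodness-of-fit measure for
    regressing virtual volumes against physical ones."""
    return np.corrcoef(x, y)[0, 1] ** 2

rng = np.random.default_rng(0)
physical = rng.uniform(50.0, 500.0, 24)           # 24 nodule volumes, mm^3
virtual = physical * rng.normal(1.0, 0.01, 24)    # ~1% volume discrepancy

t_stat = paired_t_stat(physical, virtual)
r2 = r_squared(physical, virtual)
```

With only ~1% relative discrepancy, R² lands very close to 1, mirroring the >0.97 values reported.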

  10. Projected current density comparison in tDCS block and smooth FE modeling.

    Science.gov (United States)

    Indahlastari, Aprinda; Chauhan, Munish; Sadleir, Rosalind J

    2016-08-01

    Current density distributions and projected current densities from a transcranial direct current stimulation (tDCS) forward model of a human head were compared between two modeling pipelines: block and smooth. The block model was constructed directly at MRI voxel resolution and simulated in C. The smooth models underwent boundary smoothing with recursive Gaussian filters and were simulated in COMSOL. Three smoothing levels were applied to determine their effects on current density distributions relative to the block models. Median current density percentage differences were calculated in the anterior superior temporal gyrus (ASTG), hippocampus (HIP), inferior frontal gyrus (IFG), occipital lobes (OCC) and precentral gyrus (PRC) and normalized against a baseline value. A maximum of +20% difference in median current density was found for three standard electrode montages: F3-RS, T7-T8 and Cz-Oz. Furthermore, median current density percentage differences in each montage's target brain structures were within +7%. Higher levels of smoothing increased the median current density percentage differences in the T7-T8 and Cz-Oz target structures. However, while demonstrating similar trends in each montage, additional smoothing levels showed no clear relationship between smoothing effects and calculated median current density in the five cortical structures. Finally, the relative L2 error in the reconstructed projected current density was 17% for the block pipeline and 21% for the smooth pipeline. Overall, a block model workflow may be a more attractive alternative for simulating tDCS because it involves a shorter modeling time and is independent of commercial modeling platforms.

  11. Cities in transcontinental context: A comparison of mega urban projects in Shanghai and Belgrade

    Directory of Open Access Journals (Sweden)

    Waley Paul

    2013-01-01

    Full Text Available This study of urban developments in Belgrade and Shanghai is set in the context of comparative urban research. It presents two ostensibly contrasting cities and briefly examines urban development patterns in China and Serbia before focusing more specifically on mega urban projects in the two cities - Pudong and Hongqiao in Shanghai contrasted with New Belgrade. While the historical genesis of the Chinese and Serbian projects differs markedly, together they provide complementary examples of contemporary entrepreneurial urban development in divergent settings. China and Serbia share a heritage of state ownership of urban land, and this characteristic is still very much a feature underpinning development in Shanghai and other Chinese cities, as well as in New Belgrade. In both territories, state ownership of land has contributed to a form of urban development which - it is argued in this paper - can best be seen as state-based but market-led. The comparative study that this work initiates will, it is hoped, contribute to an understanding of contextual change in the two world regions of East Europe and East Asia.

  12. A COMPARISON OF "TRADITIONAL LECTURE" AND "LECTURE ALONG WITH FILMSTRIP PROJECTION"

    Directory of Open Access Journals (Sweden)

    S.Tahvildari

    1986-12-01

    Full Text Available In this study, for the first time in Iran, the application and effectiveness of two educational methods, "traditional lecture" and "lecture along with filmstrip projection", on the personal health knowledge of third-grade students at girls' guidance schools in the 7th educational district of Tehran were compared. An experimental design with a control group was chosen to provide a sounder basis for identifying causal relationships, and to account for both the effects of the primary measurement and the effects of simultaneous factors. The results showed that the "lecture along with filmstrip projection" method was significantly more effective than the "traditional lecture" method in increasing students' health knowledge of personal health content. The "traditional lecture" method, in turn, had a significantly greater effect on students' health knowledge than no health education at all (the control group).

  13. Imaging through turbid media via sparse representation: imaging quality comparison of three projection matrices

    Science.gov (United States)

    Shao, Xiaopeng; Li, Huijuan; Wu, Tengfei; Dai, Weijia; Bi, Xiangli

    2015-05-01

    Incident light is scattered by refractive-index inhomogeneities in many materials, which greatly reduces imaging depth and degrades imaging quality. Many methods have been presented in recent years to address this problem and realize imaging through a highly scattering medium, such as wavefront modulation and reconstruction techniques. An imaging method based on compressed sensing (CS) theory can decrease the computational complexity because it does not require the whole speckle pattern for reconstruction. One key premise of this method is that the object is sparse or admits a sparse representation. However, choosing a proper projection matrix is very important to the imaging quality. In this paper, we show that the transmission matrix (TM) of a scattering medium obeys a circular Gaussian distribution, which makes it possible to use a scattering medium as the measurement matrix in CS theory. To verify the performance of this method, a whole optical system is simulated. Various projection matrices are introduced to sparsify the object, including the fast Fourier transform (FFT) basis, the discrete cosine transform (DCT) basis and the discrete wavelet transform (DWT) basis, and their imaging performances are compared comprehensively. Simulation results show that, for most targets, applying the discrete wavelet transform basis yields an image of good quality. This work can be applied to biomedical imaging and used to develop real-time imaging through highly scattering media.
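
As a minimal illustration of the CS recovery scheme the abstract describes, the sketch below uses a Gaussian random matrix as a stand-in for the medium's transmission matrix, an orthonormal DCT basis as the sparsifying projection, and orthogonal matching pursuit as the solver. The solver choice and all dimensions are assumptions of this demo, not taken from the paper:

```python
import numpy as np

def omp(theta, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse coefficient vector s
    from y = theta @ s by greedily selecting the most correlated columns."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(theta.T @ residual))))
        s_sub, *_ = np.linalg.lstsq(theta[:, support], y, rcond=None)
        residual = y - theta[:, support] @ s_sub
    s = np.zeros(theta.shape[1])
    s[support] = s_sub
    return s

rng = np.random.default_rng(1)
n, m, k = 128, 64, 5
# Orthonormal DCT-II matrix as the sparsifying basis (assumed choice)
j, i = np.meshgrid(np.arange(n), np.arange(n))
psi = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * i / (2 * n))
psi[0] /= np.sqrt(2.0)
# Gaussian measurement matrix standing in for the medium's TM
a = rng.normal(size=(m, n)) / np.sqrt(m)

s_true = np.zeros(n)
s_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
x = psi.T @ s_true            # object, sparse in the DCT domain
y = a @ x                     # m < n compressive measurements
s_rec = omp(a @ psi.T, y, k)  # recovered sparse coefficients
```

With noiseless measurements and m well above the sparsity level, OMP recovers the support exactly and the least-squares step returns the coefficients to machine precision.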

  14. Photolysis frequency measurement techniques: results of a comparison within the ACCENT project

    Directory of Open Access Journals (Sweden)

    K. C. Clemitshaw

    2008-09-01

    Full Text Available An intercomparison of different radiometric techniques measuring the atmospheric photolysis frequencies j(NO2), j(HCHO) and j(O1D) was carried out in a two-week field campaign in June 2005 at Jülich, Germany. Three double-monochromator-based spectroradiometers (DM-SR), three single-monochromator-based spectroradiometers with diode-array detectors (SM-SR) and seventeen filter radiometers (FR; ten j(NO2)-FR, seven j(O1D)-FR) took part in this comparison. For j(NO2), all spectroradiometer results agreed within ±3%. For j(HCHO), agreement was slightly poorer, between −8% and +4% of the DM-SR reference result. For the SM-SR, deviations were explained by poorer spectral resolutions and lower accuracies caused by decreased sensitivities of the photodiode arrays at wavelengths below 350 nm. For j(O1D), the results were more complex, within +8% and −4%, with increasing deviations towards larger solar zenith angles for the SM-SR. The direction and magnitude of the deviations depended on the technique of background determination. All j(NO2)-FR showed good linearity, with single calibration factors sufficient to convert from output voltages to j(NO2). Measurements were feasible until sunset, and comparison with previous calibrations showed good long-term stability. For the j(O1D)-FR, conversion from output voltages to j(O1D) required calibration factors and correction functions accounting for the influences of total ozone column and solar elevation. All instruments showed good linearity at photolysis frequencies exceeding about 10% of maximum values. At larger solar zenith angles, the agreement was non-uniform, with deviations explainable by insufficient correction functions. Comparison with previous calibrations for some j(O1D)-FR indicated

  15. nIFTy cosmology: Galaxy/halo mock catalogue comparison project on clustering statistics

    Science.gov (United States)

    Chuang, Chia-Hsun; Zhao, Cheng; Prada, Francisco; Munari, Emiliano; Avila, Santiago; Izard, Albert; Kitaura, Francisco-Shu; Manera, Marc; Monaco, Pierluigi; Murray, Steven; Knebe, Alexander; Scóccola, Claudia G.; Yepes, Gustavo; Garcia-Bellido, Juan; Marín, Felipe A.; Müller, Volker; Skibba, Ramin; Crocce, Martin; Fosalba, Pablo; Gottlöber, Stefan; Klypin, Anatoly A.; Power, Chris; Tao, Charling; Turchaninov, Victor

    2015-09-01

    We present a comparison of major methodologies for fast generation of mock halo or galaxy catalogues. The comparison is done for two-point (power spectrum and two-point correlation function in real and redshift space) and three-point clustering statistics (bispectrum and three-point correlation function). The reference catalogues are drawn from the BigMultiDark N-body simulation. Both friends-of-friends (including distinct haloes only) and spherical overdensity (including distinct haloes and subhaloes) catalogues have been used, with the typical number density of large-volume galaxy surveys. We demonstrate that a proper biasing model is essential for reproducing the power spectrum at quasi-linear and even smaller scales. With respect to various clustering statistics, a methodology based on perturbation theory and a realistic biasing model leads to very good agreement with N-body simulations. However, for the quadrupole of the correlation function or the power spectrum, only the method based on semi-N-body simulation could reach high accuracy (1 per cent level) at small scales, i.e. k > 0.15 h Mpc⁻¹. Full N-body solutions will remain indispensable to produce reference catalogues. Nevertheless, we have demonstrated that the more efficient approximate solvers can reach a few per cent accuracy in terms of clustering statistics at the scales interesting for large-scale structure analysis. This makes them useful for massive production aimed at covariance studies, to scan large parameter spaces, and to estimate uncertainties in data analysis techniques, such as baryon acoustic oscillation reconstruction, redshift distortion measurements, etc.
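
The first statistic in that list, the power spectrum, can be illustrated with a minimal estimator: the spherically averaged P(k) of an overdensity field on a periodic grid. This is a generic sketch with arbitrary grid and binning choices, not the comparison project's measurement pipeline:

```python
import numpy as np

def power_spectrum(delta, box, n_bins=16):
    """Spherically averaged P(k) of an overdensity grid in a periodic box.
    Minimal estimator: no mass-assignment deconvolution, no shot noise."""
    n = delta.shape[0]
    delta_k = np.fft.rfftn(delta) * (box / n) ** 3     # approximate FT
    power = np.abs(delta_k) ** 2 / box ** 3            # P(k) per mode
    kf = 2 * np.pi / box                               # fundamental mode
    kx = np.fft.fftfreq(n, d=1.0 / n) * kf
    kz = np.fft.rfftfreq(n, d=1.0 / n) * kf
    kmag = np.sqrt(kx[:, None, None] ** 2 + kx[None, :, None] ** 2
                   + kz[None, None, :] ** 2)
    edges = np.linspace(0.0, kmag.max(), n_bins + 1)
    idx = np.digitize(kmag.ravel(), edges) - 1
    counts = np.bincount(idx, minlength=n_bins + 1)[:n_bins]
    sums = np.bincount(idx, weights=power.ravel(),
                       minlength=n_bins + 1)[:n_bins]
    return 0.5 * (edges[:-1] + edges[1:]), sums / np.maximum(counts, 1)

rng = np.random.default_rng(2)
n, box = 32, 1.0
delta = rng.normal(size=(n, n, n))       # white-noise test field
k_centers, pk = power_spectrum(delta, box)
# White noise has a flat spectrum with P = sigma^2 * (box / n)^3
```

The white-noise field is a convenient self-test: its expected spectrum is flat, so the binned estimate should scatter around sigma²(box/n)³.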

  16. Comparison and validation of the results of the AZNHEX v.1.0 code with the MCNP code simulating the core of a fast reactor cooled with sodium

    Energy Technology Data Exchange (ETDEWEB)

    Galicia A, J.; Francois L, J. L.; Bastida O, G. E. [UNAM, Facultad de Ingenieria, Departamento de Sistemas Energeticos, Ciudad Universitaria, 04510 Ciudad de Mexico (Mexico); Esquivel E, J., E-mail: blink19871@hotmail.com [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2016-09-15

    The development of the AZTLAN platform for the analysis and design of nuclear reactors is led by the Instituto Nacional de Investigaciones Nucleares (ININ) and divided into four working groups, which have well-defined activities to achieve significant progress in this project individually and jointly. Among these working groups is the users group, whose main task is to use the codes that make up the AZTLAN platform and provide feedback to the developers, so that the final versions of the codes are efficient and at the same time reliable and easy to understand. In this paper we present the results provided by the AZNHEX v.1.0 code when simulating the core of a sodium-cooled fast reactor at steady state. The validation of these results is a fundamental part of the platform development and a responsibility of the users group, so in this research the results obtained with AZNHEX are compared and analyzed against those provided by the Monte Carlo code MCNP-5, software used and recognized worldwide. A description of the methodology used with MCNP-5 for the calculation of the variables of interest is also presented, together with the differences obtained with respect to the values calculated with AZNHEX. (Author)

  17. Formal Verification of Interactions of the RTOS, Memory System, and Application Programs at the PowerPC 750 Binary Code Level Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In the proposed project, we will formally verify the correctness of the interaction between a Real-Time Operating System (RTOS) and user processes under various...

  18. Comparison and Validation of Four Arctic Sea Ice Thickness Products of the EC POLAR ICE Project

    Science.gov (United States)

    Melsheimer, C.; Makynen, M.; Rasmussen, T. S.; Rudjord, Ø.; Simila, M.; Solberg, R.; Walker, N. P.

    2016-08-01

    Sea ice thickness (SIT) is an important parameter for monitoring Arctic change, modelling and predicting weather and climate, and for navigation and offshore operations. However, SIT is still not very well monitored operationally. In the European Commission (EC) FP7 project "POLAR ICE", three novel SIT products based on different satellite data as well as SIT from a state-of-the-art ocean and sea ice model are fed into a common data handling and distribution system for end users. Each SIT product has different scopes and limitations as to, e.g., spatial and temporal resolution, ice thickness range and geographical domain. The aim of this study is to compare the four different SIT products with each other and with SIT in-situ measurements in order to better understand the differences and limitations, and possibly give recommendations on how to best profit from the synergy of the different data.

  19. Financial and clinical governance implications of clinical coding accuracy in neurosurgery: a multidisciplinary audit.

    Science.gov (United States)

    Haliasos, N; Rezajooi, K; O'Neill, K S; Van Dellen, J; Hudovsky, Anita; Nouraei, Sar

    2010-04-01

    Clinical coding is the translation of documented clinical activities during an admission into a codified language. Healthcare Resource Groupings (HRGs) are derived from coding data and are used to calculate payments to hospitals in England, Wales and Scotland and to conduct national audit and benchmarking exercises. Coding is an error-prone process, and an understanding of its accuracy within neurosurgery is critical for financial, organizational and clinical governance purposes. We undertook a multidisciplinary audit of neurosurgical clinical coding accuracy. Neurosurgeons trained in coding assessed the accuracy of 386 patient episodes. Where clinicians felt a coding error was present, the case was discussed with an experienced clinical coder. Concordance between the initial coder-only clinical coding and the final clinician-coder multidisciplinary coding was assessed. At least one coding error occurred in 71/386 patients (18.4%). There were 36 diagnosis errors and 93 procedure errors, and in 40 cases (10.4%) the initial HRG changed. Financially, this translated to £111 of lost revenue per patient episode, projected to £171,452 of annual loss to the department. 85% of all coding errors were due to an accumulation of coding changes that occurred only once in the whole data set. Neurosurgical clinical coding is error-prone. This is financially disadvantageous, and with coding data being the source of comparisons within and between departments, coding inaccuracies paint a distorted picture of departmental activity and subspecialism in audit and benchmarking. Clinical engagement improves accuracy and is encouraged within a clinical governance framework.
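
The reported figures can be cross-checked with simple arithmetic. Note that the implied annual episode count is an inference from the two stated monetary figures, not a number given in the abstract:

```python
# Figures stated in the abstract (GBP)
loss_per_episode = 111            # revenue loss per audited patient episode
projected_annual_loss = 171_452   # projected departmental annual loss

# Implied annual episode volume (an inference, ~1545 episodes)
implied_annual_episodes = projected_annual_loss / loss_per_episode

error_rate = 71 / 386             # episodes with at least one coding error
hrg_change_rate = 40 / 386        # episodes whose HRG changed
```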

  20. Inter-comparison of statistical downscaling methods for projection of extreme precipitation in Europe

    Science.gov (United States)

    Sunyer, M. A.; Hundecha, Y.; Lawrence, D.; Madsen, H.; Willems, P.; Martinkova, M.; Vormoor, K.; Bürger, G.; Hanel, M.; Kriaučiūnienė, J.; Loukas, A.; Osuch, M.; Yücel, I.

    2015-04-01

    Information on extreme precipitation for future climate is needed to assess the changes in the frequency and intensity of flooding. The primary source of information in climate change impact studies is climate model projections. However, due to the coarse resolution and biases of these models, they cannot be directly used in hydrological models. Hence, statistical downscaling is necessary to address climate change impacts at the catchment scale. This study compares eight statistical downscaling methods (SDMs) often used in climate change impact studies. Four methods are based on change factors (CFs), three are bias correction (BC) methods, and one is a perfect prognosis method. The eight methods are used to downscale precipitation output from 15 regional climate models (RCMs) from the ENSEMBLES project for 11 catchments in Europe. The overall results point to an increase in extreme precipitation in most catchments in both winter and summer. For individual catchments, the downscaled time series tend to agree on the direction of the change but differ in the magnitude. Differences between the SDMs vary between the catchments and depend on the season analysed. Similarly, general conclusions cannot be drawn regarding the differences between CFs and BC methods. The performance of the BC methods during the control period also depends on the catchment, but in most cases they represent an improvement compared to RCM outputs. Analysis of the variance in the ensemble of RCMs and SDMs indicates that at least 30% and up to approximately half of the total variance is derived from the SDMs. This study illustrates the large variability in the expected changes in extreme precipitation and highlights the need for considering an ensemble of both SDMs and climate models. Recommendations are provided for the selection of the most suitable SDMs to include in the analysis.
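
The two families of SDMs named above can be caricatured in a few lines: a multiplicative change factor rescales observations by the modelled change, while empirical quantile mapping bias-corrects model output through the observed distribution. The data and parameter choices below are synthetic illustrations, not the study's methods:

```python
import numpy as np

def change_factor(obs, rcm_ctrl, rcm_fut):
    """Multiplicative change factor: scale observed precipitation by the
    modelled relative change in the mean (simplest CF variant)."""
    return obs * rcm_fut.mean() / rcm_ctrl.mean()

def quantile_map(rcm_fut, rcm_ctrl, obs):
    """Empirical quantile mapping: replace each future value with the
    observed value at its quantile in the control-period distribution."""
    quantiles = np.interp(rcm_fut, np.sort(rcm_ctrl),
                          np.linspace(0.0, 1.0, rcm_ctrl.size))
    return np.quantile(obs, quantiles)

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 3.0, 1000)       # synthetic observed daily precip (mm)
rcm_ctrl = rng.gamma(2.0, 2.4, 1000)  # biased-dry control run
rcm_fut = rng.gamma(2.0, 2.9, 1000)   # wetter future run

cf_series = change_factor(obs, rcm_ctrl, rcm_fut)
bc_series = quantile_map(rcm_fut, rcm_ctrl, obs)
```

Both series end up wetter than the observations, but via different routes: CF keeps the observed day-to-day structure, while BC keeps the model's structure and corrects its distribution.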

  1. Atmospheric Rivers in VR-CESM: Historical Comparison and Future Projections

    Science.gov (United States)

    McClenny, E. E.; Ullrich, P. A.

    2016-12-01

    Atmospheric rivers (ARs) are responsible for most of the horizontal vapor transport from the tropics, and bring upwards of half the annual precipitation to midlatitude west coasts. The difference between a drought year and a wet year can come down to 1-2 ARs. Such few events transform an otherwise arid region into one which supports remarkable biodiversity, productive agriculture, and booming human populations. It follows that such a sensitive hydroclimate feature would demand priority in evaluating end-of-century climate runs, and indeed, the AR subfield has grown significantly over the last decade. However, results tend to vary wildly from study to study, raising questions about how to best approach ARs in models. The disparity may result from any number of issues, from the ability of a model to properly resolve a precipitating AR, to the formulation and application of an AR detection algorithm. ARs pose a unique problem in global climate models (GCMs) computationally and physically, because the GCM horizontal grid must be fine enough to resolve coastal mountain range topography and force orographic precipitation. Thus far, most end-of-century projections on ARs have been performed on models whose grids are too coarse to resolve mountain ranges, causing authors to draw conclusions on AR intensity from water vapor content or transport alone. The use of localized grid refinement in the Variable Resolution version of NCAR's Community Earth System Model (VR-CESM) has succeeded in resolving AR landfall. This study applies an integrated water vapor AR detection algorithm to historical and future projections from VR-CESM, with historical ARs validated against NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). Results on end-of-century precipitating AR frequency, intensity, and landfall location will be discussed.

  2. New code for equilibriums and quasiequilibrium initial data of compact objects. II. Convergence tests and comparisons of binary black hole initial data

    CERN Document Server

    Uryu, Koji; Grandclement, Philippe

    2012-01-01

    COCAL is a code for computing equilibriums or quasiequilibrium initial data of single or binary compact objects based on finite difference methods. We present the results of supplementary convergence tests of COCAL code using time symmetric binary black hole data (Brill-Lindquist solution). Then, we compare the initial data of binary black holes on the conformally flat spatial slice obtained from COCAL and KADATH, where KADATH is a library for solving a wide class of problems in theoretical physics including relativistic compact objects with spectral methods. Data calculated from the two codes converge nicely towards each other, for close as well as largely separated circular orbits of binary black holes. Finally, as an example, a sequence of equal mass binary black hole initial data with corotating spins is calculated and compared with data in the literature.

  3. SubHaloes going Notts: The SubHalo-Finder Comparison Project

    CERN Document Server

    Onions, Julian; Pearce, Frazer R; Muldrew, Stuart I; Lux, Hanni; Knollmann, Steffen R; Ascasibar, Yago; Behroozi, Peter; Elahi, Pascal; Han, Jiaxin; Maciejewski, Michal; Merchán, Manuel E; Neyrinck, Mark; Ruiz, Andrés N; Sgró, Mario A; Springel, Volker; Tweed, Dylan

    2012-01-01

    We present a detailed comparison of the substructure properties of a single Milky Way sized dark matter halo from the Aquarius suite at five different resolutions, as identified by a variety of different (sub-)halo finders for simulations of cosmic structure formation. These finders span a wide range of techniques and methodologies to extract and quantify substructures within a larger non-homogeneous background density (e.g. a host halo). This includes real-space, phase-space, velocity-space and time-space based finders, as well as finders employing a Voronoi tessellation, friends-of-friends techniques, or refined meshes as the starting point for locating substructure. A common post-processing pipeline was used to uniformly analyse the particle lists provided by each finder. We extract quantitative and comparable measures for the subhaloes, primarily focusing on mass and the peak of the rotation curve for this particular study. We find that all of the finders agree extremely well on the presence and location ...

  4. CMIP5 permafrost degradation projection:A comparison among different regions

    Science.gov (United States)

    Guo, Donglin; Wang, Huijun

    2016-05-01

    The considerable impact of permafrost degradation on hydrology and water resources, ecosystems, human engineering facilities, and climate change requires more in-depth studies, at finer spatial scales, to investigate the issue. In this study, regional differences in future permafrost changes are explored with respect to high-altitude and high-latitude regions, and to four countries, based on the surface frost index (SFI) model and multimodel, multiscenario data from the fifth phase of the Coupled Model Intercomparison Project (CMIP5). Results show the following: (1) Compared with seven other sets of driving data, Climatic Research Unit air temperature combined with Climate Forecast System Reanalysis snow data (CRU_CFSR) yields a permafrost extent with the least absolute area bias and was thus used in the simulation. The SFI model, driven by CRU_CFSR data climatology plus multimodel mean anomalies, produces a present-day (1986-2005) permafrost area of 15.45 × 10⁶ km², which compares reasonably with the observed 15.24 × 10⁶ km². (2) The high-altitude (Tibetan Plateau) permafrost area shows a larger percentage decrease than the high-latitude permafrost area. This indicates that, in terms of speed, high-altitude permafrost thaws faster than high-latitude permafrost, mainly due to its larger percentage sensitivity to rising air temperature, which is likely related to their thermal conditions. (3) Permafrost in China shows the fastest thaw, as reflected by the percentage trend in permafrost area, followed by the United States, Russia, and Canada. These discrepancies are mainly linked to different percentage sensitivities of the permafrost areas in these four countries to air temperature change. (4) In terms of the ensemble mean, permafrost areas in all regions are projected to decrease by the period 2080-2099. Under representative
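
Surface-frost-index models build on the frost number of Nelson and Outcalt, computed from freezing and thawing degree-day sums, with values above 0.5 conventionally indicating permafrost. The sketch below is that basic frost number only; the study's SFI formulation may differ in detail (e.g. its treatment of snow and surface effects):

```python
import math

def frost_index(ddf, ddt):
    """Frost number from annual freezing (ddf) and thawing (ddt)
    degree-day sums, both >= 0. Values above 0.5 suggest permafrost."""
    return math.sqrt(ddf) / (math.sqrt(ddf) + math.sqrt(ddt))

# Equal freezing and thawing sums sit exactly on the 0.5 threshold;
# a freeze-dominated climate lands above it, a thaw-dominated one below.
cold_site = frost_index(3000.0, 1000.0)   # > 0.5: permafrost likely
warm_site = frost_index(500.0, 2000.0)    # < 0.5: seasonal frost only
```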

  5. Coded Path Protection: Efficient Conversion of Sharing to Coding

    CERN Document Server

    Avci, Serhat Nazim

    2011-01-01

    Link failures in wide area networks are common and cause significant data losses. Mesh-based protection schemes offer high capacity efficiency but they are slow and require complex signaling. Additionally, real-time reconfiguration of a cross-connect threatens their transmission integrity. On the other hand, coding-based protection schemes are proactive. Therefore, they have higher restoration speed, lower signaling complexity, and higher transmission integrity. This paper introduces a coding-based protection scheme, named Coded Path Protection (CPP). In CPP, a backup copy of the primary data is encoded with other data streams, resulting in capacity savings. This paper presents an optimal and simple capacity placement and coding group formation algorithm. The algorithm converts the sharing structure of any solution of a Shared Path Protection (SPP) technique into a coding structure with minimum extra capacity. We conducted quantitative and qualitative comparisons of our technique with the SPP and another tec...
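
As a toy illustration of the general idea behind coding-based protection (not the CPP algorithm itself; the stream contents and their equal lengths are assumptions of this demo): the protection path carries the XOR of two primary streams, so the receiver can rebuild either stream after a single link failure from the parity and the surviving stream, with no reactive signaling:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length streams."""
    return bytes(x ^ y for x, y in zip(a, b))

d1 = b"primary stream one"
d2 = b"primary stream two"   # equal length assumed for simplicity
parity = xor_bytes(d1, d2)   # proactively sent on the protection path

# If the link carrying d1 fails, the receiver rebuilds it from the
# parity stream and the surviving stream d2.
recovered = xor_bytes(parity, d2)
```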

  6. Electronic medical records: recommendations for routine. Report of the eHID (Electronic Health Indicator Data Project). International comparisons of epidemiological outcomes: diabetes, heart disease and mental illness.

    NARCIS (Netherlands)

    Pringle, M.; Schellevis, F.G.; Elliott, C.; Verheij, R.A.; Fleming, D.M.

    2007-01-01

    Aim: It is believed that electronic medical records generated in a routine and disciplined manner by primary care doctors can potentially provide a very cost effective approach to disease monitoring. Part of the eHID project was concerned with a comparison of the actual epidemiological data that

  7. Comparison of Ice-Bank Actual Results Against Simulated Predicted Results in Carroll Refurbishment Project DKIT

    Directory of Open Access Journals (Sweden)

    Edel Donnelly

    2012-11-01

    Full Text Available This paper reviews the selection methods used in the design of an ice-bank thermal energy storage (TES) application in the Carroll’s building in Dundalk IT. The complexities of the interaction between the on- site wind turbine, existing campus load and the refurbished building meant that traditional calculation methods and programmes could not be used and specialist software had to be developed during the design process. The research reviews this tool against the actual results obtained from the operation in the building for one college term of full time use. The paper also examines the operation of the system in order to produce recommendations for its potential modification to improve its efficiency and utilisation. Simulation software is evaluated and maximum import capacity is minimised. Significant budget constraints limited the level of control and metering that could be provided for the project, and this paper demonstrates some investigative processes that were used to overcome the limitations on data availability.

  8. The PorGrow project: overall cross-national results, comparisons and implications.

    Science.gov (United States)

    Millstone, E; Lobstein, T

    2007-05-01

    European policymakers need more information on policy responses to obesity that stakeholders judge effective and acceptable. The Policy Options for Responding to the Growing Challenge of Obesity Research Project gathered such intelligence by interviewing key stakeholder groups in nine countries. Interviews used an innovative multi-criteria mapping (MCM) methodology that gathers quantitative and qualitative information on the stakeholders' perceptions and judgements. Aggregating across all participants, a comprehensive portfolio of policy measures, integrated into a coherent programme, would be well supported by broad coalitions of stakeholders. Those portfolios should include (i) measures to provide improved education in schools and to the general adult population; (ii) measures to improve access to and incentives for physical activity; (iii) measures to improve information about both foods and physical activity; and (iv) changes to the supply of and demand for foodstuffs. There was little support for fiscal measures and technological 'fixes'; they were judged ineffective and unacceptable. Significant differences were found across European regions, and across different stakeholder perspectives, but not across genders. There is a strong case for improved monitoring of body mass index levels, dietary habits and physical activity. An MCM study can effectively cover several countries, rather than being confined to just one, and generate both national and cross-national policy analyses and proposals.

  9. Comparison of parabolic filtration methods for 3D filtered back projection in pulsed EPR imaging.

    Science.gov (United States)

    Qiao, Zhiwei; Redler, Gage; Epel, Boris; Halpern, Howard J

    2014-11-01

    Pulsed electron paramagnetic resonance imaging (pulse EPRI) is a robust method for noninvasively measuring local oxygen concentrations in vivo. For 3D tomographic EPRI, the most commonly used reconstruction algorithm is filtered back projection (FBP), in which the parabolic filtration process strongly influences image quality. In this work, we designed and compared 7 parabolic filtration methods to reconstruct both simulated and real phantoms. To evaluate these methods, we designed 3 error criteria and 1 spatial resolution criterion. It was determined that the 2 point derivative filtration method and the two-ramp-filter method have unavoidable negative effects, resulting in diminished spatial resolution and increased artifacts, respectively. For the noiseless phantom, the rectangular-window parabolic filtration method and the sinc-window parabolic filtration method were found to be optimal, providing high spatial resolution and small errors. In the presence of noise, the 3 point derivative method and the Hamming-window parabolic filtration method resulted in the best compromise between low image noise and high spatial resolution. The 3 point derivative method is faster than the Hamming-window parabolic filtration method, so we conclude that the 3 point derivative method is optimal for 3D FBP. Copyright © 2014. Published by Elsevier Inc.
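    The seven filtration methods themselves are not reproduced in the record; as a generic sketch of windowed parabolic (|f|^2) filtration for 3D FBP, one might apply the filter in the frequency domain to a single projection. The function below is a hypothetical illustration of the idea, not the authors' implementation:

```python
import numpy as np

def parabolic_filter(proj, window="hamming"):
    """Apply a parabolic (|f|^2) frequency response to one 1-D projection,
    optionally apodized with a Hamming taper. A generic sketch of windowed
    parabolic filtration for 3-D FBP, not the paper's specific methods."""
    f = np.fft.fftfreq(proj.size)          # normalized frequency per sample
    H = f ** 2                             # parabolic response
    if window == "hamming":
        H = H * (0.54 + 0.46 * np.cos(2.0 * np.pi * f))  # taper high freqs
    return np.real(np.fft.ifft(np.fft.fft(proj) * H))
```

    Passing any other `window` value leaves the response untapered (the rectangular-window case); stronger tapers suppress noise at the cost of spatial resolution, which is exactly the trade-off the abstract quantifies.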

  10. Verification of the NIKE3D structural analysis code by comparison against the analytic solution for a spherical cavity under a far-field uniaxial stress

    Energy Technology Data Exchange (ETDEWEB)

    Kansa, E.J.

    1989-01-01

    The original scope of this task was to simulate the stresses and displacements of a hard rock tunnel experimental design using a suitable three-dimensional finite element code. NIKE3D was selected as a suitable code for performing these primarily approximate, linearly elastic 3D analyses, but it required modifications to include initial stress, shear traction boundary condition, and excavation options. During the summer of 1988, such capabilities were installed in a special version of NIKE3D. Subsequently, we verified both LLNL's commonly used version of NIKE3D and our private modified version against the analytic solution for a spherical cavity in an elastic material deforming under a far-field uniaxial stress. We find the results produced by the unmodified and modified versions of NIKE3D to be in good agreement with the analytic solutions, except near the cavity, where the errors in the stress field are large. As can be expected from a code based on a displacement finite element formulation, the displacements are much more accurate than the stresses calculated from the 8-noded brick elements. To reduce these errors to acceptable levels, the grid must be refined further near the cavity wall. The level of grid refinement required to accurately simulate tunneling problems that do not have spatial symmetry in three dimensions using the current NIKE3D code is likely to exceed the memory capacity of the largest CRAY 1 computers at LLNL. 8 refs., 121 figs.

  11. Paired Comparison Survey Analyses Utilizing Rasch Methodology of the Relative Difficulty and Estimated Work Relative Value Units of CPT® Code 27279

    Science.gov (United States)

    Lorio, Morgan; Ferrara, Lisa

    2016-01-01

    Background: Minimally invasive sacroiliac joint arthrodesis (“MI SIJ fusion”) received a Category I CPT® code (27279) effective January 1, 2015 and was assigned a work relative value unit (“RVU”) of 9.03. The International Society for the Advancement of Spine Surgery (“ISASS”) conducted a study consisting of a Rasch analysis of two separate surveys of surgeons to assess the accuracy of the assigned work RVU. Methods: A survey was developed and sent to ninety-three ISASS surgeon committee members. Respondents were asked to compare CPT® 27279 to ten other comparator CPT® codes reflective of common spine surgeries. The survey presented each comparator CPT® code with its code descriptor as well as the description of CPT® 27279 and asked respondents to indicate whether CPT® 27279 was greater, equal, or less in terms of work effort than the comparator code. A second survey was sent to 557 U.S.-based spine surgeon members of ISASS and 241 spine surgeon members of the Society for Minimally Invasive Spine Surgery (“SMISS”). The design of the second survey mirrored that of the first survey except for the use of a broader set of comparator CPT® codes (27 vs. 10). Using the work RVUs of the comparator codes, a Rasch analysis was performed to estimate the relative difficulty of CPT® 27279, after which the work RVU of CPT® 27279 was estimated by regression analysis. Results: Twenty surgeons responded to the first survey and thirty-four surgeons responded to the second survey. The results of the regression analysis of the first survey indicate a work RVU for CPT® 27279 of 14.36 and the results of the regression analysis of the second survey indicate a work RVU for CPT® 27279 of 14.1. Conclusion: The Rasch analysis indicates that the current work RVU assigned to CPT® 27279 is undervalued at 9.03. Averaging the results of the regression analyses of the two surveys indicates a work RVU for CPT® 27279 of 14.23.
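    The survey data are not public, but the paired-comparison logic can be illustrated with a Bradley-Terry strength estimate (a close relative of the Rasch model for paired comparisons) followed by a regression of known comparator RVUs on estimated difficulty. All win counts, item indices, and RVU values below are invented:

```python
import math

def bradley_terry(wins, n_iter=500):
    """MM estimate of Bradley-Terry strengths; wins[i][j] is how often
    item i was judged the harder of pair (i, j)."""
    n = len(wins)
    p = [1.0] * n
    for _ in range(n_iter):
        for i in range(n):
            total_wins = sum(wins[i][j] for j in range(n) if j != i)
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            p[i] = total_wins / denom
        mean = sum(p) / n
        p = [x / mean for x in p]          # fix the arbitrary scale
    return p

# Item 0 plays the role of the new code; items 1-3 are comparators with
# known work RVUs (all numbers hypothetical).
wins = [[0, 8, 9, 7],
        [2, 0, 6, 4],
        [1, 4, 0, 3],
        [3, 6, 7, 0]]
strength = bradley_terry(wins)
known_rvu = {1: 10.0, 2: 7.5, 3: 12.0}

# Ordinary least squares: RVU as a linear function of log-strength
xs = [math.log(strength[i]) for i in known_rvu]
ys = list(known_rvu.values())
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
rvu_estimate = my + slope * (math.log(strength[0]) - mx)
```

    With consistent judgments, an item that "wins" most comparisons lands above its comparators on the fitted scale, which is the mechanism behind the study's 14.1-14.36 estimates.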

  12. Comparison of Fuel Temperature Coefficients of PWR UO{sub 2} Fuel from Monte Carlo Codes (MCNP6.1 and KENO6)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyung-O; Roh, Gyuhong; Lee, Byungchul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    The interaction probability of an incident neutron with nuclear fuel depends on the relative velocity between the neutron and the target nuclei. The Fuel Temperature Coefficient (FTC) is defined as the change of the Doppler effect with respect to a change in fuel temperature, with no other change such as moderator temperature, moderator density, etc. In this study, the FTCs for UO{sub 2} fuel widely used in PWRs were evaluated using the MCNP6.1 and KENO6 codes, both based on a Monte Carlo method. In addition, the latest neutron cross-sections (ENDF/B-VI and VII) were applied to analyze the effect of these data on the evaluation of the FTC, and the nuclear data used in the MCNP calculations were generated from the makxsf code. The calculation models used in the evaluations were based on a typical PWR UO{sub 2} lattice. As a result, there was a difference of about 300-400 pcm between the keff values at each enrichment, due to the differences in the codes and nuclear data used in the evaluations. The FTC became less negative with increasing uranium enrichment, following an asymptotic curve. However, an additional study is needed to investigate what factor causes differences of more than two standard deviations (2σ) among the FTCs in the partial enrichment region.
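    For two Monte Carlo runs at different fuel temperatures (all else fixed), the FTC reduces to a reactivity difference per kelvin; a minimal sketch, with invented keff values:

```python
def ftc_pcm_per_kelvin(k1, t1, k2, t2):
    """Fuel temperature coefficient from two multiplication factors at two
    fuel temperatures (K), via reactivity rho = (k - 1) / k, in pcm/K."""
    d_rho = (k2 - k1) / (k1 * k2)      # rho2 - rho1
    return d_rho / (t2 - t1) * 1.0e5   # convert to pcm

# Invented Doppler pair: keff falls slightly as the fuel heats up
alpha_f = ftc_pcm_per_kelvin(1.18500, 600.0, 1.18100, 900.0)
print(alpha_f)  # a small negative number (pcm/K), as expected for UO2
```

    Note that with keff differences of a few hundred pcm and Monte Carlo uncertainties of tens of pcm, the 2σ spread the abstract mentions can easily dominate the derived coefficient.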

  13. European inter-comparison of Monte Carlo codes users for the uncertainty calculation of the kerma in air beside a caesium-137 source; Intercomparaison europeenne d'utilisateurs de codes monte carlo pour le calcul d'incertitudes sur le kerma dans l'air aupres d'une source de cesium-137

    Energy Technology Data Exchange (ETDEWEB)

    De Carlan, L.; Bordy, J.M.; Gouriou, J. [CEA Saclay, LIST, Laboratoire National Henri Becquerel, Laboratoire de Metrologie de la Dose 91 - Gif-sur-Yvette (France)

    2010-07-01

    Within the frame of the CONRAD European project (Coordination Network for Radiation Dosimetry), and more precisely within a work group paying attention to uncertainty assessment in computational dosimetry and aiming at comparing different approaches, the authors report the simulation of an irradiator containing a caesium 137 source to calculate the kerma in air as well as its uncertainty due to different parameters. They present the problem geometry, recall the studied issues (kerma uncertainty, influence of the capsule source, influence of the collimator, influence of the air volume surrounding the source), indicate the codes which have been used (MCNP, Fluka, Penelope, etc.), and discuss the results obtained for the first issue.

  14. Climate change and Arctic ecosystems: 2. Modeling, paleodata-model comparisons, and future projections

    Science.gov (United States)

    Kaplan, J.O.; Bigelow, N.H.; Prentice, I.C.; Harrison, S.P.; Bartlein, P.J.; Christensen, T.R.; Cramer, W.; Matveyeva, N.V.; McGuire, A.D.; Murray, D.F.; Razzhivin, V.Y.; Smith, B.; Walker, D. A.; Anderson, P.M.; Andreev, A.A.; Brubaker, L.B.; Edwards, M.E.; Lozhkin, A.V.

    2003-01-01

    Large variations in the composition, structure, and function of Arctic ecosystems are determined by climatic gradients, especially of growing-season warmth, soil moisture, and snow cover. A unified circumpolar classification recognizing five types of tundra was developed. The geographic distributions of vegetation types north of 55°N, including the position of the forest limit and the distributions of the tundra types, could be predicted from climatology using a small set of plant functional types embedded in the biogeochemistry-biogeography model BIOME4. Several palaeoclimate simulations for the last glacial maximum (LGM) and mid-Holocene were used to explore the possibility of simulating past vegetation patterns, which are independently known based on pollen data. The broad outlines of observed changes in vegetation were captured. LGM simulations showed the major reduction of forest, the great extension of graminoid and forb tundra, and the restriction of low- and high-shrub tundra (although not all models produced sufficiently dry conditions to mimic the full observed change). Mid-Holocene simulations reproduced the contrast between northward forest extension in western and central Siberia and stability of the forest limit in Beringia. Projection of the effect of a continued exponential increase in atmospheric CO2 concentration, based on a transient ocean-atmosphere simulation including sulfate aerosol effects, suggests a potential for larger changes in Arctic ecosystems during the 21st century than have occurred between mid-Holocene and present. Simulated physiological effects of the CO2 increase (to > 700 ppm) at high latitudes were slight compared with the effects of the change in climate.

  15. Comparison of adaptive statistical iterative and filtered back projection reconstruction techniques in brain CT

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Qingguo, E-mail: renqg83@163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Dewan, Sheilesh Kumar, E-mail: sheilesh_d1@hotmail.com [Department of Geriatrics, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Li, Ming, E-mail: minli77@163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Li, Jianying, E-mail: Jianying.Li@med.ge.com [CT Imaging Research Center, GE Healthcare China, Beijing (China); Mao, Dingbiao, E-mail: maodingbiao74@163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Wang, Zhenglei, E-mail: Williswang_doc@yahoo.com.cn [Department of Radiology, Shanghai Electricity Hospital, Shanghai 200050 (China); Hua, Yanqing, E-mail: cjr.huayanqing@vip.163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China)

    2012-10-15

    Purpose: To compare image quality and visualization of normal structures and lesions in brain computed tomography (CT) with adaptive statistical iterative reconstruction (ASIR) and filtered back projection (FBP) reconstruction techniques at different X-ray tube current–time products. Materials and methods: In this IRB-approved prospective study, forty patients (nineteen men, twenty-one women; mean age 69.5 ± 11.2 years) underwent brain scans at different tube current–time products (300 and 200 mAs) on a 64-section multi-detector CT (GE, Discovery CT750 HD). Images were reconstructed with FBP and four levels of ASIR-FBP blending. Two radiologists (both more experienced in chronic cerebrovascular disease than in neoplastic disease, as our hospital is renowned for its geriatric medicine department; cerebral tumors were therefore not included in this study) assessed all the reconstructed images for visibility of normal structures, lesion conspicuity, image contrast and diagnostic confidence in a blinded and randomized manner. Volume CT dose index (CTDI{sub vol}) and dose-length product (DLP) were recorded. All the data were analyzed using SPSS 13.0 statistical analysis software. Results: There was no statistically significant difference between the image quality at 200 mAs with the 50% ASIR blending technique and at 300 mAs with the FBP technique (p > .05), whereas a statistically significant difference (p < .05) was found between the image quality at 200 mAs with FBP and at 300 mAs with FBP. Conclusion: ASIR provided the same image quality and diagnostic ability in brain imaging with a greater than 30% dose reduction compared with the FBP reconstruction technique.

  16. '95 computer system operation project

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Taek; Lee, Hae Cho; Park, Soo Jin; Kim, Hee Kyung; Lee, Ho Yeun; Lee, Sung Kyu; Choi, Mi Kyung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-12-01

    This report describes the overall project work related to the operation of mainframe computers, the management of nuclear computer codes, and the nuclear computer code conversion project. The results of the project are as follows: 1. The operation and maintenance of the three mainframe computers and other utilities. 2. The management of the nuclear computer codes. 3. The completion of the computer code conversion project. 26 tabs., 5 figs., 17 refs. (Author)

  17. Comparison of Septocolumellar Suture (SCS) and Lateral Crural Overlay (LCO) methods on nasal tip projection and rotation in rhinoplasty

    Directory of Open Access Journals (Sweden)

    Khorasani Gh

    2010-11-01

    Full Text Available Background: Proper nasal tip control is a difficult step in rhinoplasty. The aim of this study was to compare the effects of two cartilage modifying methods, Septocolumellar Suture (SCS) and Lateral Crural Overlay (LCO), on nasal tip projection and rotation. Methods: In a single-blinded clinical trial, 36 patients who were scheduled for nasal tip deprojection were enrolled. A profile photograph of the face was taken from all the patients before and three months after the operation. Nasofacial angles and the TP:Ln ratio for assessing nasal tip projection, and the tip columellar and nasolabial angles for nasal tip rotation assessment, were measured by computer software. The patients were randomly divided into two groups that underwent open rhinoplasty. Results: Both the LCO and SCS methods were accompanied by a significant reduction in nasofacial angle and TP:Ln ratio, and by increased nasolabial and rotation angles in comparison to preoperative values. The use of the LCO method in comparison to SCS resulted in a greater increase in the nasolabial angle (11.83±3.05 vs. 4.56±1.62 degrees) and rotation angle (11.44±3.22 vs. 1.56±1.04 degrees) and in a greater reduction in the post-operative TP:Ln ratio in comparison to preoperative measures

  18. i-Review: Sharing Code

    Directory of Open Access Journals (Sweden)

    Jonas Kubilius

    2014-02-01

    Full Text Available Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks a focus on researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.

  19. NOVEL BIPHASE CODE -INTEGRATED SIDELOBE SUPPRESSION CODE

    Institute of Scientific and Technical Information of China (English)

    Wang Feixue; Ou Gang; Zhuang Zhaowen

    2004-01-01

    A kind of novel binary phase code named the sidelobe suppression code is proposed in this paper. It is defined as the code whose corresponding optimal sidelobe suppression filter outputs the minimum sidelobes. It is shown that there do exist sidelobe suppression codes better than the conventional optimal codes, the Barker codes. For example, the sidelobe suppression code of length 11 with a filter of length 39 achieves a sidelobe level up to 17 dB better than that of the Barker code with the same code length and filter length.
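    The paper's optimal filter construction is not reproduced in the record, but the general idea of a mismatched sidelobe suppression filter can be sketched as a least-squares delta-matching problem for the length-11 Barker code with a length-39 filter (a standard textbook formulation, not necessarily the authors' exact one):

```python
import numpy as np

barker11 = np.array([1, 1, 1, -1, -1, -1, 1, -1, -1, 1, -1], dtype=float)

def ls_sidelobe_filter(code, filt_len):
    """Least-squares mismatched filter: choose h so that code * h (full
    convolution) is as close as possible to a unit spike, minimizing the
    integrated sidelobe energy."""
    n, m = len(code), len(code) + filt_len - 1
    A = np.zeros((m, filt_len))
    for k in range(filt_len):
        A[k:k + n, k] = code               # columns are shifted code copies
    target = np.zeros(m)
    target[m // 2] = 1.0                   # mainlobe in the middle
    h, *_ = np.linalg.lstsq(A, target, rcond=None)
    return h

h = ls_sidelobe_filter(barker11, 39)
out = np.convolve(barker11, h)
peak = np.abs(out).max()
worst_sidelobe = np.sort(np.abs(out))[-2]
```

    For reference, the matched filter for Barker-11 has peak sidelobes of 1/11 of the mainlobe (about -20.8 dB); a least-squares filter of this kind pushes them substantially lower, which is the kind of improvement the abstract quantifies.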

  20. Comparison of caprock pore networks which potentially will be impacted by carbon sequestration projects.

    Energy Technology Data Exchange (ETDEWEB)

    McCray, John (Colorado School of Mines); Navarre-Sitchler, Alexis (Colorado School of Mines); Mouzakis, Katherine (Colorado School of Mines); Heath, Jason E.; Dewers, Thomas A.; Rother, Gernot (Oak Ridge National Laboratory)

    2010-12-01

    Injection of CO2 into underground rock formations can reduce atmospheric CO2 emissions. Caprocks present above potential storage formations are the main structural trap inhibiting CO2 from leaking into overlying aquifers or back to the Earth's surface. Dissolution and precipitation of caprock minerals resulting from reaction with CO2 may alter the pore network where many pores are of the micrometer to nanometer scale, thus altering the structural trapping potential of the caprock. However, the distribution, geometry and volume of pores at these scales are poorly characterized. In order to evaluate the overall risk of leakage of CO2 from storage formations, a first critical step is understanding the distribution and shape of pores in a variety of different caprocks. As the caprock is often composed of mudstones, we analyzed samples from several mudstone formations with small angle neutron scattering (SANS) and high-resolution transmission electron microscopy (TEM) imaging to compare the pore networks. Mudstones were chosen from current or potential sites for carbon sequestration projects including the Marine Tuscaloosa Group, the Lower Tuscaloosa Group, the upper and lower shale members of the Kirtland Formation, and the Pennsylvanian Gothic shale. Expandable clay contents ranged from 10% to approximately 40% in the Gothic shale and Kirtland Formation, respectively. During SANS, neutrons effectively scatter from interfaces between materials with differing scattering length density (i.e., minerals and pores). The intensity of scattered neutrons, I(Q), where Q is the scattering vector, gives information about the volume and arrangement of pores in the sample. The slope of the scattering data when plotted as log I(Q) vs. log Q provides information about the fractality or geometry of the pore network. On such plots, slopes from -2 to -3 represent mass fractals while slopes from -3 to -4 represent surface fractals. Scattering data showed surface fractal dimensions

  1. Kinematic Sunyaev-Zel'dovich effect with projected fields. II. Prospects, challenges, and comparison with simulations

    Science.gov (United States)

    Ferraro, Simone; Hill, J. Colin; Battaglia, Nick; Liu, Jia; Spergel, David N.

    2016-12-01

    The kinematic Sunyaev-Zel'dovich (kSZ) signal is a powerful probe of the cosmic baryon distribution. The kSZ signal is proportional to the integrated free electron momentum rather than the electron pressure (which sources the thermal SZ signal). Since velocities should be unbiased on large scales, the kSZ signal is an unbiased tracer of the large-scale electron distribution, and thus can be used to detect the "missing baryons" that evade most observational techniques. While most current methods for kSZ extraction rely on the availability of very accurate redshifts, we revisit a method that allows measurements even in the absence of redshift information for individual objects. It involves cross-correlating the square of an appropriately filtered cosmic microwave background (CMB) temperature map with a projected density map constructed from a sample of large-scale structure tracers. We show that this method will achieve high signal-to-noise when applied to the next generation of high-resolution CMB experiments, provided that component separation is sufficiently effective at removing foreground contamination. Considering statistical errors only, we forecast that this estimator can yield S/N ≈ 3, 120, and over 150 for Planck, Advanced ACTPol, and a hypothetical Stage IV CMB experiment, respectively, in combination with a galaxy catalog from WISE, and about 20% larger S/N for a galaxy catalog from the proposed SPHEREx experiment. We show that the basic estimator receives a contribution due to leakage from CMB lensing, but that this term can be effectively removed by either direct measurement or marginalization, with little effect on the kSZ significance. We discuss possible sources of systematic contamination and propose mitigation strategies for future surveys. We compare the theoretical predictions to numerical simulations and validate the approximations in our analytic approach. This work serves as a companion paper to the first kSZ measurement with this method

  2. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  3. New code of conduct

    CERN Multimedia

    Laëtitia Pedroso

    2010-01-01

    During his talk to the staff at the beginning of the year, the Director-General mentioned that a new code of conduct was being drawn up. What exactly is it and what is its purpose? Anne-Sylvie Catherin, Head of the Human Resources (HR) Department, talked to us about the whys and wherefores of the project.   Drawing by Georges Boixader from the cartoon strip “The World of Particles” by Brian Southworth. A code of conduct is a general framework laying down the behaviour expected of all members of an organisation's personnel. “CERN is one of the very few international organisations that don’t yet have one", explains Anne-Sylvie Catherin. “We have been thinking about introducing a code of conduct for a long time but lacked the necessary resources until now”. The call for a code of conduct has come from different sources within the Laboratory. “The Equal Opportunities Advisory Panel (read also the "Equal opportuni...

  4. Multidetector CT evaluation of central airways stenoses: Comparison of virtual bronchoscopy, minimal-intensity projection, and multiplanar reformatted images

    Directory of Open Access Journals (Sweden)

    Dinesh K Sundarakumar

    2011-01-01

    Full Text Available Aims: To evaluate the diagnostic utility of virtual bronchoscopy, multiplanar reformatted images, and minimal-intensity projection in assessing airway stenoses. Settings and Design: It was a prospective study involving 150 patients with symptoms of major airway disease. Materials and Methods: Fifty-six patients were selected for analysis based on the detection of major airway lesions on fiber-optic bronchoscopy (FB) or routine axial images. Comparisons were made between axial images, virtual bronchoscopy (VB), minimal-intensity projection (minIP), and multiplanar reformatted (MPR) images using FB as the gold standard. Lesions were evaluated in terms of degree of airway narrowing, distance from carina, length of the narrowed segment, and visualization of the airway distal to the lesion. Results: MPR images had the highest degree of agreement with FB (Κ = 0.76) in the depiction of degree of narrowing. minIP had the least degree of agreement with FB (Κ = 0.51) in this regard. Distal visualization was best on MPR images (84.2%), followed by axial images (80.7%), whereas FB could visualize the lesions in only 45.4% of the cases. VB had the best agreement with FB in assessing the segment length (Κ = 0.62). Overall there were no statistically significant differences in the measurement of the distance from the carina in the axial, minIP, and MPR images. MPR images had the highest overall degree of confidence, namely, 70.17% (n = 40). Conclusion: Three-dimensional reconstruction techniques were found to improve lesion evaluation compared with axial images alone. The MPR technique was the most useful for lesion evaluation and provided additional information useful for surgical and airway interventions in tracheobronchial stenosis. minIP was useful in the overall depiction of airway anatomy.
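    The agreement statistic quoted throughout (Κ) is Cohen's kappa, observed agreement corrected for chance; a minimal sketch of its computation, with invented stenosis grades from two readers:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels on the same items."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1.0 - p_exp)

# Invented stenosis grades (0-3) assigned by two readers to 8 lesions
reader1 = [0, 1, 2, 2, 3, 1, 0, 2]
reader2 = [0, 1, 2, 1, 3, 1, 0, 3]
kappa = cohens_kappa(reader1, reader2)
```

    Values around 0.76 (MPR vs. FB in the study) indicate substantial agreement on common interpretation scales, while 0.51 (minIP) is only moderate.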

  5. Writing the Live Coding Book

    DEFF Research Database (Denmark)

    Blackwell, Alan; Cox, Geoff; Lee, Sang Wong

    2016-01-01

    This paper is a speculation on the relationship between coding and writing, and the ways in which technical innovations and capabilities enable us to rethink each in terms of the other. As a case study, we draw on recent experiences of preparing a book on live coding, which integrates a wide range of personal, historical, technical and critical perspectives. This book project has been both experimental and reflective, in a manner that allows us to draw on critical understanding of both code and writing, and point to the potential for new practices in the future.

  6. Results from the First Validation Phase of CAP code

    Energy Technology Data Exchange (ETDEWEB)

    Choo, Yeon Joon; Hong, Soon Joon; Hwang, Su Hyun; Kim, Min Ki; Lee, Byung Chul [FNC Tech., SNU, Seoul (Korea, Republic of); Ha, Sang Jun; Choi, Hoon [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)

    2010-10-15

    The second stage of the Safety Analysis Code Development for Nuclear Power Plants project was launched in April 2010 and is scheduled to run through 2012, with a scope of work covering code validation through licensing preparation. As a part of this project, CAP (Containment Analysis Package) will follow the same procedures. CAP's validation work is organized hierarchically into four validation steps using: 1) fundamental phenomena; 2) principal phenomena (mixing and transport) and components in containment; 3) demonstration tests from small, medium, and large facilities and International Standard Problems; 4) comparison with other containment codes such as GOTHIC or CONTEMPT. In addition, collecting the experimental data related to containment phenomena and then constructing the database is one of the major tasks of the second stage of this project. The validation of fundamental phenomena is expected to reveal both the current capability of the CAP code and its future improvements. For this purpose, simple but significant problems, which have exact analytical solutions, were selected and calculated for validation of fundamental phenomena. In this paper, some results of the validation problems for the selected fundamental phenomena are summarized and discussed briefly.

  7. A comparison of dynamical and statistical downscaling methods for regional wave climate projections along French coastlines.

    Science.gov (United States)

    Laugel, Amélie; Menendez, Melisa; Benoit, Michel; Mattarolo, Giovanni; Mendez, Fernando

    2013-04-01

    Wave climate forecasting is a major issue for numerous marine and coastal activities, such as offshore industries, flooding risk assessment and wave energy resource evaluation, among others. Generally, there are two main ways to predict the impacts of climate change on the wave climate at regional scale: dynamical and statistical downscaling of a GCM (Global Climate Model). In this study, both methods have been applied on the French coast (Atlantic, English Channel and North Sea shorelines) under three climate change scenarios (A1B, A2, B1) simulated with the GCM ARPEGE-CLIMAT from Météo-France (AR4, IPCC). The aim of the work is to characterise the wave climatology of the 21st century and compare the statistical and dynamical methods, pointing out the advantages and disadvantages of each approach. The statistical downscaling method proposed by the Environmental Hydraulics Institute of Cantabria (Spain) has been applied (Menendez et al., 2011). At a particular location, the sea-state climate (predictand Y) is defined as a function, Y=f(X), of several atmospheric circulation patterns (predictor X). Assuming these climate associations between predictor and predictand are stationary, the statistical approach has been used to project the future wave conditions with reference to the GCM. The statistical relations between predictor and predictand have been established over 31 years, from 1979 to 2009. The predictor is built as the 3-day-averaged squared sea level pressure gradient from the hourly CFSR database (Climate Forecast System Reanalysis, http://cfs.ncep.noaa.gov/cfsr/). The predictand has been extracted from the 31-year hindcast sea-state database ANEMOC-2, performed with the 3G spectral wave model TOMAWAC (Benoit et al., 1996), developed at EDF R&D LNHE and Saint-Venant Laboratory for Hydraulics and forced by the CFSR 10 m wind field. Significant wave height, peak period and mean wave direction have been extracted with hourly resolution at …
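The predictor construction described above (3-day-averaged squared sea-level-pressure gradient) can be sketched as follows. This is a minimal illustration with synthetic random fields standing in for the CFSR data; the array shapes and helper names are ours, not the authors':

```python
import numpy as np

def squared_slp_gradient(slp, dx=1.0, dy=1.0):
    """Squared magnitude of the sea-level-pressure gradient on a 2D grid."""
    gy, gx = np.gradient(slp, dy, dx)  # derivatives along y (axis 0) and x (axis 1)
    return gx**2 + gy**2

def three_day_mean(hourly_fields):
    """Average hourly fields of shape (t, y, x) over 3-day (72 h) windows."""
    t = (hourly_fields.shape[0] // 72) * 72  # drop any incomplete trailing window
    return hourly_fields[:t].reshape(-1, 72, *hourly_fields.shape[1:]).mean(axis=1)

# Toy example: 6 days of hourly pressure fields on a 10 x 10 grid.
rng = np.random.default_rng(0)
slp = rng.normal(101325.0, 500.0, size=(144, 10, 10))
grad2 = np.stack([squared_slp_gradient(f) for f in slp])
predictor = three_day_mean(grad2)  # one field per 3-day window: shape (2, 10, 10)
print(predictor.shape)
```

Each 3-day window yields one predictor field, which the statistical model then relates to the hindcast sea-state predictand.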

  8. Which General Principles of the Civil Code Do We Need? A Comparison with the German Civil Code

    Institute of Scientific and Technical Information of China (English)

    魏振瀛

    2016-01-01

    The civil law system of our country differs substantially from the German civil law system, and the difference lies essentially in legislative conception and mentality. The German civil law system is established on the basis of rights, and its theories on legal relations can thus be classified as belonging to the theory of "relations of rights". The civil code of our country, by contrast, is built on legal relations, and its theories belong to the theory of "relations of rights, obligations and liabilities". From the perspective of legislative methodology, the major difference lies in the concept and connotation of liabilities as well as the relations of liabilities and obligations. Since the German Civil Code is based on rights, the right to claim plays a key role in it. In our country, the legal relation is the core of the civil law system, which follows the main path of "rights - obligations - liabilities"; thus the right to claim has different functions in our country compared with those in German civil law.

  9. Adult attachment interviews of women from low-risk, poverty, and maltreatment risk samples: comparisons between the hostile/helpless and traditional AAI coding systems.

    Science.gov (United States)

    Frigerio, Alessandra; Costantino, Elisabetta; Ceppi, Elisa; Barone, Lavinia

    2013-01-01

    The main aim of this study was to investigate the correlates of a Hostile-Helpless (HH) state of mind among 67 women belonging to a community sample and two different at-risk samples matched on socio-economic indicators: 20 women from a low-SES population (poverty sample) and 15 women at risk for maltreatment who were being monitored by the social services for the protection of juveniles (maltreatment risk sample). The Adult Attachment Interview (AAI) protocols were reliably coded blind to the samples' group status. The rates of HH classification increased with the risk status of the three samples, ranging from 9% for the low-risk sample to 60% for the maltreatment risk sample and 75% for mothers in the maltreatment risk sample who actually maltreated their infants. In terms of the traditional AAI classification system, 88% of the interviews from the maltreating mothers were classified Unresolved/Cannot Classify (38%) or Preoccupied (50%). Partial overlap between the two AAI coding systems was found; the discussion concerns the distinct contributions of each AAI coding system to understanding the intergenerational transmission of maltreatment.

  10. Decoding the productivity code

    DEFF Research Database (Denmark)

    Hansen, David

    … approach often ends up demanding intense employee focus to sustain improvement and engagement. Likewise, a single-minded employee development approach often ends up demanding rationalization to achieve the desired financial results. These ineffective approaches make organizations react like pendulums that swing between rationalization and employee development. The productivity code is the lack of alternatives to this ineffective approach. This thesis decodes the productivity code based on the results from a 3-year action research study at a medium-sized manufacturing facility. During the project period … i.e., to be prepared to initiate improvement. The study shows how the effectiveness of the improvement system depends on the congruent fit between the five elements as well as the bridging coherence between the improvement system and the work system. The bridging coherence depends on how improvements are activated …

  11. Progress report for the project: Comparison of the response of mature branches and seedlings of Pinus ponderosa to atmospheric pollution

    Energy Technology Data Exchange (ETDEWEB)

    Houpis, J.L.J.; Anderson, P.D.; Benes, S.E.; Phelps, S.P.; Loeffler, A.T.

    1990-09-01

    This progress report details Lawrence Livermore National Laboratory's (LLNL) performance regarding the projects 'Comparison of the Response of Mature Branches and Seedlings of Pinus ponderosa to Atmospheric Pollution' and 'Effects of Ozone, Acid Precipitation, and Their Interactions on Mature Branches and Seedlings of Ponderosa Pine' for the months of November 1989 to June 1990. During the last eight months, we have initiated ozone and acid precipitation exposures, and we began intensive growth, morphological, and physiological measurements. During these major physiological measurement periods, we measured photosynthesis, transpiration, stomatal conductance, respiration, antioxidant activity, pigmentation, and foliar nutrient concentration. We have also concluded the analysis of our branch autonomy experiment, which we conducted in the fall. We determined that virtually no carbon is exported among branches in close proximity to one another. This conclusion assists in validating the approach of using branches and branch exposure chambers as a means of assessing the effects of air pollution on mature trees of Ponderosa pine. 6 refs., 4 figs., 3 tabs.

  12. Good Codes From Generalised Algebraic Geometry Codes

    CERN Document Server

    Jibril, Mubarak; Ahmed, Mohammed Zaki; Tjhai, Cen

    2010-01-01

    Algebraic geometry codes or Goppa codes are defined with places of degree one. In constructing generalised algebraic geometry codes, places of higher degree are used. In this paper we present 41 new codes over GF(16) which improve on the best known codes of the same length and rate. The construction method uses places of small degree with a technique originally published over 10 years ago for the construction of generalised algebraic geometry codes.

  13. The Visual Code Navigator : An Interactive Toolset for Source Code Investigation

    NARCIS (Netherlands)

    Lommerse, Gerard; Nossin, Freek; Voinea, Lucian; Telea, Alexandru

    2005-01-01

    We present the Visual Code Navigator, a set of three interrelated visual tools that we developed for exploring large source code software projects from three different perspectives, or views: The syntactic view shows the syntactic constructs in the source code. The symbol view shows the objects a …

  15. Linac code benchmarking of HALODYN and PARMILA based on beam experiments

    Science.gov (United States)

    Yin, X.; Bayer, W.; Hofmann, I.

    2016-01-01

    As part of the 'High Intensity Pulsed Proton Injector' (HIPPI) project in the European Framework Programme, a program for the comparison and benchmarking of 3D Particle-In-Cell (PIC) linac codes against experiment has been implemented. HALODYN and PARMILA are two of the codes involved in this program. In this study, the initial Twiss parameters were obtained from beam experiments conducted at low beam current using the GSI UNILAC. Furthermore, beam dynamics simulations of the Alvarez Drift Tube Linac (DTL) section were performed with the HALODYN and PARMILA codes and benchmarked against the same beam experiments. The simulation results show reasonable agreement with the experimental results for the low-beam-current case. The similarities and differences between the experimental and simulated results were analyzed quantitatively. In addition, various physical aspects of the simulation codes and the linac design strategy are also discussed.

  16. LFSC - Linac Feedback Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, Valentin; /Fermilab

    2008-05-01

    The computer program LFSC (Linac Feedback Simulation Code) is a numerical tool for simulating beam-based feedback in high-performance linacs. The code LFSC is based on an earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was subsequently used in simulations of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback, on timescales corresponding to 5-100 Hz, and slower feedback loops operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab on the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. A set of input files includes the lattice description (XSIF format) and plain text files with numerical parameters, wake fields, ground motion data etc. The Matlab environment provides a flexible system for graphical output.
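As an illustration of the kind of pulse-to-pulse beam-based feedback such a code simulates, here is a minimal, hypothetical proportional feedback loop correcting a drifting beam offset under random jitter. It is not LFSC's actual algorithm; the parameters and names are invented for the sketch:

```python
import random

def run_feedback(pulses=200, gain=0.5, drift=0.01, noise=0.02, seed=1):
    """Pulse-to-pulse feedback: each pulse, the residual beam offset is
    measured and a proportional correction scaled by `gain` is applied."""
    random.seed(seed)
    offset, correction = 0.0, 0.0
    history = []
    for _ in range(pulses):
        offset += drift + random.gauss(0.0, noise)  # slow drift + pulse jitter
        measured = offset - correction              # residual seen by the monitor
        correction += gain * measured               # feedback update
        history.append(offset - correction)         # residual after correction
    return history

residuals = run_feedback()
print(max(abs(r) for r in residuals[-50:]))  # residual stays bounded despite drift
```

Without the correction the offset would drift by roughly pulses * drift; with feedback the residual settles near drift / gain plus jitter.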

  17. Comparison of a 3-D multi-group SN particle transport code with Monte Carlo for intracavitary brachytherapy of the cervix uteri.

    Science.gov (United States)

    Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas

    2009-12-03

    A patient dose distribution was calculated by a 3D multi-group SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multi-group SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within +/- 3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by not more than +/- 1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations.

  18. Structured Light Based 3d Scanning for Specular Surface by the Combination of Gray Code and Phase Shifting

    Science.gov (United States)

    Zhang, Yujia; Yilmaz, Alper

    2016-06-01

    Surface reconstruction using coded structured light is considered one of the most reliable techniques for high-quality 3D scanning. With a calibrated projector-camera stereo system, a light pattern is projected onto the scene and imaged by the camera. Correspondences between projected and recovered patterns are computed in the decoding process, which is used to generate a 3D point cloud of the surface. However, indirect illumination effects on the surface, such as subsurface scattering and interreflections, raise difficulties in reconstruction. In this paper, we apply the maximum min-SW gray code to reduce the indirect illumination effects on specular surfaces. We also analyze the errors when comparing the maximum min-SW gray code and the conventional gray code, showing that the maximum min-SW gray code is significantly better at reducing indirect illumination effects. To achieve sub-pixel accuracy, we simultaneously project high-frequency sinusoidal patterns onto the scene. But for specular surfaces, the high-frequency patterns are susceptible to decoding errors, and incorrect decoding of high-frequency patterns results in a loss of depth resolution. Our method resolves this problem by combining the low-frequency maximum min-SW gray code with the high-frequency phase shifting code, which achieves dense 3D reconstruction for specular surfaces. Our contributions include: (i) a complete setup of the structured light based 3D scanning system; (ii) a novel combination technique of the maximum min-SW gray code and phase shifting code. First, phase shifting is decoded with sub-pixel accuracy. Then, the maximum min-SW gray code is used to resolve the phase ambiguity. According to the experimental results and data analysis, our structured light based 3D scanning system enables high quality dense reconstruction of scenes with a small number of images. Qualitative and quantitative comparisons are performed to extract the advantages of our new …
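The gray-code-plus-phase-shifting combination can be illustrated with a conventional reflected gray code (not the maximum min-SW variant of the paper) and the standard three-step phase-shifting formula; all function names here are ours, and the sketch only shows the decoding arithmetic, not a full scanner:

```python
import math

def gray_encode(n):
    """Reflected binary gray code of integer n."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Inverse of gray_encode."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def wrapped_phase(i1, i2, i3):
    """Three-step phase shifting: intensities at phase shifts -2pi/3, 0, +2pi/3."""
    return math.atan2(math.sqrt(3) * (i1 - i3), 2 * i2 - i1 - i3)

def unwrap(phase, stripe_index, period):
    """Absolute position: coarse gray-code stripe index + fine wrapped phase."""
    return stripe_index * period + (phase % (2 * math.pi)) / (2 * math.pi) * period

# Successive gray codes differ in exactly one bit, limiting decoding errors.
codes = [gray_encode(k) for k in range(8)]
print(codes)  # → [0, 1, 3, 2, 6, 7, 5, 4]

# The phase formula recovers the true phase from three shifted intensities.
phi = wrapped_phase(*[5 + 2 * math.cos(0.8 + d)
                      for d in (-2 * math.pi / 3, 0.0, 2 * math.pi / 3)])
print(round(phi, 3))  # recovers the true phase 0.8
```

The gray code supplies the coarse stripe index, which disambiguates the wrapped (periodic) phase into an absolute position via `unwrap`.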

  19. COMPARISON OF WETLANDS DESIGNATED IN PROJECT ISOK AND BY DELIMITATION OF SOILS METHOD IN ŚRODA ŚLĄSKA DISTRICT

    OpenAIRE

    Adam Górecki; Marek Helis

    2014-01-01

    This paper presents the results of a comparison of flood areas made by two methods in Środa Śląska district. Using GIS tools, we performed a spatial analysis of the maximum flood extents according to the Preliminary Flood Risk Assessment method in the Global Monitoring for Environment and Security Project (ISOK) and the method based on delimitation of soils. The difference between wetlands designated in project ISOK compared to areas designated by the delimitation of soils method in the north-eastern part...

  20. A DNA barcode-based survey of terrestrial arthropods in the Society Islands of French Polynesia: host diversity within the SymbioCode Project

    Directory of Open Access Journals (Sweden)

    Thibault Ramage

    2017-02-01

    We report here on the taxonomic and molecular diversity of 10 929 terrestrial arthropod specimens, collected on four islands of the Society Archipelago, French Polynesia. The survey was part of the ‘SymbioCode Project’ that aims to establish the Society Islands as a natural laboratory in which to investigate the flux of bacterial symbionts (e.g., Wolbachia) and other genetic material among branches of the arthropod tree. The sample includes an estimated 1127 species, of which 1098 included at least one DNA-barcoded specimen and 29 were identified to species level using morphological traits only. Species counts based on molecular data emphasize that some groups have been understudied in this region and deserve more focused taxonomic effort, notably Diptera, Lepidoptera and Hymenoptera. Some taxa that were also subjected to morphological scrutiny reveal a consistent match between DNA- and morphology-based species boundaries in 90% of the cases, with a larger than expected genetic diversity in the remaining 10%. Many species from this sample are new to this region or are undescribed. Some are under description, but many await inspection by motivated experts, who can use the online images or request access to ethanol-stored specimens.

  1. A Comparison of the Nutritional Quality of Food Products Advertised in Grocery Store Circulars of High- versus Low-Income New York City Zip Codes

    Directory of Open Access Journals (Sweden)

    Danna Ethan

    2013-12-01

    Grocery stores can be an important resource for health and nutrition with the variety and economic value of foods offered. Weekly circulars are a means of promoting foods at a sale price. To date, little is known about the extent that nutritious foods are advertised and prominently placed in circulars. This study’s aim was to compare the nutritional quality of products advertised on the front page of online circulars from grocery stores in high- versus low-income neighborhoods in New York City (NYC). Circulars from grocery stores in the five highest and five lowest median household income NYC zip codes were analyzed. Nutrition information for food products was collected over a two-month period with a total of 805 products coded. The study found no significant difference between the nutritional quality of products advertised on the front page of online circulars from grocery stores in high- versus low-income neighborhoods in New York City (NYC). In both groups, almost two-thirds of the products advertised were processed, one-quarter were high in carbohydrates, and few to no products were low-sodium, high-fiber, or reduced-, low- or zero fat. Through innovative partnerships with health professionals, grocery stores are increasingly implementing in-store and online health promotion strategies. Weekly circulars can be used as a means to regularly advertise and prominently place more healthful and seasonal foods at an affordable price, particularly for populations at higher risk for nutrition-related chronic disease.

  2. The “2T” ion-electron semi-analytic shock solution for code-comparison with xRAGE: A report for FY16

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, Jim Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-05

    This report documents an effort to generate the semi-analytic "2T" ion-electron shock solution developed in the paper by Masser, Wohlbier, and Lowrie [1], and the initial attempts to understand how to use this solution as a code-verification tool for one of LANL's ASC codes, xRAGE. Most of the work so far has gone into generating the semi-analytic solution. Considerable effort will go into understanding how to write the xRAGE input deck that matches the boundary conditions imposed by the solution, and also what physics models must be implemented within the semi-analytic solution itself to match the model assumptions inherent in xRAGE. Therefore, most of this report focuses on deriving the equations for the semi-analytic 1D-planar time-independent "2T" ion-electron shock solution, and is written in a style that is intended to provide clear guidance for anyone writing their own solver.

  3. Comparison of two equation-of-state models for partially ionized aluminum: Zel'dovich and Raizer's model versus the activity expansion code

    Energy Technology Data Exchange (ETDEWEB)

    Harrach, R.J.; Rogers, F.J.

    1981-09-01

    Two equation-of-state (EOS) models for multiply ionized matter are evaluated for the case of an aluminum plasma in the temperature range from about one eV to several hundred eV, spanning conditions of weak to strong ionization. Specifically, the simple analytical model of Zel'dovich and Raizer and the more comprehensive model comprised by Rogers' plasma physics activity expansion code (ACTEX) are used to calculate the specific internal energy epsilon and average degree of ionization Z-bar*, as functions of temperature T and density rho. In the absence of experimental data, these results are compared against each other, covering almost five orders-of-magnitude variation in epsilon and the full range of Z-bar*. We find generally good agreement between the two sets of results, especially for low densities and for temperatures near the upper end of the range. Calculated values of epsilon(T) agree to within +/- 30% over nearly the full range in T for densities below about 1 g/cm^3. Similarly, the two models predict values of Z-bar*(T) which track each other fairly well; above 20 eV the discrepancy is less than +/- 20% for rho < or approx. = 1 g/cm^3. Where the calculations disagree, we expect the ACTEX code to be more accurate than Zel'dovich and Raizer's model, by virtue of its more detailed physics content.

  4. Comparison of two equation-of-state models for partially ionized aluminum: Zel'dovich and Raizer's model versus the activity expansion code

    Science.gov (United States)

    Harrach, Robert J.; Rogers, Forest J.

    1981-09-01

    Two equation-of-state (EOS) models for multiply ionized matter are evaluated for the case of an aluminum plasma in the temperature range from about one eV to several hundred eV, spanning conditions of weak to strong ionization. Specifically, the simple analytical model of Zel'dovich and Raizer and the more comprehensive model comprised by Rogers' plasma physics activity expansion code (ACTEX) are used to calculate the specific internal energy ɛ and average degree of ionization Z¯*, as functions of temperature T and density ρ. In the absence of experimental data, these results are compared against each other, covering almost five orders-of-magnitude variation in ɛ and the full range of Z¯*. We find generally good agreement between the two sets of results, especially for low densities and for temperatures near the upper end of the range. Calculated values of ɛ(T) agree to within ±30% over nearly the full range in T for densities below about 1 g/cm3. Similarly, the two models predict values of Z¯*(T) which track each other fairly well; above 20 eV the discrepancy is less than ±20% for ρ≲1 g/cm3. Where the calculations disagree, we expect the ACTEX code to be more accurate than Zel'dovich and Raizer's model, by virtue of its more detailed physics content.

  5. Decoding the encoding of functional brain networks: An fMRI classification comparison of non-negative matrix factorization (NMF), independent component analysis (ICA), and sparse coding algorithms.

    Science.gov (United States)

    Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E

    2017-04-15

    Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks within scan for different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed 4 variations of ICA (p < …) and the K-SVD sparse coding algorithm. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy (p < …). The superior performance of sparse coding algorithms suggests that algorithms which enforce sparsity, discourage multitasking, and promote local specialization may capture better the underlying source processes than those which allow inexhaustible local processes such as ICA. Negative BOLD signal may capture task-related activations.
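The positivity constraint that distinguishes NMF from ICA can be illustrated with a minimal multiplicative-update implementation. This is a generic textbook sketch on synthetic data, not the decomposition pipeline used in the study:

```python
import numpy as np

def nmf(X, k, iters=500, seed=0):
    """Minimal multiplicative-update NMF: X ≈ W @ H with all factors >= 0."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], k)) + 1e-3
    H = rng.random((k, X.shape[1])) + 1e-3
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)  # update loadings, stays non-negative
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)  # update basis, stays non-negative
    return W, H

# Synthetic non-negative data with an exact rank-3 factorization.
rng = np.random.default_rng(1)
X = rng.random((40, 3)) @ rng.random((3, 25))
W, H = nmf(X, 3)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(err)  # small relative reconstruction error
```

Because both updates multiply by non-negative ratios, negative values can never appear, which is exactly how NMF suppresses negative signal.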

  6. The “2T” ion-electron semi-analytic shock solution for code-comparison with xRAGE: A report for FY16

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, Jim Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-05

    This report documents an effort to generate the semi-analytic "2T" ion-electron shock solution developed in the paper by Masser, Wohlbier, and Lowrie, and the initial attempts to understand how to use this solution as a code-verification tool for one of LANL's ASC codes, xRAGE. Most of the work so far has gone into generating the semi-analytic solution. Considerable effort will go into understanding how to write the xRAGE input deck that matches the boundary conditions imposed by the solution, and also what physics models must be implemented within the semi-analytic solution itself to match the model assumptions inherent in xRAGE. Therefore, most of this report focuses on deriving the equations for the semi-analytic 1D-planar time-independent "2T" ion-electron shock solution, and is written in a style that is intended to provide clear guidance for anyone writing their own solver.

  7. A Physical Layer Secrecy Coding Algorithm Using Multi-antenna Channel Characteristics Projection

    Institute of Scientific and Technical Information of China (English)

    王亚东; 黄开枝; 吉江

    2012-01-01

    This paper proposes a physical layer secrecy coding algorithm using multi-antenna channel characteristics projection, addressing problems of existing secrecy coding schemes such as strong dependence on channel conditions and the inability to share randomness. In a channel-reciprocal Time-Division-Duplex (TDD) system, the multi-antenna transmitter estimates the authorized channel characteristics from training symbols transmitted by the single-antenna receiver, and then generates a pair of projected vectors through multi-antenna channel characteristics projection. Since the transmitting weight vector is randomly selected from the pair of projected vectors symbol by symbol, the Hamming distances of the eavesdropper's demodulated codewords are randomly scrambled, so that the eavesdropper cannot decode normally and secure transmission is achieved. Simulation results show that the scheme drives the eavesdropper's BER to approximately 0.5, while the authorized receiver's BER is an order of magnitude lower than with existing multi-antenna physical layer secure transmission methods.

  8. Scar or recurrence? Comparison of MRI and color-coded ultrasound with echo signal amplifiers

    Energy Technology Data Exchange (ETDEWEB)

    Aichinger, U.; Schulz-Wendtland, R.; Lell, M.; Bautz, W. [Institut fuer Diagnostische Radiologie, Erlangen Univ., Nuernberg (Germany); Kraemer, S. [Klinik fuer Frauenheilkunde, Friedrich-Alexander-Universitaet Erlangen, Nuernberg (Germany)

    2002-11-01

    Purpose: MRI is the most reliable method to differentiate scar and recurrent carcinoma of the breast after surgical treatment. This study compares MRI and color-coded ultrasound with and without echo signal amplifier (ESA). Materials and Methods: Forty-two patients with suspected recurrent tumors were enrolled in this prospective study, 38 patients after breast conserving therapy and 4 after mastectomy. All patients had a clinical examination, mammography (n=38), real-time ultrasound (US), color-coded ultrasound without and with ESA (Levovist™, Schering, Berlin), and dynamic MRI. The criteria used for duplex ultrasound were tumor vascularisation and flow pattern. The results were compared with histologic findings or the results of follow-up examinations for at least 12 months. Results: The detection of penetrating or central vessels proved to be an accurate sign of malignancy in duplex ultrasound. With the application of ESA, additional vessels were detected within the lesions, increasing the diagnostic accuracy (83% with ESA versus 79% without ESA). The sensitivity of color-coded ultrasound improved from 64% to 86% with echo signal amplifier. The specificity was 86% without and 82% with echo signal amplifier. MRI was found to have a sensitivity of 100% and a specificity of 82%. The same 5 lesions were false positive on MRI and color-coded US after Levovist™. No lesion without signs of vascularity within or in its vicinity was malignant. Conclusion: Color-coded ultrasound seems to be a promising method in the differentiation between scar and recurrence. Lesions with penetrating or central vessels have a high probability of being malignant, whereas lesions without any signs of vascularity inside or nearby have a high probability of being benign. An advantage of contrast-enhanced US is its ubiquitous availability. (orig.)

  9. Space Time Codes from Permutation Codes

    CERN Document Server

    Henkel, Oliver

    2006-01-01

    A new class of space time codes with high performance is presented. The code design utilizes tailor-made permutation codes, which are known to have large minimal distances as spherical codes. A geometric connection between spherical and space time codes has been used to translate them into the final space time codes. Simulations demonstrate that the performance increases with the block length, a result that had already been conjectured in previous work. Further, the connection to permutation codes allows for moderately complex encoding/decoding algorithms.
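As a toy illustration of the minimal-distance property that makes permutation codes attractive (not the tailor-made codes of the paper), one can compute the minimum pairwise Hamming distance of the full set of permutations of a small symbol set:

```python
from itertools import permutations

def hamming(a, b):
    """Number of positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def min_distance(code):
    """Minimum pairwise Hamming distance of a block code."""
    return min(hamming(a, b) for i, a in enumerate(code) for b in code[i + 1:])

# Toy permutation code: all 24 permutations of (0, 1, 2, 3).
code = list(permutations(range(4)))
print(min_distance(code))  # → 2
```

Two distinct permutations can never differ in exactly one position, so the minimum distance of any permutation code is at least 2; carefully chosen subsets achieve much larger distances.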

  10. Detecting and Characterizing Semantic Inconsistencies in Ported Code

    Science.gov (United States)

    Ray, Baishakhi; Kim, Miryung; Person, Suzette; Rungta, Neha

    2013-01-01

    Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.

  11. Projecting the environmental profile of Singapore's landfill activities: Comparisons of present and future scenarios based on LCA.

    Science.gov (United States)

    Khoo, Hsien H; Tan, Lester L Z; Tan, Reginald B H

    2012-05-01

    This article aims to generate the environmental profile of Singapore's Semakau landfill by comparing three different operational options associated with the life cycle stages of landfilling activities, against a 'business as usual' scenario. Before life cycle assessment or LCA is used to quantify the potential impacts from landfilling activities, an attempt to incorporate localized and empirical information into the amounts of ash and MSW sent to the landfill was made. A linear regression representation of the relationship between the mass of waste disposed and the mass of incineration ash generated was modeled from waste statistics between years 2004 and 2009. Next, the mass of individual MSW components was projected from 2010 to 2030. The LCA results highlighted that in a 'business as usual' scenario the normalized total impacts of global warming, acidification and human toxicity increased by about 2% annually from 2011 to 2030. By replacing the 8000-tonne barge with a 10000-tonne coastal bulk carrier or freighter (in scenario 2) a grand total reduction of 48% of both global warming potential and acidification can be realized by year 2030. Scenario 3 explored the importance of having a Waste Water Treatment Plant in place to reduce human toxicity levels - however, the overall long-term benefits were not as significant as scenario 2. It is shown in scenario 4 that the option of increased recycling outperformed the other three scenarios in the long run, resulting in a total 58% reduction in year 2030 for the total normalized results. A separate comparison of scenarios 1-4 is also carried out for energy utilization and land use in terms of volume of waste occupied. Along with the predicted reductions in environmental burdens, an additional bonus is found in the expanded lifespan of Semakau landfill from year 2032 (base case) to year 2039. Model limitations and suggestions for improvements are also discussed.

  12. Present state of global wetland extent and wetland methane modelling: methodology of a model inter-comparison project (WETCHIMP)

    Directory of Open Access Journals (Sweden)

    R. Wania

    2013-05-01

    The Wetland and Wetland CH4 Intercomparison of Models Project (WETCHIMP) was created to evaluate our present ability to simulate large-scale wetland characteristics and corresponding methane (CH4) emissions. A multi-model comparison is essential to evaluate the key uncertainties in the mechanisms and parameters leading to methane emissions. Ten modelling groups joined WETCHIMP to run eight global and two regional models with a common experimental protocol using the same climate and atmospheric carbon dioxide (CO2) forcing datasets. We reported the main conclusions from the intercomparison effort in a companion paper (Melton et al., 2013). Here we provide technical details for the six experiments, which included an equilibrium, a transient, and an optimized run plus three sensitivity experiments (temperature, precipitation, and atmospheric CO2 concentration). The diversity of approaches used by the models is summarized through a series of conceptual figures, and is used to evaluate the wide range of wetland extent and CH4 fluxes predicted by the models in the equilibrium run. We discuss relationships among the various approaches and patterns in consistencies of these model predictions. Within this group of models, there are three broad classes of methods used to estimate wetland extent: prescribed based on wetland distribution maps, prognostic relationships between hydrological states based on satellite observations, and explicit hydrological mass balances. A larger variety of approaches was used to estimate the net CH4 fluxes from wetland systems. Even though modelling of wetland extent and CH4 emissions has progressed significantly over recent decades, large uncertainties still exist when estimating CH4 emissions: there is little consensus on model structure or complexity due to knowledge gaps, different aims of the models, and the range of temporal and spatial resolutions of the models.

  13. Comparison of surveillance sample demographics over two cycles of the National HIV Behavioral Surveillance Project, Houston, Texas.

    Science.gov (United States)

    Risser, Jan M H; Montealegre, Jane R

    2014-04-01

    We examined differences in sample demographics across cycles of the National HIV Behavioral Surveillance (NHBS) project, which examines HIV risk behaviors among men who have sex with men (MSM), injection drug users (IDU), and heterosexuals living in areas of high HIV prevalence (HET). MSM were recruited through venue-based sampling, and IDU and HET through respondent-driven sampling (RDS). RDS data were weighted to account for sampling bias. We compared crude prevalence estimates from MSM1 (2004) to those from MSM2 (2008) for demographic factors known to influence risky sexual and drug-use behaviors. We compared crude and adjusted prevalence estimates for IDU1 (2005) and IDU2 (2009) and for HET1 (2006) and HET2 (2010). In the MSM cycles, we found differences in age and in the proportions seeking medical care and reporting a recent arrest. There were no differences in the comparison of crude and weighted estimates for the RDS-collected samples, nor were there differences comparing HET1 and HET2 weighted estimates. IDU2 recruited a larger proportion of males, and had a higher percentage who graduated from high school and who reported recent medical care and a previous HIV test. Differences across MSM cycles may be related to differences in venues identified for each cycle. Differences in the IDU cycles may be due to an effort on our part to increase the racial/ethnic and drug-use diversity of the sample in IDU2. Our findings show the importance of formative work for both venue-based and RDS samples to increase understanding of the dimensions that affect social networks and the dynamics of populations in space and time. With familiarity with the target population, we believe that both venue-based and RDS recruitment approaches for NHBS work well and can be used to evaluate changes in risky sexual and drug-use behaviors and in HIV testing behaviors.

  14. On Construction of Optimal A2-Codes

    Institute of Scientific and Technical Information of China (English)

    HU Lei

    2001-01-01

    Two authentication codes with arbitration (A2-codes) are constructed from finite affine spaces to illustrate for the first time that the information-theoretic lower bounds for A2-codes can be strictly tighter than the combinatorial ones. The codes also illustrate that the conditional combinatorial lower bounds on the numbers of encoding/decoding rules are not genuine ones. As an analogue of the 3-dimensional case, an A2-code from 4-dimensional finite projective spaces is constructed, which meets both the information-theoretic and combinatorial lower bounds.

  15. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual
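    As a minimal illustration of the encoder side that these decoding algorithms operate on (not taken from the book), the following sketch implements a memory-2, rate-1/2 convolutional encoder with the textbook generator pair (7, 5) in octal:

    ```python
    # Sketch of a rate-1/2 convolutional encoder. The generator polynomials
    # 7 and 5 (octal) are a standard textbook example, assumed for illustration.

    def conv_encode(bits, g1=0b111, g2=0b101):
        """Encode a bit sequence with a memory-2, rate-1/2 convolutional code."""
        state = 0  # shift register holding the two previous input bits
        out = []
        for b in bits:
            reg = (b << 2) | state                     # [current, prev1, prev2]
            out.append(bin(reg & g1).count("1") % 2)   # parity against g1 = 111
            out.append(bin(reg & g2).count("1") % 2)   # parity against g2 = 101
            state = reg >> 1                           # shift: drop the oldest bit
        return out

    print(conv_encode([1, 0, 1, 1]))  # → [1, 1, 1, 0, 0, 0, 0, 1]
    ```

    Each input bit produces two output bits, so the distance properties discussed in the book come from how these parity streams spread information across the register memory.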

  16. Reusable State Machine Code Generator

    Science.gov (United States)

    Hoffstadt, A. A.; Reyes, C.; Sommer, H.; Andolfato, L.

    2010-12-01

    The State Machine model is frequently used to represent the behaviour of a system, allowing one to express and execute this behaviour in a deterministic way. A graphical representation such as a UML State Chart diagram tames the complexity of the system, thus facilitating changes to the model and communication between developers and domain experts. We present a reusable state machine code generator, developed by the Universidad Técnica Federico Santa María and the European Southern Observatory. The generator itself is based on the open source project architecture, and uses UML State Chart models as input. This allows for a modular design and a clean separation between generator and generated code. The generated state machine code has well-defined interfaces that are independent of implementation artefacts such as the middleware. This allows the generator to be used in the substantially different observatory software of the Atacama Large Millimeter Array and the ESO Very Large Telescope. A project-specific mapping layer for event and transition notification connects the state machine code to its environment, which can be the Common Software of these projects, or any other project. This approach even makes it possible to automatically create tests for a generated state machine, using techniques from software testing such as path coverage.

  17. 国内外电气防火检测规范比较及分析%Comparison and Analysis of Domestic and Overseas Electrical Fireproofing Inspection Code

    Institute of Scientific and Technical Information of China (English)

    任长宁

    2011-01-01

    This paper analyzes and compares the provisions on electrical fireproofing inspection items, test methods, protective devices and construction technology in domestic and overseas standards and codes, and explains the thin-conductor connecting method, on-site inspection of circuit voltage drop, the use of arc fault circuit interrupters, etc.%分析对比国内外标准规范中涉及的电气防火检测项目、检测方法、保护电器、施工工艺等的规定。并针对细导线接续方法、现场检测线路压降、电弧故障断路器的使用等问题进行了说明。

  18. Performance Comparison and Analysis of LDPC Codes with Multiple Decoding%多译码方式下的LDPC码性能比较与分析

    Institute of Scientific and Technical Information of China (English)

    代妮娜; 蔡黎; 蔡绍林

    2011-01-01

    为了达到研究LDPC码在通信工程使用中性能优劣的目的,采用数学建模、模拟仿真、算法分析的方法,通过MatLab软件仿真实验,获得Gallager等常见的三种编码方式在硬判决条件、和积、BP对数域三种译码条件下,在误码率等方面的系列结果,并得出结论。%To study the performance of LDPC codes in communication engineering, mathematical modeling, simulation and algorithm analysis are adopted. Through MATLAB simulation experiments, a series of results, including bit error rate, are obtained for three common coding schemes (such as Gallager codes) under three decoding conditions: hard decision, sum-product, and log-domain BP, and conclusions are drawn.

  19. Comparison of facial expression in patients with obsessive-compulsive disorder and schizophrenia using the Facial Action Coding System: a preliminary study

    Directory of Open Access Journals (Sweden)

    Bersani G

    2012-12-01

    Giuseppe Bersani,1 Francesco Saverio Bersani,1,2 Giuseppe Valeriani,1 Maddalena Robiony,1 Annalisa Anastasia,1 Chiara Colletti,1,3 Damien Liberati,1 Enrico Capra,2 Adele Quartini,1 Elisa Polli1 1Department of Medical-Surgical Sciences and Biotechnologies, 2Department of Neurology and Psychiatry, Sapienza University of Rome, Rome, 3Department of Neuroscience and Behaviour, Section of Psychiatry, Federico II University of Naples, Naples, Italy. Background: Research shows that impairment in the expression and recognition of emotion exists in multiple psychiatric disorders. The objective of the current study was to evaluate the way that patients with schizophrenia and those with obsessive-compulsive disorder experience and display emotions in relation to specific emotional stimuli, using the Facial Action Coding System (FACS). Methods: Thirty individuals participated in the study, comprising 10 patients with schizophrenia, 10 with obsessive-compulsive disorder, and 10 healthy controls. All participants underwent clinical sessions to evaluate their symptoms and watched emotion-eliciting video clips while facial activity was videotaped. Congruent/incongruent feeling of emotions and facial expression in reaction to emotions were evaluated. Results: Patients with schizophrenia and obsessive-compulsive disorder presented similarly incongruent emotive feelings and facial expressions (significantly worse than healthy participants). Correlations between the severity of the psychopathological condition (in particular the severity of affective flattening) and impairment in recognition and expression of emotions were found. Discussion: Patients with obsessive-compulsive disorder and schizophrenia seem to present a similarly relevant impairment in both experiencing and displaying of emotions; this impairment may be seen as a chronic consequence of the same neurodevelopmental origin of the two diseases. Mimic expression could be seen as a behavioral indicator of affective

  20. Comparison of depth-dose distributions of proton therapeutic beams calculated by means of logical detectors and ionization chamber modeled in Monte Carlo codes

    Science.gov (United States)

    Pietrzak, Robert; Konefał, Adam; Sokół, Maria; Orlef, Andrzej

    2016-08-01

    The success of proton therapy depends strongly on the precision of treatment planning. Dose distribution in biological tissue may be obtained from Monte Carlo simulations using various scientific codes making it possible to perform very accurate calculations. However, there are many factors affecting the accuracy of modeling. One of them is the structure of the objects, called bins, registering a dose. In this work the influence of the bin structure on the dose distributions was examined. The MCNPX code calculations of the Bragg curve for the 60 MeV proton beam were done in two ways: using simple logical detectors, i.e. volumes determined in water, and using a precise model of an ionization chamber used in clinical dosimetry. The results of the simulations were verified experimentally in a water phantom with a Marcus ionization chamber. The average local dose difference between the measured relative doses in the water phantom and those calculated by means of the logical detectors was 1.4% over the first 25 mm, whereas in the full depth range this difference was 1.6%, for a maximum uncertainty in the calculations of less than 2.4% and a maximum measuring error of 1%. In the case of the relative doses calculated with the use of the ionization chamber model this average difference was somewhat greater, being 2.3% at depths up to 25 mm and 2.4% in the full range of depths, for a maximum uncertainty in the calculations of 3%. In the dose calculations the ionization chamber model does not offer any additional advantages over the logical detectors. The results provided by both models are similar and in good agreement with the measurements; however, the logical detector approach is a more time-effective method.
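    The abstract's headline numbers are "average local dose differences" between measured and simulated depth-dose curves. As a sketch of how such a figure can be computed (the exact formula is an assumption; the abstract does not spell it out, and the dose values below are hypothetical):

    ```python
    def avg_local_dose_difference(measured, simulated):
        """Mean relative local difference (%) between two depth-dose curves
        sampled at the same depths. Assumed definition: mean of
        |D_sim - D_meas| / D_meas over all depth points."""
        assert len(measured) == len(simulated)
        diffs = [abs(s - m) / m for m, s in zip(measured, simulated)]
        return 100.0 * sum(diffs) / len(diffs)

    # Hypothetical relative doses at a few depths along a Bragg curve
    measured = [0.40, 0.45, 0.55, 0.80, 1.00]
    simulated = [0.41, 0.44, 0.56, 0.79, 1.00]
    print(round(avg_local_dose_difference(measured, simulated), 2))
    ```

    Restricting the depth lists to the plateau region (here, the first points before the Bragg peak) would reproduce the paper's separate "first 25 mm" versus "full depth range" comparison.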

  1. Santa Barbara Cluster Comparison Test with DISPH

    CERN Document Server

    Saitoh, Takayuki

    2016-01-01

    The Santa Barbara cluster comparison project (Frenk et al. 1999) revealed that there is a systematic difference between the entropy profiles of clusters of galaxies obtained by Eulerian mesh and Lagrangian smoothed particle hydrodynamics (SPH) codes: mesh codes gave a core with a constant entropy whereas SPH codes did not. One possible reason for this difference is that mesh codes are not Galilean invariant. Another possible reason is a problem with the SPH method, which might give too much "protection" to cold clumps because of the unphysical surface tension induced at contact discontinuities. In this paper, we apply the density independent formulation of SPH (DISPH), which can handle contact discontinuities accurately, to simulations of a cluster of galaxies, and compare the results with those of the standard SPH. We obtained the entropy core when we adopt DISPH. The size of the core is, however, significantly smaller than those obtained with mesh simulations, and is comparable to those obtained with qu...

  2. Strong Trinucleotide Circular Codes

    Directory of Open Access Journals (Sweden)

    Christian J. Michel

    2011-01-01

    Recently, we identified a hierarchy relation between trinucleotide comma-free codes and trinucleotide circular codes (see our previous works). Here, we extend our hierarchy with two new classes of codes, called DLD and LDL codes, which are stronger than the comma-free codes. We also prove that no circular code with 20 trinucleotides is a DLD code and that a circular code with 20 trinucleotides is comma-free if and only if it is an LDL code. Finally, we point out the possible role of the symmetric group Σ4 in the mathematical study of trinucleotide circular codes.
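    The comma-free property at the base of this hierarchy is easy to state operationally: no code word may appear in a shifted reading frame of any concatenation of two code words. A small sketch of that check (using the standard definition, not the paper's DLD/LDL machinery; the example codes are illustrative):

    ```python
    def is_comma_free(code):
        """Check the comma-free property for a set of trinucleotides:
        no word of the code may appear at frame 1 or frame 2 of any
        concatenation xy of two code words (x = y allowed)."""
        code = set(code)
        for x in code:
            for y in code:
                w = x + y                 # six letters spanning the boundary
                if w[1:4] in code or w[2:5] in code:
                    return False          # a frame-shifted reading decodes
        return True

    print(is_comma_free({"AAC", "ACC"}))  # no shifted reading is a code word
    print(is_comma_free({"AAA"}))         # AAAAAA decodes in every frame
    ```

    Every comma-free code is circular, but not conversely; the DLD and LDL classes introduced in the paper sit between these two notions.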

  3. Detecting non-coding selective pressure in coding regions

    Directory of Open Access Journals (Sweden)

    Blanchette Mathieu

    2007-02-01

    Background Comparative genomics approaches, where orthologous DNA regions are compared and inter-species conserved regions are identified, have proven extremely powerful for identifying non-coding regulatory regions located in intergenic or intronic regions. However, non-coding functional elements can also be located within coding regions, as is common for exonic splicing enhancers, some transcription factor binding sites, and RNA secondary structure elements affecting mRNA stability, localization, or translation. Since these functional elements are located in regions that are themselves highly conserved because they are coding for a protein, they have generally escaped detection by comparative genomics approaches. Results We introduce a comparative genomics approach for detecting non-coding functional elements located within coding regions. Codon evolution is modeled as a mixture of codon substitution models, where each component of the mixture describes the evolution of codons under a specific type of coding selective pressure. We show how to compute the posterior distribution of the entropy and parsimony scores under this null model of codon evolution. The method is applied to a set of growth hormone 1 orthologous mRNA sequences and a known exonic splicing element is detected. The analysis of a set of CORTBP2 orthologous genes reveals a region of several hundred base pairs under strong non-coding selective pressure whose function remains unknown. Conclusion Non-coding functional elements, in particular those involved in post-transcriptional regulation, are likely to be much more prevalent than is currently known. With the numerous genome sequencing projects underway, comparative genomics approaches like that proposed here are likely to become increasingly powerful at detecting such elements.

  4. Comparison of depth-dose distributions of proton therapeutic beams calculated by means of logical detectors and ionization chamber modeled in Monte Carlo codes

    Energy Technology Data Exchange (ETDEWEB)

    Pietrzak, Robert [Department of Nuclear Physics and Its Applications, Institute of Physics, University of Silesia, Katowice (Poland); Konefał, Adam, E-mail: adam.konefal@us.edu.pl [Department of Nuclear Physics and Its Applications, Institute of Physics, University of Silesia, Katowice (Poland); Sokół, Maria; Orlef, Andrzej [Department of Medical Physics, Maria Sklodowska-Curie Memorial Cancer Center, Institute of Oncology, Gliwice (Poland)

    2016-08-01

    The success of proton therapy depends strongly on the precision of treatment planning. Dose distribution in biological tissue may be obtained from Monte Carlo simulations using various scientific codes making it possible to perform very accurate calculations. However, there are many factors affecting the accuracy of modeling. One of them is the structure of the objects, called bins, registering a dose. In this work the influence of the bin structure on the dose distributions was examined. The MCNPX code calculations of the Bragg curve for the 60 MeV proton beam were done in two ways: using simple logical detectors, i.e. volumes determined in water, and using a precise model of an ionization chamber used in clinical dosimetry. The results of the simulations were verified experimentally in a water phantom with a Marcus ionization chamber. The average local dose difference between the measured relative doses in the water phantom and those calculated by means of the logical detectors was 1.4% over the first 25 mm, whereas in the full depth range this difference was 1.6%, for a maximum uncertainty in the calculations of less than 2.4% and a maximum measuring error of 1%. In the case of the relative doses calculated with the use of the ionization chamber model this average difference was somewhat greater, being 2.3% at depths up to 25 mm and 2.4% in the full range of depths, for a maximum uncertainty in the calculations of 3%. In the dose calculations the ionization chamber model does not offer any additional advantages over the logical detectors. The results provided by both models are similar and in good agreement with the measurements; however, the logical detector approach is a more time-effective method. - Highlights: • Influence of the bin structure on the proton dose distributions was examined for the MC simulations. • The considered relative proton dose distributions in water correspond to the clinical application. • MC simulations performed with the logical detectors and the

  5. COMPARISON OF THE TREND PROJECTION METHOD AND THE BACKPROPAGATION METHOD IN PREDICTING THE NUMBER OF VICTIMS WHO DIED IN TRAFFIC ACCIDENTS IN TIMOR TENGAH REGENCY, NUSA TENGGARA

    Directory of Open Access Journals (Sweden)

    Aleksius Madu

    2016-10-01

    The purpose of this study is to predict the number of traffic accident victims who died in Timor Tengah Regency with the Trend Projection method and the Backpropagation method, to compare the two methods based on their error rates, and to predict the number of traffic accident victims in Timor Tengah Regency for the coming years. This research was conducted in Timor Tengah Regency, where the data used in this study were obtained from the Police Unit in Timor Tengah Regency. The data cover the number of traffic accidents in Timor Tengah Regency from 2000 to 2013 and were analyzed quantitatively with the Trend Projection and Backpropagation methods. Predicting the number of traffic accident victims with the Trend Projection method, the best model obtained is the quadratic trend model with the equation Yk = 39.786 + 3.297X + 0.13X². With the Backpropagation method, the optimum network obtained consists of 2 inputs, 3 hidden units, and 1 output. Based on the error rates obtained, the Backpropagation method is better than the Trend Projection method, which means that Backpropagation is the more accurate method for predicting the number of traffic accident victims in Timor Tengah Regency. The predicted numbers of traffic accident victims for the next 5 years (2014-2018) are 106, 115, 115, 119, and 120 people. Keywords: Trend Projection, Backpropagation, Predicting.
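    The quadratic trend model quoted in the abstract is an ordinary least-squares fit of Y against X and X². A sketch of that fit (the yearly counts below are synthetic points generated from the paper's own curve, since the 2000-2013 data are not reproduced here):

    ```python
    import numpy as np

    # Synthetic yearly victim counts lying on the paper's fitted curve
    # Yk = 39.786 + 3.297 X + 0.13 X^2 (X = 1..14 for 2000-2013).
    years = np.arange(1, 15)
    victims = 39.786 + 3.297 * years + 0.13 * years**2

    # Least-squares quadratic trend, as in the Trend Projection method
    c2, c1, c0 = np.polyfit(years, victims, deg=2)
    print(round(c0, 3), round(c1, 3), round(c2, 3))

    # Projection one period ahead (X = 15)
    x_next = 15
    print(round(c0 + c1 * x_next + c2 * x_next**2))
    ```

    On real data the fit would not be exact, and the residuals give the error rate the paper uses to compare Trend Projection against Backpropagation.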

  6. Joint source channel coding using arithmetic codes

    CERN Document Server

    Bi, Dongsheng

    2009-01-01

    Based on the encoding process, arithmetic codes can be viewed as tree codes and current proposals for decoding arithmetic codes with forbidden symbols belong to sequential decoding algorithms and their variants. In this monograph, we propose a new way of looking at arithmetic codes with forbidden symbols. If a limit is imposed on the maximum value of a key parameter in the encoder, this modified arithmetic encoder can also be modeled as a finite state machine and the code generated can be treated as a variable-length trellis code. The number of states used can be reduced and techniques used fo

  7. Computer codes for birds of North America

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — Purpose of paper was to provide a more useful way to provide codes for all North American species, thus making the list useful for virtually all projects concerning...

  8. Automated UMLS-based comparison of medical forms.

    Directory of Open Access Journals (Sweden)

    Martin Dugas

    Medical forms are very heterogeneous: on a European scale there are thousands of data items in several hundred different systems. To enable data exchange for clinical care and research purposes there is a need to develop interoperable documentation systems with harmonized forms for data capture. A prerequisite in this harmonization process is comparison of forms. So far--to our knowledge--an automated method for comparison of medical forms is not available. A form contains a list of data items with corresponding medical concepts. An automatic comparison needs data types, item names and especially items with unique concept codes from medical terminologies. The scope of the proposed method is a comparison of these items by comparing their concept codes (coded in UMLS). Each data item is represented by item name, concept code and value domain. Two items are called identical, if item name, concept code and value domain are the same. Two items are called matching, if only concept code and value domain are the same. Two items are called similar, if their concept codes are the same, but the value domains are different. Based on these definitions an open-source implementation for automated comparison of medical forms in ODM format with UMLS-based semantic annotations was developed. It is available as package compareODM from http://cran.r-project.org. To evaluate this method, it was applied to a set of 7 real medical forms with 285 data items from a large public ODM repository with forms for different medical purposes (research, quality management, routine care). Comparison results were visualized with grid images and dendrograms. Automated comparison of semantically annotated medical forms is feasible. Dendrograms allow a view on clustered similar forms. The approach is scalable for a large set of real medical forms.
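    The identical/matching/similar definitions above reduce to a simple comparison over (item name, concept code, value domain) triples. A sketch of that rule set (in Python for illustration; compareODM itself is an R package, and the example items, including the UMLS code, are hypothetical):

    ```python
    def classify_items(a, b):
        """Classify two form items per the abstract's definitions:
        identical = same name, concept code, and value domain
        matching  = same concept code and value domain, different name
        similar   = same concept code, different value domain
        Items are (name, concept_code, value_domain) triples; the field
        layout is an assumption, not the compareODM data model."""
        name_a, code_a, dom_a = a
        name_b, code_b, dom_b = b
        if code_a != code_b:
            return "unrelated"
        if dom_a != dom_b:
            return "similar"
        return "identical" if name_a == name_b else "matching"

    # Hypothetical items annotated with the same UMLS concept code
    print(classify_items(("Weight", "C0005910", "kg"), ("Weight", "C0005910", "kg")))
    print(classify_items(("Body weight", "C0005910", "kg"), ("Weight", "C0005910", "kg")))
    print(classify_items(("Weight", "C0005910", "lb"), ("Weight", "C0005910", "kg")))
    ```

    Counting these classes across all item pairs of two forms yields the form-level distances visualized in the paper's grid images and dendrograms.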

  9. Automated UMLS-based comparison of medical forms.

    Science.gov (United States)

    Dugas, Martin; Fritz, Fleur; Krumm, Rainer; Breil, Bernhard

    2013-01-01

    Medical forms are very heterogeneous: on a European scale there are thousands of data items in several hundred different systems. To enable data exchange for clinical care and research purposes there is a need to develop interoperable documentation systems with harmonized forms for data capture. A prerequisite in this harmonization process is comparison of forms. So far--to our knowledge--an automated method for comparison of medical forms is not available. A form contains a list of data items with corresponding medical concepts. An automatic comparison needs data types, item names and especially items with unique concept codes from medical terminologies. The scope of the proposed method is a comparison of these items by comparing their concept codes (coded in UMLS). Each data item is represented by item name, concept code and value domain. Two items are called identical, if item name, concept code and value domain are the same. Two items are called matching, if only concept code and value domain are the same. Two items are called similar, if their concept codes are the same, but the value domains are different. Based on these definitions an open-source implementation for automated comparison of medical forms in ODM format with UMLS-based semantic annotations was developed. It is available as package compareODM from http://cran.r-project.org. To evaluate this method, it was applied to a set of 7 real medical forms with 285 data items from a large public ODM repository with forms for different medical purposes (research, quality management, routine care). Comparison results were visualized with grid images and dendrograms. Automated comparison of semantically annotated medical forms is feasible. Dendrograms allow a view on clustered similar forms. The approach is scalable for a large set of real medical forms.

  10. Comparison of serological tests for antibody to hepatitis A antigen, using coded specimens from individuals infected with the MS-1 strain of hepatitis A virus.

    Science.gov (United States)

    Dienstag, J L; Krugman, S; Wong, D C; Purcell, R H

    1976-01-01

    To compare serological tests for antibody to hepatitis A antigen (anti-HA), we tested 15 paired serum specimens, submitted under code, from individuals infected with the MS-1 strain of hepatitis A virus. Immune electron microscopy (IEM), immune adherence hemagglutination (IAHA), and solid-phase radioimmunoassay (RIA) tests for anti-HA were performed with hepatitis A antigen (HA Ag) derived from human stool; results were also compared with previously reported titers determined by IAHA with HA Ag derived from marmoset liver. Antibody titers (IAHA and RIA) and ratings (IEM) determined with stool-derived HA Ag compared favorably, and a seroresponse to HA Ag was detected by all three methods for every serum pair tested. Differences in titers were noted between IAHA tests with liver-derived and with stool-derived HA Ag, but the discrepancies could be accounted for by differences in test technique. The agreement found in this study among the three techniques was quite good and confirms the specificity and sensitivity of tests for anti-HA that are done with stool-derived HA-Ag. PMID:186409

  11. Comparison of 2015 Medicare relative value units for gender-specific procedures: Gynecologic and gynecologic-oncologic versus urologic CPT coding. Has time healed gender-worth?

    Science.gov (United States)

    Benoit, M F; Ma, J F; Upperman, B A

    2017-02-01

    In 1992, Congress implemented a relative value unit (RVU) payment system to set reimbursement for all procedures covered by Medicare. In 1997, data supported that a significant gender bias existed in reimbursement for gynecologic compared to urologic procedures. The present study was performed to compare work and total RVUs for gender-specific procedures effective January 2015 and to evaluate whether time has healed the gender-based RVU disparity. Using the 2015 CPT codes, we compared work and total RVUs for 50 pairs of gender-specific procedures. We also evaluated 2015 procedure-related provider compensation. The groups were matched so that the procedures were anatomically similar. We also compared the 2015 and 1997 RVU and fee schedules. Evaluation of work RVUs for the paired procedures revealed that in 36 cases (72%), the male procedure had a higher wRVU and tRVU than the paired female procedure. For total fee/reimbursement, 42 (84%) male-based procedures were compensated at a higher rate than the paired female procedures. On average, male-specific surgeries were reimbursed at an amount 27.67% higher than female-specific surgeries. Female procedure-based work RVUs have increased minimally from 1997 to 2015. Time and effort have trended towards resolution of some gender-related procedure-worth discrepancies, but there are still significant RVU and compensation differences that should be further reviewed and modified, as surgical time and effort highly correlate. Copyright © 2016. Published by Elsevier Inc.

  12. Quasi-Periodic Oscillations and Frequencies in AN Accretion Disk and Comparison with the Numerical Results from Non-Rotating Black Hole Computed by the Grh Code

    Science.gov (United States)

    Donmez, Orhan

    The shocked wave created on the accretion disk after different physical phenomena (accretion flows with pressure gradients, star-disk interaction, etc.) may be responsible for the observed Quasi-Periodic Oscillations (QPOs) in X-ray binaries. We present the set of characteristic frequencies associated with an accretion disk around rotating and non-rotating black holes for the one-particle case. These persistent frequencies result from the rotating pattern in an accretion disk. We compare the frequencies from two different numerical results for fluid flow around a non-rotating black hole with the one-particle case. The numerical results are taken from Refs. 1 and 2, using a fully general relativistic hydrodynamical code with a non-self-gravitating disk. While the first numerical result has a relativistic torus around the black hole, the second one includes a one-armed spiral shock wave produced by star-disk interaction. Some physical modes present in the QPOs can be excited in numerical simulations of relativistic tori and spiral waves on the accretion disk. The effects of these different dynamical structures on the accretion disk responsible for QPOs are discussed in detail.

  13. Quasi Periodic Oscillations (QPOs) and frequencies in an accretion disk and comparison with the numerical results from non-rotating black hole computed by the GRH code

    CERN Document Server

    Donmez, O

    2006-01-01

    The shocked wave created on the accretion disk after different physical phenomena (accretion flows with pressure gradients, star-disk interaction, etc.) may be responsible for the observed Quasi-Periodic Oscillations (QPOs) in X-ray binaries. We present the set of characteristic frequencies associated with an accretion disk around rotating and non-rotating black holes for the one-particle case. These persistent frequencies result from the rotating pattern in an accretion disk. We compare the frequencies from two different numerical results for fluid flow around a non-rotating black hole with the one-particle case. The numerical results are taken from our papers Refs.\\refcite{Donmez2} and \\refcite{Donmez3}, using a fully general relativistic hydrodynamical code with a non-self-gravitating disk. While the first numerical result has a relativistic torus around the black hole, the second one includes a one-armed spiral shock wave produced by star-disk interaction. Some physical modes presented in the QPOs can be excited in nume...

  14. Modelling of LOCA Tests with the BISON Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, Richard L [Idaho National Laboratory; Pastore, Giovanni [Idaho National Laboratory; Novascone, Stephen Rhead [Idaho National Laboratory; Spencer, Benjamin Whiting [Idaho National Laboratory; Hales, Jason Dean [Idaho National Laboratory

    2016-05-01

    BISON is a modern finite-element based, multidimensional nuclear fuel performance code that is under development at Idaho National Laboratory (USA). Recent advances of BISON include the extension of the code to the analysis of LWR fuel rod behaviour during loss-of-coolant accidents (LOCAs). In this work, BISON models for the phenomena relevant to LWR cladding behaviour during LOCAs are described, followed by presentation of code results for the simulation of LOCA tests. Analysed experiments include separate effects tests of cladding ballooning and burst, as well as the Halden IFA-650.2 fuel rod test. Two-dimensional modelling of the experiments is performed, and calculations are compared to available experimental data. Comparisons include cladding burst pressure and temperature in separate effects tests, as well as the evolution of fuel rod inner pressure during ballooning and time to cladding burst. Furthermore, BISON three-dimensional simulations of separate effects tests are performed, which demonstrate the capability to reproduce the effect of azimuthal temperature variations in the cladding. The work has been carried out in the frame of the collaboration between Idaho National Laboratory and Halden Reactor Project, and the IAEA Coordinated Research Project FUMAC.

  15. Linear codes associated to determinantal varieties

    DEFF Research Database (Denmark)

    Beelen, Peter; Ghorpade, Sudhir R.; Hasan, Sartaj Ul

    2015-01-01

    We consider a class of linear codes associated to projective algebraic varieties defined by the vanishing of minors of a fixed size of a generic matrix. It is seen that the resulting code has only a small number of distinct weights. The case of varieties defined by the vanishing of 2×2 minors...

  16. A Line Based Visualization of Code Evolution

    NARCIS (Netherlands)

    Voinea, S.L.; Telea, A.; Wijk, J.J. van

    2005-01-01

    The source code of software systems changes many times during the system lifecycle. We study how developers can get insight in these changes in order to understand the project context and the product artifacts. For this we propose new techniques for code evolution representation and visualization in

  17. Version-Centric Visualization of Code Evolution

    NARCIS (Netherlands)

    Voinea, S.L.; Telea, A.; Chaudron, M.

    2005-01-01

    The source code of software systems changes many times during the system lifecycle. We study how developers can get insight in these changes in order to understand the project context and the product artifacts. For this we propose new techniques for code evolution representation and visualization in

  18. Present state of global wetland extent and wetland methane modelling: conclusions from a model inter-comparison project (WETCHIMP

    Directory of Open Access Journals (Sweden)

    J. R. Melton

    2013-02-01

    Global wetlands are believed to be climate sensitive and are the largest natural emitters of methane (CH4). Increased wetland CH4 emissions could act as a positive feedback to future warming. The Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP) investigated our present ability to simulate large-scale wetland characteristics and corresponding CH4 emissions. To ensure inter-comparability, we used a common experimental protocol, driving all models with the same climate and carbon dioxide (CO2) forcing datasets. The WETCHIMP experiments were conducted for model equilibrium states as well as transient simulations covering the last century. Sensitivity experiments investigated model response to changes in selected forcing inputs (precipitation, temperature, and atmospheric CO2 concentration). Ten models participated, covering the spectrum from simple to relatively complex, including models tailored either for regional or global simulations. The models also varied in methods to calculate wetland size and location, with some models simulating wetland area prognostically, while other models relied on remotely sensed inundation datasets or an approach intermediate between the two.

    Four major conclusions emerged from the project. First, the suite of models demonstrates extensive disagreement in their simulations of wetland areal extent and CH4 emissions, in both space and time. Simple metrics of wetland area, such as the latitudinal gradient, show large variability, principally between models that use inundation dataset information and those that independently determine wetland area. Agreement between the models improves for zonally summed CH4 emissions, but large variation between the models remains. For annual global CH4 emissions, the models vary by ±40% of the all-model mean (190 Tg CH4 yr−1). Second, all

  19. In silico comparison of genomic regions containing genes coding for enzymes and transcription factors for the phenylpropanoid pathway in Phaseolus vulgaris L. and Glycine max L. Merr

    Directory of Open Access Journals (Sweden)

    Yarmilla Reinprecht

    2013-09-01

    Legumes contain a variety of phytochemicals derived from the phenylpropanoid pathway that have important effects on human health as well as seed coat color, plant disease resistance and nodulation. However, the information about the genes involved in this important pathway is fragmentary in common bean (Phaseolus vulgaris L.). The objectives of this research were to isolate genes that function in and control the phenylpropanoid pathway in common bean, determine their genomic locations in silico in common bean and soybean, and analyze sequences of the 4CL gene family in two common bean genotypes. Sequences of phenylpropanoid pathway genes available for common bean or other plant species were aligned, and the conserved regions were used to design sequence-specific primers. The PCR products were cloned and sequenced and the gene sequences along with common bean gene-based (g) markers were BLASTed against the Glycine max v.1.0 genome and the P. vulgaris v.1.0 (Andean) early release genome. In addition, gene sequences were BLASTed against the OAC Rex (Mesoamerican) genome sequence assembly. In total, fragments of 46 structural and regulatory phenylpropanoid pathway genes were characterized in this way and placed in silico on common bean and soybean sequence maps. The maps contain over 250 common bean g and SSR (simple sequence repeat) markers and identify the positions of more than 60 additional phenylpropanoid pathway gene sequences, plus the putative locations of seed coat color genes. The majority of cloned phenylpropanoid pathway gene sequences were mapped to one location in the common bean genome but had two positions in soybean. The comparison of the genomic maps confirmed previous studies, which show that common bean and soybean share genomic regions, including those containing phenylpropanoid pathway gene sequences, with conserved synteny.
Indels identified in the comparison of Andean and Mesoamerican common bean sequences might be used to develop

  20. The NIST eutectic project: construction of Co C, Pt C and Re C fixed-point cells and their comparison with the NMIJ

    Science.gov (United States)

    Sasajima, N.; Yoon, H. W.; Gibson, C. E.; Khromchenko, V.; Sakuma, F.; Yamada, Y.

    2006-04-01

    The National Institute of Standards and Technology (NIST) has initiated a project on novel high-temperature fixed points by use of metal (carbide)-carbon eutectics to lower uncertainties in thermodynamic temperature measurement. As the first stage of the NIST eutectic project, a comparison of Co-C, Pt-C and Re-C eutectic fixed-point cells was conducted between the NIST and the National Metrology Institute of Japan (NMIJ) at the NIST to verify the quality of the NIST eutectic cells, in addition to checking for possible furnace and radiation thermometer effects on the eutectic fixed-point realizations. In the comparison, two high-temperature furnaces, two radiation thermometers and one gold-point blackbody were used. A Nagano M furnace and a Linear Pyrometer 3 radiation thermometer were transferred from NMIJ and were used in conjunction with a Thermo Gauge furnace and an Absolute Pyrometer 1 radiation thermometer of NIST to check the dependence on the measurement equipment. The results showed that the Co-C cells agreed to within 73 mK. The melting temperature of the NIST Pt-C cell was approximately 270 mK lower than that of the NMIJ cell, with a comparison uncertainty of roughly 110 mK (k = 2), due to the poor purity of the Pt powder. Although the Re-C comparison showed instability of the comparison system, the cells agreed within 100 mK. Though further improvement is necessary for the Pt-C cell, such as the use of higher purity Pt, the filling and measuring technique has been established at the NIST.

  1. Model Children's Code.

    Science.gov (United States)

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  2. Comparison of the Results of the Whole Core Decay Power Using the ORIGEN Code and ANS-1979 for the Uljin Unit 6

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Eun Hyun; Jeong, Hae Sun; Kim, Dong Ha [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    When a detailed tracking of the nuclides is not required, i.e., only the whole core decay heat information is needed, the RN package is not activated and the DCH package is used alone, whereas both the RN and DCH packages are used when a fission product transport simulation and location information are needed. For the DCH-only mode, there are four options to calculate the whole core decay heat for the time after shutdown. The first is using a summation of the decay heat data from ORIGEN-based fission product inventories for representative BWRs and PWRs, which are scaled if necessary. The second is using the ANS-1979 standard for the decay heat power. The third is using a user-specified tabular function of the whole-core decay heat as a function of time. The fourth is using a user-specified control function to define the decay heat. In this research, for option 2, the ANS-1979 standard for the whole core decay heat calculation is compared with the result of an ORIGEN calculation for Uljin Unit 6, after arranging the ORIGEN result by mass, radioactivity, and decay heat for the elements and nuclides. Since the MELCOR code currently uses the ANS-1979 standard, the later ANS decay heat standards are not the main subject of this research. The goal of the examination is to determine whether the old standard needs to be replaced to enhance accuracy. ANS-1979 is an old decay heat standard; the more recent standards, ANSI/ANS-5.1-1994 and ANSI/ANS-5.1-2005, should therefore be investigated in longer-term research. This research has a certain drawback in that the whole core decay heat is obtained by a mere multiplication by the number of assemblies in the arrangement of the ORIGEN result.
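    For context on the kind of curve a decay heat standard tabulates, the classic closed-form approximation of decay power after shutdown (the older Way-Wigner/ANS-1971-style fit, shown here only for illustration and not the ANS-1979 tabulation itself) can be sketched as:

```python
def decay_power_fraction(t, t_op):
    """Classic closed-form approximation of decay power as a fraction of
    operating power: P/P0 = 0.066 * (t**-0.2 - (t + t_op)**-0.2),
    where t is the time after shutdown (s) and t_op the operating time (s).
    This is the older ANS-1971-style fit, not the ANS-1979 tabulation."""
    return 0.066 * (t ** -0.2 - (t + t_op) ** -0.2)

t_op = 2 * 365 * 24 * 3600  # two years at power, a hypothetical history
for t in (1.0, 60.0, 3600.0, 86400.0):
    print(f"t = {t:>8.0f} s  P/P0 = {decay_power_fraction(t, t_op):.4f}")
```

    The decay power fraction starts at roughly 6% of operating power just after shutdown and falls off steeply, which is why the choice of standard matters most in the first hours of a transient.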

  3. Validation of a commercial TPS based on the VMC(++) Monte Carlo code for electron beams: commissioning and dosimetric comparison with EGSnrc in homogeneous and heterogeneous phantoms.

    Science.gov (United States)

    Ferretti, A; Martignano, A; Simonato, F; Paiusco, M

    2014-02-01

    The aim of the present work was the validation of the VMC(++) Monte Carlo (MC) engine implemented in the Oncentra Masterplan (OMTPS) and used to calculate the dose distribution produced by the electron beams (energy 5-12 MeV) generated by the linear accelerator (linac) Primus (Siemens), shaped by a digital variable applicator (DEVA). The BEAMnrc/DOSXYZnrc (EGSnrc package) MC model of the linac head was used as a benchmark. Commissioning results for both MC codes were evaluated by means of 1D Gamma Analysis (2%, 2 mm), calculated with a home-made Matlab (The MathWorks) program, comparing the calculations with the measured profiles. The results of the commissioning of OMTPS were good [average gamma index (γ) > 97%]; some mismatches were found with large beams (size ≥ 15 cm). The optimization of the BEAMnrc model required increasing the beam exit window to match the calculated and measured profiles (final average γ > 98%). Then OMTPS dose distribution maps were compared with DOSXYZnrc with a 2D Gamma Analysis (3%, 3 mm), in 3 virtual water phantoms: (a) with an air step, (b) with an air insert, and (c) with a bone insert. The OMTPS and EGSnrc dose distributions with the air-water step phantom were in very high agreement (γ ∼ 99%), while for heterogeneous phantoms there were differences of about 9% in the air insert and of about 10-15% in the bone region. This is due to the Masterplan implementation of VMC(++), which reports the dose as "dose to water" instead of "dose to medium".
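    A 1D gamma analysis of the kind described above (dose criterion as a fraction of the reference maximum, distance criterion in mm) can be sketched as follows. This is a generic, minimal illustration with global normalization, not the home-made Matlab program used in the study:

```python
import math

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.02, dist_tol=2.0):
    """Per-point 1D gamma values (global normalization): for each reference
    point, take the minimum over evaluated points of
    sqrt((dx/dist_tol)**2 + (dd/(dose_tol*Dmax))**2).
    A point passes when its gamma value is <= 1."""
    d_max = max(d_ref)
    return [
        min(math.sqrt(((xr - xe) / dist_tol) ** 2
                      + ((de - dr) / (dose_tol * d_max)) ** 2)
            for xe, de in zip(x_eval, d_eval))
        for xr, dr in zip(x_ref, d_ref)
    ]

# Identical measured and calculated profiles pass trivially (gamma = 0):
x = [0.0, 1.0, 2.0, 3.0]       # positions in mm
dose = [100.0, 98.0, 90.0, 70.0]
gammas = gamma_1d(x, dose, x, dose)
pass_rate = sum(g <= 1.0 for g in gammas) / len(gammas)
print(f"pass rate: {100 * pass_rate:.0f}%")  # prints "pass rate: 100%"
```

    A reported "average γ > 97%" in this kind of commissioning corresponds to the fraction of profile points whose gamma value is at or below 1.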

  4. Projections of the nucleus of the basal optic root in pigeons (Columba livia): a comparison of the morphology and distribution of neurons with different efferent projections.

    Science.gov (United States)

    Wylie, Douglas R W; Pakan, Janelle M P; Elliott, Cameron A; Graham, David J; Iwaniuk, Andrew N

    2007-01-01

    The avian nucleus of the basal optic root (nBOR) is a visual structure involved in the optokinetic response. nBOR consists of several morphologically distinct cell types, and in the present study, we sought to determine if these different cell types had differential projections. Using retrograde tracers, we examined the morphology and distribution of nBOR neurons projecting to the vestibulocerebellum (VbC), inferior olive (IO), dorsal thalamus, the pretectal nucleus lentiformis mesencephali (LM), the contralateral nBOR, the oculomotor complex (OMC) and a group of structures along the midline of the mesencephalon. The retrogradely labeled neurons fell into two broad categories: large neurons, most of which were multipolar rather than fusiform, and small neurons, which were either fusiform or multipolar. From injections into the IO, LM, contralateral nBOR, and structures along the midline mesencephalon, small nBOR neurons were labeled. Although there were no differences with respect to the size of the labeled neurons from these injections, there were some differences with respect to the distribution of labeled neurons and the proportion of multipolar vs. fusiform neurons. From injections into the VbC, the large multipolar cells were labeled throughout nBOR. The only other cases in which these large neurons were labeled were contralateral OMC injections. To investigate if single neurons project to multiple targets we used paired injections of red and green fluorescent retrograde tracers into different targets. Double-labeled neurons were never observed, indicating that nBOR neurons do not project to multiple targets. We conclude that individual nBOR neurons have unique projections, which may have differential roles in processing optic flow and controlling the optokinetic response.

  5. In silico comparison of genomic regions containing genes coding for enzymes and transcription factors for the phenylpropanoid pathway in Phaseolus vulgaris L. and Glycine max L. Merr

    Science.gov (United States)

    Reinprecht, Yarmilla; Yadegari, Zeinab; Perry, Gregory E.; Siddiqua, Mahbuba; Wright, Lori C.; McClean, Phillip E.; Pauls, K. Peter

    2013-01-01

    Legumes contain a variety of phytochemicals derived from the phenylpropanoid pathway that have important effects on human health as well as seed coat color, plant disease resistance and nodulation. However, the information about the genes involved in this important pathway is fragmentary in common bean (Phaseolus vulgaris L.). The objectives of this research were to isolate genes that function in and control the phenylpropanoid pathway in common bean, determine their genomic locations in silico in common bean and soybean, and analyze sequences of the 4CL gene family in two common bean genotypes. Sequences of phenylpropanoid pathway genes available for common bean or other plant species were aligned, and the conserved regions were used to design sequence-specific primers. The PCR products were cloned and sequenced and the gene sequences along with common bean gene-based (g) markers were BLASTed against the Glycine max v.1.0 genome and the P. vulgaris v.1.0 (Andean) early release genome. In addition, gene sequences were BLASTed against the OAC Rex (Mesoamerican) genome sequence assembly. In total, fragments of 46 structural and regulatory phenylpropanoid pathway genes were characterized in this way and placed in silico on common bean and soybean sequence maps. The maps contain over 250 common bean g and SSR (simple sequence repeat) markers and identify the positions of more than 60 additional phenylpropanoid pathway gene sequences, plus the putative locations of seed coat color genes. The majority of cloned phenylpropanoid pathway gene sequences were mapped to one location in the common bean genome but had two positions in soybean. The comparison of the genomic maps confirmed previous studies, which show that common bean and soybean share genomic regions, including those containing phenylpropanoid pathway gene sequences, with conserved synteny. 
Indels identified in the comparison of Andean and Mesoamerican common bean 4CL gene sequences might be used to develop inter

  6. TGF β 3 immunoassay standardization: comparison of NIBSC reference preparation code 98/608 with avotermin lot 205-0505-005.

    Science.gov (United States)

    Little, John A; Murdy, Rebecca; Cossar, Natalie; Getliffe, Katherine M; Hanak, Julian; Ferguson, Mark W J

    2012-01-01

    Juvista™ drug product contains human recombinant active transforming growth factor beta 3 (TGFβ3; avotermin). Juvista is being developed for the prevention and reduction of human scarring. Phase II and III clinical and development batches of Juvista were assayed for content by an immunoenzymometric assay (IEMA) using a National Institute for Biological Standards and Control (NIBSC) TGFβ3 reference material (98/608) and avotermin standard (Lot 205-0505-005). Paired Juvista TGFβ3 data were compared directly, pooled, and processed using the statistical analysis described by Bland and Altman. A direct comparison of the two standards was also made. The Bland-Altman result was 1.958, the best estimate of the relationship between Lot 205-0505-005 and reference material 98/608. By IEMA, reference material 98/608 has approximately 50% of the immunoreactivity of Lot 205-0505-005. During clinical development, no change in Juvista TGFβ3 dosage was made, but the standard used for Juvista TGFβ3 assay was changed from 98/608 to 205-0505-005. The stated amount of Juvista TGFβ3 in phase III trials was approximately one-half of that in phase II trials. This article highlights the importance of early adoption of an appropriate and representative standard to achieve accurate quantification of protein drug during clinical development.

  7. SUPPLEMENTARY COMPARISON: Final report on EUROMET comparison EUROMET.PR-S2 (Project No. 156): Responsivity of detectors for radiant power of lasers

    Science.gov (United States)

    Kück, Stefan

    2010-01-01

    The EUROMET.PR-S2 intercomparison of Radiant Power of High Power Lasers for five measurands was carried out as a combined round-robin/star-type comparison. In total nine participants took part: five from Europe (the national metrology institutes of France, Germany (pilot), Great Britain, Romania and Sweden) and four from outside Europe (the national metrology institutes of Australia, Japan, South Africa and the United States of America). This comparison can therefore be considered a worldwide one. The measurements took place from January 2005 to September 2007. All participants supplied detailed reports of their measurements including full uncertainty statements. All measurement results reported by the participants were used for the intercomparison and no measurement was subject to rejection. The analysis method introduced in section 6 follows the Guidelines for CCPR Comparison Report Preparation and has been accepted by all participants. For the calculation of the supplementary key comparison reference value no participant had to be excluded, and the weighted mean with cut-off used here has been supported by all participants. The unilateral degrees of equivalence (DoE) calculated for each participant are approximately 63% consistent with their uncertainties at the k = 2 level and approximately 81% consistent within k = 3. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by EURAMET, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
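    The weighted mean with cut-off and the unilateral degrees of equivalence mentioned above can be sketched as follows. This is a simplified illustration: the cut-off choice below is hypothetical, and the consistency checks of the CCPR guidelines are not reproduced.

```python
import math

def kcrv_weighted_mean_cutoff(values, uncs, cutoff=None):
    """Sketch of a weighted mean with cut-off: standard uncertainties
    smaller than the cut-off are inflated to the cut-off before
    inverse-variance weighting, so no single laboratory dominates.
    Returns (reference value, its standard uncertainty, and per-lab
    degrees of equivalence as (d_i, expanded k=2 uncertainty of d_i),
    using the simplified correlated form U(d_i) = 2*sqrt(u_i^2 - u_ref^2))."""
    if cutoff is None:
        cutoff = sum(uncs) / len(uncs)  # hypothetical cut-off choice
    u_eff = [max(u, cutoff) for u in uncs]
    w = [1.0 / u ** 2 for u in u_eff]
    wsum = sum(w)
    x_ref = sum(wi * xi for wi, xi in zip(w, values)) / wsum
    u_ref = math.sqrt(1.0 / wsum)
    doe = [(x - x_ref, 2.0 * math.sqrt(max(u ** 2 - u_ref ** 2, 0.0)))
           for x, u in zip(values, u_eff)]
    return x_ref, u_ref, doe

x_ref, u_ref, doe = kcrv_weighted_mean_cutoff([1.00, 1.02, 0.99],
                                              [0.01, 0.02, 0.01])
print(f"KCRV = {x_ref:.4f} +/- {u_ref:.4f}")
```

    A participant's result is "consistent at the k = 2 level" when its degree-of-equivalence value d_i lies within the expanded uncertainty computed for it.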

  8. A comparison of cost-benefit analysis of biomass and natural gas CHP projects in Denmark and the Netherlands

    NARCIS (Netherlands)

    Groth, Tanja; Scholtens, Bert

    2016-01-01

    We investigate what drives differences in the project appraisal of biomass and natural gas combined heat and power (CHP) projects in two countries with very similar energy profiles. This is of importance as the European Commission is assessing the potential scope of harmonizing renewable electricity

  9. Hotspots of uncertainty in land-use and land-cover change projections : a global-scale model comparison

    NARCIS (Netherlands)

    Prestele, Reinhard; Alexander, Peter; Rounsevell, Mark D A; Arneth, Almut; Calvin, Katherine; Doelman, Jonathan; Eitelberg, David A.; Engström, Kerstin; Fujimori, Shinichiro; Hasegawa, Tomoko; Havlik, Petr; Humpenöder, Florian; Jain, Atul K.; Krisztin, Tamás; Kyle, Page; Meiyappan, Prasanth; Popp, Alexander; Sands, Ronald D.; Schaldach, Rüdiger; Schüngel, Jan; Stehfest, Elke; Tabeau, Andrzej; Van Meijl, Hans; Van Vliet, Jasper; Verburg, Peter H.

    2016-01-01

    Model-based global projections of future land-use and land-cover (LULC) change are frequently used in environmental assessments to study the impact of LULC change on environmental services and to provide decision support for policy. These projections are characterized by a high uncertainty in terms

  10. Hotspots of uncertainty in land-use and land-cover change projections: a global-scale model comparison

    NARCIS (Netherlands)

    Prestele, R.; Alexander, P.; Rounsevell, M.; Arneth, A.; Calvin, K.; Doelman, J.; Eitelberg, D.A.; Engström, K.; Fujimori, S.; Hasegawa, T.; Havlik, P.; Humpenöder, F.; Jain, A. K.; Krisztin, T.; Kyle, P.; Meiyappan, P.; Popp, A.; Sands, R.D.; Schaldach, R.; Schüngel, J.; Stehfest, E.; Tabeau, A.; Meijl, van H.; Vliet, van J.; Verburg, P.H.

    2016-01-01

    Model-based global projections of future land-use and land-cover (LULC) change are frequently used in environmental assessments to study the impact of LULC change on environmental services and to provide decision support for policy.These projections are characterized by a high uncertainty in terms o

  11. Assessing IT Projects Success with Extended Fuzzy Cognitive Maps & Neutrosophic Cognitive Maps in comparison to Fuzzy Cognitive Maps

    Directory of Open Access Journals (Sweden)

    Kanika Bhutani

    2016-08-01

    IT projects hold huge importance for economic growth. Today, half of all capital investments are in IT technology. IT systems and projects are extensive and time consuming, which implies that their failure is not affordable, so a proper feasibility study assessing project success factors is required. A current methodology, Fuzzy Cognitive Maps (FCM), has been used to identify and evaluate the success factors in IT projects, but this technique has certain limitations. This paper discusses two new approaches to evaluate IT project success: Extended Fuzzy Cognitive Maps (E-FCM) and Neutrosophic Cognitive Maps (NCM). The limitations of FCM, such as its non-consideration of non-linear, conditional and time-delay weights and of indeterminate relations, are targeted using E-FCM and NCM in this paper.
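    A minimal iteration of the baseline Fuzzy Cognitive Map technique that E-FCM and NCM extend can be sketched as follows. The three concepts and the weight matrix are hypothetical, purely for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fcm_simulate(state, W, steps=20):
    """Minimal Fuzzy Cognitive Map iteration sketch: concept activations
    are repeatedly updated as A_i <- f(A_i + sum_j A_j * W[j][i]),
    a common FCM update rule with a sigmoid threshold function f.
    Static, linear weights only, which is exactly the limitation
    the E-FCM and NCM extensions address."""
    n = len(state)
    for _ in range(steps):
        state = [sigmoid(state[i] + sum(state[j] * W[j][i] for j in range(n)))
                 for i in range(n)]
    return state

# Hypothetical 3-concept map:
# management support -> team skill -> project success
W = [[0.0, 0.6, 0.4],
     [0.0, 0.0, 0.7],
     [0.0, 0.0, 0.0]]
final = fcm_simulate([0.8, 0.5, 0.5], W)
print([round(a, 2) for a in final])
```

    The iteration converges to a fixed point of activations in (0, 1), which analysts then read as the relative strength of each success factor.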

  12. Projections of the nucleus lentiformis mesencephali in pigeons (Columba livia): a comparison of the morphology and distribution of neurons with different efferent projections.

    Science.gov (United States)

    Pakan, Janelle M P; Krueger, Kimberly; Kelcher, Erin; Cooper, Sarah; Todd, Kathryn G; Wylie, Douglas R W

    2006-03-01

    The avian nucleus lentiformis mesencephali (LM) is a visual structure involved in the optokinetic response. The LM consists of several morphologically distinct cell types. In the present study we sought to determine if different cell types had differential projections. Using retrograde tracers, we examined the morphology and distribution of LM neurons projecting to the vestibulocerebellum (VbC), inferior olive (IO), dorsal thalamus, nucleus of the basal optic root (nBOR), and midline mesencephalon. From injections into the latter two structures, small LM cells were labeled. More were localized to the lateral LM as opposed to medial LM. From injections into the dorsal thalamus, small neurons were found throughout LM. From injections into the VbC, large multipolar cells were found throughout LM. From injections into IO, a strip of medium-sized fusiform neurons along the border of the medial and lateral subnuclei was labeled. To investigate if neurons project to multiple targets we used fluorescent retrograde tracers. After injections into IO and VbC, double-labeled neurons were not observed in LM. Likewise, after injections into nBOR and IO, double-labeled neurons were not observed. Finally, we processed sections through LM for glutamic acid decarboxylase (GAD). Small neurons, mostly in the lateral LM, were labeled, suggesting that projections from LM to nBOR and midline mesencephalon are GABAergic. We conclude that two efferents of LM, VbC and IO, receive input from morphologically distinct neurons: large multipolar and medium-sized fusiform neurons, respectively. The dorsal thalamus, nBOR, and midline mesencephalon receive input from small neurons, some of which are likely GABAergic.

  13. Rateless feedback codes

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes, by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback to LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.
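    The LT encoding step that such redesigned degree distributions feed into can be sketched as follows. The distribution below is a hypothetical stand-in, not the feedback-optimized distributions of the paper:

```python
import random

def lt_encode_symbol(source, degree_dist, rng):
    """Produce one LT-encoded symbol: draw a degree d from the degree
    distribution, pick d distinct source symbols uniformly at random,
    and XOR them together. Returns (chosen indices, encoded value)."""
    degrees, probs = zip(*degree_dist)
    d = rng.choices(degrees, weights=probs)[0]
    idx = rng.sample(range(len(source)), d)
    value = 0
    for i in idx:
        value ^= source[i]
    return idx, value

rng = random.Random(1)
source = [0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88]
dist = [(1, 0.1), (2, 0.5), (3, 0.3), (4, 0.1)]  # hypothetical distribution
idx, val = lt_encode_symbol(source, dist, rng)
print(idx, hex(val))
```

    The design lever the paper works on is exactly `degree_dist`: with feedback, the encoder can switch to a distribution better suited to the symbols the decoder still lacks.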

  14. Coding for dummies

    CERN Document Server

    Abraham, Nikhil

    2015-01-01

    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skill

  15. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  16. KEY COMPARISON: Measurement of activity concentration of radionuclide Cs-137 in a solution (COOMET Project no 386/RU/06)

    Science.gov (United States)

    Kharitonov, I. A.; Zanevsky, A. V.; Milevski, V.; Ivaniukovich, A.; Oropesa Verdecia, P.; Moreno León, Y.; Svec, A.

    2008-01-01

    A COOMET.RI(II)-K2.Cs-137 comparison of the measurement of a standardized solution of Cs-137 has enabled three national metrology institutes in the COOMET to demonstrate their traceability to the SI. The results of the comparison will be used to evaluate degrees of equivalence for these institutes through the measurements of the linking laboratory in the key comparison BIPM.RI(II)-K1.Cs-137. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCRI Section II, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).

  17. Comparison of Flattening Filter (FF) and Flattening-Filter-Free (FFF) 6 MV photon beam characteristics for small field dosimetry using EGSnrc Monte Carlo code

    Science.gov (United States)

    Sangeetha, S.; Sureka, C. S.

    2017-06-01

    The present study compares the characteristics of Varian Clinac 600 C/D flattened and unflattened 6 MV photon beams for small field dosimetry using EGSnrc Monte Carlo simulation, since small field dosimetry is considered one of the most crucial and demanding tasks in radiation dosimetry. A 6 MV photon beam of a Varian Clinac 600 C/D medical linear accelerator operating in Flattening Filter (FF) and Flattening-Filter-Free (FFF) modes was simulated using the EGSnrc Monte Carlo user codes (BEAMnrc and DOSXYZnrc), and the beam characteristics were calculated using an educated trial-and-error method. These include: percentage depth dose, lateral beam profile, dose rate delivery, photon energy spectrum, photon beam uniformity, out-of-field dose, surface dose, penumbral dose and output factor for small fields (0.5×0.5 cm2 to 4×4 cm2), compared with magna-field sizes (5×5 cm2 to 40×40 cm2) at various depths. The results showed that the optimized beam energy and full-width-half-maximum value for both small field and magna-field dosimetry were 5.7 MeV and 0.13 cm for both FF and FFF beams. The depth of dose maximum for small field sizes deviates minimally for both FF and FFF beams, as for magna-fields. At depths greater than dmax, FFF beams show a steeper dose fall-off in the exponential region than FF beams, and this deviation increases with increasing field size. The shape of the lateral beam profiles of FF and FFF beams remains similar for field sizes smaller than 4×4 cm2, whereas it differs for magna-fields. Dose rate delivery for FFF beams shows a marked increase, by a factor of about two, for both small and magna-field sizes. The surface dose of FFF beams was higher than that of FF beams for small field sizes but lower for magna-fields. The amount of out-of-field dose reduction gets
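
    One of the listed quantities, percentage depth dose, is simply each central-axis dose value expressed relative to the dose at the depth of maximum dose (dmax). A minimal sketch of that normalization; the depth-dose values below are purely illustrative, not EGSnrc output:

```python
def percentage_depth_dose(depth_dose):
    """Normalize a depth -> dose mapping to percent of the maximum dose."""
    d_max = max(depth_dose.values())
    return {depth: 100.0 * dose / d_max for depth, dose in depth_dose.items()}

# Hypothetical central-axis doses (arbitrary units) at depths in cm
doses = {0.0: 0.55, 1.5: 1.00, 5.0: 0.87, 10.0: 0.67, 20.0: 0.39}
pdd = percentage_depth_dose(doses)
print(pdd[1.5])   # 100.0 at dmax
```

    The steeper fall-off of FFF beams beyond dmax would show up here as PDD values decreasing faster with depth than for the FF beam.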

  18. Report on task assignment No. 3 for the Waste Package Project; Parts A & B, ASME pressure vessel codes review for waste package application; Part C, Library search for reliability/failure rates data on low temperature low pressure piping, containers, and casks with long design lives

    Energy Technology Data Exchange (ETDEWEB)

    Trabia, M.B.; Kiley, M.; Cardle, J.; Joseph, M.

    1991-07-01

    The Waste Package Project Research Team at UNLV has four general required tasks. Task one is the management, quality assurance, and overview of the research performed under the cooperative agreement. Task two is the structural analysis of spent fuel and high-level waste. Task three is an American Society of Mechanical Engineers (ASME) Pressure Vessel Code review for waste package application. Finally, task four is waste package labeling. This report includes preliminary information about task three (ASME Pressure Vessel Code review for waste package application). The first objective is to compile a list of the ASME Pressure Vessel Code sections that can be applied to waste package container design and manufacturing processes. The second objective is to explore the use of these applicable codes in the preliminary waste package container designs. The final objective is to perform a library search for reliability and/or failure-rate data on low-pressure, low-temperature containers and casks with long design lives.

  19. HADES, A Radiographic Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Aufderheide, M.B.; Slone, D.M.; Schach von Wittenau, A.E.

    2000-08-18

    We describe features of the HADES radiographic simulation code. We begin with a discussion of why it is useful to simulate transmission radiography. The capabilities of HADES are described, followed by an application of HADES to a dynamic experiment recently performed at the Los Alamos Neutron Science Center. We describe quantitative comparisons between experimental data and HADES simulations using a copper step wedge. We conclude with a short discussion of future work planned for HADES.
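
    The copper step wedge mentioned above is a standard quantitative check for transmission radiography: each step of known thickness attenuates the beam according to the Beer-Lambert law, I/I0 = exp(-mu * t). A minimal single-energy sketch; the attenuation coefficient below is an assumed round number, whereas a code like HADES integrates over the source spectrum and detector response:

```python
import math

MU_CU = 0.47  # assumed linear attenuation coefficient of Cu, 1/mm (illustrative)

def transmission(thickness_mm, mu=MU_CU):
    """Fraction of beam intensity surviving a slab of given thickness."""
    return math.exp(-mu * thickness_mm)

# Step wedge: each step doubles the copper thickness
steps_mm = [0.0, 1.0, 2.0, 4.0, 8.0]
profile = [transmission(t) for t in steps_mm]
print(profile[0])  # 1.0 (no attenuator)
```

    Comparing such a predicted transmission profile against measured film or detector values across the wedge steps is what a quantitative code/experiment comparison amounts to at its simplest.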

  20. LA INICIATIVA PROBATORIA DEL JUEZ Y LA IGUALDAD DE ARMAS EN EL PROYECTO DE CÓDIGO PROCESAL CIVIL The judge's evidentiary initiative and the equality of arms in the Civil Procedural Code Project

    Directory of Open Access Journals (Sweden)

    Iván Hunter Ampuero

    2011-01-01

    This paper has two purposes. The first is to explain the relation between procedural equality and the judge's evidentiary activity, that is, to determine whether ex officio judicial initiative in matters of evidence can be directed at achieving procedural balance when the parties are in situations of substantive inequality. The second is to determine the role the judge must assume in light of the equality requirements that permeate the process, whose enforcement the Civil Procedural Code Project entrusts to the judge.

  1. An updated probabilistic seismic hazard assessment for Romania and comparison with the approach and outcomes of the SHARE Project

    OpenAIRE

    Pavel, Florin; Vacareanu, Radu; Douglas, John; Radulian, Micrea; Cioflan, Carmen; Barbat Barbat, Horia Alejandro

    2016-01-01

    The probabilistic seismic hazard analysis for Romania is revisited within the framework of the BIGSEES national research project (http://infp.infp.ro/bigsees/default.htm) financed by the Romanian Ministry of Education and Scientific Research in the period 2012-2016. The scope of this project is to provide a refined description of the seismic action for Romanian sites according to the requirements of Eurocode 8. To this aim, the seismicity of all the sources influencing the Romanian territory ...

  2. Climate model performance and change projection for freshwater fluxes: Comparison for irrigated areas in Central and South Asia

    OpenAIRE

    Shilpa M. Asokan; Peter Rogberg; Arvid Bring; Jerker Jarsjö; Georgia Destouni

    2016-01-01

    Study region: The large semi-arid Aral Region in Central Asia and the smaller tropical Mahanadi River Basin (MRB) in India. Study focus: Few studies have so far evaluated the performance of the latest generation of global climate models on hydrological basin scales. We here investigate the performance and projections of the global climate models in the Coupled Model Intercomparison Project, Phase 5 (CMIP5) for freshwater fluxes and their changes in two regional hydrological basins, which a...

  3. Comparison of Volume Rendering CT cholangiography and Minimum intensity projection CT cholangiography in patients with obstructive biliary disease

    OpenAIRE

    牛見, 尚史; 佃, 俊二; 平敷, 淳子

    2002-01-01

     We compared the detectability and conspicuity of minimum intensity projection CT cholangiography (Min-IP CTC) with volume rendering CT cholangiography (VRCTC). The subjects were ten patients (6 men, 4 women, mean age 64.7) clinically suspected of having obstructive biliary tract disease. They underwent enhanced helical CT. Delayed-phase volume data, reconstructed at 2 or 1 mm slice thickness, were transferred to a workstation (Advantage Windows) and processed by minimum intensity projection (Min...
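
    Minimum intensity projection itself is a simple operation: for every pixel position, take the minimum voxel value along the projection axis, so low-attenuation structures such as bile ducts stand out. A toy sketch on a list of 2D slices (a clinical implementation would run on the reconstructed CT volume):

```python
def min_ip(volume):
    """Project a list of equally sized 2D slices along the slice axis,
    keeping the minimum value at each (row, col) position."""
    rows, cols = len(volume[0]), len(volume[0][0])
    return [[min(sl[r][c] for sl in volume) for c in range(cols)]
            for r in range(rows)]

# Two tiny 2x2 slices with illustrative attenuation values
slice_a = [[100, 80], [60, 90]]
slice_b = [[ 70, 85], [65, 40]]
print(min_ip([slice_a, slice_b]))  # [[70, 80], [60, 40]]
```

    Volume rendering, by contrast, composites all voxels along each ray with opacity weighting rather than keeping a single extreme value, which is why the two techniques differ in conspicuity.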

  4. Deploying TSP on a National Scale: An Experience Report from Pilot Projects in Mexico

    Science.gov (United States)

    2009-03-01

    introduced. Structured personal code and design reviews, based on individual defect data, are conducted. PSP2.1 6, 7, 8 Design templates and design...Second, the team was unsure how to structure the work for tracking and reporting. The needs of the team had to be balanced with the project...Direct comparisons should be considered with caution because few non-TSP projects gather data as precise or use the same standardized operational

  5. Long Burst Error Correcting Codes Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Long burst error mitigation is an enabling technology for the use of Ka band for high rate commercial and government users. Multiple NASA, government, and commercial...

  6. Establishment of the code for prediction of waste volume in NPP decommissioning

    Energy Technology Data Exchange (ETDEWEB)

    Cho, W. H.; Park, S. K.; Choi, Y. D.; Kim, I. S.; Moon, J. K. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    In practice, decommissioning waste volume can be estimated appropriately by finding the differences between prediction and actual operation and considering operational problems or supplementary matters. In nuclear developed countries such as the U.S. or Japan, decommissioning waste volume is predicted on the basis of experience from their own decommissioning projects. Because of the contamination caused by radioactive material, decontamination activity and management of radioactive waste must be considered in decommissioning a nuclear facility, unlike a conventional plant or facility. As decommissioning activities are performed repeatedly, data for similar activities accumulate, and an optimal strategy can be achieved by comparison with the predicted strategy. Therefore, a variety of decommissioning experience is most important. In Korea, there are no data on the decommissioning of commercial nuclear power plants yet. However, KAERI has accumulated baseline decommissioning data for nuclear facilities through the decommissioning of a research reactor (KRR-2) and a uranium conversion plant (UCP), and DECOMMIS (DECOMMissioning Information Management System) was developed to provide and manage the whole data of decommissioning projects. Two codes, the FAC code and the WBS code, were established in this process. The FAC code classifies the decommissioning targets of a nuclear facility, while the WBS code classifies each decommissioning activity. Two codes were created because the codes used in DEFACS (Decommissioning Facility Characterization management System) and DEWOCS (Decommissioning Work-unit productivity Calculation System) differ from each other and serve different purposes: DEFACS, which manages the facility, needs a code that categorizes facility characteristics, whereas DEWOCS, which calculates unit productivity, needs a code that categorizes decommissioning waste volume.
KAERI has accumulated decommissioning data of KRR
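
    The two-axis classification described above can be sketched as waste records tagged with both a facility code (FAC, what was dismantled) and a work-breakdown code (WBS, which activity produced the waste), so the same data can be aggregated either way. The code values and records below are hypothetical; the real FAC/WBS catalogues belong to DEFACS and DEWOCS:

```python
from collections import defaultdict

# Hypothetical waste records, each tagged on both classification axes
records = [
    {"fac": "REACTOR-TANK", "wbs": "CUTTING",         "volume_m3": 1.2},
    {"fac": "REACTOR-TANK", "wbs": "DECONTAMINATION", "volume_m3": 0.4},
    {"fac": "PIPING",       "wbs": "CUTTING",         "volume_m3": 0.7},
]

def volume_by(axis, records):
    """Aggregate waste volume along one classification axis."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec[axis]] += rec["volume_m3"]
    return dict(totals)

by_facility = volume_by("fac", records)  # facility-oriented (DEFACS-style) view
by_activity = volume_by("wbs", records)  # activity-oriented (DEWOCS-style) view
```

    Keeping both codes on every record is what lets a facility-characterization view and a work-unit-productivity view be computed from the same underlying data.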

  7. Locally Orderless Registration Code

    DEFF Research Database (Denmark)

    2012-01-01

    This is the code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks to be installed and is provided for 64-bit Mac, Linux, and Windows.

  8. Locally orderless registration code

    DEFF Research Database (Denmark)

    2012-01-01

    This is the code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks to be installed and is provided for 64-bit Mac, Linux, and Windows.

  9. QR Codes 101

    Science.gov (United States)

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  10. Constructing quantum codes

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Quantum error correcting codes are indispensable for quantum information processing and quantum computation. In 1995 and 1996, Shor and Steane gave the first several examples of quantum codes constructed from classical error correcting codes. The construction of efficient quantum codes is now an active multi-disciplinary research field. In this paper we review several known constructions of quantum codes and present some examples.
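
    The classical-to-quantum constructions of Shor and Steane are instances of the CSS (Calderbank-Shor-Steane) construction: a classical code whose dual is contained in itself yields a quantum code. The [7,4] Hamming code satisfies this condition, giving Steane's 7-qubit code, with the Hamming parity-check matrix serving as both the X- and Z-stabilizer check matrix. A minimal sketch verifying the dual-containing condition H·Hᵀ = 0 (mod 2):

```python
H = [  # parity-check matrix of the [7,4] Hamming code
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def gf2_self_orthogonal(h):
    """True if every pair of rows has even overlap, i.e. H * H^T = 0 mod 2,
    which is exactly the condition for the CSS stabilizers to commute."""
    return all(
        sum(a * b for a, b in zip(r1, r2)) % 2 == 0
        for r1 in h for r2 in h
    )

print(gf2_self_orthogonal(H))  # True -> Steane-code stabilizers commute
```

    Commuting X- and Z-type stabilizers are what make the resulting set of operators a valid stabilizer group for a quantum code.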

  11. High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering

    Directory of Open Access Journals (Sweden)

    Nelson Eduardo Diaz

    2015-12-01

    The coded aperture snapshot spectral imaging system (CASSI) is an imaging architecture which senses the three-dimensional information of a scene with two-dimensional (2D) focal plane array (FPA) coded projection measurements. A reconstruction algorithm takes advantage of the sparsity of the compressive measurements to recover the underlying 3D data cube. Traditionally, CASSI uses block-unblock coded apertures (BCA) to spatially modulate the light. In CASSI, the quality of the reconstructed images depends on the design of these coded apertures and the FPA dynamic range. This work presents a new CASSI architecture based on grayscale coded apertures (GCA) which reduce FPA saturation and increase the dynamic range of the reconstructed images. The set of GCA is calculated in a real-time adaptive manner, exploiting the information from the FPA compressive measurements. Extensive simulations show the improvement attained in the quality of the reconstructed images when GCA are employed. In addition, traditional coded apertures and GCA are compared with respect to noise tolerance.
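
    A rough sketch of the CASSI forward model helps make the architecture concrete: each spectral band of the data cube is modulated by the aperture's transmittance pattern, shifted horizontally by the disperser, and integrated on the FPA. All dimensions and values below are illustrative, and the grayscale code is modeled simply as transmittance in [0, 1] rather than binary block/unblock, which is what lets a GCA attenuate bright regions and reduce saturation:

```python
import random

random.seed(0)
ROWS, COLS, BANDS = 4, 4, 3
# Hypothetical spectral data cube: cube[r][c][k] = intensity of band k
cube = [[[random.random() for _ in range(BANDS)]
         for _ in range(COLS)] for _ in range(ROWS)]
# Grayscale coded aperture: per-pixel transmittance in [0, 1]
code = [[random.random() for _ in range(COLS)] for _ in range(ROWS)]

def cassi_measurement(cube, code, bands=BANDS):
    """Single-shot FPA measurement: modulate, disperse (shift by band), sum."""
    rows, cols = len(cube), len(cube[0])
    fpa = [[0.0] * (cols + bands - 1) for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for k in range(bands):
                fpa[r][c + k] += code[r][c] * cube[r][c][k]
    return fpa

y = cassi_measurement(cube, code)  # 2D compressive measurement of a 3D cube
```

    Reconstruction then inverts this many-to-few mapping by exploiting sparsity of the cube in some basis.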

  12. Comparison between project-based learning and discovery learning toward students' metacognitive strategies on global warming concept

    Science.gov (United States)

    Tumewu, Widya Anjelia; Wulan, Ana Ratna; Sanjaya, Yayan

    2017-05-01

    The purpose of this study was to compare the effectiveness of project-based learning (PjBL) and discovery learning (DL) on students' metacognitive strategies on the global warming concept. A quasi-experimental design with a matching-only pretest-posttest control group design was used in this study. The subjects were students of two 7th grade classes of a junior high school in Bandung City, West Java, in the 2015/2016 academic year. The study was conducted on two experimental classes: a project-based learning treatment in experimental class I and a discovery learning treatment in experimental class II. The data were collected through a questionnaire on students' metacognitive strategies. The statistical analysis showed statistically significant differences in students' metacognitive strategies between project-based learning and discovery learning.

  13. Modeling of an Electron Injector for the AWAKE Project

    CERN Document Server

    Mete, O; Apsimon, R; Burt, G; Doebert, S; Fiorito, R; Welsch, C

    2015-01-01

    Particle-in-cell simulations were performed using PARMELA to characterise an electron injector with a booster linac for the AWAKE project, in order to provide the baseline specifications required by the plasma wakefield experiments. Tolerances and errors were investigated. A 3 GHz travelling-wave structure was designed using the CST code. Particles were tracked using the field maps acquired from these electromagnetic simulations. The results are presented in comparison with the generic accelerating structure model within PARMELA.

  14. Comparative study among simulations of an internal monitoring system using different Monte Carlo codes; Estudo comparativo entre simulacoes de um sistema de monitoracao ocupacional interna utilizando diferentes codigos de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Fonseca, T.C.F.; Bastos, F.M.; Figueiredo, M.T.T.; Souza, L.S.; Guimaraes, M.C.; Silva, C.R.E.; Mello, O.A.; Castelo e Silva, L.A.; Paixao, L., E-mail: tcff01@gmail.com [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil); Benavente, J.A.; Paiva, F.G. [Universidade Federal de Minas Gerais (PCTN/UFMG), Belo Horizonte, MG (Brazil). Curso de Pos-Graduacao em Ciencias e Tecnicas Nucleares

    2015-07-01

    Computational Monte Carlo (MC) codes have been used for the simulation of nuclear installations, mainly for internal monitoring of workers with Whole Body Counters (WBC). The main goal of this project was the modeling and simulation of the counting efficiency (CE) of a WBC system using three different MC codes: MCNPX, EGSnrc and VMC in-vivo. The simulations were performed by three different groups of analysts. The results showed differences between the three codes, as well as between results obtained with the same code as modeled by different analysts. Moreover, all the results were compared to experimental results obtained in the laboratory for validation and final comparison. In conclusion, it was possible to detect the influence on the results when the system is modeled by different analysts using the same MC code, and to determine which MC code's results best matched the experimental data. (author)
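
    For context, the counting efficiency being simulated is commonly defined as the net counts registered in the detector divided by the number of photons emitted by the source: CE = net counts / (activity × emission probability × live time). A minimal sketch with purely hypothetical numbers (not values from this study):

```python
def counting_efficiency(net_counts, activity_bq, gamma_yield, live_time_s):
    """CE = detected counts per photon emitted during the measurement."""
    return net_counts / (activity_bq * gamma_yield * live_time_s)

# Hypothetical check source: 5 kBq, 85% gamma emission probability, 600 s count
ce = counting_efficiency(net_counts=51000, activity_bq=5000,
                         gamma_yield=0.85, live_time_s=600)
print(f"{ce:.4f}")  # 0.0200
```

    An MC code arrives at the same quantity by tallying simulated detector hits per simulated source emission, which is why simulated and laboratory CE values can be compared directly.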

  15. Comparison of methods for estimates of molecular genetic diversity in genus Croton: influence of coefficients, clustering strategies and data projection.

    Science.gov (United States)

    Scaldaferri, M M; Freitas, J S; Vieira, J G P; Gonçalves, Z S; Souza, A M; Cerqueira-Silva, C B M

    2014-07-25

    We investigated 10 similarity (and dissimilarity) coefficients in a set of 40 wild genotypes of Croton linearifolius subjected to analyses using hierarchical grouping methods, grouping methods by optimization, and data projection in two-dimensional space. Genotypes were characterized by analyzing DNA polymorphism with 15 ISSR and 12 RAPD markers. The distance measures were compared by the Spearman correlation test, projection in two-dimensional space and grouping efficiency evaluation. The Spearman correlation coefficients between the 10 coefficients evaluated were significant (P Croton.
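
    ISSR and RAPD markers are scored as binary presence/absence vectors, and the coefficients compared in such studies differ mainly in how they weight shared presences and mismatches. Two of the most common, sketched in pure Python (the marker profiles are hypothetical):

```python
def _counts(x, y):
    """Shared presences (a) and mismatches (b) between two binary profiles."""
    a = sum(1 for i, j in zip(x, y) if i == 1 and j == 1)
    b = sum(1 for i, j in zip(x, y) if i != j)
    return a, b

def jaccard(x, y):
    a, b = _counts(x, y)
    return a / (a + b)       # ignores shared absences

def dice(x, y):
    a, b = _counts(x, y)
    return 2 * a / (2 * a + b)  # double-weights shared presences

g1 = [1, 1, 0, 1, 0, 0, 1, 0]  # hypothetical marker profiles
g2 = [1, 0, 0, 1, 0, 1, 1, 0]
print(jaccard(g1, g2), dice(g1, g2))  # 0.6 0.75
```

    Because Dice is a monotone transform of Jaccard, the two rank genotype pairs identically, which is the kind of agreement a Spearman correlation between coefficients detects.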

  16. What to do with a Dead Research Code

    Science.gov (United States)

    Nemiroff, Robert J.

    2016-01-01

    The project has ended -- should all of the computer codes that enabled the project be deleted? No. Like research papers, research codes typically carry valuable information past project end dates. Several possible end states to the life of research codes are reviewed. Historically, codes are typically left dormant on an increasingly obscure local disk directory until forgotten. These codes will likely become any or all of: lost, impossible to compile and run, difficult to decipher, and deleted when the code's proprietor moves on or dies. It is argued here, though, that it would be better for both code authors and astronomy generally if project codes were archived after use in some way. Archiving is advantageous for code authors because archived codes might increase the author's ADS citable publications, while astronomy as a science gains transparency and reproducibility. Paper-specific codes should be included in the publication of the journal papers they support, just like figures and tables. General codes that support multiple papers, possibly written by multiple authors, including their supporting websites, should be registered with a code registry such as the Astrophysics Source Code Library (ASCL). Codes developed on GitHub can be archived with a third party service such as, currently, BackHub. An important code version might be uploaded to a web archiving service like, currently, Zenodo or Figshare, so that this version receives a Digital Object Identifier (DOI), enabling it to be found at a stable address into the future. Similar archiving services that are not DOI-dependent include perma.cc and the Internet Archive Wayback Machine at archive.org. Perhaps most simply, copies of important codes with lasting value might be kept on a cloud service like, for example, Google Drive, while activating Google's Inactive Account Manager.

  17. Turbo Codes Extended with Outer BCH Code

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl

    1996-01-01

    The "error floor" observed in several simulations with the turbo codes is verified by calculation of an upper bound to the bit error rate for the ensemble of all interleavers. Also an easy way to calculate the weight enumerator used in this bound is presented. An extended coding scheme is proposed...
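
    The bound referred to above is the standard union bound on bit error rate built from a code's weight enumerator: each term contributes (information weight / block length) × multiplicity × Q(√(2·R·d·Eb/N0)). A minimal sketch; the enumerator entries below are hypothetical, chosen only to illustrate how a low-weight term dominates at high Eb/N0 and produces the flattening seen as an "error floor":

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound_ber(enumerator, rate, n, ebno_db):
    """Union bound on BER over AWGN/BPSK from (distance, info weight,
    multiplicity) triples of the code's weight enumerator."""
    ebno = 10 ** (ebno_db / 10)
    return sum(w_info / n * a_w * q_func(math.sqrt(2 * rate * d * ebno))
               for d, w_info, a_w in enumerator)

# Hypothetical low-weight terms of a rate-1/3, n=1024 turbo-like code
enum = [(6, 2, 3), (10, 4, 40), (14, 6, 500)]
ber_low = union_bound_ber(enum, rate=1/3, n=1024, ebno_db=1.0)
ber_high = union_bound_ber(enum, rate=1/3, n=1024, ebno_db=4.0)
```

    At high Eb/N0 the d=6 term dominates the sum, and an outer BCH code is attractive precisely because it can correct the few residual errors that this low-weight floor leaves behind.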

  18. Heritability of fractional anisotropy in human white matter : A comparison of Human Connectome Project and ENIGMA-DTI data

    NARCIS (Netherlands)

    Kochunov, Peter; Jahanshad, Neda; Marcus, Daniel; Winkler, Anderson; Sprooten, Emma; Nichols, Thomas E.; Wright, Susan N.; Hong, L. Elliot; Patel, Binish; Behrens, Timothy; Jbabdi, Saad; Andersson, Jesper; Lenglet, Christophe; Yacoub, Essa; Moeller, Steen; Auerbach, Eddie; Ugurbil, Kamil; Sotiropoulos, Stamatios N.; Brouwer, Rachel M.; Landman, Bennett; Lemaitre, Hervé; den Braber, Anouk; Zwiers, Marcel P.; Ritchie, Stuart; van Hulzen, Kimm; Almasy, Laura; Curran, Joanne; deZubicaray, Greig I.; Duggirala, Ravi; Fox, Peter; Martin, Nicholas G.; McMahon, Katie L.; Mitchell, Braxton; Olvera, Rene L.; Peterson, Charles; Starr, John; Sussmann, Jessika; Wardlaw, Joanna; Wright, Margie; Boomsma, Dorret I.; Kahn, Rene; de Geus, Eco J C; Williamson, Douglas E.; Hariri, Ahmad; van 't Ent, Dennis; Bastin, Mark E.; McIntosh, Andrew; Deary, Ian J.; Hulshoff pol, Hilleke E.; Blangero, John; Thompson, Paul M.; Glahn, David C.; Van Essen, David C.

    2015-01-01

    The degree to which genetic factors influence brain connectivity is beginning to be understood. Large-scale efforts are underway to map the profile of genetic effects in various brain regions. The NIH-funded Human Connectome Project (HCP) is providing data valuable for analyzing the degree of geneti

  19. Heritability of fractional anisotropy in human white matter: a comparison of Human Connectome Project and ENIGMA-DTI data

    NARCIS (Netherlands)

    Kochunov, P.; Jahanshad, N.; Marcus, D.; Winkler, A.; Sprooten, E.; Nichols, T.E.; Wright, S.N.; Hong, L.E.; Patel, B.; Behrens, T.; Jbabdi, S.; Andersson, J.; Lenglet, C.; Yacoub, E.; Moeller, S.; Auerbach, E.; Ugurbil, K.; Sotiropoulos, S.N.; Brouwer, R.M.; Landman, B.; Lemaitre, H.; Braber, A.; Zwiers, M.P.; Ritchie, S.; Hulzen, K. van; Almasy, L.; Curran, J.; deZubicaray, G.I.; Duggirala, R.; Fox, P.; Martin, N.G.; McMahon, K.L.; Mitchell, B.; Olvera, R.L.; Peterson, C.; Starr, J.; Sussmann, J.; Wardlaw, J.; Wright, M.; Boomsma, D.I.; Kahn, R.; Geus, E.J. de; Williamson, D.E.; Hariri, A.; Ent, D. van 't; Bastin, M.E.; McIntosh, A.; Deary, I.J.; Pol, H.E.; Blangero, J.; Thompson, P.M.; Glahn, D.C.; Essen, D.C. Van

    2015-01-01

    The degree to which genetic factors influence brain connectivity is beginning to be understood. Large-scale efforts are underway to map the profile of genetic effects in various brain regions. The NIH-funded Human Connectome Project (HCP) is providing data valuable for analyzing the degree of geneti

  20. Characteristics of the molar surface after removal of cervical enamel projections: comparison of three different rotating instruments

    Science.gov (United States)

    2016-01-01

    Purpose The aim of this study was to evaluate and compare tooth surface characteristics in extracted human molars after cervical enamel projections (CEPs) were removed with three different rotating instruments. Methods We classified 60 molars with CEPs, extracted due to periodontal lesions, into grade I, II, or III according to the Masters and Hoskins’ criteria. Each group contained 20 specimens. Three rotating instruments were used to remove the CEPs: a piezoelectric ultrasonic scaler, a periodontal bur, and a diamond bur. Tooth surface characteristics before and after removal of the projections were then evaluated with scanning electron microscopy (SEM). We analyzed the characteristics of the tooth surfaces with respect to roughness and whether the enamel projections had been completely removed. Results In SEM images, surfaces treated with the diamond bur were the smoothest, but this instrument caused considerable harm to tooth structures near the CEPs. The piezoelectric ultrasonic scaler produced the roughest surface but caused less harm to the tooth structure near the furcation. In general, the surfaces treated with the periodontal bur were smoother than those treated with the ultrasonic scaler, and the periodontal bur did not invade adjacent tooth structures. Conclusions For removal of grade II CEPs, the most effective instrument was the diamond bur. However, in removing grade III projections, the diamond bur can destroy both adjacent tooth structures and the periodontal apparatus. In such cases, careful use of the periodontal bur may be an appropriate substitute. PMID:27127691