WorldWideScience

Sample records for machine code resulting

  1. Experiences and results multitasking a hydrodynamics code on global and local memory machines

    International Nuclear Information System (INIS)

    Mandell, D.

    1987-01-01

    A one-dimensional, time-dependent Lagrangian hydrodynamics code using a Godunov solution method has been multitasked for the Cray X-MP/48, the Intel iPSC hypercube, the Alliant FX series and the IBM RP3 computers. Actual multitasking results have been obtained for the Cray, Intel and Alliant computers, and simulated results were obtained for the Cray and RP3 machines. The differences in the methods required to multitask on each of the machines are discussed. Results are presented for a sample problem involving a shock wave moving down a channel. Comparisons are made between the theoretical speedups predicted by Amdahl's law and the actual speedups obtained. The problems of debugging on the different machines are also described.
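
The theoretical speedups this record compares against are given by Amdahl's law. A minimal sketch in Python (function name and example numbers are illustrative, not taken from the paper):

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Theoretical speedup for a program whose parallelizable fraction
    runs on n processors while the rest stays serial (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# A code that is 90% parallel on 4 processors falls well short of 4x:
speedup = amdahl_speedup(0.9, 4)   # about 3.08
```

Even a small serial fraction caps the achievable speedup, which is why measured multitasking results are compared against this bound.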

  2. A portable virtual machine target for proof-carrying code

    DEFF Research Database (Denmark)

    Franz, Michael; Chandra, Deepak; Gal, Andreas

    2005-01-01

    Virtual Machines (VMs) and Proof-Carrying Code (PCC) are two techniques that have been used independently to provide safety for (mobile) code. Existing virtual machines, such as the Java VM, have several drawbacks: First, the effort required for safety verification is considerable. Second and mor...... simultaneously providing efficient justin-time compilation and target-machine independence. In particular, our approach reduces the complexity of the required proofs, resulting in fewer proof obligations that need to be discharged at the target machine....

  3. Ocean circulation code on the Connection Machine

    International Nuclear Information System (INIS)

    Vitart, F.

    1993-01-01

    This work is part of the development of a global climate model based on coupling an ocean model with an atmosphere model. The objective was to develop this global model on a massively parallel machine (the CM2). The author presents the OPA7 code (equations, boundary conditions, solution of the equation system) and its parallelization on the CM2. The CM2 data structure is briefly described, and two tests are reported (a flat-bottom basin, and a topography with eight islands). The author then gives an overview of studies aimed at improving the ocean circulation code: use of a new equation of state, a formulation of surface pressure, and a new mesh. He reports a study of the use of multi-block domains on the CM2 through advection tests and two-block tests

  4. Reversible machine code and its abstract processor architecture

    DEFF Research Database (Denmark)

    Axelsen, Holger Bock; Glück, Robert; Yokoyama, Tetsuo

    2007-01-01

    A reversible abstract machine architecture and its reversible machine code are presented and formalized. For machine code to be reversible, both the underlying control logic and each instruction must be reversible. A general class of machine instruction sets was proven to be reversible, building...

  5. Optimal interference code based on machine learning

    Science.gov (United States)

    Qian, Ye; Chen, Qian; Hu, Xiaobo; Cao, Ercong; Qian, Weixian; Gu, Guohua

    2016-10-01

    In this paper, we analyze the characteristics of pseudo-random codes, taking the m-sequence as an example. Building on coding theory, we introduce the jamming methods and simulate the interference effect and a probability model in MATLAB. Based on the length of time the adversary needs to decode, we use machine learning to find the optimal formula and optimal coefficients, and thereby obtain a new optimal interference code. First, in the recognition phase, the study judges the effect of interference by simulating the decoding period of a laser seeker. Then, laser active deception jamming is used to simulate the interference process in the tracking phase. To improve the performance of the interference, the model is simulated in MATLAB. We find the smallest number of pulse intervals that must be received, from which the precise interval number of the laser pointer for m-sequence encoding can be determined. To find the shortest interval, we use the greatest-common-divisor method. Then, combining this with the coding regularity found earlier, we restore the pulse intervals of the pseudo-random code that has already been received. Finally, we can control the time period of the laser interference, obtain the optimal interference code, and increase the probability of successful interference.
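
The m-sequences this record analyzes are maximal-length sequences produced by a linear feedback shift register. A small Python sketch of the standard Fibonacci LFSR construction (tap choice and interface are illustrative, not from the paper):

```python
def lfsr_m_sequence(taps, state):
    """Generate one full period of a maximal-length (m-) sequence from a
    Fibonacci LFSR. `taps` are 1-based feedback positions; `state` is the
    non-zero initial register contents (a list of bits)."""
    n = len(state)
    period = (1 << n) - 1          # maximal period for an n-stage register
    out = []
    reg = list(state)
    for _ in range(period):
        out.append(reg[-1])        # output the last stage
        fb = 0
        for t in taps:
            fb ^= reg[t - 1]       # XOR the tapped stages
        reg = [fb] + reg[:-1]      # shift right, insert feedback
    return out

# Degree-4 register with taps (4, 3): a primitive polynomial, so the
# sequence has period 2^4 - 1 = 15 and the balance property (8 ones).
seq = lfsr_m_sequence([4, 3], [1, 0, 0, 0])
```

The period and balance properties are what make the pulse-interval structure of such codes recoverable by an observer, as the abstract discusses.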

  6. Understanding and Writing G & M Code for CNC Machines

    Science.gov (United States)

    Loveland, Thomas

    2012-01-01

    In modern CAD and CAM manufacturing companies, engineers design parts for machines and consumable goods. Many of these parts are cut on CNC machines. Whether using a CNC lathe, milling machine, or router, the ideas and designs of engineers must be translated into a machine-readable form called G & M Code that can be used to cut parts to precise…
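
The translation from design geometry to machine-readable G & M code can be illustrated with a short generator. This is a simplified, hypothetical fragment (function name, coordinates and feed values are mine, not from the article); real post-processors handle tool compensation, spindle control and much more:

```python
def square_pocket_gcode(side: float, feed: float, depth: float) -> list[str]:
    """Emit a simplified, illustrative G-code program that traces a square
    of the given side length at one cutting depth.
    G0 = rapid move, G1 = linear feed move, G90 = absolute mode, M30 = end."""
    corners = [(0, 0), (side, 0), (side, side), (0, side), (0, 0)]
    lines = ["G90",                              # absolute positioning
             "G0 X0 Y0 Z5",                      # rapid to start, above the work
             f"G1 Z{-depth:.3f} F{feed:.0f}"]    # plunge to cutting depth
    for x, y in corners[1:]:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed:.0f}")
    lines.append("G0 Z5")                        # retract
    lines.append("M30")                          # end of program
    return lines
```

Each emitted block is one machine instruction, which is the level at which students reading or writing G & M code must reason.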

  7. Machine function based control code algebras

    NARCIS (Netherlands)

    Bergstra, J.A.

    Machine functions have been introduced by Earley and Sturgis in [6] in order to provide a mathematical foundation of the use of the T-diagrams proposed by Bratman in [5]. Machine functions describe the operation of a machine at a very abstract level. A theory of hardware and software based on

  8. Complete permutation Gray code implemented by finite state machine

    Directory of Open Access Journals (Sweden)

    Li Peng

    2014-09-01

    Full Text Available An enumerating method for complete permutation arrays is proposed. The list of n! permutations based on a Gray code defined over the finite symbol set Z(n) = {1, 2, …, n} is implemented by a finite state machine, named an n-RPGCF. An RPGCF can be used to search for permutation codes and provides improved lower bounds on the maximum cardinality of a permutation code in some cases.
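
A Gray code over permutations means consecutive permutations differ by a single adjacent transposition. The classic construction in this spirit is the Steinhaus-Johnson-Trotter algorithm, sketched below in Python; this is a standard illustration, not necessarily the paper's n-RPGCF state machine:

```python
def sjt_permutations(n):
    """Enumerate all n! permutations of {1..n} so that consecutive
    permutations differ by exactly one adjacent transposition
    (Steinhaus-Johnson-Trotter)."""
    perm = list(range(1, n + 1))
    dirs = [-1] * n                  # every element starts looking left
    result = [perm[:]]
    while True:
        # find the largest "mobile" element: one whose neighbour in its
        # current direction is smaller than it
        mobile, mi = -1, -1
        for i, v in enumerate(perm):
            j = i + dirs[i]
            if 0 <= j < n and perm[j] < v and v > mobile:
                mobile, mi = v, i
        if mobile == -1:
            return result            # no mobile element: enumeration done
        j = mi + dirs[mi]
        perm[mi], perm[j] = perm[j], perm[mi]
        dirs[mi], dirs[j] = dirs[j], dirs[mi]
        # reverse the direction of every element larger than the one moved
        for i, v in enumerate(perm):
            if v > mobile:
                dirs[i] = -dirs[i]
        result.append(perm[:])
```

The "mobile element" test and direction flips are exactly the kind of bounded local state that a finite state machine formulation can capture.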

  9. 3D equilibrium codes for mirror machines

    International Nuclear Information System (INIS)

    Kaiser, T.B.

    1983-01-01

    The codes developed for computing three-dimensional guiding-center equilibria for quadrupole tandem mirrors are discussed. TEBASCO (Tandem equilibrium and ballooning stability code) is a code developed at LLNL that uses a further expansion of the paraxial equilibrium equation in powers of β (plasma pressure/magnetic pressure). It has been used to guide the design of the TMX-U and MFTF-B experiments at Livermore. Its principal weakness is its perturbative nature, which renders its validity for high-β calculations open to question. In order to compute high-β equilibria, the reduced-MHD technique that has proven useful for determining toroidal equilibria was adapted to the tandem mirror geometry. In this approach, the paraxial expansion of the MHD equations yields a set of coupled nonlinear equations of motion, valid for arbitrary β, that are solved as an initial-value problem. Two particular formulations have been implemented in computer codes developed at NYU/Kyoto U and LLNL. They differ primarily in the type of grid, the location of the lateral boundary, the damping techniques employed, and the method of calculating pressure-balance equilibrium. Discussions of these codes are presented in this paper. (Kato, T.)

  10. Model-Driven Engineering of Machine Executable Code

    Science.gov (United States)

    Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira

    Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs doing static analyses. Further, we report on important lessons learned on the benefits and drawbacks while using the following technologies: using the Scala programming language as target of code generation, using XML-Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint like tool. Finally, we report on the use of Prolog for writing model transformations.

  11. Compiler design handbook optimizations and machine code generation

    CERN Document Server

    Srikant, YN

    2003-01-01

    The widespread use of object-oriented languages and Internet security concerns are just the beginning. Add embedded systems, multiple memory banks, highly pipelined units operating in parallel, and a host of other advances and it becomes clear that current and future computer architectures pose immense challenges to compiler designers-challenges that already exceed the capabilities of traditional compilation techniques. The Compiler Design Handbook: Optimizations and Machine Code Generation is designed to help you meet those challenges. Written by top researchers and designers from around the

  12. A friendly man-machine interface for thermo-hydraulic simulation codes of nuclear installations

    International Nuclear Information System (INIS)

    Araujo Filho, F. de; Belchior Junior, A.; Barroso, A.C.O.; Gebrim, A.

    1994-01-01

    This work presents the development of a Man-Machine Interface to the TRAC-PF1 code, a computer program to perform best estimate analysis of transients and accidents at nuclear power plants. The results were considered satisfactory and a considerable productivity gain was achieved in the activity of preparing and analyzing simulations. (author)

  13. The vector and parallel processing of MORSE code on Monte Carlo Machine

    International Nuclear Information System (INIS)

    Hasegawa, Yukihiro; Higuchi, Kenji.

    1995-11-01

    The multigroup Monte Carlo particle transport code MORSE has been modified for high-performance computing on the Monte Carlo machine Monte-4; the method and results are described. Monte-4 was specially developed for high-performance computing of Monte Carlo particle transport codes, which have been difficult to run efficiently in vector mode on conventional vector processors. Monte-4 has four vector processor units with special hardware called Monte Carlo pipelines. The vectorization and parallelization of the MORSE code and the performance evaluation on Monte-4 are described. (author)

  14. Machine-Learning Algorithms to Code Public Health Spending Accounts.

    Science.gov (United States)

    Brady, Eoghan S; Leider, Jonathon P; Resnick, Beth A; Alfonso, Y Natalia; Bishai, David

    Government public health expenditure data sets require time- and labor-intensive manipulation to summarize results that public health policy makers can use. Our objective was to compare the performances of machine-learning algorithms with manual classification of public health expenditures to determine if machines could provide a faster, cheaper alternative to manual classification. We used machine-learning algorithms to replicate the process of manually classifying state public health expenditures, using the standardized public health spending categories from the Foundational Public Health Services model and a large data set from the US Census Bureau. We obtained a data set of 1.9 million individual expenditure items from 2000 to 2013. We collapsed these data into 147 280 summary expenditure records, and we followed a standardized method of manually classifying each expenditure record as public health, maybe public health, or not public health. We then trained 9 machine-learning algorithms to replicate the manual process. We calculated recall, precision, and coverage rates to measure the performance of individual and ensembled algorithms. Compared with manual classification, the machine-learning random forests algorithm produced 84% recall and 91% precision. With algorithm ensembling, we achieved our target criterion of 90% recall by using a consensus ensemble of ≥6 algorithms while still retaining 93% coverage, leaving only 7% of the summary expenditure records unclassified. Machine learning can be a time- and cost-saving tool for estimating public health spending in the United States. It can be used with standardized public health spending categories based on the Foundational Public Health Services model to help parse public health expenditure information from other types of health-related spending, provide data that are more comparable across public health organizations, and evaluate the impact of evidence-based public health resource allocation.
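
The consensus-ensemble criterion in this record (classify only when at least a quorum of algorithms agree, then measure recall, precision and coverage) can be sketched directly. Function names, vote encoding and the toy data below are illustrative, not the study's implementation:

```python
def ensemble_metrics(true_labels, votes, quorum):
    """Consensus-ensemble classification: an item is labeled positive
    ('public health') only when at least `quorum` classifiers vote 1,
    negative when at least `quorum` vote 0, and is otherwise left
    unclassified. Returns (recall, precision, coverage)."""
    predictions = []
    for v in votes:                      # v = list of 0/1 votes per item
        pos = sum(v)
        neg = len(v) - pos
        if pos >= quorum:
            predictions.append(1)
        elif neg >= quorum:
            predictions.append(0)
        else:
            predictions.append(None)     # no consensus: unclassified
    classified = [(t, p) for t, p in zip(true_labels, predictions)
                  if p is not None]
    tp = sum(1 for t, p in classified if t == 1 and p == 1)
    fn = sum(1 for t, p in classified if t == 1 and p == 0)
    fp = sum(1 for t, p in classified if t == 0 and p == 1)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    coverage = len(classified) / len(predictions)
    return recall, precision, coverage
```

Raising the quorum trades coverage for precision, which is exactly the tension the study reports (90% recall at 93% coverage with a ≥6-algorithm consensus).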

  15. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    Science.gov (United States)

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of CTU-level Rate-Distortion (R-D) model. The legacy "chicken-and-egg" dilemma in video coding is proposed to be overcome by the learning-based R-D model. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and frame-level Quantization parameter (QP) change. Lastly, intra frame QP and inter frame adaptive bit ratios are adjusted to make inter frames have more bit resources to maintain smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT based RC method can achieve much better R-D performances, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than the other state-of-the-art one-pass RC methods, and the achieved R-D performances are very close to the performance limits from the FixedQP method.

  16. Towards a universal code formatter through machine learning

    NARCIS (Netherlands)

    Parr, T. (Terence); J.J. Vinju (Jurgen)

    2016-01-01

    textabstractThere are many declarative frameworks that allow us to implement code formatters relatively easily for any specific language, but constructing them is cumbersome. The first problem is that "everybody" wants to format their code differently, leading to either many formatter variants or a

  17. A Navier-Stokes Chimera Code on the Connection Machine CM-5: Design and Performance

    Science.gov (United States)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1994-01-01

    We have implemented a three-dimensional compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the 'chimera' approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. A parallel machine like the CM-5 is well-suited for finite-difference methods on structured grids. The regular pattern of connections of a structured mesh maps well onto the architecture of the machine. So the first design choice, finite differences on a structured mesh, is natural. We use centered differences in space, with added artificial dissipation terms. When numerically solving the Navier-Stokes equations, there are liable to be some mesh cells near a solid body that are small in at least one direction. This mesh cell geometry can impose a very severe CFL (Courant-Friedrichs-Lewy) condition on the time step for explicit time-stepping methods. Thus, though explicit time-stepping is well-suited to the architecture of the machine, we have adopted implicit time-stepping. We have further taken the approximate factorization approach. This creates the need to solve large banded linear systems and creates the first possible barrier to an efficient algorithm. To overcome this first possible barrier we have considered two options. The first is just to solve the banded linear systems with data spread over the whole machine, using whatever fast method is available. This option is adequate for solving scalar tridiagonal systems, but for scalar pentadiagonal or block tridiagonal systems it is somewhat slower than desired. The second option is to 'transpose' the flow and geometry variables as part of the time-stepping process: Start with x-lines of data in-processor. Form explicit terms in x, then transpose so y-lines of data are
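
The CFL restriction that drives this record's choice of implicit time-stepping is the standard one for compressible flow: the explicit step must satisfy dt ≤ CFL · dx / (|u| + c). A minimal sketch (the function and numbers are illustrative, not from the paper):

```python
def max_stable_dt(u_max, sound_speed, dx, cfl=0.9):
    """Largest explicit time step allowed by the 1-D CFL condition for
    compressible flow: dt <= CFL * dx / (|u| + c). One small cell near a
    solid body forces dt down for the entire explicit scheme."""
    return cfl * dx / (abs(u_max) + sound_speed)

# A cell 100x smaller in one direction shrinks the global step 100x:
dt_coarse = max_stable_dt(100.0, 300.0, 0.04)     # ordinary cell
dt_fine = max_stable_dt(100.0, 300.0, 0.0004)     # thin boundary-layer cell
```

Since the whole mesh must advance at the smallest cell's dt, implicit stepping becomes attractive despite the banded linear solves it requires.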

  18. Code-Expanded Random Access for Machine-Type Communications

    DEFF Research Database (Denmark)

    Kiilerich Pratas, Nuno; Thomsen, Henning; Stefanovic, Cedomir

    2012-01-01

    The random access methods used for support of machine-type communications (MTC) in current cellular standards are derivatives of traditional framed slotted ALOHA and therefore do not support high user loads efficiently. Motivated by the random access method employed in LTE, we propose...

  19. Code-expanded radio access protocol for machine-to-machine communications

    DEFF Research Database (Denmark)

    Thomsen, Henning; Kiilerich Pratas, Nuno; Stefanovic, Cedomir

    2013-01-01

    The random access methods used for support of machine-to-machine, also referred to as Machine-Type Communications, in current cellular standards are derivatives of traditional framed slotted ALOHA and therefore do not support high user loads efficiently. We propose an approach that is motivated by the random access method employed in LTE, which significantly increases the amount of contention resources without increasing the system resources, such as contention subframes and preambles. This is accomplished by a logical, rather than physical, extension of the access method in which the available system... subframes and orthogonal preambles, the amount of available contention resources is drastically increased, enabling the massive support of Machine-Type Communication users that is beyond the reach of current systems.

  20. The semaphore codes attached to a Turing machine via resets and their various limits

    OpenAIRE

    Rhodes, John; Schilling, Anne; Silva, Pedro V.

    2016-01-01

    We introduce semaphore codes associated to a Turing machine via resets. Semaphore codes provide an approximation theory for resets. In this paper we generalize the set-up of our previous paper "Random walks on semaphore codes and delay de Bruijn semigroups" to the infinite case by taking the profinite limit of $k$-resets to obtain $(-\omega)$-resets. We mention how this opens new avenues to attack the P versus NP problem.

  1. A Machine Learning Perspective on Predictive Coding with PAQ

    OpenAIRE

    Knoll, Byron; de Freitas, Nando

    2011-01-01

    PAQ8 is an open source lossless data compression algorithm that currently achieves the best compression rates on many benchmarks. This report presents a detailed description of PAQ8 from a statistical machine learning perspective. It shows that it is possible to understand some of the modules of PAQ8 and use this understanding to improve the method. However, intuitive statistical explanations of the behavior of other modules remain elusive. We hope the description in this report will be a sta...

  2. Optimization and Openmp Parallelization of a Discrete Element Code for Convex Polyhedra on Multi-Core Machines

    Science.gov (United States)

    Chen, Jian; Matuttis, Hans-Georg

    2013-02-01

    We report our experiences with the optimization and parallelization of a discrete element code for convex polyhedra on multi-core machines and introduce a novel variant of the sort-and-sweep neighborhood algorithm. While in theory the code parallelizes ideally, in practice the results on different architectures with different compilers and performance-measurement tools depend very much on the particle number and on the optimization of the code. After difficulties with the interpretation of the speedup and efficiency data were overcome, respectable parallelization speedups could be obtained.
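
The sort-and-sweep neighborhood search mentioned here is a broad-phase algorithm: sort particle extents along one axis, then sweep once to collect candidate overlapping pairs. Below is a generic 1-D sketch of the standard idea in Python, not the paper's specific variant:

```python
def sweep_overlaps(intervals):
    """1-D sort-and-sweep broad phase: given (id, lo, hi) axis-aligned
    extents, return the set of id pairs whose intervals overlap.
    Sorting by lower bound lets the sweep drop particles that can no
    longer overlap anything further to the right."""
    items = sorted(intervals, key=lambda t: t[1])        # sort by lo
    active = []                                          # still-open extents
    pairs = set()
    for ident, lo, hi in items:
        active = [(i, h) for i, h in active if h >= lo]  # prune closed ones
        for other_id, _ in active:
            pairs.add(frozenset((ident, other_id)))      # candidate contact
        active.append((ident, hi))
    return pairs
```

The candidate pairs then go to an exact (narrow-phase) polyhedron-overlap test; the sweep itself is what limits the cost well below the all-pairs O(N²).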

  3. An Automatic Instruction-Level Parallelization of Machine Code

    Directory of Open Access Journals (Sweden)

    MARINKOVIC, V.

    2018-02-01

    Full Text Available Prevailing multicores and novel manycores have made parallelization of embedded software, which is still largely written as sequential code, a great challenge of the modern day. In this paper, automatic code parallelization is considered, focusing on developing a parallelization tool at the binary level and on validating this approach. A novel instruction-level parallelization algorithm for assembly code is developed: it uses the register names after SSA conversion to find independent blocks of code and then schedules the independent blocks using METIS to achieve good load balance. Sequential consistency is verified, and validation is done by measuring program execution time on the target architecture. Great speedup, taken as the performance measure in the validation process, and optimal load balancing are achieved for multicore RISC processors with 2 to 16 cores (e.g. MIPS, MicroBlaze, etc.). In particular, for 16 cores, the average speedup is 7.92x, while in some cases it reaches 14x. The approach to automatic parallelization provided by this paper is useful to researchers and developers in the area of parallelization as a basis for further optimizations, as the back-end of a compiler, or as a code parallelization tool for an embedded system.
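
Once independent blocks have been identified, scheduling them across cores is a load-balancing problem. The paper uses METIS (graph partitioning); as a simple stand-in for that step, the greedy longest-processing-time heuristic below shows the balancing objective. All names and costs here are illustrative:

```python
import heapq

def schedule_blocks(block_costs, n_cores):
    """Greedy longest-processing-time scheduling of independent code
    blocks onto cores (a stand-in for the METIS-based partitioning the
    paper uses; both aim to balance per-core load).
    Returns (assignment, makespan): assignment[core] = list of block ids."""
    heap = [(0.0, core) for core in range(n_cores)]   # (load, core id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_cores)]
    # place the most expensive blocks first, always on the least-loaded core
    for block, cost in sorted(enumerate(block_costs), key=lambda t: -t[1]):
        load, core = heapq.heappop(heap)
        assignment[core].append(block)
        heapq.heappush(heap, (load + cost, core))
    makespan = max(load for load, _ in heap)
    return assignment, makespan
```

The makespan (the most-loaded core's total) bounds the parallel execution time, so minimizing it is what turns independent blocks into speedup.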

  4. Lean coding machine. Facilities target productivity and job satisfaction with coding automation.

    Science.gov (United States)

    Rollins, Genna

    2010-07-01

    Facilities are turning to coding automation to help manage the volume of electronic documentation, streamlining workflow, boosting productivity, and increasing job satisfaction. As EHR adoption increases, computer-assisted coding may become a necessity, not an option.

  5. Machine-Checked Sequencer for Critical Embedded Code Generator

    Science.gov (United States)

    Izerrouken, Nassima; Pantel, Marc; Thirioux, Xavier

    This paper presents the development of a correct-by-construction block sequencer for GeneAuto, a qualifiable (according to the DO-178B/ED-12B recommendation) automatic code generator. It transforms Simulink models into MISRA C code for safety-critical systems. Our approach, which combines a classical development process with formal specification and verification using proof assistants, has led to preliminary fruitful exchanges with certification authorities. We present parts of the classical user and tool requirements and the derived formal specifications, implementation and verification for the correctness and termination of the block sequencer. This sequencer has been successfully applied to real-size industrial use cases from various transportation-domain partners and led to the detection of requirement errors and a correct-by-construction implementation.

  6. Recent results relevant to ignition physics and machine design issues

    International Nuclear Information System (INIS)

    Coppi, B.; Airoldi, A.; Bombarda, F.

    2001-01-01

    The plasma regimes under which ignition can be achieved involve a characteristic range of parameters and issues on which information has been provided by recent experiments. In particular, these results have motivated a new, in-depth analysis of the expected performance of the Ignitor machine as well as of the plasma processes that it can investigate. The main results and recent advances in the design of key systems of the machine are reported. (author)

  7. Recent results relevant to ignition physics and machine design issues

    International Nuclear Information System (INIS)

    Coppi, B.; Airoldi, A.; Bombarda, F.

    1999-01-01

    The plasma regimes under which ignition can be achieved involve a characteristic range of parameters and issues on which information has been provided by recent experiments. In particular, these results have motivated a new, in-depth analysis of the expected performance of the Ignitor machine as well as of the plasma processes that it can investigate. The main results and recent advances in the design of key systems of the machine are reported. (author)

  8. CNC LATHE MACHINE PRODUCING NC CODE BY USING DIALOG METHOD

    Directory of Open Access Journals (Sweden)

    Yakup TURGUT

    2004-03-01

    Full Text Available In this study, an NC code generation program utilising the Dialog Method was developed for turning centres. Initially, CNC lathe turning methods and tool-path development techniques were reviewed briefly. Using geometric definition methods, the tool path was generated and a CNC part program was developed for a FANUC control unit. The developed program made the CNC part-program generation process easy. The program was developed in the BASIC 6.0 programming language, while the material and cutting-tool databases were created and supported with the help of ACCESS 7.0.

  9. Accuracy comparison among different machine learning techniques for detecting malicious codes

    Science.gov (United States)

    Narang, Komal

    2016-03-01

    In this paper, a machine learning based model for malware detection is proposed. It can detect newly released malware, i.e., zero-day attacks, by analyzing operation codes on the Android operating system. The accuracies of Naïve Bayes, Support Vector Machine (SVM) and Neural Network classifiers for detecting malicious code are compared for the proposed model. In the experiment, 400 benign files, 100 system files and 500 malicious files were used to construct the model. The model yields the best accuracy, 88.9%, when a neural network is used as the classifier, and achieves 95% and 82.8% for sensitivity and specificity, respectively.

  10. Using machine-coded event data for the micro-level study of political violence

    Directory of Open Access Journals (Sweden)

    Jesse Hammond

    2014-07-01

    Full Text Available Machine-coded datasets likely represent the future of event data analysis. We assess the use of one of these datasets—the Global Database of Events, Language and Tone (GDELT)—for the micro-level study of political violence by comparing it to two hand-coded conflict event datasets. Our findings indicate that GDELT should be used with caution for geo-spatial analyses at the subnational level: its overall correlation with hand-coded data is mediocre, and at the local level major issues of geographic bias exist in how events are reported. Overall, our findings suggest that due to these issues, researchers studying local conflict processes may want to wait for a more reliable geocoding method before relying too heavily on this set of machine-coded data.

  11. Particle-in-cell plasma simulation codes on the connection machine

    International Nuclear Information System (INIS)

    Walker, D.W.

    1991-01-01

    Methods for implementing three-dimensional, electromagnetic, relativistic PIC plasma simulation codes on the Connection Machine (CM-2) are discussed. The gather and scatter phases of the PIC algorithm involve indirect indexing of data, which results in a large amount of communication on the CM-2. Different data decompositions are described that seek to reduce the amount of communication while maintaining good load balance. These methods require the particles to be spatially sorted at the start of each time step, which introduces another form of overhead. The different methods are implemented in CM Fortran on the CM-2 and compared. It was found that the general router is slow in performing the communication in the gather and scatter steps, which precludes an efficient CM Fortran implementation. An alternative method that uses PARIS calls and the NEWS communication network to pipeline data along the axes of the VP set is suggested as a more efficient algorithm

  12. Numerical code to determine the particle trapping region in the LISA machine

    International Nuclear Information System (INIS)

    Azevedo, M.T. de; Raposo, C.C. de; Tomimura, A.

    1984-01-01

    A numerical code is constructed to determine the trapping region in a machine like LISA. The variable magnetic field is two-dimensional and is coupled to the Runge-Kutta integration through Chebyshev polynomials. Various particle orbits, including particle interactions, were analysed. Besides this, a strong electric field is introduced to see the possible effects occurring inside the plasma. (Author)

  13. Using supervised machine learning to code policy issues: Can classifiers generalize across contexts?

    NARCIS (Netherlands)

    Burscher, B.; Vliegenthart, R.; de Vreese, C.H.

    2015-01-01

    Content analysis of political communication usually covers large amounts of material and makes the study of dynamics in issue salience a costly enterprise. In this article, we present a supervised machine learning approach for the automatic coding of policy issues, which we apply to news articles

  14. Comparison of results between different precision MAFIA codes

    International Nuclear Information System (INIS)

    Farkas, D.; Tice, B.

    1990-01-01

    In order to satisfy the inquiries of the MAFIA code users at SLAC, an evaluation of these codes was done. This consisted of running a cavity with known solutions. The study considered only the time-independent solutions; no wake-field calculations were tried. The two machines involved were the NMFECC Cray (e-machine) at LLNL and the IBM/3081 at SLAC. The primary difference between the implementations of the codes on these machines is that the Cray has 64-bit accuracy while the IBM version has 32-bit accuracy. Unfortunately this study is incomplete, as the post-processor (P3) could not be made to work properly on the SLAC machine. This meant that no Q's were calculated and no field patterns were generated, and a certain amount of guessing had to be done when constructing the comparison tables. This problem aside, the probable conclusions that may be drawn are: (1) 32-bit precision is adequate for frequency determination; (2) 64-bit precision is desirable for field determination. This conclusion is deduced from the accuracy statistics. The cavity selected for study was a rectangular one with dimensions (4,3,5) in centimeters. Only half of this cavity was used (2,3,5), with the x dimension being the one that was halved. The boundary conditions (B.C.) on the plane of symmetry were varied between Neumann and Dirichlet so as to cover all possible modes. Ten (10) modes were run for each boundary condition.

  15. Machine-learning-assisted correction of correlated qubit errors in a topological code

    Directory of Open Access Journals (Sweden)

    Paul Baireuther

    2018-01-01

    Full Text Available A fault-tolerant quantum computation requires an efficient means to detect and correct errors that accumulate in encoded quantum information. In the context of machine learning, neural networks are a promising new approach to quantum error correction. Here we show that a recurrent neural network can be trained, using only experimentally accessible data, to detect errors in a widely used topological code, the surface code, with a performance above that of the established minimum-weight perfect matching (or blossom) decoder. The performance gain is achieved because the neural network decoder can detect correlations between bit-flip (X) and phase-flip (Z) errors. The machine learning algorithm adapts to the physical system, hence no noise model is needed. The long short-term memory layers of the recurrent neural network maintain their performance over a large number of quantum error correction cycles, making it a practical decoder for forthcoming experimental realizations of the surface code.

  16. Development of non-linear vibration analysis code for CANDU fuelling machine

    International Nuclear Information System (INIS)

    Murakami, Hajime; Hirai, Takeshi; Horikoshi, Kiyomi; Mizukoshi, Kaoru; Takenaka, Yasuo; Suzuki, Norio.

    1988-01-01

    This paper describes the development of a non-linear, dynamic analysis code for the CANDU 600 fuelling machine (F-M), which includes a number of non-linearities such as gap with or without Coulomb friction, special multi-linear spring connections, etc. The capabilities and features of the code and the mathematical treatment for the non-linearities are explained. The modeling and numerical methodology for the non-linearities employed in the code are verified experimentally. Finally, the simulation analyses for the full-scale F-M vibration testing are carried out, and the applicability of the code to such multi-degree of freedom systems as F-M is demonstrated. (author)

  17. Automatically explaining machine learning prediction results: a demonstration on type 2 diabetes risk prediction.

    Science.gov (United States)

    Luo, Gang

    2016-01-01

    Predictive modeling is a key component of solutions to many healthcare problems. Among all predictive modeling approaches, machine learning methods often achieve the highest prediction accuracy, but suffer from a long-standing open problem precluding their widespread use in healthcare. Most machine learning models give no explanation for their prediction results, whereas interpretability is essential for a predictive model to be adopted in typical healthcare settings. This paper presents the first complete method for automatically explaining results for any machine learning predictive model without degrading accuracy. We did a computer coding implementation of the method. Using the electronic medical record data set from the Practice Fusion diabetes classification competition containing patient records from all 50 states in the United States, we demonstrated the method on predicting type 2 diabetes diagnosis within the next year. For the champion machine learning model of the competition, our method explained prediction results for 87.4% of patients who were correctly predicted by the model to have type 2 diabetes diagnosis within the next year. Our demonstration showed the feasibility of automatically explaining results for any machine learning predictive model without degrading accuracy.
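The paper's own method is not reproduced here, but the flavor of automatic explanation can be sketched with a hypothetical linear risk model whose active features are ranked by their contribution to the positive prediction. All feature names and weights below are invented for illustration and are not from the study:

```python
import math

# Hypothetical logistic-model weights (illustrative only, not the paper's model)
weights = {"age>60": 0.9, "bmi>30": 1.1, "hba1c_high": 1.7, "active_lifestyle": -0.8}
bias = -2.0

def predict_and_explain(patient):
    """Return (risk probability, features that pushed the prediction positive,
    ranked by contribution) for a dict of binary patient attributes."""
    contribs = {f: w for f, w in weights.items() if patient.get(f)}
    score = bias + sum(contribs.values())
    prob = 1.0 / (1.0 + math.exp(-score))
    reasons = sorted((c for c in contribs.items() if c[1] > 0), key=lambda c: -c[1])
    return prob, reasons

prob, reasons = predict_and_explain({"age>60": 1, "bmi>30": 1, "hba1c_high": 1})
print(f"risk = {prob:.2f}; top reasons: {reasons}")
```

The point is the interface, not the model: each positive prediction comes with a human-readable ranking of the attributes responsible for it.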

  18. Parallelization of MCNP Monte Carlo neutron and photon transport code in parallel virtual machine and message passing interface

    International Nuclear Information System (INIS)

    Deng Li; Xie Zhongsheng

    1999-01-01

    The coupled neutron and photon transport Monte Carlo code MCNP (version 3B) has been parallelized in parallel virtual machine (PVM) and message passing interface (MPI) by modifying a previous serial code. The new code has been verified by solving sample problems. The speedup increases linearly with the number of processors and the average efficiency is up to 99% for 12 processors. (author)
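The reported near-linear speedup can be put in context with Amdahl's law: an average efficiency of 99% on 12 processors implies a serial fraction of roughly a tenth of a percent, which is plausible for independent Monte Carlo histories. A small sketch (illustrative):

```python
def speedup(serial_fraction, n):
    """Amdahl's-law speedup on n processors for a given serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

def efficiency(serial_fraction, n):
    """Parallel efficiency = speedup / processor count."""
    return speedup(serial_fraction, n) / n

# How small must the serial fraction be to reach ~99% efficiency on 12 CPUs?
for f in (0.0005, 0.001, 0.005):
    print(f"serial fraction {f}: efficiency {efficiency(f, 12):.3f}")
```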

  19. Computer and compiler effects on code results: status report

    International Nuclear Information System (INIS)

    1996-01-01

    Within the framework of the international effort on the assessment of computer codes, which are designed to describe the overall reactor coolant system (RCS) thermalhydraulic response, core damage progression, and fission product release and transport during severe accidents, there has been a continuous debate as to whether the code results are influenced by different code users or by different computers or compilers. The first aspect, the 'Code User Effect', has been investigated already. In this paper the other aspects are discussed, and proposals are given for how to make large system codes insensitive to different computers and compilers. Hardware errors and memory problems are not considered in this report. The codes investigated herein are integrated code systems (e.g. ESTER, MELCOR) and thermalhydraulic system codes with extensions for severe accident simulation (e.g. SCDAP/RELAP, ICARE/CATHARE, ATHLET-CD), and codes to simulate fission product transport (e.g. TRAPMELT, SOPHAEROS). Since all of these codes are programmed in Fortran 77, the discussion herein is based on this programming language, although some remarks are made about Fortran 90. Some observations of different code results obtained with different computers are reported, and possible reasons for this unexpected behaviour are listed. Methods to avoid portability problems are then discussed

  20. Benchmarking NNWSI flow and transport codes: COVE 1 results

    International Nuclear Information System (INIS)

    Hayden, N.K.

    1985-06-01

    The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs

  1. Brain cells in the avian 'prefrontal cortex' code for features of slot-machine-like gambling.

    Directory of Open Access Journals (Sweden)

    Damian Scarf

    2011-01-01

    Slot machines are the most common and addictive form of gambling. In the current study, we recorded from single neurons in the 'prefrontal cortex' of pigeons while they played a slot-machine-like task. We identified four categories of neurons that coded for different aspects of our slot-machine-like task. Reward-Proximity neurons showed a linear increase in activity as the opportunity for a reward drew near. I-Won neurons fired only when the fourth stimulus of a winning (four-of-a-kind) combination was displayed. I-Lost neurons changed their firing rate at the presentation of the first nonidentical stimulus, that is, when it was apparent that no reward was forthcoming. Finally, Near-Miss neurons also changed their activity the moment it was recognized that a reward was no longer available, but more importantly, the activity level was related to whether the trial contained one, two, or three identical stimuli prior to the display of the nonidentical stimulus. These findings not only add to recent neurophysiological research employing simulated gambling paradigms, but also add to research addressing the functional correspondence between the avian NCL and primate PFC.

  2. Results from the Metis code participation to the Hydrocoin exercise

    International Nuclear Information System (INIS)

    Raimbault, P.

    1987-04-01

    The METIS code, developed at the ENSMP, is a 2D finite element radionuclide transport and groundwater flow model based on the hypothesis of an equivalent porous medium with an explicit description of the main fractures. It is integrated in the global risk assessment code MELODIE for nuclear waste repositories in geological formations. The participation of the METIS code in the HYDROCOIN exercise is of prime importance for its development and its incorporation in the performance assessment procedure in France. Results from HYDROCOIN cases show that the code can correctly handle fractured media, high permeability contrast formations and buoyancy effects. A 3D version of the code has been developed for carrying out comparisons of field experiments and groundwater flow models in HYDROCOIN level 2. In order to carry out the exercise, several pre- and post-processing programs were developed and integrated in a conversational module. They include: contour plots, velocity field representations, interpolations, particle tracking routines, and uncertainty and sensitivity analysis modules

  3. Automated Testing Infrastructure and Result Comparison for Geodynamics Codes

    Science.gov (United States)

    Heien, E. M.; Kellogg, L. H.

    2013-12-01

    The geodynamics community uses a wide variety of codes on a wide variety of both software and hardware platforms to simulate geophysical phenomena. These codes are generally variants of finite difference or finite element calculations involving Stokes flow or wave propagation. A significant problem is that codes of even low complexity will return different results depending on the platform due to slight differences in hardware, software, compiler, and libraries. Furthermore, changes to the codes during development may affect solutions in unexpected ways such that previously validated results are altered. The Computational Infrastructure for Geodynamics (CIG) is funded by the NSF to enhance the capabilities of the geodynamics community through software development. CIG has recently done extensive work in setting up an automated testing and result validation system based on the BaTLab system developed at the University of Wisconsin, Madison. This system uses 16 variants of Linux and Mac platforms on both 32- and 64-bit processors to test several CIG codes, and has also recently been extended to support testing on the XSEDE TACC (Texas Advanced Computing Center) Stampede cluster. In this work we overview the system design and demonstrate how automated testing and validation occurs and results are reported. We also examine several results from the system from different codes and discuss how changes in compilers and libraries affect the results. Finally we detail some result comparison tools for different types of output (scalar fields, velocity fields, seismogram data), and discuss within what margins different results can be considered equivalent.
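Since bitwise equality across compilers and platforms is hopeless, such a system needs tolerance-based comparison. A minimal sketch of the idea (my own illustration, not CIG's actual tooling), using a mixed relative/absolute tolerance per point of a scalar field:

```python
def fields_equivalent(a, b, rel_tol=1e-6, abs_tol=1e-12):
    """Pointwise comparison of two scalar fields (flat sequences of floats)
    within mixed tolerances, as when diffing runs across platforms."""
    return all(
        abs(x - y) <= max(abs_tol, rel_tol * max(abs(x), abs(y)))
        for x, y in zip(a, b)
    )

ref = [1.0, 2.0, 3.0]
run = [1.0 + 1e-9, 2.0, 3.0 - 1e-9]  # same field, platform-level noise
print(fields_equivalent(ref, run, rel_tol=1e-6))   # True: within margin
print(fields_equivalent(ref, run, rel_tol=1e-12))  # False: stricter than the noise
```

The hard part in practice is choosing the margins per quantity (velocities, seismograms, scalar fields each tolerate different noise), which is exactly the question the abstract closes on.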

  4. 2D and 3D core-collapse supernovae simulation results obtained with the CHIMERA code

    Energy Technology Data Exchange (ETDEWEB)

    Bruenn, S W; Marronetti, P; Dirk, C J [Physics Department, Florida Atlantic University, 777 W. Glades Road, Boca Raton, FL 33431-0991 (United States); Mezzacappa, A; Hix, W R [Physics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6354 (United States); Blondin, J M [Department of Physics, North Carolina State University, Raleigh, NC 27695-8202 (United States); Messer, O E B [Center for Computational Sciences, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6354 (United States); Yoshida, S, E-mail: bruenn@fau.ed [Max-Planck-Institut fur Gravitationsphysik, Albert Einstein Institut, Golm (Germany)

    2009-07-01

    Much progress in realistic modeling of core-collapse supernovae has occurred recently through the availability of multi-teraflop machines and the increasing sophistication of supernova codes. These improvements are enabling simulations with enough realism that the explosion mechanism, long a mystery, may soon be delineated. We briefly describe the CHIMERA code, a supernova code we have developed to simulate core-collapse supernovae in 1, 2, and 3 spatial dimensions. We then describe the results of an ongoing suite of 2D simulations initiated from 12, 15, 20, and 25 M{sub o-dot} progenitors. These have all exhibited explosions and are currently in the expanding phase with the shock at between 5,000 and 20,000 km. We also briefly describe an ongoing simulation in 3 spatial dimensions initiated from the 15 M{sub o-dot} progenitor.

  5. Machine learning improves the accuracy of myocardial perfusion scintigraphy results

    International Nuclear Information System (INIS)

    Groselj, C.; Kukar, M.

    2002-01-01

    Objective: Machine learning (ML), an artificial intelligence method, has in the last decade proved to be a useful tool in many fields of decision making, including some fields of medicine. According to reports, its decision accuracy usually exceeds that of human experts. Aim: To assess the applicability of ML in interpreting stress myocardial perfusion scintigraphy results in the coronary artery disease diagnostic process. Patients and methods: The planar stress myocardial perfusion scintigraphy data of 327 patients were re-evaluated in the usual way. Comparing them with the results of coronary angiography, the sensitivity, specificity and accuracy of the investigation were computed. The data were digitized and the decision procedure repeated with the ML program 'Naive Bayesian classifier'. As ML can handle any number of attributes simultaneously, all available disease-related data (regarding history, habitus, risk factors, stress results) were added. The sensitivity, specificity and accuracy of scintigraphy were expressed in this way. The results of both decision procedures were compared. Conclusion: Using the ML method, 19 more patients out of 327 (5.8%) were correctly diagnosed by stress myocardial perfusion scintigraphy. In this way ML could be an important tool for myocardial perfusion scintigraphy decision making
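The classifier named in the abstract can be sketched in a few lines. The toy binary attributes below are invented for illustration and are not the study's actual feature set:

```python
import math
from collections import defaultdict

def train_nb(samples, labels, alpha=1.0):
    """Naive Bayesian classifier over binary features, with Laplace smoothing."""
    priors = defaultdict(float)
    counts = defaultdict(lambda: defaultdict(float))
    for x, y in zip(samples, labels):
        priors[y] += 1
        for i, v in enumerate(x):
            counts[y][(i, v)] += 1
    return priors, counts, len(labels), alpha

def predict(model, x):
    """Return the class with the highest log-posterior under independence."""
    priors, counts, n, alpha = model
    def log_post(y):
        lp = math.log(priors[y] / n)
        for i, v in enumerate(x):
            lp += math.log((counts[y][(i, v)] + alpha) / (priors[y] + 2 * alpha))
        return lp
    return max(priors, key=log_post)

# Hypothetical attributes: (perfusion defect seen, risk factors present)
samples = [(1, 1), (1, 0), (0, 1), (0, 0)]
labels = ["cad", "cad", "normal", "normal"]
model = train_nb(samples, labels)
print(predict(model, (1, 1)))  # -> cad
```

The appeal of the Naive Bayesian classifier in this setting is that adding more attributes (history, habitus, risk factors) costs nothing structurally; each simply contributes one more factor to the posterior.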

  6. Results from the Coded Aperture Neutron Imaging System (CANIS)

    International Nuclear Information System (INIS)

    Brubaker, Erik; Steele, John T.; Brennan, James S.; Hilton, Nathan R.; Marleau, Peter

    2010-01-01

    Because of their penetrating power, energetic neutrons and gamma rays (∼1 MeV) offer the best possibility of detecting highly shielded or distant special nuclear material (SNM). Of these, fast neutrons offer the greatest advantage due to their very low and well understood natural background. We are investigating a new approach to fast-neutron imaging: a coded aperture neutron imaging system (CANIS). Coded aperture neutron imaging should offer a highly efficient solution for improved detection speed, range, and sensitivity. We have demonstrated fast neutron and gamma ray imaging with several different configurations of coded mask patterns and detectors, including an 'active' mask that is composed of neutron detectors. Here we describe our prototype detector and present some initial results from laboratory tests and demonstrations.

  7. Results from the coded aperture neutron imaging system

    International Nuclear Information System (INIS)

    Brubaker, Erik; Steele, John T.; Brennan, James S.; Marleau, Peter

    2010-01-01

    Because of their penetrating power, energetic neutrons and gamma rays (∼1 MeV) offer the best possibility of detecting highly shielded or distant special nuclear material (SNM). Of these, fast neutrons offer the greatest advantage due to their very low and well understood natural background. We are investigating a new approach to fast-neutron imaging: a coded aperture neutron imaging system (CANIS). Coded aperture neutron imaging should offer a highly efficient solution for improved detection speed, range, and sensitivity. We have demonstrated fast neutron and gamma ray imaging with several different configurations of coded mask patterns and detectors, including an 'active' mask that is composed of neutron detectors. Here we describe our prototype detector and present some initial results from laboratory tests and demonstrations.

  8. The Mistra experiment for field containment code validation first results

    International Nuclear Information System (INIS)

    Caron-Charles, M.; Blumenfeld, L.

    2001-01-01

    The MISTRA facility is a large scale experiment designed for the purpose of thermal-hydraulics multi-D code validation. A short description of the facility, the set-up of the instrumentation and the test program are presented. Then the first experimental results, on helium injection into the containment, and the corresponding calculations are detailed. (author)

  9. Monte Carlo simulation of a multi-leaf collimator design for telecobalt machine using BEAMnrc code

    Directory of Open Access Journals (Sweden)

    Ayyangar Komanduri

    2010-01-01

    This investigation aims to design a practical multi-leaf collimator (MLC) system for the cobalt teletherapy machine and check its radiation properties using the Monte Carlo (MC) method. The cobalt machine was modeled using the BEAMnrc Omega-Beam MC system, which can be freely downloaded from the website of the National Research Council (NRC), Canada. Comparison with standard depth dose data tables and the theoretically modeled beam showed good agreement within 2%. An MLC design with low melting point alloy (LMPA) was tested for leakage properties of leaves. The LMPA leaves with a width of 7 mm and height of 6 cm, with tongue and groove of size 2 mm wide by 4 cm height, produced only 4% extra leakage compared to 10 cm height tungsten leaves. With finite 60Co source size, the interleaf leakage was insignificant. This analysis helped to design a prototype MLC as an accessory mount on a cobalt machine. The complete details of the simulation process and analysis of results are discussed.

  10. Monte Carlo simulation of a multi-leaf collimator design for telecobalt machine using BEAMnrc code

    International Nuclear Information System (INIS)

    Ayyangar, Komanduri M.; Narayan, Pradush; Jesuraj, Fenedit; Raju, M.R.; Dinesh Kumar, M.

    2010-01-01

    This investigation aims to design a practical multi-leaf collimator (MLC) system for the cobalt teletherapy machine and check its radiation properties using the Monte Carlo (MC) method. The cobalt machine was modeled using the BEAMnrc Omega-Beam MC system, which could be freely downloaded from the website of the National Research Council (NRC), Canada. Comparison with standard depth dose data tables and the theoretically modeled beam showed good agreement within 2%. An MLC design with low melting point alloy (LMPA) was tested for leakage properties of leaves. The LMPA leaves with a width of 7 mm and height of 6 cm, with tongue and groove of size 2 mm wide by 4 cm height, produced only 4% extra leakage compared to 10 cm height tungsten leaves. With finite 60Co source size, the interleaf leakage was insignificant. This analysis helped to design a prototype MLC as an accessory mount on a cobalt machine. The complete details of the simulation process and analysis of results are discussed. (author)

  11. Results from the First Validation Phase of CAP code

    International Nuclear Information System (INIS)

    Choo, Yeon Joon; Hong, Soon Joon; Hwang, Su Hyun; Kim, Min Ki; Lee, Byung Chul; Ha, Sang Jun; Choi, Hoon

    2010-01-01

    The second stage of Safety Analysis Code Development for Nuclear Power Plants was launched in April 2010 and is scheduled to run through 2012; its scope of work covers code validation through licensing preparation. As a part of this project, CAP (Containment Analysis Package) will follow the same procedures. CAP's validation works are organized hierarchically into four validation steps using: 1) fundamental phenomena; 2) principal phenomena (mixing and transport) and components in containment; 3) demonstration tests by small, medium, and large facilities and International Standard Problems; 4) comparison with other containment codes such as GOTHIC or CONTEMPT. In addition, collecting the experimental data related to containment phenomena and constructing the corresponding database is one of the major works during the second stage of this project. From the validation process for fundamental phenomena, the current capability and the needed future improvements of the CAP code are expected to be revealed. For this purpose, simple but significant problems, which have exact analytical solutions, were selected and calculated for validation of fundamental phenomena. In this paper, some results of validation problems for the selected fundamental phenomena will be summarized and discussed briefly
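The validation step described, comparing a code against problems with exact analytical solutions, can be illustrated with a minimal example (not one of CAP's actual test cases): an explicit Euler integration of dT/dt = -kT checked pointwise against the exact exponential decay.

```python
import math

# Problem with an exact analytical solution: dT/dt = -k*T, T(t) = T0 * exp(-k*t)
k, T0, dt, steps = 0.5, 100.0, 0.001, 2000

T, t, max_err = T0, 0.0, 0.0
for _ in range(steps):
    T += dt * (-k * T)  # explicit Euler step (stand-in for the code under test)
    t += dt
    exact = T0 * math.exp(-k * t)
    max_err = max(max_err, abs(T - exact) / exact)

print(f"max relative error over [0, {t:.0f}] s: {max_err:.2e}")
```

The acceptance criterion for such a fundamental-phenomenon case is simply a bound on this error, tightened as the discretization is refined.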

  12. Virtual machine provisioning, code management, and data movement design for the Fermilab HEPCloud Facility

    Science.gov (United States)

    Timm, S.; Cooper, G.; Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Grassano, D.; Tiradani, A.; Krishnamurthy, R.; Vinayagam, S.; Raicu, I.; Wu, H.; Ren, S.; Noh, S.-Y.

    2017-10-01

    The Fermilab HEPCloud Facility Project has as its goal to extend the current Fermilab facility interface to provide transparent access to disparate resources including commercial and community clouds, grid federations, and HPC centers. This facility enables experiments to perform the full spectrum of computing tasks, including data-intensive simulation and reconstruction. We have evaluated the use of the commercial cloud to provide elasticity to respond to peaks of demand without overprovisioning local resources. Full scale data-intensive workflows have been successfully completed on Amazon Web Services for two High Energy Physics Experiments, CMS and NOνA, at the scale of 58000 simultaneous cores. This paper describes the significant improvements that were made to the virtual machine provisioning system, code caching system, and data movement system to accomplish this work. The virtual image provisioning and contextualization service was extended to multiple AWS regions, and to support experiment-specific data configurations. A prototype Decision Engine was written to determine the optimal availability zone and instance type to run on, minimizing cost and job interruptions. We have deployed a scalable on-demand caching service to deliver code and database information to jobs running on the commercial cloud. It uses the frontier-squid server and CERN VM File System (CVMFS) clients on EC2 instances and utilizes various services provided by AWS to build the infrastructure (stack). We discuss the architecture and load testing benchmarks on the squid servers. We also describe various approaches that were evaluated to transport experimental data to and from the cloud, and the optimal solutions that were used for the bulk of the data transport. Finally, we summarize lessons learned from this scale test, and our future plans to expand and improve the Fermilab HEPCloud Facility.
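The Decision Engine's core choice, the cheapest acceptable availability zone and instance type, can be sketched as a simple constrained minimization. The price table below is invented for illustration (real spot prices and interruption rates vary by zone and time), and this is not the actual Decision Engine code:

```python
# Hypothetical spot-market offers (illustrative values only)
offers = [
    {"zone": "us-east-1a", "type": "c4.xlarge",  "cores": 4, "price": 0.080, "interrupt": 0.05},
    {"zone": "us-east-1b", "type": "c4.xlarge",  "cores": 4, "price": 0.062, "interrupt": 0.20},
    {"zone": "us-west-2a", "type": "m4.2xlarge", "cores": 8, "price": 0.120, "interrupt": 0.03},
]

def choose(offers, min_cores, max_interrupt):
    """Pick the cheapest per-core offer whose interruption rate is acceptable."""
    ok = [o for o in offers
          if o["cores"] >= min_cores and o["interrupt"] <= max_interrupt]
    return min(ok, key=lambda o: o["price"] / o["cores"], default=None)

best = choose(offers, min_cores=4, max_interrupt=0.10)
print(best["zone"], best["type"])  # the cheap but interruption-prone zone is rejected
```

The interesting design point is the two-sided objective: minimizing price alone would select the offer most likely to kill long-running jobs, so interruption risk enters as a constraint.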

  13. Virtual Machine Provisioning, Code Management, and Data Movement Design for the Fermilab HEPCloud Facility

    Energy Technology Data Exchange (ETDEWEB)

    Timm, S. [Fermilab; Cooper, G. [Fermilab; Fuess, S. [Fermilab; Garzoglio, G. [Fermilab; Holzman, B. [Fermilab; Kennedy, R. [Fermilab; Grassano, D. [Fermilab; Tiradani, A. [Fermilab; Krishnamurthy, R. [IIT, Chicago; Vinayagam, S. [IIT, Chicago; Raicu, I. [IIT, Chicago; Wu, H. [IIT, Chicago; Ren, S. [IIT, Chicago; Noh, S. Y. [KISTI, Daejeon

    2017-11-22

    The Fermilab HEPCloud Facility Project has as its goal to extend the current Fermilab facility interface to provide transparent access to disparate resources including commercial and community clouds, grid federations, and HPC centers. This facility enables experiments to perform the full spectrum of computing tasks, including data-intensive simulation and reconstruction. We have evaluated the use of the commercial cloud to provide elasticity to respond to peaks of demand without overprovisioning local resources. Full scale data-intensive workflows have been successfully completed on Amazon Web Services for two High Energy Physics Experiments, CMS and NOνA, at the scale of 58000 simultaneous cores. This paper describes the significant improvements that were made to the virtual machine provisioning system, code caching system, and data movement system to accomplish this work. The virtual image provisioning and contextualization service was extended to multiple AWS regions, and to support experiment-specific data configurations. A prototype Decision Engine was written to determine the optimal availability zone and instance type to run on, minimizing cost and job interruptions. We have deployed a scalable on-demand caching service to deliver code and database information to jobs running on the commercial cloud. It uses the frontier-squid server and CERN VM File System (CVMFS) clients on EC2 instances and utilizes various services provided by AWS to build the infrastructure (stack). We discuss the architecture and load testing benchmarks on the squid servers. We also describe various approaches that were evaluated to transport experimental data to and from the cloud, and the optimal solutions that were used for the bulk of the data transport. Finally, we summarize lessons learned from this scale test, and our future plans to expand and improve the Fermilab HEPCloud Facility.

  14. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    Directory of Open Access Journals (Sweden)

    Ai-bing Zhang

    Full Text Available Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish and two representing non-coding ITS barcodes (rust fungi and brown algae. Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ and Maximum likelihood (ML methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also out-performed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI of 99.75-100%. The new methods also obtained a 96.29% success rate (95%CI: 91.62-98.40% for 484 rust fungi queries and a 98.50% success rate (95%CI: 96.60-99.37% for 1094 brown algae queries, both using ITS barcodes.

  15. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    Science.gov (United States)

    Zhang, Ai-bing; Feng, Jie; Ward, Robert D; Wan, Ping; Gao, Qiang; Wu, Jun; Zhao, Wei-zhong

    2012-01-01

    Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also out-performed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95%CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95%CI: 96.60-99.37%) for 1094 brown algae queries, both using ITS barcodes.
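A reduced sketch of RBF-based species assignment (my own illustration; the paper's DV-RBF and FJ-RBF methods use their own distance and feature definitions): classify a query sequence by Gaussian-kernel similarity of k-mer count vectors against reference sequences, which sidesteps alignment entirely.

```python
import math
from collections import Counter

def kmer_vector(seq, k=2):
    """Sparse k-mer count vector of a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def rbf_similarity(a, b, gamma=0.5):
    """Gaussian (RBF) kernel on two sparse count vectors."""
    keys = set(a) | set(b)
    d2 = sum((a.get(x, 0) - b.get(x, 0)) ** 2 for x in keys)
    return math.exp(-gamma * d2)

def assign(query, references, k=2):
    """Assign the query to the species whose reference maximizes RBF similarity."""
    q = kmer_vector(query, k)
    return max(references, key=lambda s: rbf_similarity(q, kmer_vector(references[s], k)))

# Toy reference barcodes (invented sequences, not from the paper's datasets)
refs = {"species_A": "ACGTACGTACGT", "species_B": "GGGCCCGGGCCC"}
print(assign("ACGTACGAACGT", refs))  # -> species_A
```

Because the features are composition-based rather than position-based, the same machinery works for non-coding ITS barcodes where a robust alignment is hard to obtain, which is the problem the paper sets out to address.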

  16. Results of the Nonelastic Reaction Code BRIEFF for Nuclear Data

    International Nuclear Information System (INIS)

    Duarte, H.

    2009-01-01

    We present recent changes in our nonelastic reaction code BRIEFF and especially in the fast stage of reaction described by the intranuclear cascade (INC) code BRIC. Distributions and excitation functions of residual nuclei production cross sections are shown for proton-induced reaction on target nuclei. Slight improvements are seen in the proton-induced reaction on light nuclei with a closed shell when the energy levels are taken into account in the INC stage. On the other hand, fission gives poor results in the current version. To compare to other nuclear models and LA150 libraries, BRIEFF has been incorporated into MCNPX 2.5.0. Examples of neutron production from thick target irradiation by proton beams between 30 and 350 MeV are presented. Except for some discrepancies, a good agreement with data is obtained on average. (authors)

  17. Emergency medicine summary code for reporting CT scan results: implementation and survey results.

    Science.gov (United States)

    Lam, Joanne; Coughlin, Ryan; Buhl, Luce; Herbst, Meghan; Herbst, Timothy; Martillotti, Jared; Coughlin, Bret

    2018-06-01

    The purpose of the study was to assess the emergency department (ED) providers' interest in and satisfaction with ED CT result reporting before and after the implementation of a standardized summary code for all CT scan reporting. A summary code was provided at the end of all CTs ordered through the ED from August to October of 2016. A retrospective review was completed on all studies performed during this period. A pre- and post-survey was given to both ED and radiology providers. A total of 3980 CT scans, excluding CTAs, were ordered, with 2240 CTs dedicated to the head and neck, 1685 CTs dedicated to the torso, and 55 CTs dedicated to the extremities. Approximately 74% of CT scans were contrast enhanced. Of the 3980 ED CT examinations ordered, 69% had a summary code assigned. Fifteen percent of the coded CTs had a critical or diagnostic positive result. The introduction of an ED CT summary code did not show a definitive improvement in communication. However, the ED providers agree that radiology reports are crucial to their patients' management. There is slightly higher satisfaction with the ED CT codes among providers with less than 5 years of experience compared to more seasoned providers. The implementation of a user-friendly summary code may allow better analysis of results, practice improvement, and quality measurements in the future.

  18. The recycling through melting machining chips: preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Luiz A.T.; Rossi, Jesualdo L., E-mail: luiz.atp@uol.com.br, E-mail: jelrossi@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil). Centro de Ciência e Tecnologia de Materiais

    2017-07-01

    PWR (Pressurized Water Reactor) reactors employ as nuclear fuel UO{sub 2} pellets packed in zirconium alloy tubes, called cladding. The manufacture of the tubes generates machining chips that cannot simply be discarded, since recycling this material is strategic in terms of nuclear technology, legislation, economics and the environment. These nuclear alloys are very expensive, are not produced in Brazil, and are imported for the manufacture of nuclear fuel. This work examines methods not yet studied for recycling Zircaloy chips in an electron beam furnace in order to obtain ingots. In addition, it is intended to melt new Zircaloy alloys from zirconium sponge obtained at IPEN together with imported Zircaloy bars. The mechanical properties and the phases present in the material are to be determined, as well as the characterization of the microstructures by optical microscopy. This work therefore aims at creating a new line of research addressing methods to recycle Zircaloy chips and to reduce by a factor of 30 the volume of the enormous amount of material stored in the form of machining chips, making it possible to produce other components for nuclear or chemical industry use, as well as conducting basic development research. (author)

  19. Addressing the scaling issue by thermalhydraulic system codes: recent results

    International Nuclear Information System (INIS)

    D'auria, F.; Cherubini, M.; Galassi, G.M.; Muellner, N. E-mail of corresponding author: dauria@ing.unipi.it

    2005-01-01

    This lecture presents an introduction to the scaling issue following a 'top-down' approach. That is, recent studies dealing with scaling analysis in LWRs, with special regard to the Russian WWER reactor type, are presented to demonstrate the phenomena important for scaling, and more precisely the counterpart test (CT) methodology. As an example, one CT, a Small Break LOCA carried out in the PSB facility, is presented. PSB is a full-height, full-pressure rig that reproduces a WWER 1000; its power and volume scaling factor is 1:300. The CT was designed by deriving boundary and initial conditions from the same test performed in LOBI (which reproduces a PWR). The adopted scaling approach is based on the selection of a few characteristic parameters, chosen taking into account their relevance to the behaviour of the transient. The calculation of the SBLOCA has been performed using the Relap5/Mod3.3 computer code, and its accuracy has been demonstrated by qualitative and quantitative evaluation. For the quantitative evaluation the use of the FFT Based Method is foreseen, and its acceptability limits were shown to be satisfied. The aim of the example is to give an overview of the theoretical concepts of scaling, which is termed the 'scaling strategy' and comprises the steps of the selected scaling approach. At the same time, interesting results from ongoing research projects are presented. Comparing experimental data, it was found that the investigated facilities show similar behaviour in their time trends and predict, on a qualitative level, the same thermal hydraulic phenomena. The main results are summarized as follows: PSB and LOBI main parameters have similar trends. This confirms the validity of the adopted scaling approach and shows that the PWR and WWER reactor types behave very similarly. No new phenomena occur during the CT, notwithstanding that the two facilities have a different layout, and the already known
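    The FFT Based Method mentioned in the abstract quantifies code accuracy by comparing calculated and experimental time trends in the frequency domain. A minimal sketch of its average amplitude (AA) figure of merit is given below; the function name and the toy pressure-like signals are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    def fftbm_average_amplitude(exp, calc):
        """Average amplitude (AA) of the FFT Based Method: the spectrum of the
        calculation error normalised by the spectrum of the experimental
        signal. AA = 0 means perfect agreement; larger AA means worse accuracy."""
        err_spectrum = np.abs(np.fft.rfft(np.asarray(calc) - np.asarray(exp)))
        exp_spectrum = np.abs(np.fft.rfft(np.asarray(exp)))
        return err_spectrum.sum() / exp_spectrum.sum()

    # Toy time trends: a pressure-like decay and a slightly off calculation
    t = np.linspace(0.0, 10.0, 512)
    experiment = 7.0 * np.exp(-0.3 * t)
    calculation = 7.0 * np.exp(-0.32 * t)

    aa = fftbm_average_amplitude(experiment, calculation)
    print(f"AA = {aa:.4f}")
    ```

    In published FFTBM applications, smaller AA values indicate better agreement; the exact acceptability thresholds vary by parameter and are defined in the method's literature.
    
    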

  20. Verification of SACI-2 computer code comparing with experimental results of BIBLIS-A and LOOP-7 computer code

    International Nuclear Information System (INIS)

    Soares, P.A.; Sirimarco, L.F.

    1984-01-01

    SACI-2 is a computer code created to study the dynamic behaviour of a PWR nuclear power plant. To evaluate the quality of its results, SACI-2 was used to recalculate commissioning tests done at the BIBLIS-A nuclear power plant and to calculate postulated transients for the Angra-2 reactor. The SACI-2 results for BIBLIS-A showed agreement as good as those calculated with the KWU Loop 7 computer code for Angra-2. (E.G.) [pt

  1. Recent results in the decoding of Algebraic geometry codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund

    1998-01-01

    We analyse the known decoding algorithms for algebraic geometry codes in the case where the number of errors is [(dFR-1)/2]+1, where dFR is the Feng-Rao distance.

  2. Extraction of state machines of legacy C code with Cpp2XMI

    NARCIS (Netherlands)

    Brand, van den M.G.J.; Serebrenik, A.; Zeeland, van D.; Serebrenik, A.

    2008-01-01

    Analysis of legacy code is often focussed on extracting either metrics or relations, e.g. call relations or structure relations. For object-oriented programs, e.g. Java or C++ code, such relations are commonly represented as UML diagrams: e.g., such tools as Columbus [1] and Cpp2XMI [2] are capable

  3. Benchmarking Reactor Systems Studies by Comparison of EU and Japanese System Code Results for Different DEMO Concepts

    Energy Technology Data Exchange (ETDEWEB)

    Kemp, R.; Ward, D.J., E-mail: richard.kemp@ccfe.ac.uk [EURATOM/CCFE Association, Culham Centre for Fusion Energy, Abingdon (United Kingdom); Nakamura, M.; Tobita, K. [Japan Atomic Energy Agency, Rokkasho (Japan); Federici, G. [EFDA Garching, Max Plank Institut fur Plasmaphysik, Garching (Germany)

    2012-09-15

    Full text: Recent systems studies work within the Broader Approach framework has focussed on benchmarking the EU systems code PROCESS against the Japanese code TPC for conceptual DEMO designs. This paper describes benchmarking work for a conservative, pulsed DEMO and an advanced, steady-state, high-bootstrap-fraction DEMO. The former machine is an R{sub 0} = 10 m, a = 2.5 m, {beta}{sub N} < 2.0 device with no enhancement in energy confinement over IPB98. The latter machine is smaller (R{sub 0} = 8 m, a = 2.7 m), with {beta}{sub N} = 3.0, enhanced confinement, and high bootstrap fraction f{sub BS} = 0.8. These options were chosen to test the codes across a wide range of parameter space. While generally in good agreement, some of the code outputs differ. In particular, differences have been identified in the impurity radiation models and flux swing calculations. The global effects of these differences are described, and approaches to identifying the best models, including future experiments, are discussed. Results of varying some of the assumptions underlying the modelling are also presented, demonstrating the sensitivity of the solutions to technological limitations and providing guidance for where further research could be focussed. (author)

  4. On Coding the States of Sequential Machines with the Use of Partition Pairs

    DEFF Research Database (Denmark)

    Zahle, Torben U.

    1966-01-01

    This article introduces a new technique of making state assignments for sequential machines. The technique is in line with the approach used by Hartmanis [1], Stearns and Hartmanis [3], and Curtis [4]. It parallels the work of Dolotta and McCluskey [7], although it was developed independently...

  5. Main studies results for introduction of EB machine to Vietnam and for its application

    International Nuclear Information System (INIS)

    Tran, Khac An; Nguyen, Quoc Hien; Le, Hai

    2004-01-01

    Under the national program on utilization of an EB machine for research and development purposes and the FNCA project on application of electron accelerators, the Research and Development Center for Radiation Technology (VINAGAMMA), as a counterpart, is preparing the technical, manpower and financial conditions for the introduction of an EB machine for R and D purposes. The paper presents the main study results in the field of radiation processing aimed at bringing applications of EB technology to Vietnam, together with studies on the selection of an EB machine for R and D purposes in Vietnam. (author)

  6. Post-processing of the TRAC code's results

    International Nuclear Information System (INIS)

    Baron, J.H.; Neuman, D.

    1987-01-01

    The TRAC code serves for the analysis of accidents in nuclear installations from the thermohydraulic point of view. A program has been developed with the aim of rapidly processing the information generated by the code, with on-screen graphics in both high and low resolution, or hard copy via printer or plotter. Although the programs are intended to be used after the TRAC runs, they may also be used while TRAC is running, in order to observe the calculation in progress. The advantages of employing this type of tool, its present capacity and its possibilities for expansion according to the user's needs are described herein. (Author)

  7. Computer codes for the calculation of vibrations in machines and structures

    International Nuclear Information System (INIS)

    1989-01-01

    After an introductory paper on the typical requirements to be met by vibration calculations, the first two sections of the conference papers present universal as well as specific finite-element codes tailored to solve individual problems. In addition to finite elements, the calculation of dynamic processes now increasingly applies the method of multi-component systems, which takes into account rigid bodies or partial structures and linking and joining elements. This method, too, is explained with reference to universal computer codes and to special versions. In mechanical engineering, rotary vibrations are a major problem, and under this topic the conference papers exclusively deal with codes that also take into account special effects such as electromechanical coupling, non-linearities in clutches, etc. (orig./HP) [de

  8. An experimental result of surface roughness machining performance in deep hole drilling

    Directory of Open Access Journals (Sweden)

    Mohamad Azizah

    2016-01-01

    Full Text Available This study presents experimental results of a deep hole drilling process for steel material at different machining parameters, namely feed rate (f), spindle speed (s), depth of the hole (d) and MQL number of drops (m), on surface roughness, Ra. The experiment was designed using a two-level full factorial design of experiments (DoE) with centre points to collect surface roughness, Ra, values. Signal-to-noise (S/N) ratio analysis was used to find the optimum level of each machining parameter in the experiment.
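    For a "smaller-the-better" response such as surface roughness Ra, the Taguchi S/N ratio analysis referred to above reduces to a simple formula; the sketch below uses made-up Ra replicates, not the paper's data.

    ```python
    import math

    def sn_smaller_the_better(values):
        """Taguchi S/N ratio for a smaller-the-better response:
        S/N = -10 * log10(mean(y^2)). A larger S/N indicates lower,
        more consistent roughness, so the level with the highest S/N wins."""
        mean_sq = sum(y * y for y in values) / len(values)
        return -10.0 * math.log10(mean_sq)

    # Hypothetical Ra replicates (micrometres) at two levels of one factor
    ra_low_feed = [1.10, 1.18, 1.05]
    ra_high_feed = [1.92, 2.10, 1.85]

    sn_low = sn_smaller_the_better(ra_low_feed)
    sn_high = sn_smaller_the_better(ra_high_feed)
    best = "low feed" if sn_low > sn_high else "high feed"
    print(best)  # the level with the higher S/N ratio is the optimum
    ```

    In a full factorial study this comparison is repeated for each factor (f, s, d, m) to pick the optimum level of each.
    
    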

  9. lncRScan-SVM: A Tool for Predicting Long Non-Coding RNAs Using Support Vector Machine.

    Science.gov (United States)

    Sun, Lei; Liu, Hui; Zhang, Lin; Meng, Jia

    2015-01-01

    Functional long non-coding RNAs (lncRNAs) have been bringing novel insight into biological study; however, it is still not trivial to accurately distinguish lncRNA transcripts (LNCTs) from protein-coding ones (PCTs). As various information and data about lncRNAs have been preserved by previous studies, it is appealing to develop novel methods to identify lncRNAs more accurately. Our method lncRScan-SVM aims at classifying PCTs and LNCTs using a support vector machine (SVM). The gold-standard datasets for lncRScan-SVM model training, lncRNA prediction and method comparison were constructed according to the GENCODE gene annotations of human and mouse, respectively. By integrating features derived from gene structure, transcript sequence, potential codon sequence and conservation, lncRScan-SVM outperforms other approaches, as evaluated by several criteria such as sensitivity, specificity, accuracy, Matthews correlation coefficient (MCC) and area under the curve (AUC). In addition, several known human lncRNA datasets were assessed using lncRScan-SVM. LncRScan-SVM is an efficient tool for predicting lncRNAs, and it is quite useful for current lncRNA studies.
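    To illustrate the classification idea only (this is not the authors' implementation, features, or data), a linear SVM can be trained by sub-gradient descent on the hinge loss over two synthetic transcript features such as ORF coverage and log-length:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic two-feature transcripts: protein-coding (+1) tend to have
    # high ORF coverage; lncRNAs (-1) low. Purely illustrative clusters.
    n = 200
    pct = rng.normal([0.7, 2.5], 0.1, size=(n, 2))   # coding transcripts
    lnct = rng.normal([0.2, 1.5], 0.1, size=(n, 2))  # long non-coding
    X = np.vstack([pct, lnct])
    y = np.concatenate([np.ones(n), -np.ones(n)])

    # Linear SVM via batch sub-gradient descent on hinge loss + L2 penalty
    w, b, lam, lr = np.zeros(2), 0.0, 0.01, 0.1
    for epoch in range(500):
        margins = y * (X @ w + b)
        mask = margins < 1  # samples violating the margin contribute gradient
        if mask.any():
            grad_w = lam * w - (y[mask, None] * X[mask]).mean(axis=0)
            grad_b = -y[mask].mean()
        else:
            grad_w, grad_b = lam * w, 0.0
        w -= lr * grad_w
        b -= lr * grad_b

    accuracy = np.mean(np.sign(X @ w + b) == y)
    print(f"training accuracy: {accuracy:.2f}")
    ```

    The published tool uses richer features (gene structure, codon potential, conservation) and a standard SVM implementation; the point here is only the margin-based decision rule.
    
    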

  10. Two NP-hardness results for preemptive minsum scheduling of unrelated parallel machines

    NARCIS (Netherlands)

    Sitters, R.A.; Aardal, K.; Gerards, B.

    2001-01-01

    We show that the problems of minimizing total completion time and of minimizing the number of late jobs on unrelated parallel machines, when preemption is allowed, are both NP-hard in the strong sense. The former result settles a long-standing open question.

  11. Analysis of results of AZTRAN and AZKIND codes for a BWR

    International Nuclear Information System (INIS)

    Bastida O, G. E.; Vallejo Q, J. A.; Galicia A, J.; Francois L, J. L.; Xolocostli M, J. V.; Rodriguez H, A.; Gomez T, A. M.

    2016-09-01

    This paper presents an analysis of results obtained from simulations performed with the neutron transport code AZTRAN and the neutron diffusion kinetics code AZKIND, based on comparisons with models corresponding to a typical BWR, in order to verify the behavior and reliability of the values obtained with these codes at their current stage of development. For this purpose, simulations of different geometries were carried out with validated nuclear codes, such as CASMO, MCNP5 and Serpent. The results obtained are considered adequate since they are comparable with those obtained and reported with other codes, based mainly on the neutron multiplication factor and the power distribution. (Author)

  12. An improved machine learning protocol for the identification of correct Sequest search results

    Directory of Open Access Journals (Sweden)

    Lu Hui

    2010-12-01

    Full Text Available Abstract. Background: Mass spectrometry has become a standard method by which the proteomic profile of cell or tissue samples is characterized. To fully take advantage of tandem mass spectrometry (MS/MS) techniques in large-scale protein characterization studies, robust and consistent data analysis procedures are crucial. In this work we present a machine-learning-based protocol for the identification of correct peptide-spectrum matches from Sequest database search results, improving on previously published protocols. Results: The developed model improves on published machine learning classification procedures by 6% as measured by the area under the ROC curve. Further, we show how the developed model can be presented as an interpretable tree of additive rules, thereby effectively removing the 'black-box' notion often associated with machine learning classifiers and allowing for comparison with expert rules-of-thumb. Finally, a method for extending the developed peptide identification protocol to give probabilistic estimates of the presence of a given protein is proposed and tested. Conclusions: We demonstrate the construction of a high-accuracy classification model for Sequest search results from MS/MS spectra obtained using MALDI ionization. The developed model performs well in identifying correct peptide-spectrum matches and is easily extendable to the protein identification problem. The relative ease with which additional experimental parameters can be incorporated into the classification framework, to give additional discriminatory power, allows for future tailoring of the model to take advantage of information from specific instrument set-ups.
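    The area under the ROC curve used to compare the classifiers above can be computed directly from scores and labels via the rank-sum (Mann-Whitney) formulation; the peptide-spectrum match scores below are synthetic, not Sequest output.

    ```python
    import numpy as np

    def roc_auc(labels, scores):
        """AUC as the probability that a randomly chosen correct match
        scores higher than a randomly chosen incorrect one (ties count half)."""
        labels = np.asarray(labels, dtype=bool)
        scores = np.asarray(scores, dtype=float)
        pos, neg = scores[labels], scores[~labels]
        wins = (pos[:, None] > neg[None, :]).sum() \
             + 0.5 * (pos[:, None] == neg[None, :]).sum()
        return wins / (len(pos) * len(neg))

    # Synthetic PSM scores: correct matches (1) tend to score higher than
    # incorrect ones (0)
    rng = np.random.default_rng(1)
    y = np.array([1] * 100 + [0] * 100)
    s = np.concatenate([rng.normal(2.0, 1.0, 100), rng.normal(0.0, 1.0, 100)])

    print(f"AUC = {roc_auc(y, s):.3f}")
    ```

    An AUC of 0.5 corresponds to a random classifier and 1.0 to perfect separation, which is why a 6% AUC gain is a meaningful improvement.
    
    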

  13. Quran Vibrations in Braille Codes Using the Finite State Machine Technique

    OpenAIRE

    Abualkishik, Abdallah M; Omar, Khairuddin

    2010-01-01

    In this study, the Quran Braille System was developed. It provides blind Muslims an easy way to read and understand the Holy Quran, as well as the chance for other blind people to learn about Islam. The experiments produced a full translation prototype for the Quran verses, including the associated vibrations. The results of the experiment can be printed out using a Braille printer, demonstrating the usefulness of this study to researchers and society at large. This study will adhere...
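    A finite state machine for character-to-Braille translation can be sketched as a transition table. The mapping below is a tiny hypothetical Latin-letter example with a two-state letter/number mode, not the actual Quranic Braille rules developed in the study.

    ```python
    # Each Braille cell is a set of raised dots, numbered 1-6.
    # Hypothetical subset of the Latin Braille alphabet.
    BRAILLE = {
        "a": {1}, "b": {1, 2}, "c": {1, 4}, "k": {1, 3}, "l": {1, 2, 3},
    }
    NUMBER_SIGN = {3, 4, 5, 6}                    # prefix cell for digit mode
    DIGITS = {"1": {1}, "2": {1, 2}, "3": {1, 4}}  # digits reuse the a-c cells

    def translate(text):
        """Two-state machine: LETTER mode emits letter cells directly; the
        first digit switches to NUMBER mode and emits the number-sign prefix."""
        state, cells = "LETTER", []
        for ch in text:
            if ch in DIGITS:
                if state == "LETTER":          # transition LETTER -> NUMBER
                    cells.append(NUMBER_SIGN)
                    state = "NUMBER"
                cells.append(DIGITS[ch])
            else:
                state = "LETTER"               # any letter restores LETTER mode
                cells.append(BRAILLE[ch])
        return cells

    cells = translate("ab12")
    print(len(cells))  # 5 cells: a, b, number-sign, 1, 2
    ```

    Real Braille transcription systems, including the Quranic one described, carry much richer state (contractions, diacritics, recitation marks), but they follow the same table-driven pattern.
    
    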

  14. Investigation of machining damage and tool wear resulting from drilling powder metal aluminum alloy

    Energy Technology Data Exchange (ETDEWEB)

    Fell, H.A. [Lockheed Martin Energy Systems, Inc., Oak Ridge, TN (United States)

    1997-05-01

    This report documents the cutting of aluminum powder metallurgy (PM) parts for the North Carolina Manufacturing Extension Partnership. The parts, an aluminum powder metal formulation, were supplied by Sinter Metals Inc., of Conover, North Carolina. The intended use of the alloy is for automotive components. Machining tests were conducted at Y-12 in the machine shop of the Skills Demonstration Center in Building 9737. Testing was done on June 2 and June 3, 1997. The powder metal alloy tested is very abrasive and tends to wear craters into, and erode, the chip-washed face of the drills used. It also produced large amounts of flank wear and degraded performance in most drills. Anti-wear coatings on the drills seemed to have an effect: coated drills showed less wear for the same amount of cutting. The usefulness of coolants and lubricants in reducing tool wear and chipping/breakout was not investigated.

  15. Results from Commissioning of the Energy Extraction Facilities of the LHC Machine

    CERN Document Server

    Coelingh, G J; Mess, K H

    2008-01-01

    The risk of damage to the superconducting magnets, bus bars and current leads of the LHC machine in case of a resistive transition (quench) is being minimized by adequate protection. The protection is based on early quench detection, bypassing the quenching magnets by cold diodes, energy density dilution in the quenching magnets using heaters and, eventually, energy extraction. For two hundred and twenty-six LHC circuits (600 A and 13 kA) extraction of the stored magnetic energy to external dump resistors was required. All these systems are now installed in the machine and the final hardware commissioning has been undertaken. After a short description of the topology and definitive features, layouts and parameters of these systems the paper will focus on the results from their successful commissioning and an analysis of the system performance.

  16. The Coding Causes of Death in HIV (CoDe) Project: initial results and evaluation of methodology

    DEFF Research Database (Denmark)

    Kowalska, Justyna D; Friis-Møller, Nina; Kirk, Ole

    2011-01-01

    The Coding Causes of Death in HIV (CoDe) Project aims to deliver a standardized method for coding the underlying cause of death in HIV-positive persons, suitable for clinical trials and epidemiologic studies.

  17. Results of Investigative Tests of Gas Turbine Engine Compressor Blades Obtained by Electrochemical Machining

    Science.gov (United States)

    Kozhina, T. D.; Kurochkin, A. V.

    2016-04-01

    The paper highlights results of investigative tests of GTE compressor Ti-alloy blades obtained by electrochemical machining with oscillating tool-electrodes, carried out in order to define the optimal parameters of the ECM process that attain the blade quality parameters specified in the design documentation while providing maximal performance. New technological methods are suggested based on the test results; in particular, the application of vibrating tool-electrodes and the use of locating elements made of high-strength materials significantly extend the capabilities of this method.

  18. Validation of thermohydraulic codes by comparison of experimental results with computer simulations

    International Nuclear Information System (INIS)

    Madeira, A.A.; Galetti, M.R.S.; Pontedeiro, A.C.

    1989-01-01

    The results obtained by simulating three cases from the CANON depressurization experiment, using the TRAC-PF1 computer code, version 7.6, implemented on the VAX-11/750 computer of the Brazilian CNEN, are presented. The CANON experiment was chosen as the first thermal-hydraulics standard problem to be discussed at ENFIR for comparing results from different computer codes with results obtained experimentally. The ability of the TRAC-PF1 code to predict the depressurization phase of a loss of primary coolant accident in pressurized water reactors is evaluated. (M.C.K.) [pt

  19. Verification of the Korsar code on results of experiments executed on the PSB-VVER facility

    International Nuclear Information System (INIS)

    Roginskaya, V.L.; Pylev, S.S.; Elkin, I.V.

    2005-01-01

    Full text of publication follows: This paper presents some results of computational research executed within the framework of the verification of the KORSAR thermal hydraulic code. This code was designed at the A.P. Aleksandrov NITI (Russia). The general purpose of the work was the development of a nodalization scheme of the PSB-VVER integral facility, scheme testing and computational modelling of the experiment 'The PSB-VVER Natural Circulation Test With Stepwise Reduction of the Primary Inventory'. The NC test was performed within the framework of the OECD PSB-VVER Project (task no. 3). This Project is focused upon the provision of experimental data for codes assessment with regard to VVER analysis. The paper presents a nodalization scheme of the PSB-VVER facility and results of pre- and post-test calculations of the specified experiment, obtained with the KORSAR code. The experiment data and the KORSAR pre-test calculation results are in good agreement. A post-test calculation of the experiment with the KORSAR code has been performed in order to assess the code's capability to simulate the phenomena relevant to the test. The code showed a reasonable prediction of the phenomena measured in the experiment. (authors)

  20. Verification of the Korsar code on results of experiments executed on the PSB-VVER facility

    Energy Technology Data Exchange (ETDEWEB)

    Roginskaya, V.L.; Pylev, S.S.; Elkin, I.V. [NSI RRC ' Kurchatov Institute' , Kurchatov Sq., 1, Moscow, 123182 (Russian Federation)

    2005-07-01

    Full text of publication follows: This paper presents some results of computational research executed within the framework of the verification of the KORSAR thermal hydraulic code. This code was designed at the A.P. Aleksandrov NITI (Russia). The general purpose of the work was the development of a nodalization scheme of the PSB-VVER integral facility, scheme testing and computational modelling of the experiment 'The PSB-VVER Natural Circulation Test With Stepwise Reduction of the Primary Inventory'. The NC test was performed within the framework of the OECD PSB-VVER Project (task no. 3). This Project is focused upon the provision of experimental data for codes assessment with regard to VVER analysis. The paper presents a nodalization scheme of the PSB-VVER facility and results of pre- and post-test calculations of the specified experiment, obtained with the KORSAR code. The experiment data and the KORSAR pre-test calculation results are in good agreement. A post-test calculation of the experiment with the KORSAR code has been performed in order to assess the code's capability to simulate the phenomena relevant to the test. The code showed a reasonable prediction of the phenomena measured in the experiment. (authors)

  1. Validation of the WIMSD4M cross-section generation code with benchmark results

    International Nuclear Information System (INIS)

    Deen, J.R.; Woodruff, W.L.; Leal, L.E.

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D2O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented

  2. Validation of the WIMSD4M cross-section generation code with benchmark results

    Energy Technology Data Exchange (ETDEWEB)

    Deen, J.R.; Woodruff, W.L. [Argonne National Lab., IL (United States); Leal, L.E. [Oak Ridge National Lab., TN (United States)

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D{sub 2}O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  3. The VENUS-7 benchmarks. Results from state-of-the-art transport codes and nuclear data

    International Nuclear Information System (INIS)

    Zwermann, Winfried; Pautz, Andreas; Timm, Wolf

    2010-01-01

    For the validation of both nuclear data and computational methods, comparisons with experimental data are necessary. Most advantageous are assemblies where not only the multiplication factors or critical parameters were measured, but also additional quantities like reactivity differences or pin-wise fission rate distributions have been assessed. Currently there is a comprehensive activity to evaluate such measurements and incorporate them in the International Handbook of Evaluated Reactor Physics Benchmark Experiments. A large number of such experiments was performed at the VENUS zero power reactor at SCK/CEN in Belgium in the sixties and seventies. The VENUS-7 series was specified as an international benchmark within the OECD/NEA Working Party on Scientific Issues of Reactor Systems (WPRS), and results obtained with various codes and nuclear data evaluations were summarized. In the present paper, results of high-accuracy transport codes with full spatial resolution with up-to-date nuclear data libraries from the JEFF and ENDF/B evaluations are presented. The comparisons of the results, both code-to-code and with the measured data, are augmented by uncertainty and sensitivity analyses with respect to nuclear data uncertainties. For the multiplication factors, these are performed with the TSUNAMI-3D code from the SCALE system. In addition, uncertainties in the reactivity differences are analyzed with the TSAR code which is available from the current SCALE-6 version. (orig.)

  4. Results of the recent investigations of the accuracy of C-PORCA code

    International Nuclear Information System (INIS)

    Pos, I.

    2001-01-01

    During the last year there has been an essential demand to establish the C-PORCA code accuracy for the newly developed BNFL fuel. As is well known, the BNFL fuel assembly developed for NPP Paks contains fuel pins with gadolinium. These pins are situated near the central tube, so their unusual effects had to be investigated. In the present investigation the results of the C-PORCA code and the APA-H Westinghouse code system were compared. In this comparison the differences between node-wise and pin-wise power distributions were calculated as well. The tests performed have reinforced the applicability and capability of the C-PORCA code for WWER-440 core analysis (Authors)

  5. Cracking the code: a decode strategy for the international business machines punch cards of Korean war soldiers.

    Science.gov (United States)

    Mitsunaga, Erin M

    2006-05-01

    During the Korean War, International Business Machines (IBM) punch cards were created for every individual involved in military combat. Each card contained all pertinent personal information about the individual and was utilized to keep track of all soldiers involved. At present, however, all that is known about these punch cards is their format and their significance; there is little to no information on how these cards were created or how to interpret the information they contain without the aid of the computer system used during the war. Today, it is believed there is no one available to explain this computerized system, nor do the original computers exist. This decode strategy is the result of an attempt to decipher the information on these cards through the use of all available medical and dental records for each individual examined. By cross-referencing the relevant personal information with the known format of the cards, a basic guess-and-check method was utilized. After examining hundreds of IBM punch cards, however, it has become clear that the punch card method of recording information was not infallible. In some cases, there are gaps of information on cards where the data are recorded in personal records; in others, information was punched incorrectly onto the cards, perhaps as the result of a transcription error. Taken altogether, it is clear that the information contained on each individual's card should be regarded as simply another form of personal documentation.
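    The guess-and-check cross-referencing described above amounts to scoring a candidate column layout against independently known records. The sketch below uses an entirely hypothetical layout and toy data; reconstructing the true IBM card format is precisely what the study had to do.

    ```python
    # Hypothetical candidate layout: field name -> (start, end) column slice.
    # These positions are invented for illustration only.
    CANDIDATE_LAYOUT = {
        "service_number": (0, 8),
        "birth_year": (8, 12),
        "blood_type": (12, 14),
    }

    def decode(card_row, layout):
        """Slice a card's character row according to a candidate layout."""
        return {field: card_row[a:b].strip() for field, (a, b) in layout.items()}

    def layout_score(cards, known_records, layout):
        """Guess-and-check: fraction of decoded fields that agree with the
        independently known medical/dental records."""
        hits = total = 0
        for row, known in zip(cards, known_records):
            decoded = decode(row, layout)
            for field, value in known.items():
                total += 1
                hits += decoded.get(field) == value
        return hits / total

    cards = ["1234567 1951O+", "7654321 1949AB"]
    known = [
        {"service_number": "1234567", "birth_year": "1951", "blood_type": "O+"},
        {"service_number": "7654321", "birth_year": "1949", "blood_type": "AB"},
    ]
    print(layout_score(cards, known, CANDIDATE_LAYOUT))  # 1.0 if the guess fits
    ```

    A candidate layout that scores well across many cards, with known gaps and transcription errors accounting for the misses, is accepted; poorly scoring layouts are revised and retried.
    
    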

  6. Sensitivity Analysis of FEAST-Metal Fuel Performance Code: Initial Results

    International Nuclear Information System (INIS)

    Edelmann, Paul Guy; Williams, Brian J.; Unal, Cetin; Yacout, Abdellatif

    2012-01-01

    This memo documents the completion of the LANL milestone, M3FT-12LA0202041, describing methodologies and initial results using FEAST-Metal. The FEAST-Metal code calculations for this work are being conducted at LANL in support of on-going activities related to sensitivity analysis of fuel performance codes. The objective is to identify important macroscopic parameters of interest to modeling and simulation of metallic fuel performance. This report summarizes our preliminary results for the sensitivity analysis using 6 calibration datasets for metallic fuel developed at ANL for EBR-II experiments. A sensitivity ranking methodology was deployed to narrow down the selected parameters for the current study. There are approximately 84 calibration parameters in the FEAST-Metal code, of which 32 were ultimately used in Phase II of this study. Preliminary results of this sensitivity analysis led to the following ranking of FEAST models for future calibration and improvements: fuel conductivity, fission gas transport/release, fuel creep, and precipitation kinetics. More experimental data are needed to validate the calibrated parameter distributions for future uncertainty quantification studies with FEAST-Metal. Results of this study also served to point out some code deficiencies and possible errors, and these are being investigated in order to determine root causes and to improve upon the existing code models.
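A sensitivity ranking of this general kind can be sketched by sampling the uncertain parameters, running the model, and ordering parameters by the magnitude of their rank correlation with the output; the parameter names and the toy response below are invented for illustration and are not FEAST-Metal's models:

```python
# Minimal sketch of a sensitivity ranking: sample uncertain inputs, run a
# (here: toy) model, rank parameters by |Spearman rank correlation| with
# the output. Parameter names and the response are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_runs = 500
params = {
    "fuel_conductivity": rng.uniform(0.8, 1.2, n_runs),
    "gas_release": rng.uniform(0.5, 1.5, n_runs),
    "fuel_creep": rng.uniform(0.9, 1.1, n_runs),
}

# Toy response: strong dependence on conductivity, weak on creep.
y = (2.0 * params["fuel_conductivity"]
     + 0.5 * params["gas_release"]
     + 0.1 * params["fuel_creep"]
     + rng.normal(0, 0.05, n_runs))

def spearman(x, y):
    # Rank both samples, then take the Pearson correlation of the ranks.
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

ranking = sorted(params, key=lambda p: abs(spearman(params[p], y)), reverse=True)
print(ranking)  # conductivity should come out on top
```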

  7. School Vending Machine Purchasing Behavior: Results from the 2005 YouthStyles Survey

    Science.gov (United States)

    Thompson, Olivia M.; Yaroch, Amy L.; Moser, Richard P.; Rutten, Lila J. Finney; Agurs-Collins, Tanya

    2010-01-01

    Background: Competitive foods are often available in school vending machines. Providing youth with access to school vending machines, and thus competitive foods, is of concern, considering the continued high prevalence of childhood obesity: competitive foods tend to be energy dense and nutrient poor and can contribute to increased energy intake in…

  8. Development of computer code SIMPSEX for simulation of FBR fuel reprocessing flowsheets: II. additional benchmarking results

    International Nuclear Information System (INIS)

    Shekhar Kumar; Koganti, S.B.

    2003-07-01

    Benchmarking and application of a computer code SIMPSEX for high plutonium FBR flowsheets was reported recently in an earlier report (IGC-234). Improvements and recompilation of the code (Version 4.01, March 2003) required re-validation with the existing benchmarks as well as additional benchmark flowsheets. Improvements in the high Pu region (Pu Aq >30 g/L) resulted in better results in the 75% Pu flowsheet benchmark. Below 30 g/L Pu Aq concentration, results were identical to those from the earlier version (SIMPSEX Version 3, code compiled in 1999). In addition, 13 published flowsheets were taken as additional benchmarks. Eleven of these flowsheets cover a wide range of feed concentrations, and a few of them are β-γ active runs with FBR fuels having a wide distribution of burnup and Pu ratios. A published total partitioning flowsheet using externally generated U(IV) was also simulated using SIMPSEX. SIMPSEX predictions were compared with listed predictions from conventional SEPHIS, PUMA, PUNE and PUBG. SIMPSEX results were found to be comparable to, and in some cases better than, the results from the above-listed codes. In addition, recently reported UREX demo results along with AMUSE simulations are also compared with SIMPSEX predictions. Results of the benchmarking of SIMPSEX with these 14 benchmark flowsheets are discussed in this report. (author)

  9. Use of system code to estimate equilibrium tritium inventory in fusion DT machines, such as ARIES-AT and components testing facilities

    International Nuclear Information System (INIS)

    Wong, C.P.C.; Merrill, B.

    2014-01-01

    Highlights: • With the use of a system code, the tritium burn-up fraction (f_burn) can be determined. • The initial tritium inventory for steady state DT machines can be estimated. • f_burn of ARIES-AT, CFETR and FNSF-AT are in the range of 1–2.8%. • The respective total tritium inventories are 7.6 kg, 6.1 kg, and 5.2 kg. - Abstract: ITER is under construction and will begin operation in 2020. This is the first 500 MW(fusion) class DT device, and since it is not going to breed tritium, it will consume most of the limited supply of tritium resources in the world. Yet, in parallel, DT fusion nuclear component testing machines will be needed to provide technical data for the design of DEMO. It becomes necessary to estimate the tritium burn-up fraction and the corresponding initial tritium inventory and doubling time of these machines for the planning of future supply and utilization of tritium. With the use of a system code, the tritium burn-up fraction and initial tritium inventory for steady state DT machines can be estimated. Estimated tritium burn-up fractions of FNSF-AT, CFETR-R and ARIES-AT are in the range of 1–2.8%. The corresponding total equilibrium tritium inventories of the plasma flow and tritium processing system, with the DCLL blanket option, are 7.6 kg, 6.1 kg, and 5.2 kg for ARIES-AT, CFETR-R and FNSF-AT, respectively

  10. Use of system code to estimate equilibrium tritium inventory in fusion DT machines, such as ARIES-AT and components testing facilities

    Energy Technology Data Exchange (ETDEWEB)

    Wong, C.P.C., E-mail: wongc@fusion.gat.com [General Atomics, San Diego, CA (United States); Merrill, B. [Idaho National Laboratory, Idaho Falls, ID (United States)

    2014-10-15

    Highlights: • With the use of a system code, tritium burn-up fraction (f{sub burn}) can be determined. • Initial tritium inventory for steady state DT machines can be estimated. • f{sub burn} of ARIES-AT, CFETR and FNSF-AT are in the range of 1–2.8%. • The respective total tritium inventories are 7.6 kg, 6.1 kg, and 5.2 kg. - Abstract: ITER is under construction and will begin operation in 2020. This is the first 500 MW{sub fusion} class DT device, and since it is not going to breed tritium, it will consume most of the limited supply of tritium resources in the world. Yet, in parallel, DT fusion nuclear component testing machines will be needed to provide technical data for the design of DEMO. It becomes necessary to estimate the tritium burn-up fraction and the corresponding initial tritium inventory and doubling time of these machines for the planning of future supply and utilization of tritium. With the use of a system code, the tritium burn-up fraction and initial tritium inventory for steady state DT machines can be estimated. Estimated tritium burn-up fractions of FNSF-AT, CFETR-R and ARIES-AT are in the range of 1–2.8%. The corresponding total equilibrium tritium inventories of the plasma flow and tritium processing system, with the DCLL blanket option, are 7.6 kg, 6.1 kg, and 5.2 kg for ARIES-AT, CFETR-R and FNSF-AT, respectively.
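The arithmetic linking fusion power, burn-up fraction and tritium throughput can be sketched directly from the energy released per D-T reaction (17.6 MeV); the 500 MW figure below echoes the ITER-class power mentioned above, and f_burn = 2% falls within the quoted 1–2.8% range. The values are back-of-envelope, not the system-code result:

```python
# Back-of-envelope sketch of the relation used above: for a given fusion
# power, the tritium burn rate follows from the energy per D-T reaction
# (17.6 MeV), and the required fueling rate is burn rate / f_burn.

E_DT_J = 17.6e6 * 1.602e-19      # energy per D-T fusion reaction [J]
M_T_KG = 3.016 * 1.661e-27       # mass of one tritium nucleus [kg]

def tritium_rates(fusion_power_W, f_burn):
    burn_rate = fusion_power_W / E_DT_J * M_T_KG   # kg/s of tritium burned
    fueling_rate = burn_rate / f_burn              # kg/s that must be injected
    return burn_rate, fueling_rate

burn, fuel = tritium_rates(500e6, 0.02)   # 500 MW(fusion), f_burn = 2%
print(f"burn: {burn*86400*1000:.1f} g/day, fueling: {fuel*86400*1000:.0f} g/day")
```

A low burn-up fraction thus multiplies the tritium that must circulate through the fueling and processing systems, which is why f_burn drives the equilibrium inventory estimates quoted in the abstract.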

  11. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI - PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON

    International Nuclear Information System (INIS)

    BEEBE - WANG, J.; LUCCIO, A.U.; D IMPERIO, N.; MACHIDA, S.

    2002-01-01

    Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problem, the most effective way of investigating its effect is by computer simulation. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed

  12. GRS Method for Uncertainty and Sensitivity Evaluation of Code Results and Applications

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    In recent years there has been increasing interest in computational reactor safety analysis in replacing conservative evaluation model calculations by best estimate calculations supplemented by an uncertainty analysis of the code results. The evaluation of the margin to acceptance criteria, for example, the maximum fuel rod clad temperature, should be based on the upper limit of the calculated uncertainty range. Uncertainty analysis is needed if useful conclusions are to be obtained from best estimate thermal-hydraulic code calculations; otherwise, single values of unknown accuracy would be presented for comparison with regulatory acceptance limits. Methods have been developed and presented to quantify the uncertainty of computer code results. The basic techniques proposed by GRS are presented together with applications to a large break loss of coolant accident on a reference reactor as well as to an experiment simulating containment behaviour
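The GRS method sizes the number of code runs with Wilks' formula for nonparametric tolerance limits, independently of the number of uncertain input parameters; a minimal sketch of the one-sided case (the classic 95%/95% requirement gives 59 runs):

```python
# Wilks' formula, as used in the GRS uncertainty method: the smallest
# number of code runs n such that the maximum of the n sampled outputs
# bounds the beta-quantile of the output with confidence gamma, i.e.
# the smallest n with 1 - beta**n >= gamma.

def wilks_one_sided(beta=0.95, gamma=0.95):
    """Minimum runs for a one-sided upper tolerance limit (beta, gamma)."""
    n = 1
    while 1 - beta**n < gamma:
        n += 1
    return n

print(wilks_one_sided())            # 59 for the 95%/95% case
print(wilks_one_sided(0.95, 0.99))  # 90 for 95% coverage at 99% confidence
```

Because n depends only on (beta, gamma), the same 59 runs suffice no matter how many uncertain inputs are varied simultaneously, which is the practical appeal of the method.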

  13. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI - PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON.

    Energy Technology Data Exchange (ETDEWEB)

    BEEBE - WANG,J.; LUCCIO,A.U.; D IMPERIO,N.; MACHIDA,S.

    2002-06-03

    Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problem, the most effective way of investigating its effect is by computer simulation. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.
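One building block shared by particle-tracking space charge solvers of this kind is depositing macro-particle charge on a grid before solving for the self-field. A heavily simplified 1D cloud-in-cell (linear weighting) sketch, with the Poisson solve omitted; it is illustrative, not taken from any of the codes named above:

```python
# Cloud-in-cell (linear weighting) charge deposition on a periodic 1D
# grid, the first step of a PIC-style space charge calculation. Real
# codes then solve Poisson's equation on this grid; that step is omitted.
import numpy as np

def deposit_cic(positions, grid_n, length, q=1.0):
    """Deposit charges q at the given positions onto a periodic grid."""
    rho = np.zeros(grid_n)
    dx = length / grid_n
    x = np.asarray(positions) / dx
    left = np.floor(x).astype(int)
    frac = x - left
    # Each particle contributes to its two nearest grid points,
    # weighted linearly by distance (np.add.at handles repeated indices).
    np.add.at(rho, left % grid_n, q * (1 - frac))
    np.add.at(rho, (left + 1) % grid_n, q * frac)
    return rho / dx  # convert deposited charge to charge density

rho = deposit_cic([0.5, 1.25], grid_n=8, length=8.0)
print(rho)  # the particle at x=0.5 splits evenly between cells 0 and 1
```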

  14. Improving the accuracy of myocardial perfusion scintigraphy results by machine learning method

    International Nuclear Information System (INIS)

    Groselj, C.; Kukar, M.

    2002-01-01

    Full text: Machine learning (ML), a rapidly growing subfield of artificial intelligence, has proven over the last decade to be a useful tool in many fields of decision making, including some fields of medicine. Its decision accuracy usually exceeds that of humans. The aim was to assess the applicability of ML in interpreting the results of stress myocardial perfusion scintigraphy for CAD diagnosis. The data of 327 patients who underwent planar stress myocardial perfusion scintigraphy were re-evaluated in the usual way. By comparing them with the results of coronary angiography, the sensitivity, specificity and accuracy of the investigation were computed. The data were digitized and the decision procedure repeated with the ML program 'Naive Bayesian classifier'. As ML can handle an arbitrary number of attributes simultaneously, all available disease-related data (regarding history, habitus, risk factors, stress results) were added. The sensitivity, specificity and accuracy of scintigraphy were computed in the same way. The results of both decision procedures were compared. With the ML method, 19 more patients out of 327 (5.8%) were correctly diagnosed by stress myocardial perfusion scintigraphy. ML could be an important tool for decision making in myocardial perfusion scintigraphy. (author)
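A naive Bayesian classifier of the kind named above combines many attributes under a conditional-independence assumption; a toy sketch with invented binary patient attributes (not the study's actual data or program):

```python
# Toy naive Bayes over binary attributes with Laplace smoothing. The
# attributes and records are invented for illustration.
from collections import Counter, defaultdict

def train(rows, labels):
    classes = Counter(labels)
    counts = defaultdict(Counter)   # (feature_idx, class) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            counts[(i, y)][v] += 1
    return classes, counts

def predict(row, classes, counts):
    best, best_p = None, -1.0
    total = sum(classes.values())
    for y, ny in classes.items():
        p = ny / total                        # class prior
        for i, v in enumerate(row):
            c = counts[(i, y)]
            p *= (c[v] + 1) / (ny + 2)        # Laplace-smoothed likelihood
        if p > best_p:
            best, best_p = y, p
    return best

# Invented example: (abnormal_scan, chest_pain, smoker) -> CAD yes/no
X = [(1, 1, 1), (1, 1, 0), (1, 0, 1), (0, 0, 0), (0, 1, 0), (0, 0, 1)]
y = ["cad", "cad", "cad", "healthy", "healthy", "healthy"]
model = train(X, y)
print(predict((1, 1, 1), *model))  # 'cad'
```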

  15. Assessment of the system code DRUFAN/ATHLET using results of LOBI tests

    International Nuclear Information System (INIS)

    Burwell, J.M.; Kirmse, R.E.; Kyncl, M.; Malhotra, P.K.

    1989-09-01

    Four post-test analyses have been performed by GRS within the Shared Cost Action Programme (SCAP) sponsored by the Commission of the European Communities (contract 3015-86-07 EL ISP D) and by the Bundesminister fuer Forschung und Technologie of the Federal Republic of Germany (Research project RS 739). The four tests were mutually selected by the contractors (CEA, GRS, IKE, Univ. Pisa) of activity No. 3 and by the project organizer. Some of the tests were selected to be analyzed by more than one participant in order to allow comparison between analytical results obtained with different codes or by different code users. DRUFAN/ATHLET verification analyses were also performed by IKE. The four tests selected for the GRS activity are: - A2-77A (Natural Circulation Test), Analysis with ATHLET - A1-76 (Steam Generator Performance Test), Analysis with DRUFAN - BL-01 (Intermediate Leak), Analysis with ATHLET - A2-81 (Small Leak), Analysis with ATHLET. This final report contains the results of the four post-test analyses, including the comparison between measured and calculated quantities, the description of the applied codes, the selected model of the LOBI facility and the conclusions drawn for the improvement of the code models

  16. Hysteresis and reluctance electric machines with bulk HTS elements. Recent results and future development

    International Nuclear Information System (INIS)

    Kovalev, L.K.; Ilushin, K.V.; Penkin, V.T.; Kovalev, K.L.; Koneev, S.M.-A.; Poltavets, V.N.; Larionoff, A.E.; Modestov, K.A.; Larionoff, S.A.; Gawalek, W.; Habisreuther, T.; Oswald, B.; Best, K.-J.; Strasser, T.

    2000-01-01

    Two new types of HTS electric machine are considered. The first type is hysteresis motors and generators with cylindrical and disc rotors containing bulk HTS elements. The second type is reluctance motors with compound HTS-ferromagnetic rotors. The compound HTS-ferromagnetic rotors, consisting of joined alternating bulk HTS (YBCO) and ferromagnetic (iron) plates, provide a new active material for electromechanical purposes. Such rotors have anisotropic properties (ferromagnetic in one direction and diamagnetic in the perpendicular one). Theoretical and experimental results for HTS hysteresis and reluctance motors are presented. A series of hysteresis HTS motors with output power rating from 1 kW (at 50 Hz) up to 4 kW (at 400 Hz) and a series of reluctance HTS motors with output power 2-18.5 kW (at 50 Hz) were constructed and successfully tested. It was shown that HTS reluctance motors could reach two to five times better overall dimensions and specific power than conventional asynchronous motors of the same size and will have higher values of power factor (cos φ≥0.7 to 0.8). (author)

  17. School vending machine purchasing behavior: results from the 2005 YouthStyles survey.

    Science.gov (United States)

    Thompson, Olivia M; Yaroch, Amy L; Moser, Richard P; Finney Rutten, Lila J; Agurs-Collins, Tanya

    2010-05-01

    Competitive foods are often available in school vending machines. Providing youth with access to school vending machines, and thus competitive foods, is of concern, considering the continued high prevalence of childhood obesity: competitive foods tend to be energy dense and nutrient poor and can contribute to increased energy intake in children and adolescents. To evaluate the relationship between school vending machine purchasing behavior and school vending machine access and individual-level dietary characteristics, we used population-level YouthStyles 2005 survey data to compare nutrition-related policy and behavioral characteristics by the number of weekly vending machine purchases made by public school children and adolescents (N = 869). Odds ratios (ORs) and corresponding 95% confidence intervals (CIs) were computed using age- and race/ethnicity-adjusted logistic regression models that were weighted on age and sex of child, annual household income, head of household age, and race/ethnicity of the adult in study. Data were collected in 2005 and analyzed in 2008. Compared to participants who did not purchase from a vending machine, participants who purchased ≥3 days/week were more likely to (1) have unrestricted access to a school vending machine (OR = 1.71; 95% CI = 1.13-2.59); (2) consume regular soda and chocolate candy ≥1 time/day (OR = 3.21; 95% CI = 1.87-5.51 and OR = 2.71; 95% CI = 1.34-5.46, respectively); and (3) purchase pizza or fried foods from a school cafeteria ≥1 day/week (OR = 5.05; 95% CI = 3.10-8.22). Future studies are needed to establish the contribution that the school-nutrition environment makes on overall youth dietary intake behavior, paying special attention to health disparities between whites and nonwhites.
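Odds ratios of the kind reported above come from exponentiated logistic regression coefficients; for a single binary exposure this reduces to the familiar 2x2-table calculation, sketched here with invented counts and the standard log-OR standard error:

```python
# Odds ratio and 95% CI from a 2x2 table (exposure x outcome), using the
# standard error of the log odds ratio. The counts below are invented.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b: exposed with/without outcome; c, d: unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(40, 60, 25, 75)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The adjusted ORs in the abstract additionally condition on age and race/ethnicity via the regression model, which this unadjusted sketch does not capture.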

  18. International outage coding system for nuclear power plants. Results of a co-ordinated research project

    International Nuclear Information System (INIS)

    2004-05-01

    The experience obtained in each individual plant constitutes the most relevant source of information for improving its performance. However, experience at the level of the utility, the country and worldwide is also extremely valuable, because there are limitations to what can be learned from in-house experience. But learning from the experience of others is admittedly difficult if the information is not harmonized. Therefore, such systems should be standardized and applicable to all types of reactors, satisfying the needs of the broad set of nuclear power plant operators worldwide and allowing experience to be shared internationally. To cope with the considerable amount of information gathered from nuclear power plants worldwide, it is necessary to codify the information, facilitating the identification of causes of outages and of system or component failures. Therefore, the IAEA established a sponsored Co-ordinated Research Project (CRP) on the International Outage Coding System to develop a general, internationally applicable system of coding nuclear power plant outages, providing worldwide nuclear utilities with a standardized tool for reporting outage information. This TECDOC summarizes the results of this CRP and provides information for transformation of the historical outage data into the new coding system, taking into consideration the existing systems for coding nuclear power plant events (WANO, IAEA-IRS and IAEA PRIS) but avoiding duplication of efforts to the maximum possible extent

  19. MELMRK 2.0: A description of computer models and results of code testing

    International Nuclear Information System (INIS)

    Wittman, R.S.; Denny, V.; Mertol, A.

    1992-01-01

    An advanced version of the MELMRK computer code has been developed that provides detailed models for conservation of mass, momentum, and thermal energy within relocating streams of molten metallics during meltdown of Savannah River Site (SRS) reactor assemblies. In addition to a mechanistic treatment of transport phenomena within a relocating stream, MELMRK 2.0 retains the MOD1 capability for real-time coupling of the in-depth thermal response of participating assembly heat structure and, further, augments this capability with models for self-heating of relocating melt owing to steam oxidation of metallics and fission product decay power. As was the case for MELMRK 1.0, the MOD2 version offers state-of-the-art numerics for solving coupled sets of nonlinear differential equations. Principal features include application of multi-dimensional Newton-Raphson techniques to accelerate convergence behavior and direct matrix inversion to advance primitive variables from one iterate to the next. Additionally, MELMRK 2.0 provides logical event flags for managing the broad range of code options available for treating such features as (1) coexisting flow regimes, (2) dynamic transitions between flow regimes, and (3) linkages between heatup and relocation code modules. The purpose of this report is to provide a detailed description of the MELMRK 2.0 computer models for melt relocation. Also included are illustrative results for code testing, as well as an integrated calculation for meltdown of a Mark 31a assembly

  20. Machine learning methods for the classification of gliomas: Initial results using features extracted from MR spectroscopy.

    Science.gov (United States)

    Ranjith, G; Parvathy, R; Vikas, V; Chandrasekharan, Kesavadas; Nair, Suresh

    2015-04-01

    With the advent of new imaging modalities, radiologists are faced with handling increasing volumes of data for diagnosis and treatment planning. The use of automated and intelligent systems is becoming essential in such a scenario. Machine learning, a branch of artificial intelligence, is increasingly being used in medical image analysis applications such as image segmentation, registration and computer-aided diagnosis and detection. Histopathological analysis is currently the gold standard for classification of brain tumors. The use of machine learning algorithms along with extraction of relevant features from magnetic resonance imaging (MRI) holds promise of replacing conventional invasive methods of tumor classification. The aim of the study is to classify gliomas into benign and malignant types using MRI data. Retrospective data from 28 patients who were diagnosed with glioma were used for the analysis. WHO Grade II (low-grade astrocytoma) was classified as benign while Grade III (anaplastic astrocytoma) and Grade IV (glioblastoma multiforme) were classified as malignant. Features were extracted from MR spectroscopy. The classification was done using four machine learning algorithms: multilayer perceptrons, support vector machine, random forest and locally weighted learning. Three of the four machine learning algorithms gave an area under ROC curve in excess of 0.80. Random forest gave the best performance in terms of AUC (0.911) while sensitivity was best for locally weighted learning (86.1%). The performance of different machine learning algorithms in the classification of gliomas is promising. An even better performance may be expected by integrating features extracted from other MR sequences.
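The quoted AUC values summarize how well a classifier's scores rank malignant above benign cases; AUC can be computed directly from scores via the Mann-Whitney statistic. A sketch with invented labels and scores (not the study's data):

```python
# AUC as the Mann-Whitney probability that a randomly chosen positive
# case receives a higher score than a randomly chosen negative case.
# Labels and scores below are invented for illustration.

def auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # Count pairwise "wins" of positives over negatives; ties count 0.5.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.1]
print(auc(labels, scores))  # 8/9 ≈ 0.889
```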

  1. Modelling 3-D mechanical phenomena in a 1-D industrial finite element code: results and perspectives

    International Nuclear Information System (INIS)

    Guicheret-Retel, V.; Trivaudey, F.; Boubakar, M.L.; Masson, R.; Thevenin, Ph.

    2005-01-01

    Assessing fuel rod integrity in PWR reactors must reconcile two opposing goals: a one-dimensional finite element code (axial revolution symmetry) is needed to provide industrial results at the scale of the reactor core, while the main risk of cladding failure [e.g. pellet-cladding interaction (PCI)] arises from fully three-dimensional phenomena. First, parametric three-dimensional elastic calculations were performed to identify the parameters (fragment number, pellet-cladding contact conditions, etc.) relevant to PCI. The axial fragment number and the friction coefficient are shown to play a major role in PCI, as opposed to the other parameters. Next, the main limitations of the one-dimensional hypothesis of the finite element code CYRANO3 are identified. To overcome these limitations, both two- and three-dimensional emulations of CYRANO3 were developed. These developments are shown to significantly improve the results provided by CYRANO3. (authors)

  2. Review of solution approach, methods, and recent results of the RELAP5 system code

    International Nuclear Information System (INIS)

    Trapp, J.A.; Ransom, V.H.

    1983-01-01

    The present RELAP5 code is based on a semi-implicit numerical scheme for the hydrodynamic model. The basic guidelines employed in the development of the semi-implicit numerical scheme are discussed and the numerical features of the scheme are illustrated by analysis for a simple, but analogous, single-equation model. The basic numerical scheme is recorded and results from several simulations are presented. The experimental results and code simulations are used in a complementary fashion to develop insights into nuclear-plant response that would not be obtained if either tool were used alone. Further analysis using the simple single-equation model is carried out to yield insights that are presently being used to implement a more-implicit multi-step scheme in the experimental version of RELAP5. The multi-step implicit scheme is also described
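The stability advantage that motivates (semi-)implicit treatment can be seen on a simple single-equation model of the kind the abstract mentions; a sketch comparing explicit and implicit updates for du/dt = -lam*u at a time step where the explicit scheme diverges (this is an illustrative model problem, not RELAP5's hydrodynamic scheme):

```python
# Model problem du/dt = -lam*u. Treating the right-hand side at the new
# time level (implicit) stays stable at time steps where the explicit
# update blows up (here lam*dt = 3 > 2, the explicit stability limit).
lam, dt, steps, u0 = 10.0, 0.3, 20, 1.0

u_exp = u_imp = u0
for _ in range(steps):
    u_exp = u_exp - dt * lam * u_exp   # explicit: u^{n+1} = (1 - lam*dt) * u^n
    u_imp = u_imp / (1 + dt * lam)     # implicit: u^{n+1} = u^n / (1 + lam*dt)

print(f"explicit: {u_exp:.3g}, implicit: {u_imp:.3g}")
```

The explicit amplification factor here is 1 - lam*dt = -2, so the solution oscillates and grows by 2x each step, while the implicit factor 1/(1 + lam*dt) = 1/4 decays monotonically, mirroring why stiff terms are taken implicitly in semi-implicit schemes.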

  3. Verification of simulation model with COBRA-IIIP code by confrontment of experimental results

    International Nuclear Information System (INIS)

    Silva Galetti, M.R. da; Pontedeiro, A.C.; Oliveira Barroso, A.C. de

    1985-01-01

    An evaluation of the COBRA-IIIP/MIT subchannel thermal-hydraulic analysis code is presented, comparing its results with experimental data obtained under steady-state and transient conditions. A study was carried out to calculate the critical heat flux in space and time. A sensitivity study of the simulation model with respect to turbulent mixing and the number of axial intervals is also presented. (M.C.K.) [pt

  4. Comparison of aerosol behavior codes with experimental results from a sodium fire in a containment

    International Nuclear Information System (INIS)

    Lhiaubet, G.; Kissane, M.P.; Seino, H.; Miyake, O.; Himeno, Y.

    1990-01-01

    The containment expert group (CONT), a subgroup of the CEC fast reactor Safety Working Group (SWG), has carried out several studies on the behavior of sodium aerosols which might form in a severe fast reactor accident during which primary sodium leaks into the secondary containment. These studies comprise an intercalibration of the measurement devices used to determine the aerosol particle size spectrum, and the analysis and comparison of codes applied to the determination of aerosol behavior in a reactor containment. The paper outlines the results of measurements of typical data made for aerosols produced in a sodium fire and their comparison with results from different codes (PARDISEKO, AEROSIM, CONTAIN, AEROSOLS/B2). The sodium fire experiment took place at CEN-Cadarache (France) in a 400 m³ vessel. The fire lasted 90 minutes and the aerosol measurements were made over 10 hours at different locations inside the vessel. The results showed that the suspended mass calculated over time with the different codes was in good agreement with the experiment. However, the calculated aerosol deposition on the walls diverged among the codes and was always significantly lower than the measured values

  5. Results of aerosol code comparisons with releases from ACE MCCI tests

    International Nuclear Information System (INIS)

    Fink, J.K.; Corradini, M.; Hidaka, A.; Hontanon, E.; Mignanelli, M.A.; Schroedl, E.; Strizhov, V.

    1992-01-01

    Results of aerosol release calculations by six groups from six countries are compared with the releases from ACE MCCI Test L6. The codes used for these calculations included: SOLGASMIX-PV, SOLGASMIX Reactor 1986, CORCON.UW, VANESA 1.01, and CORCON mod2.04/VANESA 1.01. Calculations were performed with the standard VANESA 1.01 code and with modifications to the VANESA code such as the inclusion of various zirconium-silica chemical reactions. Comparisons of results from these calculations were made with Test L6 release fractions for U, Zr, Si, the fission-product elements Te, Ba, Sr, Ce, La, Mo and control materials Ag, In, and Ru. Reasonable agreement was obtained between calculations and Test L6 results for the volatile elements Ag, In and Te. Calculated releases of the low volatility fission products ranged from within an order of magnitude to five orders of magnitude of the Test L6 values. Calculations both over- and underestimated the releases. The poorest agreement was obtained for Mo and Si

  6. Coupling the MCNP Monte Carlo code and the FISPACT activation code with automatic visualization of the results of simulations

    International Nuclear Information System (INIS)

    Bourauel, Peter; Nabbi, Rahim; Biel, Wolfgang; Forrest, Robin

    2009-01-01

    The MCNP 3D Monte Carlo computer code is used not only for criticality calculations of nuclear systems but also to simulate transports of radiation and particles. The findings so obtained about neutron flux distribution and the associated spectra allow information about materials activation, nuclear heating, and radiation damage to be obtained by means of activation codes such as FISPACT. The stochastic character of particle and radiation transport processes normally links findings to the materials cells making up the geometry model of MCNP. Where high spatial resolution is required for the activation calculations with FISPACT, fine segmentation of the MCNP geometry becomes compulsory, which implies considerable expense for the modeling process. For this reason, an alternative simulation technique has been developed in an effort to automate and optimize data transfer between MCNP and FISPACT. (orig.)

  7. Comparisons of the simulation results using different codes for ADS spallation target

    International Nuclear Information System (INIS)

    Yu Hongwei; Fan Sheng; Shen Qingbiao; Zhao Zhixiang; Wan Junsheng

    2002-01-01

    Calculations for a standard thick target were made using different codes. The simulation of a thick Pb target, 60 cm long and 20 cm in diameter, bombarded with 800, 1000, 1500 and 2000 MeV proton beams, was carried out. The yields and spectra of the emitted neutrons were studied. The spallation target was simulated with the SNSP, SHIELD, DCM/CEM (Dubna Cascade Model/Cascade Evaporation Model) and LAHET codes. The simulation results were compared with experiments. The comparisons show good agreement between the experiments and the SNSP-simulated leakage neutron yield. The SHIELD-simulated leakage neutron spectra are in good agreement with the LAHET- and DCM/CEM-simulated leakage neutron spectra

  8. Results of a survey on accident and safety analysis codes, benchmarks, verification and validation methods

    International Nuclear Information System (INIS)

    Lee, A.G.; Wilkin, G.B.

    1996-03-01

    During the 'Workshop on R and D needs' at the 3rd Meeting of the International Group on Research Reactors (IGORR-III), the participants agreed that it would be useful to compile a survey of the computer codes and nuclear data libraries used in accident and safety analyses for research reactors and the methods various organizations use to verify and validate their codes and libraries. Five organizations, Atomic Energy of Canada Limited (AECL, Canada), China Institute of Atomic Energy (CIAE, People's Republic of China), Japan Atomic Energy Research Institute (JAERI, Japan), Oak Ridge National Laboratories (ORNL, USA), and Siemens (Germany) responded to the survey. The results of the survey are compiled in this report. (author) 36 refs., 3 tabs

  9. Verification of fire and explosion accident analysis codes (facility design and preliminary results)

    International Nuclear Information System (INIS)

    Gregory, W.S.; Nichols, B.D.; Talbott, D.V.; Smith, P.R.; Fenton, D.L.

    1985-01-01

    For several years, the US Nuclear Regulatory Commission has sponsored the development of methods for improving capabilities to analyze the effects of postulated accidents in nuclear facilities; the accidents of interest are those that could occur during nuclear materials handling. At the Los Alamos National Laboratory, this program has resulted in three computer codes: FIRAC, EXPAC, and TORAC. These codes are designed to predict the effects of fires, explosions, and tornadoes in nuclear facilities. Particular emphasis is placed on the movement of airborne radioactive material through the gaseous effluent treatment system of a nuclear installation. The design, construction, and calibration of an experimental ventilation system to verify the fire and explosion accident analysis codes are described. The facility features a large industrial heater and several aerosol smoke generators that are used to simulate fires. Both injected thermal energy and aerosol mass can be controlled using this equipment. Explosions are simulated with H2/O2 balloons and small explosive charges. Experimental measurements of temperature, energy, aerosol release rates, smoke concentration, and mass accumulation on HEPA filters can be made. Volumetric flow rate and differential pressures also are monitored. The initial experiments involve varying parameters such as thermal and aerosol rate and ventilation flow rate. FIRAC prediction results are presented. 10 figs

  10. Three-dimensional thermal hydraulic best estimate code BAGIRA: new results of verification

    International Nuclear Information System (INIS)

    Peter Kohut; Sergey D Kalinichenko; Alexander E Kroshilin; Vladimir E Kroshilin; Alexander V Smirnov

    2005-01-01

    Full text of publication follows: BAGIRA is a three-dimensional, inhomogeneous, two-velocity, two-temperature best-estimate thermal hydraulic code, developed at VNIIAES for modeling two-phase flows in the primary circuit and steam generators of VVER-type nuclear reactors under various accident, transient or normal operation conditions. In this talk we present verification results for the BAGIRA code, obtained on the basis of different experiments performed on dedicated and integral thermohydraulic experimental facilities as well as on real NPPs. Special attention is paid to the verification of three-dimensional flow models. In addition, we present new results of the code benchmark analysis based on two recent LOCA-type experiments - 'Leak 2 x 25% from the hot leg double-side rupture' and 'Leak 3% from the cold leg' - performed on the PSB-VVER integral test facility (Electrogorsk Research and Engineering Center, Electrogorsk, Russia), the most up-to-date Russian large-scale four-loop unit, which has been designed for modelling the primary circuit of VVER-1000 type reactors. (authors)

  11. Towards advanced code simulators

    International Nuclear Information System (INIS)

    Scriven, A.H.

    1990-01-01

    The Central Electricity Generating Board (CEGB) uses advanced thermohydraulic codes extensively to support PWR safety analyses. A system has been developed to allow fully interactive execution of any code with graphical simulation of the operator desk and mimic display. The system operates in a virtual machine environment, with the thermohydraulic code executing in one virtual machine, communicating via interrupts with any number of other virtual machines each running other programs and graphics drivers. The driver code itself does not have to be modified from its normal batch form. Shortly following the release of RELAP5 MOD1 in IBM compatible form in 1983, this code was used as the driver for this system. When RELAP5 MOD2 became available, it was adopted with no changes needed in the basic system. Overall the system has been used for some 5 years for the analysis of LOBI tests, full scale plant studies and for simple what-if studies. For gaining rapid understanding of system dependencies it has proved invaluable. The graphical mimic system, being independent of the driver code, has also been used with other codes to study core rewetting, to replay results obtained from batch jobs on a CRAY2 computer system and to display suitably processed experimental results from the LOBI facility to aid interpretation. For the above work real-time execution was not necessary. Current work now centers on implementing the RELAP 5 code on a true parallel architecture machine. Marconi Simulation have been contracted to investigate the feasibility of using upwards of 100 processors, each capable of a peak of 30 MIPS to run a highly detailed RELAP5 model in real time, complete with specially written 3D core neutronics and balance of plant models. This paper describes the experience of using RELAP5 as an analyzer/simulator, and outlines the proposed methods and problems associated with parallel execution of RELAP5

  12. AMENDMENTS TO THE FISCAL CODE REGARDING THE EXPENDITURES AND THE DETERMINATION OF THE EXERCISE RESULT

    Directory of Open Access Journals (Sweden)

    HOLT GHEORGHE

    2017-02-01

    Full Text Available The Fiscal Code has brought many changes to the structure of the expenses used to calculate the result of the exercise and, thus, the profit tax. It treats the three categories of expenses (deductible, with limited deductibility, and non-deductible), bringing numerous changes to the structure of each of them. Expenses are recorded as decreases in economic benefits during the accounting period, in the form of outflows or decreases in assets or increases in liabilities, which are reflected in reductions of equity other than those arising from distributions to shareholders. The Fiscal Code has brought many changes to tax legislation in Romania, all titles being affected, with particular importance attached to the changes regarding expense deductibility, the subject of this material. The basic concept regarding the deduction of expenses has been reformulated in the Fiscal Code, so that expenses performed for business purposes are now deductible, unlike the general deductibility rule valid until 31 December 2015, under which only expenses incurred in order to obtain taxable income were deductible.

  13. An experimental result of estimating an application volume by machine learning techniques.

    Science.gov (United States)

    Hasegawa, Tatsuhito; Koshino, Makoto; Kimura, Haruhiko

    2015-01-01

    In this study, we improved the usability of smartphones by automating a user's operations. We developed an intelligent system using machine learning techniques that periodically detects a user's context on a smartphone. We selected the Android operating system because it has the largest market share and highest flexibility of its development environment. In this paper, we describe an application that automatically adjusts application volume. Adjusting the volume can be easily forgotten because users need to push the volume buttons to alter the volume depending on the given situation. Therefore, we developed an application that automatically adjusts the volume based on learned user settings. Application volume can be set differently from ringtone volume on Android devices, and these volume settings are associated with each specific application including games. Our application records a user's location, the volume setting, the foreground application name and other such attributes as learning data, thereby estimating whether the volume should be adjusted using machine learning techniques via Weka.
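The abstract describes learning a mapping from logged context (location, foreground application, time) to a volume setting. As a rough, language-neutral illustration of that idea (the paper uses Weka on Android; the feature names and training data below are invented for this sketch, not taken from the study), a tiny nearest-neighbour predictor in Python:

```python
from collections import Counter

# Hypothetical training log: (hour, place, app) -> chosen volume level (0-7).
# Features and values are illustrative, not the authors' data.
log = [
    ((9,  "office", "mailer"), 1),
    ((10, "office", "game"),   1),
    ((13, "cafe",   "game"),   4),
    ((20, "home",   "game"),   6),
    ((21, "home",   "video"),  6),
]

def distance(a, b):
    # Crude mixed-type distance: hour difference plus mismatch penalties
    # for the categorical features.
    return abs(a[0] - b[0]) + 2 * (a[1] != b[1]) + 1 * (a[2] != b[2])

def predict_volume(context, k=3):
    # k nearest logged contexts vote on the volume level.
    nearest = sorted(log, key=lambda rec: distance(rec[0], context))[:k]
    return Counter(v for _, v in nearest).most_common(1)[0][0]

print(predict_volume((19, "home", "game")))
```

A production classifier (Weka in the paper) would of course learn from far richer features, but the estimate-then-apply loop is the same.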

  14. Results and problems in the development of machines employed in pressing technology

    Energy Technology Data Exchange (ETDEWEB)

    Dietrich, P; Hinne, H; Linke, L; Nerger, R

    1980-08-01

    Features specifications and technical improvements of nine GDR-made briquetting presses from the Zemag Zeitz company. Briquetting presses have been produced by the company for more than 100 years; the present capacity of the machines ranges from 5.1 t/h to 20.4 t/h of nominal briquette production. Development trends are directed toward larger presses. The prototype PSA 400, based on the design of the four-channel press PSA 300, will be tested in industrial operation during 1980/81. Various technical details of the general briquetting press design are enumerated, including experience gained with steam- or electric-powered presses, regulation of the pressing speed with a patented switchgear system, maintenance of crank gear bearings, greasing of steam-driven presses, investigation of crack damage to the machine block, further measures for reducing wear of the pressing channel, and mechanized welding methods for the channel overhaul. (In German)

  15. HYDROCOIN [HYDROlogic COde INtercomparison] Level 1: Benchmarking and verification test results with CFEST [Coupled Fluid, Energy, and Solute Transport] code: Draft report

    International Nuclear Information System (INIS)

    Yabusaki, S.; Cole, C.; Monti, A.M.; Gupta, S.K.

    1987-04-01

    Part of the safety analysis is evaluating groundwater flow through the repository and the host rock to the accessible environment by developing mathematical or analytical models and numerical computer codes describing the flow mechanisms. This need led to the establishment of an international project called HYDROCOIN (HYDROlogic COde INtercomparison) organized by the Swedish Nuclear Power Inspectorate, a forum for discussing techniques and strategies in subsurface hydrologic modeling. The major objective of the present effort, HYDROCOIN Level 1, is determining the numerical accuracy of the computer codes. The definition of each case includes the input parameters, the governing equations, the output specifications, and the format. The Coupled Fluid, Energy, and Solute Transport (CFEST) code was applied to solve cases 1, 2, 4, 5, and 7; the Finite Element Three-Dimensional Groundwater (FE3DGW) Flow Model was used to solve case 6. Case 3 has been ignored because unsaturated flow is not pertinent to SRP. This report presents the Level 1 results furnished by the project teams. The numerical accuracy of the codes is determined by (1) comparing the computational results with analytical solutions for cases that have analytical solutions (namely cases 1 and 4), and (2) intercomparing results from codes for cases which do not have analytical solutions (cases 2, 5, 6, and 7). Cases 1, 2, 6, and 7 relate to flow analyses, whereas cases 4 and 5 require nonlinear solutions. 7 refs., 71 figs., 9 tabs

  16. CODES AND PRACTICES OF IMPLEMENTATION OF CORPORATE GOVERNANCE IN ROMANIA AND RESULTS REPORTING

    Directory of Open Access Journals (Sweden)

    GROSU MARIA

    2011-12-01

    Full Text Available Corporate governance refers to the manner in which companies are directed and controlled. Business management has always been guided by certain principles, but the current meaning of corporate governance also concerns the contribution that companies must make to the overall development of modern society. Romania was quite late in adopting a code of good practice in corporate governance, driven, in particular, by the privatization process, but also by the transfer of control and surveillance from political organizations to the Board of Directors (BD). The adoption of corporate governance codes is necessary to harmonize internal business requirements with those of a functioning market economy. In addition, for the CEE countries, the European Commission adopted an action plan announcing measures to modernize company law and enhance corporate governance. Romania is taking steps in this direction by amending the Company Law and other regulations, although practice does not necessarily keep pace with the requirements. This study aims, on the one hand, at an analysis of the evolution of the corporate governance codes adopted in Romania, and on the other at an empirical investigation of the implementation of corporate governance principles by a representative sample of companies listed on the Bucharest Stock Exchange (BSE). We consider the research methodology relevant because the issuer of the CG codes in Romania is the BSE, which requests their voluntary implementation by listed companies. Implementation results are summarized and interpreted on the basis of the public reports of the companies studied. Most studies undertaken in this direction have been made on multinational companies, which respect the corporate governance codes of their countries of origin. In addition, many studies also emphasize the fair treatment of stakeholders rather than the models of governance adopted (monist/dualist), with implications for optimizing economic but also social objectives. The research undertaken attempts to highlight on the one

  17. Biomarkers of Eating Disorders Using Support Vector Machine Analysis of Structural Neuroimaging Data: Preliminary Results

    Directory of Open Access Journals (Sweden)

    Antonio Cerasa

    2015-01-01

    Full Text Available Presently, there are no valid biomarkers to identify individuals with eating disorders (ED. The aim of this work was to assess the feasibility of a machine learning method for extracting reliable neuroimaging features allowing individual categorization of patients with ED. Support Vector Machine (SVM technique, combined with a pattern recognition method, was employed utilizing structural magnetic resonance images. Seventeen females with ED (six with diagnosis of anorexia nervosa and 11 with bulimia nervosa were compared against 17 body mass index-matched healthy controls (HC. Machine learning allowed individual diagnosis of ED versus HC with an Accuracy ≥ 0.80. Voxel-based pattern recognition analysis demonstrated that voxels influencing the classification Accuracy involved the occipital cortex, the posterior cerebellar lobule, precuneus, sensorimotor/premotor cortices, and the medial prefrontal cortex, all critical regions known to be strongly involved in the pathophysiological mechanisms of ED. Although these findings should be considered preliminary given the small size investigated, SVM analysis highlights the role of well-known brain regions as possible biomarkers to distinguish ED from HC at an individual level, thus encouraging the translational implementation of this new multivariate approach in the clinical practice.

  18. Biomarkers of Eating Disorders Using Support Vector Machine Analysis of Structural Neuroimaging Data: Preliminary Results

    Science.gov (United States)

    Cerasa, Antonio; Castiglioni, Isabella; Salvatore, Christian; Funaro, Angela; Martino, Iolanda; Alfano, Stefania; Donzuso, Giulia; Perrotta, Paolo; Gioia, Maria Cecilia; Gilardi, Maria Carla; Quattrone, Aldo

    2015-01-01

    Presently, there are no valid biomarkers to identify individuals with eating disorders (ED). The aim of this work was to assess the feasibility of a machine learning method for extracting reliable neuroimaging features allowing individual categorization of patients with ED. Support Vector Machine (SVM) technique, combined with a pattern recognition method, was employed utilizing structural magnetic resonance images. Seventeen females with ED (six with diagnosis of anorexia nervosa and 11 with bulimia nervosa) were compared against 17 body mass index-matched healthy controls (HC). Machine learning allowed individual diagnosis of ED versus HC with an Accuracy ≥ 0.80. Voxel-based pattern recognition analysis demonstrated that voxels influencing the classification Accuracy involved the occipital cortex, the posterior cerebellar lobule, precuneus, sensorimotor/premotor cortices, and the medial prefrontal cortex, all critical regions known to be strongly involved in the pathophysiological mechanisms of ED. Although these findings should be considered preliminary given the small size investigated, SVM analysis highlights the role of well-known brain regions as possible biomarkers to distinguish ED from HC at an individual level, thus encouraging the translational implementation of this new multivariate approach in the clinical practice. PMID:26648660
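The classification pipeline described in the two records above, an SVM over per-subject feature vectors with small matched samples, can be sketched as follows. The features here are synthetic stand-ins (two well-separated Gaussian clouds), not the study's structural MRI data, and scikit-learn is assumed to be available:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Toy stand-in for the 17 ED vs 17 HC design: synthetic 5-dimensional
# feature vectors, NOT the study's voxel-based features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (17, 5)),
               rng.normal(3.0, 1.0, (17, 5))])
y = np.array([0] * 17 + [1] * 17)   # 0 = HC, 1 = ED

# Leave-one-out cross-validation, a common choice for samples this small,
# yields one held-out prediction per subject; the mean is the accuracy.
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut()).mean()
print(f"LOO accuracy: {acc:.2f}")
```

With a linear kernel the fitted weight vector can then be mapped back onto the features (voxels, in the study) to see which ones drive the classification.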

  19. Quantum Virtual Machine (QVM)

    Energy Technology Data Exchange (ETDEWEB)

    2016-11-18

    There is a lack of state-of-the-art HPC simulation tools for simulating general quantum computing. Furthermore, there are no real software tools that integrate current quantum computers into existing classical HPC workflows. This product, the Quantum Virtual Machine (QVM), solves this problem by providing an extensible framework for pluggable virtual, or physical, quantum processing units (QPUs). It enables the execution of low level quantum assembly codes and returns the results of such executions.

  20. Further results on binary convolutional codes with an optimum distance profile

    DEFF Research Database (Denmark)

    Johannesson, Rolf; Paaske, Erik

    1978-01-01

    Fixed binary convolutional codes are considered which are simultaneously optimal or near-optimal according to three criteria: namely, distance profile d, free distance d_∞, and minimum number of weight-d_∞ paths. It is shown how the optimum distance profile criterion can be used to limit ... codes. As a counterpart to quick-look-in (QLI) codes, which are not "transparent," we introduce rate R = 1/2 easy-look-in-transparent (ELIT) codes with a feedforward inverse (1 + D, D). In general, ELIT codes have d_∞ superior to that of QLI codes.
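The column-distance profile (d_0, d_1, ...) and the free distance d_∞ referred to above can be computed by a breadth-first search over the code trellis. A minimal Python sketch, using the classic rate R = 1/2, memory-2 code with octal generators (7, 5) as an illustrative example (a standard textbook code, not one of the paper's ODP codes):

```python
def column_distances(gens, m, depth):
    """Column distances d_0..d_{depth-1} and free distance of a rate-1/2
    binary convolutional code with memory m.

    gens: pair of generator tap masks; bit m taps the current input,
    bit 0 the oldest register bit."""
    parity = lambda x: bin(x).count("1") & 1
    cur = {0: 0}                      # state -> min accumulated output weight
    profile, remerge = [], []
    for t in range(depth):
        nxt = {}
        for s, w in cur.items():
            for u in ((1,) if t == 0 else (0, 1)):  # force divergence at t=0
                v = (u << m) | s
                nw = w + parity(gens[0] & v) + parity(gens[1] & v)
                ns = v >> 1
                if ns == 0 and s != 0:
                    remerge.append(nw)  # path returned to the zero state
                if nw < nxt.get(ns, 2 * depth + 1):
                    nxt[ns] = nw
        cur = nxt
        profile.append(min(cur.values()))
    return profile, min(remerge)

# The (7, 5) octal code: G(D) = (1 + D + D^2, 1 + D^2)
d, dfree = column_distances((0b111, 0b101), m=2, depth=12)
```

The profile is nondecreasing by construction, and for non-catastrophic codes it converges to the free distance; for (7, 5) the search gives d_∞ = 5.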

  1. Nuclear Reactor Component Code CUPID-I: Numerical Scheme and Preliminary Assessment Results

    International Nuclear Information System (INIS)

    Cho, Hyoung Kyu; Jeong, Jae Jun; Park, Ik Kyu; Kim, Jong Tae; Yoon, Han Young

    2007-12-01

    A component-scale thermal hydraulic analysis code, CUPID (Component Unstructured Program for Interfacial Dynamics), is being developed for the analysis of components of a nuclear reactor, such as the reactor vessel, steam generator, containment, etc. It adopts a three-dimensional, transient, two-phase, three-field model. In order to develop the numerical schemes for the three-field model, various numerical schemes have been examined, including the SMAC, semi-implicit ICE, SIMPLE, and Row schemes. Among them, the ICE scheme for the three-field model is presented in the present report. The CUPID code utilizes an unstructured mesh for the simulation of the complicated geometries of nuclear reactor components. The conventional ICE scheme that was applied to RELAP5 and COBRA-TF was therefore modified for application to the unstructured mesh. Preliminary calculations with the unstructured semi-implicit ICE scheme have been conducted to verify the numerical method from a qualitative point of view. The preliminary calculation results showed that the present numerical scheme is robust and efficient for the prediction of phase changes and flow transitions due to boiling and flashing. These calculation results also showed the strong coupling between the pressure and void fraction changes. Thus, it is believed that the semi-implicit ICE scheme can be utilized for transient two-phase flows in a component of a nuclear reactor

  2. Nuclear Reactor Component Code CUPID-I: Numerical Scheme and Preliminary Assessment Results

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Hyoung Kyu; Jeong, Jae Jun; Park, Ik Kyu; Kim, Jong Tae; Yoon, Han Young

    2007-12-15

    A component-scale thermal hydraulic analysis code, CUPID (Component Unstructured Program for Interfacial Dynamics), is being developed for the analysis of components of a nuclear reactor, such as the reactor vessel, steam generator, containment, etc. It adopts a three-dimensional, transient, two-phase, three-field model. In order to develop the numerical schemes for the three-field model, various numerical schemes have been examined, including the SMAC, semi-implicit ICE, SIMPLE, and Row schemes. Among them, the ICE scheme for the three-field model is presented in the present report. The CUPID code utilizes an unstructured mesh for the simulation of the complicated geometries of nuclear reactor components. The conventional ICE scheme that was applied to RELAP5 and COBRA-TF was therefore modified for application to the unstructured mesh. Preliminary calculations with the unstructured semi-implicit ICE scheme have been conducted to verify the numerical method from a qualitative point of view. The preliminary calculation results showed that the present numerical scheme is robust and efficient for the prediction of phase changes and flow transitions due to boiling and flashing. These calculation results also showed the strong coupling between the pressure and void fraction changes. Thus, it is believed that the semi-implicit ICE scheme can be utilized for transient two-phase flows in a component of a nuclear reactor.

  3. Validation of Code ASTEC with LIVE-L1 Experimental Results

    International Nuclear Information System (INIS)

    Bachrata, Andrea

    2008-01-01

    Severe accidents with core melting are considered at the design stage of Generation 3+ Nuclear Power Plants (NPPs). Moreover, there is an effort to apply severe accident management to operating NPPs. One of the main goals of severe accident mitigation is corium localization and stabilization. The two strategies that fulfil this requirement are in-vessel retention (e.g. AP-600, AP-1000) and ex-vessel retention (e.g. EPR). To study the in-vessel retention scenario, a large experimental program and integrated codes have been developed. The LIVE-L1 experimental facility studied the formation of melt pools and melt accumulation in the lower head under different cooling conditions. Nowadays, a new European computer code, ASTEC, is being developed jointly in France and Germany. One of the important steps in ASTEC development in the area of in-vessel retention of corium is its validation against the LIVE-L1 experimental results. Details of the experiment are reported. Results of the application of ASTEC (module DIVA) to the analysis of the test are presented. (author)

  4. Machine medical ethics

    CERN Document Server

    Pontier, Matthijs

    2015-01-01

    The essays in this book, written by researchers from both humanities and sciences, describe various theoretical and experimental approaches to adding medical ethics to a machine in medical settings. Medical machines are in close proximity with human beings, and getting closer: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. In such contexts, machines are undertaking important medical tasks that require emotional sensitivity, knowledge of medical codes, human dignity, and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What theory or theories should constrain medical machine conduct? What design features are required? Should machines share responsibility with humans for the ethical consequences of medical actions? How ought clinical relationships involving machines to be modeled? Is a capacity for e...

  5. Summary of aerosol code-comparison results for LWR aerosol containment tests LA1, LA2, and LA3

    International Nuclear Information System (INIS)

    Wright, A.L.; Wilson, J.H.; Arwood, P.C.

    1987-01-01

    The light-water reactor (LWR) aerosol containment experiments (LACE) are being performed in Richland, Washington, at the Hanford Engineering Development Laboratory under the leadership of an international project board and the Electric Power Research Institute. These tests have two objectives: (1) to investigate, at large scale, the inherent aerosol retention behavior in LWR containments under simulated severe accident conditions, and (2) to provide an experimental data base for validating aerosol behavior and thermal-hydraulic computer codes. Aerosol computer-code comparison activities for the LACE tests are being coordinated at the Oak Ridge National Laboratory. For each of the six experiments, pretest calculations (for code-to-code comparisons) and blind post-test calculations (for code-to-test data comparisons) are being performed. This paper presents a summary of the pretest aerosol-code results for tests LA1, LA2, and LA3

  6. Results from the Veterans Health Administration ICD-10-CM/PCS Coding Pilot Study.

    Science.gov (United States)

    Weems, Shelley; Heller, Pamela; Fenton, Susan H

    2015-01-01

    The Veterans Health Administration (VHA) of the US Department of Veterans Affairs has been preparing for the October 1, 2015, conversion to the International Classification of Diseases, Tenth Revision, Clinical Modification and Procedural Coding System (ICD-10-CM/PCS) for more than four years. The VHA's Office of Informatics and Analytics ICD-10 Program Management Office established an ICD-10 Learning Lab to explore expected operational challenges. This study was conducted to determine the effects of the classification system conversion on coding productivity. ICD codes are integral to VHA business processes and are used for purposes such as clinical studies, performance measurement, workload capture, cost determination, Veterans Equitable Resource Allocation (VERA) determination, morbidity and mortality classification, indexing of hospital records by disease and operations, data storage and retrieval, research purposes, and reimbursement. The data collection for this study occurred in multiple VHA sites across several months using standardized methods. It is commonly accepted that coding productivity will decrease with the implementation of ICD-10-CM/PCS. The findings of this study suggest that the decrease will be more significant for inpatient coding productivity (64.5 percent productivity decrease) than for ambulatory care coding productivity (6.7 percent productivity decrease). This study reveals the following important points regarding ICD-10-CM/PCS coding productivity: 1. Ambulatory care ICD-10-CM coding productivity is not expected to decrease as significantly as inpatient ICD-10-CM/PCS coding productivity. 2. Coder training and type of record (inpatient versus outpatient) affect coding productivity. 3. Inpatient coding productivity is decreased when a procedure requiring ICD-10-PCS coding is present. It is highly recommended that organizations perform their own analyses to determine the effects of ICD-10-CM/PCS implementation on coding productivity.
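The productivity figures quoted above are simple percent decreases in records coded per hour. A trivial sketch of the metric (the throughput numbers below are hypothetical, chosen only to illustrate the reported percentages; the abstract does not give the underlying per-site data):

```python
def productivity_decrease(before: float, after: float) -> float:
    """Percent decrease in coding throughput (records per hour)."""
    return round((before - after) / before * 100, 1)

# Hypothetical ICD-9 -> ICD-10 throughputs, for illustration only.
inpatient = productivity_decrease(3.1, 1.1)    # inpatient coding
ambulatory = productivity_decrease(6.0, 5.6)   # ambulatory care coding
```

Under these assumed numbers the function returns 64.5 and 6.7 percent, matching the study's headline figures.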

  7. Results of comparative RBMK neutron computation using VNIIEF codes (cell computation, 3D statics, 3D kinetics). Final report

    Energy Technology Data Exchange (ETDEWEB)

    Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A. [and others]

    1995-12-31

    In conformity with the protocol of the Workshop under Contract "Assessment of RBMK reactor safety using modern Western Codes", VNIIEF performed a neutronics computation series to compare western and VNIIEF codes and assess whether VNIIEF codes are suitable for RBMK type reactor safety assessment computation. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS, EKRAN codes (improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo). Cell, polycell, burnup computation; (2) 3D computation of static states with the KORAT-3D and NEU codes and comparison with results of computation with the NESTLE code (USA). The computations were performed in the geometry and using the neutron constants presented by the American party; (3) 3D computation of neutron kinetics with the KORAT-3D and NEU codes. These computations were performed in two formulations, both being developed in collaboration with NIKIET. The formulation of the first problem agrees as closely as possible with one of the NESTLE problems and imitates gas bubble travel through the core. The second problem is a model of the RBMK as a whole, with imitation of control and protection system (CPS) control movement in the core.

  8. Simulation of single-phase rod bundle flow. Comparison between CFD-code ESTET, PWR core code THYC and experimental results

    International Nuclear Information System (INIS)

    Mur, J.; Larrauri, D.

    1998-07-01

    Computer simulation of flow in configurations close to pressurized water reactor (PWR) geometry is of great interest for Electricité de France (EDF). Although simulation of the flow through a whole PWR core with an all-purpose CFD code is not yet achievable, such a tool can be quite useful for performing numerical experiments in order to improve the modeling introduced in computer codes devoted to reactor core thermal-hydraulic analysis. Further to simulations in small bare rod bundle configurations, the present study is focused on the simulation, with the CFD code ESTET and the PWR core code THYC, of the flow in the experimental configuration VATICAN-1. ESTET simulation results are compared on the one hand to local velocity and concentration measurements, and on the other hand with subchannel-averaged values calculated by THYC. As far as the comparison with measurements is concerned, ESTET results are quite satisfactory relative to the available experimental data and their uncertainties. The effect of spacer grids and the prediction of the evolution of an unbalanced velocity profile seem to be correctly treated. As far as the comparison with THYC subchannel-averaged values is concerned, the difficulty of a direct comparison between subchannel-averaged and local values is pointed out. ESTET calculated local values are close to experimental local values. ESTET subchannel-averaged values are also close to THYC calculation results. Thus, THYC results are satisfactory even though their direct comparison to local measurements could show some disagreement. (author)

  9. Sub-millimeter planar imaging with positron emitters: EGS4 code simulation and experimental results

    International Nuclear Information System (INIS)

    Bollini, D.; Del Guerra, A.; Di Domenico, G.

    1996-01-01

    Experimental data for Planar Imaging with positron emitters (pulse height, efficiency and spatial resolution) obtained with two matrices of 25 crystals (2 x 2 x 30 mm³ each) of YAP:Ce coupled with a Position Sensitive PhotoMultiplier (Hamamatsu R2486-06) have been reproduced with high accuracy using the EGS4 code. Extensive simulation provides a detailed description of the performance of this type of detector as a function of the matrix granularity, the geometry of the detector and detection threshold. We present the Monte Carlo simulation and the preliminary experimental results of a prototype planar imaging system made of two matrices, each one consisting of 400 (2 x 2 x 30 mm³) crystals of YAP:Ce

  10. Comparison and validation of the results of the AZNHEX v.1.0 code with the MCNP code simulating the core of a fast reactor cooled with sodium

    International Nuclear Information System (INIS)

    Galicia A, J.; Francois L, J. L.; Bastida O, G. E.; Esquivel E, J.

    2016-09-01

    The development of the AZTLAN platform for the analysis and design of nuclear reactors is led by Instituto Nacional de Investigaciones Nucleares (ININ) and divided into four working groups, which have well-defined activities to achieve significant progress in this project individually and jointly. Within these working groups is the users group, whose main task is to use the codes that make up the AZTLAN platform to provide feedback to the developers, and in this way to ensure that the final versions of the codes are efficient and, at the same time, reliable and easy to understand. In this paper we present the results provided by the AZNHEX v.1.0 code when simulating the core of a sodium-cooled fast reactor at steady state. The validation of these results is a fundamental part of the platform development and the responsibility of the users group, so in this research the results obtained with AZNHEX are compared with and analyzed against those provided by the Monte Carlo code MCNP-5, software used and recognized worldwide. A description of the methodology used with MCNP-5 for the calculation of the variables of interest is also presented, together with the differences obtained with respect to those calculated with AZNHEX. (Author)

  11. Untyped Memory in the Java Virtual Machine

    DEFF Research Database (Denmark)

    Gal, Andreas; Probst, Christian; Franz, Michael

    2005-01-01

    We have implemented a virtual execution environment that executes legacy binary code on top of the type-safe Java Virtual Machine by recompiling native code instructions to type-safe bytecode. As it is essentially impossible to infer static typing into untyped machine code, our system emulates untyped memory on top of Java's type system. While this approach allows executing native code on any off-the-shelf JVM, the resulting runtime performance is poor. We propose a set of virtual machine extensions that add type-unsafe memory objects to the JVM. We contend that these JVM extensions do not relax Java's type system, as the same functionality can be achieved in pure Java, albeit much less efficiently.
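The core idea, a flat untyped byte store with types applied only at each access site, can be illustrated outside the JVM as well. A minimal Python sketch (the class and method names are invented for illustration; the actual system recompiles native code to JVM bytecode rather than interpreting it):

```python
import struct

class UntypedMemory:
    """Flat byte-addressable memory: types exist only at the access site,
    as in machine code, while the backing store itself is untyped bytes."""
    def __init__(self, size: int):
        self.mem = bytearray(size)

    def store_f32(self, addr: int, value: float):
        struct.pack_into("<f", self.mem, addr, value)

    def load_i32(self, addr: int) -> int:
        return struct.unpack_from("<i", self.mem, addr)[0]

# A float store read back as an int yields the raw IEEE-754 bit pattern,
# an aliasing access that a typed heap would normally forbid.
m = UntypedMemory(16)
m.store_f32(0, 1.0)
bits = m.load_i32(0)
```

This byte-array emulation is exactly why the pure-Java route is slow: every load and store pays for packing and unpacking, which is what the proposed type-unsafe memory objects avoid.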

  12. Comparison of Severe Accident Results Among SCDAP/RELAP5, MAAP, and MELCOR Codes

    International Nuclear Information System (INIS)

    Wang, T.-C.; Wang, S.-J.; Teng, J.-T.

    2005-01-01

    This paper demonstrates a large-break loss-of-coolant accident (LOCA) sequence of the Kuosheng nuclear power plant (NPP) and a station blackout sequence of the Maanshan NPP with the SCDAP/RELAP5 (SR5), Modular Accident Analysis Program (MAAP), and MELCOR codes. The large-break sequence initiated with double-ended rupture of a recirculation loop. The main steam isolation valves (MSIVs) closed, the feedwater pump tripped, the reactor scrammed, and the high-pressure and low-pressure spray systems of the emergency core cooling system (ECCS) were assumed not to be functional. Therefore, all coolant systems to quench the core were lost. MAAP predicts a longer vessel failure time, and MELCOR predicts a shorter vessel failure time for the large-break LOCA sequence. The station blackout sequence initiated with a loss of all alternating-current (ac) power. The MSIVs closed, the feedwater pump tripped, and the reactor scrammed. The motor-driven auxiliary feedwater system and the high-pressure and low-pressure injection systems of the ECCS were lost because of the loss of all ac power. It was also assumed that the turbine-driven auxiliary feedwater pump was not functional. Therefore, the coolant system to quench the core was also lost. MAAP predicts a longer time of steam generator dryout, a longer time interval between uncovery of the top and bottom of active fuel, and a longer vessel failure time than the SR5 and MELCOR predictions for the station blackout sequence. The three codes give similar results for important phenomena during the accidents, including SG dryout, core uncovery, cladding oxidation, cladding failure, molten pool formation, debris relocation to the lower plenum, and vessel head failure. This paper successfully demonstrates the large-break LOCA sequence of the Kuosheng NPP and the station blackout sequence of the Maanshan NPP.

  13. Mathematical models and illustrative results for the RINGBEARER II monopole/dipole beam-propagation code

    International Nuclear Information System (INIS)

    Chambers, F.W.; Masamitsu, J.A.; Lee, E.P.

    1982-01-01

    RINGBEARER II is a linearized monopole/dipole particle simulation code for studying intense relativistic electron beam propagation in gas. In this report the mathematical models utilized for beam particle dynamics and pinch field computation are delineated. Difficulties encountered in code operations and some remedies are discussed. Sample output is presented detailing the diagnostics and the methods of display and analysis utilized

  14. NIRS report of utilization of MRI machine for research. Results in 2003

    International Nuclear Information System (INIS)

    2006-04-01

    The report is an achievement of cooperative research and development by private and official facilities of the National Institute of Radiological Sciences (NIRS) MRI machine, and its applied and medical uses in 2003. Contained are the reports on the magnet (1 topic), antennae (4), physical mensurations (5), basic biological researches (6), basic studies on human body (6) and clinical studies (13), which are finally summarized in the list of the personnel, event calendar and published scientific papers. The basic studies by the MRI involve those of the brain damage by heavy particle irradiation, pediatric surgical diseases by MR-microscopy, implanted tumor volumetry in the rat, biodistribution of BPA-gadolinium-diethylenetriamine pentaacetic acid (Gd-DTPA) for neutron capture therapy, ultra-high speed microscopic MRI measurement of microcirculation in the tumor, micro-imaging of human eye, hepatic glycogen content by MRS, flow analysis of cerebrospinal fluid, autopsy imaging system, numerical phantom of human body and so on. Clinical studies involve those of the drug metabolism and disposition, efficacy evaluation of radiotherapy, PET-CT-MRI image, schizophrenia, GSH detection, MP4A-PET image standardization, intracranial lymph systems, brain function, GSH in schizophrenia, obstructive hypertrophic cardiomyopathy, cholangiography, glycosaminoglycan in cartilage and high-speed imaging of prostate cancer by sensitivity encoding. (T.I.)

  15. Towards Intelligent Interpretation of Low Strain Pile Integrity Testing Results Using Machine Learning Techniques.

    Science.gov (United States)

    Cui, De-Mi; Yan, Weizhong; Wang, Xiao-Quan; Lu, Lie-Min

    2017-10-25

    Low strain pile integrity testing (LSPIT), due to its simplicity and low cost, is one of the most popular NDE methods used in pile foundation construction. While performing LSPIT in the field is generally quite simple and quick, determining the integrity of the test piles by analyzing and interpreting the test signals (reflectograms) is still a manual process performed by experienced experts only. For foundation construction sites where the number of piles to be tested is large, it may take days before the expert can finish interpreting all of the piles and deliver the integrity assessment report. Techniques that can automate test signal interpretation, thus shortening LSPIT's turnaround time, are of great business value and in great demand. Motivated by this need, in this paper we develop a computer-aided reflectogram interpretation (CARI) methodology that can interpret a large number of LSPIT signals quickly and consistently. The methodology, built on advanced signal processing and machine learning technologies, can be used to assist the experts in performing both qualitative and quantitative interpretation of LSPIT signals. Specifically, the methodology can ease the experts' interpretation burden by screening all test piles quickly and identifying a small number of suspected piles for experts to perform manual, in-depth interpretation. We demonstrate the methodology's effectiveness using the LSPIT signals collected from a number of real-world pile construction sites. The proposed methodology can potentially enhance LSPIT and make it even more efficient and effective in quality control of deep foundation construction.
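    The screening step described in the abstract (automatically flagging a small number of suspect piles for expert review) can be sketched with a generic signal-processing heuristic. The feature, threshold, and all parameter values below are illustrative assumptions for demonstration only, not the paper's actual CARI method:

    ```python
    def reflection_energy_ratio(signal, dt, wave_speed, pile_length):
        """Ratio of intermediate-reflection energy to input-pulse energy in a
        velocity-time record, before the expected toe reflection arrives.
        All names and parameters are illustrative, not the CARI implementation."""
        toe_time = 2.0 * pile_length / wave_speed     # expected toe-reflection arrival
        toe_idx = int(toe_time / dt)
        head = signal[:toe_idx]
        pulse_end = max(1, toe_idx // 8)              # assume the input pulse is early
        pulse_energy = sum(v * v for v in head[:pulse_end])
        mid_energy = sum(v * v for v in head[pulse_end:])
        return mid_energy / (pulse_energy + 1e-12)

    def screen_piles(records, dt, wave_speed, pile_length, threshold=0.05):
        """Return indices of piles with strong mid-length reflections, i.e.
        candidates for manual, in-depth expert interpretation."""
        return [i for i, sig in enumerate(records)
                if reflection_energy_ratio(sig, dt, wave_speed, pile_length) > threshold]

    # Two synthetic 50-sample records (dt = 0.1 ms, c = 4000 m/s, L = 10 m):
    synthetic = [
        [1.0] * 3 + [0.0] * 47,                       # clean pile: pulse, then quiet until toe
        [1.0] * 3 + [0.0] * 22 + [0.5] + [0.0] * 24,  # suspect pile: mid-length reflection
    ]
    print(screen_piles(synthetic, dt=1e-4, wave_speed=4000.0, pile_length=10.0))  # [1]
    ```

    A real system would of course replace this hand-set threshold with a trained classifier, which is the role machine learning plays in the paper.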

  16. COSY Control Status. First results with rapid prototyped man-machine interface for accelerator control

    Energy Technology Data Exchange (ETDEWEB)

    Hacker, U [Forschungszentrum Juelich, Postfach 1913, 52425 Juelich (Germany); Haberbosch, C [Forschungszentrum Juelich, Postfach 1913, 52425 Juelich (Germany); Henn, K [Forschungszentrum Juelich, Postfach 1913, 52425 Juelich (Germany); Weinert, A [Forschungszentrum Juelich, Postfach 1913, 52425 Juelich (Germany)

    1994-12-15

    The experience gained with the COSY Control System after a six-month commissioning period followed by a six-month production period will be presented. The COSY Control System runs approximately 300 VME and VXI target systems using a total of about 1000 CPUs; the systems are driven by the diskless operating environment RT/OS, hosted by eight workcells. Application software is implemented using object-oriented programming paradigms. All accelerator components become interface functions as instances of an abstract device model class. Methods defined here present an abstract picture of the accelerator, giving immediate access to device states and parameters. Operator interaction is defined by building views and controllers for the model. Higher-level functions, such as defining an acceleration cycle, are easily developed and modified with the accelerator connected on-line to the model. In the first year of COSY operation, the object-based approach for a control system, together with a rapid-prototyped man-machine interface, has brought to light the potential of new functions such as on-line, real-time programming on a running system, yielding high programming performance. The advantages of this approach had not, until now, been fully appreciated. ((orig.))

  17. NIRS report of utilization of MRI machine for research. Results in 2004

    International Nuclear Information System (INIS)

    2007-04-01

    The report is an achievement of cooperative research and development by private and official facilities of the National Institute of Radiological Sciences (NIRS) MRI machine, and its applied and medical uses in 2004. Contained are the reports on the magnet (1 topic), RF coils (6), basic studies on measurements (4), biological studies on measurements (4) and clinical studies on measurements (18), which are finally summarized in the list of the personnel, event calendar and published scientific papers. The basic studies by the MRI involve those of metabolism and molecular transport measurements using MRI contrast agents, macro-analysis and micro-analysis of cerebral blood flow by CFD, samples for radiation dose measurements using polymer gel materials and the MRS method for quantification of substances in the body. Biological studies involve those of the brain damage by heavy particle irradiation, pediatric surgical diseases by MR-microscopy, PET data analysis of the monkey taking MPTP and multiple sclerosis mice. Clinical studies involve those of blood vessel coupling of cerebral nerve, micro-imaging of human eye, hepatic glycogen content by MRS, autopsy imaging, numerical phantom of human body, PET-CT-MRI image, schizophrenia, GSH detection, MP4A-PET image standardization, MR imaging of perivascular space, brain function, GSH in schizophrenia, occlusive arterial disease of lower extremity, cholangiography, glycosaminoglycan in cartilage, MR imaging of wrist joint, high-speed imaging of prostate cancer by sensitivity encoding and volumetry of rats' tissues. (J.P.N.)

  18. Uncertainty analysis for results of thermal hydraulic codes of best-estimate-type

    International Nuclear Information System (INIS)

    Alva N, J.

    2010-01-01

    In this thesis, some fundamental knowledge is presented about uncertainty analysis and about the diverse methodologies applied in the study of nuclear power plant transient events, particularly those related to thermal hydraulic phenomena. These concepts and methodologies come from a wide bibliographical survey of the nuclear power field. Methodologies for uncertainty analysis have been developed by quite diverse institutions, and they have been widely used worldwide for application to results from best-estimate-type computer codes in nuclear reactor thermal hydraulics and safety analysis. The main uncertainty sources, types of uncertainties, and aspects related to best-estimate modeling and methods are also introduced. Once the main bases of uncertainty analysis have been set and some of the known methodologies have been introduced, the CSAU methodology, which is applied in the analyses, is presented in detail. The main objective of this thesis is to compare the results of an uncertainty and sensitivity analysis using the Response Surface Technique with those obtained by applying Wilks' formula, for a loss-of-coolant experiment and a power-rise event in a BWR. Both techniques are options in the uncertainty and sensitivity analysis part of the CSAU methodology, which was developed for the analysis of transients and accidents at nuclear power plants and is the basis of most of the methodologies used in the licensing of nuclear power plants practically everywhere. Finally, the results of applying both techniques are compared and discussed. (Author)
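    The Wilks formula referred to in the abstract fixes the number of code runs needed so that the largest (or order-th largest) sampled output bounds a given fraction of the output population at a given confidence level, with no assumption on the output distribution. A minimal sketch of the one-sided sample-size calculation:

    ```python
    def wilks_sample_size(coverage, confidence, order=1):
        """Smallest number N of code runs such that the order-th largest output
        is a one-sided tolerance limit with the given coverage/confidence.
        First order:  1 - coverage**N >= confidence
        Second order: 1 - coverage**N - N*(1-coverage)*coverage**(N-1) >= confidence
        """
        n = order
        while True:
            if order == 1:
                conf = 1.0 - coverage ** n
            else:  # order == 2
                conf = (1.0 - coverage ** n
                        - n * (1.0 - coverage) * coverage ** (n - 1))
            if conf >= confidence:
                return n
            n += 1

    # The classic 95%/95% criterion used in best-estimate-plus-uncertainty analyses:
    print(wilks_sample_size(0.95, 0.95))           # first order  -> 59 runs
    print(wilks_sample_size(0.95, 0.95, order=2))  # second order -> 93 runs
    ```

    This is why 59 (or 93) code runs appear so often in BEPU licensing studies: the run count depends only on the chosen coverage and confidence, not on the number of uncertain input parameters.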

  19. Battelle integrity of nuclear piping program. Summary of results and implications for codes/standards

    International Nuclear Information System (INIS)

    Miura, Naoki

    2005-01-01

    The BINP (Battelle Integrity of Nuclear Piping) program was proposed by Battelle to elaborate pipe fracture evaluation methods and to improve LBB and in-service flaw evaluation criteria. The program was conducted from October 1998 to September 2003. In Japan, CRIEPI participated in the program on behalf of electric utilities and fabricators to keep abreast of the technical background for possible future revision of LBB and in-service flaw evaluation standards and to investigate the issues that needed to be reflected in current domestic standards. A series of results obtained from the program has been well utilized for the new LBB Regulatory Guide Program by the USNRC and for the proposal of revised in-service flaw evaluation criteria to the ASME Code Committee. The results were assessed as to whether they had implications for existing or future domestic standards. As a result, the impact of many of these issues, which were feared to adversely affect LBB approval or allowable flaw sizes in flaw evaluation criteria, was found to be relatively minor under actual plant conditions. At the same time, some issues that need to be resolved to develop advanced and rational standards in the future were identified. (author)

  20. ADREA-I: A transient three dimensional transport code for atmospheric and other applications - some preliminary results

    International Nuclear Information System (INIS)

    Bartzis, G.

    1985-02-01

    In this work a general description of the ADREA-I code is presented and some preliminary results are discussed. ADREA-I is a transient three-dimensional computer code aimed at transport analysis, with particular emphasis on atmospheric dispersion under any realistic terrain conditions (complex or not), applicable to the planetary boundary layer over distances extending up to a hundred kilometers or more. The complex-geometry applications and the reasonable results obtained constitute a solid indication of the broad capability of the code. (author)

  1. NIRS report of utilization of MRI machine for research. Results in 2005

    International Nuclear Information System (INIS)

    2007-04-01

    The report is an achievement of cooperative research and development by private and official facilities of the National Institute of Radiological Sciences (NIRS) MRI machine, and its applied and medical uses in 2005. Contained are the reports on the magnet (1 topic), RF coils (5), basic studies on measurements (11), biological studies on measurements (4) and clinical studies on measurements (18), which are finally summarized in the list of the personnel, event calendar and published scientific papers. The basic studies by the MRI involve those of digestive tract's movement, samples for radiation dose measurements using polymer gel materials, disposition tracing of 5-FU by 19F chemical shift images, 19F images at 3T, 3T MRS, 13C measurement at 7T MR, spectrum at 7T MR, development of measurement system for elastic modulus distribution in living tissues, measurement of biological function by 17O MRI and measurement of acetylcholinesterase in brain. Biological studies involve those of the brain damage by heavy particle irradiation, functional brain mapping of monkey's abstract operation using PET, multiple sclerosis mice and development of a new cardiac function evaluation method. Clinical studies involve those of blood vessel coupling of cerebral nerve, micro-imaging of human eye, autopsy imaging, numerical phantom of human body, measurements of physiological parameters of brain, measurement of sugar metabolism function, radiation therapy evaluation method of brain tumors, metabolism analysis by 7T MR spectroscopy, statistical test of AChE activity, measurement of beta-amyloid in brain by Pittsburgh Compound-B, brain function, development of longitudinal relaxation time calculation software (T1Wizard), GSH in schizophrenia, evaluation method of forms and functions of hearts, occlusive arterial disease of lower extremity, MRI image of prostate using 3.0T, cholangiography, elucidation of activation mechanism of higher brain network by occlusal chew stimulation and MR

  2. Potential Job Creation in Nevada as a Result of Adopting New Residential Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Michael J.; Niemeyer, Jackie M.

    2013-09-01

    Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.

  3. Potential Job Creation in Tennessee as a Result of Adopting New Residential Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Michael J.; Niemeyer, Jackie M.

    2013-09-01

    Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.

  4. Potential Job Creation in Rhode Island as a Result of Adopting New Residential Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Michael J.; Niemeyer, Jackie M.

    2013-09-01

    Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.

  5. Potential Job Creation in Minnesota as a Result of Adopting New Residential Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Michael J.; Niemeyer, Jackie M.

    2013-09-01

    Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.

  6. First Results for Fluid Dynamics, Neutronics and Fission Product Behaviour in HTR applying the HTR Code Package (HCP) Prototype

    International Nuclear Information System (INIS)

    Allelein, H.-J.; Kasselmann, S.; Xhonneux, A.; Lambertz, D.

    2014-01-01

    To simulate the different aspects of High Temperature Reactor (HTR) cores, a variety of specialized computer codes have been developed at Forschungszentrum Jülich (IEK-6) and Aachen University (LRST) in the last decades. In order to preserve knowledge, to overcome present limitations and to make these codes applicable to modern computer clusters, these individual programs are being integrated into a consistent code package. The so-called HTR code package (HCP) couples the related and recently applied physics models in a highly integrated manner and therefore allows simulating phenomena with higher precision in space and time, while at the same time applying state-of-the-art programming techniques and standards. This paper provides an overview of the status of the HCP and reports on first benchmark results for an HCP prototype which couples the fluid dynamics and time-dependent neutronics code MGT-3D, the burn-up code TNT and the fission product release code STACY. Due to the coupling of MGT-3D and TNT, a first step towards a new reactor operation and accident simulation code was made, where nuclide concentrations calculated by TNT are fed back into a new spectrum code of the HCP. Selected operation scenarios of the HTR-Module 200 concept plant and the HTTR were chosen to be simulated with the HCP prototype. The fission product release during normal operation conditions will be calculated with STACY based on a core status derived from SERPENT and MGT-3D. Comparisons will be shown against data generated by the legacy codes VSOP99/11, NAKURE and FRESCO-II. (author)

  7. Code comparison results for the loft LP-FP-2 experiment

    International Nuclear Information System (INIS)

    Merilo, M.; Mecham, D.C.

    1991-01-01

    Computer code calculations are compared with thermal-hydraulic and fission product release, transport, and deposition data obtained from the OECD-LOFT LP-FP-2 experiment. Except for the MAAP code, which is a fully integrated severe accident code, the thermal-hydraulic and fission product behavior were calculated with different codes. Six organizations participated in the thermal-hydraulic portion of the code comparison exercise. These calculations were performed with RELAP5, SCDAP/RELAP5, and MAAP. The comparisons show generally well-developed capabilities to determine the thermal-hydraulic conditions during the early stages of a severe core damage accident. Four participants submitted detailed fission product behavior calculations. Except for MAAP, as stated previously, the fission product inventory, core damage, fission product release, transport and deposition were calculated independently with different codes. Much larger differences than observed for the thermal-hydraulic comparison were evident. The fission product inventory calculations were generally in good agreement with each other. Large differences were observed for release fractions and amounts of deposition. Net release calculations from the primary system were generally accurate within a factor of two or three for the more important fission products.

  8. Apar-T: code, validation, and physical interpretation of particle-in-cell results

    Science.gov (United States)

    Melzani, Mickaël; Winisdoerffer, Christophe; Walder, Rolf; Folini, Doris; Favre, Jean M.; Krastanov, Stefan; Messmer, Peter

    2013-10-01

    simulations. The other is that the level of electric field fluctuations scales as 1/ΛPIC ∝ p. We provide a corresponding exact expression, taking into account the finite superparticle size. We confirm both expectations with simulations. Fourth, we compare the Vlasov-Maxwell theory, often used for code benchmarking, to the PIC model. The former describes a phase-space fluid with Λ = + ∞ and no correlations, while the PIC plasma features a small Λ and a high level of correlations when compared to a real plasma. These differences have to be kept in mind when interpreting and validating PIC results against the Vlasov-Maxwell theory and when modeling real physical plasmas.

  9. Coupling External Radiation Transport Code Results to the GADRAS Detector Response Function

    International Nuclear Information System (INIS)

    Mitchell, Dean J.; Thoreson, Gregory G.; Horne, Steven M.

    2014-01-01

    Simulating gamma spectra is useful for analyzing special nuclear materials. Gamma spectra are influenced not only by the source and the detector, but also by the external, and potentially complex, scattering environment. The scattering environment can make accurate representations of gamma spectra difficult to obtain. By coupling the Monte Carlo Nuclear Particle (MCNP) code with the Gamma Detector Response and Analysis Software (GADRAS) detector response function, gamma spectrum simulations can be computed with a high degree of fidelity even in the presence of a complex scattering environment. Traditionally, GADRAS represents the external scattering environment with empirically derived scattering parameters. By modeling the external scattering environment in MCNP and using the results as input for the GADRAS detector response function, gamma spectra can be obtained with a high degree of fidelity. This method was verified with experimental data obtained in an environment with a significant amount of scattering material. The experiment used both gamma-emitting sources and moderated and bare neutron-emitting sources. The sources were modeled using GADRAS and MCNP in the presence of the external scattering environment, producing accurate representations of the experimental data.

  10. Results and code prediction comparisons of lithium-air reaction and aerosol behavior tests

    International Nuclear Information System (INIS)

    Jeppson, D.W.

    1986-03-01

    The Hanford Engineering Development Laboratory (HEDL) Fusion Safety Support Studies include evaluation of potential safety and environmental concerns associated with the use of liquid lithium as a breeder and coolant for fusion reactors. Potential mechanisms for volatilization and transport of radioactive metallic species associated with breeder materials are of particular interest. Liquid lithium pool-air reaction and aerosol behavior tests were conducted with lithium masses up to 100 kg within the 850-m³ containment vessel in the Containment Systems Test Facility. Lithium-air reaction rates, aerosol generation rates, aerosol behavior and characterization, as well as containment atmosphere temperature and pressure responses were determined. Pool-air reaction and aerosol behavior test results were compared with computer code calculations for reaction rates, containment atmosphere response, and aerosol behavior. The volatility of potentially radioactive metallic species from a lithium pool-air reaction was measured. The response of various aerosol detectors to the aerosol generated was determined. Liquid lithium spray tests in air and in nitrogen atmospheres were conducted with lithium temperatures of about 427 and 650 °C. Lithium reaction rates, containment atmosphere response, and aerosol generation and characterization were determined for these spray tests

  11. Offshore Code Comparison Collaboration within IEA Wind Task 23: Phase IV Results Regarding Floating Wind Turbine Modeling; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Jonkman, J.; Larsen, T.; Hansen, A.; Nygaard, T.; Maus, K.; Karimirad, M.; Gao, Z.; Moan, T.; Fylling, I.

    2010-04-01

    Offshore wind turbines are designed and analyzed using comprehensive simulation codes that account for the coupled dynamics of the wind inflow, aerodynamics, elasticity, and controls of the turbine, along with the incident waves, sea current, hydrodynamics, and foundation dynamics of the support structure. This paper describes the latest findings of the code-to-code verification activities of the Offshore Code Comparison Collaboration, which operates under Subtask 2 of the International Energy Agency Wind Task 23. In the latest phase of the project, participants used an assortment of codes to model the coupled dynamic response of a 5-MW wind turbine installed on a floating spar buoy in 320 m of water. Code predictions were compared from load-case simulations selected to test different model features. The comparisons have resulted in a greater understanding of offshore floating wind turbine dynamics and modeling techniques, and better knowledge of the validity of various approximations. The lessons learned from this exercise have improved the participants' codes, thus improving the standard of offshore wind turbine modeling.

  12. Offshore code comparison collaboration continuation within IEA Wind Task 30: Phase II results regarding a floating semisubmersible wind system

    DEFF Research Database (Denmark)

    Robertson, Amy; Jonkman, Jason M.; Vorpahl, Fabian

    2014-01-01

    Offshore wind turbines are designed and analyzed using comprehensive simulation tools (or codes) that account for the coupled dynamics of the wind inflow, aerodynamics, elasticity, and controls of the turbine, along with the incident waves, sea current, hydrodynamics, mooring dynamics, and foundation dynamics of the support structure. This paper describes the latest findings of the code-to-code verification activities of the Offshore Code Comparison Collaboration Continuation project, which operates under the International Energy Agency Wind Task 30. In the latest phase of the project, participants used an assortment of simulation codes to model the coupled dynamic response of a 5-MW wind turbine installed on a floating semisubmersible in 200 m of water. Code predictions were compared from load case simulations selected to test different model features. The comparisons have resulted...

  13. First results for fluid dynamics, neutronics and fission product behavior in HTR applying the HTR code package (HCP) prototype

    Energy Technology Data Exchange (ETDEWEB)

    Allelein, H.-J., E-mail: h.j.allelein@fz-juelich.de [Forschungszentrum Jülich, 52425 Jülich (Germany); Institute for Reactor Safety and Reactor Technology, RWTH Aachen University, 52064 Aachen (Germany); Kasselmann, S.; Xhonneux, A.; Tantillo, F.; Trabadela, A.; Lambertz, D. [Forschungszentrum Jülich, 52425 Jülich (Germany)

    2016-09-15

    To simulate the different aspects of High Temperature Reactor (HTR) cores, a variety of specialized computer codes have been developed at Forschungszentrum Jülich (IEK-6) and Aachen University (LRST) in the last decades. In order to preserve knowledge, to overcome present limitations and to make these codes applicable to modern computer clusters, these individual programs are being integrated into a consistent code package. The so-called HTR code package (HCP) couples the related and recently applied physics models in a highly integrated manner and therefore allows simulating phenomena with higher precision in space and time, while at the same time applying state-of-the-art programming techniques and standards. This paper provides an overview of the status of the HCP and reports on first benchmark results for an HCP prototype which couples the fluid dynamics and time-dependent neutronics code MGT-3D, the burn-up code TNT and the fission product release code STACY. Due to the coupling of MGT-3D and TNT, a first step towards a new reactor operation and accident simulation code was made, where nuclide concentrations calculated by TNT lead to new cross sections, which are fed back into MGT-3D. Selected operation scenarios of the HTR-Module 200 concept plant and the HTTR were chosen to be simulated with the HCP prototype. The fission product release during normal operation conditions will be calculated with STACY based on a core status derived from SERPENT and MGT-3D. Comparisons will be shown against data generated by SERPENT and the legacy codes VSOP99/11, NAKURE and FRESCO-II.

  14. SwingStates: adding state machines to the swing toolkit

    OpenAIRE

    Appert , Caroline; Beaudouin-Lafon , Michel

    2006-01-01

    This article describes SwingStates, a library that adds state machines to the Java Swing user interface toolkit. Unlike traditional approaches, which use callbacks or listeners to define interaction, state machines provide a powerful control structure and localize all of the interaction code in one place. SwingStates takes advantage of Java's inner classes, providing programmers with a natural syntax and making it easier to follow and debug the resulting code. SwingSta...

  15. Energy Consumption Model and Measurement Results for Network Coding-enabled IEEE 802.11 Meshed Wireless Networks

    DEFF Research Database (Denmark)

    Paramanathan, Achuthan; Rasmussen, Ulrik Wilken; Hundebøll, Martin

    2012-01-01

    This paper presents an energy model and energy measurements for network coding enabled wireless meshed networks based on IEEE 802.11 technology. The energy model and the energy measurement testbed is limited to a simple Alice and Bob scenario. For this toy scenario we compare the energy usages...... for a system with and without network coding support. While network coding reduces the number of radio transmissions, the operational activity on the devices due to coding will be increased. We derive an analytical model for the energy consumption and compare it to real measurements for which we build...... a flexible, low cost tool to be able to measure at any given node in a meshed network. We verify the precision of our tool by comparing it to a sophisticated device. Our main results in this paper are the derivation of an analytical energy model, the implementation of a distributed energy measurement testbed...

  16. Results of a survey on accident and safety analysis codes, benchmarks, verification and validation methods

    International Nuclear Information System (INIS)

    Lee, A.G.; Wilkin, G.B.

    1995-01-01

    This report is a compilation of the information submitted by AECL, CIAE, JAERI, ORNL and Siemens in response to a need identified at the 'Workshop on R and D Needs' at the IGORR-3 meeting. The survey compiled information on the national standards applied to the Safety Quality Assurance (SQA) programs undertaken by the participants. Information was assembled for the computer codes and nuclear data libraries used in accident and safety analyses for research reactors and the methods used to verify and validate the codes and libraries. Although the survey was not comprehensive, it provides a basis for exchanging information of common interest to the research reactor community

  17. Armature reaction effects on a high temperature superconducting field winding of a synchronous machine: experimental results

    DEFF Research Database (Denmark)

    Mijatovic, Nenad; Jensen, Bogi Bech

    2014-01-01

    This paper presents experimental results from the Superwind laboratory setup. Particular focus in the paper is placed on describing and quantifying the influence of armature reaction on the performance of the HTS field winding. The presented experimental results have confirmed the HTS field winding...

  18. Python for probability, statistics, and machine learning

    CERN Document Server

    Unpingco, José

    2016-01-01

    This book covers the key ideas that link probability, statistics, and machine learning illustrated using Python modules in these areas. The entire text, including all the figures and numerical results, is reproducible using the Python codes and their associated Jupyter/IPython notebooks, which are provided as supplementary downloads. The author develops key intuitions in machine learning by working meaningful examples using multiple analytical methods and Python codes, thereby connecting theoretical concepts to concrete implementations. Modern Python modules like Pandas, Sympy, and Scikit-learn are applied to simulate and visualize important machine learning concepts like the bias/variance trade-off, cross-validation, and regularization. Many abstract mathematical ideas, such as convergence in probability theory, are developed and illustrated with numerical examples. This book is suitable for anyone with an undergraduate-level exposure to probability, statistics, or machine learning and with rudimentary knowl...
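The bias/variance trade-off and cross-validation that the book illustrates with Python can be sketched by hand; the quadratic test data, seeds and function names below are our own illustration, not taken from the book:

```python
import numpy as np

def kfold_mse(x, y, degree, k=5, seed=0):
    """Estimate out-of-sample MSE of a polynomial fit via k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coef = np.polyfit(x[train], y[train], degree)       # fit on k-1 folds
        pred = np.polyval(coef, x[test])                    # score on held-out fold
        errs.append(np.mean((y[test] - pred) ** 2))
    return float(np.mean(errs))

# Noisy samples from a quadratic: a degree-2 model should generalize well,
# while an underfit constant model (degree 0) carries high bias.
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 60)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(0.0, 0.2, x.size)
mse2 = kfold_mse(x, y, degree=2)
mse9 = kfold_mse(x, y, degree=9)
```

Comparing `mse2` against the cross-validated error of an underfit or overfit model makes the trade-off concrete without any plotting.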

  19. THE RESULTS OF A STUDY OF THE BOILING OF THE OZONE-SAFE REFRIGERANT R410A IN THE EVAPORATORS OF REFRIGERATING MACHINES

    Directory of Open Access Journals (Sweden)

    V. G. Bukin

    2012-01-01

    Full Text Available The results of experimental research on boiling heat transfer of the ozone-friendly refrigerant R410A in the evaporators of refrigerating machines are presented, together with the possibility of its use in place of the prohibited refrigerant R22.

  20. Simulating Coupling Complexity in Space Plasmas: First Results from a new code

    Science.gov (United States)

    Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.

    2005-12-01

    The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial necessary development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing 3 simulation technologies: 1) Computational fluid dynamics (hydrodynamics or magneto-hydrodynamics-MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas, and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present, this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will advance our understanding of the physics of neutral and charged gases enormously. Besides making major advances in basic plasma physics and neutral gas problems, this project will address 3 Grand Challenge space physics problems that reflect our research interests: 1) To develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle. 2) To develop a coronal

  1. Effect of Micro Electrical Discharge Machining Process Conditions on Tool Wear Characteristics: Results of an Analytic Study

    DEFF Research Database (Denmark)

    Puthumana, Govindan; P., Rajeev

    2016-01-01

    Micro electrical discharge machining is one of the established techniques to manufacture high aspect ratio features on electrically conductive materials. This paper presents the results and inferences of an analytical study for estimating the effect of process conditions on tool electrode wear...... characteristics in the micro-EDM process. A new approach with two novel factors anticipated to directly control the material removal mechanism from the tool electrode is proposed, using a discharge energy factor (DEf) and a dielectric flushing factor (DFf). The results showed that the correlation between the tool wear rate...... (TWR) and the factors is poor. Thus, individual effects of each factor on TWR are analyzed. The factors selected for the study of individual effects are pulse on-time, discharge peak current, gap voltage and gap flushing pressure. The tool wear rate decreases linearly with an increase in the pulse on...

  2. Code portability and data management considerations in the SAS3D LMFBR accident-analysis code

    International Nuclear Information System (INIS)

    Dunn, F.E.

    1981-01-01

    The SAS3D code was produced from a predecessor in order to reduce or eliminate interrelated problems in the areas of code portability, the large size of the code, inflexibility in the use of memory and the size of cases that can be run, code maintenance, and running speed. Many conventional solutions, such as variable dimensioning, disk storage, virtual memory, and existing code-maintenance utilities were not feasible or did not help in this case. A new data management scheme was developed, coding standards and procedures were adopted, special machine-dependent routines were written, and a portable source code processing code was written. The resulting code is quite portable, quite flexible in the use of memory and the size of cases that can be run, much easier to maintain, and faster running. SAS3D is still a large, long running code that only runs well if sufficient main memory is available

  3. Assessment of national dosimetry quality audits results for teletherapy machines from 1989 to 2015.

    Science.gov (United States)

    Muhammad, Wazir; Ullah, Asad; Mahmood, Khalid; Matiullah

    2016-01-01

    To ensure accuracy in radiation dose delivery, external dosimetry quality audits are as important as the routine dosimetry performed at clinics. To this end, a dosimetry quality audit was organized by the Secondary Standard Dosimetry Laboratory (SSDL) of the Pakistan Institute of Nuclear Science and Technology (PINSTECH) at the national level to investigate and minimize uncertainties involved in the measurement of absorbed dose, and to improve the accuracy of dose measurement at different radiotherapy hospitals. A total of 181 dosimetry quality audits (i.e., 102 of Co-60 and 79 of linear accelerators) for teletherapy units installed at 22 different sites were performed from 1989 to 2015. The percent deviations between users' calculated/stated doses and the doses evaluated during on-site dosimetry visits were calculated, and the results were analyzed with respect to the limits of ± 2.5% (ICRU "optimal model"), ± 3.0% (IAEA on-site dosimetry visit limit) and ± 5.0% (ICRU minimal or "lowest acceptable" model). The results showed that out of 181 total on-site dosimetry visits, 20.44%, 16.02%, and 4.42% were outside the acceptable limits of ± 2.5%, ± 3.0%, and ± 5.0%, respectively. A proper ongoing quality assurance program, adherence to the recommendations of the followed protocols, and properly calibrated thermometers, pressure gauges, and humidity meters at radiotherapy hospitals are essential for maintaining consistency and uniformity of absorbed dose measurements for precision in dose delivery.
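The percent-deviation bookkeeping used in such audits is simple to sketch; the function names and the 2.07/2.00 Gy figures below are illustrative, not values from the survey:

```python
def percent_deviation(stated_dose, measured_dose):
    """Percent deviation of the user's stated dose from the audit-measured dose."""
    return 100.0 * (stated_dose - measured_dose) / measured_dose

def exceeded_limits(dev, limits=(2.5, 3.0, 5.0)):
    """Return the tolerance bands (in %) that the deviation falls outside of."""
    return [lim for lim in limits if abs(dev) > lim]

# Hypothetical visit: the hospital states 2.07 Gy, the audit measures 2.00 Gy.
dev = percent_deviation(2.07, 2.00)   # 3.5% deviation
bands = exceeded_limits(dev)          # outside +/-2.5% and +/-3.0%, inside +/-5.0%
```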

  4. Comparison of Analytical and Measured Performance Results on Network Coding in IEEE 802.11 Ad-Hoc Networks

    DEFF Research Database (Denmark)

    Zhao, Fang; Médard, Muriel; Hundebøll, Martin

    2012-01-01

    CATWOMAN that can run on standard WiFi hardware. We present an analytical model to evaluate the performance of COPE in simple networks, and our results show the excellent predictive quality of this model. By closely examining the performance in two simple topologies, we observe that the coding gain results...

  5. Validation of one-dimensional module of MARS 2.1 computer code by comparison with the RELAP5/MOD3.3 developmental assessment results

    International Nuclear Information System (INIS)

    Lee, Y. J.; Bae, S. W.; Chung, B. D.

    2003-02-01

    This report records the results of the code validation for the one-dimensional module of the MARS 2.1 thermal hydraulics analysis code by means of comparison with the RELAP5/MOD3.3 computer code. For the validation calculations, simulations of the RELAP5 code developmental assessment problems, which consist of 22 simulation problems in 3 categories, were selected. The results of the 3 categories of simulations demonstrate that the one-dimensional module of the MARS 2.1 code and the RELAP5/MOD3.3 code are essentially the same code. This is expected, as the two codes have basically the same set of field equations, constitutive equations and main thermal hydraulic models. The results suggest that the high level of code validity of RELAP5/MOD3.3 can be directly applied to the MARS one-dimensional module

  6. Preliminary In-vivo Results For Spatially Coded Synthetic Transmit Aperture Ultrasound Based On Frequency Division

    DEFF Research Database (Denmark)

    Gran, Fredrik; Hansen, Kristoffer Lindskov; Jensen, Jørgen Arendt

    2006-01-01

    This paper investigates the possibility of using spatial coding based on frequency division for in-vivo synthetic transmit aperture (STA) ultrasound imaging. When using spatial encoding for STA, it is possible to use several transmitters simultaneously and separate the signals at the receiver....... This increases the maximum transmit power compared to conventional STA, where only one transmitter can be active. The signal-to-noise ratio can therefore be increased and better penetration can be obtained. For frequency division, the coding is achieved by designing a number of transmit waveforms with disjoint...... spectral support, spanning the passband of the ultrasound transducer. The signals can therefore be separated at the receiver using matched filtering. The method is tested using a commercial linear array transducer with a center frequency of 9 MHz and 68% fractional bandwidth. In this paper, the transmit...
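The separation principle described in this record, simultaneous transmit waveforms with disjoint spectral support recovered by matched filtering, can be illustrated with a toy sketch; plain orthogonal tones stand in for the actual transducer waveforms, and all numbers are illustrative:

```python
import numpy as np

fs = 100_000                          # sample rate [Hz]
T = 0.001                             # waveform duration: 1 ms
t = np.arange(int(fs * T)) / fs

# Two "transmit waveforms" with disjoint spectral support: an integer number
# of cycles over the window makes them exactly orthogonal.
w1 = np.sin(2 * np.pi * 5_000 * t)    # 5 cycles at 5 kHz
w2 = np.sin(2 * np.pi * 9_000 * t)    # 9 cycles at 9 kHz

received = 0.8 * w1 + 1.3 * w2        # both emitters fire simultaneously

def matched_output(rx, ref):
    """Normalized matched-filter (zero-lag correlation) output."""
    return float(np.dot(rx, ref) / np.dot(ref, ref))

a1 = matched_output(received, w1)     # recovers the 0.8 amplitude
a2 = matched_output(received, w2)     # recovers the 1.3 amplitude
```

Because the two tones are orthogonal over the window, each matched filter rejects the other transmitter's contribution entirely, which is the idea behind separating simultaneous STA emissions by frequency division.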

  7. Design of an Object-Oriented Turbomachinery Analysis Code: Initial Results

    Science.gov (United States)

    Jones, Scott M.

    2015-01-01

    Performance prediction of turbomachines is a significant part of aircraft propulsion design. In the conceptual design stage, there is an important need to quantify compressor and turbine aerodynamic performance and develop initial geometry parameters at the 2-D level prior to more extensive Computational Fluid Dynamics (CFD) analyses. The Object-oriented Turbomachinery Analysis Code (OTAC) is being developed to perform 2-D meridional flowthrough analysis of turbomachines using an implicit formulation of the governing equations to solve for the conditions at the exit of each blade row. OTAC is designed to perform meanline or streamline calculations; for streamline analyses simple radial equilibrium is used as a governing equation to solve for spanwise property variations. While the goal for OTAC is to allow simulation of physical effects and architectural features unavailable in other existing codes, it must first prove capable of performing calculations for conventional turbomachines. OTAC is being developed using the interpreted language features available in the Numerical Propulsion System Simulation (NPSS) code described by Claus et al (1991). Using the NPSS framework came with several distinct advantages, including access to the pre-existing NPSS thermodynamic property packages and the NPSS Newton-Raphson solver. The remaining objects necessary for OTAC were written in the NPSS framework interpreted language. These new objects form the core of OTAC and are the BladeRow, BladeSegment, TransitionSection, Expander, Reducer, and OTACstart Elements. The BladeRow and BladeSegment consumed the initial bulk of the development effort and required determining the equations applicable to flow through turbomachinery blade rows given specific assumptions about the nature of that flow. Once these objects were completed, OTAC was tested and found to agree with existing solutions from other codes; these tests included various meanline and streamline comparisons of axial

  8. Comparison of computer code calculations with experimental results obtained in the NSPP series of experiments

    International Nuclear Information System (INIS)

    Tobias, M.L.

    1987-01-01

    Experiments were done on several aerosols in air atmospheres at varying temperatures and humidity conditions of interest in forming a data base for testing aerosol behavior models used as part of the process of evaluating the ''source term'' in light water reactor accidents. This paper deals with the problems of predicting the observed experimental data for suspended aerosol concentration with aerosol calculational codes. Comparisons of measured versus predicted data are provided

  9. Chiefly Symmetric: Results on the Scalability of Probabilistic Model Checking for Operating-System Code

    Directory of Open Access Journals (Sweden)

    Marcus Völp

    2012-11-01

    Full Text Available Reliability in terms of functional properties from the safety-liveness spectrum is an indispensable requirement of low-level operating-system (OS) code. However, with ever more complex and thus less predictable hardware, quantitative and probabilistic guarantees become more and more important. Probabilistic model checking is one technique to automatically obtain these guarantees. First experiences with the automated quantitative analysis of low-level operating-system code confirm the expectation that the naive probabilistic model checking approach rapidly reaches its limits when increasing the number of processes. This paper reports on our work in progress to tackle the state explosion problem for low-level OS code caused by the exponential blow-up of the model size when the number of processes grows. We studied the symmetry reduction approach and carried out our experiments with a simple test-and-test-and-set lock case study as a representative example for a wide range of protocols with natural inter-process dependencies and long-run properties. We quickly see a state-space explosion for scenarios where inter-process dependencies are insignificant. However, once inter-process dependencies dominate the picture, models with a hundred or more processes can be constructed and analysed.

  10. Rod behaviour under base load, load follow and frequency control operation: CYRANO 2 code predictions versus experimental results

    International Nuclear Information System (INIS)

    Gautier, B.; Raybaud, A.

    1984-01-01

    The French PWR reactors are now currently operating under load follow and frequency control. In order to demonstrate that these operating conditions were not able to increase the fuel failure rate, fuel rod behaviour calculations have been performed by E.D.F. with CYRANO 2 code. In parallel with these theoretical calculations, code predictions have been compared to experimental results. The paper presents some of the comparisons performed on 17x17 fuel irradiated in FESSENHEIM 2 up to 30 GWd/tU under base load operation and in the CAP reactor under load follow and frequency control conditions. It is shown that experimental results can be predicted with a reasonable accuracy by CYRANO 2 code. The experimental work was carried out under joint R and D programs by EDF, FRAGEMA, CEA, and WESTINGHOUSE (CAP program by French partners only). (author)

  11. Gamma spectroscopy modelization intercomparison of the modelization results using two different codes (MCNP, and Pascalys-mercure)

    International Nuclear Information System (INIS)

    Luneville, L.; Chiron, M.; Toubon, H.; Dogny, S.; Huver, M.; Berger, L.

    2001-01-01

    The research performed jointly over the last three years by the French Atomic Energy Commission (CEA), COGEMA and Eurisys Mesures had as its main subject the realization of a complete modelization tool for the largest range of realistic cases: the Pascalys modelization software. The main purpose of the modelization was to calculate the global measurement efficiency, i.e. the most accurate relationship between the photons emitted by a nuclear source (in volumetric, punctual or deposited form) and the high-purity germanium detector which detects and analyzes the received photons. It has long been stated that the experimental global measurement efficiency is becoming more and more difficult to address, especially for complex scenes such as those found in decommissioning and dismantling, or in cases of high activity, for which high-activity reference sources are difficult to use from both the health physics and regulatory points of view. The choice of a calculation code is fundamental if accurate modelization is sought. MCNP represents the reference code, but its use is computationally time-consuming and therefore not practicable on-line in the field. A direct line-of-sight point-kernel code such as the French Atomic Energy Commission's 3-D Mercure code can represent a practicable compromise between the most accurate reference code (MCNP) and the realistic performance needed in modelization. The comparison between the results of the Pascalys-Mercure and MCNP codes, taking into account the latest improvements of Mercure in the low-energy range where the largest errors can occur, is presented in this paper, the Mercure code being supported on-line by the recent Pascalys 3-D scene modelization software. The influence of the intrinsic efficiency of the germanium detector on the total measurement efficiency is also discussed. (authors)

  12. Model-based machine learning.

    Science.gov (United States)

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.
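The model-based idea, specify a probabilistic model once and hand it to a generic inference engine, can be sketched in miniature; the beta-binomial model and grid inference below are our own toy illustration, not Infer.NET:

```python
import numpy as np

# Model specification: theta ~ Uniform(0, 1); each observation ~ Bernoulli(theta).
# Generic inference by grid approximation: posterior ∝ prior × likelihood.
grid = np.linspace(0.0, 1.0, 1001)
prior = np.ones_like(grid)            # flat prior over theta

data = [1, 1, 0, 1, 1, 1, 0, 1]       # 6 successes in 8 trials (illustrative)
k, n = sum(data), len(data)
likelihood = grid**k * (1.0 - grid)**(n - k)

posterior = prior * likelihood
posterior /= posterior.sum()          # normalize to a probability mass on the grid

posterior_mean = float(np.sum(grid * posterior))
# With a flat prior this approximates the Beta(k+1, n-k+1) mean, (k+1)/(n+2) = 0.7
```

The point is that the model lines and the inference lines are independent: change the prior or the likelihood and the same generic machinery produces the new posterior, which is the separation that model-based machine learning and probabilistic programming exploit at scale.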

  13. Does health promotion need a Code of Ethics? Results from an IUHPE mixed method survey.

    Science.gov (United States)

    Bull, Torill; Riggs, Elisha; Nchogu, Sussy N

    2012-09-01

    Health promotion is an ethically challenging field involving constant reflection of values across multiple cultures of what is regarded as good and bad health promotion practice. While many disciplines are guided by a Code of Ethics (CoE) no such guide is available to health promoters. The International Union for Health Promotion and Education (IUHPE) has been nominated as a suitable candidate for developing such a code. It is within this context that the IUHPE Student and Early Career Network (ISECN), through its Ethics Working Group, has taken up the challenge of preparing the foundations for a CoE for health promotion. An online survey comprising open and closed-answer questions was used to gather the opinions of IUHPE members regarding the need for a CoE for health promotion. The quantitative data were calculated with descriptive analyses. A thematic analysis approach was used to analyze and interpret the qualitative data. IUHPE members (n = 236) from all global regions responded to the survey. The majority (52%) of the respondents had 11 years' experience or more in the field of health promotion. Ethical dilemmas were commonly encountered. The need for a CoE for health promotion was expressed by 83% of respondents. Respondents also offered their views of possibilities, ideas and challenges regarding the development of a CoE for health promotion. Considering that health promoters encounter ethical dilemmas frequently in their practice, this study reinforces the need to develop a CoE for the field. The recommendations from the survey provide a good basis for future work to develop such a code.

  14. Increasing asthma mortality in Denmark 1969-88 not a result of a changed coding practice

    DEFF Research Database (Denmark)

    Juel, K; Pedersen, P A

    1992-01-01

    We have studied asthma mortality in Denmark from 1969 to 1988. Age standardized mortality rates calculated in three age groups, 10-34, 35-59, and greater than or equal to 60 years, disclosed similar trends. Increasing mortality from asthma in the mid-1970s to 1988 was seen in all three age groups...... with higher mortality in 1979-88 as compared with 1969-78 of 95%, 55%, and 69%, respectively. Since the eighth revision of the International Classification of Diseases (ICD8) was used in Denmark over the entire 20-year period, changes in coding practice due to change of classification system cannot explain...

  15. Calculation of conversion coefficients Hp(3)/K air using the PENELOPE Monte Carlo code and comparison with MCNP calculation results

    International Nuclear Information System (INIS)

    Daures, J.; Gouriou, J.; Bordy, J.M.

    2010-01-01

    The authors report calculations performed using the MCNP and PENELOPE codes to determine the Hp(3)/K air conversion coefficient, which allows the Hp(3) dose equivalent to be determined from the measured value of the air kerma. They describe the phantom, a cylinder 20 cm in diameter and 20 cm high, which is considered representative of a head. Calculations are performed for an energy range corresponding to interventional radiology or cardiology (20 keV-110 keV). Results obtained with both codes are compared

  16. Transduplication resulted in the incorporation of two protein-coding sequences into the Turmoil-1 transposable element of C. elegans

    Directory of Open Access Journals (Sweden)

    Pupko Tal

    2008-10-01

    Full Text Available Abstract Transposable elements may acquire unrelated gene fragments into their sequences in a process called transduplication. Transduplication of protein-coding genes is common in plants, but is unknown in animals. Here, we report that the Turmoil-1 transposable element in C. elegans has incorporated two protein-coding sequences into its inverted terminal repeat (ITR) sequences. The ITRs of Turmoil-1 contain a conserved RNA recognition motif (RRM) that originated from the rsp-2 gene and a fragment from the protein-coding region of the cpg-3 gene. We further report that an open reading frame specific to C. elegans may have been created as a result of a Turmoil-1 insertion. Mutations at the 5' splice site of this open reading frame may have reactivated the transduplicated RRM motif. Reviewers: This article was reviewed by Dan Graur and William Martin. For the full reviews, please go to the Reviewers' Reports section.

  17. RSAP - A Code for Display of Neutron Cross Section Data and SAMMY Fit Results

    International Nuclear Information System (INIS)

    Sayer, R.O.

    2001-01-01

    RSAP is a computer code for display of neutron cross section data and selected SAMMY output. SAMMY is a multilevel R-matrix code for fitting neutron time-of-flight cross-section data using Bayes' method. RSAP, which runs on the Digital Unix Alpha platform, reads ORELA Data Files (ODF) created by SAMMY and uses graphics routines from the PLPLOT package. In addition, RSAP can read data and/or computed values from ASCII files with a format specified by the user. Plot output may be displayed in an X window, sent to a postscript file (rsap.ps), or sent to a color postscript file (rsap.psc). Thirteen plot types are supported, allowing the user to display cross section data, transmission data, errors, theory, Bayes fits, and residuals in various combinations. In this document the designations theory and Bayes refer to the initial and final theoretical cross sections, respectively, as evaluated by SAMMY. Special plot types include Bayes/Data, Theory--Data, and Bayes--Data. Output from two SAMMY runs may be compared by plotting the ratios Theory2/Theory1 and Bayes2/Bayes1 or by plotting the differences (Theory2-Theory1) and (Bayes2-Bayes1)

  18. Review of solution approach, methods, and recent results of the TRAC-PF1 system code

    International Nuclear Information System (INIS)

    Mahaffy, J.H.; Liles, D.R.; Knight, T.D.

    1983-01-01

    The current version of the Transient Reactor Analysis Code (TRAC-PF1) was created to improve on the capabilities of its predecessor (TRAC-PD2) for analyzing slow reactor transients such as small-break loss-of-coolant accidents. TRAC-PF1 continues to use a semi-implicit finite-difference method for modeling three-dimensional flows in the reactor vessel. However, it contains a new stability-enhancing two-step (SETS) finite-difference technique for one-dimensional flow calculations. This method is not restricted by a material Courant stability condition, allowing much larger time-step sizes during slow transients than would a semi-implicit method. These methods have been successfully applied to the analysis of a variety of experiments and hypothetical plant transients covering a full range of two-phase flow regimes

  19. Graphic man-machine interface applied to nuclear reactor designs

    International Nuclear Information System (INIS)

    Pereira, Claudio M.N.A; Mol, Antonio Carlos A.

    1999-01-01

    Man-machine interfaces have been of interest to many researchers in the area of nuclear human factors engineering, principally as applied to monitoring systems. Clarity of information provides the best adaptation of the operator to the machine. This work proposes the development of a graphic man-machine interface applied to nuclear reactor designs as a tool to optimize them. A prototype of a graphic man-machine interface for the Hammer code, developed for PC under the Windows environment, is presented, and the results of its application are discussed. (author)

  20. Autocoding State Machine in Erlang

    DEFF Research Database (Denmark)

    Guo, Yu; Hoffman, Torben; Gunder, Nicholas

    2008-01-01

    This paper presents an autocoding tool suite, which supports development of state machines in a model-driven fashion, where models are central to all phases of the development process. The tool suite, which is built on the Eclipse platform, provides facilities for the graphical specification...... of a state machine model. Once the state machine is specified, it is used as input to a code generation engine that generates source code in Erlang....
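The autocoding pattern, a declarative state-machine model fed to a code generation engine, can be sketched in a few lines; the record's tool emits Erlang from graphical models, whereas the spec format and generator below are a hypothetical Python stand-in:

```python
# Hypothetical state-machine model: {state: {event: next_state}}
SPEC = {
    "idle":    {"start": "running"},
    "running": {"pause": "idle", "stop": "done"},
    "done":    {},
}

def generate_fsm_source(name, spec, initial):
    """Emit source code for a simple event-driven state machine class."""
    lines = [
        f"class {name}:",
        f"    TRANSITIONS = {spec!r}",
        f"    def __init__(self):",
        f"        self.state = {initial!r}",
        f"    def handle(self, event):",
        f"        # Unknown events leave the state unchanged.",
        f"        self.state = self.TRANSITIONS[self.state].get(event, self.state)",
        f"        return self.state",
    ]
    return "\n".join(lines)

src = generate_fsm_source("Player", SPEC, "idle")
namespace = {}
exec(src, namespace)                  # "compile" the generated source
fsm = namespace["Player"]()
fsm.handle("start")                   # idle -> running
```

The generated class is ordinary source code, so it can be inspected, versioned and tested like hand-written code, which is the main appeal of the model-driven approach.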

  1. Monte Carlo codes and Monte Carlo simulator program

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.

    1990-03-01

    Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. The problems in vector processing of Monte Carlo codes on vector processors have become clear through this work. As a result, it is recognized that there are difficulties in obtaining good performance in vector processing of Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report the problems in vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)

  2. Estimation of process capability indices from the results of limit gauge inspection of dimensional parameters in machining industry

    Science.gov (United States)

    Masterenko, Dmitry A.; Metel, Alexander S.

    2018-03-01

    The process capability indices Cp and Cpk are widely used in modern quality management as statistical measures of the ability of a process to produce output X within specification limits. The customer's requirement to ensure Cp ≥ 1.33 is often applied in contracts. Capability index estimates may be calculated from estimates of the mean µ and the variability 6σ, for which the quality characteristic in a sample of pieces should be measured. This requires, in turn, advanced measuring devices and well-qualified staff. On the other hand, quality inspection by attributes, performed with limit gauges (go/no-go), is much simpler and has a higher performance, but it does not give numerical values of the quality characteristic. The described method allows estimating the mean and the variability of the process on the basis of the results of limit gauge inspection with a certain lower limit LCL and upper limit UCL, which separate the pieces into three groups: X < LCL, LCL ≤ X ≤ UCL, and X > UCL. This supports statistical control of the manufacturing process, which is important for improving the quality of articles in the machining industry within their tolerances.
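
    The index definitions used above can be written down directly. A minimal sketch (textbook formulas with illustrative function names, not the estimation-from-attributes method this record proposes), assuming the quality characteristic has been measured on a sample:

```python
import statistics

def capability_indices(samples, lsl, usl):
    """Textbook estimates of Cp and Cpk from measured values.

    Cp  = (USL - LSL) / 6sigma           (potential capability)
    Cpk = min(USL - mu, mu - LSL) / 3sigma  (accounts for off-centering)
    """
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk
```

    For a perfectly centered process the two indices coincide; Cpk drops as the mean drifts toward either specification limit.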

  3. First results of saturation curve measurements of heat-resistant steel using GEANT4 and MCNP5 codes

    International Nuclear Information System (INIS)

    Hoang, Duc-Tam; Tran, Thien-Thanh; Le, Bao-Tran; Vo, Hoang-Nguyen; Chau, Van-Tao; Tran, Kim-Tuyet; Huynh, Dinh-Chuong

    2015-01-01

    A gamma backscattering technique is applied to calculate the saturation curve and the effective mass attenuation coefficient of a material. A NaI(Tl) detector collimated by a large-diameter collimator is modeled by the Monte Carlo technique using both the MCNP5 and GEANT4 codes. The results show good agreement in the response function of the scattering spectra for the two codes. Based on such spectra, the saturation curve of heat-resistant steel is determined. The results strongly confirm that it is appropriate to use a large-diameter detector collimator to obtain the scattering spectra, and this work is also the basis of an experimental set-up for determining the thickness of a material. (author)

  4. Machine Perfusion of Porcine Livers with Oxygen-Carrying Solution Results in Reprogramming of Dynamic Inflammation Networks

    Directory of Open Access Journals (Sweden)

    David Sadowsky

    2016-11-01

    Full Text Available Background: Ex vivo machine perfusion (MP) can better preserve organs for transplantation. We have recently reported on the first application of an MP protocol in which liver allografts were fully oxygenated, under dual pressures and subnormothermic conditions, with a new hemoglobin-based oxygen carrier solution specifically developed for ex vivo utilization. In those studies, MP improved organ function post-operatively and reduced inflammation in porcine livers. Herein, we sought to refine our knowledge regarding the impact of MP by defining dynamic networks of inflammation in both tissue and perfusate. Methods: Porcine liver allografts were preserved either with MP (n = 6) or with cold static preservation (CSP; n = 6), then transplanted orthotopically after 9 h of preservation. Fourteen inflammatory mediators were measured in both tissue and perfusate during liver preservation at multiple time points, and analyzed using Dynamic Bayesian Network (DyBN) inference to define feedback interactions, as well as Dynamic Network Analysis (DyNA) to define the time-dependent development of inflammation networks. Results: Network analyses of tissue and perfusate suggested an NLRP3 inflammasome-regulated response in both treatment groups, driven by the pro-inflammatory cytokine interleukin (IL)-18 and the anti-inflammatory mediator IL-1 receptor antagonist (IL-1RA). Both DyBN and DyNA suggested a reduced role of IL-18 and an increased role of IL-1RA with MP, along with increased liver damage with CSP. DyNA also suggested divergent progression of responses over the 9 h preservation time, with CSP leading to a stable pattern of IL-18-induced liver damage and MP leading to a resolution of the pro-inflammatory response. These results were consistent with prior clinical, biochemical, and histological findings after liver transplantation. Conclusion: Our results suggest that analysis of dynamic inflammation networks in the setting of liver preservation may identify novel

  5. Probability Machines: Consistent Probability Estimation Using Nonparametric Learning Machines

    Science.gov (United States)

    Malley, J. D.; Kruppa, J.; Dasgupta, A.; Malley, K. G.; Ziegler, A.

    2011-01-01

    Summary Background Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Conclusions Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications. PMID:21915433
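
    The nearest-neighbour route to probability estimation described above is easy to sketch. The following is an illustrative toy (plain Python with invented names, not the R packages the authors provide): the estimated probability is simply the fraction of positive labels among the k nearest training points.

```python
import math

def knn_probability(train, query, k=3):
    """Estimate P(y = 1 | query) as the fraction of positive labels
    among the k nearest training points (Euclidean distance).

    train: list of (feature_tuple, label) pairs with label in {0, 1}."""
    neighbours = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    return sum(label for _, label in neighbours) / k
```

    Consistency of such estimators requires k to grow with the sample size while k/n goes to zero, which is the regime the paper's consistency results address.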

  6. FURTHER IMPROVEMENT OF THE INVESTMENT CLIMATE IN RUSSIA AS A RESULT OF MODERNIZATION OF THE RUSSIAN CIVIL CODE

    Directory of Open Access Journals (Sweden)

    V. Musin

    2014-01-01

    Full Text Available This article traces the history of, and discusses some of the recent changes in, the Russian Federation Civil Code, which result in a more favorable business climate in Russia. In particular, it discusses the development of changes related to the documentation of contracts, expansion in the durations and uses of powers of attorney, and the modernization of the statute of limitations period for bringing an action.

  7. Analysis of results of AZTRAN and AZKIND codes for a BWR; Analisis de resultados de los codigos AZTRAN y AZKIND para un BWR

    Energy Technology Data Exchange (ETDEWEB)

    Bastida O, G. E.; Vallejo Q, J. A.; Galicia A, J.; Francois L, J. L. [UNAM, Facultad de Ingenieria, Departamento de Sistemas Energeticos, Paseo Cuauhnahuac 8532, 62550 Jiutepec, Morelos (Mexico); Xolocostli M, J. V.; Rodriguez H, A.; Gomez T, A. M., E-mail: gbo729@yahoo.com.mx [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2016-09-15

    This paper presents an analysis of results obtained from simulations performed with the neutron transport code AZTRAN and the neutron diffusion kinetics code AZKIND, based on comparisons with models corresponding to a typical BWR, in order to verify the behavior and reliability of the values obtained with these codes at their current stage of development. For this, reference simulations of the different geometries were made using validated nuclear codes such as CASMO, MCNP5 and Serpent. The results obtained are considered adequate since they are comparable with those obtained and reported with other codes, based mainly on the neutron multiplication factor and the power distribution. (Author)

  8. Comparison of Crack Growth Test Results at Elevated Temperature and Design Code Material Properties for Grade 91 Steel

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyeong-Yeon; Kim, Woo-Gon; Kim, Nak-Hyun [Korea Atomic Energy Reserach Institute, Daejeon (Korea, Republic of)

    2015-01-15

    The material properties of crack growth models at elevated temperature were derived from the results of numerous crack growth tests on Mod.9Cr-1Mo (ASME Grade 91) steel specimens under fatigue loading and creep loading. These crack growth models are needed for defect assessment under creep-fatigue loading. The mathematical crack growth rate models for fatigue crack growth (FCG) and creep crack growth (CCG) were determined based on the test results, and the models were compared with those of the French design code RCC-MRx to investigate the conservatism of the code. RCC-MRx provides an FCG model and a CCG model for Grade 91 steel in Section III Tome 6. It was shown that the FCG model of RCC-MRx is conservative, while the CCG model is non-conservative compared with the present test data; thus, further validation of this property is required. Mechanical strength tests and creep tests were also conducted, and the test results were compared with those of RCC-MRx.
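
    Fatigue crack growth models of the kind compared with RCC-MRx are commonly of Paris-law form, da/dN = C·(ΔK)^m. As a hedged sketch (the function and the data in the usage below are invented for illustration and are not the paper's or the code's actual constants), such constants can be fitted by linear regression in log-log space:

```python
import math

def fit_paris_law(delta_k, dadn):
    """Least-squares fit of log(da/dN) = log C + m * log(dK),
    returning the Paris-law constants (C, m)."""
    xs = [math.log(k) for k in delta_k]
    ys = [math.log(r) for r in dadn]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    m = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    c = math.exp(ybar - m * xbar)
    return c, m
```

    Feeding in synthetic data generated with known constants recovers them, a quick sanity check before fitting real test data.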

  9. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  10. Comparison of the results of several heat transfer computer codes when applied to a hypothetical nuclear waste repository

    International Nuclear Information System (INIS)

    Claiborne, H.C.; Wagner, R.S.; Just, R.A.

    1979-12-01

    A direct comparison of transient thermal calculations was made with the heat transfer codes HEATING5, THAC-SIP-3D, ADINAT, SINDA, TRUMP, and TRANCO for a hypothetical nuclear waste repository. With the exception of TRUMP and SINDA (actually closer to the earlier CINDA3G version), the codes agreed to within ±5% for the temperature rises as a function of time. The TRUMP results agreed within ±5% up to about 50 years, where the maximum temperature occurs, and then began an oscillatory behavior with deviations of up to 25% at longer times. This could have resulted from time steps that were too large or from some unknown system problem. The available version of the SINDA code was not compatible with the IBM compiler without using an alternative method for handling a variable thermal conductivity. The results were about 40% low, but reasonable agreement was obtained by assuming a uniform thermal conductivity; however, a programming error was later discovered in the alternative method. Some work is required on the IBM version to make it compatible with the system while still using the recommended method of handling variable thermal conductivity. TRANCO can only be run as a 2-D model, and TRUMP and CINDA apparently required longer running times and did not agree in the 2-D case; therefore, only HEATING5, THAC-SIP-3D, and ADINAT were used for the 3-D model calculations. These codes agreed within ±5%; at distances of about 1 ft from the waste canister edge, temperature rises were also close to those predicted by the 3-D model.

  11. Machine tool structures

    CERN Document Server

    Koenigsberger, F

    1970-01-01

    Machine Tool Structures, Volume 1 deals with fundamental theories and calculation methods for machine tool structures. Experimental investigations into stiffness are discussed, along with the application of the results to the design of machine tool structures. Topics covered range from static and dynamic stiffness to chatter in metal cutting, stability in machine tools, and deformations of machine tool structures. This volume is divided into three sections and opens with a discussion on stiffness specifications and the effect of stiffness on the behavior of the machine under forced vibration c

  12. Specified international joint research. Report for fiscal 1997 on the result of `Development of Machining Supporting System`; Kokusai tokutei kyodo kenkyu. `Kikai kako shien system no kaihatsu` 1997 nendo seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    On the basis of information obtained from computer-aided machine design work actually performed, research was conducted toward the development of a system that automatically designs the required machine tools, machining procedures, machining conditions, and tool paths. The research and development efforts made in fiscal 1997 are enumerated below. In the development of man-machine interfaces, an interface was developed that integrates the machining procedure design system, machining condition design system, and tool path design system, all of which are subsystems of the machining support system. In a system evaluation through actual machining, the interface between CAD (Computer-Aided Design) technology and the machining support system was evaluated by performing experimental machining in an environment in which the machining procedure design system, machining condition design system, tool path design system, and CNC (Computerized Numerical Control) technology operate in an integrated manner. As a result, the performance targeted at the outset was achieved. Two scientists from the Russian Academy of Sciences were invited, and research was conducted on knowledge processing technology. 20 refs., 21 figs., 10 tabs.

  13. Validation and verification of MCNP6 against intermediate and high-energy experimental data and results by other codes

    International Nuclear Information System (INIS)

    Mashnik, Stepan G.

    2011-01-01

    MCNP6, the latest and most advanced LANL transport code, representing a recent merger of MCNP5 and MCNPX, has been validated and verified (V and V) against a variety of intermediate- and high-energy experimental data and against results by different versions of MCNPX and other codes. In the present work, we V and V MCNP6 using mainly the latest modifications of the Cascade-Exciton Model (CEM) and of the Los Alamos version of the Quark-Gluon String Model (LAQGSM) event generators, CEM03.02 and LAQGSM03.03. We found that MCNP6 describes reasonably well various reactions induced by particles and nuclei at incident energies from 18 MeV to about 1 TeV per nucleon measured on thin and thick targets, and agrees very well with similar results obtained with MCNPX and with calculations by CEM03.02, LAQGSM03.01 (03.03), INCL4 + ABLA, Bertini INC + Dresner evaporation, EPAX, ABRABLA, HIPSE, and AMD used as stand-alone codes. Most of the computational bugs and more serious physics problems observed in MCNP6/X during our V and V have been fixed; we continue our work to solve all the known problems before MCNP6 is distributed to the public. (author)

  14. Position Paper: Applying Machine Learning to Software Analysis to Achieve Trusted, Repeatable Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Prowell, Stacy J [ORNL; Symons, Christopher T [ORNL

    2015-01-01

    Producing trusted results from high-performance codes is essential for policy and has significant economic impact. We propose combining rigorous analytical methods with machine learning techniques to achieve the goal of repeatable, trustworthy scientific computing.

  15. Benchmarking the cad-based attila discrete ordinates code with experimental data of fusion experiments and to the results of MCNP code in simulating ITER

    International Nuclear Information System (INIS)

    Youssef, M. Z.

    2007-01-01

    Attila is a newly developed finite element code based on Sn neutron, gamma, and charged particle transport in 3-D geometry, in which unstructured tetrahedral meshes are generated to describe complex geometry based on CAD input (SolidWorks, Pro/Engineer, etc.). In the present work we benchmark its calculation accuracy by comparing its predictions to the data measured inside two experimental mock-ups bombarded with 14 MeV neutrons. The results are also compared to those based on MCNP calculations. The experimental mock-ups simulate parts of the International Thermonuclear Experimental Reactor (ITER) in-vessel components, namely: (1) the tungsten mock-up configuration (54.3 cm x 46.8 cm x 45 cm), and (2) the ITER shielding blanket followed by the SCM region (simulated by alternating layers of SS316 and copper). In the latter configuration, a high-aspect-ratio rectangular streaming channel was introduced (to simulate streaming paths between ITER blanket modules) which ends with a rectangular cavity. The experiments on these two fusion-oriented integral experiments were performed at the Fusion Neutron Generator (FNG) facility, Frascati, Italy. In addition, the nuclear performance of the ITER MCNP 'Benchmark' CAD model has been analyzed with Attila to compare its results to those obtained with the CAD-based MCNP approach developed by several ITER participants. The objective of this paper is to compare results based on two distinctive 3-D calculation tools using the same nuclear data, FENDL2.1, and the same response functions for several reaction rates measured in ITER mock-ups, and to enhance confidence from the international neutronics community in the Attila code and in how precisely it can quantify the nuclear field in large and complex systems such as ITER. Attila has the advantage of providing full flux mapping visualization everywhere in one run, whereby components subjected to excessive radiation levels and strong streaming paths can be identified. In addition, the

  16. Description of premixing with the MC3D code including molten jet behavior modeling. Comparison with FARO experimental results

    Energy Technology Data Exchange (ETDEWEB)

    Berthoud, G.; Crecy, F. de; Meignen, R.; Valette, M. [CEA-G, DRN/DTP/SMTH, 17 rue des Martyrs, 38054 Grenoble Cedex 9 (France)

    1998-01-01

    The premixing phase of a molten fuel-coolant interaction is studied by means of mechanistic multidimensional calculation. Besides water and steam, the corium droplet flow and the continuous corium jet flow are calculated independently. The 4-field MC3D code and a detailed hot jet fragmentation model are presented. MC3D calculations are compared to the FARO L14 experiment results and are found to give satisfactory results; the heat transfer and jet fragmentation models still need to be improved to better predict the final debris sizes. (author)

  17. Verification of results of core physics on-line simulation by NGFM code

    International Nuclear Information System (INIS)

    Zhao Yu; Cao Xinrong; Zhao Qiang

    2008-01-01

    The Nodal Green's Function Method program NGFM/TNGFM has been ported to the Windows system. 2-D and 3-D benchmarks have been checked with this program, and the program has been used to verify the results of the QINSHAN-II reactor simulation. It is shown that the NGFM/TNGFM program is applicable to a reactor core physics on-line simulation system. (authors)

  18. Equilibrium optimization code OPEQ and results of applying it to HT-7U

    International Nuclear Information System (INIS)

    Zha Xuejun; Zhu Sizheng; Yu Qingquan

    2003-01-01

    The plasma equilibrium configuration has a strong impact on confinement and MHD stability in tokamaks. In designing a tokamak device, an important issue is to determine the sites and currents of the poloidal coils, which are subject to constraints from physics and engineering, for a prescribed equilibrium shape of the plasma. In this paper, an effective method based on multi-variable equilibrium optimization is given. The method can optimize the poloidal coils when the previously prescribed plasma parameters are treated as an objective function. We apply it to the HT-7U equilibrium calculation and obtain good results.

  19. Comparative simulation of Stirling and Sibling cycle cryocoolers with two codes

    International Nuclear Information System (INIS)

    Mitchell, M.P.; Wilson, K.J.; Bauwens, L.

    1989-01-01

    The authors present a comparative analysis of Stirling and Sibling Cycle cryocoolers conducted with two different computer simulation codes. One code (CRYOWEISS) performs an initial analysis on the assumption of isothermal conditions in the machines and adjusts that result with decoupled loss calculations. The other code (MS*2) models fluid flows and heat transfers more realistically but ignores significant loss mechanisms, including flow friction and heat conduction through the metal of the machines. Surprisingly, MS*2 is less optimistic about the performance of all the machines even though it ignores losses that are modelled by CRYOWEISS. Comparison between constant-bore Stirling and Sibling machines shows that their performance is generally comparable over a range of temperatures, pressures and operating speeds. No machine was consistently superior or inferior according to both codes over the whole range of conditions studied.

  20. Machine Shop Grinding Machines.

    Science.gov (United States)

    Dunn, James

    This curriculum manual is one in a series of machine shop curriculum manuals intended for use in full-time secondary and postsecondary classes, as well as part-time adult classes. The curriculum can also be adapted to open-entry, open-exit programs. Its purpose is to equip students with basic knowledge and skills that will enable them to enter the…

  1. Application of PHEBUS results to benchmarking of nuclear plant safety codes

    International Nuclear Information System (INIS)

    Birchley, J.; Cripps, R.; Guentay, S.; Hosemann, J.P.

    2001-01-01

    The PHEBUS Fission Product project comprises six nuclear reactor severe accident simulations, using prototypic core materials and representative geometry and boundary conditions for the coolant loop and containment. The data thus produced are being used to benchmark the computer tools used for nuclear plant accident analysis to reduce the excessive conservatism typical for estimates of the radiological source term. A set of calculations has been carried out to simulate the results of experiment PHEBUS FPT-1 through each of its main stages, using computer models and methods analogous to those currently employed at PSI for assessments of Swiss nuclear plants. Good agreement for the core degradation and containment behaviour builds confidence in the models, while some open questions remain concerning some aspects of the release of fission products from the fuel, their transport and chemical speciation. Of potentially great importance to the reduction in source term estimates is the formation of the non-volatile species, silver iodide. Current investigations are focused on the uncertainty concerning fission product behaviour and the stability of silver iodide under irradiation. (author)

  2. Computational results with a branch and cut code for the capacitated vehicle routing problem

    Energy Technology Data Exchange (ETDEWEB)

    Augerat, P.; Naddef, D. [Institut National Polytechnique, 38 - Grenoble (France); Belenguer, J.M.; Benavent, E.; Corberan, A. [Valencia Univ. (Spain); Rinaldi, G. [Consiglio Nazionale delle Ricerche, Rome (Italy)

    1995-09-01

    The Capacitated Vehicle Routing Problem (CVRP) considered in this paper consists in optimizing the distribution of goods from a single depot to a given set of customers with known demand, using a given number of vehicles of fixed capacity. There are many practical routing applications in the public sector, such as school bus routing and pick-up and mail delivery, and in the private sector, such as the dispatching of delivery trucks. We present a Branch and Cut algorithm to solve the CVRP which is based on a partial polyhedral description of the corresponding polytope. The valid inequalities used in our method can be found in Cornuejols and Harche (1993), Harche and Rinaldi (1991) and in Augerat and Pochet (1995). We concentrated mainly on the design of separation procedures for several classes of valid inequalities. The capacity constraints (generalized subtour elimination inequalities) happen to play a crucial role in the development of a cutting plane algorithm for the CVRP. A large number of separation heuristics have been implemented and compared for these inequalities. Heuristic separation algorithms have also been implemented for other classes of valid inequalities that lead to significant improvements: comb and extended comb inequalities, generalized capacity inequalities and hypotour inequalities. The resulting cutting plane algorithm has been applied to a set of instances taken from the literature, and the lower bounds obtained are better than the ones previously known. Several branching strategies have been implemented to develop a Branch and Cut algorithm that has been able to solve large CVRP instances, some of which had never been solved before. (authors). 32 refs., 3 figs., 10 tabs.
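
    The capacity constraint at the heart of the CVRP can be made concrete with a simple nearest-neighbour construction heuristic. This is only an illustrative sketch (invented names, Euclidean costs), not the branch-and-cut algorithm of the paper:

```python
import math

def greedy_cvrp(depot, customers, demand, capacity):
    """Nearest-neighbour route construction for the CVRP: start each
    route at the depot and repeatedly visit the closest unserved
    customer whose demand still fits in the vehicle."""
    unserved = set(customers)
    routes = []
    while unserved:
        load, pos, route = 0, depot, []
        while True:
            feasible = [c for c in unserved if load + demand[c] <= capacity]
            if not feasible:
                if not route:  # a single customer exceeds the capacity
                    raise ValueError("customer demand exceeds vehicle capacity")
                break
            nxt = min(feasible, key=lambda c: math.dist(pos, c))
            route.append(nxt)
            load += demand[nxt]
            pos = nxt
            unserved.discard(nxt)
        routes.append(route)
    return routes
```

    Construction heuristics like this only give feasible upper bounds; the cutting planes and branching described in the paper are what close the gap to provable optimality.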

  3. Comparisons of numerical simulations with ASTRID code against experimental results in rod bundle geometry for boiling flows

    International Nuclear Information System (INIS)

    Larrauri, D.; Briere, E.

    1997-12-01

    After various validation simulations of flows through cylindrical and annular channels, a subcooled boiling flow through a rod bundle has been simulated with the ASTRID Steam-Water software. The experiment simulated is called Poseidon: a vertical rectangular channel with three heating rods inside. The thermohydraulic conditions of the simulated flow were close to DNB conditions. The simulation results were analysed and compared against the available measurements of liquid and wall temperatures. ASTRID Steam-Water produced satisfactory results: the wall and liquid temperatures were well predicted in the different parts of the flow. The void fraction reached 40% in the vicinity of the heating rods. The distribution of the different calculated variables showed that a three-dimensional simulation gives essential information for the analysis of the physical phenomena involved in this kind of flow. The good results obtained in the Poseidon geometry will encourage future rod bundle flow simulations and analyses with the ASTRID Steam-Water code. (author)

  4. Offshore code comparison collaboration continuation (OC4), phase I - Results of coupled simulations of an offshore wind turbine with jacket support structure

    DEFF Research Database (Denmark)

    Popko, Wojciech; Vorpahl, Fabian; Zuga, Adam

    2012-01-01

    In this paper, the exemplary results of the IEA Wind Task 30 "Offshore Code Comparison Collaboration Continuation" (OC4) Project - Phase I, focused on the coupled simulation of an offshore wind turbine (OWT) with a jacket support structure, are presented. The focus of this task has been the verification of OWT modeling codes through code-to-code comparisons. The discrepancies between the results are shown and the sources of the differences are discussed. The importance of the local dynamics of the structure is depicted in the simulation results. Furthermore, attention is given to aspects...

  5. Panda code

    International Nuclear Information System (INIS)

    Altomare, S.; Minton, G.

    1975-02-01

    PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)
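
    PANDA's criticality search can be illustrated in a heavily simplified form. The sketch below uses one-group bare-slab theory rather than PANDA's two-group model (every name and number is illustrative), bisecting on slab width until k_eff = 1:

```python
import math

def k_eff_bare_slab(width, d, sigma_a, nu_sigma_f):
    """One-group bare-slab k_eff = nu*Sigma_f / (Sigma_a + D*B^2),
    with buckling B = pi / width (extrapolation distance neglected)."""
    b2 = (math.pi / width) ** 2
    return nu_sigma_f / (sigma_a + d * b2)

def critical_width(d, sigma_a, nu_sigma_f, lo=1e-3, hi=1e3):
    """Bisection criticality search: find the width where k_eff = 1.
    k_eff grows monotonically with width, so bisection applies."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if k_eff_bare_slab(mid, d, sigma_a, nu_sigma_f) < 1.0:
            lo = mid  # slab too thin: leakage keeps k below 1
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    At criticality, Sigma_a + D·B² = nu·Sigma_f gives the analytic answer width = π·sqrt(D / (nu·Sigma_f − Sigma_a)), a handy check on the search.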

  6. Characterization of coded random access with compressive sensing based multi user detection

    DEFF Research Database (Denmark)

    Ji, Yalei; Stefanovic, Cedomir; Bockelmann, Carsten

    2014-01-01

    The emergence of Machine-to-Machine (M2M) communication requires new Medium Access Control (MAC) schemes and physical (PHY) layer concepts to support a massive number of access requests. The concept of coded random access, introduced recently, greatly outperforms other random access methods...... coded random access with CS-MUD on the PHY layer and show very promising results for the resulting protocol....
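
    The successive interference cancellation that gives coded random access its gain can be sketched with a toy collision-channel simulator. The assumptions here are purely illustrative: each user sends a fixed number of replicas, a slot is resolvable only when it holds exactly one replica, and the CS-MUD physical layer is not modelled:

```python
import random

def coded_slotted_aloha(n_users, n_slots, repetitions=2, seed=0):
    """Each user sends `repetitions` replicas in distinct random slots;
    the receiver iteratively decodes singleton slots and cancels the
    decoded user's other replicas (successive interference cancellation)."""
    rng = random.Random(seed)
    slots = [set() for _ in range(n_slots)]
    for user in range(n_users):
        for s in rng.sample(range(n_slots), repetitions):
            slots[s].add(user)
    decoded = set()
    progress = True
    while progress:
        progress = False
        for slot in slots:
            if len(slot) == 1:  # singleton slot: decode its user
                user = next(iter(slot))
                decoded.add(user)
                for other in slots:  # cancel all of this user's replicas
                    other.discard(user)
                progress = True
    return decoded
```

    Each decoded user's other replicas may turn collision slots into new singletons, so decoding proceeds iteratively, exactly the peeling behaviour that lets such schemes outperform plain slotted ALOHA.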

  7. Stability analysis by ERATO code

    International Nuclear Information System (INIS)

    Tsunematsu, Toshihide; Takeda, Tatsuoki; Matsuura, Toshihiko; Azumi, Masafumi; Kurita, Gen-ichi

    1979-12-01

    Problems in MHD stability calculations with the ERATO code are described, concerning the convergence of results, equilibrium codes, and machine optimization of the ERATO code. It is concluded that irregularity on a convergence curve is not due to a fault of the ERATO code itself but to an inappropriate choice of the equilibrium calculation meshes. Also described are a code to calculate an equilibrium as a quasi-inverse problem and a code to calculate an equilibrium as the result of a transport process. Optimization of the code with respect to I/O operations reduced both CPU time and I/O time considerably. With the FACOM230-75 APU/CPU multiprocessor system, the performance is about 6 times as high as with the FACOM230-75 CPU alone, showing the effectiveness of a vector processing computer for this kind of MHD computation. This report is a summary of the material presented at the ERATO workshop 1979 (ORNL), supplemented with some details. (author)

  8. How do primary care doctors in England and Wales code and manage people with chronic kidney disease? Results from the National Chronic Kidney Disease Audit.

    Science.gov (United States)

    Kim, Lois G; Cleary, Faye; Wheeler, David C; Caplin, Ben; Nitsch, Dorothea; Hull, Sally A

    2017-10-16

    In the UK, primary care records are electronic and require doctors to ascribe disease codes to direct care plans and facilitate safe prescribing. We investigated factors associated with coding of chronic kidney disease (CKD) in patients with reduced kidney function and the impact this has on patient management. We identified patients meeting biochemical criteria for CKD (two estimated glomerular filtration rates 90 days apart) from 1039 general practitioner (GP) practices in a UK audit. Clustered logistic regression was used to identify factors associated with coding for CKD and improvement in coding as a result of the audit process. We investigated the relationship between coding and five interventions recommended for CKD: achieving blood pressure targets, proteinuria testing, statin prescription and flu and pneumococcal vaccination. Of 256 000 patients with biochemical CKD, 30% did not have a GP CKD code. Males, older patients, those with more severe CKD, diabetes or hypertension or those prescribed statins were more likely to have a CKD code. Among those with continued biochemical CKD following audit, these same characteristics increased the odds of improved coding. Patients without any kidney diagnosis were less likely to receive optimal care than those coded for CKD [e.g. odds ratio for meeting blood pressure target 0.78 (95% confidence interval 0.76-0.79)]. Older age, male sex, diabetes and hypertension are associated with coding for those with biochemical CKD. CKD coding is associated with receiving key primary care interventions recommended for CKD. Increased efforts to incentivize CKD coding may improve outcomes for CKD patients. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA.
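
    The kind of association reported above (CKD coding versus meeting a blood-pressure target) is, at its simplest, an odds ratio from a 2×2 table. A generic sketch with invented counts, not the audit's clustered logistic regression, which additionally adjusts for practice-level clustering:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald confidence interval from a 2x2 table:
       a = exposed with outcome,    b = exposed without outcome,
       c = unexposed with outcome,  d = unexposed without outcome."""
    ratio = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(ratio) - z * se)
    upper = math.exp(math.log(ratio) + z * se)
    return ratio, lower, upper
```

    A table with identical cells gives an odds ratio of exactly 1 with a confidence interval straddling it, the no-association baseline against which effects like the audit's 0.78 are read.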

  9. The InterFrost benchmark of Thermo-Hydraulic codes for cold regions hydrology - first inter-comparison results

    Science.gov (United States)

    Grenier, Christophe; Roux, Nicolas; Anbergen, Hauke; Collier, Nathaniel; Costard, Francois; Ferry, Michel; Frampton, Andrew; Frederick, Jennifer; Holmen, Johan; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Orgogozo, Laurent; Rivière, Agnès; Rühaak, Wolfram; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik

    2015-04-01

    The impacts of climate change in boreal regions have received considerable attention recently due to the warming trends experienced in recent decades, which are expected to intensify in the future. Large portions of these regions, corresponding to permafrost areas, are covered by water bodies (lakes, rivers) that interact with the surrounding permafrost. For example, the thermal state of the surrounding soil influences the energy and water budget of the surface water bodies. Also, these water bodies generate taliks (unfrozen zones below) that disturb the thermal regimes of permafrost and may play a key role in the context of climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is required to understand and model the past and future evolution of landscapes, rivers, lakes and associated groundwater systems in a changing climate. However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, which can be partly attributed to the difficulty of verifying multi-dimensional results produced by numerical models. Numerical approaches can only be validated against analytical solutions for a purely thermal 1D equation with phase change (e.g. Neumann, Lunardini). When it comes to the coupled TH system (coupling two highly non-linear equations), the only possible approach is to compare the results from different codes on common test cases and/or to validate against controlled experiments. Such inter-code comparisons can drive discussions on how to improve code performance. A benchmark exercise was initiated in 2014 with a kick-off meeting in Paris in November. Participants from the USA, Canada, Germany, Sweden and France convened, representing altogether 13 simulation codes. The benchmark exercises consist of several test cases inspired by existing literature (e.g. McKenzie et al., 2007) as well as new ones. They

  10. Kinetic instabilities of thin current sheets: Results of two-and-one-half-dimensional Vlasov code simulations

    International Nuclear Information System (INIS)

    Silin, I.; Buechner, J.

    2003-01-01

    Nonlinear triggering of the instability of thin current sheets is investigated by two-and-one-half-dimensional Vlasov code simulations. A global drift-resonant instability (DRI) is found, which results from the lower-hybrid-drift waves penetrating from the current sheet edges to the center, where they resonantly interact with unmagnetized ions. This resonant nonlinear instability grows faster than the Kelvin-Helmholtz instability obtained in previous studies. The DRI is either an asymmetric or a symmetric mode, or a combination of the two, depending on the relative phase of the lower-hybrid-drift waves at the edges of the current sheet. With increasing particle mass ratio, the wavenumber of the fastest-growing mode increases as kL_z ~ (m_i/m_e)^(1/2)/2, and the growth rate of the DRI saturates at a finite level
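    The reported mass-ratio scaling of the fastest-growing mode is easy to evaluate numerically. The helper below simply computes kL_z = (m_i/m_e)^(1/2)/2 for a few mass ratios; the reduced ratios are typical choices in kinetic simulations, while the physical proton-to-electron ratio is about 1836.

    ```python
    import math

    def fastest_growing_wavenumber(mass_ratio):
        """Normalized wavenumber k*L_z of the fastest-growing DRI mode,
        per the reported scaling kL_z ~ (m_i/m_e)^(1/2)/2."""
        return math.sqrt(mass_ratio) / 2.0

    # Reduced mass ratios are common in simulations; the real ratio is ~1836
    for ratio in (25, 100, 400, 1836):
        print(ratio, fastest_growing_wavenumber(ratio))
    ```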

  11. Study of surface integrity AISI 4140 as result of hard, dry and high speed machining using CBN

    Science.gov (United States)

    Ginting, B.; Sembiring, R. W.; Manurung, N.

    2017-09-01

    The concept of hard, dry and high speed machining can be combined, to produce high productivity, with lower production costs in manufacturing industry. Hard lathe process can be a solution to reduce production time. In lathe hard alloy steels reported problems relating to the integrity of such surface roughness, residual stress, the white layer and the surface integrity. AISI 4140 material is used for high reliable hydraulic system components. This material includes in cold work tool steel. Consideration election is because this material is able to be hardened up to 55 HRC. In this research, the experimental design using CCD model fit with three factors, each factor is composed of two levels, and six central point, experiments were conducted with 1 replications. The experimental design research using CCD model fit.

  12. Machine learning topological states

    Science.gov (United States)

    Deng, Dong-Ling; Li, Xiaopeng; Das Sarma, S.

    2017-11-01

    Artificial neural networks and machine learning have now reached a new era after several decades of improvement where applications are to explode in many fields of science, industry, and technology. Here, we use artificial neural networks to study an intriguing phenomenon in quantum physics—the topological phases of matter. We find that certain topological states, either symmetry-protected or with intrinsic topological order, can be represented with classical artificial neural networks. This is demonstrated by using three concrete spin systems, the one-dimensional (1D) symmetry-protected topological cluster state and the 2D and 3D toric code states with intrinsic topological orders. For all three cases, we show rigorously that the topological ground states can be represented by short-range neural networks in an exact and efficient fashion—the required number of hidden neurons is as small as the number of physical spins and the number of parameters scales only linearly with the system size. For the 2D toric-code model, we find that the proposed short-range neural networks can describe the excited states with Abelian anyons and their nontrivial mutual statistics as well. In addition, by using reinforcement learning we show that neural networks are capable of finding the topological ground states of nonintegrable Hamiltonians with strong interactions and studying their topological phase transitions. Our results demonstrate explicitly the exceptional power of neural networks in describing topological quantum states, and at the same time provide valuable guidance to machine learning of topological phases in generic lattice models.
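    The claimed linear parameter scaling can be seen in a back-of-the-envelope count. The sketch below assumes a short-range network with one hidden neuron per physical spin, each connected to a fixed number of neighbouring spins; the connectivity value is illustrative, not taken from the paper.

    ```python
    def short_range_nn_params(n_spins, connectivity=4):
        """Parameter count for a short-range neural network ansatz:
        one hidden neuron per spin, each with weights to a fixed
        number of neighbours, plus hidden and visible biases."""
        n_hidden = n_spins
        weights = n_hidden * connectivity
        biases = n_hidden + n_spins
        return weights + biases

    # Doubling the system size doubles the parameter count: linear scaling
    for n in (16, 64, 256):
        print(n, short_range_nn_params(n))
    ```

    By contrast, a fully connected hidden layer would contribute n_spins * n_hidden weights and scale quadratically.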

  13. Refining Prediction in Treatment-Resistant Depression: Results of Machine Learning Analyses in the TRD III Sample.

    Science.gov (United States)

    Kautzky, Alexander; Dold, Markus; Bartova, Lucie; Spies, Marie; Vanicek, Thomas; Souery, Daniel; Montgomery, Stuart; Mendlewicz, Julien; Zohar, Joseph; Fabbri, Chiara; Serretti, Alessandro; Lanzenberger, Rupert; Kasper, Siegfried

    The study objective was to generate a prediction model for treatment-resistant depression (TRD) using machine learning featuring a large set of 47 clinical and sociodemographic predictors of treatment outcome. A total of 552 patients diagnosed with major depressive disorder (MDD) according to DSM-IV criteria were enrolled between 2011 and 2016. TRD was defined as failure to reach response to antidepressant treatment, with response characterized by a Montgomery-Asberg Depression Rating Scale (MADRS) score below 22, after at least 2 antidepressant trials of adequate length and dosage were administered. RandomForest (RF) was used for predicting treatment outcome phenotypes in a 10-fold cross-validation. The full model with 47 predictors yielded an accuracy of 75.0%. When the number of predictors was reduced to 15, accuracies between 67.6% and 71.0% were attained for different test sets. The most informative predictors of treatment outcome were baseline MADRS score for the current episode; impairment of family, social, and work life; the timespan between first and last depressive episode; severity; suicidal risk; age; body mass index; and the number of lifetime depressive episodes as well as lifetime duration of hospitalization. With the application of the machine learning algorithm RF, an efficient prediction model with an accuracy of 75.0% for forecasting treatment outcome could be generated, thus surpassing the predictive capabilities of clinical evaluation. We also supply a simplified algorithm of 15 easily collected clinical and sociodemographic predictors that can be obtained within approximately 10 minutes, which reached an accuracy of 70.6%. Thus, we are confident that our model will be validated within other samples to advance an accurate prediction model fit for clinical usage in TRD. © Copyright 2017 Physicians Postgraduate Press, Inc.
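    The modelling pipeline described above (random forest with 10-fold cross-validation, then a reduced predictor subset ranked by importance) can be sketched with scikit-learn. The dataset, feature counts and any accuracies produced below are synthetic stand-ins, not the study's data.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in: 552 "patients", 47 "predictors", binary outcome
    X, y = make_classification(n_samples=552, n_features=47, n_informative=10,
                               random_state=0)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
    print("mean accuracy:", scores.mean())

    # Rank predictors by importance to build a reduced model (the paper kept 15)
    clf.fit(X, y)
    top15 = np.argsort(clf.feature_importances_)[::-1][:15]
    ```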

  14. Sustainable machining

    CERN Document Server

    2017-01-01

    This book provides an overview on current sustainable machining. Its chapters cover the concept in economic, social and environmental dimensions. It provides the reader with proper ways to handle several pollutants produced during the machining process. The book is useful on both undergraduate and postgraduate levels and it is of interest to all those working with manufacturing and machining technology.

  15. Virtual Machine in Automation Projects

    OpenAIRE

    Xing, Xiaoyuan

    2010-01-01

    Virtual machines, as engineering tools, have recently been introduced into automation projects at Tetra Pak Processing System AB. The goal of this paper is to examine how to better utilize virtual machines for automation projects. This paper designs different project scenarios using virtual machines. It analyzes the installability, performance and stability of virtual machines from the test results. Technical solutions concerning virtual machines are discussed, such as the conversion with physical...

  16. Interactive QR code beautification with full background image embedding

    Science.gov (United States)

    Lin, Lijian; Wu, Song; Liu, Sijiang; Jiang, Bo

    2017-06-01

    QR (Quick Response) code is a kind of two-dimensional barcode that was first developed in the automotive industry. Nowadays, QR codes are widely used in commercial applications such as product promotion, mobile payment and product information management. Traditional QR codes that comply with the international standard are reliable and fast to decode, but lack the aesthetic appearance needed to present visual information to customers. In this work, we present a novel interactive method for generating aesthetic QR codes. Given the information to be encoded and an image to be used as the full QR code background, our method accepts a user's interactive strokes as hints for removing undesired parts of the QR code modules, relying on the QR code error correction mechanism and background color thresholds. Compared to previous approaches, our method follows the intention of the QR code designer and can therefore achieve a more pleasing result while keeping high machine readability.
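    The error correction mechanism mentioned above is what bounds how many modules can be sacrificed to the background image. The sketch below is a module-level approximation of what is really a codeword-level guarantee: the recovery capacities and the module-grid formula are standard QR facts, but the function itself is only an illustrative heuristic, not the paper's method.

    ```python
    # Approximate fraction of data each QR error-correction level can recover
    EC_CAPACITY = {"L": 0.07, "M": 0.15, "Q": 0.25, "H": 0.30}

    def removable_modules(version, level):
        """Rough upper bound on modules a beautifier may overwrite.
        A version-v QR symbol is a (17 + 4v) x (17 + 4v) module grid;
        real error-correction budgets apply per codeword, so this is
        only a heuristic ceiling."""
        side = 17 + 4 * version
        return int(side * side * EC_CAPACITY[level])

    print(removable_modules(5, "H"))   # version 5 (37x37), highest EC level
    ```

    Choosing level "H" quadruples the headroom available for decoration compared with level "L", at the cost of lower data capacity.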

  17. Fuel model studies. Comparison of our present version of GAPCON-THERMAL-2 with results from the EPRI code comparison study. Partial report

    International Nuclear Information System (INIS)

    Malen, K.; Jansson, L.

    1978-08-01

    Runs with our present version of GAPCON-THERMAL-2 have been compared to results from the EPRI code comparison study. Our version of GAPCON also generally predicts high temperatures, 100-300 K or 10-15% higher than the average code predictions and experimental results. The well-known temperature-gas release instability is also found with GAPCON; in this case the gas release limits of 1400 deg C and 1700 deg C are identified as instability points. (author)

  18. Coronal mass ejection hits mercury: A.I.K.E.F. hybrid-code results compared to MESSENGER data

    Science.gov (United States)

    Exner, W.; Heyner, D.; Liuzzo, L.; Motschmann, U.; Shiota, D.; Kusano, K.; Shibayama, T.

    2018-04-01

    Mercury is the planet orbiting closest to the Sun and is therefore embedded in an intense and highly varying solar wind. In-situ MESSENGER data on the plasma environment near Mercury indicate that a coronal mass ejection (CME) passed the planet on 23 November 2011 over the span of the 12 h MESSENGER orbit. Slavin et al. (2014) derived the upstream parameters of the solar wind at the time of that orbit, and were able to explain the observed MESSENGER data in the cusp and magnetopause segments of MESSENGER's trajectory. These upstream parameters will be used for our first simulation run. We use the hybrid code A.I.K.E.F., which treats ions as individual particles and electrons as a massless fluid, to conduct hybrid simulations of Mercury's magnetospheric response to the impact of the CME on ion gyro time scales. Results from the simulation are in agreement with magnetic field measurements from the inner day-side magnetosphere and the bow-shock region. However, at the planet's nightside, Mercury's plasma environment seemed to be governed by different solar wind conditions; we conclude that Mercury's interaction with the CME cannot be sufficiently described by a single set of upstream parameters. Therefore, to simulate the magnetospheric response while MESSENGER was located in the tail region, we use parameters obtained from the MHD solar wind simulation code SUSANOO (Shiota et al., 2014) for our second simulation run. The parameters of the SUSANOO model achieve good agreement with the data concerning the plasma tail crossing and the night-side approach to Mercury. However, the polar and closest approaches are poorly reproduced by both sets of upstream parameters: neither upstream dataset is able to reproduce the MESSENGER crossing of Mercury's magnetospheric cusp. We conclude that the respective CME was too variable on the timescale of the MESSENGER orbit to be described by only two sets of upstream conditions. Our results suggest locally strong

  19. Comparative analysis of the results obtained by computer code ASTEC V2 and RELAP 5.3.2 for small leak ID 80 for VVER 1000

    International Nuclear Information System (INIS)

    Atanasova, B.; Grudev, P.

    2011-01-01

    The purpose of this report is to present the results obtained by simulation and subsequent analysis of an emergency mode for a small leak with ID 80 for WWER 1000/B320 - Kozloduy NPP Units 5 and 6. Calculations were performed with the ASTEC v2 computer code for severe accident analysis, which was developed by the French and German organizations IRSN and GRS. The integral RELAP5 computer code is used as a reference for comparison of results. The analyses are focused on the processes occurring in the in-vessel phase of the emergency mode with significant core damage. The main thermohydraulic parameters, the start of reactor core degradation and the subsequent fuel relocation until reactor vessel failure are evaluated in the analysis. The RELAP5 computer code is used as a reference code to compare the results obtained up to early core degradation, which occurs after core uncovery when fuel temperatures exceed 1200 °C

  20. Comparison of ASME Code NB-3200 and NB-3600 results for fatigue analysis of B31.1 branch nozzles

    International Nuclear Information System (INIS)

    Nitzel, M.E.; Ware, A.G.; Morton, D.K.

    1996-01-01

    Fatigue analyses were conducted on two reactor coolant system branch nozzles in an operating PWR designed to the B31.1 Code, for which no explicit fatigue analysis was required by the licensing basis. These analyses were performed as part of resolving issues connected with the NRC's Fatigue Action Plan, to determine whether the cumulative usage factor (CUF) for these nozzles, calculated using the 1992 ASME Code and representative PWR transients, was comparable to that of nozzles designed and analyzed to the ASME Code. Both NB-3200 and NB-3600 ASME Code methods were used. The NB-3200 analyses included the development of finite element models for each nozzle. Although detailed thermal transients were not available for the plant analyzed, representative transients from similar PWRs were applied in each method. CUFs calculated using NB-3200 methods were significantly less than those using NB-3600. The paper points out differences in analysis methods and highlights difficulties and unknowns in performing more detailed analyses to reduce conservative assumptions

  1. A multidisciplinary audit of clinical coding accuracy in otolaryngology: financial, managerial and clinical governance considerations under payment-by-results.

    Science.gov (United States)

    Nouraei, S A R; O'Hanlon, S; Butler, C R; Hadovsky, A; Donald, E; Benjamin, E; Sandhu, G S

    2009-02-01

    To audit the accuracy of otolaryngology clinical coding and identify ways of improving it. Prospective multidisciplinary audit, using the 'national standard clinical coding audit' methodology supplemented by 'double-reading and arbitration'. Teaching-hospital otolaryngology and clinical coding departments. Otolaryngology inpatient and day-surgery cases. Concordance between initial coding performed by a coder (first cycle) and final coding by a clinician-coder multidisciplinary team (MDT; second cycle) for primary and secondary diagnoses and procedures, and Health Resource Groupings (HRG) assignment. 1250 randomly selected cases were studied. Coding errors occurred in 24.1% of cases (301/1250). The clinician-coder MDT reassigned 48 primary diagnoses and 186 primary procedures and identified a further 209 initially missed secondary diagnoses and procedures. In 203 cases, the patient's initial HRG changed. Incorrect coding caused an average revenue loss of £174.90 per patient (14.7%); 60% of the total income variance was due to miscoding of eight highly complex head and neck cancer cases. The 'HRG drift' created the appearance of disproportionate resource utilisation when treating 'simple' cases. At our institution the total cost of maintaining a clinician-coder MDT was 4.8 times lower than the income regained through the double-reading process. This large audit of otolaryngology practice identifies a substantial degree of error in coding on discharge. This leads to significant loss of departmental revenue, and given that the same data is used for benchmarking and for making decisions about resource allocation, it distorts the picture of clinical practice. These problems can be rectified by implementing a cost-effective clinician-coder double-reading multidisciplinary team as part of a data-assurance clinical governance framework, which we recommend should be established in hospitals.

  2. Machine learning with R

    CERN Document Server

    Lantz, Brett

    2013-01-01

    Written as a tutorial to explore and understand the power of R for machine learning. This practical guide covers all of the need-to-know topics in a very systematic way. For each machine learning approach, each step in the process is detailed, from preparing the data for analysis to evaluating the results. These steps will build the knowledge you need to apply them to your own data science tasks. Intended for those who want to learn how to use R's machine learning capabilities and gain insight from your data. Perhaps you already know a bit about machine learning, but have never used R; or

  3. Description and validation of ANTEO, an optimised PC code for the thermalhydraulic analysis of fuel bundles

    International Nuclear Information System (INIS)

    Cevolani, S.

    1995-01-01

    The paper deals with the description of a Personal-Computer-oriented subchannel code devoted to the steady-state thermal hydraulic analysis of nuclear reactor fuel bundles. The development of such a code was made possible by two facts: firstly, the increase in the computing power of desktop machines; secondly, the fact that several years of experience in operating subchannel codes have shown how to simplify many of the physical models without a sensible loss of accuracy. For the sake of validation, the developed code was compared with a traditional subchannel code, COBRA. The results of the comparison show a very good agreement between the two codes. (author)

  4. ANTEO: An optimised PC computer code for the steady state thermal hydraulic analysis of rod bundles

    International Nuclear Information System (INIS)

    Cevolani, S.

    1996-07-01

    The paper deals with the description of a Personal-Computer-oriented subchannel code devoted to the steady-state thermal hydraulic analysis of nuclear reactor fuel bundles. The development of such a code was made possible by two facts: first, the increase in the computing power of desktop machines; secondly, the fact that several years of experience in operating subchannel codes have shown how to simplify many of the physical models without a sensible loss of accuracy. For the sake of validation, the developed code was compared with a traditional subchannel code, COBRA. The results of the comparison show a very good agreement between the two codes

  5. Simple machines

    CERN Document Server

    Graybill, George

    2007-01-01

    Just how simple are simple machines? With our ready-to-use resource, they are simple to teach and easy to learn! Chock-full of information and activities, we begin with a look at force, motion and work, and examples of simple machines in daily life are given. With this background, we move on to different kinds of simple machines including: Levers, Inclined Planes, Wedges, Screws, Pulleys, and Wheels and Axles. An exploration of some compound machines follows, such as the can opener. Our resource is a real time-saver, as all the reading passages and student activities are provided. Presented in s

  6. Keyboard with Universal Communication Protocol Applied to CNC Machine

    Directory of Open Access Journals (Sweden)

    Mejía-Ugalde Mario

    2014-04-01

    This article describes the use of a universal communication protocol for a microcontroller-based industrial keyboard applied to a computer numerically controlled (CNC) machine. The main difference among keyboard manufacturers is that each has its own source-code programming, producing a different communication protocol and leading to improper interpretation of the functions established. This results in commercial industrial keyboards that are expensive and incompatible when connected to different machines. In the present work, the protocol allows the designed universal keyboard and the standard PC keyboard to be connected at the same time; it is compatible with all computers through USB, AT or PS/2 communications, for use in CNC machines, with extension to other machines such as robots, blowing machines, injection molding machines and others. The advantages of this design include its easy reprogramming, decreased costs, manipulation of various machine functions and easy expansion of input and output signals. The results of the performance tests were satisfactory, because each key can be programmed and reprogrammed in different ways, generating codes for different functions depending on the application where it is required.

  7. Surviving "Payment by Results": a simple method of improving clinical coding in burn specialised services in the United Kingdom.

    Science.gov (United States)

    Wallis, Katy L; Malic, Claudia C; Littlewood, Sonia L; Judkins, Keith; Phipps, Alan R

    2009-03-01

    Coding inpatient episodes plays an important role in determining the financial remuneration of a clinical service; insufficient or incomplete data may have very significant consequences for its viability. We created a document that improves the coding process in our Burns Centre. At the Yorkshire Regional Burns Centre, an inpatient summary sheet was designed to prospectively record and present essential information on a daily basis, for use in the coding process. The level of care was also recorded. A 3-month audit was conducted to assess the efficacy of the new forms. Forty-nine patients were admitted to the Burns Centre, with a mean age of 27.6 years and TBSA ranging from 0.5% to 65%. The total stay in the Burns Centre was 758 days, of which 22% were at level B3-B5 and 39% at level B2. The use of the new discharge document identified potential income of about £500,000 at our local daily tariffs for high dependency and intensive care. The new form is able to ensure a high quality of coding, with a possible direct impact on the financial resources accrued for burn care.

  8. Barriers to data quality resulting from the process of coding health information to administrative data: a qualitative study.

    Science.gov (United States)

    Lucyk, Kelsey; Tang, Karen; Quan, Hude

    2017-11-22

    Administrative health data are increasingly used for research and surveillance to inform decision-making because of their large sample sizes, geographic coverage, comprehensiveness, and the possibility of longitudinal follow-up. Within Canadian provinces, individuals are assigned unique personal health numbers that allow for linkage of administrative health records in that jurisdiction. It is therefore necessary to ensure that these data are of high quality, and that chart information is accurately coded to meet this end. Our objective is to explore the potential barriers to high-quality data coding through qualitative inquiry into the roles and responsibilities of medical chart coders. We conducted semi-structured interviews with 28 medical chart coders from Alberta, Canada. We used thematic analysis and open-coded each transcript to understand the process of administrative health data generation and identify barriers to its quality. The process of generating administrative health data is highly complex and involves a diverse workforce. As such, there are multiple points in this process that introduce challenges for high-quality data. For coders, the main barriers to data quality occurred around chart documentation, variability in the interpretation of chart information, and high quota expectations. This study illustrates the complex nature of barriers to high-quality coding in the context of administrative data generation. The findings from this study may be of use to data users, researchers, and decision-makers who wish to better understand the limitations of their data or pursue interventions to improve data quality.

  9. On the Representation of Aquifer Compressibility in General Subsurface Flow Codes: How an Alternate Definition of Aquifer Compressibility Matches Results from the Groundwater Flow Equation

    Science.gov (United States)

    Birdsell, D.; Karra, S.; Rajaram, H.

    2017-12-01

    The governing equations for subsurface flow codes in deformable porous media are derived from the fluid mass balance equation. One class of these codes, which we call general subsurface flow (GSF) codes, does not explicitly track the motion of the solid porous media but does accept general constitutive relations for porosity, density, and fluid flux. Examples of GSF codes include PFLOTRAN, FEHM, STOMP, and TOUGH2. Meanwhile, analytical and numerical solutions based on the groundwater flow equation have assumed forms for porosity, density, and fluid flux. We review the derivation of the groundwater flow equation, which uses the form of Darcy's equation that accounts for the velocity of fluids with respect to solids and defines the soil matrix compressibility accordingly. We then show how GSF codes have a different governing equation if they use the form of Darcy's equation that is written only in terms of fluid velocity. The difference is seen in the porosity change, which is part of the specific storage term in the groundwater flow equation. We propose an alternative definition of soil matrix compressibility to correct for the untracked solid velocity. Simulation results show significantly less error for our new compressibility definition than the traditional compressibility when compared to analytical solutions from the groundwater literature. For example, the error in one calculation for a pumped sandstone aquifer goes from 940 to <70 Pa when the new compressibility is used. Code users and developers need to be aware of assumptions in the governing equations and constitutive relations in subsurface flow codes, and our newly-proposed compressibility function should be incorporated into GSF codes.
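    For reference, the textbook specific-storage term from the groundwater flow equation, which combines matrix and fluid compressibility, can be evaluated as below. The values are typical literature numbers (assumed here for illustration), not those of the paper's sandstone example.

    ```python
    # Specific storage S_s = rho * g * (alpha + n * beta)
    rho = 1000.0      # water density, kg/m^3
    g = 9.81          # gravitational acceleration, m/s^2
    alpha = 1.0e-9    # matrix compressibility, 1/Pa (assumed, sandstone-like)
    beta = 4.4e-10    # water compressibility, 1/Pa
    n = 0.2           # porosity (assumed)

    S_s = rho * g * (alpha + n * beta)   # units: 1/m
    print(S_s)
    ```

    The paper's point is precisely that the matrix-compressibility term alpha must be defined consistently with the form of Darcy's equation a code uses; the calculation above shows only the conventional groundwater-literature definition.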

  10. Face machines

    Energy Technology Data Exchange (ETDEWEB)

    Hindle, D.

    1999-06-01

    The article surveys the latest equipment available from the world's manufacturers of a range of machines for tunnelling. These are grouped under the headings: excavators; impact hammers; road headers; and shields and tunnel boring machines. Products of thirty manufacturers are referred to. Addresses and fax numbers of companies are supplied. 5 tabs., 13 photos.

  11. Electric machine

    Science.gov (United States)

    El-Refaie, Ayman Mohamed Fawzi [Niskayuna, NY; Reddy, Patel Bhageerath [Madison, WI

    2012-07-17

    An interior permanent magnet electric machine is disclosed. The interior permanent magnet electric machine comprises a rotor comprising a plurality of radially placed magnets each having a proximal end and a distal end, wherein each magnet comprises a plurality of magnetic segments and at least one magnetic segment towards the distal end comprises a high resistivity magnetic material.

  12. Machine Learning.

    Science.gov (United States)

    Kirrane, Diane E.

    1990-01-01

    As scientists seek to develop machines that can "learn," that is, solve problems by imitating the human brain, a gold mine of information on the processes of human learning is being discovered, expert systems are being improved, and human-machine interactions are being enhanced. (SK)

  13. Nonplanar machines

    International Nuclear Information System (INIS)

    Ritson, D.

    1989-05-01

    This talk examines methods available to minimize, but never entirely eliminate, degradation of machine performance caused by terrain following. Breaking of planar machine symmetry for engineering convenience and/or monetary savings must be balanced against small performance degradation, and can only be decided on a case-by-case basis. 5 refs

  14. Parallelization of the MAAP-A code neutronics/thermal hydraulics coupling

    International Nuclear Information System (INIS)

    Froehle, P.H.; Wei, T.Y.C.; Weber, D.P.; Henry, R.E.

    1998-01-01

    A major new feature, one-dimensional space-time kinetics, has been added to a developmental version of the MAAP code through the introduction of the DIF3D-K module. This code is referred to as MAAP-A. To reduce the overall job time required, a capability has been provided to run the MAAP-A code in parallel. The parallel version of MAAP-A utilizes two machines running in parallel, with the DIF3D-K module executing on one machine and the rest of the MAAP-A code executing on the other machine. Timing results obtained during the development of the capability indicate that reductions in time of 30--40% are possible. The parallel version can be run on two SPARC 20 (SUN OS 5.5) workstations connected through the ethernet. MPI (Message Passing Interface standard) needs to be implemented on the machines. If necessary the parallel version can also be run on only one machine. The results obtained running in this one-machine mode identically match the results obtained from the serial version of the code
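    The reported 30-40% time reductions on two machines are consistent with Amdahl's law for parallel fractions of roughly 0.6-0.8, as the small check below shows; the parallel fractions are inferred here for illustration, not stated in the report.

    ```python
    def amdahl_speedup(parallel_fraction, n_processors):
        """Theoretical speedup when fraction p of the work is split across n processors."""
        p = parallel_fraction
        return 1.0 / ((1.0 - p) + p / n_processors)

    # Two machines, plausible parallel fractions for the DIF3D-K / MAAP-A split
    for p in (0.6, 0.8):
        s = amdahl_speedup(p, 2)
        print(f"p={p}: speedup {s:.2f}, time saved {1 - 1 / s:.0%}")
    ```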

  15. Machine learning of the reactor core loading pattern critical parameters

    International Nuclear Information System (INIS)

    Trontl, K.; Pevec, D.; Smuc, T.

    2007-01-01

    The usual approach to loading pattern optimization involves high degree of engineering judgment, a set of heuristic rules, an optimization algorithm and a computer code used for evaluating proposed loading patterns. The speed of the optimization process is highly dependent on the computer code used for the evaluation. In this paper we investigate the applicability of a machine learning model which could be used for fast loading pattern evaluation. We employed a recently introduced machine learning technique, Support Vector Regression (SVR), which has a strong theoretical background in statistical learning theory. Superior empirical performance of the method has been reported on difficult regression problems in different fields of science and technology. SVR is a data driven, kernel based, nonlinear modelling paradigm, in which model parameters are automatically determined by solving a quadratic optimization problem. The main objective of the work reported in this paper was to evaluate the possibility of applying SVR method for reactor core loading pattern modelling. The starting set of experimental data for training and testing of the machine learning algorithm was obtained using a two-dimensional diffusion theory reactor physics computer code. We illustrate the performance of the solution and discuss its applicability, i.e., complexity, speed and accuracy, with a projection to a more realistic scenario involving machine learning from the results of more accurate and time consuming three-dimensional core modelling code. (author)
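    An SVR-based surrogate of the kind described can be sketched with scikit-learn. Here the loading-pattern features and the target "critical parameter" are entirely synthetic placeholders for the outputs of a core physics code; the hyperparameters are illustrative, not tuned values from the paper.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Hypothetical stand-in: 400 loading patterns with 6 numeric features each,
    # mapped to a critical parameter with a little evaluation noise
    X = rng.random((400, 6))
    y = 1.0 + 0.1 * X.sum(axis=1) + rng.normal(0.0, 0.005, 400)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(X_tr, y_tr)
    r2 = model.score(X_te, y_te)   # goodness of fit on held-out patterns
    print(r2)
    ```

    Once trained on the outputs of an accurate core modelling code, such a surrogate evaluates a candidate loading pattern in microseconds rather than the minutes a full physics calculation would take, which is the speedup the paper targets.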

  16. Validation of One-Dimensional Module of MARS-KS1.2 Computer Code By Comparison with the RELAP5/MOD3.3/patch3 Developmental Assessment Results

    International Nuclear Information System (INIS)

    Bae, S. W.; Chung, B. D.

    2010-07-01

    This report records the results of code validation for the one-dimensional module of the MARS-KS thermal-hydraulics analysis code by comparison with the RELAP5/MOD3.3 computer code. For the validation calculations, simulations from the RELAP5 Code Developmental Assessment Problem set, which consists of 22 simulation problems in 3 categories, were selected. The results of the 3 categories of simulations demonstrate that the one-dimensional module of the MARS code and the RELAP5/MOD3.3 code are essentially the same code. This is expected, as the two codes have basically the same set of field equations, constitutive equations and main thermal-hydraulic models. The result suggests that the high level of code validity of RELAP5/MOD3.3 can be directly applied to the MARS one-dimensional module.

  17. Creativity in Machine Learning

    OpenAIRE

    Thoma, Martin

    2016-01-01

    Recent machine learning techniques can be modified to produce creative results. Those results did not exist before; they are not trivial combinations of the data fed into the machine learning system. The obtained results come in multiple forms: as images, as text and as audio. This paper gives a high-level overview of how they are created and gives some examples. It is meant as a summary of current work, and to give people who are new to machine learning some starting points.

  18. Some uncertainty results obtained by the statistical version of the KARATE code system related to core design and safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Panka, Istvan; Hegyi, Gyoergy; Maraczy, Csaba; Temesvari, Emese [Hungarian Academy of Sciences, Budapest (Hungary). Reactor Analysis Dept.

    2017-11-15

    The best-estimate KARATE code system has been widely used for core design calculations and simulations of slow transients of VVER reactors. Recently there has been an increasing need to assess the uncertainties of such calculations by propagating the basic input uncertainties of the models through the full calculation chain. In order to determine the uncertainties of quantities of interest during burnup, the statistical version of the KARATE code system has been elaborated. In the first part of the paper, the main features of the new code system are discussed. The applied statistical method is based on Monte-Carlo sampling of the considered input data, taking into account mainly the covariance matrices of the cross sections and/or the technological uncertainties. In the second part of the paper, only the uncertainties of the cross sections are considered and an equilibrium cycle of a VVER-440 type reactor is investigated. The burnup dependence of the uncertainties of some safety-related parameters (e.g. critical boron concentration, rod worth, feedback coefficients, assembly-wise radial power and burnup distribution) is discussed and compared to the recently used limits.
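
The Monte-Carlo sampling step described above can be sketched in a few lines: draw perturbed inputs from their assumed distributions, run the model for each draw, and read the spread of the output. The model and all numbers below are invented stand-ins, not KARATE inputs or VVER data.

```python
# Monte-Carlo propagation of input uncertainty through a toy model.
import random
import statistics

random.seed(1)

def model(sigma_a, sigma_f):
    """Toy stand-in for a core simulator: a k-like ratio of two inputs."""
    return sigma_f / sigma_a

def propagate(n=2000):
    ks = []
    for _ in range(n):
        sigma_a = random.gauss(1.00, 0.02)   # invented 2% input uncertainty
        sigma_f = random.gauss(1.20, 0.03)   # invented 2.5% input uncertainty
        ks.append(model(sigma_a, sigma_f))
    return statistics.mean(ks), statistics.stdev(ks)

mean_k, std_k = propagate()
print(mean_k, std_k)
```

A real application would sample correlated cross sections from a covariance matrix and run the full calculation chain per sample, but the bookkeeping is the same.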

  19. Wire Array Z-pinches on Sphinx Machine: Experimental Results and Relevant Points of Microsecond Implosion Physics

    Science.gov (United States)

    Calamy, H.; Hamann, F.; Lassalle, F.; Bayol, F.; Mangeant, C.; Morell, A.; Huet, D.; Bedoch, J. P.; Chittenden, J. P.; Lebedev, S. V.; Jennings, C. A.; Bland, S. N.

    2006-01-01

    Centre d'Etudes de Gramat (France) has developed an efficient long-implosion-time (800 ns) aluminum plasma radiation source (PRS). Based on LTD technology, the SPHINX facility is being developed as a 1-3 MJ, 1 μs rise time, 4-10 MA current driver. In this paper it was used in a 1 MJ, 4 MA configuration to drive aluminum nested-wire-array Z-pinches with a K-shell yield up to 20 kJ and an x-ray pulse FWHM of about 50 ns. We present the latest SPHINX experiments and some of the main physics issues of the microsecond regime. The experimental setup and results are described with the aim of presenting the trends that have been obtained. The main features of microsecond implosions of wire arrays can be analyzed with the same methods and theories as are used for faster Z-pinches. The effect of load polarity was examined. The stability of the implosion, one of the critical points for microsecond wire arrays due to the load dimensions imposed by the time scale, is also addressed. A simple scaling from 100 ns Z-pinch results to 800 ns ones gives good results, and the use of nested arrays dramatically improves the implosion quality and the K-shell yield of the load. However, additional effects, such as the impact of the return-current can geometry on the implosion, have to be taken into account for our loads. Axial inhomogeneity of the implosion, the origin of which is not yet well understood, occurs in some shots and impacts the radiation output. The shape of the radiative pulse is discussed and compared with the homogeneity of the implosion. Numerical 2D R-Z and R-θ simulations are used to highlight some experimental results and to understand the plasma conditions during these microsecond wire-array implosions.

  20. Wire Array Z-pinches on Sphinx Machine: Experimental Results and Relevant Points of Microsecond Implosion Physics

    International Nuclear Information System (INIS)

    Calamy, H.; Hamann, F.; Lassalle, F.; Bayol, F.; Mangeant, C.; Morell, A.; Huet, D.; Bedoch, J.P.; Chittenden, J.P.; Lebedev, S.V.; Jennings, C.A.; Bland, S.N.

    2006-01-01

    Centre d'Etudes de Gramat (France) has developed an efficient long-implosion-time (800 ns) aluminum plasma radiation source (PRS). Based on LTD technology, the SPHINX facility is being developed as a 1-3 MJ, 1 μs rise time, 4-10 MA current driver. In this paper it was used in a 1 MJ, 4 MA configuration to drive aluminum nested-wire-array Z-pinches with a K-shell yield up to 20 kJ and an x-ray pulse FWHM of about 50 ns. We present the latest SPHINX experiments and some of the main physics issues of the microsecond regime. The experimental setup and results are described with the aim of presenting the trends that have been obtained. The main features of microsecond implosions of wire arrays can be analyzed with the same methods and theories as are used for faster Z-pinches. The effect of load polarity was examined. The stability of the implosion, one of the critical points for microsecond wire arrays due to the load dimensions imposed by the time scale, is also addressed. A simple scaling from 100 ns Z-pinch results to 800 ns ones gives good results, and the use of nested arrays dramatically improves the implosion quality and the K-shell yield of the load. However, additional effects, such as the impact of the return-current can geometry on the implosion, have to be taken into account for our loads. Axial inhomogeneity of the implosion, the origin of which is not yet well understood, occurs in some shots and impacts the radiation output. The shape of the radiative pulse is discussed and compared with the homogeneity of the implosion. Numerical 2D R-Z and R-θ simulations are used to highlight some experimental results and to understand the plasma conditions during these microsecond wire-array implosions.

  1. Constructing kinetics fatigue diagrams using testing results obtained on a machine with rigid loading for specimens of various thickness

    International Nuclear Information System (INIS)

    Simin'kovich, V.N.; Gladkij, Ya.N.; Deev, N.A.

    1981-01-01

    Bending tests of 40KhS steel specimens, tempered at 200 and 500 deg C, are conducted to investigate the possible effects of specimen thickness on fatigue crack growth. Kinetic fatigue diagrams are constructed using the investigation results. An increase in crack growth with thickness is observed only in high-tempered specimens. Changes in specimen thickness do not affect crack growth in 40KhS low-tempered steel [ru

  2. The Machine within the Machine

    CERN Multimedia

    Katarina Anthony

    2014-01-01

    Although Virtual Machines are widespread across CERN, you probably won't have heard of them unless you work for an experiment. Virtual machines - known as VMs - allow you to create a separate machine within your own, allowing you to run Linux on your Mac, or Windows on your Linux - whatever combination you need.   Using a CERN Virtual Machine, Linux analysis software runs on a MacBook. When it comes to LHC data, one of the primary issues collaborations face is the diversity of computing environments among collaborators spread across the world. What if an institute cannot run the analysis software because they use different operating systems? "That's where the CernVM project comes in," says Gerardo Ganis, PH-SFT staff member and leader of the CernVM project. "We were able to respond to experimentalists' concerns by providing a virtual machine package that could be used to run experiment software. This way, no matter what hardware they have ...

  3. A performance indicator of the effectiveness of human-machine interfaces for nuclear power plants: Preliminary results

    International Nuclear Information System (INIS)

    Moray, N.; Lee, J.; Vicente, K.J.; Jones, B.G.; Brock, R.; Djemil, T.; Rasmussen, J.

    1992-01-01

    In this paper the authors report the results of experiments, based on deGroot's work, to assess the value of memory tests for measuring the quality of displays and the level of expertise of operators. Three kinds of display and people with three levels of expertise were included in the experiments. The displays were computer-generated versions of traditional analog meters; traditional analog meters supplemented by a dynamic graphic representing the relation between temperature and pressure in some subsystems; and a dynamic graphic representing the underlying thermodynamics of power generation using the Rankine cycle. The levels of expertise were represented by undergraduates with one semester of thermodynamics, graduate students of thermodynamics and nuclear engineering, and professional nuclear power plant operators. Each group watched a set of transients presented on the displays, using data generated by a high-fidelity NPP training simulator, and was then asked three kinds of questions. The first measured their ability to recall the exact values of system state variables. The second measured their ability to recall which qualitative states the system had entered during the transient. The third measured their ability to diagnose the nature of the transient. The results of the experiments are reported in relation to the possible use of memory tests to evaluate displays, and to the interaction of display quality with the level of expertise of operators.

  4. Machine Translation Effect on Communication

    DEFF Research Database (Denmark)

    Jensen, Mika Yasuoka; Bjørn, Pernille

    2011-01-01

    Intercultural collaboration facilitated by machine translation has gradually spread in various settings. Still, little is known about the practice of machine-translation-mediated communication. This paper investigates how machine translation affects intercultural communication in practice. Based...... on communication in which a multilingual communication system is applied, we identify four communication types and their influences on stakeholders' communication processes, especially focusing on the establishment and maintenance of common ground. Different from our expectation that the quality of machine translation results...

  5. Machine translation

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, M

    1982-04-01

    Each language has its own structure. In translating one language into another one, language attributes and grammatical interpretation must be defined in an unambiguous form. In order to parse a sentence, it is necessary to recognize its structure. A so-called context-free grammar can help in this respect for machine translation and machine-aided translation. Problems to be solved in studying machine translation are taken up in the paper, which discusses subjects for semantics and for syntactic analysis and translation software. 14 references.

  6. Fundamentals of machine design

    CERN Document Server

    Karaszewski, Waldemar

    2011-01-01

    A forum of researchers, educators and engineers involved in various aspects of Machine Design provided the inspiration for this collection of peer-reviewed papers. The resultant dissemination of the latest research results, and the exchange of views concerning the future research directions to be taken in this field will make the work of immense value to all those having an interest in the topics covered. The book reflects the cooperative efforts made in seeking out the best strategies for effecting improvements in the quality and the reliability of machines and machine parts and for extending

  7. Code Cactus; Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: 1) Analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient behavior; 2) Analysis of the thermal and hydrodynamic behavior of water-cooled and -moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates: - flow distribution among parallel channels, coupled or not by conduction across the plates, for imposed pressure-drop or flow-rate conditions, variable or not with respect to time; - the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, containing a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code, and has as its complement (FLID) a one-channel, two-dimensional code. (authors) [French] Ce code permet de traiter les problemes ci-dessous: 1. Depouillement d'essais thermiques sur boucle a eau, haute ou basse pression, en regime permanent ou transitoire; 2. Etudes thermiques et hydrauliques de reacteurs a eau, a plaques, a haute ou basse pression, ebullition permise: - repartition entre canaux paralleles, couples ou non par conduction a travers plaques, pour des conditions de debit ou de pertes de charge imposees, variables ou non dans le temps; - la puissance peut etre couplee a la neutronique et une representation schematique des actions de securite est prevue. Ce code (Cactus) a une dimension d'espace et plusieurs canaux, a pour complement Flid qui traite l'etude d'un seul canal a deux dimensions. (auteurs)

  8. Coupling a Basin Modeling and a Seismic Code using MOAB

    KAUST Repository

    Yan, Mi; Jordan, Kirk; Kaushik, Dinesh; Perrone, Michael; Sachdeva, Vipin; Tautges, Timothy J.; Magerlein, John

    2012-01-01

    We report on a demonstration of loose multiphysics coupling between a basin modeling code and a seismic code running on a large parallel machine. Multiphysics coupling, which is one critical capability for a high performance computing (HPC) framework, was implemented using the MOAB open-source mesh and field database. MOAB provides for code coupling by storing mesh data and input and output field data for the coupled analysis codes and interpolating the field values between different meshes used by the coupled codes. We found it straightforward to use MOAB to couple the PBSM basin modeling code and the FWI3D seismic code on an IBM Blue Gene/P system. We describe how the coupling was implemented and present benchmarking results for up to 8 racks of Blue Gene/P with 8192 nodes and MPI processes. The coupling code is fast compared to the analysis codes and it scales well up to at least 8192 nodes, indicating that a mesh and field database is an efficient way to implement loose multiphysics coupling for large parallel machines.
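
The field-transfer step a mesh-and-field database performs in loose coupling can be illustrated in miniature: interpolate a field stored on one mesh onto the nodes of another. The sketch below uses invented 1-D meshes and a made-up field; MOAB's actual API is not shown.

```python
# Linear interpolation of a nodal field from a source mesh to a target
# mesh -- the core of field transfer in loose multiphysics coupling.
import bisect

def interpolate(src_x, src_f, dst_x):
    """Linearly interpolate (src_x, src_f) onto the nodes dst_x."""
    out = []
    for x in dst_x:
        i = bisect.bisect_right(src_x, x)
        i = min(max(i, 1), len(src_x) - 1)   # clamp to a valid interval
        x0, x1 = src_x[i - 1], src_x[i]
        f0, f1 = src_f[i - 1], src_f[i]
        t = (x - x0) / (x1 - x0)
        out.append(f0 + t * (f1 - f0))
    return out

basin_mesh = [0.0, 1.0, 2.0, 3.0]        # invented coarse source mesh
basin_field = [10.0, 12.0, 11.0, 9.0]    # invented pressure-like field
seismic_mesh = [0.5, 1.5, 2.5]           # invented offset target mesh
print(interpolate(basin_mesh, basin_field, seismic_mesh))
```

Real codes do this on 3-D unstructured meshes with search trees, but the contract is the same: each analysis code reads field values at its own nodes.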

  9. Coupling a Basin Modeling and a Seismic Code using MOAB

    KAUST Repository

    Yan, Mi

    2012-06-02

    We report on a demonstration of loose multiphysics coupling between a basin modeling code and a seismic code running on a large parallel machine. Multiphysics coupling, which is one critical capability for a high performance computing (HPC) framework, was implemented using the MOAB open-source mesh and field database. MOAB provides for code coupling by storing mesh data and input and output field data for the coupled analysis codes and interpolating the field values between different meshes used by the coupled codes. We found it straightforward to use MOAB to couple the PBSM basin modeling code and the FWI3D seismic code on an IBM Blue Gene/P system. We describe how the coupling was implemented and present benchmarking results for up to 8 racks of Blue Gene/P with 8192 nodes and MPI processes. The coupling code is fast compared to the analysis codes and it scales well up to at least 8192 nodes, indicating that a mesh and field database is an efficient way to implement loose multiphysics coupling for large parallel machines.

  10. Machine learning and medical imaging

    CERN Document Server

    Shen, Dinggang; Sabuncu, Mert

    2016-01-01

    Machine Learning and Medical Imaging presents state-of- the-art machine learning methods in medical image analysis. It first summarizes cutting-edge machine learning algorithms in medical imaging, including not only classical probabilistic modeling and learning methods, but also recent breakthroughs in deep learning, sparse representation/coding, and big data hashing. In the second part leading research groups around the world present a wide spectrum of machine learning methods with application to different medical imaging modalities, clinical domains, and organs. The biomedical imaging modalities include ultrasound, magnetic resonance imaging (MRI), computed tomography (CT), histology, and microscopy images. The targeted organs span the lung, liver, brain, and prostate, while there is also a treatment of examining genetic associations. Machine Learning and Medical Imaging is an ideal reference for medical imaging researchers, industry scientists and engineers, advanced undergraduate and graduate students, a...

  11. Tattoo machines, needles and utilities.

    Science.gov (United States)

    Rosenkilde, Frank

    2015-01-01

    Starting out as a professional tattooist back in 1977 in Copenhagen, Denmark, Frank Rosenkilde has personally experienced the remarkable development of tattoo machines, needles and utilities: all the way from home-made equipment to industrial products of substantially improved quality. Machines can be constructed like the traditional dual-coil and single-coil machines, or can be e-coil, rotary and hybrid machines, with the more convenient and precise rotary machines being the recent trend. This development has resulted in disposable needles and utilities. Newer machines are more easily kept clean and protected with foil to prevent cross-contamination and infections. The machines and the tattooists' knowledge and awareness about prevention of infection have developed hand-in-hand. For decades, Frank Rosenkilde has been collecting tattoo machines. Part of his collection is presented here, supplemented by his personal notes. © 2015 S. Karger AG, Basel.

  12. Machine Learning an algorithmic perspective

    CERN Document Server

    Marsland, Stephen

    2009-01-01

    Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text.Theory Backed up by Practical ExamplesThe book covers neural networks, graphical models, reinforcement le

  13. Representing high-dimensional data to intelligent prostheses and other wearable assistive robots: A first comparison of tile coding and selective Kanerva coding.

    Science.gov (United States)

    Travnik, Jaden B; Pilarski, Patrick M

    2017-07-01

    Prosthetic devices have advanced in their capabilities and in the number and type of sensors included in their design. As the space of sensorimotor data available to a conventional or machine learning prosthetic control system increases in dimensionality and complexity, it becomes increasingly important that this data be represented in a useful and computationally efficient way. Well-structured sensory data allow prosthetic control systems to make informed, appropriate control decisions. In this study, we explore the impact that increased sensorimotor information has on current machine learning prosthetic control approaches. Specifically, we examine the effect that high-dimensional sensory data has on the computation time and prediction performance of a true-online temporal-difference learning prediction method as embedded within a resource-limited upper-limb prosthesis control system. We present results comparing tile coding, the dominant linear representation for real-time prosthetic machine learning, with a newly proposed modification to Kanerva coding that we call selective Kanerva coding. In addition to showing promising results for selective Kanerva coding, our results confirm potential limitations to tile coding as the number of sensory input dimensions increases. To our knowledge, this study is the first to explicitly examine representations for real-time machine learning prosthetic devices in general terms. This work therefore provides an important step towards forming an efficient prosthesis-eye view of the world, wherein prompt and accurate representations of high-dimensional data may be provided to machine learning control systems within artificial limbs and other assistive rehabilitation technologies.
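
Tile coding, the baseline representation compared in the abstract, is simple to show in one dimension: several offset tilings each activate exactly one binary feature per input, so nearby inputs share most active features. The parameters below (4 tilings, 8 tiles, unit range) are illustrative, not the paper's configuration.

```python
# Minimal 1-D tile coder: each of n_tilings offset grids contributes one
# active feature index, giving a sparse binary representation.
def tile_code(x, n_tilings=4, n_tiles=8, lo=0.0, hi=1.0):
    """Return the active feature index for each tiling."""
    active = []
    width = (hi - lo) / n_tiles
    for t in range(n_tilings):
        offset = t * width / n_tilings       # each tiling is shifted
        tile = int((x - lo + offset) / width)
        tile = min(tile, n_tiles - 1)        # clamp at the upper boundary
        active.append(t * n_tiles + tile)    # unique index per tiling
    return active

# Nearby inputs share most active tiles; distant inputs share none.
print(tile_code(0.50))
print(tile_code(0.55))
print(tile_code(0.95))
```

The cost that the paper highlights follows directly: with d input dimensions the number of tiles grows exponentially in d, which is what motivates Kanerva-style alternatives.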

  14. Machine Learning

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Machine learning, which builds on ideas in computer science, statistics, and optimization, focuses on developing algorithms to identify patterns and regularities in data, and using these learned patterns to make predictions on new observations. Boosted by its industrial and commercial applications, the field of machine learning is quickly evolving and expanding. Recent advances have seen great success in the realms of computer vision, natural language processing, and broadly in data science. Many of these techniques have already been applied in particle physics, for instance for particle identification, detector monitoring, and the optimization of computer resources. Modern machine learning approaches, such as deep learning, are only just beginning to be applied to the analysis of High Energy Physics data to approach more and more complex problems. These classes will review the framework behind machine learning and discuss recent developments in the field.

  15. Machine Translation

    Indian Academy of Sciences (India)

    Research MT System Example: The 'Janus' Translating Phone Project. The Janus ... based on laptops, and simultaneous translation of two speakers in a dialogue. For more ..... The current focus in MT research is on using machine learning.

  16. Using Peephole Optimization on Intermediate Code

    NARCIS (Netherlands)

    Tanenbaum, A.S.; van Staveren, H.; Stevenson, J.W.

    1982-01-01

    Many portable compilers generate an intermediate code that is subsequently translated into the target machine's assembly language. In this paper a stack-machine-based intermediate code suitable for algebraic languages (e.g., PASCAL, C, FORTRAN) and most byte-addressed mini- and microcomputers is
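
A peephole pass of the kind the abstract describes scans short windows of stack-machine intermediate code and rewrites known wasteful patterns until nothing changes. The instruction set and the three patterns below are invented for illustration; they are not the EM intermediate code of the paper.

```python
# Toy peephole optimizer over a stack-machine intermediate code.
def peephole(code):
    """Repeatedly rewrite two-instruction windows until a fixed point."""
    changed = True
    while changed:
        changed = False
        out, i = [], 0
        while i < len(code):
            window = tuple(code[i:i + 2])
            if window == (("PUSH", 0), ("ADD", None)):
                i += 2                      # x + 0  =>  x
                changed = True
            elif window == (("PUSH", 1), ("MUL", None)):
                i += 2                      # x * 1  =>  x
                changed = True
            elif len(window) == 2 and window[0][0] == "PUSH" \
                    and window[1] == ("POP", None):
                i += 2                      # push then pop: dead code
                changed = True
            else:
                out.append(code[i])
                i += 1
        code = out
    return code

prog = [("PUSH", 5), ("PUSH", 0), ("ADD", None),
        ("PUSH", 1), ("MUL", None), ("PUSH", 7), ("POP", None)]
print(peephole(prog))   # only ("PUSH", 5) survives
```

Because the pass runs on the machine-independent intermediate code, every target back end benefits from the same table of patterns, which is the paper's central point.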

  17. Validation of activity determination codes and nuclide vectors by using results from processing of retired components and operational waste

    International Nuclear Information System (INIS)

    Lundgren, Klas; Larsson, Arne

    2012-01-01

    Decommissioning studies for nuclear power reactors are performed in order to assess decommissioning costs and waste volumes, as well as to provide data for the licensing and construction of LILW repositories. An important part of this work is to estimate the amount of radioactivity in the different types of decommissioning waste. Studsvik ALARA Engineering has performed such assessments for LWRs and other nuclear facilities in Sweden. These assessments depend to a large extent on calculations, senior experience and sampling at the facilities. The precision of the calculations has been found to be relatively high close to the reactor core; for natural reasons, the precision declines with distance. Even where activity values are lower, the content of hard-to-measure nuclides can cause problems in the long-term safety demonstration of LLW repositories. At the same time, Studsvik is processing significant volumes of metallic and combustible waste from power stations in operation and in the decommissioning phase, as well as from other nuclear facilities such as research and waste treatment facilities. By combining its unique knowledge in the assessment of radioactivity inventories with the large data bank that the waste processing represents, the activity determination codes can be validated and the waste processing analysis supported with additional data. The intention of this presentation is to highlight how the European nuclear industry could jointly use waste processing data for the validation of activity determination codes. (authors)

  18. Aztheca Code

    International Nuclear Information System (INIS)

    Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.

    2017-09-01

    This paper presents the Aztheca code, which is formed by mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer, when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and with the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)

  19. A rich solution spray as a refining method in a small capacity, single effect, solar assisted absorption machine with the pair NH3/H2O: Experimental results

    International Nuclear Information System (INIS)

    Mendes, L.F.; Collares-Pereira, M.; Ziegler, F.

    2007-01-01

    Ammonia vapour refining is a common procedure in ammonia-water absorption machines. A solar-assisted single-effect absorption machine using the pair ammonia-water was developed and tested. Its desorber has a built-in adiabatic refining column constituted by a rich solution spray. The refining method proved its feasibility. The spray provided a more or less constant ammonia vapour enrichment of about 1%, which is enough for the working temperature ranges of this type of machine. It was also verified that the refining effect of the spray is almost independent of the refrigerant vapour and solution mass flow rates.

  20. Effects of Secondary Circuit Modeling on Results of Pressurized Water Reactor Main Steam Line Break Benchmark Calculations with New Coupled Code TRAB-3D/SMABRE

    International Nuclear Information System (INIS)

    Daavittila, Antti; Haemaelaeinen, Anitta; Kyrki-Rajamaeki, Riitta

    2003-01-01

    All of the three exercises of the Organization for Economic Cooperation and Development/Nuclear Regulatory Commission pressurized water reactor main steam line break (PWR MSLB) benchmark were calculated at VTT, the Technical Research Centre of Finland. For the first exercise, the plant simulation with point-kinetic neutronics, the thermal-hydraulics code SMABRE was used. The second exercise was calculated with the three-dimensional reactor dynamics code TRAB-3D, and the third exercise with the combination TRAB-3D/SMABRE. VTT has over ten years' experience of coupling neutronic and thermal-hydraulic codes, but this benchmark was the first time these two codes, both developed at VTT, were coupled together. The coupled code system is fast and efficient; the total computation time of the 100-s transient in the third exercise was 16 min on a modern UNIX workstation. The results of all the exercises are similar to those of the other participants. In order to demonstrate the effect of secondary circuit modeling on the results, three different cases were calculated. In case 1 there is no phase separation in the steam lines and no flow reversal in the aspirator. In case 2 the flow reversal in the aspirator is allowed, but there is no phase separation in the steam lines. Finally, in case 3 the drift-flux model is used for the phase separation in the steam lines, but the aspirator flow reversal is not allowed. With these two modeling variations, it is possible to cover a remarkably broad range of results. The maximum power level reached after the reactor trip varies from 534 to 904 MW, the range of the time of the power maximum being close to 30 s. Compared to the total calculated transient time of 100 s, the effect of the secondary side modeling is extremely important

  1. Study of axial protections of unloading machines of graphite piles

    International Nuclear Information System (INIS)

    Duco, Jacques; Pepin, Pierre; Cabaret, Guy; Dubor, Monique

    1969-10-01

    As previous studies resulted in the development of a simple calculation formula, based on experimental results, for the neutron-protection thicknesses of loading machines, this study aimed at determining the axial protections of these machines, which present a specific problem: scattering of delayed neutrons in the machine's inner cavity may result in an important neutron leakage through the upper part, at the level of the winch enclosure. In its experimental part, the study comprises measurement of the neutron dose in a 2.60 m long, 54 cm diameter cylindrical cavity and in the thickness of the surrounding concrete protection. In the second part, the authors present a calculation method which uses the Zeus and Mercure codes to interpret the results [fr

  2. NSURE code

    International Nuclear Information System (INIS)

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, the governing equations and the formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault and their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a user's manual describing simulation procedures, input data preparation, output and example test cases.

  3. Prediction of Machine Tool Condition Using Support Vector Machine

    International Nuclear Information System (INIS)

    Wang Peigong; Meng Qingfeng; Zhao Jian; Li Junjie; Wang Xiufeng

    2011-01-01

    Condition monitoring and prediction for CNC machine tools are investigated in this paper. Considering that CNC machine tools often yield only small numbers of samples, a condition prediction method for CNC machine tools based on support vector machines (SVMs) is proposed, and one-step and multi-step condition prediction models are constructed. The support vector machine prediction models are used to predict trends in the working condition of a certain type of CNC worm wheel and gear grinding machine by applying sequence data of the vibration signal collected during machining. The relationship between different eigenvalues of the CNC vibration signal and machining quality is also discussed. The test results show that the trend of the vibration signal peak-to-peak value in the surface normal direction is most relevant to the trend of the surface roughness value. In predicting trends in working condition, the support vector machine has higher prediction accuracy in both short-term (one-step) and long-term (multi-step) prediction compared to an autoregressive (AR) model and an RBF neural network. Experimental results show that it is feasible to apply support vector machines to CNC machine tool condition prediction.
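
The one-step versus multi-step distinction drawn above is easy to demonstrate with any trend model: one-step prediction uses measured data for the last input, while multi-step prediction feeds its own outputs back in recursively. The sketch below uses a least-squares AR(1) fit on invented vibration data, not the paper's SVM models.

```python
# One-step vs. recursive multi-step prediction with a simple AR(1) model.
def fit_ar1(series):
    """Least-squares fit of x[t+1] = a*x[t] + b."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def one_step(series, a, b):
    """Predict the next point from the last measured value."""
    return a * series[-1] + b

def multi_step(series, a, b, horizon):
    """Predict several steps ahead by reusing the model's own outputs."""
    out, last = [], series[-1]
    for _ in range(horizon):
        last = a * last + b
        out.append(last)
    return out

vib = [1.0, 1.1, 1.21, 1.33, 1.46]   # invented rising peak-to-peak trend
a, b = fit_ar1(vib)
print(one_step(vib, a, b))
print(multi_step(vib, a, b, 3))
```

Multi-step prediction compounds model error at every step, which is why long-term accuracy is the harder benchmark in the comparison above.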

  4. Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.

  5. Neural Decoder for Topological Codes

    Science.gov (United States)

    Torlai, Giacomo; Melko, Roger G.

    2017-07-01

    We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.
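
    The decoding-from-syndromes idea can be illustrated on a much smaller example than the toric code: for a 3-bit repetition code, one can tabulate the most probable error pattern for each syndrome, which is essentially the mapping that the paper's Boltzmann machine learns statistically for stabilizer codes. The code below and its names are illustrative assumptions, not taken from the paper:

    ```python
    from itertools import product

    # Parity checks of the 3-bit repetition code: s1 = e1^e2, s2 = e2^e3.
    def syndrome(err):
        return (err[0] ^ err[1], err[1] ^ err[2])

    def build_decoder(p=0.1):
        # For each syndrome, keep the most probable error pattern under iid flips.
        best = {}
        for err in product((0, 1), repeat=3):
            w = sum(err)
            prob = (p ** w) * ((1 - p) ** (3 - w))
            s = syndrome(err)
            if s not in best or prob > best[s][0]:
                best[s] = (prob, err)
        return {s: e for s, (_, e) in best.items()}

    decoder = build_decoder()
    print(decoder[(1, 0)])  # → (1, 0, 0): the single flip consistent with this syndrome
    ```

    For large stabilizer codes this table is exponentially big, which is why a learned, generalizing model such as a Boltzmann machine is attractive.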

  6. Metalworking and machining fluids

    Science.gov (United States)

    Erdemir, Ali; Sykora, Frank; Dorbeck, Mark

    2010-10-12

    Improved boron-based metal working and machining fluids. Boric acid and boron-based additives that, when mixed with certain carrier fluids, such as water, cellulose and/or cellulose derivatives, polyhydric alcohol, polyalkylene glycol, polyvinyl alcohol, starch, dextrin, in solid and/or solvated forms result in improved metalworking and machining of metallic work pieces. Fluids manufactured with boric acid or boron-based additives effectively reduce friction, prevent galling and severe wear problems on cutting and forming tools.

  7. HUMAN DECISIONS AND MACHINE PREDICTIONS.

    Science.gov (United States)

    Kleinberg, Jon; Lakkaraju, Himabindu; Leskovec, Jure; Ludwig, Jens; Mullainathan, Sendhil

    2018-02-01

    Can machine learning improve human decision making? Bail decisions provide a good test case. Millions of times each year, judges make jail-or-release decisions that hinge on a prediction of what a defendant would do if released. The concreteness of the prediction task combined with the volume of data available makes this a promising machine-learning application. Yet comparing the algorithm to judges proves complicated. First, the available data are generated by prior judge decisions. We only observe crime outcomes for released defendants, not for those judges detained. This makes it hard to evaluate counterfactual decision rules based on algorithmic predictions. Second, judges may have a broader set of preferences than the variable the algorithm predicts; for instance, judges may care specifically about violent crimes or about racial inequities. We deal with these problems using different econometric strategies, such as quasi-random assignment of cases to judges. Even accounting for these concerns, our results suggest potentially large welfare gains: one policy simulation shows crime reductions up to 24.7% with no change in jailing rates, or jailing rate reductions up to 41.9% with no increase in crime rates. Moreover, all categories of crime, including violent crimes, show reductions; and these gains can be achieved while simultaneously reducing racial disparities. These results suggest that while machine learning can be valuable, realizing this value requires integrating these tools into an economic framework: being clear about the link between predictions and decisions; specifying the scope of payoff functions; and constructing unbiased decision counterfactuals. JEL Codes: C10 (Econometric and statistical methods and methodology), C55 (Large datasets: Modeling and analysis), K40 (Legal procedure, the legal system, and illegal behavior).

  8. Machine Protection

    International Nuclear Information System (INIS)

    Zerlauth, Markus; Schmidt, Rüdiger; Wenninger, Jörg

    2012-01-01

    The present architecture of the machine protection system is being recalled and the performance of the associated systems during the 2011 run will be briefly summarized. An analysis of the causes of beam dumps as well as an assessment of the dependability of the machine protection systems (MPS) itself is being presented. Emphasis will be given to events that risked exposing parts of the machine to damage. Further improvements and mitigations of potential holes in the protection systems will be evaluated along with their impact on the 2012 run. The role of rMPP during the various operational phases (commissioning, intensity ramp up, MDs...) will be discussed along with a proposal for the intensity ramp up for the start of beam operation in 2012

  9. Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    Chikkagoudar, Satish; Chatterjee, Samrat; Thomas, Dennis G.; Carroll, Thomas E.; Muller, George

    2017-04-21

    The absence of a robust and unified theory of cyber dynamics presents challenges and opportunities for using machine-learning-based, data-driven approaches to further the understanding of the behavior of such complex systems. Analysts can also use machine learning approaches to gain operational insights. In order to be operationally beneficial, cybersecurity machine-learning-based models need to have the ability to: (1) represent a real-world system, (2) infer system properties, and (3) learn and adapt based on expert knowledge and observations. Probabilistic models and probabilistic graphical models provide these necessary properties and are further explored in this chapter. Bayesian Networks and Hidden Markov Models are introduced as examples of a widely used data-driven classification/modeling strategy.
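
    As a minimal sketch of the Hidden Markov Model machinery mentioned above, the forward algorithm below computes the likelihood of an observation sequence given the model; the two-state toy parameters are invented for illustration and are not taken from the chapter:

    ```python
    import numpy as np

    def forward(pi, A, B, obs):
        # alpha[i] = P(o_1..o_t, state_t = i); summing the final alpha gives P(obs).
        alpha = pi * B[:, obs[0]]
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
        return float(alpha.sum())

    # Two hidden states, two observation symbols (toy parameters).
    pi = np.array([0.6, 0.4])                   # initial state distribution
    A = np.array([[0.7, 0.3], [0.4, 0.6]])      # state transition matrix
    B = np.array([[0.9, 0.1], [0.2, 0.8]])      # emission probabilities
    print(forward(pi, A, B, [0, 1, 0]))
    ```

    Summing this likelihood over all possible observation sequences of a fixed length returns 1, which is a convenient sanity check on the implementation.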

  10. Machine Protection

    CERN Document Server

    Zerlauth, Markus; Wenninger, Jörg

    2012-01-01

    The present architecture of the machine protection system is being recalled and the performance of the associated systems during the 2011 run will be briefly summarized. An analysis of the causes of beam dumps as well as an assessment of the dependability of the machine protection systems (MPS) itself is being presented. Emphasis will be given to events that risked exposing parts of the machine to damage. Further improvements and mitigations of potential holes in the protection systems will be evaluated along with their impact on the 2012 run. The role of rMPP during the various operational phases (commissioning, intensity ramp up, MDs...) will be discussed along with a proposal for the intensity ramp up for the start of beam operation in 2012.

  11. Machine Protection

    Energy Technology Data Exchange (ETDEWEB)

    Zerlauth, Markus; Schmidt, Rüdiger; Wenninger, Jörg [European Organization for Nuclear Research, Geneva (Switzerland)

    2012-07-01

    The present architecture of the machine protection system is being recalled and the performance of the associated systems during the 2011 run will be briefly summarized. An analysis of the causes of beam dumps as well as an assessment of the dependability of the machine protection systems (MPS) itself is being presented. Emphasis will be given to events that risked exposing parts of the machine to damage. Further improvements and mitigations of potential holes in the protection systems will be evaluated along with their impact on the 2012 run. The role of rMPP during the various operational phases (commissioning, intensity ramp up, MDs...) will be discussed along with a proposal for the intensity ramp up for the start of beam operation in 2012.

  12. Implementation of the International Code of Practice on Dosimetry in Diagnostic Radiology (TRS 457): Review of Test Results

    International Nuclear Information System (INIS)

    2011-01-01

    In 2007, the IAEA published Dosimetry in Diagnostic Radiology: An International Code of Practice (IAEA Technical Reports Series No. 457). This publication recommends procedures for calibration and dosimetric measurement for the attainment of standardized dosimetry. It also addresses requirements both in standards dosimetry laboratories, especially Secondary Standards Dosimetry Laboratories (SSDLs), and in clinical centres for radiology, as found in most hospitals. The implementation of TRS No. 457 decreases the uncertainty in the dosimetry of diagnostic radiology beams and provides Member States with a unified and consistent framework for dosimetry in diagnostic radiology, which previously did not exist. A coordinated research project (CRP E2.10.06) was established in order to provide practical guidance to professionals at SSDLs and to clinical medical physicists on the implementation of TRS No. 457. This includes the calibration of radiological dosimetry instrumentation, the dissemination of calibration coefficients to clinical centres and the establishment of dosimetric measurement processes in clinical settings. The main goals of the CRP were to: Test the procedures recommended in TRS No. 457 for calibration of radiation detectors in different types of diagnostic beams and measuring instruments for varying diagnostic X ray modalities; Test the clinical dosimetry procedures, including the use of phantoms and patient dose surveys; Report on the practical implementation of TRS No. 457 at both SSDLs and hospital sites. Testing of TRS No. 457 was performed by a group of medical physicists from hospitals and SSDLs from various institutions worldwide

  13. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy release rate, energy release rate in thermo-elasto-plasticity, 3D local energy release rate, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  14. Teletherapy machine

    International Nuclear Information System (INIS)

    Panyam, Vinatha S.; Rakshit, Sougata; Kulkarni, M.S.; Pradeepkumar, K.S.

    2017-01-01

    Radiation Standards Section (RSS), RSSD, BARC is the national metrology institute for ionizing radiation. RSS develops and maintains radiation standards for X-ray, beta, gamma and neutron radiations. In radiation dosimetry, traceability, accuracy and consistency of radiation measurements is very important especially in radiotherapy where the success of patient treatment is dependent on the accuracy of the dose delivered to the tumour. Cobalt teletherapy machines have been used in the treatment of cancer since the early 1950s and India had its first cobalt teletherapy machine installed at the Cancer Institute, Chennai in 1956

  15. Coordinate measuring machines

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo

    This document is used in connection with three exercises of 2 hours duration as a part of the course GEOMETRICAL METROLOGY AND MACHINE TESTING. The exercises concern three aspects of coordinate measuring: 1) Measuring and verification of tolerances on coordinate measuring machines, 2) Traceability and uncertainty during coordinate measurements, 3) Digitalisation and Reverse Engineering. This document contains a short description of each step in the exercise and schemes with room for taking notes of the results.

  16. Overview of ACTYS project on development of indigenous state-of-the-art code suites for nuclear activation analysis

    International Nuclear Information System (INIS)

    Subhash, P.V.; Tadepalli, Sai Chaitanya; Deshpande, Shishir P.; Kanth, Priti; Srinivasan, R.

    2017-01-01

    Rigorous activation calculations are warranted for the safe and efficient design of future fusion machines. Suitable activation codes, which yield accurate results with fast performance yet include all fusion-relevant reactions, are a prerequisite. To meet these needs, an indigenous project called ACTYS-Project was initiated, and as a result four state-of-the-art codes have been developed so far. The goal of this project is to develop indigenous state-of-the-art code suites for nuclear activation analysis

  17. Preliminary results of the seventh three-dimensional AER dynamic benchmark problem calculation. Solution with DYN3D and RELAP5-3D codes

    International Nuclear Information System (INIS)

    Bencik, M.; Hadek, J.

    2011-01-01

    The paper gives a brief survey of the results of the seventh three-dimensional AER dynamic benchmark calculation obtained with the codes DYN3D and RELAP5-3D at the Nuclear Research Institute Rez. This benchmark was defined at the twentieth AER Symposium in Hanasaari (Finland). It is focused on the investigation of transient behaviour in a WWER-440 nuclear power plant. Its initiating event is the opening of the main isolation valve and re-connection of the loop with its main circulation pump in operation. The WWER-440 plant is at the end of the first fuel cycle and in hot full power conditions. Stationary and burnup calculations were performed with the code DYN3D. The transient calculation was made with the system code RELAP5-3D. The two-group homogenized cross-section library HELGD05, created by the HELIOS code, was used for the generation of reactor core neutronic parameters. The detailed six-loop model of NPP Dukovany was adopted for the purposes of the seventh AER dynamic benchmark. The RELAP5-3D full core neutronic model was coupled with 49 core thermal-hydraulic channels and 8 reflector channels connected with the three-dimensional model of the reactor vessel. A detailed nodalization of the reactor downcomer, lower plenum and upper plenum was used. Mixing in the lower and upper plenum was simulated. The first part of the paper contains a brief characteristic of the RELAP5-3D system code and a short description of the NPP input deck and reactor core model. The second part shows the time dependencies of important global and local parameters. (Authors)

  18. Developing a methodology for the evaluation of results uncertainties in CFD codes; Desarrollo de una Metodologia para la Evaluacion de Incertidumbres en los Resultados de Codigos de CFD

    Energy Technology Data Exchange (ETDEWEB)

    Munoz-cobo, J. L.; Chiva, S.; Pena, C.; Vela, E.

    2014-07-01

    In this work a methodology is developed to evaluate the uncertainty in the results of CFD codes that is compatible with the ASME V&V 20 Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer, developed by the American Society of Mechanical Engineers (ASME). Likewise, the existing alternatives for obtaining the uncertainty in the results are studied to determine which is the best choice from the point of view of implementation effort and computing time. We have developed two methods for calculating the uncertainty of the results of a CFD code. The first method is based on Monte Carlo techniques for uncertainty propagation; in this method we think it is preferable to use order statistics to determine the number of code runs, because in this way the desired confidence level for the output quantities can always be guaranteed. The second method we have developed is based on non-intrusive polynomial chaos. (Author)
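
    The order-statistics argument mentioned for the Monte Carlo method is commonly formalized by Wilks' formula: for a one-sided tolerance bound taken as the sample maximum, the smallest number of code runs n must satisfy 1 − γⁿ ≥ β for coverage γ and confidence β. A minimal sketch (the function name is ours):

    ```python
    def wilks_runs(coverage=0.95, confidence=0.95):
        # Smallest n such that the sample maximum bounds the `coverage` quantile
        # of the output with probability `confidence`: 1 - coverage**n >= confidence.
        n = 1
        while 1.0 - coverage ** n < confidence:
            n += 1
        return n

    print(wilks_runs())  # → 59, the classic one-sided 95%/95% case
    ```

    This is why a fixed number of Monte Carlo runs (59 for 95%/95%) suffices regardless of how many uncertain input parameters the CFD model has.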

  19. Defending Malicious Script Attacks Using Machine Learning Classifiers

    Directory of Open Access Journals (Sweden)

    Nayeem Khan

    2017-01-01

    Web applications have become a primary target for cyber criminals, who inject malware, especially JavaScript, to perform malicious activities such as impersonation. It is therefore imperative to detect such malicious code in real time, before any malicious activity is performed. This study proposes an efficient method of detecting previously unknown malicious JavaScript using an interceptor on the client side that classifies the key features of the malicious code. The feature subset was obtained using a wrapper method for dimensionality reduction. Supervised machine learning classifiers were then applied to the dataset to achieve high accuracy. Experimental results show that the method can efficiently distinguish malicious code from benign code with promising results.
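
    As a hedged sketch of the wrapper-style feature selection the abstract mentions, the code below performs greedy forward selection, scoring each candidate feature subset with a toy nearest-centroid classifier; the synthetic data, the classifier choice and all names are illustrative assumptions, not the study's actual script features or model:

    ```python
    import numpy as np

    def accuracy(X, y, feats):
        # Nearest-centroid classifier on the chosen feature subset (toy stand-in).
        Xs = X[:, feats]
        c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
        pred = (np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)).astype(int)
        return float((pred == y).mean())

    def wrapper_select(X, y, k):
        # Greedy forward selection: repeatedly add the feature that helps accuracy most.
        chosen = []
        while len(chosen) < k:
            best = max((f for f in range(X.shape[1]) if f not in chosen),
                       key=lambda f: accuracy(X, y, chosen + [f]))
            chosen.append(best)
        return chosen

    # Synthetic data: 5 features, only feature 2 carries the class label signal.
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 200)
    X = rng.normal(size=(200, 5))
    X[:, 2] += 3.0 * y
    print(wrapper_select(X, y, 2))
    ```

    A wrapper method like this evaluates subsets with the downstream classifier itself, in contrast to filter methods that rank features by a classifier-independent score.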

  20. Machine testing

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo

    This document is used in connection with a laboratory exercise of 3 hours duration as a part of the course GEOMETRICAL METROLOGY AND MACHINE TESTING. The exercise includes a series of tests carried out by the student on a conventional and a numerically controlled lathe, respectively. This document...

  1. Machine rates for selected forest harvesting machines

    Science.gov (United States)

    R.W. Brinker; J. Kinard; Robert Rummer; B. Lanford

    2002-01-01

    Very little new literature has been published on the subject of machine rates and machine cost analysis since 1989 when the Alabama Agricultural Experiment Station Circular 296, Machine Rates for Selected Forest Harvesting Machines, was originally published. Many machines discussed in the original publication have undergone substantial changes in various aspects, not...

  2. Application of MELCOR Code to a French PWR 900 MWe Severe Accident Sequence and Evaluation of Models Performance Focusing on In-Vessel Thermal Hydraulic Results

    International Nuclear Information System (INIS)

    De Rosa, Felice

    2006-01-01

    In the ambit of the Severe Accident Network of Excellence Project (SARNET), funded under the European Union 6th FISA (Fission Safety) Programme, one of the main tasks is the development and validation of the European Accident Source Term Evaluation Code (ASTEC). One of the reference codes used to compare ASTEC results from experimental and reactor plant applications is MELCOR. ENEA is a SARNET member and also an ASTEC and MELCOR user. During the first 18 months of this project, we performed a series of MELCOR and ASTEC calculations for a French PWR 900 MWe and the accident sequence 'Loss of Steam Generator (SG) Feedwater' (known as the H2 sequence in the French classification). H2 is an accident sequence substantially equivalent to a Station Blackout scenario, like a TMLB accident, with the only difference that in the H2 sequence the scram is forced to occur with a delay of 28 seconds. The main events during the accident sequence are a loss of normal and auxiliary SG feedwater (0 s), followed by a scram when the water level in the SG is equal to or less than 0.7 m (after 28 seconds). There is also a main coolant pump trip when ΔTsat < 10 deg. C, a total opening of the three relief valves when Tric (maximum core outlet temperature) is above 603 K (330 deg. C), and accumulator isolation when the primary pressure goes below 1.5 MPa (15 bar). Among many other points, it is worth noting that this was the first time that a MELCOR 1.8.5 input deck was available for a French PWR 900. The main ENEA effort in this period was devoted to preparing the MELCOR input deck using code version v.1.8.5 (build QZ Oct 2000 with the latest patch 185003 Oct 2001). The input deck, completely new, was prepared taking into account the structure, data and same conditions as those found in the ASTEC input decks. The main goal of the work presented in this paper is to show where and when MELCOR provides good enough results and why, in some cases mainly referring to its

  3. MCNP code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States which is available in the public domain. The present code is the direct descendant of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids

  4. XS data recalculation with HELIOS-1.8 and statistical investigation of C-PORCA and GEPETTO codes results based on in-core measurements

    International Nuclear Information System (INIS)

    Szabo, Sandor Patai; Parko, Tamas; Pos, Istvan

    2005-01-01

    As part of the power uprate process at NPP Paks, some reactor physics model development and testing were carried out. The model development mainly focused on more flexible handling of assemblies with different initial material compositions in the axial direction and on renewing the few-group XS data storage. In parallel with this modification, all of the few-group XS data were recalculated with the newest HELIOS version. To ensure correct and accurate off-line and on-line reactor physics analysis of reactor cores, a comprehensive investigation of the relevant codes has been carried out. During this process the accuracy of the applied models was determined and their appropriateness was also demonstrated. The paper shows the main features of the modifications and code developments and the basic results of the tests (Authors)

  5. Model description of CHERPAC (Chalk River Environmental Research Pathways Analysis Code); results of testing with post-Chernobyl data from Finland

    International Nuclear Information System (INIS)

    Peterson, S-R.

    1994-07-01

    CHERPAC (Chalk River Environmental Research Pathways Analysis Code), a time-dependent code for assessing doses from accidental and routine releases of radionuclides, has been under development since 1987. A complete model description is provided here with equations, parameter values, assumptions and information on parameter distributions for uncertainty analysis. Concurrently, CHERPAC has been used to participate in the two international model validation exercises BIOMOVS (BIOspheric MOdel Validation Study) and VAMP (VAlidation of Assessment Model Predictions, a co-ordinated research program of the International Atomic Energy Agency). CHERPAC has been tested for predictions of concentrations of 137Cs in foodstuffs, body burden and dose over time using data collected after the Chernobyl accident of 1986 April. CHERPAC's results for the recent VAMP scenario for southern Finland are particularly accurate and should represent what the code can do under Canadian conditions. CHERPAC's predictions are compared with the observations from Finland for four and one-half years after the accident as well as with the results of the other participating models from nine countries. (author). 18 refs., 23 figs., 2 appendices

  6. Model description of CHERPAC (Chalk River Environmental Research Pathways Analysis Code); results of testing with post-Chernobyl data from Finland

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, S-R

    1994-07-01

    CHERPAC (Chalk River Environmental Research Pathways Analysis Code), a time-dependent code for assessing doses from accidental and routine releases of radionuclides, has been under development since 1987. A complete model description is provided here with equations, parameter values, assumptions and information on parameter distributions for uncertainty analysis. Concurrently, CHERPAC has been used to participate in the two international model validation exercises BIOMOVS (BIOspheric MOdel Validation Study) and VAMP (VAlidation of Assessment Model Predictions, a co-ordinated research program of the International Atomic Energy Agency). CHERPAC has been tested for predictions of concentrations of 137Cs in foodstuffs, body burden and dose over time using data collected after the Chernobyl accident of 1986 April. CHERPAC's results for the recent VAMP scenario for southern Finland are particularly accurate and should represent what the code can do under Canadian conditions. CHERPAC's predictions are compared with the observations from Finland for four and one-half years after the accident as well as with the results of the other participating models from nine countries. (author). 18 refs., 23 figs., 2 appendices.

  7. TSOAK-M1: a computer code to determine tritium reaction/adsorption/release parameters from experimental results of air-detritiation tests

    International Nuclear Information System (INIS)

    Land, R.H.; Maroni, V.A.; Minkoff, M.

    1979-01-01

    A computer code has been developed which permits the determination of tritium reaction (T₂ to HTO)/adsorption/release and instrument correction parameters from enclosure (building) detritiation test data. The code is based on a simplified model which treats each parameter as a normalized, time-independent constant throughout the data-unfolding steps. Because of the complicated four-dimensional mathematical surface generated by the resulting differential equation system, occasional local-minima effects are observed, but these effects can be overcome in most instances by selecting a series of trial guesses for the initial parameter values and observing the reproducibility of the final parameter values for cases where the best overall fit to the experimental data is achieved. The code was then used to analyze existing small-cubicle test data with good success, and the resulting normalized parameters were employed to evaluate hypothetical reactor-building detritiation scenarios. It was concluded from the latter evaluation that the complications associated with moisture formation, adsorption, and release, particularly in terms of extended cleanup times, may not be as great as was previously thought. It is recommended that the validity of the TSOAK-M1 model be tested using data from detritiation tests conducted on large experimental enclosures (5 to 10 cm³) and, if possible, actual facility buildings

  8. Direct Simulation Monte Carlo (DSMC) on the Connection Machine

    International Nuclear Information System (INIS)

    Wong, B.C.; Long, L.N.

    1992-01-01

    The massively parallel computer Connection Machine is utilized to map an improved version of the direct simulation Monte Carlo (DSMC) method for solving flows with the Boltzmann equation. The kinetic theory is required for analyzing hypersonic aerospace applications, and the features and capabilities of the DSMC particle-simulation technique are discussed. The DSMC is shown to be inherently massively parallel and data parallel, and the algorithm is based on molecule movements, cross-referencing their locations, locating collisions within cells, and sampling macroscopic quantities in each cell. The serial DSMC code is compared to the present parallel DSMC code, and timing results show that the speedup of the parallel version is approximately linear. The correct physics can be resolved from the results of the complete DSMC method implemented on the connection machine using the data-parallel approach. 41 refs
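
    The reported near-linear speedup of the parallel DSMC code can be sanity-checked against Amdahl's law, S(N) = 1/((1 − p) + p/N), which predicts almost linear scaling as long as the serial fraction 1 − p is small. A minimal sketch with an assumed 99% parallel fraction (the fraction is our illustrative assumption, not a figure from the paper):

    ```python
    def amdahl_speedup(parallel_fraction, n_procs):
        # Amdahl's law: the serial fraction (1 - p) bounds the achievable speedup.
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_procs)

    for n in (1, 4, 16, 64):
        print(n, round(amdahl_speedup(0.99, n), 2))
    ```

    With p = 0.99 the speedup is close to linear up to a few dozen processors, then saturates toward the asymptotic bound 1/(1 − p) = 100.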

  9. Human population doses: Comparative analysis of CREAM code results with currently computer codes of Nuclear Regulatory Authority; Dosis en la poblacion: comparacion de los resultados del codigo CREAM con resultados de modelos vigentes en la ARN

    Energy Technology Data Exchange (ETDEWEB)

    Alonso Jimenez, Maria Teresa; Curti, Adriana [Autoridad Regulatoria Nuclear, Buenos Aires (Argentina)]. E-mail: mtalonso@sede.arn.gov.ar; acurti@sede.arn.gov.ar

    2001-07-01

    The Nuclear Regulatory Authority is performing an analysis with PC CREAM, developed at the NRPB, for updating the computer programs and models used for calculating the transfer of radionuclides through the environment. For verification of CREAM dose assessments for local scenarios, this paper presents a comparison of population doses assessed with the computer codes used nowadays and with CREAM, for unitary releases of the main radionuclides in nuclear power plant discharges. The results of atmospheric dispersion processes and the transfer of radionuclides through the environment for local scenarios are analysed. The programs used are PLUME for atmospheric dispersion, FARMLAND for the transfer of radionuclides into foodstuffs following atmospheric deposition in the terrestrial environment, and ASSESSOR for individual and collective dose assessments. This paper presents the general assumptions made for the dose assessments. The results show some differences between doses due to differences in models, in the complexity level of the same models, or in parameters. (author)

  10. Induction technology optimization code

    International Nuclear Information System (INIS)

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-01-01

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs
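
    The final optimization step described above, evaluating total system cost over a grid of the two design choices and selecting the minimum, might be sketched as follows; the cost model and every coefficient are invented placeholders for illustration, not the actual code's cost factors:

    ```python
    # Hypothetical cost model over the two design choices named in the abstract:
    # injector voltage rise time and ferrite core aspect ratio. All numbers invented.
    def system_cost(rise_time_ns, aspect_ratio):
        pulsed_power = 120.0 / rise_time_ns      # faster rise -> costlier pulsed power
        core_material = 5.0 * aspect_ratio ** 2  # taller cores -> more ferrite
        structure = 30.0 / aspect_ratio          # squat cores -> more cells and housings
        return pulsed_power + core_material + structure

    # Scan the design grid and pick the cheapest combination.
    grid = [(rt, ar) for rt in (20, 40, 60, 80) for ar in (1.0, 1.5, 2.0, 2.5)]
    best = min(grid, key=lambda d: system_cost(*d))
    print(best, round(system_cost(*best), 2))
    ```

    Plotting `system_cost` over the grid, as the abstract describes, would let a designer trade cost against other criteria rather than taking the bare minimum.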

  11. Failure Identification of Hacksaw Machine REMOR 400

    International Nuclear Information System (INIS)

    Paidjo; Abdul Hafid; Sagino

    2007-01-01

    The REMOR 400 hacksaw machine is an aging machine. Cutting pressure, and the lifting of the load after each cutting stroke, are controlled by a hydraulic system. Besides wear of the hacksaw blade, cutting failures can also be caused by leakage in the machine's hydraulic system. Such leakage occurs through overloading or through aging. Based on inspection results, the hydraulic system of the REMOR 400 hacksaw machine failed in 2006, as seen from the brittleness of the machine's seals. To return the machine to service, the worn seals were replaced. (author)

  12. Convolutional coding techniques for data protection

    Science.gov (United States)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
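
    As a minimal illustration of convolutional coding fundamentals, the following sketch implements a standard textbook rate-1/2, constraint-length-3 encoder with octal generators (7, 5). It is a generic example, not one of the specific codes studied in this research.

```python
# Illustrative rate-1/2 convolutional encoder, constraint length 3,
# generator polynomials (7, 5) in octal -- a standard textbook example,
# not a code from the research summarized above.

def conv_encode(bits, gens=(0b111, 0b101)):
    """Encode a list of bits; the shift register is flushed with zeros."""
    state = 0  # shift register holding the last two input bits
    out = []
    for b in bits + [0, 0]:            # two tail bits flush the register
        reg = (b << 2) | state         # current bit plus register contents
        for g in gens:                 # one output bit per generator
            out.append(bin(reg & g).count("1") % 2)
        state = reg >> 1               # shift: drop the oldest bit
    return out

print(conv_encode([1, 0, 1, 1]))
```

    Each input bit produces two output bits, which is the rate-1/2 redundancy that a Viterbi or sequential decoder later exploits for error correction.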

  13. Investigation of reactivity changes due to flooding the irradiation sites of the MNSR reactor using the MCNP code and comparison with experimental results

    Directory of Open Access Journals (Sweden)

    A Shirani

    2010-06-01

    Full Text Available In this work, the Isfahan Miniature Neutron Source Reactor (MNSR) has been simulated using the MCNP code, and the reactivity worth of flooding the inner irradiation sites of this reactor in an accident has been calculated. In addition, by inserting polyethylene capsules containing water into the inner irradiation sites, the reactivity changes of the reactor in such an accident have been measured; the measurements are in good agreement with the calculated results. In this work, the reactivity worth of flooding one inner irradiation site is 0.53 mk, and that of flooding all 5 inner irradiation sites is 2.61 mk.

  14. Interim results of the sixth three-dimensional AER dynamic benchmark problem calculation. Solution of problem with DYN3D and RELAP5-3D codes

    International Nuclear Information System (INIS)

    Hadek, J.; Kral, P.; Macek, J.

    2001-01-01

    The paper gives a brief survey of the 6th three-dimensional AER dynamic benchmark calculation results obtained with the codes DYN3D and RELAP5-3D at NRI Rez. This benchmark was defined at the 10th AER Symposium. Its initiating event is a double-ended break in the steam line of steam generator No. 1 in a WWER-440/213 plant at the end of the first fuel cycle and at hot full power conditions. Stationary and burnup calculations, as well as tuning of the initial state before the transient, were performed with the code DYN3D. Transient calculations were made with the system code RELAP5-3D. The KASSETA library was used for the generation of the reactor core neutronic parameters. The detailed six-loop model of NPP Dukovany was adopted for the 6th AER dynamic benchmark purposes. The RELAP5-3D full-core neutronic model was connected with a seven-coolant-channel thermal-hydraulic model of the core (Authors)

  15. Genetic Recombination Between Stromal and Cancer Cells Results in Highly Malignant Cells Identified by Color-Coded Imaging in a Mouse Lymphoma Model.

    Science.gov (United States)

    Nakamura, Miki; Suetsugu, Atsushi; Hasegawa, Kousuke; Matsumoto, Takuro; Aoki, Hitomi; Kunisada, Takahiro; Shimizu, Masahito; Saji, Shigetoyo; Moriwaki, Hisataka; Hoffman, Robert M

    2017-12-01

    The tumor microenvironment (TME) promotes tumor growth and metastasis. We previously established the color-coded EL4 lymphoma TME model with red fluorescent protein (RFP)-expressing EL4 implanted in transgenic C57BL/6 green fluorescent protein (GFP) mice. Color-coded imaging of the lymphoma TME suggested an important role of stromal cells in lymphoma progression and metastasis. In the present study, we used color-coded imaging of RFP lymphoma cells and GFP stromal cells to identify yellow-fluorescent genetically recombinant cells appearing only during metastasis. The EL4-RFP lymphoma cells were injected subcutaneously in C57BL/6-GFP transgenic mice and formed subcutaneous tumors 14 days after cell transplantation. The subcutaneous tumors were harvested and transplanted to the abdominal cavity of nude mice. Metastases to the liver, perigastric lymph node, ascites, bone marrow, and primary tumor were imaged. In addition to EL4-RFP cells and GFP host cells, genetically recombinant yellow-fluorescent cells were observed, only in the ascites and bone marrow. These results indicate genetic exchange between the stromal and cancer cells. Possible mechanisms of genetic exchange are discussed, as well as its ramifications for metastasis. J. Cell. Biochem. 118: 4216-4221, 2017. © 2017 Wiley Periodicals, Inc.

  16. Modification and application of TOUGH2 as a variable-density, saturated-flow code and comparison to SWIFT II results

    International Nuclear Information System (INIS)

    Christian-Frear, T.L.; Webb, S.W.

    1995-01-01

    Human intrusion scenarios at the Waste Isolation Pilot Plant (WIPP) involve penetration of the repository and an underlying brine reservoir by a future borehole. Brine and gas from the brine reservoir and the repository may flow up the borehole and into the overlying Culebra formation, which is saturated with water containing different amounts of dissolved solids, resulting in a spatially varying density. Current modeling approaches involve perturbing a steady-state Culebra flow field by inflow of gas and/or brine from a breach borehole that has passed through the repository. Previous studies have simulated steady-state flow in the Culebra. One specific study, by LaVenue et al. (1990), used the SWIFT II code, a single-phase flow and transport code, to develop the steady-state flow field. Because gas may also be present in the fluids from the intrusion borehole, a two-phase code such as TOUGH2 can be used to determine the effect that emitted fluids may have on the steady-state Culebra flow field. Thus a comparison between TOUGH2 and SWIFT II was prompted. In order to compare the two codes and to evaluate the influence of gas on flow in the Culebra, modifications were made to TOUGH2. Modifications were performed by the authors to allow for element-specific values of permeability, porosity, and elevation. The analysis also used a new equation of state module for a water-brine-air mixture, EOS7 (Pruess, 1991), which was developed to simulate variable water densities by assuming a miscible mixture of water and brine phases and allows for element-specific brine concentration in the INCON file

  17. Electric machines

    CERN Document Server

    Gross, Charles A

    2006-01-01

    BASIC ELECTROMAGNETIC CONCEPTS: Basic Magnetic Concepts; Magnetically Linear Systems: Magnetic Circuits; Voltage, Current, and Magnetic Field Interactions; Magnetic Properties of Materials; Nonlinear Magnetic Circuit Analysis; Permanent Magnets; Superconducting Magnets; The Fundamental Translational EM Machine; The Fundamental Rotational EM Machine; Multiwinding EM Systems; Leakage Flux; The Concept of Ratings in EM Systems; Summary; Problems. TRANSFORMERS: The Ideal n-Winding Transformer; Transformer Ratings and Per-Unit Scaling; The Nonideal Three-Winding Transformer; The Nonideal Two-Winding Transformer; Transformer Efficiency and Voltage Regulation; Practical Considerations; The Autotransformer; Operation of Transformers in Three-Phase Environments; Sequence Circuit Models for Three-Phase Transformer Analysis; Harmonics in Transformers; Summary; Problems. BASIC MECHANICAL CONSIDERATIONS: Some General Perspectives; Efficiency; Load Torque-Speed Characteristics; Mass Polar Moment of Inertia; Gearing; Operating Modes; Translational Systems; A Comprehensive Example: The Elevator; P...

  18. Charging machine

    International Nuclear Information System (INIS)

    Medlin, J.B.

    1976-01-01

    A charging machine for loading fuel slugs into the process tubes of a nuclear reactor includes a tubular housing connected to the process tube, a charging trough connected to the other end of the tubular housing, a device for loading the charging trough with a group of fuel slugs, means for equalizing the coolant pressure in the charging trough with the pressure in the process tubes, means for pushing the group of fuel slugs into the process tube and a latch and a seal engaging the last object in the group of fuel slugs to prevent the fuel slugs from being ejected from the process tube when the pusher is removed and to prevent pressure liquid from entering the charging machine. 3 claims, 11 drawing figures

  19. Genesis machines

    CERN Document Server

    Amos, Martyn

    2014-01-01

    Silicon chips are out. Today's scientists are using real, wet, squishy, living biology to build the next generation of computers. Cells, gels and DNA strands are the 'wetware' of the twenty-first century. Much smaller and more intelligent, these organic computers open up revolutionary possibilities. Tracing the history of computing and revealing a brave new world to come, Genesis Machines describes how this new technology will change the way we think not just about computers - but about life itself.

  20. Dynamic models for radionuclide transport in agricultural ecosystems: summary of results from a UK code comparison exercise

    International Nuclear Information System (INIS)

    Meekings, G.F.; Walters, B.

    1986-01-01

    In recent years, models have been developed by three organisations in the UK to represent the time-dependent behaviour of radionuclides in agricultural ecosystems. These models were developed largely independently of each other and, in view of their potential applications in relation to radioactive waste management and discharge, the Food Science Division of the Ministry of Agriculture, Fisheries and Food initiated a calculational intercomparison exercise with the agreement and cooperation of all three organisations involved. A subset of the results obtained is reported here. In general a high degree of consistency between the results of the various models was obtained particularly regarding the responses with time. The exercise supported the case for using dynamic models in radiological assessment studies. It also demonstrated areas where differences in results from the models are a consequence of a lack of appropriate data on the environmental behaviour of the radionuclides considered. (author)

  1. Numerical experimentation on focusing time and neutron yield in GN1 plasma focus machine

    International Nuclear Information System (INIS)

    Singh, Arwinder; Lee, Sing; Saw, S.H.

    2014-01-01

    In this paper, we show how Lee's six-phase model code was fitted to analyse the current waveform of the GN1 plasma focus machine operated in deuterium gas. The code was then configured to work between 0.5 and 6 Torr, and the computed focusing times and neutron yields were compared with published experimental results. The final results indicate that Lee's code gives realistic plasma dynamics and focus properties, together with a realistic neutron yield, for the GN1 plasma focus without the need for any adjustable parameters, requiring only that the computed current trace be fitted to a measured current trace. (author)

  2. APC-II: an electron beam propagation code

    International Nuclear Information System (INIS)

    Iwan, D.C.; Freeman, J.R.

    1984-05-01

    The computer code APC-II simulates the propagation of a relativistic electron beam through air. APC-II is an updated version of the APC envelope model code. It incorporates an improved conductivity model which significantly extends the range of stable calculations. A number of test cases show that these new models are capable of reproducing the simulations of the original APC code. As the result of a major restructuring and reprogramming of the code, APC-II is now friendly both to the occasional user and to the experienced user who wishes to make modifications. Most of the code is in standard ANSI Fortran 77, so that it can be easily transported between machines

  3. Visual Input Enhancement via Essay Coding Results in Deaf Learners' Long-Term Retention of Improved English Grammatical Knowledge

    Science.gov (United States)

    Berent, Gerald P.; Kelly, Ronald R.; Schmitz, Kathryn L.; Kenney, Patricia

    2009-01-01

    This study explored the efficacy of visual input enhancement, specifically "essay enhancement", for facilitating deaf college students' improvement in English grammatical knowledge. Results documented students' significant improvement immediately after a 10-week instructional intervention, a replication of recent research. Additionally, the…

  4. Advanced Machine learning Algorithm Application for Rotating Machine Health Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Kanemoto, Shigeru; Watanabe, Masaya [The University of Aizu, Aizuwakamatsu (Japan); Yusa, Noritaka [Tohoku University, Sendai (Japan)

    2014-08-15

    The present paper evaluates the applicability of conventional sound analysis techniques and modern machine learning algorithms, including support vector machines and deep learning neural networks, to rotating machine health monitoring. Inner-ring defect and misalignment anomaly sound data measured with a rotating machine mockup test facility are used to verify the various algorithms. Although no remarkable difference in anomaly discrimination performance was found, some methods yield very interesting eigen-patterns corresponding to normal and abnormal states. These results will be useful for future, more sensitive and robust anomaly monitoring technology.

  5. Advanced Machine learning Algorithm Application for Rotating Machine Health Monitoring

    International Nuclear Information System (INIS)

    Kanemoto, Shigeru; Watanabe, Masaya; Yusa, Noritaka

    2014-01-01

    The present paper evaluates the applicability of conventional sound analysis techniques and modern machine learning algorithms, including support vector machines and deep learning neural networks, to rotating machine health monitoring. Inner-ring defect and misalignment anomaly sound data measured with a rotating machine mockup test facility are used to verify the various algorithms. Although no remarkable difference in anomaly discrimination performance was found, some methods yield very interesting eigen-patterns corresponding to normal and abnormal states. These results will be useful for future, more sensitive and robust anomaly monitoring technology
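
    As a rough sketch of the kind of classifier mentioned above, the toy example below trains a linear support vector machine with the Pegasos sub-gradient method on invented two-dimensional "vibration feature" data. It is a stand-in illustration under those assumptions, not the paper's actual algorithms, features, or data.

```python
# Minimal linear SVM trained with the Pegasos stochastic sub-gradient method.
# The two features (e.g. RMS level, kurtosis) and all data are invented.
import random

def train_svm(data, labels, lam=0.01, epochs=200):
    """Pegasos: stochastic sub-gradient descent on the hinge loss."""
    rng = random.Random(0)                 # fixed seed for reproducibility
    w, b, t = [0.0, 0.0], 0.0, 0
    for _ in range(epochs):
        for i in rng.sample(range(len(data)), len(data)):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            x, y = data[i], labels[i]
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            w = [wj * (1.0 - eta * lam) for wj in w]   # regularization decay
            if margin < 1.0:               # point violates the margin
                w = [wj + eta * y * xj for wj, xj in zip(w, x)]
                b += eta * y
    return w, b

# Invented features: normal machines cluster low, faulty ones cluster high.
X = [(0.2, 0.1), (0.3, 0.2), (0.1, 0.3), (0.25, 0.15),
     (1.2, 1.0), (1.0, 1.3), (1.4, 1.1), (1.1, 0.9)]
y = [-1, -1, -1, -1, 1, 1, 1, 1]
w, b = train_svm(X, y)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

print([predict(x) for x in X])
```

    A real health-monitoring pipeline would first extract such features from the measured sound or vibration signals before any classifier is applied.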

  6. Modification of the code BEAMCORR, and some simulation results of the magnet and achromat misalignments for the SLC South Arc

    International Nuclear Information System (INIS)

    Shoaee, H.; Kheifets, S.

    1984-01-01

    An important decision has been made regarding the correction scheme for the arcs, leading to the adoption of the so-called scheme I. In this scheme the beam position data are collected from single-plane x and y Beam Position Monitors (BPMs), which are placed in the drift spaces adjacent to the downstream D- and F-magnets correspondingly. Similarly, single-plane x and y correctors are used for moving the upstream end of the corresponding magnets. In the present simulation this scheme is used exclusively. The first-order calculations performed by means of TRANSPORT appear to be unsatisfactory from the point of view of the beam spot size at the interaction point (IP). In this note we describe the modifications to our program BEAMCORR, which employs second-order calculations by means of the program TURTLE. We also present the results of the following simulations: (a) study of the effects of two different levels of magnet misalignment on the beam spot size at the IP, and comparison of the results with those obtained by means of the program DINGBAT; (b) study of disjoints between achromats (both the displacement of the adjacent ends and angular discontinuity between achromats)

  7. Results of small break LOCA analysis for Kuosheng nuclear power plant using the RELAP5YA computer code

    International Nuclear Information System (INIS)

    Wang, L.C.; Jeng, S.C.; Chung, N.M.

    2004-01-01

    One lesson learned from the Three Mile Island (TMI) accident was that the analysis methods used by Nuclear Steam Supply System (NSSS) vendors and/or nuclear fuel suppliers for small-break Loss Of Coolant Accident (LOCA) analysis for compliance with Appendix K to 10CFR50 should be revised, documented and submitted for USNRC approval, and that plant-specific calculations using NRC-approved models for small-break LOCA showing compliance with 10CFR50.46 should be submitted for NRC approval. A study by Taiwan Power Company (TPC) under the guidance of Yankee Atomic Electric Company (YAEC) has been undertaken to perform this analysis for the Kuosheng nuclear power plant. This paper presents the results of the analysis, which are useful in satisfying the corresponding requirements of the Republic Of China Atomic Energy Commission (ROCAEC). (author)

  8. Comparing the Floating Point Systems, Inc. AP-190L to representative scientific computers: some benchmark results

    International Nuclear Information System (INIS)

    Brengle, T.A.; Maron, N.

    1980-01-01

    Results are presented of comparative timing tests made by running a typical FORTRAN physics simulation code on the following machines: DEC PDP-10 with KI processor; DEC PDP-10, KI processor, and FPS AP-190L; CDC 7600; and CRAY-1. Factors such as DMA overhead, code size for the AP-190L, and the relative utilization of floating point functional units for the different machines are discussed. 1 table

  9. Verification of the both hydrogeological and hydrogeochemical code results by an on-site test in granitic rocks

    Directory of Open Access Journals (Sweden)

    Michal Polák

    2007-01-01

    Full Text Available The project entitled “Methods and tools for the evaluation of the effect of engineered barriers on distant interactions in the environment of a deep repository facility” deals with the ability to validate the behavior of applied engineered barriers with respect to hydrodynamic and migration parameters in the water-bearing granite environment of a radioactive waste deep repository facility. Part of the project comprises a detailed mapping of the fracture network by means of geophysical and drilling surveys on the test site (an active granite quarry), the construction of model objects (about 100 samples in the shape of cylinders, ridges and blocks), and the mineralogical, petrological and geochemical description of the granite. All the model objects were subjected to migration and hydrodynamic tests using fluorescein and NaCl as tracers. The tests were performed on samples with simple fractures, with injected fractures, and with undisturbed integrity (verified by ultrasonic testing). The hydrodynamic and migration parameters obtained for the model objects were processed with the modeling software NAPSAC and FEFLOW. During the following two years, these results and parameters will be verified on the test site by means of a long-term field test, including tuning of the software functionality.

  10. Representational Machines

    DEFF Research Database (Denmark)

    Photography not only represents space. Space is produced photographically. Since its inception in the 19th century, photography has brought to light a vast array of represented subjects. Always situated in some spatial order, photographic representations have been operatively underpinned by social... to the enterprises of the medium. This is the subject of Representational Machines: How photography enlists the workings of institutional technologies in search of establishing new iconic and social spaces. Together, the contributions to this edited volume span historical epochs, social environments, technological... possibilities, and genre distinctions. Presenting several distinct ways of producing space photographically, this book opens a new and important field of inquiry for photography research....

  11. Shear machines

    International Nuclear Information System (INIS)

    Astill, M.; Sunderland, A.; Waine, M.G.

    1980-01-01

    A shear machine for irradiated nuclear fuel elements has a replaceable shear assembly comprising a fuel element support block, a shear blade support and a clamp assembly which hold the fuel element to be sheared in contact with the support block. A first clamp member contacts the fuel element remote from the shear blade and a second clamp member contacts the fuel element adjacent the shear blade and is advanced towards the support block during shearing to compensate for any compression of the fuel element caused by the shear blade (U.K.)

  12. Asymmetric quantum cloning machines

    International Nuclear Information System (INIS)

    Cerf, N.J.

    1998-01-01

    A family of asymmetric cloning machines for quantum bits and N-dimensional quantum states is introduced. These machines produce two approximate copies of a single quantum state that emerge from two distinct channels. In particular, an asymmetric Pauli cloning machine is defined that makes two imperfect copies of a quantum bit, while the overall input-to-output operation for each copy is a Pauli channel. A no-cloning inequality is derived, characterizing the impossibility of copying imposed by quantum mechanics. If p and p' are the probabilities of the depolarizing channels associated with the two outputs, the domain in (√p, √p')-space located inside a particular ellipse representing close-to-perfect cloning is forbidden. This ellipse tends to a circle when copying an N-dimensional state with N→∞, which has a simple semi-classical interpretation. The symmetric Pauli cloning machines are then used to provide an upper bound on the quantum capacity of the Pauli channel of probabilities p_x, p_y and p_z. The capacity is proven to be vanishing if (√p_x, √p_y, √p_z) lies outside an ellipsoid whose pole coincides with the depolarizing channel that underlies the universal cloning machine. Finally, the tradeoff between the quality of the two copies is shown to result from a complementarity akin to the Heisenberg uncertainty principle. (author)

  13. Health sciences librarians' awareness and assessment of the Medical Library Association Code of Ethics for Health Sciences Librarianship: the results of a membership survey.

    Science.gov (United States)

    Byrd, Gary D; Devine, Patricia J; Corcoran, Kate E

    2014-10-01

    The Medical Library Association (MLA) Board of Directors and president charged an Ethical Awareness Task Force and recommended a survey to determine MLA members' awareness of and opinions about the current Code of Ethics for Health Sciences Librarianship. The task force and MLA staff crafted a survey to determine: (1) awareness of the MLA code and its provisions, (2) use of the MLA code to resolve professional ethical issues, (3) consultation of other ethical codes or guides, (4) views regarding the relative importance of the eleven MLA code statements, (5) challenges experienced in following any MLA code provisions, and (6) ethical problems not clearly addressed by the code. Over 500 members responded (similar to previous MLA surveys), and while most were aware of the code, over 30% could not remember when they had last read or thought about it, and nearly half had also referred to other codes or guidelines. The large majority thought that: (1) all code statements were equally important, (2) none were particularly difficult or challenging to follow, and (3) the code covered every ethical challenge encountered in their professional work. Comments provided by respondents who disagreed with the majority views suggest that the MLA code could usefully include a supplementary guide with practical advice on how to reason through a number of ethically challenging situations that are typically encountered by health sciences librarians.

  14. FY 1991 Research and development project for large-scale industrial technologies. Report on results of R and D of superhigh technological machining systems; 1991 nendo chosentan kako system no kenkyu kaihatsu seika hokokusho. Chosentan kako system no kenkyu kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1992-03-01

    Described herein are the FY 1991 results of the R and D project aimed at establishing super-precision machining technologies and excited-beam-assisted nano-technologies. The research on super-precision machining technologies involves the design and trial development of a fully static-pressure type positioning device, for which automatically controlled drawing is adopted to improve its rigidity. The research on surface modification technologies using ion beams involves scanning the ion beams onto a metallic plate placed around the glass substrate; the results indicate that the secondary electrons generated can be used to control charge-up. In addition, part of a 30 cm square glass substrate was modified by implantation of spot-type ions of high current density, and the modified portion was used to produce a thin-film silicon transistor. The research on super-high-technology machining standard measurement involves improving the precision of a dye-laser-based system, which attains a precision of 0 to 30 nm over a 0.1 m measurement range. (NEDO)

  15. Electricity of machine tool

    International Nuclear Information System (INIS)

    Gijeon media editorial department

    1977-10-01

    This book is divided into three parts. The first part deals with electric machines, ranging from generators to motors: the motor as a power source of the machine tool, and electric equipment for machine tools such as main-circuit switches, automatic switches, knife switches and push buttons, snap switches, protection devices, timers, solenoids, and rectifiers. The second part covers wiring diagrams, including the basic electric circuits of machine tools and the wiring diagrams of machines such as milling machines, planers and grinding machines. The third part introduces fault diagnosis of machines, giving practical solutions for each diagnosis and describing diagnostic methods based on voltage and resistance measurements with a tester.

  16. Environmentally Friendly Machining

    CERN Document Server

    Dixit, U S; Davim, J Paulo

    2012-01-01

    Environment-Friendly Machining provides an in-depth overview of environmentally-friendly machining processes, covering numerous different types of machining in order to identify which practice is the most environmentally sustainable. The book discusses three systems at length: machining with minimal cutting fluid, air-cooled machining and dry machining. Also covered is a way to conserve energy during machining processes, along with useful data and detailed descriptions for developing and utilizing the most efficient modern machining tools. Researchers and engineers looking for sustainable machining solutions will find Environment-Friendly Machining to be a useful volume.

  17. Opening up codings?

    DEFF Research Database (Denmark)

    Steensig, Jakob; Heinemann, Trine

    2015-01-01

    doing formal coding and when doing more “traditional” conversation analysis research based on collections. We are more wary, however, of the implication that coding-based research is the end result of a process that starts with qualitative investigations and ends with categories that can be coded...

  18. Human-Machine Communication

    International Nuclear Information System (INIS)

    Farbrot, J.E.; Nihlwing, Ch.; Svengren, H.

    2005-01-01

    New requirements for enhanced safety and design changes in process systems often lead to a step-wise installation of new information and control equipment in the control rooms of older nuclear power plants, where modern digital I and C solutions with screen-based human-machine interfaces (HMI) are nowadays most often introduced. Human factors (HF) expertise is then required to assist in specifying a unified, integrated HMI, where the entire integration of information is addressed to ensure an optimal and effective interplay between human (operators) and machine (process). Following a controlled design process is the best insurance for ending up with good solutions. This paper addresses the approach taken when introducing modern human-machine communication in the Oskarshamn 1 NPP, the results, and the lessons learned from this work, with high operator involvement, seen from an HF point of view. Examples of possibilities modern technology might offer the operators are also addressed. (orig.)

  19. Study of neoclassical transport and bootstrap current for W7-X in the 1/ν regime, using results from the PIES code

    International Nuclear Information System (INIS)

    Nemov, V V; Kalyuzhnyj, V N; Kasilov, S V; Drevlak, M; Nuehrenberg, J; Kernbichler, W; Reiman, A; Monticello, D

    2004-01-01

    For the magnetic field of the Wendelstein 7-X (W7-X) standard high-mirror configuration, computed by the PIES code taking into account real coil geometry, neoclassical transport and the bootstrap current are analysed in the 1/ν regime using methods based on integration along magnetic field lines in a given magnetic field. The zero-beta and ⟨β⟩ = 1% cases are studied. The results are compared with the corresponding results for the vacuum magnetic field directly produced by the modular coils. The computations show a significant advantage of W7-X over a conventional stellarator, resulting from reduced neoclassical transport and reduced bootstrap current, although the neoclassical transport is somewhat larger than that previously obtained for the ideal W7-X model configuration

  20. Machine learning systems

    Energy Technology Data Exchange (ETDEWEB)

    Forsyth, R

    1984-05-01

    With the dramatic rise of expert systems has come a renewed interest in the fuel that drives them: knowledge. For it is specialist knowledge which gives expert systems their power. But extracting knowledge from human experts in symbolic form has proved arduous and labour-intensive. So the idea of machine learning is enjoying a renaissance. Machine learning is any automatic improvement in the performance of a computer system over time, as a result of experience. Thus a learning algorithm seeks to do one or more of the following: cover a wider range of problems, deliver more accurate solutions, obtain answers more cheaply, and simplify codified knowledge. 6 references.

  1. Man - Machine Communication

    CERN Document Server

    Petersen, Peter; Nielsen, Henning

    1984-01-01

    This report describes a Man-to-Machine Communication module which, together with a STAC, can take care of all operator inputs from the touch-screen, tracker balls and mechanical buttons. The MMC module can also contain a G64 card, which could be a GPIB driver, but many other G64 cards could be used. The software services the input devices and makes the results accessible from the CAMAC bus. NODAL functions for the Man-Machine Communication are implemented in the STAC and in the ICC.

  2. Machine Protection

    CERN Document Server

    Schmidt, R

    2014-01-01

    The protection of accelerator equipment is as old as accelerator technology and was for many years related to high-power equipment. Examples are the protection of powering equipment from overheating (magnets, power converters, high-current cables), of superconducting magnets from damage after a quench and of klystrons. The protection of equipment from beam accidents is more recent. It is related to the increasing beam power of high-power proton accelerators such as ISIS, SNS, ESS and the PSI cyclotron, to the emission of synchrotron light by electron–positron accelerators and FELs, and to the increase of energy stored in the beam (in particular for hadron colliders such as LHC). Designing a machine protection system requires an excellent understanding of accelerator physics and operation to anticipate possible failures that could lead to damage. Machine protection includes beam and equipment monitoring, a system to safely stop beam operation (e.g. dumping the beam or stopping the beam at low energy) and an ...

  3. Indonesian Stock Prediction using Support Vector Machine (SVM)

    Directory of Open Access Journals (Sweden)

    Santoso Murtiyanto

    2018-01-01

    Full Text Available This project is part of developing software to provide predictive, artificial-intelligence-based services (machine intelligence or machine learning) to be utilized in the money-market community. The prediction method used in this early stage combines a Gaussian Mixture Model and a Support Vector Machine, programmed in Python. The system predicts the price of Astra International (stock code: ASII.JK) stock data. The data were taken over the 17-year period from January 2000 until September 2017. Part of the data was used for training/modelling (80 %) and the remainder (20 %) for testing. The integrated Gaussian Mixture Model and Support Vector Machine system was tested to predict the ASII.JK stock market 1 d in advance. This model was compared with the market cumulative return. The results show that the Gaussian Mixture Model-Support Vector Machine based stock prediction model offers a significant improvement over the compared models, resulting in a Sharpe ratio of 3.22.
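As a rough illustration of the kind of pipeline described in record 3 (a sketch under stated assumptions, not the authors' code), the example below combines a Gaussian Mixture Model with a Support Vector Machine on synthetic daily returns; the window length, feature layout and all parameter choices are our assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.02, 1000)        # synthetic daily returns, not ASII.JK data

# Market-regime features: posterior probabilities from a 2-component GMM
gmm = GaussianMixture(n_components=2, random_state=0).fit(returns.reshape(-1, 1))
post = gmm.predict_proba(returns.reshape(-1, 1))

# Features: the last 5 returns plus the regime posteriors; label: next day's direction
window = 5
X = np.column_stack([
    np.lib.stride_tricks.sliding_window_view(returns[:-1], window),
    post[window - 1:-1],
])
y = (returns[window:] > 0).astype(int)

split = int(0.8 * len(X))                       # 80/20 split, as in the record
clf = SVC(kernel="rbf").fit(X[:split], y[:split])
acc = clf.score(X[split:], y[split:])
print(f"test accuracy: {acc:.2f}")
```

On real price data the features would be built from returns of the traded instrument, and performance would be compared against the market cumulative return, as in the record.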

  4. A generalized interface module for the coupling of spatial kinetics and thermal-hydraulics codes

    Energy Technology Data Exchange (ETDEWEB)

    Barber, D.A.; Miller, R.M.; Joo, H.G.; Downar, T.J. [Purdue Univ., West Lafayette, IN (United States). Dept. of Nuclear Engineering; Wang, W. [SCIENTECH, Inc., Rockville, MD (United States); Mousseau, V.A.; Ebert, D.D. [Nuclear Regulatory Commission, Washington, DC (United States). Office of Nuclear Regulatory Research

    1999-03-01

    A generalized interface module has been developed for the coupling of any thermal-hydraulics code to any spatial kinetics code. The coupling scheme was designed and implemented with emphasis placed on maximizing flexibility while minimizing modifications to the respective codes. In this design, the thermal-hydraulics, general interface, and spatial kinetics codes function independently and utilize the Parallel Virtual Machine software to manage cross-process communication. Using this interface, the USNRC version of the 3D neutron kinetics code, PARCS, has been coupled to the USNRC system analysis codes RELAP5 and TRAC-M. RELAP5/PARCS assessment results are presented for two NEACRP rod ejection benchmark problems and an NEA/OECD main steam line break benchmark problem. The assessment of TRAC-M/PARCS has only recently been initiated; nonetheless, the capabilities of the coupled code are presented for a typical PWR system/core model.

  5. A generalized interface module for the coupling of spatial kinetics and thermal-hydraulics codes

    International Nuclear Information System (INIS)

    Barber, D.A.; Miller, R.M.; Joo, H.G.; Downar, T.J.; Mousseau, V.A.; Ebert, D.D.

    1999-01-01

    A generalized interface module has been developed for the coupling of any thermal-hydraulics code to any spatial kinetics code. The coupling scheme was designed and implemented with emphasis placed on maximizing flexibility while minimizing modifications to the respective codes. In this design, the thermal-hydraulics, general interface, and spatial kinetics codes function independently and utilize the Parallel Virtual Machine software to manage cross-process communication. Using this interface, the USNRC version of the 3D neutron kinetics code, PARCS, has been coupled to the USNRC system analysis codes RELAP5 and TRAC-M. RELAP5/PARCS assessment results are presented for two NEACRP rod ejection benchmark problems and an NEA/OECD main steam line break benchmark problem. The assessment of TRAC-M/PARCS has only recently been initiated; nonetheless, the capabilities of the coupled code are presented for a typical PWR system/core model

  6. FY 1992 research and development project for large-scale industrial technologies. Report on results of R and D of superhigh technological machining systems; 1992 nendo chosentan kako system no kenkyu kaihatsu seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-03-01

    Described herein are the FY 1992 results of the R and D project aimed at establishing the technologies for developing, e.g., machine and electronic device members of superhigh precision and high functionality by machining and superhigh-precision machining aided by excited beams. The elementary researches on superhigh-precision machining achieve the given targets for precision stability of the feed positioning device. The researches on development of high-precision rotating devices, on a trial basis, are directed to improvement of the rotational precision of pneumatic static pressure bearings and magnetism correction/controlling circuits, increasing the speed and precision of 3-point type rotational precision measurement methods, and development of rotation-driving motors, achieving a rotational precision of 0.015 µm at 2000 rpm. The researches on surface modification technologies aided by ion beams involve experiments for production of crystalline Si films and thin-film transistors of the Si films, using the surface-modified portion of a large-size glass substrate. The researches on superhigh-technological machining standard measurement involve development of length-measuring systems aided by a dye laser, achieving a precision of ±10 nm or less in a 100 mm measurement range. (NEDO)

  7. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies this hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
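Record 7 builds on the notion of unique decipherability (UD). For a finite code, UD can be decided with the classical Sardinas-Patterson test; the sketch below is a standard formulation of that test (our illustration, not code from the paper).

```python
def dangling(A, B):
    """Suffixes s such that a = b + s for some a in A, b in B (a != b)."""
    return {a[len(b):] for a in A for b in B if a != b and a.startswith(b)}

def is_ud(code):
    """Sardinas-Patterson test for unique decipherability of a finite code."""
    C = set(code)
    S = dangling(C, C)              # S1: the code's dangling suffixes
    seen = set()
    while S:
        if S & C:                   # a codeword is a dangling suffix: ambiguous
            return False
        key = frozenset(S)
        if key in seen:             # iteration cycles without hitting C: UD
            return True
        seen.add(key)
        S = dangling(C, S) | dangling(S, C)
    return True                     # no dangling suffixes remain: UD

print(is_ud(["0", "01", "11"]))     # True: not prefix-free, yet uniquely decipherable
print(is_ud(["0", "01", "10"]))     # False: "010" = 0|10 = 01|0
```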

  8. Maintenance and improvement of thermal hydraulic system codes using results of OECD experiments (PKL, ROSA, ATLAS) and application to Spanish nuclear power plants: CAMP-Spain project

    International Nuclear Information System (INIS)

    Sanchez, M.; Perez, J.; Martorell, S.; Carlos, S.; Villanueva, J. F.; Sanchez, F.; Queral, C.; Rebollo, M. J.; Rivas-Lewicky, J.; Verdu, G.; Gallardo, S.; Miro, R.; Querol, A.; Munoz-Cobo, J. L.; Escriva, A.; Berna, C.; Reventos, F.; Freixa, J.; Martinez, V.

    2016-01-01

    CSN involvement in different international NEA experimental TH programmes has outlined the scope for a new period of CAMP-Espana activities, currently focused on the: -Analysis, simulation and investigation of specific safety aspects of PKL3/OECD and ATLAS/OECD experiments. -Analysis of the applicability and/or extension of the results of these projects to the safety, operation or availability of the Spanish nuclear power plants. Both objectives are pursued by simulating experiments and plant applications with the latest available versions of the NRC TH codes (RELAP5 or TRACE). A CAMP in-kind contribution (NUREG/IA) is intended as the final result of both types of analyses. Five different national research groups (from the Technical Universities of Madrid, Valencia and Cataluña) are carrying out these activities. (Author)

  9. Analysis of machining and machine tools

    CERN Document Server

    Liang, Steven Y

    2016-01-01

    This book delivers the fundamental science and mechanics of machining and machine tools by presenting systematic and quantitative knowledge in the form of process mechanics and physics. It gives readers a solid command of machining science and engineering, and familiarizes them with the geometry and functionality requirements of creating parts and components in today’s markets. The authors address traditional machining topics, such as: single and multiple point cutting processes; grinding; component accuracy and metrology; shear stress in cutting; cutting temperature and analysis; chatter. They also address non-traditional machining, such as: electrical discharge machining; electrochemical machining; laser and electron beam machining. A chapter on biomedical machining is also included. This book is appropriate for advanced undergraduate and graduate mechanical engineering students, manufacturing engineers, and researchers. Each chapter contains examples, exercises and their solutions, and homework problems that re...

  10. Machine Protection

    International Nuclear Information System (INIS)

    Schmidt, R

    2014-01-01

    The protection of accelerator equipment is as old as accelerator technology and was for many years related to high-power equipment. Examples are the protection of powering equipment from overheating (magnets, power converters, high-current cables), of superconducting magnets from damage after a quench and of klystrons. The protection of equipment from beam accidents is more recent. It is related to the increasing beam power of high-power proton accelerators such as ISIS, SNS, ESS and the PSI cyclotron, to the emission of synchrotron light by electron–positron accelerators and FELs, and to the increase of energy stored in the beam (in particular for hadron colliders such as LHC). Designing a machine protection system requires an excellent understanding of accelerator physics and operation to anticipate possible failures that could lead to damage. Machine protection includes beam and equipment monitoring, a system to safely stop beam operation (e.g. dumping the beam or stopping the beam at low energy) and an interlock system providing the glue between these systems. The most recent accelerator, the LHC, will operate with about 3 × 10^14 protons per beam, corresponding to an energy stored in each beam of 360 MJ. This energy can cause massive damage to accelerator equipment in case of uncontrolled beam loss, and a single accident damaging vital parts of the accelerator could interrupt operation for years. This article provides an overview of the requirements for protection of accelerator equipment and introduces the various protection systems. Examples are mainly from LHC, SNS and ESS

  11. Permutation parity machines for neural synchronization

    International Nuclear Information System (INIS)

    Reyes, O M; Kopitzke, I; Zimmermann, K-H

    2009-01-01

    Synchronization of neural networks has been studied in recent years as an alternative for cryptographic applications such as the realization of symmetric key exchange protocols. This paper presents a first view of the so-called permutation parity machine, an artificial neural network proposed as a binary variant of the tree parity machine. The dynamics of the synchronization process by mutual learning between permutation parity machines is analytically studied and the results are compared with those of tree parity machines. It turns out that for neural synchronization, permutation parity machines form a viable alternative to tree parity machines
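Record 11 compares permutation parity machines against tree parity machines. A minimal sketch of the baseline tree parity machine and its mutual-learning synchronization (our simplification with assumed toy parameters K=3, N=4, L=3; the paper's permutation variant works differently) could look like:

```python
import numpy as np

class TreeParityMachine:
    """Minimal tree parity machine: K hidden units, N inputs each,
    integer weights bounded by L (a sketch, not the paper's model)."""
    def __init__(self, K=3, N=4, L=3, seed=None):
        self.K, self.N, self.L = K, N, L
        self.W = np.random.default_rng(seed).integers(-L, L + 1, size=(K, N))

    def output(self, X):
        # hidden-unit signs; the machine's output is their product
        self.sigma = np.sign((self.W * X).sum(axis=1))
        self.sigma[self.sigma == 0] = -1
        return int(self.sigma.prod())

    def update(self, X, tau_other):
        tau = int(self.sigma.prod())
        if tau != tau_other:          # Hebbian rule: learn only on agreement
            return
        for k in range(self.K):
            if self.sigma[k] == tau:  # update only units that agreed with the output
                self.W[k] += tau * X[k]
        np.clip(self.W, -self.L, self.L, out=self.W)

rng = np.random.default_rng(0)
A, B = TreeParityMachine(seed=1), TreeParityMachine(seed=2)
steps = 0
while not np.array_equal(A.W, B.W) and steps < 20000:
    X = rng.choice([-1, 1], size=(A.K, A.N))   # common public input
    ta, tb = A.output(X), B.output(X)
    A.update(X, tb)
    B.update(X, ta)
    steps += 1
print("weights equal after", steps, "rounds:", np.array_equal(A.W, B.W))
```

In the key-exchange setting the synchronized weight matrix serves as the shared secret; only the inputs and the two output bits are exchanged publicly.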

  12. Object-Oriented Support for Adaptive Methods on Parallel Machines

    Directory of Open Access Journals (Sweden)

    Sandeep Bhatt

    1993-01-01

    Full Text Available This article reports on experiments from our ongoing project whose goal is to develop a C++ library which supports adaptive and irregular data structures on distributed-memory supercomputers. We demonstrate the use of our abstractions in implementing "tree codes" for large-scale N-body simulations. These algorithms require dynamically evolving treelike data structures, as well as load-balancing, both of which are widely believed to make the application difficult and cumbersome to program for distributed-memory machines. The ease of writing the application code on top of our C++ library abstractions (which themselves are application-independent), and the low overhead of the resulting C++ code (over hand-crafted C code), support our belief that object-oriented approaches are eminently suited to programming distributed-memory machines in a manner that (to the applications programmer) is architecture-independent. Our contribution in parallel programming methodology is to identify and encapsulate general classes of communication and load-balancing strategies useful across applications and MIMD architectures. This article reports experimental results from simulations of half a million particles using multiple methods.
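The "tree codes" of record 12 rest on dynamically evolving tree structures. A minimal single-process sketch of such a structure, a quadtree that aggregates masses and centroids in the Barnes-Hut style, is shown below; the names and simplifications are ours, and the parallel and load-balancing machinery of the article's library is omitted entirely.

```python
class QuadNode:
    """Quadtree cell holding aggregate mass/centroid for tree-code force evaluation."""
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half   # square cell: centre + half-width
        self.mass, self.mx, self.my = 0.0, 0.0, 0.0  # total mass, mass-weighted sums
        self.body = None                             # a leaf holds at most one body
        self.children = None

    def insert(self, x, y, m):
        # aggregate mass and centroid sums on the way down
        self.mass += m
        self.mx += m * x
        self.my += m * y
        if self.children is None:
            if self.body is None:          # empty leaf: store the body here
                self.body = (x, y, m)
                return
            # occupied leaf: split into four quadrants and push the old body down
            h = self.half / 2
            self.children = [QuadNode(self.cx + dx * h, self.cy + dy * h, h)
                             for dx in (-1, 1) for dy in (-1, 1)]
            bx, by, bm = self.body
            self.body = None
            self._child(bx, by).insert(bx, by, bm)
        self._child(x, y).insert(x, y, m)

    def _child(self, x, y):
        # quadrant index matching the construction order above
        return self.children[(2 if x >= self.cx else 0) + (1 if y >= self.cy else 0)]

root = QuadNode(0.5, 0.5, 0.5)
for x, y in [(0.1, 0.2), (0.8, 0.9), (0.7, 0.1)]:
    root.insert(x, y, 1.0)
print("total mass", root.mass, "centroid", (root.mx / root.mass, root.my / root.mass))
```

A force evaluation would then walk this tree, using a cell's aggregate mass and centroid whenever the cell is far enough from the target body.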

  13. Development of Fractal Pattern Making Application using L-System for Enhanced Machine Controller

    Directory of Open Access Journals (Sweden)

    Gunawan Alexander A S

    2014-03-01

    Full Text Available One big issue facing industry today is automated machines' lack of flexibility for customization, because they are designed by manufacturers based on certain standards. In this research, customized application software for CNC (Computer Numerically Controlled) machines is developed using an open-source platform. The application enables us to create designs by means of fractal patterns using an L-System, developed with a turtle-geometry interpretation and the Python programming language. The result of the application is the G-code of the fractal pattern formed by the L-System method. In experiments on the CNC machine, the G-code of a fractal pattern involving a branching structure ran well.
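The L-System-to-G-code idea of record 13 can be sketched as follows; this is a minimal illustration, not the authors' application, and the Koch-curve rules, step size and G-code dialect are our assumptions.

```python
import math

def expand(axiom, rules, n):
    """Iteratively rewrite the axiom with the L-system production rules."""
    for _ in range(n):
        axiom = "".join(rules.get(c, c) for c in axiom)
    return axiom

def to_gcode(commands, step=1.0, angle=60.0):
    """Turtle interpretation: F = move forward, +/- = turn; emit linear G1 moves."""
    x, y, heading = 0.0, 0.0, 0.0
    lines = ["G21 ; mm units", "G90 ; absolute positioning", "G0 X0.000 Y0.000"]
    for c in commands:
        if c == "F":
            x += step * math.cos(math.radians(heading))
            y += step * math.sin(math.radians(heading))
            lines.append(f"G1 X{x:.3f} Y{y:.3f}")
        elif c == "+":
            heading += angle
        elif c == "-":
            heading -= angle
    return lines

# Koch curve: F -> F+F--F+F, with 60-degree turns
path = expand("F", {"F": "F+F--F+F"}, 3)
gcode = to_gcode(path)
print(len(gcode), "G-code lines")
```

Branching patterns, as in the record's experiment, would additionally interpret "[" and "]" as pushing and popping the turtle state on a stack.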

  14. Metallizing of machinable glass ceramic

    International Nuclear Information System (INIS)

    Seigal, P.K.

    1976-02-01

    A satisfactory technique has been developed for metallizing Corning (Code 9658) machinable glass ceramic for brazing. Analyses of several bonding materials suitable for metallizing were made using microprobe analysis, optical metallography, and tensile strength tests. The effect of different cleaning techniques on the microstructure and the effect of various firing temperatures on the bonding interface were also investigated. A nickel paste, used for thick-film application, has been applied to obtain braze joints with strength in excess of 2000 psi

  15. Machine terms dictionary

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1979-04-15

    This book gives descriptions of machine terms, covering machine design, drawing, machining methods, machine tools, machine materials, automobiles, measuring and controlling, electricity, electronics basics, information technology, quality assurance, AutoCAD and FA terms, and important formulas of mechanical engineering.

  16. Improving Machining Accuracy of CNC Machines with Innovative Design Methods

    Science.gov (United States)

    Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.

    2018-03-01

    The article considers achieving the machining accuracy of CNC machines by applying innovative methods to the modelling and design of machining systems, drives and machining processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory and the efficiency of decomposition methods; it also has the visual clarity inherent in both topological models and structural matrices, as well as the resiliency of linear algebra as part of the matrix-based research. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the stages of design and exploitation. Having researched the impact of the system dynamics on component quality, the authors have developed a range of practical recommendations which have made it possible to reduce considerably the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0-6000 min⁻¹ and improve machining accuracy.

  17. Machinability of nickel based alloys using electrical discharge machining process

    Science.gov (United States)

    Khan, M. Adam; Gokul, A. K.; Bharani Dharan, M. P.; Jeevakarthikeyan, R. V. S.; Uthayakumar, M.; Thirumalai Kumaran, S.; Duraiselvam, M.

    2018-04-01

    High-temperature materials such as nickel-based alloys and austenitic steels are frequently used for manufacturing critical aero-engine turbine components. Literature on conventional and unconventional machining of steel materials has been abundant over the past three decades. However, machining studies on superalloys are still a challenging task due to their inherent properties, which make these materials difficult to cut by conventional processes. This research therefore focuses on an unconventional machining process for nickel alloys. Inconel 718 and Monel 400 are the two candidate materials used for the electrical discharge machining (EDM) process. The investigation consists of preparing a blind hole using a copper electrode of 6 mm diameter. Electrical parameters are varied to produce the plasma spark for the diffusion process, and the machining time is held constant to compare the experimental results for both materials. The influence of the process parameters on the tool wear mechanism and material removal is considered in the proposed experimental design. While machining, the tool is prone to discharging more material due to the production of a high-energy plasma spark and the eddy-current effect. The surface morphology of the machined surface was observed with a high-resolution FE-SEM; fused electrode material was found as spherical clumps over the machined surface. Surface roughness was also measured from the surface profile using a profilometer. It is confirmed that there is no deviation and that the precise roundness of the drilled hole is maintained.

  18. Coding for Electronic Mail

    Science.gov (United States)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    A scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth the current level. The coding scheme paves the way for true electronic mail, in which handwritten, typed, or printed messages or diagrams are sent virtually instantaneously, between buildings or between continents. The scheme, called the Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. The image quality of the resulting delivered messages is improved over messages transmitted by conventional coding. The coding scheme is compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme is automatically translated to word-processor form.

  19. High-resolution MR imaging of the elbow using a microscopy surface coil and a clinical 1.5 T MR machine: preliminary results

    International Nuclear Information System (INIS)

    Yoshioka, Hiroshi; Ueno, Teruko; Takahashi, Nobuyuki; Saida, Yukihisa; Tanaka, Toshikazu; Kujiraoka, Yuka; Shindo, Masashi; Nishiura, Yasumasa; Ochiai, Naoyuki

    2004-01-01

    To obtain high-resolution MR images of the elbow using a microscopy surface coil with a 1.5 T clinical machine and to evaluate the feasibility of its use for elbow injuries. Five asymptomatic normal volunteers and 13 patients with elbow pain were prospectively studied with MR imaging using a microscopy surface coil 47 mm in diameter. High-resolution MR images using a microscopy coil were obtained with fast spin echo (FSE) proton density-weighted sequence, gradient recalled echo (GRE) T2*-weighted sequence, and short tau inversion recovery (STIR) sequence, with a 1-2 mm slice thickness, a 50-70 mm field of view, an imaging matrix of 140-224 x 512 using zero fill interpolation, and 2-6 excitations. High-resolution MR images of normal volunteers using a microscopy coil clearly showed each structure of the medial and lateral collateral ligaments on GRE T2*-weighted images and FSE proton-density weighted images. Partial medial collateral ligament injury, a small avulsion of the medial epicondyle, and osteochondritis dissecans were well demonstrated on high-resolution MR images. High-resolution MR imaging of the elbow using a microscopy surface coil with a 1.5 T clinical machine is a promising method for accurately characterizing the normal anatomy of the elbow and depicting its lesions in detail. (orig.)

  20. FCG: a code generator for lazy functional languages

    NARCIS (Netherlands)

    Kastens, U.; Langendoen, K.G.; Hartel, Pieter H.; Pfahler, P.

    1992-01-01

    The FCG code generator produces portable code that supports efficient two-space copying garbage collection. The code generator transforms the output of the FAST compiler front end into an abstract machine code. This code explicitly uses a call stack, which is accessible to the garbage collector. In

  1. Machine intelligence and signal processing

    CERN Document Server

    Vatsa, Mayank; Majumdar, Angshul; Kumar, Ajay

    2016-01-01

    This book comprises chapters on key problems in the machine learning and signal processing arenas. The contents of the book are a result of the 2014 Workshop on Machine Intelligence and Signal Processing held at the Indraprastha Institute of Information Technology. Traditionally, signal processing and machine learning were considered to be separate areas of research. However, in recent times the two communities have been getting closer. In a very abstract fashion, signal processing is the study of operator design. The contribution of signal processing has been to devise operators for restoration, compression, etc. Applied mathematicians were more interested in operator analysis. Nowadays signal processing research is gravitating towards operator learning: instead of designing operators based on heuristics (for example, wavelets), the trend is to learn these operators (for example, dictionary learning). Thus, the gap between signal processing and machine learning is fast closing. The 2014 Workshop on Machine Intel...

  2. A Concrete Framework for Environment Machines

    DEFF Research Database (Denmark)

    Biernacka, Malgorzata; Danvy, Olivier

    2007-01-01

    calculus with explicit substitutions), we extend it minimally so that it can also express one-step reduction strategies, and we methodically derive a series of environment machines from the specification of two one-step reduction strategies for the lambda-calculus: normal order and applicative order. ... The derivation extends Danvy and Nielsen’s refocusing-based construction of abstract machines with two new steps: one for coalescing two successive transitions into one, and the other for unfolding a closure into a term and an environment in the resulting abstract machine. The resulting environment machines ... include both the Krivine machine and the original version of Krivine’s machine, Felleisen et al.’s CEK machine, and Leroy’s Zinc abstract machine.

  3. Possibilities for Automatic Control of Hydro-Mechanical Transmission and Birotating Electric Machine

    Directory of Open Access Journals (Sweden)

    V. V. Mikhailov

    2014-01-01

    Full Text Available The paper presents mathematical models and results of virtual investigations pertaining to selected motion parameters of a mobile machine equipped with hydro-mechanical and modernized transmissions. The machine has been tested in similar technological cycles and has been equipped with a universal automatic control system. Changes in the structure and type of power transmission have been obtained with the help of a control algorithm including an extra reversible electric machine which is switched in at certain operational modes. Implementation of the proposed concept makes it possible to obtain and check the improved C-code of the control system, enhance the operational parameters of the transmission and the machine efficiency, and reduce slippage and tire wear, while recovering braking energy, usually treated as expendable, for later beneficial use.

  4. Addiction Machines

    Directory of Open Access Journals (Sweden)

    James Godley

    2011-10-01

    Full Text Available Entry into the crypt William Burroughs shared with his mother opened and shut around a failed re-enactment of William Tell’s shot through the prop placed upon a loved one’s head. The accidental killing of his wife Joan completed the installation of the addictation machine that spun melancholia as manic dissemination. An early encryptment to which was added the audio portion of abuse deposited an undeliverable message in WB. William could never tell, although his corpus bears the inscription of this impossibility as another form of possibility. James Godley is currently a doctoral candidate in English at SUNY Buffalo, where he studies psychoanalysis, Continental philosophy, and nineteenth-century literature and poetry (British and American). His work on the concept of mourning and “the dead” in Freudian and Lacanian approaches to psychoanalytic thought and in Gothic literature has also spawned an essay on zombie porn. Since entering the Academy of Fine Arts Karlsruhe in 2007, Valentin Hennig has studied in the classes of Silvia Bächli, Claudio Moser, and Corinne Wasmuht. In 2010 he spent a semester at the Dresden Academy of Fine Arts. His work has been shown in group exhibitions in Freiburg and Karlsruhe.

  5. Machine musicianship

    Science.gov (United States)

    Rowe, Robert

    2002-05-01

    The training of musicians begins by teaching basic musical concepts, a collection of knowledge commonly known as musicianship. Computer programs designed to implement musical skills (e.g., to make sense of what they hear, perform music expressively, or compose convincing pieces) can similarly benefit from access to a fundamental level of musicianship. Recent research in music cognition, artificial intelligence, and music theory has produced a repertoire of techniques that can make the behavior of computer programs more musical. Many of these were presented in a recently published book/CD-ROM entitled Machine Musicianship. For use in interactive music systems, we are interested in those which are fast enough to run in real time and that need only make reference to the material as it appears in sequence. This talk will review several applications that are able to identify the tonal center of musical material during performance. Beyond this specific task, the design of real-time algorithmic listening through the concurrent operation of several connected analyzers is examined. The presentation includes discussion of a library of C++ objects that can be combined to perform interactive listening and a demonstration of their capability.

  6. User's manual for seismic analysis code 'SONATINA-2V'

    International Nuclear Information System (INIS)

    Hanawa, Satoshi; Iyoku, Tatsuo

    2001-08-01

    The seismic analysis code SONATINA-2V has been developed to analyze the behavior of the HTTR core graphite components under seismic excitation. The SONATINA-2V code is a two-dimensional computer program capable of analyzing the vertical arrangement of the HTTR graphite components, such as fuel blocks, replaceable reflector blocks and permanent reflector blocks, as well as their restraint structures. In the analytical model, each block is treated as a rigid body and is restrained by dowel pins which restrict relative horizontal movement but allow vertical and rocking motions between upper and lower blocks. Moreover, the SONATINA-2V code is capable of analyzing the core vibration behavior under simultaneous excitation in the vertical and horizontal directions. The SONATINA-2V code is composed of the main program, a pre-processor for preparing the input data for SONATINA-2V, and a post-processor for data processing and producing graphics from the analytical results. Though the SONATINA-2V code was developed to work on the MSP computer system of the Japan Atomic Energy Research Institute (JAERI), that computer system was retired as computer technology progressed. Therefore, the analysis code was improved to operate on the UNIX machine, the SR8000 computer system, of JAERI. The user's manual for the seismic analysis code SONATINA-2V, including the pre- and post-processors, is given in the present report. (author)

  7. Towards a new classification of stable phase schizophrenia into major and simple neuro-cognitive psychosis: Results of unsupervised machine learning analysis.

    Science.gov (United States)

    Kanchanatawan, Buranee; Sriswasdi, Sira; Thika, Supaksorn; Stoyanov, Drozdstoy; Sirivichayakul, Sunee; Carvalho, André F; Geffard, Michel; Maes, Michael

    2018-05-23

    Deficit schizophrenia, as defined by the Schedule for Deficit Syndrome, may represent a distinct diagnostic class defined by neurocognitive impairments coupled with changes in IgA/IgM responses to tryptophan catabolites (TRYCATs). Adequate classifications should be based on supervised and unsupervised learning rather than on consensus criteria. This study used machine learning as a means to provide a more accurate classification of patients with stable-phase schizophrenia. We found that, using negative symptoms as discriminatory variables, schizophrenia patients may be divided into two distinct classes modelled by (A) impairments in IgA/IgM responses to noxious and generally more protective tryptophan catabolites, (B) impairments in episodic and semantic memory, paired associative learning and false memory creation, and (C) psychotic, excitation, hostility, mannerism, negative, and affective symptoms. The first cluster shows increased negative, psychotic, excitation, hostility, mannerism, depression and anxiety symptoms, and more neuroimmune and cognitive disorders, and is therefore called "major neurocognitive psychosis" (MNP). The second cluster, called "simple neurocognitive psychosis" (SNP), is discriminated from normal controls by the same features, although the impairments are less well developed than in MNP. The latter is additionally externally validated by lowered quality of life, body mass (reflecting a leptosome body type), and education (reflecting lower cognitive reserve). Previous distinctions, including "type 1" (positive)/"type 2" (negative) and DSM-IV-TR (e.g., paranoid) schizophrenia, could not be validated using machine learning techniques. Previous names of the illness, including schizophrenia, are not very adequate because they do not describe the features of the illness, namely the interrelated neuroimmune, cognitive, and clinical features. Stable-phase schizophrenia consists of two relevant, qualitatively distinct categories or nosological entities with SNP

  8. Uncertainty analysis for results of thermal hydraulic codes of best-estimate-type; Analisis de incertidumbre para resultados de codigos termohidraulicos de mejor estimacion

    Energy Technology Data Exchange (ETDEWEB)

    Alva N, J.

    2010-07-01

    In this thesis, fundamental concepts of uncertainty analysis are presented, along with diverse methodologies applied in the analysis of nuclear power plant transient events, particularly those related to thermal hydraulics phenomena. These concepts and methodologies come from a wide bibliographical survey of the nuclear power field. Methodologies for uncertainty analysis have been developed by quite diverse institutions, and they have been widely used worldwide for application to results from best-estimate-type computer codes in nuclear reactor thermal hydraulics and safety analysis. The main uncertainty sources, types of uncertainties, and aspects related to best-estimate modeling and methods are also introduced. Once the main bases of uncertainty analysis have been set and some of the known methodologies introduced, the CSAU methodology, which is applied in the analyses, is presented in detail. The main objective of this thesis is to compare the results of an uncertainty and sensitivity analysis using the Response Surface Technique against the application of the Wilks formula, applied to a loss-of-coolant experiment and a power-rise event in a BWR. Both techniques are options for the uncertainty and sensitivity part of the CSAU methodology, which was developed for the analysis of transients and accidents at nuclear power plants and is the basis of most of the methodologies used in licensing of nuclear power plants practically everywhere. Finally, the results of applying both techniques are compared and discussed. (Author)
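The Wilks formula referenced in this record determines how many best-estimate code runs are needed so that the largest sampled output bounds a given population quantile with a given confidence. A minimal sketch of the standard first-order, one-sided form (function name is illustrative):

```python
import math

def wilks_sample_size(coverage: float, confidence: float) -> int:
    """Smallest number of code runs n such that the maximum of n sampled
    outputs bounds the `coverage` quantile with probability `confidence`
    (first-order, one-sided Wilks criterion): 1 - coverage**n >= confidence.
    """
    n = math.ceil(math.log(1.0 - confidence) / math.log(coverage))
    # Guard against floating-point edge cases at the boundary.
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

# The classic 95%/95% criterion used in BEPU safety analyses.
print(wilks_sample_size(0.95, 0.95))  # 59
```

This reproduces the well-known result that 59 runs suffice for a 95%/95% one-sided tolerance limit, which is what makes the Wilks approach so much cheaper than building a full response surface.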

  9. Production of neutronic discrete equations for a cylindrical geometry in one group energy and benchmark the results with MCNP-4B code with one group energy library

    International Nuclear Information System (INIS)

    Salehi, A. A.; Vosoughi, N.; Shahriari, M.

    2002-01-01

    In reactor core neutronic calculations, one usually chooses a control volume and accounts for the input, output, production and absorption inside it, and finally derives the neutron transport equation. This equation is not easy to solve except for simple and symmetrical geometries. The objective of this paper is to introduce a new direct method for neutronic calculations. This method is based on the physics of the problem: by meshing the desired geometry, writing the balance equation for each mesh interval and taking into account the coupling between neighbouring mesh intervals, the final series of discrete equations is produced directly, without deriving the neutron transport differential equation and without the mandatory passage through the differential-equation bridge. This method, named the Direct Discrete Method, was applied in the static state to a cylindrical geometry in one energy group. The validity of the results of this new method is tested against the MCNP-4B code with a one-group energy library. The one-group direct discrete equations produce excellent results, which are comparable with the results of MCNP-4B
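The record's cylindrical one-group derivation is not reproduced here; as an illustration of the mesh-balance idea it describes (write a balance per mesh interval, couple neighbouring intervals, solve the resulting discrete system), here is a minimal one-group slab-geometry sketch with zero-flux boundaries. All names and values are illustrative, not from the paper:

```python
def solve_balance(n, h, D, sigma_a, source):
    """Assemble and solve the per-mesh balance equations for one-group
    diffusion in a slab with zero-flux boundary conditions:
        -D*(phi[i-1] - 2*phi[i] + phi[i+1])/h**2 + sigma_a*phi[i] = source
    The tridiagonal system is solved with the Thomas algorithm."""
    a = [-D / h**2] * n               # sub-diagonal (coupling to left cell)
    b = [2 * D / h**2 + sigma_a] * n  # diagonal (leakage + absorption)
    c = [-D / h**2] * n               # super-diagonal (coupling to right cell)
    d = [float(source)] * n           # uniform source term
    # Forward elimination.
    for i in range(1, n):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    # Back substitution.
    phi = [0.0] * n
    phi[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        phi[i] = (d[i] - c[i] * phi[i + 1]) / b[i]
    return phi

phi = solve_balance(n=11, h=0.1, D=1.0, sigma_a=0.1, source=1.0)
```

The flux is positive everywhere, symmetric about the slab centre, and peaks in the middle cell, as expected for a uniform source with vacuum-like boundaries.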

  10. QR code for medical information uses.

    Science.gov (United States)

    Fontelo, Paul; Liu, Fang; Ducut, Erick G

    2008-11-06

    We developed QR code online tools, simulated and tested QR code applications for medical information uses including scanning QR code labels, URLs and authentication. Our results show possible applications for QR code in medicine.

  11. Trunnion Collar Removal Machine - Gap Analysis Table

    International Nuclear Information System (INIS)

    Johnson, M.

    2005-01-01

    The purpose of this document is to review the existing trunnion collar removal machine against the ''Nuclear Safety Design Bases for License Application'' (NSDB) [Ref. 10] requirements and to identify codes and standards and supplemental requirements to meet these requirements. If these codes and standards cannot fully meet these requirements, then a ''gap'' is identified. These gaps are identified here and addressed using the ''Trunnion Collar Removal Machine Design Development Plan'' [Ref. 15]. The codes and standards, supplemental requirements, and design development requirements for the trunnion collar removal machine are provided in the gap analysis table (Appendix A, Table 1). Because the trunnion collar removal machine is credited with performing functions important to safety (ITS) in the NSDB [Ref. 10], design basis requirements are applicable to ensure equipment is available and performs required safety functions when needed. The gap analysis table is used to identify design objectives and provide a means to satisfy safety requirements. To ensure that the trunnion collar removal machine performs required safety functions and meets performance criteria, this portion of the gap analysis table supplies codes and standards sections and the supplemental requirements and identifies design development requirements, if needed

  12. Preliminary Test of Upgraded Conventional Milling Machine into PC Based CNC Milling Machine

    International Nuclear Information System (INIS)

    Abdul Hafid

    2008-01-01

    The CNC (Computerized Numerical Control) milling machine poses a challenge for innovation in the field of machining. To achieve machining quality equivalent to that of a CNC milling machine, a conventional milling machine was upgraded to a PC-based CNC milling machine, with both mechanical and instrumentation changes. The original controls were replaced with a servo drive, and proximity sensors were used. A computer program was written to issue instructions to the milling machine; its structure consists of a GUI model and a ladder diagram, and it runs on a programming system called the RTX software. The results of the upgrade are the computer program and the CNC instruction set. This is a first step, and the work will be continued. With the improved performance of the milling machine, the user can work more optimally and more safely with respect to accident risk. (author)

  13. An algebraic approach to graph codes

    DEFF Research Database (Denmark)

    Pinero, Fernando

    This thesis consists of six chapters. The first chapter, contains a short introduction to coding theory in which we explain the coding theory concepts we use. In the second chapter, we present the required theory for evaluation codes and also give an example of some fundamental codes in coding...... theory as evaluation codes. Chapter three consists of the introduction to graph based codes, such as Tanner codes and graph codes. In Chapter four, we compute the dimension of some graph based codes with a result combining graph based codes and subfield subcodes. Moreover, some codes in chapter four...

  14. Engineering model cryocooler test results

    International Nuclear Information System (INIS)

    Skimko, M.A.; Stacy, W.D.; McCormick, J.A.

    1992-01-01

    This paper reports that recent testing of diaphragm-defined, Stirling-cycle machines and components has demonstrated cooling performance potential, validated the design code, and confirmed several critical operating characteristics. A breadboard cryocooler was rebuilt and tested from cryogenic to near-ambient cold end temperatures. There was a significant increase in capacity at cryogenic temperatures, and the performance results compared well with code predictions at all temperatures. Further testing on a breadboard diaphragm compressor validated the calculated requirement for a minimum axial clearance between diaphragms and mating heads

  15. Bacteriological quality of drinks from vending machines.

    Science.gov (United States)

    Hunter, P. R.; Burge, S. H.

    1986-01-01

    A survey on the bacteriological quality of both drinking water and flavoured drinks from coin-operated vending machines is reported. Forty-four per cent of 25 drinking water samples examined contained coliforms and 84% had viable counts of greater than 1000 organisms per ml at 30 degrees C. Thirty-one flavoured drinks were examined; 6% contained coliforms and 39% had total counts greater than 1000 organisms per ml. It is suggested that the D.H.S.S. code of practice on coin-operated vending machines is not being followed. It is also suggested that drinking water alone should not be dispensed from such machines. PMID:3794325

  16. BOT3P5.2, 3D Mesh Generator and Graphical Display of Geometry for Radiation Transport Codes, Display of Results

    International Nuclear Information System (INIS)

    Orsi, Roberto; Bidaud, Adrien

    2007-01-01

    1 - Description of program or function: BOT3P was originally conceived as a set of standard FORTRAN 77 language programs in order to give the users of the DORT and TORT deterministic transport codes some useful diagnostic tools to prepare and check their input data files. Later versions extended the possibility to produce the geometrical, material distribution and fixed neutron source data to other deterministic transport codes such as TWODANT/THREEDANT of the DANTSYS system, PARTISN and, potentially, to any transport code through BOT3P binary output files that can be easily interfaced (see, for example, the Russian two-dimensional (2D) and three-dimensional (3D) discrete ordinates neutron, photon and charged particle transport codes KASKAD-S-2.5 and KATRIN-2.0). As from Version 5.1 BOT3P contained important additions specifically addressed to radiation transport analysis for medical applications. BOT3P-5.2 contains new graphics capabilities. Some of them enable users to select space sub-domains of the total mesh grid in order to improve the zoom simulation of the geometry, both in 2D cuts and in 3D. Moreover the new BOT3P module (PDTM) may improve the interface of BOT3P geometrical models to transport analysis codes. The following programs are included in the BOT3P software package: GGDM, DDM, GGTM, DTM2, DTM3, RVARSCL, COMPARE, MKSRC, CATSM, DTET, and PDTM. The main features of these different programs are described. 2 - Methods: GGDM and GGTM work similarly from the logical point of view. Since the 3D case is more general, the following description refers to GGTM. All the co-ordinate values that characterise the geometrical scheme at the basis of the 3D transport code geometrical and material model are read, sorted and all stored if different from the neighbouring ones more than an input tolerance established by the user. These co-ordinates are always present in the fine-mesh boundary arrays independently of the mesh grid refinement options, because they
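The coordinate-sorting step described above (keep each co-ordinate only if it differs from the previously kept one by more than a user tolerance) can be sketched in a few lines. The function name is illustrative, not from BOT3P:

```python
def merge_coordinates(coords, tol):
    """Sort coordinate values and drop any value closer than `tol`
    to the previously kept one, mimicking how GGTM builds the
    fine-mesh boundary arrays from the input geometry."""
    kept = []
    for x in sorted(coords):
        if not kept or x - kept[-1] > tol:
            kept.append(x)
    return kept

# Near-duplicate boundaries within the tolerance collapse to one value.
print(merge_coordinates([0.0, 0.0005, 1.0, 2.0, 2.0004], tol=0.001))
```

This kind of tolerance-based merging prevents almost-coincident surfaces in the input geometry from spawning degenerate, sliver-thin mesh cells.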

  17. Strong normalization by type-directed partial evaluation and run-time code generation

    DEFF Research Database (Denmark)

    Balat, Vincent; Danvy, Olivier

    1998-01-01

    We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long βη-normal form. Caml uses a virtual machine and has the capability...... to load byte code at run time. Representing the long βη-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time. We...... conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language....

  18. Strong Normalization by Type-Directed Partial Evaluation and Run-Time Code Generation

    DEFF Research Database (Denmark)

    Balat, Vincent; Danvy, Olivier

    1997-01-01

    We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long βη-normal form. Caml uses a virtual machine and has the capability...... to load byte code at run time. Representing the long βη-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time. We...... conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language....

  19. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  20. Machine technology: a survey

    International Nuclear Information System (INIS)

    Barbier, M.M.

    1981-01-01

    An attempt was made to find existing machines that have been upgraded and that could be used for large-scale decontamination operations outdoors. Such machines are in the building industry, the mining industry, and the road construction industry. The road construction industry has yielded the machines in this presentation. A review is given of operations that can be done with the machines available

  1. Machine Shop Lathes.

    Science.gov (United States)

    Dunn, James

    This guide, the second in a series of five machine shop curriculum manuals, was designed for use in machine shop courses in Oklahoma. The purpose of the manual is to equip students with basic knowledge and skills that will enable them to enter the machine trade at the machine-operator level. The curriculum is designed so that it can be used in…

  2. Superconducting rotating machines

    International Nuclear Information System (INIS)

    Smith, J.L. Jr.; Kirtley, J.L. Jr.; Thullen, P.

    1975-01-01

    The opportunities and limitations of the applications of superconductors in rotating electric machines are given. The relevant properties of superconductors and the fundamental requirements for rotating electric machines are discussed. The current state-of-the-art of superconducting machines is reviewed. Key problems, future developments and the long range potential of superconducting machines are assessed

  3. Machine-assisted verification of latent fingerprints: first results for nondestructive contact-less optical acquisition techniques with a CWL sensor

    Science.gov (United States)

    Hildebrandt, Mario; Kiltz, Stefan; Krapyvskyy, Dmytro; Dittmann, Jana; Vielhauer, Claus; Leich, Marcus

    2011-11-01

    A machine-assisted analysis of traces from crime scenes might be possible with the advent of new high-resolution non-destructive contact-less acquisition techniques for latent fingerprints. This requires reliable techniques for the automatic extraction of fingerprint features from latent and exemplar fingerprints for matching purposes using pattern recognition approaches. Therefore, we evaluate the NIST Biometric Image Software for the feature extraction and verification of contact-lessly acquired latent fingerprints to determine potential error rates. Our exemplary test setup includes 30 latent fingerprints from 5 people in two test sets that are acquired from different surfaces using a chromatic white light sensor. The first test set includes 20 fingerprints on two different surfaces. It is used to determine the feature extraction performance. The second test set includes one latent fingerprint on 10 different surfaces and an exemplar fingerprint to determine the verification performance. This utilized sensing technique does not require a physical or chemical visibility enhancement of the fingerprint residue, thus the original trace remains unaltered for further investigations. No particular feature extraction and verification techniques have been applied to such data, yet. Hence, we see the need for appropriate algorithms that are suitable to support forensic investigations.

  4. Machine-to-machine communications architectures, technology, standards, and applications

    CERN Document Server

    Misic, Vojislav B

    2014-01-01

    With the number of machine-to-machine (M2M)-enabled devices projected to reach 20 to 50 billion by 2020, there is a critical need to understand the demands imposed by such systems. Machine-to-Machine Communications: Architectures, Technology, Standards, and Applications offers rigorous treatment of the many facets of M2M communication, including its integration with current technology. Presenting the work of a different group of international experts in each chapter, the book begins by supplying an overview of M2M technology. It considers proposed standards, cutting-edge applications, architectures, and traffic modeling and includes case studies that highlight the differences between traditional and M2M communications technology. The book details a practical scheme for the forward error correction code design, investigates the effectiveness of the IEEE 802.15.4 low data rate wireless personal area network standard for use in M2M communications, and identifies algorithms that will ensure functionality, performance, reliability, ...

  5. Irradiation of Argentine (U,Pu)O2 MOX fuels. Post-irradiation results and experimental analysis with the BACO code

    International Nuclear Information System (INIS)

    Marino, A.C.; Perez, E.; Adelfang, P.

    1996-01-01

    The irradiation of the first Argentine prototypes of pressurized heavy water reactor (PHWR) (U,Pu)O2 MOX fuels began in 1986. These experiments were carried out in the High Flux Reactor (HFR)-Petten, Holland. The rods were prepared and controlled in the CNEA's α Facility. The postirradiation examinations were performed in the Kernforschungszentrum, Karlsruhe, Germany and in the Joint Research Center (JRC), Petten. The first rod has been used for destructive pre-irradiation analysis. The second one as a pathfinder to adjust systems in the HFR. Two additional rods including iodine doped pellets were intended to simulate 15000 MWd/T(M) burnup. The remaining two rods were irradiated until 15000 MWd/T(M). One of them underwent a final ramp with the aim of verifying fabrication processes and studying the behaviour under power transients. BACO (BArra COmbustible) code was used to define the power histories and to analyse the experiments. This paper presents a description of the different experiments and a comparison between the results of the postirradiation examinations and the BACO outputs. (orig.)

  6. Irradiation of Argentine (U,Pu)O 2 MOX fuels. Post-irradiation results and experimental analysis with the BACO code

    Science.gov (United States)

    Marino, Armando Carlos; Pérez, Edmundo; Adelfang, Pablo

    1996-04-01

    The irradiation of the first Argentine prototypes of pressurized heavy water reactor (PHWR) (U,Pu)O 2 MOX fuels began in 1986. These experiments were carried out in the High Flux Reactor (HFR)-Petten, Holland. The rods were prepared and controlled in the CNEA's α Facility. The postirradiation examinations were performed in the Kernforschungszentrum, Karlsruhe, Germany and in the Joint Research Center (JRC), Petten. The first rod has been used for destructive pre-irradiation analysis. The second one as a pathfinder to adjust systems in the HFR. Two additional rods including iodine doped pellets were intended to simulate 15 000 MWd/T(M) burnup. The remaining two rods were irradiated until 15 000 MWd/T(M). One of them underwent a final ramp with the aim of verifying fabrication processes and studying the behaviour under power transients. BACO (BArra COmbustible) code was used to define the power histories and to analyse the experiments. This paper presents a description of the different experiments and a comparison between the results of the postirradiation examinations and the BACO outputs.

  7. Machine Learning for Medical Imaging.

    Science.gov (United States)

    Erickson, Bradley J; Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy L

    2017-01-01

    Machine learning is a technique for recognizing patterns that can be applied to medical images. Although it is a powerful tool that can help in rendering medical diagnoses, it can be misapplied. Machine learning typically begins with the machine learning algorithm system computing the image features that are believed to be of importance in making the prediction or diagnosis of interest. The machine learning algorithm system then identifies the best combination of these image features for classifying the image or computing some metric for the given image region. There are several methods that can be used, each with different strengths and weaknesses. There are open-source versions of most of these machine learning methods that make them easy to try and apply to images. Several metrics for measuring the performance of an algorithm exist; however, one must be aware of the possible associated pitfalls that can result in misleading metrics. More recently, deep learning has started to be used; this method has the benefit that it does not require image feature identification and calculation as a first step; rather, features are identified as part of the learning process. Machine learning has been used in medical imaging and will have a greater influence in the future. Those working in medical imaging must be aware of how machine learning works. © RSNA, 2017.
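The feature-combination step the record describes (compute image features, then find the combination that best separates the classes) can be illustrated with a deliberately tiny, stdlib-only nearest-centroid classifier. This is an illustrative sketch, not one of the methods surveyed in the article; the feature values and labels are invented:

```python
from statistics import mean

def train_centroids(samples):
    """samples: list of (feature_vector, label). Returns label -> centroid."""
    by_label = {}
    for vec, label in samples:
        by_label.setdefault(label, []).append(vec)
    return {label: tuple(mean(col) for col in zip(*vecs))
            for label, vecs in by_label.items()}

def classify(centroids, vec):
    """Assign vec to the label whose centroid is closest (squared Euclidean)."""
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(centroids[lab], vec)))

# Hypothetical 2-feature vectors extracted from image regions.
train = [((1.0, 1.0), "benign"), ((1.2, 0.9), "benign"),
         ((4.0, 5.0), "lesion"), ((4.2, 4.8), "lesion")]
cents = train_centroids(train)
print(classify(cents, (4.1, 5.1)))  # lesion
```

Real medical-imaging pipelines use far richer features and models, but the train/classify split and the reliance on hand-computed features shown here is exactly what distinguishes classical machine learning from the deep-learning approach mentioned at the end of the abstract.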

  8. VOA: a 2-d plasma physics code

    International Nuclear Information System (INIS)

    Eltgroth, P.G.

    1975-12-01

    A 2-dimensional relativistic plasma physics code was written and tested. The non-thermal components of the particle distribution functions are represented by expansion into moments in momentum space. These moments are computed directly from numerical equations. Currently three species are included - electrons, ions and ''beam electrons''. The computer code runs on either the 7600 or STAR machines at LLL. Both the physics and the operation of the code are discussed

  9. Classification of Strawberry Fruit Shape by Machine Learning

    Science.gov (United States)

    Ishikawa, T.; Hayashi, A.; Nagamatsu, S.; Kyutoku, Y.; Dan, I.; Wada, T.; Oku, K.; Saeki, Y.; Uto, T.; Tanabata, T.; Isobe, S.; Kochi, N.

    2018-05-01

    Shape is one of the most important traits of agricultural products due to its relationships with the quality, quantity, and value of the products. For strawberries, the nine types of fruit shape were defined and classified by humans based on the sampler patterns of the nine types. In this study, we tested the classification of strawberry shapes by machine learning in order to increase the accuracy of the classification, and we introduce the concept of computerization into this field. Four types of descriptors were extracted from the digital images of strawberries: (1) the Measured Values (MVs) including the length of the contour line, the area, the fruit length and width, and the fruit width/length ratio; (2) the Ellipse Similarity Index (ESI); (3) Elliptic Fourier Descriptors (EFDs), and (4) Chain Code Subtraction (CCS). We used these descriptors for the classification test along with the random forest approach, and eight of the nine shape types were classified with combinations of MVs + CCS + EFDs. CCS is a descriptor that adds human knowledge to the chain codes, and it showed higher robustness in classification than the other descriptors. Our results suggest machine learning's high ability to classify fruit shapes accurately. We will attempt to increase the classification accuracy and apply the machine learning methods to other plant species.
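The CCS descriptor above builds on Freeman chain codes of the fruit contour. As a hedged sketch of the underlying representation (not the paper's actual CCS computation), the 8-direction chain code of an ordered, 8-connected contour can be computed as:

```python
# 8-connected Freeman directions: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(contour):
    """Freeman chain code of an ordered list of 8-connected contour
    points; the contour is treated as closed, so the last step
    returns to the first point."""
    codes = []
    for (x0, y0), (x1, y1) in zip(contour, contour[1:] + contour[:1]):
        codes.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return codes

# Unit square traversed counter-clockwise.
print(chain_code([(0, 0), (1, 0), (1, 1), (0, 1)]))  # [0, 2, 4, 6]
```

A descriptor like CCS can then compare such code sequences against a reference shape, which is how human shape knowledge gets folded into an otherwise generic contour encoding.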

  10. QR codes for dummies

    CERN Document Server

    Waters, Joe

    2012-01-01

    Find out how to effectively create, use, and track QR codes QR (Quick Response) codes are popping up everywhere, and businesses are reaping the rewards. Get in on the action with the no-nonsense advice in this streamlined, portable guide. You'll find out how to get started, plan your strategy, and actually create the codes. Then you'll learn to link codes to mobile-friendly content, track your results, and develop ways to give your customers value that will keep them coming back. It's all presented in the straightforward style you've come to know and love, with a dash of humor thrown

  11. results

    Directory of Open Access Journals (Sweden)

    Salabura Piotr

    2017-01-01

    Full Text Available HADES experiment at GSI is the only high precision experiment probing nuclear matter in the beam energy range of a few AGeV. Pion, proton and ion beams are used to study rare dielectron and strangeness probes to diagnose properties of strongly interacting matter in this energy regime. Selected results from p + A and A + A collisions are presented and discussed.

  12. 4th Machining Innovations Conference

    CERN Document Server

    2014-01-01

    This contributed volume contains the research results presented at the 4th Machining Innovations Conference, Hannover, September 2013. The topics of the conference are new production technologies in the aerospace industry, with a focus on energy-efficient machine tools as well as sustainable process planning. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.

  13. Assembly processor program converts symbolic programming language to machine language

    Science.gov (United States)

    Pelto, E. V.

    1967-01-01

    Assembly processor program converts symbolic programming language to machine language. This program translates symbolic codes into computer-understandable instructions, assigns locations in storage for successive instructions, and computes locations from symbolic addresses.
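The two tasks named in this record, assigning storage locations to successive instructions and resolving symbolic addresses, are the classic two passes of an assembler. A toy sketch (the mnemonics, opcode numbers, and tuple output format are all invented for illustration):

```python
def assemble(lines, opcodes):
    """Minimal two-pass assembler sketch: pass 1 assigns an address to
    each instruction and records label definitions; pass 2 replaces
    mnemonics with numeric opcodes and resolves symbolic operands."""
    # Pass 1: build the symbol table and the instruction list.
    symbols, program = {}, []
    for line in lines:
        if line.endswith(":"):
            symbols[line[:-1]] = len(program)  # label -> next address
        else:
            program.append(line.split())
    # Pass 2: translate mnemonics and resolve symbolic operands.
    machine = []
    for addr, (op, *args) in enumerate(program):
        operand = symbols.get(args[0], None) if args else 0
        if args and operand is None:
            operand = int(args[0])            # literal numeric operand
        machine.append((addr, opcodes[op], operand))
    return machine

ops = {"LOAD": 1, "ADD": 2, "JMP": 3, "HALT": 4}
src = ["start:", "LOAD 5", "ADD 1", "JMP start", "HALT"]
print(assemble(src, ops))  # [(0, 1, 5), (1, 2, 1), (2, 3, 0), (3, 4, 0)]
```

The forward-reference problem (a jump to a label defined later) is exactly why the symbol table must be completed in a first pass before any operand is translated.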

  14. Machine learning a probabilistic perspective

    CERN Document Server

    Murphy, Kevin P

    2012-01-01

    Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic method...

  15. Coding Labour

    Directory of Open Access Journals (Sweden)

    Anthony McCosker

    2014-03-01

    Full Text Available As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.

  16. ''Diagonalization'' of a compound Atwood machine

    International Nuclear Information System (INIS)

    Crawford, F.S.

    1987-01-01

    We consider a simple Atwood machine consisting of a massless frictionless pulley no. 0 supporting two masses m1 and m2 connected by a massless flexible string. We show that the string that supports massless pulley no. 0 ''thinks'' it is simply supporting a mass m0, with m0 = 4m1m2/(m1 + m2). This result, together with Einstein's equivalence principle, allows us to solve easily those compound Atwood machines created by replacing one or both of m1 and m2 in machine no. 0 by an Atwood machine. We may then replace the masses in these new machines by machines, etc. The complete solution can be written down immediately, without solving simultaneous equations. Finally we give the effective mass of an Atwood machine whose pulley has nonzero mass and moment of inertia
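The recursive substitution described in this record, where each sub-machine is replaced by its effective mass m0 = 4*m1*m2/(m1 + m2), lends itself to a direct recursive sketch. The tree representation (a tuple for a pulley, a number for a plain mass) is an illustrative choice, not from the paper:

```python
def effective_mass(node):
    """Effective mass 'felt' by the string supporting a (possibly
    compound) Atwood machine with massless pulleys. A leaf is a plain
    mass (a number); an inner node is a pair (left, right) hung from a
    pulley, contributing m0 = 4*m1*m2 / (m1 + m2)."""
    if isinstance(node, tuple):
        m1 = effective_mass(node[0])
        m2 = effective_mass(node[1])
        return 4.0 * m1 * m2 / (m1 + m2)
    return float(node)

print(effective_mass((1.0, 1.0)))         # 2.0 (balanced machine: just 2m)
print(effective_mass(((1.0, 1.0), 2.0)))  # 4.0 (sub-machine replaced by 2.0)
```

For equal masses the formula reduces to 2m, as it must, since a balanced machine does not accelerate and its string simply carries the total weight.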

  17. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its...... strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...... correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking...

  18. Machine Vision Implementation in Rapid PCB Prototyping

    Directory of Open Access Journals (Sweden)

    Yosafat Surya Murijanto

    2012-03-01

    Full Text Available Image processing, the heart of machine vision, has proven itself to be an essential part of industry today. Its application has opened new doorways, making more concepts in manufacturing processes viable. This paper presents an application of machine vision in designing a module with the ability to extract drill and route coordinates from an un-mounted or mounted printed circuit board (PCB). The algorithm comprises pre-capturing processes, image segmentation and filtering, edge and contour detection, coordinate extraction, and G-code creation. OpenCV libraries and the Qt IDE are the main tools used. Through testing and experiments, it is concluded that the algorithm is able to deliver acceptable results. The drilling and routing coordinate extraction algorithm can extract on average 90% and 82% of the drills and routes available on the scanned PCB, respectively, in a total processing time of less than 3 seconds. This is achievable with proper lighting conditions, good PCB surface condition and good webcam quality.
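The final G-code creation step of the pipeline above can be sketched without any vision code: once drill coordinates are extracted, emitting a drilling program is straightforward. This is a minimal illustrative sketch, not the paper's module; depths, feed rate, and the output format are assumed defaults:

```python
def drill_gcode(holes, safe_z=2.0, drill_z=-1.8, feed=100):
    """Emit a minimal G-code drilling program for a list of (x, y)
    hole coordinates. Units are mm (G21), absolute positioning (G90);
    the Z depths and feed rate are illustrative, not from the paper."""
    lines = ["G21", "G90", f"G0 Z{safe_z}"]    # units, mode, initial retract
    for x, y in holes:
        lines.append(f"G0 X{x:.3f} Y{y:.3f}")  # rapid move over the hole
        lines.append(f"G1 Z{drill_z} F{feed}") # plunge at feed rate
        lines.append(f"G0 Z{safe_z}")          # retract before next hole
    lines.append("M2")                         # end of program
    return "\n".join(lines)

print(drill_gcode([(1.0, 2.5), (10.0, 2.5)]))
```

Keeping the retract (`safe_z`) between holes is what lets the rapid moves cross the board without dragging the bit through it.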

  19. Using Pipelined XNOR Logic to Reduce SEU Risks in State Machines

    Science.gov (United States)

    Le, Martin; Zheng, Xin; Katanyoutant, Sunant

    2008-01-01

    Single-event upsets (SEUs) pose great threats to the state-machine control logic of avionic systems, which is frequently used to control sequences of events and to qualify protocols. The risks of SEUs manifest in two ways: (a) the state machine's state information is changed, causing the state machine to unexpectedly transition to another state; (b) due to the asynchronous nature of an SEU, the state machine's state registers become metastable, consequently causing any combinational logic associated with the metastable registers to malfunction temporarily. Effect (a) can be mitigated with methods such as triple-modular redundancy (TMR). However, effect (b) cannot be eliminated and can degrade the effectiveness of any mitigation method for effect (a). Although there is no way to completely eliminate the risk of SEU-induced errors, the risk can be made very small by use of a combination of very fast state-machine logic and error-detection logic. Therefore, the first of the two main elements of the present method is to design the fastest state-machine logic circuitry by basing it on the fastest generic state-machine design, which is that of a one-hot state machine. The other of the two main design elements is to design fast error-detection logic circuitry and to optimize it for implementation in a field-programmable gate array (FPGA) architecture: In the resulting design, the one-hot state machine is fitted with a multiple-input XNOR gate for detection of illegal states. The XNOR gate is implemented with lookup tables and with pipelines for high speed. In this method, the task of designing all the logic must be performed manually because no currently available logic synthesis software tool can produce optimal solutions of design problems of this type. However, some assistance is provided by a script, written for this purpose in the Python language (an object-oriented interpretive computer language) to automatically generate hardware description language (HDL) code from state
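The illegal-state detection above relies on a property of one-hot encoding: a legal state has exactly one register set, so any single bit-flip produces zero or two set bits. A software model of that check (the hardware uses a pipelined multi-input XNOR built from lookup tables; this Python sketch only models the logical condition):

```python
def is_legal_one_hot(state_bits):
    """A one-hot state register is legal iff exactly one bit is set.
    An SEU that flips any single bit yields zero or two set bits,
    which this check flags (software model of the FPGA detector)."""
    return sum(state_bits) == 1

legal = (0, 0, 1, 0)
upset = (0, 1, 1, 0)   # one SEU bit-flip on top of a legal state
print(is_legal_one_hot(legal), is_legal_one_hot(upset))  # True False
```

This is why one-hot encoding pairs so naturally with SEU detection: in a dense binary encoding, a single bit-flip simply produces another legal state and is invisible to any per-cycle validity check.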

  20. Report of the results obtained in the simulation of the first two operation cycles of the Laguna Verde Unit 1 reactor with the FCS-11 and Presto codes

    International Nuclear Information System (INIS)

    Montes T, J.L.; Moran L, J.M.; Cortes C, C.C.

    1990-08-01

    The objective of this work is to establish a preliminary methodology for carrying out fuel-reload analyses for the Laguna Verde Unit 1 reactor, by evaluating the state of the reactor core during its first two operation cycles using the FCS2 and Presto-B codes. (Author)

  1. Scaling gysela code beyond 32K-cores on bluegene/Q***

    Directory of Open Access Journals (Sweden)

    Bigot J.

    2013-12-01

    Full Text Available Gyrokinetic simulations lead to huge computational needs. Up to now, the semi-Lagrangian code Gysela performed large simulations using a few thousand cores (typically 8k cores). Simulations with finer resolutions and with kinetic electrons are expected to increase those needs by a huge factor, providing a good example of applications requiring Exascale machines. This paper presents our work to improve Gysela in order to target an architecture that presents one possible way towards Exascale: the Blue Gene/Q. After analyzing the limitations of the code on this architecture, we have implemented three kinds of improvement: computational performance improvements, memory consumption improvements and disk i/o improvements. As a result, we show that the code now scales beyond 32k cores with much improved performance. This will make it possible to target the most powerful machines available and thus handle much larger physical cases.

  2. "Mommy Blogs" and the Vaccination Exemption Narrative: Results From A Machine-Learning Approach for Story Aggregation on Parenting Social Media Sites.

    Science.gov (United States)

    Tangherlini, Timothy R; Roychowdhury, Vwani; Glenn, Beth; Crespi, Catherine M; Bandari, Roja; Wadia, Akshay; Falahi, Misagh; Ebrahimzadeh, Ehsan; Bastani, Roshan

    2016-11-22

    Social media offer an unprecedented opportunity to explore how people talk about health care at a very large scale. Numerous studies have shown the importance of websites with user forums for people seeking information related to health. Parents turn to some of these sites, colloquially referred to as "mommy blogs," to share concerns about children's health care, including vaccination. Although substantial work has considered the role of social media, particularly Twitter, in discussions of vaccination and other health care-related issues, there has been little work on describing the underlying structure of these discussions and the role of persuasive storytelling, particularly on sites with no limits on post length. Understanding the role of persuasive storytelling at Internet scale provides useful insight into how people discuss vaccinations, including exemption-seeking behavior, which has been tied to a recent diminution of herd immunity in some communities. To develop an automated and scalable machine-learning method for story aggregation on social media sites dedicated to discussions of parenting. We wanted to discover the aggregate narrative frameworks to which individuals, through their exchange of experiences and commentary, contribute over time in a particular topic domain. We also wanted to characterize temporal trends in these narrative frameworks on the sites over the study period. To ensure that our data capture long-term discussions and not short-term reactions to recent events, we developed a dataset of 1.99 million posts contributed by 40,056 users and viewed 20.12 million times indexed from 2 parenting sites over a period of 105 months. Using probabilistic methods, we determined the topics of discussion on these parenting sites. We developed a generative statistical-mechanical narrative model to automatically extract the underlying stories and story fragments from millions of posts. 
We aggregated the stories into an overarching narrative framework

  3. Precision Machining

    Indian Academy of Sciences (India)


    moving away from the empirical work of Taylor (1906) that resulted in a new cutting tool ... conditions that bring about partial ductile mode grinding, a primary process that ... innovative research work performed by my group in Malaysia. It was a ...

  4. Comparison of simulation results of the CONTEMPT-LT/028 and MAAP-3B codes for the analysis of the internal vacuum breaker valves of the CNLV

    International Nuclear Information System (INIS)

    Ovando C, R.; Cecenas F, M.; Moya C, M.M.

    2006-01-01

    In the primary containment of a BWR-type reactor, the wetwell and the drywell communicate through valves designed to equalize the pressure whenever a significant pressure difference arises from an operational event, such as the actuation of an emergency system. These valves are known as internal vacuum breakers, and their analysis requires a code capable of modeling the primary containment of the reactor. Among the codes able to carry out this type of analysis are CONTEMPT-LT/028 and MAAP-3B; however, these codes differ in their modeling of the containment damage-mitigation systems (sprays, vents, emergency systems), in the heat transfer among the different compartments of the primary containment, and in the level of detail of the civil structure. In previous work with the CONTEMPT-LT/028 code, several simulation cases related to the operation of the internal vacuum breaker valves were carried out. These cases include small breaks in the main steam lines and breaks in the recirculation lines. The most restrictive case was selected, and an equivalent scenario file was generated for the MAAP-3B code. In this work, the performance of the internal vacuum breaker valves is analyzed with the CONTEMPT-LT/028 and MAAP-3B codes, using the most restrictive case, consisting of a small break in a main steam line. The analysis of the simulations indicates that both codes produce very similar results, and the differences found are explained by the models each code uses to obtain the response of the main thermohydraulic variables. In general terms, MAAP-3B has models better suited to the phenomenology expected for this analysis, while maintaining a conservative approach. (Author)

  5. Development of throughflow calculation code for axial flow compressors

    International Nuclear Information System (INIS)

    Kim, Ji Hwan; Kim, Hyeun Min; No, Hee Cheon

    2005-01-01

    The power conversion systems of current HTGRs are based on a closed Brayton cycle, and a major concern is the thermodynamic performance of the axial-flow helium gas turbines. In particular, the helium compressor poses some unique design challenges compared with an air-breathing compressor, such as high hub-to-tip ratios throughout the machine and a large number of stages, owing to the physical properties of helium and to the thermodynamic cycle. Therefore, it is necessary to develop a design and analysis code for the helium compressor that can estimate the design-point and off-design performance accurately. The KAIST nuclear system laboratory has developed a compressor design and analysis code by means of throughflow calculation and several loss models. This paper presents an outline of the development of the throughflow calculation code and its verification results

  6. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely on the basis of a binary decision. Hence the end-to-end performance of the digital link becomes essentially independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service-provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
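As a concrete illustration of the waveform-coding branch mentioned above (an example chosen here, not taken from the abstract: the continuous mu-law companding characteristic standardized in ITU-T G.711):

```python
import math

MU = 255.0  # companding constant of North American mu-law (ITU-T G.711)

def mu_law_compress(x: float) -> float:
    """Compress a sample in [-1, 1] with the continuous mu-law curve."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y: float) -> float:
    """Invert the compression."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def encode(x: float) -> int:
    """Quantize the compressed sample to 8 bits, as a waveform coder would."""
    return round((mu_law_compress(x) + 1.0) / 2.0 * 255)

def decode(code: int) -> float:
    return mu_law_expand(code / 255.0 * 2.0 - 1.0)

# Companding gives small (quiet) samples finer resolution than a
# uniform 8-bit quantizer would.
sample = 0.1
assert abs(decode(encode(sample)) - sample) < 0.01
```

Parametric coders, the other branch, instead transmit model parameters of the speech signal rather than the quantized waveform.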

  7. Electronic data processing codes for California wildland plants

    Science.gov (United States)

    Merton J. Reed; W. Robert Powell; Bur S. Bal

    1963-01-01

    Systematized codes for plant names are helpful to a wide variety of workers who must record the identity of plants in the field. We have developed such codes for a majority of the vascular plants encountered on California wildlands and have published the codes in pocket size, using photo-reductions of the output from data processing machines. A limited number of the...

  8. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  9. The Complexity of Abstract Machines

    Directory of Open Access Journals (Sweden)

    Beniamino Accattoli

    2017-01-01

    Full Text Available The lambda-calculus is a peculiar computational model whose definition does not come with a notion of machine. Unsurprisingly, implementations of the lambda-calculus have been studied for decades. Abstract machines are implementation schemas for fixed evaluation strategies that are a compromise between theory and practice: they are concrete enough to provide a notion of machine and abstract enough to avoid the many intricacies of actual implementations. There is an extensive literature about abstract machines for the lambda-calculus, and yet—quite mysteriously—the efficiency of these machines with respect to the strategy that they implement has almost never been studied. This paper provides an unusual introduction to abstract machines, based on the complexity of their overhead with respect to the length of the implemented strategies. It is conceived to be a tutorial, focusing on the case study of implementing the weak head (call-by-name) strategy, and yet it is an original re-elaboration of known results. Moreover, some of the observations contained here never appeared in print before.

  10. MITS machine operations

    International Nuclear Information System (INIS)

    Flinchem, J.

    1980-01-01

    This document contains procedures which apply to operations performed on individual P-1c machines in the Machine Interface Test System (MITS) at AiResearch Manufacturing Company's Torrance, California Facility

  11. Brain versus Machine Control.

    Directory of Open Access Journals (Sweden)

    Jose M Carmena

    2004-12-01

    Full Text Available Dr. Octopus, the villain of the movie "Spiderman 2", is a fusion of man and machine. Neuroscientist Jose Carmena examines the facts behind this fictional account of a brain- machine interface

  12. Applied machining technology

    CERN Document Server

    Tschätsch, Heinz

    2010-01-01

    Machining and cutting technologies are still crucial for many manufacturing processes. This reference presents all important machining processes in a comprehensive and coherent way. It includes many examples of concrete calculations, problems and solutions.

  13. Machining with abrasives

    CERN Document Server

    Jackson, Mark J

    2011-01-01

    Abrasive machining is key to obtaining the desired geometry and surface quality in manufacturing. This book discusses the fundamentals and advances in the abrasive machining processes. It provides a complete overview of developing areas in the field.

  14. QCD on the connection machine

    International Nuclear Information System (INIS)

    Gupta, R.

    1990-01-01

    In this talk I give a brief introduction to the standard model of particle interactions and illustrate why analytical methods fail to solve QCD. I then give some details of our implementation of the high performance QCD code on the CM2 and highlight the important lessons learned. The sustained speed of the code at the time of this conference is 5.2 Gigaflops (scaled to a full 64K machine). Since this is a conference dedicated to computing in the 21st century, I will tailor my expectations (somewhat idiosyncratic) of the physics objectives to reflect what we will be able to do in 10 years time, extrapolating from where we stand today. This work is being done under a joint LANL-TMC collaboration consisting of C. Baillie, R. Brickner, D. Daniel, G. Kilcup, L. Johnson, A. Patel. S. Sharpe and myself. 5 refs

  15. Technical Note: Defining cyclotron-based clinical scanning proton machines in a FLUKA Monte Carlo system.

    Science.gov (United States)

    Fiorini, Francesca; Schreuder, Niek; Van den Heuvel, Frank

    2018-02-01

    Cyclotron-based pencil beam scanning (PBS) proton machines nowadays represent the majority and most affordable choice for proton therapy facilities; however, their representation in Monte Carlo (MC) codes is more complex than for passively scattered proton systems or synchrotron-based PBS machines. This is because degraders are used to decrease the energy from the cyclotron maximum energy to the desired energy, resulting in a unique spot size, divergence, and energy spread depending on the amount of degradation. This manuscript outlines a generalized methodology to characterize a cyclotron-based PBS machine in a general-purpose MC code. The code can then be used to generate clinically relevant plans starting from commercial TPS plans. The described beam is produced at the Provision Proton Therapy Center (Knoxville, TN, USA) using cyclotron-based IBA Proteus Plus equipment. We characterized the Provision beam in the MC code FLUKA using the experimental commissioning data. The code was then validated using experimental data in water phantoms for single pencil beams and larger irregular fields. Comparisons with RayStation TPS plans are also presented. Comparisons of experimental, simulated, and planned dose depositions in water show that the same doses are calculated by both programs inside the target areas, while penumbra differences are found at the field edges. These differences are lower for the MC, with a γ(3%-3 mm) index never below 95%. Extensive explanations of how MC codes can be adapted to simulate cyclotron-based scanning proton machines are given, with the aim of using the MC as a TPS verification tool to check and improve clinical plans. For all the tested cases, we showed that dose differences with respect to experimental data are lower for the MC than for the TPS, implying that the created FLUKA beam model better describes the experimental beam. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists

  16. Machine protection systems

    CERN Document Server

    Macpherson, A L

    2010-01-01

    A summary of the Machine Protection System of the LHC is given, with particular attention given to the outstanding issues to be addressed, rather than the successes of the machine protection system from the 2009 run. In particular, the issues of Safe Machine Parameter system, collimation and beam cleaning, the beam dump system and abort gap cleaning, injection and dump protection, and the overall machine protection program for the upcoming run are summarised.

  17. Vocable Code

    DEFF Research Database (Denmark)

    Soon, Winnie; Cox, Geoff

    2018-01-01

    a computational and poetic composition for two screens: on one of these, texts and voices are repeated and disrupted by mathematical chaos, together exploring the performativity of code and language; on the other, is a mix of a computer programming syntax and human language. In this sense queer code can...... be understood as both an object and subject of study that intervenes in the world’s ‘becoming' and how material bodies are produced via human and nonhuman practices. Through mixing the natural and computer language, this article presents a script in six parts from a performative lecture for two persons...

  18. Assessment of RELAP5/MOD2 and RELAP5/MOD1-EUR codes on the basis of LOBI-MOD2 test results

    International Nuclear Information System (INIS)

    D'Auria, F.; Mazzini, M.; Oriolo, F.; Galassi, G.M.

    1989-10-01

    The present report gives an overview of the application of the RELAP5/MOD2 and RELAP5/MOD1-EUR codes to tests performed in the LOBI/MOD2 facility. The work has been carried out in the framework of a contract between the Dipartimento di Costruzioni Meccaniche e Nucleari (DCMN) of Pisa University and the CEC. The Universities of Roma, Pisa, Bologna and Palermo and the Polytechnic of Torino performed the post-test analysis of the LOBI experiments under the supervision of DCMN. The report presents the main outcomes from the analysis of the LOBI experiments, in an attempt to identify deficiencies in the modelling capabilities of the codes used

  19. Session 2: Machine studies

    International Nuclear Information System (INIS)

    Assmann, R.W.; Papotti, G.

    2012-01-01

    This document summarizes the talks and discussion that took place in the second session of the Chamonix 2012 workshop concerning results from machine studies performed in 2011. The session consisted of the following presentations: -) LHC experience with different bunch spacings by G. Rumolo; -) Observations of beam-beam effects in MDs in 2011 by W. Herr; -) Beam-induced heating/ bunch length/RF and lessons for 2012 by E. Metral; -) Lessons in beam diagnostics by R. Jones; -) Quench margins by M. Sapinski; and -) First demonstration with beam of the Achromatic Telescopic Squeeze (ATS) by S. Fartoukh. (authors)

  20. FY 1991 Research and development project for large-scale industrial technologies. Report on results of R and D of superhigh technological machining systems (Development of advanced machining devices for power-generating members); 1991 nendo chosentan kako system no kenkyu kaihatsu seika hokokusho. Hatsuden shisetsuyo buzai kodo kako sochi kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1992-03-01

    Described herein are the FY 1991 results of the R and D project aimed at establishing superprecision machining technologies, for developing machining technologies and nano-technologies aided by excited beams. For increasing the excimer laser output, the discharge-exciting technologies necessary for designing the 2kW laser as the final target are established. Service life tests are started to demonstrate a member service life of 10{sup 9} shots or more. For development of the technologies for large-current composite ion beams, the plant is constructed to attain the final targets (100keV, 2A, width: 500mm or more). The currents reaching the substrate are brought to 2.8mA for the Ar ion and 2.9mA for the Ca ion by, e.g., developing the ion sources and improving functions of the ion beam controlling systems. Research on the surface modification technologies for producing superhigh-quality metallic surfaces involves composite ion implantation and provision of a modified layer of a Ti-B-based hard compound. The corrosion rate of the modified titanium surface in a boiling sulfuric acid solution is reduced from 300mm/year to around 0.13mm/year. (NEDO)

  1. Dictionary of machine terms

    International Nuclear Information System (INIS)

    1990-06-01

    This book gives an introduction to the dictionary of machine terms, the compilation committee, and introductory remarks. It describes the machine terms in alphabetical order from A to Z and also includes abbreviations of machine terms, a symbol table, a guide to reading mathematical symbols and abbreviations, and terms used in drawings.

  2. Mankind, machines and people

    Energy Technology Data Exchange (ETDEWEB)

    Hugli, A

    1984-01-01

    The following questions are addressed: Is there a difference between machines and men, between human communication and communication with machines? Will we ever reach the point when the dream of artificial intelligence becomes a reality? Will thinking machines be able to replace the human spirit in all its aspects? Social consequences and philosophical aspects are addressed. 8 references.

  3. A Universal Reactive Machine

    DEFF Research Database (Denmark)

    Andersen, Henrik Reif; Mørk, Simon; Sørensen, Morten U.

    1997-01-01

    Turing showed the existence of a model universal for the set of Turing machines in the sense that given an encoding of any Turing machine as input, the universal Turing machine simulates it. We introduce the concept of universality for reactive systems and construct a CCS process universal...

  4. HTS machine laboratory prototype

    DEFF Research Database (Denmark)

    machine. The machine comprises six stationary HTS field windings wound from both YBCO and BiSCCO tape operated at liquid nitrogen temperature and enclosed in a cryostat, and a three phase armature winding spinning at up to 300 rpm. This design has full functionality of HTS synchronous machines. The design...

  5. Your Sewing Machine.

    Science.gov (United States)

    Peacock, Marion E.

    The programed instruction manual is designed to aid the student in learning the parts, uses, and operation of the sewing machine. Drawings of sewing machine parts are presented, and space is provided for the student's written responses. Following an introductory section identifying sewing machine parts, the manual deals with each part and its…

  6. Insurance billing and coding.

    Science.gov (United States)

    Napier, Rebecca H; Bruelheide, Lori S; Demann, Eric T K; Haug, Richard H

    2008-07-01

    The purpose of this article is to highlight the importance of understanding various numeric and alpha-numeric codes for accurately billing dental and medically related services to private pay or third-party insurance carriers. In the United States, common dental terminology (CDT) codes are most commonly used by dentists to submit claims, whereas current procedural terminology (CPT) and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD.9.CM) codes are more commonly used by physicians to bill for their services. The CPT and ICD.9.CM coding systems complement each other in that CPT codes provide the procedure and service information and ICD.9.CM codes provide the reason or rationale for a particular procedure or service. These codes are more commonly used for "medical necessity" determinations, and general dentists and specialists who routinely perform care, including trauma-related care, biopsies, and dental treatment as a result of or in anticipation of a cancer-related treatment, are likely to use these codes. Claim submissions for care provided can be completed electronically or by means of paper forms.

  7. RFQ simulation code

    International Nuclear Information System (INIS)

    Lysenko, W.P.

    1984-04-01

    We have developed the RFQLIB simulation system to provide a means to systematically generate the new versions of radio-frequency quadrupole (RFQ) linac simulation codes that are required by the constantly changing needs of a research environment. This integrated system simplifies keeping track of the various versions of the simulation code and makes it practical to maintain complete and up-to-date documentation. In this scheme, there is a certain standard version of the simulation code that forms a library upon which new versions are built. To generate a new version of the simulation code, the routines to be modified or added are appended to a standard command file, which contains the commands to compile the new routines and link them to the routines in the library. The library itself is rarely changed. Whenever the library is modified, however, this modification is seen by all versions of the simulation code, which actually exist as different versions of the command file. All code is written according to the rules of structured programming. Modularity is enforced by not using COMMON statements, simplifying the relation of the data flow to a hierarchy diagram. Simulation results are similar to those of the PARMTEQ code, as expected, because of the similar physical model. Different capabilities, such as those for generating beams matched in detail to the structure, are available in the new code for help in testing new ideas in designing RFQ linacs

  8. SSCTRK: A particle tracking code for the SSC

    International Nuclear Information System (INIS)

    Ritson, D.

    1990-07-01

    While many indirect methods are available to evaluate dynamic aperture, there appears at this time to be no reliable substitute for tracking particles through realistic machine lattices for a number of turns determined by the storage times. Machine lattices are generated by ''Monte Carlo'' techniques from the expected rms fabrication and survey errors. Any given generated machine can potentially be a lucky or unlucky fluctuation from the average; therefore, simulation that is to serve as a predictor of future performance must be done for an ensemble of generated machines. Further, several amplitudes and momenta are necessary to predict machine performance. Thus, making Monte Carlo type simulations for the SSC requires very considerable computer resources. Hitherto, it has been assumed that this was not feasible, and alternative indirect methods have been proposed or tried to answer the problem. We reexamined the feasibility of using direct computation. Previous codes have represented lattices by a succession of thin elements separated by bend-drifts. With ''kick-drift'' configurations, tracking time is linear in the multipole order included, and the code is symplectic. Modern vector processors simultaneously handle a large number of cases in parallel. Combining the efficiency of kick-drift tracking with vector processing in fact makes realistic Monte Carlo simulation entirely feasible. SSCTRK uses the above features. It is structured to have a very friendly interface, a very wide latitude of choice for cases to be run in parallel, and, by using pure FORTRAN 77, the ability to run interchangeably on a wide variety of computers. We describe in this paper the program structure, operational checks, and results achieved
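The kick-drift scheme the abstract refers to has a simple structure; the following is an illustrative sketch (not SSCTRK itself, which is FORTRAN 77, and with made-up lattice parameters) of symplectic thin-lens tracking through FODO cells:

```python
def drift(x, xp, L):
    """Drift of length L: position advances, angle unchanged."""
    return x + L * xp, xp

def thin_quad_kick(x, xp, f):
    """Thin-lens quadrupole kick of focal length f (f < 0: defocusing)."""
    return x, xp - x / f

# Track one particle through many FODO cells (focus, drift, defocus, drift).
# The composition of exact drifts and kicks is symplectic, so the motion
# stays bounded over many turns instead of drifting numerically.
x, xp = 1e-3, 0.0        # 1 mm initial offset
L, f = 1.0, 2.0          # hypothetical half-cell length and focal length
for _ in range(10_000):
    x, xp = thin_quad_kick(x, xp, +f)
    x, xp = drift(x, xp, L)
    x, xp = thin_quad_kick(x, xp, -f)
    x, xp = drift(x, xp, L)

assert abs(x) < 1e-2     # amplitude stays of the order of the launch offset
```

Because each map depends on the multipole content only through the kick, tracking cost grows linearly with the multipole order, as the abstract notes.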

  9. Study of on-machine error identification and compensation methods for micro machine tools

    International Nuclear Information System (INIS)

    Wang, Shih-Ming; Yu, Han-Jen; Lee, Chun-Yi; Chiu, Hung-Sheng

    2016-01-01

    Micro machining plays an important role in the manufacturing of miniature products which are made of various materials with complex 3D shapes and tight machining tolerances. To further improve the accuracy of a micro machining process without increasing the manufacturing cost of a micro machine tool, an effective machining error measurement method and a software-based compensation method are essential. To avoid introducing additional errors caused by re-installation of the workpiece, the measurement and compensation should be conducted on-machine. In addition, because the contour of a miniature workpiece machined with a micro machining process is very tiny, the measurement method should be non-contact. By integrating the image reconstruction method, camera pixel correction, coordinate transformation, the error identification algorithm, and the trajectory auto-correction method, a vision-based error measurement and compensation method that can inspect the micro machining errors on-machine and automatically generate an error-corrected numerical control (NC) program for error compensation was developed in this study. With the use of the Canny edge detection algorithm and camera pixel calibration, the edges of the contour of a machined workpiece were identified and used to reconstruct the actual contour of the workpiece. The actual contour was then mapped to the theoretical contour to identify the actual cutting points and compute the machining errors. With the use of a moving matching window and calculation of the similarity between the actual and theoretical contours, the errors between the actual cutting points and theoretical cutting points were calculated and used to correct the NC program. With the use of the error-corrected NC program, the accuracy of a micro machining process can be effectively improved. To prove the feasibility and effectiveness of the proposed methods, micro-milling experiments on a micro machine tool were conducted, and the results
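The final correction step described above can be sketched as follows (a minimal illustration with hypothetical contour points; the actual method obtains the points from camera images and pairs them with a moving matching window rather than the fixed pairing used here):

```python
# Hypothetical theoretical cutting points and the corresponding points
# measured on the machined contour (units: mm).
theoretical = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.1)]
actual      = [(0.02, -0.01), (1.03, 0.0), (2.01, 0.12)]

def corrected_targets(theoretical, actual):
    """Mirror each measured error about the theoretical point so that
    re-running the corrected NC program lands on the intended contour."""
    out = []
    for (tx, ty), (ax, ay) in zip(theoretical, actual):
        ex, ey = ax - tx, ay - ty        # measured machining error
        out.append((tx - ex, ty - ey))   # command the opposite offset
    return out

new_targets = corrected_targets(theoretical, actual)
assert new_targets[0] == (-0.02, 0.01)   # first point shifted against the error
```

The corrected coordinates would then be written back into the NC program in place of the original cutting points.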

  10. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    This report contains the evaluation and documentation of the Coding Class project. The Coding Class project was launched in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the volunteer association Coding Pirates. The report was written by Mikala Hansbøl, Docent in digital learning resources and research coordinator of the research and development environment Digitalisering i Skolen (DiS), Institut for Skole og Læring, Professionshøjskolen Metropol; and Stine Ejsing-Duun, Associate Professor in learning technology, interaction design, design thinking and design pedagogy, from Forskningslab: It og Læringsdesign (ILD-LAB), Department of Communication and Psychology, Aalborg University in Copenhagen. We followed the Coding Class project and carried out its evaluation and documentation from November 2016 to May 2017...

  11. ANIMAL code

    International Nuclear Information System (INIS)

    Lindemuth, I.R.

    1979-01-01

    This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code. ANIMAL's physical model also appears. Formulated are temporal and spatial finite-difference equations in a manner that facilitates implementation of the algorithm. Outlined are the functions of the algorithm's FORTRAN subroutines and variables

  12. Network Coding

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 15; Issue 7. Network Coding. K V Rashmi Nihar B Shah P Vijay Kumar. General Article Volume 15 Issue 7 July 2010 pp 604-621. Fulltext. Click here to view fulltext PDF. Permanent link: https://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621 ...

  13. Expander Codes

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education; Volume 10; Issue 1. Expander Codes - The Sipser–Spielman Construction. Priti Shankar. General Article Volume 10 ... Author Affiliations. Priti Shankar, Department of Computer Science and Automation, Indian Institute of Science Bangalore 560 012, India.

  14. Design of convolutional tornado code

    Science.gov (United States)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environment and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and reduce computational complexity by abrogating the multi-level structure. The simulation results show that cTN code can provide a better packet loss protection performance with lower computation complexity than tTN code.
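As a toy illustration of the erasure-coding idea underlying tornado codes (not the tTN or cTN constructions themselves), a single XOR parity packet lets one erased packet be recovered from the survivors; the packet values and names below are invented for the sketch.

```python
# Minimal erasure-recovery sketch: one XOR parity over a group of packets
# can repair any single erasure, because XOR-ing the parity with all
# surviving packets reproduces the missing one.
def xor_parity(packets):
    p = 0
    for pkt in packets:
        p ^= pkt
    return p

data = [0b1010, 0b0110, 0b1111]
parity = xor_parity(data)

# Erase one packet, then recover it from the parity and the survivors.
erased_index = 1
survivors = [pkt for i, pkt in enumerate(data) if i != erased_index]
recovered = xor_parity(survivors + [parity])
print(recovered == data[erased_index])  # -> True
```

Tornado codes layer many sparse parity equations of this kind so that whole bursts of erasures can be peeled off iteratively.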

  15. Quantum machine learning.

    Science.gov (United States)

    Biamonte, Jacob; Wittek, Peter; Pancotti, Nicola; Rebentrost, Patrick; Wiebe, Nathan; Lloyd, Seth

    2017-09-13

    Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not to produce efficiently, so it is reasonable to postulate that quantum computers may outperform classical computers on machine learning tasks. The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges are still considerable.

  16. Asynchronized synchronous machines

    CERN Document Server

    Botvinnik, M M

    1964-01-01

    Asynchronized Synchronous Machines focuses on the theoretical research on asynchronized synchronous (AS) machines, which are "hybrids” of synchronous and induction machines that can operate with slip. Topics covered in this book include the initial equations; vector diagram of an AS machine; regulation in cases of deviation from the law of full compensation; parameters of the excitation system; and schematic diagram of an excitation regulator. The possible applications of AS machines and its calculations in certain cases are also discussed. This publication is beneficial for students and indiv

  17. An object-oriented extension for debugging the virtual machine

    Energy Technology Data Exchange (ETDEWEB)

    Pizzi, Jr, Robert G. [Univ. of California, Davis, CA (United States)

    1994-12-01

    A computer is nothing more than a virtual machine programmed by source code to perform a task. The program's source code expresses abstract constructs which are compiled into some lower-level target language. When a virtual machine breaks, it can be very difficult to debug because typical debuggers provide only low-level target implementation information to the software engineer. We believe that the debugging task can be simplified by introducing aspects of the abstract design and data into the source code. We introduce OODIE, an object-oriented extension to programming languages that allows programmers to specify a virtual environment by describing the meaning of the design and data of a virtual machine. This specification is translated into symbolic information such that an augmented debugger can present engineers with a programmable debugging environment specifically tailored for the virtual machine that is to be debugged.

  18. TORQUE MEASUREMENT IN WORM AGLOMERATION MACHINE

    Directory of Open Access Journals (Sweden)

    Marian DUDZIAK

    2014-03-01

    Full Text Available The paper presents the operating characteristics of the worm agglomeration machine and indicates the need for continuous monitoring of the torque, on which the efficiency of the machine depends. An original torque meter, built into the standard drive system of the briquetting machine, is presented, together with the benefits arising from the application of the proposed solution and exemplary measurement results obtained with this torque meter.

  19. Risk estimation using probability machines

    Science.gov (United States)

    2014-01-01

    Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306
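The idea of a nonparametric "probability machine" can be illustrated with a deliberately simple k-nearest-neighbour estimate of P(Y=1 | X). The paper itself uses random forests; the toy data, the `knn_probability` name, and the 1-D feature are assumptions of this sketch.

```python
# Hedged sketch: estimate the conditional probability P(Y=1 | X=x) with no
# model assumptions, as the fraction of positive outcomes among the k
# training points nearest to x (1-D features for simplicity).
def knn_probability(x, data, k=5):
    nearest = sorted(data, key=lambda d: abs(d[0] - x))[:k]
    return sum(y for _, y in nearest) / k

# Toy data: the outcome becomes more likely as the feature grows.
train = [(0.1, 0), (0.2, 0), (0.3, 0), (0.4, 1), (0.6, 0),
         (0.7, 1), (0.8, 1), (0.9, 1), (1.0, 1), (1.1, 1)]
print(knn_probability(0.2, train))  # low-risk region
print(knn_probability(1.0, train))  # high-risk region
```

A random-forest probability machine plays the same role but partitions the feature space adaptively, which is what gives the consistency properties discussed in the abstract.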

  20. Automation of a universal machine

    International Nuclear Information System (INIS)

    Rodriguez S, J.

    1997-01-01

    The development of the hardware and software of a control system for a servo-hydraulic machine is presented. The universal machine is an Instron, model 1331, used for mechanical testing. The software includes the acquisition of measurement data, processing, and graphic presentation of the results for tension-type assays. The control is based on a PPI (Programmable Peripheral Interface) 8255, in which the different states of the machine are set. The control functions of the machine are: a) start of an assay, b) pause in the assay, c) end of the assay, d) choice of the control mode of the machine, which can be load, stroke, or strain mode. For data acquisition, a commercial card, National Products model DAS-16, plugged into a slot of a PC, was used. Three transducers provide the analog signals: a load cell, an LVDT, and an extensometer. All data are digitized and handled in order to obtain the results in the appropriate working units. A stress-strain graph for a tension test on a specific material is displayed on the screen of the PC, showing the maximum stress, the rupture stress, and the yield stress of the material under test. (Author)
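As a rough illustration of the data reduction described above, raw load-cell and extensometer readings can be converted to engineering stress and strain; the specimen dimensions and readings below are invented for the sketch, not values from the system described.

```python
# Hedged sketch: convert load (N) and extension (mm) samples into engineering
# stress (MPa) and strain, then pick off the maximum and rupture stresses.
AREA_MM2 = 78.5   # specimen cross-section in mm^2 (assumed)
GAUGE_MM = 50.0   # extensometer gauge length in mm (assumed)

loads_n = [0, 5000, 9000, 12000, 11000]    # load-cell readings (N)
exts_mm = [0.0, 0.05, 0.12, 0.40, 0.70]    # extensometer readings (mm)

stress_mpa = [f / AREA_MM2 for f in loads_n]   # N/mm^2 is numerically MPa
strain = [dl / GAUGE_MM for dl in exts_mm]

max_stress = max(stress_mpa)        # ultimate tensile stress
rupture_stress = stress_mpa[-1]     # stress at the last recorded point
print(round(max_stress, 1), round(rupture_stress, 1))
```

The yield stress would additionally require an offset construction (e.g. the 0.2% offset line) on the stress-strain curve.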

  1. The Gambling Reducing Slot Machine

    DEFF Research Database (Denmark)

    Callesen, Mette Buhl; Thomsen, Kristine Rømer; Linnet, Jakob

    2007-01-01

      The Gambling Reducing Slot Machine - Preliminary results Mette Buhl Callesen, Kristine Rømer Thomsen, Jakob Linnet and Arne Møller The PET Centre, Aarhus University Hospital and Centre of Functionally Integrative Neuroscience, Aarhus, Denmark   Slot machines are among the most addictive forms...... and willingness to continue gambling. The results may have important implications for understanding how to reduce gambling behavior in pathological gamblers.   [1] Griffiths, M. 1999. Gambling Technologies: Prospects for Problem Gambling. Journal of Gambling Studies, vol. 15(3), pp. 265-283.    ...

  2. Machine vision systems using machine learning for industrial product inspection

    Science.gov (United States)

    Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony

    2002-02-01

    Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages, Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF stage is designed to learn visual inspection features from design data and/or from inspected products. During the OLI stage, the inspection system uses the knowledge learnt by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products, Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies, and the PCB inspection system is in the process of being deployed in a manufacturing plant.

  3. The Minimum Distance of Graph Codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn

    2011-01-01

    We study codes constructed from graphs where the code symbols are associated with the edges and the symbols connected to a given vertex are restricted to be codewords in a component code. In particular we treat such codes from bipartite expander graphs coming from Euclidean planes and other...... geometries. We give results on the minimum distances of the codes....

  4. To report the obtained results in the operation multicycle study of the L V U-1 using design data with the FCSII (1D) and PRESTO (3D) codes of the FMS system

    International Nuclear Information System (INIS)

    Montes T, J.L.; Cortes C, C.C.

    1991-07-01

    This work carries out a multicycle study for the Laguna Verde Unit 1 reactor with the FCS-II (cycles 1-20) and PRESTO (cycles 1-6) codes and compares the results obtained against those reported by General Electric. (Author)

  5. Lattice polytopes in coding theory

    Directory of Open Access Journals (Sweden)

    Ivan Soprunov

    2015-05-01

    Full Text Available In this paper we discuss combinatorial questions about lattice polytopes motivated by recent results on minimum distance estimation for toric codes. We also include a new inductive bound for the minimum distance of generalized toric codes. As an application, we give new formulas for the minimum distance of generalized toric codes for special lattice point configurations.

  6. Geochemical computer codes. A review

    International Nuclear Information System (INIS)

    Andersson, K.

    1987-01-01

    In this report a review of available codes is performed and some code intercomparisons are discussed. The number of codes treating natural waters (groundwater, lake water, sea water) is large. Most geochemical computer codes treat equilibrium conditions, although some codes with kinetic capability are available. A geochemical equilibrium model consists of a computer code, which solves a set of equations by some numerical method, and a database of the thermodynamic data required for the calculations. Some codes treat coupled geochemical and transport modeling; of these, some solve the equilibrium and transport equations simultaneously, while others solve the equations separately. The coupled codes require a large computer capacity and have therefore seen limited use so far. Three code intercomparisons have been found in the literature. It may be concluded that there are many codes available for geochemical calculations, but most of them require a user who is quite familiar with the code. The user also has to know the geochemical system in order to judge the reliability of the results. A high-quality database is necessary to obtain a reliable result. The best results may be expected for the major species of natural waters. For more complicated problems, including trace elements, precipitation/dissolution, adsorption, etc., the results seem to be less reliable. (With 44 refs.) (author)

  7. Superconducting three element synchronous ac machine

    International Nuclear Information System (INIS)

    Boyer, L.; Chabrerie, J.P.; Mailfert, A.; Renard, M.

    1975-01-01

    There is a growing interest in ac superconducting machines. Of several new concepts proposed for these machines in recent years, one of the most promising seems to be the ''three elements'' concept, which allows cancellation of the torque acting on the superconducting field winding, thus overcoming some of the major constraints. This concept leads to a device of the induction-generator type. A synchronous, three-element superconducting ac machine is described, in which a room-temperature, dc-fed rotating winding is inserted between the superconducting field winding and the ac armature. The steady-state machine theory is developed, the flux linkages are established, and the torque expressions are derived. The condition for zero torque on the field winding, as well as the resulting electrical equations of the machine, are given. The theoretical behavior of the machine is studied using phasor diagrams and assuming for the superconducting field winding either a constant-current or a constant-flux condition

  8. CANAL code

    International Nuclear Information System (INIS)

    Gara, P.; Martin, E.

    1983-01-01

    The CANAL code presented here optimizes a realistic iron free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts and move in a restricted transversal area; terminal connectors may be added, images of the bars in pole pieces may be included. A special option optimizes a real set of circular coils [fr

  9. The cognitive approach to conscious machines

    CERN Document Server

    Haikonen, Pentti O

    2003-01-01

    Could a machine have an immaterial mind? The author argues that true conscious machines can be built, but rejects artificial intelligence and classical neural networks in favour of the emulation of the cognitive processes of the brain: the flow of inner speech, inner imagery and emotions. This results in a non-numeric meaning-processing machine with distributed information representation and system reactions. It is argued that this machine would be conscious; it would be aware of its own existence and its mental content and perceive this as immaterial. Novel views on consciousness and the mind-

  10. Mechanical design and first experimental results of an upgraded technical PERMCAT reactor for tritium recovery in the fuel cycle of a fusion machine

    Energy Technology Data Exchange (ETDEWEB)

    Welte, S., E-mail: stefan.welte@kit.edu [Karlsruhe Institute of Technology (KIT), Forschungszentrum Karlsruhe, Institute for Technical Physics, Tritium Laboratory Karlsruhe, Hermann v. Helmholtz Platz 1, 76344 Eggenstein Leopoldshafen (Germany); Demange, D.; Wagner, R. [Karlsruhe Institute of Technology (KIT), Forschungszentrum Karlsruhe, Institute for Technical Physics, Tritium Laboratory Karlsruhe, Hermann v. Helmholtz Platz 1, 76344 Eggenstein Leopoldshafen (Germany)

    2010-12-15

    The PERMCAT process developed for the final clean-up stage of the Tokamak Exhaust Processing systems of the ITER tritium plant combines a catalytic reactor and a Pd/Ag permeator in a single component. A first generation technical PERMCAT has been successfully operated as part of the CAPER experiment at the Tritium Laboratory Karlsruhe for several years. Various alternative PERMCAT mechanical designs were proposed and studied on small-scale prototypes. An upgraded technical PERMCAT reactor was designed, manufactured and commissioned with deuterium. A parallel arrangement of finger-type membranes inserted in a single catalyst bed design was chosen to simplify the geometry and the manufacturing while improving the robustness of the reactor. The component has been designed and manufactured to be fully tritium compatible and also fully compatible with both process and electrical connections of the previous PERMCAT to be replaced. The new PERMCAT mechanical design is more compact and easy to manufacture. This PERMCAT reactor was submitted to functional tests and experiments based on isotopic exchanges between H₂O and D₂ to measure the processing performances. The first experimental results show decontamination factors versus flow rates better than all previously measured.

  11. Mechanical design and first experimental results of an upgraded technical PERMCAT reactor for tritium recovery in the fuel cycle of a fusion machine

    International Nuclear Information System (INIS)

    Welte, S.; Demange, D.; Wagner, R.

    2010-01-01

    The PERMCAT process developed for the final clean-up stage of the Tokamak Exhaust Processing systems of the ITER tritium plant combines a catalytic reactor and a Pd/Ag permeator in a single component. A first generation technical PERMCAT has been successfully operated as part of the CAPER experiment at the Tritium Laboratory Karlsruhe for several years. Various alternative PERMCAT mechanical designs were proposed and studied on small-scale prototypes. An upgraded technical PERMCAT reactor was designed, manufactured and commissioned with deuterium. A parallel arrangement of finger-type membranes inserted in a single catalyst bed design was chosen to simplify the geometry and the manufacturing while improving the robustness of the reactor. The component has been designed and manufactured to be fully tritium compatible and also fully compatible with both process and electrical connections of the previous PERMCAT to be replaced. The new PERMCAT mechanical design is more compact and easy to manufacture. This PERMCAT reactor was submitted to functional tests and experiments based on isotopic exchanges between H₂O and D₂ to measure the processing performances. The first experimental results show decontamination factors versus flow rates better than all previously measured.

  12. Giro form reading machine

    Science.gov (United States)

    Minh Ha, Thien; Niggeler, Dieter; Bunke, Horst; Clarinval, Jose

    1995-08-01

    Although giro forms are used by many people in daily life for money remittance in Switzerland, the processing of these forms at banks and post offices is only partly automated. We describe an ongoing project for building an automatic system that is able to recognize various items printed or written on a giro form. The system comprises three main components, namely, an automatic form feeder, a camera system, and a computer. These components are connected in such a way that the system is able to process a batch of forms without any human interaction. We present two real applications of our system in the field of payment services, which require the reading of both machine-printed and handwritten information that may appear on a giro form. One particular feature of giro forms is their flexible layout, i.e., information items are located differently from one form to another, thus requiring an additional analysis step to localize them before recognition. A commercial optical character recognition software package is used for recognition of machine-printed information, whereas handwritten information is read by our own algorithms, the details of which are presented. The system is implemented using a client/server architecture providing a high degree of flexibility to change. Preliminary results are reported supporting our claim that the system is usable in practice.

  13. Stirling cryocooler test results and design model verification

    International Nuclear Information System (INIS)

    Shimko, M.A.; Stacy, W.D.; McCormick, J.A.

    1990-01-01

    This paper reports on progress in developing a long-life Stirling-cycle cryocooler for spaceborne applications. It presents the results from tests on a preliminary breadboard version of the cryocooler used to demonstrate the feasibility of the technology and to validate the regenerator design code used in its development. This machine achieved a cold-end temperature of 65 K while carrying a 1/2-Watt cooling load. The basic machine is a double-acting, flexure-bearing, split-Stirling design with linear electromagnetic drives for the expander and compressors. Flat metal diaphragms replace pistons for both sweeping and sealing the machine working volumes. In addition, the double-acting expander couples to a laminar-channel counterflow recuperative heat exchanger for regeneration. A PC-compatible design code was developed for this design approach that calculates regenerator loss, including heat transfer irreversibilities, pressure drop, and axial conduction in the regenerator walls
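One of the regenerator loss terms named above, axial conduction in the regenerator walls, reduces in the simplest steady one-dimensional case to Fourier's law, Q = k·A·ΔT/L. The sketch below uses invented material and geometry values, not figures from the paper.

```python
# Hedged sketch of the simplest axial-conduction loss estimate: steady 1-D
# Fourier conduction along the regenerator wall from the warm end to the
# cold end. Values are illustrative assumptions only.
def axial_conduction_w(k_w_mk, area_m2, t_hot_k, t_cold_k, length_m):
    """Steady axial conduction heat leak (W) along the regenerator wall."""
    return k_w_mk * area_m2 * (t_hot_k - t_cold_k) / length_m

# Thin stainless shell: k ~ 15 W/m-K, 20 mm^2 wall cross-section,
# 300 K warm end, 65 K cold end, 60 mm regenerator length.
leak = axial_conduction_w(15.0, 20e-6, 300.0, 65.0, 0.06)
print(round(leak, 2))  # about 1.2 W of parasitic load at the cold end
```

A full regenerator code must add the gas-side losses (heat-transfer irreversibility and pressure drop) on top of this wall term.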

  14. Classifying injury narratives of large administrative databases for surveillance-A practical approach combining machine learning ensembles and human review.

    Science.gov (United States)

    Marucci-Wellman, Helen R; Corns, Helen L; Lehto, Mark R

    2017-01-01

    Injury narratives are now available in real time and include useful information for injury surveillance and prevention. However, manual classification of the cause or events leading to injury in large batches of narratives, such as workers compensation claims databases, can be prohibitive. In this study we compare the utility of four machine learning algorithms (Naïve Bayes single-word and bi-gram models, Support Vector Machine, and Logistic Regression) for classifying narratives into Bureau of Labor Statistics Occupational Injury and Illness event-leading-to-injury classifications for a large workers compensation database. These algorithms are known to do well at classifying narrative text and are fairly easy to implement with off-the-shelf software packages such as Python. We propose human-machine learning ensemble approaches which maximize the power and accuracy of the algorithms for machine-assigned codes and allow for strategic filtering of rare, emerging, or ambiguous narratives for manual review. We compare human-machine approaches based on filtering on the prediction strength of the classifier vs. agreement between algorithms. Regularized Logistic Regression (LR) was the best-performing algorithm alone. Using this algorithm and filtering out the bottom 30% of predictions for manual review resulted in high accuracy (overall sensitivity/positive predictive value of 0.89) of the final machine-human coded dataset. The best pairings of algorithms included Naïve Bayes with Support Vector Machine, whereby the triple ensemble NB(single-word) = NB(bi-gram) = SVM (i.e., narratives auto-coded only when all three classifiers agree) had very high performance (0.93 overall sensitivity/positive predictive value) and high accuracy across both large and small categories, leaving 41% of the narratives for manual review. Integrating LR into this ensemble mix improved performance only slightly. For large administrative datasets we propose incorporation of methods based on human-machine pairings such as
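The prediction-strength filtering strategy described above can be sketched as follows; the narrative IDs, event codes, strength values, and the `split_for_review` helper are illustrative assumptions, not the authors' pipeline.

```python
# Hedged sketch of human-machine filtering: accept the machine-assigned code
# when the classifier's prediction strength is high, and route the weakest
# fraction (here 30%) of predictions to a human coder for manual review.
def split_for_review(predictions, review_fraction=0.30):
    """predictions: list of (narrative_id, code, strength).
    Returns (auto_coded, needs_manual_review)."""
    ranked = sorted(predictions, key=lambda p: p[2])
    cut = int(len(ranked) * review_fraction)
    return ranked[cut:], ranked[:cut]

preds = [(1, "fall", 0.95), (2, "struck_by", 0.40), (3, "overexertion", 0.88),
         (4, "fall", 0.55), (5, "caught_in", 0.22), (6, "struck_by", 0.91),
         (7, "overexertion", 0.67), (8, "fall", 0.30), (9, "caught_in", 0.83),
         (10, "fall", 0.76)]
auto, manual = split_for_review(preds)
print(len(auto), len(manual))          # -> 7 3
print(sorted(m[0] for m in manual))    # the weakest predictions go to review
```

The agreement-based alternative replaces the strength threshold with a check that several independently trained classifiers assign the same code.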

  15. Game-powered machine learning.

    Science.gov (United States)

    Barrington, Luke; Turnbull, Douglas; Lanckriet, Gert

    2012-04-24

    Searching for relevant content in a massive amount of multimedia information is facilitated by accurately annotating each image, video, or song with a large number of relevant semantic keywords, or tags. We introduce game-powered machine learning, an integrated approach to annotating multimedia content that combines the effectiveness of human computation, through online games, with the scalability of machine learning. We investigate this framework for labeling music. First, a socially-oriented music annotation game called Herd It collects reliable music annotations based on the "wisdom of the crowds." Second, these annotated examples are used to train a supervised machine learning system. Third, the machine learning system actively directs the annotation games to collect new data that will most benefit future model iterations. Once trained, the system can automatically annotate a corpus of music much larger than what could be labeled using human computation alone. Automatically annotated songs can be retrieved based on their semantic relevance to text-based queries (e.g., "funky jazz with saxophone," "spooky electronica," etc.). Based on the results presented in this paper, we find that actively coupling annotation games with machine learning provides a reliable and scalable approach to making searchable massive amounts of multimedia data.

  16. Performance evaluation based on data from code reviews

    OpenAIRE

    Andrej, Sekáč

    2016-01-01

    Context. Modern code review tools such as Gerrit have made available great amounts of code review data from different open source projects as well as other commercial projects. Code reviews are used to keep the quality of produced source code under control but the stored data could also be used for evaluation of the software development process. Objectives. This thesis uses machine learning methods for an approximation of review expert’s performance evaluation function. Due to limitations in ...

  17. PEAR code review

    International Nuclear Information System (INIS)

    De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.

    1997-07-01

    As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)

  18. High Order Tensor Formulation for Convolutional Sparse Coding

    KAUST Repository

    Bibi, Adel Aamer; Ghanem, Bernard

    2017-01-01

    Convolutional sparse coding (CSC) has gained attention for its successful role as a reconstruction and a classification tool in the computer vision and machine learning community. Current CSC methods can only reconstruct single-feature 2D images

  19. Quasi-cyclic unit memory convolutional codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Paaske, Erik; Ballan, Mark

    1990-01-01

    Unit memory convolutional codes with generator matrices, which are composed of circulant submatrices, are introduced. This structure facilitates the analysis of efficient search for good codes. Equivalences among such codes and some of the basic structural properties are discussed. In particular......, catastrophic encoders and minimal encoders are characterized and dual codes treated. Further, various distance measures are discussed, and a number of good codes, some of which result from efficient computer search and some of which result from known block codes, are presented...
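A circulant submatrix of the kind used in these generator matrices is fully determined by its first row, with each subsequent row a cyclic shift of the previous one. A minimal sketch (the specific first row below is arbitrary, not a code from the paper):

```python
# Hedged sketch: build a binary circulant matrix from its first row.
# Row i is the first row cyclically shifted i places to the right, which is
# why searching over such codes only requires searching over first rows.
def circulant(first_row):
    n = len(first_row)
    # first_row[-i:] + first_row[:-i] is a right cyclic shift by i
    # (for i = 0 this reduces to the original row).
    return [first_row[-i:] + first_row[:-i] for i in range(n)]

G0 = circulant([1, 0, 1, 0])
for row in G0:
    print(row)
```

A quasi-cyclic generator matrix is then assembled from several such blocks, so code equivalence and distance searches operate on a handful of defining rows rather than full matrices.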

  20. Comparison and validation of the results of the AZNHEX v.1.0 code with the MCNP code simulating the core of a fast reactor cooled with sodium; Comparacion y validacion de los resultados del codigo AZNHEX v.1.0 con el codigo MCNP simulando el nucleo de un reactor rapido refrigerado con sodio

    Energy Technology Data Exchange (ETDEWEB)

    Galicia A, J.; Francois L, J. L.; Bastida O, G. E. [UNAM, Facultad de Ingenieria, Departamento de Sistemas Energeticos, Ciudad Universitaria, 04510 Ciudad de Mexico (Mexico); Esquivel E, J., E-mail: blink19871@hotmail.com [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2016-09-15

    The development of the AZTLAN platform for the analysis and design of nuclear reactors is led by the Instituto Nacional de Investigaciones Nucleares (ININ) and divided into four working groups with well-defined activities, so that significant progress is achieved in this project both individually and jointly. Among these working groups is the users group, whose main task is to use the codes that make up the AZTLAN platform and provide feedback to the developers, so that the final versions of the codes are efficient and, at the same time, reliable and easy to understand. In this paper we present the results provided by the AZNHEX v.1.0 code when simulating the core of a sodium-cooled fast reactor at steady state. The validation of these results is a fundamental part of the platform development and the responsibility of the users group, so in this research the results obtained with AZNHEX are compared and analyzed against those provided by the Monte Carlo code MCNP-5, software used and recognized worldwide. A description of the methodology used with MCNP-5 for the calculation of the variables of interest is also presented, together with the differences obtained with respect to the AZNHEX calculations. (Author)

  1. I2D: code for conversion of ISOTXS structured data to DTF and ANISN structured tables

    International Nuclear Information System (INIS)

    Resnik, W.M. II.

    1977-06-01

    The I2D code converts neutron cross-section data written in the standard interface file format called ISOTXS to a matrix structured format commonly called DTF tables. Several BCD and binary output options are available including FIDO (ANISN) format. The I2D code adheres to the guidelines established by the Committee on Computer Code Coordination for standardized code development. Since some machine dependency is inherent regardless of the degree of standardization, provisions have been made in the I2D code for easy implementation on either short-word machines (IBM) or on long-word machines (CDC). 3 figures, 5 tables

  2. Computer code MLCOSP for multiple-correlation and spectrum analysis with a hybrid computer

    International Nuclear Information System (INIS)

    Oguma, Ritsuo; Fujii, Yoshio; Usui, Hozumi; Watanabe, Koichi

    1975-10-01

    Usage of the computer code MLCOSP (Multiple Correlation and Spectrum) is described for a hybrid computer installed at JAERI. Functions of the hybrid computer and its terminal devices are used ingeniously in the code to reduce the complexity of the data handling that occurs in the analysis of multivariable experimental data and to keep the analysis in perspective. Features of the code are as follows: experimental data can be fed to the digital computer through the analog part of the hybrid computer by connecting a data recorder; the computed results are displayed as figures, and hardcopies are taken when necessary; messages from the code are shown on the terminal, so man-machine communication is possible; and, further, data can be entered through a keyboard, so case studies based on the results of the analysis are possible. (auth.)

  3. Quadrilateral Micro-Hole Array Machining on Invar Thin Film: Wet Etching and Electrochemical Fusion Machining

    Directory of Open Access Journals (Sweden)

    Woong-Kirl Choi

    2018-01-01

    Full Text Available Ultra-precision products which contain a micro-hole array have recently shown remarkable demand growth in many fields, especially in the semiconductor and display industries. Photoresist etching and electrochemical machining are widely known as precision methods for machining micro-holes with no residual stress and lower surface roughness on the fabricated products. The Invar shadow masks used for organic light-emitting diodes (OLEDs contain numerous micro-holes and are currently machined by a photoresist etching method. However, this method has several problems, such as uncontrollable hole machining accuracy, non-etched areas, and overcutting. To solve these problems, a machining method that combines photoresist etching and electrochemical machining can be applied. In this study, negative photoresist with a quadrilateral hole array pattern was dry coated onto 30-µm-thick Invar thin film, and then exposure and development were carried out. After that, photoresist single-side wet etching and a fusion method of wet etching-electrochemical machining were used to machine micro-holes on the Invar. The hole machining geometry, surface quality, and overcutting characteristics of the methods were studied. Wet etching and electrochemical fusion machining can improve the accuracy and surface quality. The overcutting phenomenon can also be controlled by the fusion machining. Experimental results show that the proposed method is promising for the fabrication of Invar film shadow masks.

  4. Determination of the dead layer and full-energy peak efficiency of an HPGe detector using the MCNP code and experimental results

    Directory of Open Access Journals (Sweden)

    M Moeinifar

    2017-02-01

    Full Text Available One important factor in using a High Purity Germanium (HPGe) detector is its efficiency, which depends strongly on the geometry and on absorption factors, so that whenever the source-detector geometry is changed, the detector efficiency must be re-measured. The best way to determine the efficiency of a detector is to measure it with standard sources. But considering that standard sources are hardly available and obtaining them is time consuming, determining the efficiency by simulation, which gives sufficient accuracy in less time, is important. In this study, the dead layer thickness and the full-energy peak efficiency of an HPGe detector were obtained by Monte Carlo simulation using the MCNPX code. We first measured gamma-ray spectra for different sources placed at various distances from the detector and stored the measured spectra. The same configurations were then simulated. At first, the whole volume of germanium was regarded as active, and the spectra obtained from the calculation were compared with the corresponding experimental spectra; the comparison showed considerable differences. By making small variations in the dead layer thickness of the detector (about a few hundredths of a millimeter) in the simulation program, we removed these differences, and in this way a dead layer of 0.57 mm was obtained for the detector. Incorporating this value for the dead layer in the simulation program, the full-energy peak efficiency of the detector was then obtained both by experiment and by simulation for various sources at various distances from the detector, and the two methods showed good agreement. One can therefore conclude that, using the MCNP code and modeling the measurement system exactly, the efficiency of an HPGe detector for various source-detector geometries can be calculated with rather good accuracy by simulation.
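
    The trial-and-error adjustment described above, varying an assumed dead-layer thickness until the simulated efficiency matches the measured one, can be sketched as a simple grid search. Everything below (the linear efficiency model, the numbers, the function names) is an invented stand-in for the actual MCNPX calculation:

    ```python
    import numpy as np

    # Toy illustration (not the MCNPX model): the simulated full-energy peak
    # efficiency is assumed to fall off linearly with the dead-layer thickness t:
    # eff_sim(t) = eff0 * (1 - k * t).  All numbers are hypothetical.
    def simulated_efficiency(t_mm, eff0=0.05, k=0.8):
        return eff0 * (1.0 - k * t_mm)

    def fit_dead_layer(measured_eff, t_grid):
        """Pick the dead-layer thickness whose simulated efficiency best
        matches the measured one (grid search over small thickness steps,
        mirroring the adjustment by a few hundredths of a millimeter)."""
        diffs = [abs(simulated_efficiency(t) - measured_eff) for t in t_grid]
        return t_grid[int(np.argmin(diffs))]

    t_grid = np.arange(0.0, 1.0, 0.01)   # 0.00 .. 0.99 mm in 0.01 mm steps
    best_t = fit_dead_layer(measured_eff=0.0272, t_grid=t_grid)
    ```

    In a real workflow each call to `simulated_efficiency` would be a full Monte Carlo run, so the search would be coarse and guided by hand rather than exhaustive.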

  5. Machine assisted histogram classification

    Science.gov (United States)

    Benyó, B.; Gaspar, C.; Somogyi, P.

    2010-04-01

    LHCb is one of the four major experiments under completion at the Large Hadron Collider (LHC). Monitoring the quality of the acquired data is important, because it allows the verification of the detector performance. Anomalies, such as missing values or unexpected distributions can be indicators of a malfunctioning detector, resulting in poor data quality. Spotting faulty or ageing components can be either done visually using instruments, such as the LHCb Histogram Presenter, or with the help of automated tools. In order to assist detector experts in handling the vast monitoring information resulting from the sheer size of the detector, we propose a graph based clustering tool combined with machine learning algorithm and demonstrate its use by processing histograms representing 2D hitmaps events. We prove the concept by detecting ion feedback events in the LHCb experiment's RICH subdetector.

  6. Machine assisted histogram classification

    Energy Technology Data Exchange (ETDEWEB)

    Benyo, B; Somogyi, P [BME-IIT, H-1117 Budapest, Magyar tudosok koerutja 2. (Hungary); Gaspar, C, E-mail: Peter.Somogyi@cern.c [CERN-PH, CH-1211 Geneve 23 (Switzerland)

    2010-04-01

    LHCb is one of the four major experiments under completion at the Large Hadron Collider (LHC). Monitoring the quality of the acquired data is important, because it allows the verification of the detector performance. Anomalies, such as missing values or unexpected distributions can be indicators of a malfunctioning detector, resulting in poor data quality. Spotting faulty or ageing components can be either done visually using instruments, such as the LHCb Histogram Presenter, or with the help of automated tools. In order to assist detector experts in handling the vast monitoring information resulting from the sheer size of the detector, we propose a graph based clustering tool combined with machine learning algorithm and demonstrate its use by processing histograms representing 2D hitmaps events. We prove the concept by detecting ion feedback events in the LHCb experiment's RICH subdetector.

  7. The fast code

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)

    1996-09-01

    The FAST Code, which is capable of determining structural loads on a flexible, teetering, horizontal-axis wind turbine, is described, and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive-train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included, as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth-averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)

  8. A Very Fast and Angular Momentum Conserving Tree Code

    International Nuclear Information System (INIS)

    Marcello, Dominic C.

    2017-01-01

    There are many methods used to compute the classical gravitational field in astrophysical simulation codes. With the exception of the typically impractical method of direct computation, none ensure conservation of angular momentum to machine precision. Under uniform time-stepping, the Cartesian fast multipole method of Dehnen (also known as the very fast tree code) conserves linear momentum to machine precision. We show that it is possible to modify this method in a way that conserves both angular and linear momenta.

  9. A Very Fast and Angular Momentum Conserving Tree Code

    Energy Technology Data Exchange (ETDEWEB)

    Marcello, Dominic C., E-mail: dmarce504@gmail.com [Department of Physics and Astronomy, and Center for Computation and Technology Louisiana State University, Baton Rouge, LA 70803 (United States)

    2017-09-01

    There are many methods used to compute the classical gravitational field in astrophysical simulation codes. With the exception of the typically impractical method of direct computation, none ensure conservation of angular momentum to machine precision. Under uniform time-stepping, the Cartesian fast multipole method of Dehnen (also known as the very fast tree code) conserves linear momentum to machine precision. We show that it is possible to modify this method in a way that conserves both angular and linear momenta.

  10. Pattern recognition & machine learning

    CERN Document Server

    Anzai, Y

    1992-01-01

    This is the first text to provide a unified and self-contained introduction to visual pattern recognition and machine learning. It is useful as a general introduction to artificial intelligence and knowledge engineering, and no previous knowledge of pattern recognition or machine learning is necessary. It covers the basics of various pattern recognition and machine learning methods. Translated from Japanese, the book also features chapter exercises, keywords, and summaries.

  11. Support vector machines applications

    CERN Document Server

    Guo, Guodong

    2014-01-01

    Support vector machines (SVM) have both a solid mathematical background and good performance in practical applications. This book focuses on the recent advances and applications of the SVM in different areas, such as image processing, medical practice, computer vision, pattern recognition, machine learning, applied statistics, business intelligence, and artificial intelligence. The aim of this book is to create a comprehensive source on support vector machine applications, especially some recent advances.

  12. The Newest Machine Material

    International Nuclear Information System (INIS)

    Seo, Yeong Seop; Choe, Byeong Do; Bang, Meong Sung

    2005-08-01

    This book gives descriptions of machine material with classification of machine material and selection of machine material, structure and connection of material, coagulation of metal and crystal structure, equilibrium diagram, properties of metal material, elasticity and plasticity, biopsy of metal, material test and nondestructive test. It also explains steel material such as heat treatment of steel, cast iron and cast steel, nonferrous metal materials, non metallic materials, and new materials.

  13. Introduction to machine learning

    OpenAIRE

    Baştanlar, Yalın; Özuysal, Mustafa

    2014-01-01

    The machine learning field, which can be briefly defined as enabling computers to make successful predictions using past experiences, has exhibited impressive development recently with the help of the rapid increase in the storage capacity and processing power of computers. Together with many other disciplines, machine learning methods have been widely employed in bioinformatics. The difficulties and cost of biological analyses have led to the development of sophisticated machine learning app...

  14. Machinability of advanced materials

    CERN Document Server

    Davim, J Paulo

    2014-01-01

    Machinability of Advanced Materials addresses the level of difficulty involved in machining a material, or multiple materials, with the appropriate tooling and cutting parameters.  A variety of factors determine a material's machinability, including tool life rate, cutting forces and power consumption, surface integrity, limiting rate of metal removal, and chip shape. These topics, among others, and multiple examples comprise this research resource for engineering students, academics, and practitioners.

  15. Machining of titanium alloys

    CERN Document Server

    2014-01-01

    This book presents a collection of examples illustrating the recent research advances in the machining of titanium alloys. These materials have excellent strength and fracture toughness as well as low density and good corrosion resistance; however, machinability is still poor due to their low thermal conductivity and high chemical reactivity with cutting tool materials. This book presents solutions to enhance machinability in titanium-based alloys and serves as a useful reference for professionals and researchers in the aerospace, automotive and biomedical fields.

  16. Evaluation of leak rate by EPRI code

    International Nuclear Information System (INIS)

    Isozaki, Toshikuni; Hashiguchi, Issei; Kato, Kiyoshi; Miyazono, Shohachiro

    1987-08-01

    Starting in 1987, research on the leak rate from a cracked pipe under BWR or PWR operating conditions is going to be carried out at the authors' laboratory. This report describes the results computed with EPRI's leak rate code, which was mounted on the JAERI FACOM-M380 machine. Henry's critical flow model is used in this program. For the planning of the experimental research, the leak rate from a crack under BWR or PWR operating conditions is computed while varying the crack length 2c, the crack opening displacement COD, and the pipe diameter. The COD value that yields the minimum detectable leak rate of 5 gpm is 0.22 mm or 0.21 mm under the BWR or PWR condition, respectively, with 2c = 100 mm and 16B pipe geometry. The complete listings are given in the appendix. (author)

  17. New quantum codes constructed from quaternary BCH codes

    Science.gov (United States)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Secondly, we apply a combinatorial construction to the imprimitive BCH codes and their primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  18. Tribology in machine design

    CERN Document Server

    Stolarski, Tadeusz

    1999-01-01

    ""Tribology in Machine Design is strongly recommended for machine designers, and engineers and scientists interested in tribology. It should be in the engineering library of companies producing mechanical equipment.""Applied Mechanics ReviewTribology in Machine Design explains the role of tribology in the design of machine elements. It shows how algorithms developed from the basic principles of tribology can be used in a range of practical applications within mechanical devices and systems.The computer offers today's designer the possibility of greater stringen

  19. Induction machine handbook

    CERN Document Server

    Boldea, Ion

    2002-01-01

    Often called the workhorse of industry, the advent of power electronics and advances in digital control are transforming the induction motor into the racehorse of industrial motion control. Now, the classic texts on induction machines are nearly three decades old, while more recent books on electric motors lack the necessary depth and detail on induction machines.The Induction Machine Handbook fills industry's long-standing need for a comprehensive treatise embracing the many intricate facets of induction machine analysis and design. Moving gradually from simple to complex and from standard to

  20. Chaotic Boltzmann machines

    Science.gov (United States)

    Suzuki, Hideyuki; Imura, Jun-ichi; Horio, Yoshihiko; Aihara, Kazuyuki

    2013-01-01

    The chaotic Boltzmann machine proposed in this paper is a chaotic pseudo-billiard system that works as a Boltzmann machine. Chaotic Boltzmann machines are shown numerically to have computing abilities comparable to conventional (stochastic) Boltzmann machines. Since no randomness is required, efficient hardware implementation is expected. Moreover, the ferromagnetic phase transition of the Ising model is shown to be characterised by the largest Lyapunov exponent of the proposed system. In general, a method to relate probabilistic models to nonlinear dynamics by derandomising Gibbs sampling is presented. PMID:23558425

  1. Electrical machines & drives

    CERN Document Server

    Hammond, P

    1985-01-01

    Containing approximately 200 problems (100 worked), the text covers a wide range of topics concerning electrical machines, placing particular emphasis upon electrical-machine drive applications. The theory is concisely reviewed and focuses on features common to all machine types. The problems are arranged in order of increasing levels of complexity and discussions of the solutions are included where appropriate to illustrate the engineering implications. This second edition includes an important new chapter on mathematical and computer simulation of machine systems and revised discussions o

  2. Nanocomposites for Machining Tools

    Directory of Open Access Journals (Sweden)

    Daria Sidorenko

    2017-10-01

    Full Text Available Machining tools are used in many areas of production. To a considerable extent, the performance characteristics of the tools determine the quality and cost of obtained products. The main materials used for producing machining tools are steel, cemented carbides, ceramics and superhard materials. A promising way to improve the performance characteristics of these materials is to design new nanocomposites based on them. The application of micromechanical modeling during the elaboration of composite materials for machining tools can reduce the financial and time costs for development of new tools, with enhanced performance. This article reviews the main groups of nanocomposites for machining tools and their performance.

  3. Machine listening intelligence

    Science.gov (United States)

    Cella, C. E.

    2017-05-01

    This manifesto paper will introduce machine listening intelligence, an integrated research framework for acoustic and musical signals modelling, based on signal processing, deep learning and computational musicology.

  4. Rotating electrical machines

    CERN Document Server

    Le Doeuff, René

    2013-01-01

    In this book a general matrix-based approach to modeling electrical machines is promulgated. The model uses instantaneous quantities for key variables and enables the user to easily take into account associations between rotating machines and static converters (such as in variable speed drives).   General equations of electromechanical energy conversion are established early in the treatment of the topic and then applied to synchronous, induction and DC machines. The primary characteristics of these machines are established for steady state behavior as well as for variable speed scenarios. I

  5. Parallel Boltzmann machines : a mathematical model

    NARCIS (Netherlands)

    Zwietering, P.J.; Aarts, E.H.L.

    1991-01-01

    A mathematical model is presented for the description of parallel Boltzmann machines. The framework is based on the theory of Markov chains and combines a number of previously known results into one generic model. It is argued that parallel Boltzmann machines maximize a function consisting of a

  6. The convergence of parallel Boltzmann machines

    NARCIS (Netherlands)

    Zwietering, P.J.; Aarts, E.H.L.; Eckmiller, R.; Hartmann, G.; Hauske, G.

    1990-01-01

    We discuss the main results obtained in a study of a mathematical model of synchronously parallel Boltzmann machines. We present supporting evidence for the conjecture that a synchronously parallel Boltzmann machine maximizes a consensus function that consists of a weighted sum of the regular

  7. Machine learning approaches in medical image analysis

    DEFF Research Database (Denmark)

    de Bruijne, Marleen

    2016-01-01

    Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols, learning from weak labels, and interpretation and evaluation of results.

  8. Coil Optimization for High Temperature Superconductor Machines

    DEFF Research Database (Denmark)

    Mijatovic, Nenad; Jensen, Bogi Bech; Abrahamsen, Asger Bech

    2011-01-01

    This paper presents topology optimization of HTS racetrack coils for large HTS synchronous machines. The topology optimization is used to acquire optimal coil designs for the excitation system of 3 T HTS machines. Several tapes are evaluated and the optimization results are discussed. The optimiz...

  9. Design Guidelines for Coffee Vending Machines

    OpenAIRE

    Schneidermeier, Tim; Burghardt, Manuel; Wolff, Christian

    2013-01-01

    Walk-up-and-use-systems such as vending and self-service machines request special attention concerning an easy to use and self-explanatory user interface. In this paper we present a set of design guidelines for coffee vending machines based on the results of an expert-based usability evaluation of thirteen different models.

  10. Dynamic Performances of Asynchronous Machines | Ubeku ...

    African Journals Online (AJOL)

    The per-phase parameters of a 1.5 hp, 380 V, 50 Hz, 4-pole, 3-phase asynchronous machine used in the simulation were computed from readings obtained from dc, no-load, and blocked-rotor tests carried out on the machine in the laboratory. The results obtained from the computer simulations confirmed the capabilities ...

  11. Straightness measurement of large machine guideways

    Directory of Open Access Journals (Sweden)

    W. Ptaszyński

    2011-10-01

    Full Text Available This paper describes the guideway types of large machines and the problems encountered in measuring their straightness. A short description of straightness measurement methods is given, along with the results of an investigation into the straightness of the 10-meter-long guideways of a CNC machine by means of the XL-10 Renishaw interferometer.

  12. A Machine Learning Framework for Plan Payment Risk Adjustment.

    Science.gov (United States)

    Rose, Sherri

    2016-12-01

    To introduce cross-validation and a nonparametric machine learning framework for plan payment risk adjustment and then assess whether they have the potential to improve risk adjustment. 2011-2012 Truven MarketScan database. We compare the performance of multiple statistical approaches within a broad machine learning framework for estimation of risk adjustment formulas. Total annual expenditure was predicted using age, sex, geography, inpatient diagnoses, and hierarchical condition category variables. The methods included regression, penalized regression, decision trees, neural networks, and an ensemble super learner, all in concert with screening algorithms that reduce the set of variables considered. The performance of these methods was compared based on cross-validated R^2. Our results indicate that a simplified risk adjustment formula selected via this nonparametric framework maintains much of the efficiency of a traditional larger formula. The ensemble approach also outperformed classical regression and all other algorithms studied. The implementation of cross-validated machine learning techniques provides novel insight into risk adjustment estimation, possibly allowing for a simplified formula, thereby reducing incentives for increased coding intensity as well as the ability of insurers to "game" the system with aggressive diagnostic upcoding. © Health Research and Educational Trust.
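
    The cross-validated comparison of risk-adjustment formulas described above can be sketched with ordinary least squares standing in for the paper's full algorithm suite; the data, fold count, and variable screening below are invented for illustration:

    ```python
    import numpy as np

    # Hypothetical data: 5 candidate predictors, only 3 of which are informative.
    rng = np.random.default_rng(0)
    n, p = 200, 5
    X = rng.normal(size=(n, p))
    beta = np.array([2.0, -1.0, 0.5, 0.0, 0.0])
    y = X @ beta + rng.normal(scale=0.5, size=n)   # "annual expenditure" stand-in

    def cv_r2(X, y, cols, k=5):
        """k-fold cross-validated R^2 of an OLS formula restricted to `cols`."""
        idx = np.arange(len(y))
        folds = np.array_split(idx, k)
        ss_res, ss_tot = 0.0, 0.0
        for f in folds:
            train = np.setdiff1d(idx, f)
            b, *_ = np.linalg.lstsq(X[train][:, cols], y[train], rcond=None)
            pred = X[f][:, cols] @ b
            ss_res += np.sum((y[f] - pred) ** 2)
            ss_tot += np.sum((y[f] - y[f].mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    full = cv_r2(X, y, cols=[0, 1, 2, 3, 4])   # traditional larger formula
    screened = cv_r2(X, y, cols=[0, 1, 2])     # screening keeps 3 variables
    ```

    On data like this the screened formula retains essentially all of the predictive efficiency of the full one, which is the kind of comparison the cross-validated framework formalizes.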

  13. High Temperature Superconductor Machine Prototype

    DEFF Research Database (Denmark)

    Mijatovic, Nenad; Jensen, Bogi Bech; Træholt, Chresten

    2011-01-01

    A versatile testing platform for a High Temperature Superconductor (HTS) machine has been constructed. The stationary HTS field winding can carry up to 10 coils and it is operated at a temperature of 77K. The rotating armature is at room temperature. Test results and performance for the HTS field...

  14. Making extreme computations possible with virtual machines

    International Nuclear Information System (INIS)

    Reuter, J.; Chokoufe Nejad, B.

    2016-02-01

    State-of-the-art algorithms generate scattering amplitudes for high-energy physics at leading order for high-multiplicity processes as compiled code (in Fortran, C or C++). For complicated processes the size of these libraries can become tremendous (many GiB). We show that amplitudes can be translated to byte-code instructions, which reduces the size by one order of magnitude. The byte-code is interpreted by a virtual machine with runtimes comparable to compiled code and better scaling with additional legs. We study the properties of this algorithm as an extension of the Optimizing Matrix Element Generator (O'Mega). The byte-code matrix elements are available as an alternative input for the event generator WHIZARD. The byte-code interpreter can be implemented very compactly, which will help with a future implementation on massively parallel GPUs.
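
    The stack-machine idea behind such a byte-code interpreter can be sketched with an invented three-opcode instruction set (not O'Mega's actual byte-code): an arithmetic expression is stored as compact instructions and evaluated by a small virtual machine instead of being compiled to native code.

    ```python
    # Invented opcodes for a minimal stack-based virtual machine.
    PUSH, ADD, MUL = 0, 1, 2

    def run(bytecode):
        """Interpret a stack-based byte-code program and return the result."""
        stack, pc = [], 0
        while pc < len(bytecode):
            op = bytecode[pc]
            if op == PUSH:
                pc += 1                      # operand follows the opcode
                stack.append(bytecode[pc])
            elif op == ADD:
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == MUL:
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            pc += 1
        return stack.pop()

    # (2 + 3) * 4, encoded as byte-code
    program = [PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL]
    result = run(program)   # -> 20
    ```

    A real amplitude interpreter would add opcodes for complex arithmetic and wave-function construction, but the dispatch loop and operand stack are the same in spirit.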

  15. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  16. Are there intelligent Turing machines?

    OpenAIRE

    Bátfai, Norbert

    2015-01-01

    This paper introduces a new computing model based on cooperation among Turing machines, called orchestrated machines. Like universal Turing machines, orchestrated machines are also designed to simulate Turing machines, but they can also modify the original operation of the included Turing machines to create a new layer of collective behavior. Using this new model we can define some interesting notions related to the cooperation ability of Turing machines, such as the intelligence quo...

  17. Efficient DS-UWB MUD Algorithm Using Code Mapping and RVM

    Directory of Open Access Journals (Sweden)

    Pingyan Shi

    2016-01-01

    Full Text Available A hybrid multiuser detection (MUD) scheme using code mapping and wrong-code recognition based on the relevance vector machine (RVM) is developed for the direct sequence ultra-wideband (DS-UWB) system to cope with multiple access interference (MAI) and computational efficiency. A new MAI suppression mechanism is studied in the following steps: first, code mapping, an optimal decision function, is constructed, and the output candidate code of the matched filter is mapped to a feature space by this function. In the feature space, simulation results show that the error codes caused by MAI and the single-user mapped codes can be separated by a threshold that is related to the SNR of the receiver. Then, on the basis of code mapping, RVM is used to distinguish the wrong codes from the right ones and finally correct them. Compared with traditional MUD approaches, the proposed method can considerably improve the bit error ratio (BER) performance due to its special MAI suppression mechanism. Simulation results also show that the proposed method approximately achieves the BER performance of optimal multiuser detection (OMD), while its computational complexity approximately equals that of the matched filter. Moreover, the proposed method is less sensitive to the number of users.

  18. Beam Dynamics Studies in Recirculating Machines

    CERN Document Server

    Pellegrini, Dario; Latina, A

    The LHeC and the CLIC Drive Beam share not only the high-current beams that make them prone to instabilities, but also unconventional lattice topologies and operational schemes in which the time sequence of the bunches varies along the machine. In order to assess the feasibility of these projects, realistic simulations taking into account the most worrisome effects and their interplay are crucial. These include linear and non-linear optics with time-dependent elements, incoherent and coherent synchrotron radiation, short- and long-range wakefields, the beam-beam effect and ion clouds. In order to investigate multi-bunch effects in recirculating machines, a new version of the tracking code PLACET has been developed from scratch. PLACET2 already integrates most of the effects mentioned above and can easily receive additional physics. Its innovative design allows it to describe complex lattices and track one or more bunches according to the machine operation, reproducing the bunch train splitting and recombinat...

  19. Parallelization of a beam dynamics code and first large scale radio frequency quadrupole simulations

    Directory of Open Access Journals (Sweden)

    J. Xu

    2007-01-01

    Full Text Available The design and operation support of hadron (proton and heavy-ion) linear accelerators require substantial use of beam dynamics simulation tools. The beam dynamics code TRACK was originally developed at Argonne National Laboratory (ANL) to fulfill the special requirements of the rare isotope accelerator (RIA) accelerator systems. From the beginning, the code has been developed to make it useful in the three stages of a linear accelerator project, namely, the design, commissioning, and operation of the machine. To realize this concept, the code has unique features such as end-to-end simulations from the ion source to the final beam destination and automatic procedures for tuning of a multiple-charge-state heavy-ion beam. The TRACK code has become a general beam dynamics code for hadron linacs and has found wide application worldwide. Until recently, the code had remained serial except for a simple parallelization used for the simulation of multiple seeds to study machine errors. To speed up computation, the TRACK Poisson solver has been parallelized. This paper discusses different parallel models for solving the Poisson equation, with the primary goal of extending the scalability of the code onto 1024 and more processors of the new generation of supercomputers known as BlueGene (BG/L). Domain decomposition techniques have been adapted and incorporated into the parallel version of the TRACK code. To demonstrate the new capabilities of the parallelized TRACK code, the dynamics of a 45 mA proton beam represented by 10^{8} particles has been simulated through the 325 MHz radio frequency quadrupole and the initial accelerator section of the proposed FNAL proton driver. The results show the benefits and advantages of large-scale parallel computing in beam dynamics simulations.
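
    The domain-decomposition idea behind such a parallel Poisson solver can be sketched in one dimension: split the grid into two subdomains that exchange boundary ("ghost") values on each Jacobi sweep. The problem size, iteration count, and right-hand side below are invented, and a real solver would run the subdomains on separate processors rather than in one loop:

    ```python
    import numpy as np

    # Solve u'' = f on (0,1) with u(0) = u(1) = 0 by Jacobi iteration,
    # updating two subdomains that each read one ghost value from the other.
    n = 64
    h = 1.0 / (n + 1)
    f = np.full(n, -2.0)              # u'' = -2  ->  exact solution u(x) = x(1-x)
    u = np.zeros(n + 2)               # includes the two boundary points

    for _ in range(20000):
        new = u.copy()
        # Left subdomain owns interior cells 1..n//2, right owns n//2+1..n;
        # each update reads the neighbour's previous value as a ghost cell.
        for lo, hi in ((1, n // 2 + 1), (n // 2 + 1, n + 1)):
            new[lo:hi] = 0.5 * (u[lo - 1:hi - 1] + u[lo + 1:hi + 1]
                                - h * h * f[lo - 1:hi - 1])
        u = new

    x = np.linspace(0.0, 1.0, n + 2)
    exact = x * (1.0 - x)
    ```

    In a distributed implementation each ghost read becomes a message exchange (e.g. over MPI), which is exactly where the scalability questions studied in the paper arise.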

  20. comparative study of moore and mealy machine models adaptation

    African Journals Online (AJOL)

    user

    automata model was developed for ABS manufacturing process using Moore and Mealy Finite State Machines. Simulation ... The simulation results showed that the Mealy Machine is faster than the Moore ..... random numbers from MATLAB.

  1. Training Restricted Boltzmann Machines

    DEFF Research Database (Denmark)

    Fischer, Asja

    Restricted Boltzmann machines (RBMs) are probabilistic graphical models that can also be interpreted as stochastic neural networks. Training RBMs is known to be challenging: computing the likelihood of the model parameters or its gradient is in general computationally intensive, so training relies on sampling-based approximations of the log-likelihood gradient. I will present an empirical and theoretical analysis of the bias of these approximations and show that the approximation error can lead to a distortion of the learning process. The bias decreases with increasing mixing rate of the applied sampling procedure, and I will introduce a transition operator that leads to faster mixing. Finally, a different parametrisation of RBMs will be discussed that leads to better learning results and more robustness against changes in the data representation.
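
    The sampling-based gradient approximation discussed above is commonly realized as contrastive divergence; a minimal single-Gibbs-step (CD-1) sketch for a tiny binary RBM, with invented sizes and learning rate and with bias terms omitted, might look like:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(W, v0, lr=0.1):
        """One CD-1 parameter update: approximate the log-likelihood gradient
        by a single Gibbs step (the kind of truncated sampling whose bias
        is analysed above)."""
        h0 = sigmoid(v0 @ W)                          # hidden probabilities
        h_sample = (rng.random(h0.shape) < h0) * 1.0  # stochastic hidden state
        v1 = sigmoid(h_sample @ W.T)                  # reconstruction
        h1 = sigmoid(v1 @ W)
        # positive phase (data) minus negative phase (one-step reconstruction)
        grad = v0[:, None] * h0[None, :] - v1[:, None] * h1[None, :]
        return W + lr * grad

    W = rng.normal(scale=0.1, size=(4, 3))   # 4 visible units, 3 hidden units
    v = np.array([1.0, 0.0, 1.0, 1.0])       # one training pattern
    for _ in range(100):
        W = cd1_update(W, v)
    ```

    Running the Gibbs chain for more steps (CD-k with larger k, or with a faster-mixing transition operator) reduces the bias of this gradient estimate at the cost of more sampling work.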

  2. Machinability of IPS Empress 2 framework ceramic.

    Science.gov (United States)

    Schmidt, C; Weigl, P

    2000-01-01

    Using ceramic materials for the automated production of ceramic dentures by CAD/CAM is a challenge, because many technological, medical, and optical demands must be considered. The IPS Empress 2 framework ceramic meets most of them. This study shows the possibilities for machining this ceramic with economical parameters. The long lifetime required of ceramic dentures demands a ductile machined surface to avoid the well-known subsurface damage of brittle materials caused by machining. Slow and rapid damage propagation begins at break-outs and cracks, and limits lifetime significantly. Ductile machined surfaces are therefore an important requirement when machining dental ceramics. The machining tests were performed with various parameters such as tool grain size and feed speed. Denture ceramics were machined by jig grinding on a 5-axis CNC milling machine (Maho HGF 500) with a high-speed spindle up to 120,000 rpm. The results of the wear test indicate low tool wear: with one tool, eight occlusal surfaces can be machined, including roughing and finishing. One occlusal surface takes about 60 min of machining time. Recommended parameters for roughing are middle diamond grain size (D107), cutting speed v(c) = 4.7 m/s, feed speed v(ft) = 1000 mm/min, depth of cut a(e) = 0.06 mm, width of contact a(p) = 0.8 mm; and for finishing, ultra-fine diamond grain size (D46), cutting speed v(c) = 4.7 m/s, feed speed v(ft) = 100 mm/min, depth of cut a(e) = 0.02 mm, width of contact a(p) = 0.8 mm. The results of the machining tests give a reference for using IPS Empress(R) 2 framework ceramic in CAD/CAM systems. Copyright 2000 John Wiley & Sons, Inc.
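    The quoted grinding parameters imply a volumetric material removal rate of feed speed x depth of cut x width of contact. A quick cross-check (an illustrative calculation, not taken from the paper):

```python
def removal_rate(v_ft, a_e, a_p):
    """Volumetric material removal rate in mm^3/min, from feed speed
    v_ft (mm/min), depth of cut a_e (mm) and width of contact a_p (mm)."""
    return v_ft * a_e * a_p

roughing = removal_rate(1000, 0.06, 0.8)   # ~48 mm^3/min
finishing = removal_rate(100, 0.02, 0.8)   # ~1.6 mm^3/min
```

    The thirty-fold drop from roughing to finishing is what buys the ductile, crack-free surface the authors require for long denture lifetime.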

  3. Parallelization methods study of thermal-hydraulics codes

    International Nuclear Information System (INIS)

    Gaudart, Catherine

    2000-01-01

    The variety of parallelization methods and machines leaves programmers with a wide range of choices. In this study we suggest, in an industrial context, some solutions based on the experience acquired with different parallelization methods. The study covers several scientific codes which simulate a large variety of thermal-hydraulics phenomena. A bibliography on parallelization methods and a first analysis of the codes showed the difficulty of applying our process to the whole set of applications under study. It was therefore necessary to identify and extract a representative part of these applications and parallelization methods. The linear solver part of the codes emerged as the natural candidate: several parallelization methods had already been used on this particular part. From these developments one can estimate the work required for a novice programmer to parallelize an application, and the impact of the development constraints. The parallelization methods tested are the numerical library PETSc, the parallelizer PAF, the language HPF, the formalism PEI, and the communication libraries MPI and PVM. In order to test several methods on different applications while minimizing the modifications to the codes, a tool called SPS (Server of Parallel Solvers) has been developed. We describe the different constraints on code optimization in an industrial context, present the solutions provided by the SPS tool, show the development of the linear solver part with the tested parallelization methods, and finally compare the results against the imposed criteria. (author) [fr

  4. Reliability assessment of the fueling machine of the CANDU reactor

    International Nuclear Information System (INIS)

    Al-Kusayer, T.A.

    1985-01-01

    Fueling of CANDU reactors is carried out by two fueling machines, each serving one end of the reactor. The fueling machine becomes a part of the primary heat transport system during refueling operations, and hence some fueling machine malfunctions could result in a small-scale loss-of-coolant accident. Fueling machine failures and the failure sequences are discussed. The unavailability of the fueling machine is estimated using fault tree analysis. The probability of mechanical failure of the fueling machine interface is estimated as 1.08 x 10^-5. (orig.) [de
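    Fault tree analysis of the kind used here combines basic-event probabilities through AND and OR gates. A minimal sketch with hypothetical event probabilities (the numbers below are illustrative, not from the paper), assuming independent basic events:

```python
def and_gate(probs):
    """An AND gate fails only if all inputs fail: product of probabilities."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """An OR gate fails if any input fails: 1 - prod(1 - q)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical tree: a redundant pair (AND) in series with a single
# component (OR at the top event).
top = or_gate([and_gate([1e-3, 2e-3]), 9e-6])
```

    For rare events the OR gate is well approximated by the sum of its inputs, which is why small per-component unavailabilities combine into top-event figures of the order quoted in the abstract.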

  5. Parallel processing Monte Carlo radiation transport codes

    International Nuclear Information System (INIS)

    McKinney, G.W.

    1994-01-01

    Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine
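    The efficiency curves mentioned here follow from a simple cost model: if communication overhead grows relative to the shrinking per-processor workload, efficiency falls. An illustrative sketch (assumed model, not MCNP's actual measurements):

```python
def efficiency(t_serial, n_proc, t_comm_per_proc):
    """Parallel efficiency = speedup / n, for a perfectly divisible
    workload t_serial plus a fixed per-processor communication cost."""
    t_parallel = t_serial / n_proc + t_comm_per_proc
    speedup = t_serial / t_parallel
    return speedup / n_proc
```

    Plotting this over n_proc for a fixed communication cost reproduces the characteristic drooping efficiency curve of distributed-memory Monte Carlo runs.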

  6. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Lei Ye

    2009-01-01

    Full Text Available This paper discusses the application of adaptive modulation and adaptive-rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10^-2) and low (10^-4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.
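    The subband adaptive method amounts to choosing, per subband, the densest constellation whose SNR requirement is met. A minimal sketch (the thresholds below are hypothetical placeholders, not the paper's values, and the real scheme also adapts the turbo-code rate):

```python
def pick_scheme(snr_db):
    """Pick the densest of the five constellations whose (hypothetical)
    SNR threshold the subband satisfies."""
    table = [(24, "64QAM"), (18, "16QAM"), (12, "8AMPM"),
             (7, "QPSK"), (3, "BPSK")]
    for threshold, name in table:
        if snr_db >= threshold:
            return name
    return "no transmission"

def adapt_subbands(subband_snrs_db):
    """Separate decision per subband, as in the conventional scheme."""
    return [pick_scheme(s) for s in subband_snrs_db]
```

    The single-turbo-code variant in the paper keeps this per-subband modulation choice but runs one code over the concatenated subbands, avoiding the short-block penalty of many small codewords.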

  7. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Burr Alister

    2009-01-01

    Full Text Available This paper discusses the application of adaptive modulation and adaptive-rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10^-2) and low (10^-4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.

  8. Students' perspectives on promoting healthful food choices from campus vending machines: a qualitative interview study.

    Science.gov (United States)

    Ali, Habiba I; Jarrar, Amjad H; Abo-El-Enen, Mostafa; Al Shamsi, Mariam; Al Ashqar, Huda

    2015-05-28

    Increasing the healthfulness of campus food environments is an important step in promoting healthful food choices among college students. This study explored university students' suggestions on promoting healthful food choices from campus vending machines. It also examined factors influencing students' food choices from vending machines. Peer-led semi-structured individual interviews were conducted with 43 undergraduate students (33 females and 10 males) recruited from students enrolled in an introductory nutrition course at a large national university in the United Arab Emirates. Interviews were audiotaped, transcribed, and coded to generate themes using NVivo software. Accessibility, peer influence, and busy schedules were the main factors influencing students' food choices from campus vending machines. Participants expressed the need to improve the nutritional quality of the food items sold in the campus vending machines. Recommendations for students' nutrition educational activities included placing nutrition tips on or beside the vending machines and using active learning methods, such as competitions on nutrition knowledge. The results of this study have useful applications in improving the campus food environment and nutrition education opportunities at the university to assist students in making healthful food choices.

  9. Parallel processing of Monte Carlo code MCNP for particle transport problem

    Energy Technology Data Exchange (ETDEWEB)

    Higuchi, Kenji; Kawasaki, Takuji

    1996-06-01

    It is possible to vectorize or parallelize Monte Carlo (MC) codes for photon and neutron transport problems by making use of the independence of the calculation for each particle. The applicability of existing MC codes to parallel processing is discussed. As parallel computers, we have used both vector-parallel and scalar-parallel processors in the performance evaluation. We have performed (i) vector-parallel processing of the MCNP code on the Monte Carlo machine Monte-4 with four vector processors, and (ii) parallel processing on the Paragon XP/S with 256 processors. In this report we describe the methodology and results for parallel processing on these two types of parallel or distributed-memory computers. In addition, we evaluate the parallel programming environments of the parallel computers used in the present work, as part of the work developing the STA (Seamless Thinking Aid) Basic Software. (author)
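    The particle independence exploited here means each processor can run its own batch of histories with an independent random seed and the partial tallies are simply averaged. A toy serial illustration of that decomposition (an assumed one-dimensional slab-transmission problem, not MCNP itself):

```python
import random

def transmit_fraction(n_histories, seed, mu=1.0, thickness=2.0):
    """Toy photon transport: sample exponential free paths with total
    cross-section mu (1/cm) and count particles crossing a slab of the
    given thickness (cm) without interacting."""
    rng = random.Random(seed)  # independent stream per worker
    crossed = sum(rng.expovariate(mu) > thickness
                  for _ in range(n_histories))
    return crossed / n_histories

def parallel_estimate(n_total, n_workers=4):
    """Split histories into independently seeded batches and average the
    tallies, exactly as a parallel MC code distributes particles."""
    per_worker = n_total // n_workers
    parts = [transmit_fraction(per_worker, seed) for seed in range(n_workers)]
    return sum(parts) / n_workers
```

    The list comprehension is where a real code would place its message-passing layer; because the batches never communicate mid-run, the method scales almost ideally until tally collection dominates. The analytic answer here is exp(-mu * thickness) ≈ 0.135.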

  10. Optimizing fusion PIC code performance at scale on Cori Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Koskela, T. S.; Deslippe, J.

    2017-07-23

    In this paper we present the results of optimizing the performance of the gyrokinetic full-f fusion PIC code XGC1 on the Cori Phase Two Knights Landing system. The code has undergone substantial development to enable the use of vector instructions in its most expensive kernels within the NERSC Exascale Science Applications Program. We study the single-node performance of the code on an absolute scale using the roofline methodology to guide optimization efforts. We have obtained 2x speedups in single-node performance by enabling vectorization and performing memory layout optimizations. On multiple nodes, the code is shown to scale well up to 4000 nodes, nearly half the size of the machine. We discuss some communication bottlenecks that were identified and resolved during the work.
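    The roofline methodology referenced here bounds a kernel's attainable performance by the lesser of the machine's compute peak and its memory bandwidth times the kernel's arithmetic intensity. A one-line model (the peak and bandwidth figures in the test are illustrative round numbers, not measured Cori values):

```python
def attainable_gflops(arithmetic_intensity, peak_gflops, peak_bw_gb_s):
    """Roofline bound: min(compute roof, memory roof), where the memory
    roof is arithmetic intensity (flops/byte) times bandwidth (GB/s)."""
    return min(peak_gflops, arithmetic_intensity * peak_bw_gb_s)
```

    Kernels left of the ridge point (intensity below peak_gflops / peak_bw_gb_s) are bandwidth-bound, so memory layout changes help; kernels right of it are compute-bound, so vectorization helps, which matches the two optimization classes the paper reports.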

  11. Design of man-machine-communication-systems

    International Nuclear Information System (INIS)

    Zimmermann, R.

    1975-04-01

    This paper presents some fundamentals of man-machine communication and derives demands and recommendations for the design of communication systems. The main points are the directives for the design of optical display systems, with details on visual perception and resolution, luminance and contrast, as well as discernibility and coding of displayed information. Recommendations for acoustic information systems, control devices, and the design of consoles are also given. (orig.) [de
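    The luminance-and-contrast requirements such guidelines impose are usually stated via standard contrast measures. Two common definitions, as a small sketch (standard formulas, not figures from this paper):

```python
def weber_contrast(l_target, l_background):
    """Weber contrast of a symbol against its background:
    (L_t - L_b) / L_b. Used for small targets on uniform fields."""
    return (l_target - l_background) / l_background

def michelson_contrast(l_max, l_min):
    """Michelson contrast (L_max - L_min) / (L_max + L_min).
    Used for periodic patterns; ranges from 0 to 1."""
    return (l_max - l_min) / (l_max + l_min)
```

    Display guidelines then specify minimum contrast values (and luminance levels) at which coded symbols remain discernible under the expected ambient lighting.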

  12. THE STIRLING GAS REFRIGERATING MACHINE MECHANICAL DESIGN IMPROVING

    Directory of Open Access Journals (Sweden)

    V. V. Trandafilov

    2016-06-01

    Full Text Available To improve the mechanical design of the piston Stirling gas refrigeration machine, a structural optimization of the rotary vane Stirling gas refrigeration machine is carried out. This paper presents the results of theoretical research. The analysis and prospects of the rotary vane Stirling gas refrigeration machine for domestic and industrial refrigeration purposes are presented. The results of a patent search on the transformation mechanisms of rotary vane machines are discussed.

  13. THE STIRLING GAS REFRIGERATING MACHINE MECHANICAL DESIGN IMPROVING

    Directory of Open Access Journals (Sweden)

    V. V. Trandafilov

    2016-02-01

    Full Text Available To improve the mechanical design of the piston Stirling gas refrigeration machine, a structural optimization of the rotary vane Stirling gas refrigeration machine is carried out. This paper presents the results of theoretical research. The analysis and prospects of the rotary vane Stirling gas refrigeration machine for domestic and industrial refrigeration purposes are presented. The results of a patent search on the transformation mechanisms of rotary vane machines are discussed.

  14. Microsoft Azure machine learning

    CERN Document Server

    Mund, Sumit

    2015-01-01

    The book is intended for those who want to learn how to use Azure Machine Learning. Perhaps you already know a bit about Machine Learning, but have never used ML Studio in Azure; or perhaps you are an absolute newbie. In either case, this book will get you up-and-running quickly.

  15. The Hooey Machine.

    Science.gov (United States)

    Scarnati, James T.; Tice, Craig J.

    1992-01-01

    Describes how students can make and use Hooey Machines to learn how mechanical energy can be transferred from one object to another within a system. The Hooey Machine is made using a pencil, eight thumbtacks, one pushpin, tape, scissors, graph paper, and a plastic lid. (PR)

  16. Nanocomposites for Machining Tools

    DEFF Research Database (Denmark)

    Sidorenko, Daria; Loginov, Pavel; Mishnaevsky, Leon

    2017-01-01

    Machining tools are used in many areas of production. To a considerable extent, the performance characteristics of the tools determine the quality and cost of obtained products. The main materials used for producing machining tools are steel, cemented carbides, ceramics and superhard materials...

  17. A nucleonic weighing machine

    International Nuclear Information System (INIS)

    Anon.

    1978-01-01

    The design and operation of a nucleonic weighing machine fabricated for continuous weighing of material over conveyor belt are described. The machine uses a 40 mCi cesium-137 line source and a 10 litre capacity ionization chamber. It is easy to maintain as there are no moving parts. It can also be easily removed and reinstalled. (M.G.B.)
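    A nucleonic belt weigher of this kind works by gamma attenuation: the ionization chamber current falls exponentially with the mass per unit area of material on the belt, so inverting the Beer-Lambert law yields the load. A small sketch of that principle (the mass attenuation coefficient and belt figures in the test are illustrative, not from this record):

```python
import math

def mass_per_area(i_measured, i_empty, mu_mass):
    """Invert Beer-Lambert attenuation I = I0 * exp(-mu * m) to recover
    the areal density m (g/cm^2) from the measured and empty-belt
    intensities, given the mass attenuation coefficient mu (cm^2/g)."""
    return math.log(i_empty / i_measured) / mu_mass

def belt_flow_rate(m_area, belt_width_cm, belt_speed_cm_s):
    """Mass flow rate (g/s) = areal density x belt width x belt speed."""
    return m_area * belt_width_cm * belt_speed_cm_s
```

    Integrating the flow rate over time gives the total conveyed mass, all without any moving parts in the gauge itself.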

  18. An asymptotical machine

    Science.gov (United States)

    Cristallini, Achille

    2016-07-01

    A new and intriguing machine may be obtained by replacing the moving pulley of a gun tackle with a fixed point in the rope. Its most important feature is its asymptotic efficiency. Here we obtain a satisfactory description of this machine by means of vector calculus and elementary trigonometry. The mathematical model has been compared with experimental data and is briefly discussed.

  19. Machine learning with R

    CERN Document Server

    Lantz, Brett

    2015-01-01

    Perhaps you already know a bit about machine learning but have never used R, or perhaps you know a little R but are new to machine learning. In either case, this book will get you up and running quickly. It would be helpful to have a bit of familiarity with basic programming concepts, but no prior experience is required.

  20. The deleuzian abstract machines

    DEFF Research Database (Denmark)

    Werner Petersen, Erik

    2005-01-01

    To most people the concept of abstract machines is connected to the name of Alan Turing and the development of the modern computer. The Turing machine is universal, axiomatic and symbolic (i.e., operating on symbols). Inspired by Foucault, Deleuze and Guattari extended the concept of abstract...