WorldWideScience

Sample records for machine code resulting

  1. Machines are benchmarked by code, not algorithms

    NARCIS (Netherlands)

    Poss, R.

    2013-01-01

    This article highlights how small modifications to either the source code of a benchmark program or the compilation options may impact its behavior on a specific machine. It argues that for evaluating machines, benchmark providers and users be careful to ensure reproducibility of results based on th

  2. Reusable State Machine Code Generator

    Science.gov (United States)

    Hoffstadt, A. A.; Reyes, C.; Sommer, H.; Andolfato, L.

    2010-12-01

    The State Machine model is frequently used to represent the behaviour of a system, allowing one to express and execute this behaviour in a deterministic way. A graphical representation such as a UML State Chart diagram tames the complexity of the system, thus facilitating changes to the model and communication between developers and domain experts. We present a reusable state machine code generator, developed by the Universidad Técnica Federico Santa María and the European Southern Observatory. The generator itself is based on the open source project architecture, and uses UML State Chart models as input. This allows for a modular design and a clean separation between generator and generated code. The generated state machine code has well-defined interfaces that are independent of the implementation artefacts such as the middle-ware. This allows using the generator in the substantially different observatory software of the Atacama Large Millimeter Array and the ESO Very Large Telescope. A project-specific mapping layer for event and transition notification connects the state machine code to its environment, which can be the Common Software of these projects, or any other project. This approach even allows to automatically create tests for a generated state machine, using techniques from software testing, such as path-coverage.

  3. Machine structure oriented control code logic

    NARCIS (Netherlands)

    Bergstra, J.A.; Middelburg, C.A.

    2009-01-01

    Control code is a concept that is closely related to a frequently occurring practitioner’s view on what is a program: code that is capable of controlling the behaviour of some machine. We present a logical approach to explain issues concerning control codes that are independent of the details of the

  4. A portable virtual machine target for proof-carrying code

    DEFF Research Database (Denmark)

    Franz, Michael; Chandra, Deepak; Gal, Andreas

    2005-01-01

    Virtual Machines (VMs) and Proof-Carrying Code (PCC) are two techniques that have been used independently to provide safety for (mobile) code. Existing virtual machines, such as the Java VM, have several drawbacks: First, the effort required for safety verification is considerable. Second and mor...... simultaneously providing efficient justin-time compilation and target-machine independence. In particular, our approach reduces the complexity of the required proofs, resulting in fewer proof obligations that need to be discharged at the target machine.......Virtual Machines (VMs) and Proof-Carrying Code (PCC) are two techniques that have been used independently to provide safety for (mobile) code. Existing virtual machines, such as the Java VM, have several drawbacks: First, the effort required for safety verification is considerable. Second and more...... subtly, the need to provide such verification by the code consumer inhibits the amount of optimization that can be performed by the code producer. This in turn makes justin-time compilation surprisingly expensive. Proof-Carrying Code, on the other hand, has its own set of limitations, among which...

  5. Reversible machine code and its abstract processor architecture

    DEFF Research Database (Denmark)

    Axelsen, Holger Bock; Glück, Robert; Yokoyama, Tetsuo

    2007-01-01

    A reversible abstract machine architecture and its reversible machine code are presented and formalized. For machine code to be reversible, both the underlying control logic and each instruction must be reversible. A general class of machine instruction sets was proven to be reversible, building...

  6. Optimal interference code based on machine learning

    Science.gov (United States)

    Qian, Ye; Chen, Qian; Hu, Xiaobo; Cao, Ercong; Qian, Weixian; Gu, Guohua

    2016-10-01

    In this paper, we analyze the characteristics of pseudo-random code, by the case of m sequence. Depending on the description of coding theory, we introduce the jamming methods. We simulate the interference effect or probability model by the means of MATLAB to consolidate. In accordance with the length of decoding time the adversary spends, we find out the optimal formula and optimal coefficients based on machine learning, then we get the new optimal interference code. First, when it comes to the phase of recognition, this study judges the effect of interference by the way of simulating the length of time over the decoding period of laser seeker. Then, we use laser active deception jamming simulate interference process in the tracking phase in the next block. In this study we choose the method of laser active deception jamming. In order to improve the performance of the interference, this paper simulates the model by MATLAB software. We find out the least number of pulse intervals which must be received, then we can make the conclusion that the precise interval number of the laser pointer for m sequence encoding. In order to find the shortest space, we make the choice of the greatest common divisor method. Then, combining with the coding regularity that has been found before, we restore pulse interval of pseudo-random code, which has been already received. Finally, we can control the time period of laser interference, get the optimal interference code, and also increase the probability of interference as well.

  7. Reversible machine code and its abstract processor architecture

    DEFF Research Database (Denmark)

    Axelsen, Holger Bock; Glück, Robert; Yokoyama, Tetsuo

    2007-01-01

    A reversible abstract machine architecture and its reversible machine code are presented and formalized. For machine code to be reversible, both the underlying control logic and each instruction must be reversible. A general class of machine instruction sets was proven to be reversible, building ...... on our concept of reversible updates. The presentation is abstract and can serve as a guideline for a family of reversible processor designs. By example, we illustrate programming principles for the abstract machine architecture formalized in this paper....

  8. Understanding and Writing G & M Code for CNC Machines

    Science.gov (United States)

    Loveland, Thomas

    2012-01-01

    In modern CAD and CAM manufacturing companies, engineers design parts for machines and consumable goods. Many of these parts are cut on CNC machines. Whether using a CNC lathe, milling machine, or router, the ideas and designs of engineers must be translated into a machine-readable form called G & M Code that can be used to cut parts to precise…

  9. Understanding and Writing G & M Code for CNC Machines

    Science.gov (United States)

    Loveland, Thomas

    2012-01-01

    In modern CAD and CAM manufacturing companies, engineers design parts for machines and consumable goods. Many of these parts are cut on CNC machines. Whether using a CNC lathe, milling machine, or router, the ideas and designs of engineers must be translated into a machine-readable form called G & M Code that can be used to cut parts to precise…

  10. Quantitative information measurement and application for machine component classification codes

    Institute of Scientific and Technical Information of China (English)

    LI Ling-Feng; TAN Jian-rong; LIU Bo

    2005-01-01

    Information embodied in machine component classification codes has internal relation with the probability distribution of the code symbol. This paper presents a model considering codes as information source based on Shannon's information theory. Using information entropy, it preserves the mathematical form and quantitatively measures the information amount of a symbol and a bit in the machine component classification coding system. It also gets the maximum value of information amount and the corresponding coding scheme when the category of symbols is fixed. Samples are given to show how to evaluate the information amount of component codes and how to optimize a coding system.

  11. Machine function based control code algebras

    NARCIS (Netherlands)

    Bergstra, J.A.

    2008-01-01

    Machine functions have been introduced by Earley and Sturgis in [6] in order to provide a mathematical foundation of the use of the T-diagrams proposed by Bratman in [5]. Machine functions describe the operation of a machine at a very abstract level. A theory of hardware and software based on machin

  12. Distinguishing protein-coding from non-coding RNAs through support vector machines.

    Directory of Open Access Journals (Sweden)

    Jinfeng Liu

    2006-04-01

    Full Text Available RIKEN's FANTOM project has revealed many previously unknown coding sequences, as well as an unexpected degree of variation in transcripts resulting from alternative promoter usage and splicing. Ever more transcripts that do not code for proteins have been identified by transcriptome studies, in general. Increasing evidence points to the important cellular roles of such non-coding RNAs (ncRNAs. The distinction of protein-coding RNA transcripts from ncRNA transcripts is therefore an important problem in understanding the transcriptome and carrying out its annotation. Very few in silico methods have specifically addressed this problem. Here, we introduce CONC (for "coding or non-coding", a novel method based on support vector machines that classifies transcripts according to features they would have if they were coding for proteins. These features include peptide length, amino acid composition, predicted secondary structure content, predicted percentage of exposed residues, compositional entropy, number of homologs from database searches, and alignment entropy. Nucleotide frequencies are also incorporated into the method. Confirmed coding cDNAs for eukaryotic proteins from the Swiss-Prot database constituted the set of true positives, ncRNAs from RNAdb and NONCODE the true negatives. Ten-fold cross-validation suggested that CONC distinguished coding RNAs from ncRNAs at about 97% specificity and 98% sensitivity. Applied to 102,801 mouse cDNAs from the FANTOM3 dataset, our method reliably identified over 14,000 ncRNAs and estimated the total number of ncRNAs to be about 28,000.

  13. On Cascade Source Coding with A Side Information "Vending Machine"

    CERN Document Server

    Ahmadi, Behzad; Choudhuri, Chiranjib; Mitra, Urbashi

    2012-01-01

    The model of a side information "vending machine" accounts for scenarios in which acquiring side information is costly and thus should be done efficiently. In this paper, the three-node cascade source coding problem is studied under the assumption that a side information vending machine is available either at the intermediate or at the end node. In both cases, a single-letter characterization of the available trade-offs among the rate, the distortions in the reconstructions at the intermediate and at the end node, and the cost in acquiring the side information are derived under given conditions.

  14. Editing of EIA coded, numerically controlled, machine tool tapes

    Science.gov (United States)

    Weiner, J. M.

    1975-01-01

    Editing of numerically controlled (N/C) machine tool tapes (8-level paper tape) using an interactive graphic display processor is described. A rapid technique required for correcting production errors in N/C tapes was developed using the interactive text editor on the IMLAC PDS-ID graphic display system and two special programs resident on disk. The correction technique and special programs for processing N/C tapes coded to EIA specifications are discussed.

  15. Compiler design handbook optimizations and machine code generation

    CERN Document Server

    Srikant, YN

    2003-01-01

    The widespread use of object-oriented languages and Internet security concerns are just the beginning. Add embedded systems, multiple memory banks, highly pipelined units operating in parallel, and a host of other advances and it becomes clear that current and future computer architectures pose immense challenges to compiler designers-challenges that already exceed the capabilities of traditional compilation techniques. The Compiler Design Handbook: Optimizations and Machine Code Generation is designed to help you meet those challenges. Written by top researchers and designers from around the

  16. Machine-Learning Algorithms to Code Public Health Spending Accounts.

    Science.gov (United States)

    Brady, Eoghan S; Leider, Jonathon P; Resnick, Beth A; Alfonso, Y Natalia; Bishai, David

    Government public health expenditure data sets require time- and labor-intensive manipulation to summarize results that public health policy makers can use. Our objective was to compare the performances of machine-learning algorithms with manual classification of public health expenditures to determine if machines could provide a faster, cheaper alternative to manual classification. We used machine-learning algorithms to replicate the process of manually classifying state public health expenditures, using the standardized public health spending categories from the Foundational Public Health Services model and a large data set from the US Census Bureau. We obtained a data set of 1.9 million individual expenditure items from 2000 to 2013. We collapsed these data into 147 280 summary expenditure records, and we followed a standardized method of manually classifying each expenditure record as public health, maybe public health, or not public health. We then trained 9 machine-learning algorithms to replicate the manual process. We calculated recall, precision, and coverage rates to measure the performance of individual and ensembled algorithms. Compared with manual classification, the machine-learning random forests algorithm produced 84% recall and 91% precision. With algorithm ensembling, we achieved our target criterion of 90% recall by using a consensus ensemble of ≥6 algorithms while still retaining 93% coverage, leaving only 7% of the summary expenditure records unclassified. Machine learning can be a time- and cost-saving tool for estimating public health spending in the United States. It can be used with standardized public health spending categories based on the Foundational Public Health Services model to help parse public health expenditure information from other types of health-related spending, provide data that are more comparable across public health organizations, and evaluate the impact of evidence-based public health resource allocation.

  17. Predicting and Classifying User Identification Code System Based on Support Vector Machines

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In digital fingerprinting, preventing piracy of images by colluders is an important and tedious issue. Each image will be embedded with a unique User IDentification (U ID) code that is the fingerprint for tracking the authorized user. The proposed hiding scheme makes use of a random number generator to scramble two copies of a UID,which will then be hidden in the randomly selected medium frequency coefficients of the host image. The linear support vector machine (SVM) will be used to train classifications by calculating the normalized correlation (NC) for the 2-class UID codes. The trained classifications will be the models used for identifying unreadable UID codes.Experimental results showed that the success of predicting the unreadable UID codes can be increased by applying SVM. The proposed scheme can be used to provide protections to intellectual property rights of digital images and to keep track of users to prevent collaborative piracies.

  18. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    Science.gov (United States)

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of CTU-level Rate-Distortion (R-D) model. The legacy "chicken-and-egg" dilemma in video coding is proposed to be overcome by the learning-based R-D model. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and frame-level Quantization parameter (QP) change. Lastly, intra frame QP and inter frame adaptive bit ratios are adjusted to make inter frames have more bit resources to maintain smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT based RC method can achieve much better R-D performances, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than the other state-of-the-art one-pass RC methods, and the achieved R-D performances are very close to the performance limits from the FixedQP method.

  19. Code-expanded radio access protocol for machine-to-machine communications

    DEFF Research Database (Denmark)

    Thomsen, Henning; Kiilerich Pratas, Nuno; Stefanovic, Cedomir

    2013-01-01

    The random access methods used for support of machine-to-machine, also referred to as Machine-Type Communications, in current cellular standards are derivatives of traditional framed slotted ALOHA and therefore do not support high user loads efficiently. We propose an approach that is motivated...... subframes and orthogonal preambles, the amount of available contention resources is drastically increased, enabling the massive support of Machine-Type Communication users that is beyond the reach of current systems....

  20. Code-expanded radio access protocol for machine-to-machine communications

    DEFF Research Database (Denmark)

    Thomsen, Henning; Kiilerich Pratas, Nuno; Stefanovic, Cedomir

    2013-01-01

    The random access methods used for support of machine-to-machine, also referred to as Machine-Type Communications, in current cellular standards are derivatives of traditional framed slotted ALOHA and therefore do not support high user loads efficiently. We propose an approach that is motivated b...

  1. A Machine Learning Perspective on Predictive Coding with PAQ

    CERN Document Server

    Knoll, Byron

    2011-01-01

    PAQ8 is an open source lossless data compression algorithm that currently achieves the best compression rates on many benchmarks. This report presents a detailed description of PAQ8 from a statistical machine learning perspective. It shows that it is possible to understand some of the modules of PAQ8 and use this understanding to improve the method. However, intuitive statistical explanations of the behavior of other modules remain elusive. We hope the description in this report will be a starting point for discussions that will increase our understanding, lead to improvements to PAQ8, and facilitate a transfer of knowledge from PAQ8 to other machine learning methods, such a recurrent neural networks and stochastic memoizers. Finally, the report presents a broad range of new applications of PAQ to machine learning tasks including language modeling and adaptive text prediction, adaptive game playing, classification, and compression using features from the field of deep learning.

  2. Towards a universal code formatter through machine learning

    NARCIS (Netherlands)

    Parr, T. (Terence); J.J. Vinju (Jurgen)

    2016-01-01

    textabstractThere are many declarative frameworks that allow us to implement code formatters relatively easily for any specific language, but constructing them is cumbersome. The first problem is that "everybody" wants to format their code differently, leading to either many formatter variants or a

  3. Code-Expanded Random Access for Machine-Type Communications

    DEFF Research Database (Denmark)

    Kiilerich Pratas, Nuno; Thomsen, Henning; Stefanovic, Cedomir

    2012-01-01

    Abstract—The random access methods used for support of machine-type communications (MTC) in current cellular standards are derivatives of traditional framed slotted ALOHA and therefore do not support high user loads efficiently. Motivated by the random access method employed in LTE, we propose...

  4. A Machine Learning Perspective on Predictive Coding with PAQ

    OpenAIRE

    Knoll, Byron; de Freitas, Nando

    2011-01-01

    PAQ8 is an open source lossless data compression algorithm that currently achieves the best compression rates on many benchmarks. This report presents a detailed description of PAQ8 from a statistical machine learning perspective. It shows that it is possible to understand some of the modules of PAQ8 and use this understanding to improve the method. However, intuitive statistical explanations of the behavior of other modules remain elusive. We hope the description in this report will be a sta...

  5. "Source Coding With a Side Information ""Vending Machine"""

    OpenAIRE

    Weissman, Tsachy; Permuter, Haim H.

    2011-01-01

    We study source coding in the presence of side information, when the system can take actions that affect the availability, quality, or nature of the side information. We begin by extending the Wyner-Ziv problem of source coding with decoder side information to the case where the decoder is allowed to choose actions affecting the side information. We then consider the setting where actions are taken by the encoder, based on its observation of the source. Actions may have costs that are commens...

  6. A Quaternary Decision Diagram Machine: Optimization of Its Code

    Science.gov (United States)

    2010-08-01

    2026 IEICE TRANS. INF. & SYST., VOL.E93–D, NO.8 AUGUST 2010 INVITED PAPER Special Section on Multiple-Valued Logic and VLSI Computing A Quaternary...sequentially. A straightforward method to increase the speed is to increase the clock frequency. However, this is 2028 IEICE TRANS. INF. & SYST...reduce the 2030 IEICE TRANS. INF. & SYST., VOL.E93–D, NO.8 AUGUST 2010 Fig. 9 4-address QDD machine. Fig. 10 Branch instruction for 4-address QDD

  7. Accuracy comparison among different machine learning techniques for detecting malicious codes

    Science.gov (United States)

    Narang, Komal

    2016-03-01

    In this paper, a machine learning based model for malware detection is proposed. It can detect newly released malware i.e. zero day attack by analyzing operation codes on Android operating system. The accuracy of Naïve Bayes, Support Vector Machine (SVM) and Neural Network for detecting malicious code has been compared for the proposed model. In the experiment 400 benign files, 100 system files and 500 malicious files have been used to construct the model. The model yields the best accuracy 88.9% when neural network is used as classifier and achieved 95% and 82.8% accuracy for sensitivity and specificity respectively.

  8. CNC LATHE MACHINE PRODUCING NC CODE BY USING DIALOG METHOD

    Directory of Open Access Journals (Sweden)

    Yakup TURGUT

    2004-03-01

    Full Text Available In this study, an NC code generation program utilising Dialog Method was developed for turning centres. Initially, CNC lathes turning methods and tool path development techniques were reviewed briefly. By using geometric definition methods, tool path was generated and CNC part program was developed for FANUC control unit. The developed program made CNC part program generation process easy. The program was developed using BASIC 6.0 programming language while the material and cutting tool database were and supported with the help of ACCESS 7.0.

  9. Code-Expanded Random Access for Machine-Type Communications

    DEFF Research Database (Denmark)

    Kiilerich Pratas, Nuno; Thomsen, Henning; Stefanovic, Cedomir

    2012-01-01

    a novel approach that is able to sustain a wide random access load range, while preserving the physical layer unchanged and incurring minor changes in the medium access control layer. The proposed scheme increases the amount of available contention resources, without resorting to the increase of system...... of an increased number of MTC users. We present the framework and analysis of the proposed code-expanded random access method and show that our approach supports load regions that are beyond the reach of current systems....

  10. Using machine-coded event data for the micro-level study of political violence

    Directory of Open Access Journals (Sweden)

    Jesse Hammond

    2014-07-01

    Full Text Available Machine-coded datasets likely represent the future of event data analysis. We assess the use of one of these datasets—Global Database of Events, Language and Tone (GDELT—for the micro-level study of political violence by comparing it to two hand-coded conflict event datasets. Our findings indicate that GDELT should be used with caution for geo-spatial analyses at the subnational level: its overall correlation with hand-coded data is mediocre, and at the local level major issues of geographic bias exist in how events are reported. Overall, our findings suggest that due to these issues, researchers studying local conflict processes may want to wait for a more reliable geocoding method before relying too heavily on this set of machine-coded data.

  11. Teaching the computer to code frames in news: comparing two supervised machine learning approaches to frame analysis

    NARCIS (Netherlands)

    Burscher, B.; Odijk, D.; Vliegenthart, R.; de Rijke, M.; de Vreese, C.H.

    2014-01-01

    We explore the application of supervised machine learning (SML) to frame coding. By automating the coding of frames in news, SML facilitates the incorporation of large-scale content analysis into framing research, even if financial resources are scarce. This furthers a more integrated investigation

  12. Teaching the computer to code frames in news: comparing two supervised machine learning approaches to frame analysis

    NARCIS (Netherlands)

    Burscher, B.; Odijk, D.; Vliegenthart, R.; de Rijke, M.; de Vreese, C.H.

    2014-01-01

    We explore the application of supervised machine learning (SML) to frame coding. By automating the coding of frames in news, SML facilitates the incorporation of large-scale content analysis into framing research, even if financial resources are scarce. This furthers a more integrated investigation

  13. Brain cells in the avian 'prefrontal cortex' code for features of slot-machine-like gambling.

    Directory of Open Access Journals (Sweden)

    Damian Scarf

    Full Text Available Slot machines are the most common and addictive form of gambling. In the current study, we recorded from single neurons in the 'prefrontal cortex' of pigeons while they played a slot-machine-like task. We identified four categories of neurons that coded for different aspects of our slot-machine-like task. Reward-Proximity neurons showed a linear increase in activity as the opportunity for a reward drew near. I-Won neurons fired only when the fourth stimulus of a winning (four-of-a-kind combination was displayed. I-Lost neurons changed their firing rate at the presentation of the first nonidentical stimulus, that is, when it was apparent that no reward was forthcoming. Finally, Near-Miss neurons also changed their activity the moment it was recognized that a reward was no longer available, but more importantly, the activity level was related to whether the trial contained one, two, or three identical stimuli prior to the display of the nonidentical stimulus. These findings not only add to recent neurophysiological research employing simulated gambling paradigms, but also add to research addressing the functional correspondence between the avian NCL and primate PFC.

  14. 2D and 3D Core-Collapse Supernovae Simulation Results Obtained with the CHIMERA Code

    CERN Document Server

    Bruenn, S W; Hix, W R; Blondin, J M; Marronetti, P; Messer, O E B; Dirk, C J; Yoshida, S

    2010-01-01

    Much progress in realistic modeling of core-collapse supernovae has occurred recently through the availability of multi-teraflop machines and the increasing sophistication of supernova codes. These improvements are enabling simulations with enough realism that the explosion mechanism, long a mystery, may soon be delineated. We briefly describe the CHIMERA code, a supernova code we have developed to simulate core-collapse supernovae in 1, 2, and 3 spatial dimensions. We then describe the results of an ongoing suite of 2D simulations initiated from a 12, 15, 20, and 25 solar mass progenitor. These have all exhibited explosions and are currently in the expanding phase with the shock at between 5,000 and 20,000 km. We also briefly describe an ongoing simulation in 3 spatial dimensions initiated from the 15 solar mass progenitor.

  15. 2D and 3D core-collapse supernovae simulation results obtained with the CHIMERA code

    Energy Technology Data Exchange (ETDEWEB)

    Bruenn, S W; Marronetti, P; Dirk, C J [Physics Department, Florida Atlantic University, 777 W. Glades Road, Boca Raton, FL 33431-0991 (United States); Mezzacappa, A; Hix, W R [Physics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6354 (United States); Blondin, J M [Department of Physics, North Carolina State University, Raleigh, NC 27695-8202 (United States); Messer, O E B [Center for Computational Sciences, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6354 (United States); Yoshida, S, E-mail: bruenn@fau.ed [Max-Planck-Institut fur Gravitationsphysik, Albert Einstein Institut, Golm (Germany)

    2009-07-01

    Much progress in realistic modeling of core-collapse supernovae has occurred recently through the availability of multi-teraflop machines and the increasing sophistication of supernova codes. These improvements are enabling simulations with enough realism that the explosion mechanism, long a mystery, may soon be delineated. We briefly describe the CHIMERA code, a supernova code we have developed to simulate core-collapse supernovae in 1, 2, and 3 spatial dimensions. We then describe the results of an ongoing suite of 2D simulations initiated from a 12, 15, 20, and 25 M{sub o-dot} progenitor. These have all exhibited explosions and are currently in the expanding phase with the shock at between 5,000 and 20,000 km. We also briefly describe an ongoing simulation in 3 spatial dimensions initiated from the 15 M{sub o-dot} progenitor.

  16. Channel Efficiency with Security Enhancement for Remote Condition Monitoring of Multi Machine System Using Hybrid Huffman Coding

    Science.gov (United States)

    Datta, Jinia; Chowdhuri, Sumana; Bera, Jitendranath

    2016-12-01

    This paper presents a novel scheme of remote condition monitoring of multi machine system where a secured and coded data of induction machine with different parameters is communicated between a state-of-the-art dedicated hardware Units (DHU) installed at the machine terminal and a centralized PC based machine data management (MDM) software. The DHUs are built for acquisition of different parameters from the respective machines, and hence are placed at their nearby panels in order to acquire different parameters cost effectively during their running condition. The MDM software collects these data through a communication channel where all the DHUs are networked using RS485 protocol. Before transmitting, the parameter's related data is modified with the adoption of differential pulse coded modulation (DPCM) and Huffman coding technique. It is further encrypted with a private key where different keys are used for different DHUs. In this way a data security scheme is adopted during its passage through the communication channel in order to avoid any third party attack into the channel. The hybrid mode of DPCM and Huffman coding is chosen to reduce the data packet length. A MATLAB based simulation and its practical implementation using DHUs at three machine terminals (one healthy three phase, one healthy single phase and one faulty three phase machine) proves its efficacy and usefulness for condition based maintenance of multi machine system. The data at the central control room are decrypted and decoded using MDM software. In this work it is observed that Chanel efficiency with respect to different parameter measurements has been increased very much.

  17. The Continual Intercomparison of Radiation Codes: Results from Phase I

    Science.gov (United States)

    Oreopoulos, Lazaros; Mlawer, Eli; Delamere, Jennifer; Shippert, Timothy; Cole, Jason; Iacono, Michael; Jin, Zhonghai; Li, Jiangnan; Manners, James; Raisanen, Petri; Rose, Fred; Zhang, Yuanchong; Wilson Michael J.; Rossow, William

    2011-01-01

    The computer codes that calculate the energy budget of solar and thermal radiation in Global Climate Models (GCMs), our most advanced tools for predicting climate change, have to be computationally efficient in order to not impose undue computational burden to climate simulations. By using approximations to gain execution speed, these codes sacrifice accuracy compared to more accurate, but also much slower, alternatives. International efforts to evaluate the approximate schemes have taken place in the past, but they have suffered from the drawback that the accurate standards were not validated themselves for performance. The manuscript summarizes the main results of the first phase of an effort called "Continual Intercomparison of Radiation Codes" (CIRC) where the cases chosen to evaluate the approximate models are based on observations and where we have ensured that the accurate models perform well when compared to solar and thermal radiation measurements. The effort is endorsed by international organizations such as the GEWEX Radiation Panel and the International Radiation Commission and has a dedicated website (i.e., http://circ.gsfc.nasa.gov) where interested scientists can freely download data and obtain more information about the effort's modus operandi and objectives. In a paper published in the March 2010 issue of the Bulletin of the American Meteorological Society only a brief overview of CIRC was provided with some sample results. In this paper the analysis of submissions of 11 solar and 13 thermal infrared codes relative to accurate reference calculations obtained by so-called "line-by-line" radiation codes is much more detailed. We demonstrate that, while performance of the approximate codes continues to improve, significant issues still remain to be addressed for satisfactory performance within GCMs. We hope that by identifying and quantifying shortcomings, the paper will help establish performance standards to objectively assess radiation code quality

  18. TWO-DIMENSION CODE RECOGNITION BASED ON MACHINE VISION%基于机器视觉的2D码识别

    Institute of Scientific and Technical Information of China (English)

    常晓玮

    2014-01-01

    According to components detection of the auto parts supply chain,put forward a kind code recognition method for DataMatrix code,PDF41 7 code,and QR code based on machine vision.The method of obtaining image firstly,and then using the image denoising technique to the image acquired with a 2D code processing,and then select different processing methods according to the 2D code can effectively identify these three 2D codes.The experimental result shows that the proposed approach is feasible and effective and can recognize DataMatrix code,PDF41 7 code,QR code real -timely.%针对汽车生产中零部件供应环节的部件检测应用,提出一种基于机器视觉的DataMatrix码、PDF417码、QR码识别方法。本方法利用图像获取、图像去噪等技术对获取的具有2D码的图像进行处理,然后根据2D码选取不同的处理方法,能有效识别以上三种2D码。实验结果表明该方法是可行的,能实时识别DataMatrix码、PDF417码、QR码。

  19. The Mistra experiment for field containment code validation first results

    Energy Technology Data Exchange (ETDEWEB)

    Caron-Charles, M.; Blumenfeld, L. [CEA Saclay, 91 - Gif sur Yvette (France)

    2001-07-01

    The MISTRA facility is a large scale experiment, designed for the purpose of thermal-hydraulics multi-D codes validation. A short description of the facility, the set up of the instrumentation and the test program are presented. Then, the first experimental results, studying helium injection in the containment and their calculations are detailed. (author)

  20. Distributed and Cascade Lossy Source Coding with a Side Information "Vending Machine"

    CERN Document Server

    Ahmadi, Behzad

    2011-01-01

    Source coding with a side information "vending machine" is a recently proposed framework in which the statistical relationship between the side information and the source, instead of being given and fixed as in the classical Wyner-Ziv problem, can be controlled by the decoder. This control action is selected by the decoder based on the message encoded by the source node. Unlike conventional settings, the message can thus carry not only information about the source to be reproduced at the decoder, but also control information aimed at improving the quality of the side information. In this paper, the single-letter characterization of the trade-offs between rate, distortion and cost associated with the control actions is extended from the previously studied point-to-point set-up to two basic multiterminal models. First, a distributed source coding model is studied, in which an arbitrary number of encoders communicate over rate-limited links to a decoder, whose side information can be controlled. The control acti...

  1. Simulation and experimental results of hybrid electric machine with a novel flux control strategy

    Directory of Open Access Journals (Sweden)

    Paplicki Piotr

    2015-03-01

    Full Text Available The paper presents selected simulation and experimental results of a hybrid ECPMS-machine (Electric Controlled Permanent Magnet Synchronous Machine. This permanent magnets (PMs excited machine offers an extended magnetic field control capability which makes it suitable for battery electric vehicle (BEV drives. Rotor, stator and the additional direct current control coil of the machine are analyzed in detail. The control system and strategy, the diagram of power supply system and an equivalent circuit model of the ECPMS-machine are presented. Influence of the additional excitation on the performance parameters of the machine, such as: torque, efficiency, speed limits and back-EMF have also been discussed.

  2. Machine-learned pattern identification in olfactory subtest results

    Science.gov (United States)

    Lötsch, Jörn; Hummel, Thomas; Ultsch, Alfred

    2016-01-01

    The human sense of smell is often analyzed as being composed of three main components comprising olfactory threshold, odor discrimination and the ability to identify odors. A relevant distinction of the three components and their differential changes in distinct disorders remains a research focus. The present data-driven analysis aimed at establishing a cluster structure in the pattern of olfactory subtest results. Therefore, unsupervised machine-learning was applied onto olfactory subtest results acquired in 10,714 subjects with nine different olfactory pathologies. Using the U-matrix, Emergent Self-organizing feature maps (ESOM) identified three different clusters characterized by (i) low threshold and good discrimination and identification, (ii) very high threshold associated with absent to poor discrimination and identification ability, or (iii) medium threshold, i.e., in the mid-range of possible thresholds, associated with reduced discrimination and identification ability. Specific etiologies of olfactory (dys)function were unequally represented in the clusters (p pattern recognition. PMID:27762302

  3. Implementing Scientific Simulation Codes Highly Tailored for Vector Architectures Using Custom Configurable Computing Machines

    Science.gov (United States)

    Rutishauser, David

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters

  4. The Continual Intercomparison of Radiation Codes: Results from Phase I

    Energy Technology Data Exchange (ETDEWEB)

    Oreopoulos, L.; Mlawer, Eli J.; Delamere, Jennifer; Shippert, Timothy R.; Cole, Jason; Fomin, Boris; Iacono, Michael J.; Jin, Zhonghai; Li, Jiangning; Manners, James; Raisanen, Petri; Rose, Fred; Zhang, Yuanchong; Wilson, Michael J.; Rossow, William B.

    2012-01-01

    We present results from Phase I of the Continual Intercomparison of Radiation Codes (CIRC), intended as an evolving and regularly updated reference source for evaluation of radiative transfer (RT) codes used in Global Climate Models. CIRC differs from previous intercomparisons in that it relies on an observationally validated catalogue of cases. The seven CIRC Phase I baseline cases, five cloud-free, and two with overcast liquid clouds, are built around observations by the Atmospheric Radiation Measurements (ARM) program that satisfy the goals of Phase I, namely to examine radiative transfer (RT) model performance in realistic, yet not overly complex, atmospheric conditions. In addition to the seven baseline cases, additional idealized "subcases" are also examined to facilitate intrepretation of the causes of model errors. In addition to summarizing individual model performance with respect to reference line-by-line calculations and inter-model differences, we also highlight RT model behavior for conditions of doubled CO2, aspects of utilizing a spectral specification of surface albedo, and the impact of the inclusion of scattering in the thermal infrared. Our analysis suggests that RT models should work towards improving their calculation of diffuse shortwave flux, shortwave absorption, treatment of spectral surface albedo, and shortwave CO2 forcing. On the other hand, LW calculations appear to be significantly closer to the reference results. By enhancing the range of conditions under which participating codes are tested, future CIRC phases will hopefully allow even more rigorous examination of RT code performance.

  5. Results from the First Validation Phase of CAP code

    Energy Technology Data Exchange (ETDEWEB)

    Choo, Yeon Joon; Hong, Soon Joon; Hwang, Su Hyun; Kim, Min Ki; Lee, Byung Chul [FNC Tech., SNU, Seoul (Korea, Republic of); Ha, Sang Jun; Choi, Hoon [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)

    2010-10-15

    The second stage of Safety Analysis Code Development for Nuclear Power Plants was lunched on Apirl, 2010 and is scheduled to be through 2012, of which the scope of work shall cover from code validation to licensing preparation. As a part of this project, CAP(Containment Analysis Package) will follow the same procedures. CAP's validation works are organized hieratically into four validation steps using; 1) Fundamental phenomena. 2) Principal phenomena (mixing and transport) and components in containment. 3) Demonstration test by small, middle, large facilities and International Standard Problems. 4) Comparison with other containment codes such as GOTHIC or COMTEMPT. In addition, collecting the experimental data related to containment phenomena and then constructing the database is one of the major works during the second stage as a part of this project. From the validation process of fundamental phenomenon, it could be expected that the current capability and the future improvements of CAP code will be revealed. For this purpose, simple but significant problems, which have the exact analytical solution, were selected and calculated for validation of fundamental phenomena. In this paper, some results of validation problems for the selected fundamental phenomena will be summarized and discussed briefly

  6. Monte Carlo simulation of a multi-leaf collimator design for telecobalt machine using BEAMnrc code

    Directory of Open Access Journals (Sweden)

    Ayyangar Komanduri

    2010-01-01

    Full Text Available This investigation aims to design a practical multi-leaf collimator (MLC system for the cobalt teletherapy machine and check its radiation properties using the Monte Carlo (MC method. The cobalt machine was modeled using the BEAMnrc Omega-Beam MC system, which could be freely downloaded from the website of the National Research Council (NRC, Canada. Comparison with standard depth dose data tables and the theoretically modeled beam showed good agreement within 2%. An MLC design with low melting point alloy (LMPA was tested for leakage properties of leaves. The LMPA leaves with a width of 7 mm and height of 6 cm, with tongue and groove of size 2 mm wide by 4 cm height, produced only 4% extra leakage compared to 10 cm height tungsten leaves. With finite 60 Co source size, the interleaf leakage was insignificant. This analysis helped to design a prototype MLC as an accessory mount on a cobalt machine. The complete details of the simulation process and analysis of results are discussed.

  7. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    Directory of Open Access Journals (Sweden)

    Ai-bing Zhang

    Full Text Available Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish and two representing non-coding ITS barcodes (rust fungi and brown algae. Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ and Maximum likelihood (ML methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also out-performed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI of 99.75-100%. The new methods also obtained a 96.29% success rate (95%CI: 91.62-98.40% for 484 rust fungi queries and a 98.50% success rate (95%CI: 96.60-99.37% for 1094 brown algae queries, both using ITS barcodes.

  8. Translocation Properties of Primitive Molecular Machines and Their Relevance to the Structure of the Genetic Code

    CERN Document Server

    Aldana, M; Larralde, H; Martínez-Mekler, G; Aldana, Maximino; Cocho, Germinal; Larralde, Hernan; Martinez-Mekler, Gustavo

    2002-01-01

    We address the question, related with the origin of the genetic code, of why are there three bases per codon in the translation to protein process. As a followup to our previous work, we approach this problem by considering the translocation properties of primitive molecular machines, which capture basic features of ribosomal/messenger RNA interactions, while operating under prebiotic conditions. Our model consists of a short one-dimensional chain of charged particles(rRNA antecedent) interacting with a polymer (mRNA antecedent) via electrostatic forces. The chain is subject to external forcing that causes it to move along the polymer which is fixed in a quasi one dimensional geometry. Our numerical and analytic studies of statistical properties of random chain/polymer potentials suggest that, under very general conditions, a dynamics is attained in which the chain moves along the polymer in steps of three monomers. By adjusting the model in order to consider present day genetic sequences, we show that the ab...

  9. Machines as organisms: an exploration of the relevance of recent results.

    Science.gov (United States)

    Laing, R

    1979-08-01

    The capacity of machines to exhibit organism-like behavior is examined. Some known results on machine description, self-description, construction and self-construction, are reviewed. The basic mechanism of machines and the ways in which they can be combined to form more complex biological-like systems are put forth as a source of explanatory mechanisms in biology. The proven properties can be employed in the design of machines which can repair themselves, and can exhibit a behavior distinguishing between machines which are or are not structurally similar to themselves. It is then argued that in an appropriate setting of variation and competition, such behavior would arise without explicit design.

  10. Measuring social interactions: results from the Dutch Post Code Lottery

    NARCIS (Netherlands)

    P. Kuhn; P. Kooreman; A.R. Soetevent; A. Kapteyn

    2007-01-01

    In the Dutch Post Code Lottery a postal code (19 households on average) is randomly selected weekly, and sizeable prizes (€12,500 per lottery ticket) are awarded to lottery participants living in that postal code. In addition to the monetary prizes, one of the winners wins a BMW. We analyze data on

  11. HIGH-PERFORMANCE SIMPLE-ENCODING GENERATOR-BASED SYSTEMATIC IRREGULAR LDPC CODES AND RESULTED PRODUCT CODES

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Low-Density Parity-Check (LDPC) code is one of the most exciting topics among the coding theory community. It is of great importance in both theory and practical communications over noisy channels. The most advantage of LDPC codes is their relatively lower decoding complexity compared with turbo codes, while the disadvantage is its higher encoding complexity. In this paper, a new approach is first proposed to construct high performance irregular systematic LDPC codes based on sparse generator matrix, which can significantly reduce the encoding complexity under the same decoding complexity as that of regular or irregular LDPC codes defined by traditional sparse parity-check matrix. Then, the proposed generator-based systematic irregular LDPC codes are adopted ss constituent block codes in rows and columns to design a new kind of product codes family, which also can be interpreted as irregular LDPC codes characterized by graph and thus decoded iteratively. Finally,the performance of the generator-based LDPC codes and the resultant product codes is investigated over an Additive White Gaussian Noise (AWGN) and also compared with the conventional LDPC codes under the same conditions of decoding complexity and channel noise.

  12. Evaluation of detonation energy from EXPLO5 computer code results

    Energy Technology Data Exchange (ETDEWEB)

    Suceska, M. [Brodarski Institute, Zagreb (Croatia). Marine Research and Special Technologies

    1999-10-01

    The detonation energies of several high explosives are evaluated from the results of chemical-equilibrium computer code named EXPLO5. Two methods of the evaluation of detonation energy are applied: (a) Direct evaluation from the internal energy of detonation products at the CJ point and the energy of shock compression of the detonation products, i.e. by equating the detonation energy and the heat of detonation, and (b) evaluation from the expansion isentrope of detonation products, applying the JWL model. These energies are compared to the energies computed from cylinder test derived JWL coefficients. It is found out that the detonation energies obtained directly from the energy of detonation products at the CJ point are uniformly to high (0.9445{+-}0.577 kJ/cm{sup 3}) while the detonation energies evaluated from the expansion isentrope, are in a considerable agreement (0.2072{+-}0.396 kJ/cm{sup 3}) with the energies calculated from cylinder test derived JWL coefficients. (orig.) [German] Die Detonationsenergien verschiedener Hochleistungssprengstoffe werden bewertet aus den Ergebnissen des Computer Codes fuer chemische Gleichgewichte genannt EXPLO5. Zwei Methoden wurden angewendet: (a) Direkte Bewertung aus der inneren Energie der Detonationsprodukte am CJ-Punkt und aus der Energie der Stosskompression der Detonationsprodukte, d.h. durch Gleichsetzung von Detonationsenergie und Detonationswaerme, (b) Auswertung durch die Expansions-Isentrope der Detonationsprodukte unter Anwendung des JWL-Modells. Diese Energien werden verglichen mit den berechneten Energien mit aus dem Zylindertest abgeleiteten JWL-Koeffizienten. Es wird gefunden, dass die Detonationsenergien, die direkt aus der Energie der Detonationsprodukte beim CJ-Punkt erhalten wurden, einheitlich zu hoch sind (0,9445{+-}0,577 kJ/cm{sup 3}), waehrend die aus der Expansions-Isentrope erhaltenen in guter Uebereinstimmung sind (0,2072{+-}0,396 kJ/cm{sup 3}) mit den berechneten Energien mit aus dem Zylindertest

  13. The use of machine learning with signal- and NLP processing of source code to detect and classify vulnerabilities and weaknesses with MARFCAT

    CERN Document Server

    Mokhov, Serguei A

    2010-01-01

    We present a machine learning approach to static code analysis for weaknesses related to security and others with the open-source MARF framework and its application to for the NIST's SATE 2010 static analysis tool exhibition workshop.

  14. Recent results in the decoding of Algebraic geometry codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund

    1998-01-01

    We analyse the known decoding algorithms for algebraic geometry codes in the case where the number of errors is [(d_FR-1)/2]+1, where d_FR is the Feng-Rao distance.

  15. Analysis of image content recognition algorithm based on sparse coding and machine learning

    Science.gov (United States)

    Xiao, Yu

    2017-03-01

    This paper presents an image classification algorithm based on a spatial sparse coding model and random forests. First, SIFT features are extracted from the image; sparse coding theory is then used to generate a visual vocabulary from the SIFT features, and each SIFT feature is encoded as a sparse vector over this vocabulary. By combining regional pooling with the spatial sparse vectors, the image is represented by a sparse vector of fixed dimension. Finally, a random forest classifier is trained and tested on these image sparse vectors, using the standard Caltech-101 and Scene-15 datasets. The experimental results show that the proposed algorithm can effectively represent image features and improve classification accuracy. The paper also proposes an image recognition algorithm based on image segmentation, sparse coding and multi-instance learning: the image is treated as a multi-instance bag, SIFT descriptors transformed by sparse coding serve as the instances, the visual vocabulary generated by the sparse coding model defines the feature space onto which bags are mapped by counting the instances they contain, and a 1-norm SVM is then used to classify the images and to generate sample weights that select important image features.
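    A minimal sketch of the first pipeline (local descriptors → sparse coding over a learned vocabulary → pooling to a fixed-length vector → random forest) is given below; random vectors stand in for SIFT descriptors, and the data are toy placeholders, not Caltech-101 or Scene-15.

```python
# Minimal sketch of the pipeline described above, with random local descriptors
# standing in for SIFT features (real SIFT extraction would use an image
# library such as OpenCV). Dataset and dimensions are toy placeholders.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_images, n_desc, d, n_atoms = 40, 30, 64, 16

# One (n_desc x d) block of local descriptors per image, plus a binary label.
descriptors = rng.normal(size=(n_images, n_desc, d))
labels = rng.integers(0, 2, size=n_images)

# Learn a visual vocabulary (dictionary) from all descriptors pooled together.
dico = DictionaryLearning(n_components=n_atoms, transform_algorithm="lasso_lars",
                          transform_alpha=0.1, max_iter=20, random_state=0)
dico.fit(descriptors.reshape(-1, d))

# Encode each image: sparse codes of its descriptors, max-pooled to a fixed
# n_atoms-dimensional vector (a simple stand-in for spatial pooling).
image_vectors = np.stack([np.abs(dico.transform(img)).max(axis=0)
                          for img in descriptors])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(image_vectors[:30], labels[:30])
print("toy hold-out accuracy:", clf.score(image_vectors[30:], labels[30:]))
```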

  16. On Coding the States of Sequential Machines with the Use of Partition Pairs

    DEFF Research Database (Denmark)

    Zahle, Torben U.

    1966-01-01

    This article introduces a new technique of making state assignment for sequential machines. The technique is in line with the approach used by Hartmanis [1], Stearns and Hartmanis [3], and Curtis [4]. It parallels the work of Dolotta and McCluskey [7], although it was developed independently...

  17. Equipped Search Results Using Machine Learning from Web Databases

    Directory of Open Access Journals (Sweden)

    Ahmed Mudassar Ali

    2015-05-01

    Full Text Available The aim of this study is to form clusters of search results based on similarity and to assign meaningful labels to them. Database-driven web pages play a vital role in multiple domains like online shopping, e-education systems, cloud computing and others. Such databases are accessible through HTML forms and user interfaces, and they return result pages drawn from the underlying databases according to the user query. Databases of this type are termed Web Databases (WDBs). Web databases are frequently employed to search for products online in the retail industry; they can be private to a single retailer or shared publicly by a number of retailers. When a user queries these databases using keywords, the returned search results are often misleading because there is little relevance between the keyword and the Search Results (SRs). A typical web page returned from a WDB contains multiple Search Result Records (SRRs). A practical remedy is to group similar SRRs into one cluster so that the user can focus on his or her demand. The key concept of this paper is XML technologies. In this study, we propose a novel system called CSR (Clustering Search Results), which extracts the data from the XML database, clusters them based on similarity, and finally assigns a meaningful label to each cluster. The output for an entered keyword is thus a set of clusters containing related data items.
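    The following sketch is not the CSR system itself, but illustrates the underlying idea of clustering search result records by textual similarity and labelling each cluster with its dominant terms; the records are hypothetical.

```python
# Minimal sketch (not the CSR system): cluster search result records by textual
# similarity with TF-IDF + k-means, then label each cluster by its top terms.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical SRRs as they might be extracted from an XML result page.
records = [
    "canon eos 250d dslr camera 24mp",
    "nikon d3500 dslr camera body",
    "apple iphone 13 smartphone 128gb",
    "samsung galaxy s21 smartphone 256gb",
    "sony alpha a6000 mirrorless camera",
    "google pixel 6 smartphone 5g",
]

vec = TfidfVectorizer()
X = vec.fit_transform(records)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

terms = np.array(vec.get_feature_names_out())
for c in range(km.n_clusters):
    top = terms[km.cluster_centers_[c].argsort()[::-1][:3]]
    members = [r for r, l in zip(records, km.labels_) if l == c]
    print(f"cluster {c} label: {' / '.join(top)}  ({len(members)} records)")
```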

  18. Phonemic Coding Might Result From Sensory-Motor Coupling Dynamics

    OpenAIRE

    2002-01-01

    Human sound systems are invariably phonemically coded. Furthermore, phoneme inventories follow very particular tendencies. To explain these phenomena, three kinds of approaches have so far existed: ``Chomskyan''/cognitive innatism, morpho-perceptual innatism, and the more recent approach of ``language as a complex cultural system which adapts under the pressure of efficient communication''. The first two approaches are clearly not satisfying, while the third, even if ...

  19. Initial experimental results of a machine learning-based temperature control system for an RF gun

    CERN Document Server

    Edelen, A L; Milton, S V; Chase, B E; Crawford, D J; Eddy, N; Edstrom, D; Harms, E R; Ruan, J; Santucci, J K; Stabile, P

    2015-01-01

    Colorado State University (CSU) and Fermi National Accelerator Laboratory (Fermilab) have been developing a control system to regulate the resonant frequency of an RF electron gun. As part of this effort, we present initial test results for a benchmark temperature controller that combines a machine learning-based model and a predictive control algorithm. This is part of an on-going effort to develop adaptive, machine learning-based tools specifically to address control challenges found in particle accelerator systems.

  20. High Performance Computing of Three-Dimensional Finite Element Codes on a 64-bit Machine

    Directory of Open Access Journals (Sweden)

    M.P Raju

    2012-01-01

    Full Text Available Three-dimensional Navier-Stokes finite element formulations require huge computational power in terms of memory and CPU time. Recent developments in sparse direct solvers have significantly reduced the memory and computational time of direct solution methods. The objective of this study is twofold. The first is to evaluate the performance of various state-of-the-art sequential sparse direct solvers in the context of finite element formulations of fluid flow problems. The second is to examine the merit of upgrading from a 32-bit machine to a 64-bit machine with larger RAM in terms of the ability to solve larger problems. The choice of a direct solver depends on its computational time and its in-core memory requirements. Here four different solvers, UMFPACK, MUMPS, HSL_MA78 and PARDISO, are compared. The performance of these solvers with respect to computational time and memory requirements is evaluated on a 64-bit Windows server machine with 16 GB RAM.
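    As a rough illustration of what such a benchmark measures, the sketch below times a sparse direct factorization and solve using SciPy's SuperLU interface; the solvers compared in the study (UMFPACK, MUMPS, HSL_MA78, PARDISO) require their own bindings and are not used here.

```python
# Minimal sketch of benchmarking a sparse direct solve; SciPy's SuperLU (splu)
# stands in for the solvers compared in the study, which need separate bindings.
import time
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

n = 200                                   # toy 2-D Poisson problem, n x n grid
I = sparse.identity(n)
T = sparse.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sparse.kron(I, T) + sparse.kron(T, I)).tocsc()   # 5-point Laplacian
b = np.ones(A.shape[0])

t0 = time.perf_counter()
lu = splu(A)                              # factorization (dominant memory cost)
x = lu.solve(b)                           # triangular solves
t1 = time.perf_counter()

print(f"unknowns: {A.shape[0]}, fill-in (nnz of L+U): {lu.L.nnz + lu.U.nnz}, "
      f"residual: {np.linalg.norm(A @ x - b):.2e}, time: {t1 - t0:.3f} s")
```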

  1. Recent Progress in a Beam-Beam Simulation Code for Circular Hadron Machines

    Energy Technology Data Exchange (ETDEWEB)

    Kabel, Andreas; /SLAC; Fischer, Wolfram; /Brookhaven; Sen, Tanaji; /Fermilab

    2007-09-10

    While conventional tracking codes can readily provide higher-order optical quantities and give an estimate of dynamic apertures, they are unable to provide directly measurable quantities such as lifetimes and loss rates. The particle tracking framework Plibb aims at modeling a storage ring with sufficient accuracy and a sufficiently high number of turns and in the presence of beam-beam interactions to allow for an estimate of these quantities. We provide a description of new features of the codes; we also describe a novel method of treating chromaticity in ring sections in a symplectic fashion.

  2. Machine-vision-based bar code scanning for long-range applications

    Science.gov (United States)

    Banta, Larry E.; Pertl, Franz A.; Rosenecker, Charles; Rosenberry-Friend, Kimberly A.

    1998-10-01

    Bar code labeling of products has become almost universal in most industries. However, in the steel industry, problems with high temperatures, harsh physical environments and the large sizes of the products and material handling equipment have slowed implementation of bar-code-based systems in the hot end of the mill. Typical laser-based bar code scanners have maximum scan distances of only 15 feet or so. Longer-distance models have been developed, but they require the use of retro-reflective paper labels; the labels must be very large, are expensive, and cannot stand the heat and physical abuse of the steel mill environment. Furthermore, it is often difficult to accurately point a hand-held scanner at targets in bright sunlight or at long distances. An automated product tag reading system based on CCD cameras and computer image processing has been developed by West Virginia University and demonstrated at the Weirton Steel Corporation. The system performs both the pointing and reading functions. A video camera is mounted on a pan/tilt head and connected to a personal computer through a frame grabber board. The computer analyzes the images and can identify product ID tags in a wide-angle scene. It controls the camera to point at each tag and zoom in for a close-up picture. The close-ups are analyzed, and the program reads both the bar code and the corresponding alphanumeric code on the tag. This paper describes the camera pointing and bar-code reading functions of the algorithm. A companion paper describes the OCR functions.

  3. lncRScan-SVM: A Tool for Predicting Long Non-Coding RNAs Using Support Vector Machine.

    Science.gov (United States)

    Sun, Lei; Liu, Hui; Zhang, Lin; Meng, Jia

    2015-01-01

    Functional long non-coding RNAs (lncRNAs) have been bringing novel insight into biological study; however, it is still not trivial to accurately distinguish lncRNA transcripts (LNCTs) from protein-coding ones (PCTs). As a wealth of information and data about lncRNAs has been accumulated by previous studies, it is appealing to develop novel methods to identify lncRNAs more accurately. Our method, lncRScan-SVM, aims at classifying PCTs and LNCTs using a support vector machine (SVM). The gold-standard datasets for lncRScan-SVM model training, lncRNA prediction and method comparison were constructed according to the GENCODE gene annotations of human and mouse, respectively. By integrating features derived from gene structure, transcript sequence, potential codon sequence and conservation, lncRScan-SVM outperforms other approaches, as evaluated by several criteria such as sensitivity, specificity, accuracy, Matthews correlation coefficient (MCC) and area under the curve (AUC). In addition, several known human lncRNA datasets were assessed using lncRScan-SVM. LncRScan-SVM is an efficient tool for predicting lncRNAs, and it is quite useful for current lncRNA study.
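    The general shape of such an SVM classifier is sketched below on synthetic data; the feature matrix is a placeholder for the gene-structure, sequence, codon and conservation features used by lncRScan-SVM.

```python
# Schematic sketch of SVM-based transcript classification in the spirit of
# lncRScan-SVM; the features below are synthetic stand-ins, not the tool's
# actual feature set, and the labels are generated for demonstration only.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n, d = 400, 12                              # toy transcripts x features
X = rng.normal(size=(n, d))
y = (X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=n) > 0).astype(int)  # 1 = lncRNA

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, probability=True))
model.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"toy AUC: {auc:.3f}")
```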

  4. An experimental result of surface roughness machining performance in deep hole drilling

    Directory of Open Access Journals (Sweden)

    Mohamad Azizah

    2016-01-01

    Full Text Available This study presents experimental results of a deep hole drilling process for steel at different machining parameters, namely feed rate (f), spindle speed (s), depth of hole (d) and MQL number of drops (m), with respect to surface roughness, Ra. The experiment was designed using a two-level full factorial design of experiments (DoE) with centre points to collect the surface roughness (Ra) values. Signal-to-noise (S/N) ratio analysis was used to discover the optimum level of each machining parameter in the experiment.
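    The sketch below shows the smaller-the-better form of the S/N ratio commonly used when the response (here Ra) is to be minimized; the Ra values are hypothetical, not the study's measurements.

```python
# Sketch of the smaller-the-better signal-to-noise ratio commonly used to rank
# machining-parameter levels when minimizing surface roughness Ra; the Ra values
# below are hypothetical, not the study's measurements.
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi-style S/N ratio (dB) for a response that should be minimized."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical Ra replicates (micrometres) at two spindle-speed levels.
ra_low_speed = [1.92, 2.05, 1.88]
ra_high_speed = [1.41, 1.37, 1.52]

for name, ra in [("low speed", ra_low_speed), ("high speed", ra_high_speed)]:
    print(f"{name}: S/N = {sn_smaller_is_better(ra):.2f} dB")
# The level with the larger (less negative) S/N ratio is preferred.
```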

  5. Automatic Multi-GPU Code Generation applied to Simulation of Electrical Machines

    CERN Document Server

    Rodrigues, Antonio Wendell De Oliveira; Dekeyser, Jean-Luc; Menach, Yvonnick Le

    2011-01-01

    Electrical and electronic engineering has used parallel programming to solve its large-scale complex problems for performance reasons. However, as parallel programming requires a non-trivial distribution of tasks and data, developers find it hard to implement their applications effectively. Thus, in order to reduce design complexity, we propose an approach to generate code for hybrid architectures (e.g. CPU + GPU) using OpenCL, an open standard for parallel programming of heterogeneous systems. This approach is based on Model Driven Engineering (MDE) and the MARTE profile, a standard proposed by the Object Management Group (OMG). The aim is to provide resources for non-specialists in parallel programming to implement their applications. Moreover, thanks to the capacity for model reuse, we can add or change functionalities or the target architecture. Consequently, this approach helps industries to achieve their time-to-market constraints and, as confirmed by experimental tests, delivers performance improvements using multi-GPU environmen...

  6. Canadian geothermal code for public reporting: reporting of exploration results, geothermal resources and geothermal reserves

    Energy Technology Data Exchange (ETDEWEB)

    Deibert, Lee [Meridian Environmental Consulting Ltd. (Canada); Hjartarson, Arnar [Mannvit Engineering (Canada); McDonald, Ian; Toohey, Brian [Nexen Inc. (Canada); McIlveen, John [Jacob Securities, (Canada); Thompson, Alison [Magma Energy Corp. (Canada); Yang, Daniel [Borealis Geopower Inc. (Canada)

    2010-07-01

    In December 2008, the Canadian geothermal code committee sponsored by the Canadian Geothermal Energy Association (CanGEA) was created with the intention of developing a code for public reporting of geothermal resources and reserves. The code was based on key elements of the Australian code, which was developed in 2008 by the Australian Geothermal Energy Association in collaboration with the Australian Geothermal Energy Group. The Canadian Code was developed with the purpose of being applicable to both Canadian and international geothermal plays and to offer a reporting basis which satisfies investors, shareholders and capital markets. The Canadian Geothermal Reporting Code for Public Reporting is provided herein; it is intended for all Canadian companies and their competitors. Since reporting of geothermal results is a recent activity, this Code will require further input during its implementation.

  7. Comparison of results between the ballooning-modes codes BLOON and BALOON

    Energy Technology Data Exchange (ETDEWEB)

    Munro, J.K. Jr.

    1981-08-01

    Ballooning mode equation eigenvalues calculated by two different codes, BLOON (written at General Atomic) and BALOON (written at Oak Ridge National Laboratory), have been compared for a sequence of equilibria having a range of β values. The results agree for marginal stability only. Differences away from marginal stability may be due to differences in the coordinate systems used for the analysis in the two codes. Equilibria were generated using the ISLAND code of D. Stevens of New York University. Results of various convergence studies made with the codes are presented together with recommendations for their use.

  8. An improved method for identification of small non-coding RNAs in bacteria using support vector machine

    Science.gov (United States)

    Barman, Ranjan Kumar; Mukhopadhyay, Anirban; Das, Santasabuj

    2017-04-01

    Bacterial small non-coding RNAs (sRNAs) are not translated into proteins, but act as functional RNAs. They are involved in diverse biological processes like virulence, stress response and quorum sensing. Several high-throughput techniques have enabled identification of sRNAs in bacteria, but experimental detection remains a challenge and grossly incomplete for most species. Thus, there is a need to develop computational tools to predict bacterial sRNAs. Here, we propose a computational method to identify sRNAs in bacteria using a support vector machine (SVM) classifier. The primary sequence and secondary structure features of experimentally-validated sRNAs of Salmonella Typhimurium LT2 (SLT2) were used to build the optimal SVM model. We found that a tri-nucleotide composition feature of sRNAs achieved an accuracy of 88.35% for SLT2. We validated the SVM model also on the experimentally-detected sRNAs of E. coli and Salmonella Typhi. The proposed model robustly attained an accuracy of 81.25% and 88.82% for E. coli K-12 and S. Typhi Ty2, respectively. We confirmed that this method significantly improved the identification of sRNAs in bacteria. Furthermore, we used a sliding window-based method and identified sRNAs from complete genomes of SLT2, S. Typhi Ty2 and E. coli K-12 with sensitivities of 89.09%, 83.33% and 67.39%, respectively.
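    The tri-nucleotide composition feature mentioned above can be computed as a 64-dimensional frequency vector; the sketch below illustrates this on a made-up sequence (feature vectors of this kind would then be fed to an SVM classifier as in the study).

```python
# Sketch of the tri-nucleotide (3-mer) composition feature mentioned above;
# the example sequence is made up. Vectors like this would be passed to an SVM.
from itertools import product
import numpy as np

KMERS = ["".join(p) for p in product("ACGT", repeat=3)]   # 64 possible 3-mers

def trinucleotide_composition(seq):
    """Return the normalized 3-mer frequency vector of an RNA/DNA sequence."""
    seq = seq.upper().replace("U", "T")
    counts = {k: 0 for k in KMERS}
    for i in range(len(seq) - 2):
        kmer = seq[i:i + 3]
        if kmer in counts:
            counts[kmer] += 1
    total = max(sum(counts.values()), 1)
    return np.array([counts[k] / total for k in KMERS])

vec = trinucleotide_composition("AUGGCUAGCUAACGGAUUACGCUU")   # toy sequence
print(vec.shape, vec.sum())   # (64,) 1.0
```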

  9. Dynamic parameters’ identification for the feeding system of computer numerical control machine tools stimulated by G-code

    Directory of Open Access Journals (Sweden)

    Guangsheng Chen

    2015-08-01

    Full Text Available This study proposes a dynamic parameter identification method for the feeding system of computer numerical control machine tools based on internal sensors. A simplified control model and a linear identification model of the feeding system were established, in which the input and output signals come from sensors embedded in the computer numerical control machine tools, and the dynamic parameters of the feeding system, including the equivalent inertia, equivalent damping, worktable damping, and the overall stiffness of the mechanical system, were solved by the least-squares method. Using a high-order Taylor expansion, the nonlinear Stribeck friction model was linearized and its parameters were obtained in the same way. To verify the validity and effectiveness of the identification method, identification experiments, circular motion testing, and simulations were conducted. The results obtained were stable and suggested that the inertia and damping identification experiments converged quickly. The stiffness identification experiments showed some deviation from simulation due to the influence of geometric errors and stiffness nonlinearity. However, the identification results are still of reference significance, and the method is convenient, effective, and suited to industrial conditions.
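    A minimal sketch of least-squares identification for a simplified feed-drive model T = J·dω/dt + B·ω is shown below; the signals are simulated stand-ins for the internal sensor data, and the full model in the study additionally includes stiffness and a linearized Stribeck friction term.

```python
# Minimal sketch of least-squares identification of equivalent inertia J and
# viscous damping B from torque and velocity samples, assuming the simplified
# model T = J*domega/dt + B*omega. Signals are simulated here; in the study
# they come from the CNC's internal sensors, and the full model also includes
# stiffness and a linearized Stribeck friction term.
import numpy as np

dt = 1e-3
t = np.arange(0.0, 2.0, dt)
J_true, B_true = 0.012, 0.35                   # hypothetical "true" parameters

omega = 5.0 * (1.0 - np.exp(-3.0 * t))         # simulated velocity (rad/s)
domega = np.gradient(omega, dt)                # numerical acceleration
torque = J_true * domega + B_true * omega
torque += 0.002 * np.random.default_rng(0).normal(size=t.size)   # sensor noise

# Regression matrix: each row [domega_k, omega_k]; solve for [J, B].
Phi = np.column_stack([domega, omega])
params, *_ = np.linalg.lstsq(Phi, torque, rcond=None)
print(f"identified J = {params[0]:.4f}, B = {params[1]:.3f}")
```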

  10. Armature reaction effects on a high temperature superconducting field winding of a synchronous machine: experimental results

    DEFF Research Database (Denmark)

    Mijatovic, Nenad; Jensen, Bogi Bech

    2014-01-01

    This paper presents experimental results from the Superwind laboratory setup. Particular focus in the paper has been placed on describing and quantifying the influence of armature reaction on the performance of the HTS field winding. The presented experimental results have confirmed the HTS field winding...... sensitivity to both armature reaction intensity and angular position with respect to the HTS coils. Furthermore, the characterization of the HTS field winding has been correlated to the electromagnetic torque of the machine, where a maximal Ic reduction of 21% has been observed at the maximum torque.

  11. Armature reaction effects on a high temperature superconducting field winding of a synchronous machine: experimental results

    Science.gov (United States)

    Mijatovic, Nenad; Jensen, Bogi Bech

    2014-05-01

    This paper presents experimental results from the Superwind laboratory setup. Particular focus in the paper has been placed on describing and quantifying the influence of armature reaction on the performance of the HTS field winding. The presented experimental results have confirmed the HTS field winding sensitivity to both armature reaction intensity and angular position with respect to the HTS coils. Furthermore, the characterization of the HTS field winding has been correlated to the electromagnetic torque of the machine, where a maximal Ic reduction of 21% has been observed at the maximum torque.

  12. Results of Investigative Tests of Gas Turbine Engine Compressor Blades Obtained by Electrochemical Machining

    Science.gov (United States)

    Kozhina, T. D.; Kurochkin, A. V.

    2016-04-01

    The paper highlights results of investigative tests of GTE compressor Ti-alloy blades produced by electrochemical machining with oscillating tool-electrodes, carried out in order to define the optimal parameters of the ECM process that attain the blade quality parameters specified in the design documentation while providing maximal performance. The new technological methods suggested on the basis of the test results, in particular the application of vibrating tool-electrodes and the use of locating elements made of high-strength materials, significantly extend the capabilities of this method.

  13. Results from new multi-megabar shockless compression experiments at the Z machine

    Science.gov (United States)

    Davis, Jean-Paul; Knudson, Marcus D.; Brown, Justin L.

    2017-01-01

    Sandia's Z Machine has been used to magnetically drive shockless compression of materials in a planar configuration to multi-megabar pressure levels, allowing accurate measurements of quasi-isentropic mechanical response at relatively low temperatures in the solid phase. This paper details recent improvements to design and analysis of such experiments, including the use of new data on the mechanical and optical response of lithium fluoride windows. Comparison of windowed and free-surface data on copper to 350 GPa lends confidence to the window correction method. Preliminary results are presented on gold to 500 GPa and platinum to 450 GPa; both appear stiffer than existing models.

  14. Analysis of detailed aerodynamic field measurements using results from an aeroelastic code

    Energy Technology Data Exchange (ETDEWEB)

    Schepers, J.G. [Energy Research Centre, Petten (Netherlands); Feigl, L. [Ecotecnia S. coop.c.l. (Spain); Rooij, R. van; Bruining, A. [Delft Univ. of Technology (Netherlands)

    2004-07-01

    In this article an analysis is given of aerodynamic field measurements on wind turbine blades. The analysis starts with a consistency check on the measurements, by relating the measured local aerodynamic segment forces to the overall rotor loads. It is found that the results are very consistent. Moreover, a comparison is made between measured results and results calculated from an aeroelastic code. On the basis of this comparison, the aerodynamic modelling in the aeroelastic code could be improved. This holds in particular for the modelling of 3D stall effects, not only on the lift but also on the drag, and for the modelling of tip effects (author)

  15. Sensitivity Analysis of FEAST-Metal Fuel Performance Code: Initial Results

    Energy Technology Data Exchange (ETDEWEB)

    Edelmann, Paul Guy [Los Alamos National Laboratory; Williams, Brian J. [Los Alamos National Laboratory; Unal, Cetin [Los Alamos National Laboratory; Yacout, Abdellatif [Argonne National Laboratories

    2012-06-27

    This memo documents the completion of the LANL milestone, M3FT-12LA0202041, describing methodologies and initial results using FEAST-Metal. The FEAST-Metal code calculations for this work are being conducted at LANL in support of on-going activities related to sensitivity analysis of fuel performance codes. The objective is to identify important macroscopic parameters of interest to modeling and simulation of metallic fuel performance. This report summarizes our preliminary results for the sensitivity analysis using 6 calibration datasets for metallic fuel developed at ANL for EBR-II experiments. Sensitivity ranking methodology was deployed to narrow down the selected parameters for the current study. There are approximately 84 calibration parameters in the FEAST-Metal code, of which 32 were ultimately used in Phase II of this study. Preliminary results of this sensitivity analysis led to the following ranking of FEAST models for future calibration and improvements: fuel conductivity, fission gas transport/release, fuel creep, and precipitation kinetics. More validation data is needed to validate calibrated parameter distributions for future uncertainty quantification studies with FEAST-Metal. Results of this study also served to point out some code deficiencies and possible errors, and these are being investigated in order to determine root causes and to improve upon the existing code models.

  16. SENR, A Super-Efficient Code for Gravitational Wave Source Modeling: Latest Results

    Science.gov (United States)

    Ruchlin, Ian; Etienne, Zachariah; Baumgarte, Thomas

    2017-01-01

    The science we extract from gravitational wave observations will be limited by our theoretical understanding, so with the recent breakthroughs by LIGO, reliable gravitational wave source modeling has never been more critical. Due to efficiency considerations, current numerical relativity codes are very limited in their applicability to direct LIGO source modeling, so it is important to develop new strategies for making our codes more efficient. We introduce SENR, a Super-Efficient, open-development numerical relativity (NR) code aimed at improving the efficiency of moving-puncture-based LIGO gravitational wave source modeling by 100x. SENR builds upon recent work, in which the BSSN equations are evolved in static spherical coordinates, to allow dynamical coordinates with arbitrary spatial distributions. The physical domain is mapped to a uniform-resolution grid on which derivative operations are approximated using standard central finite difference stencils. The source code is designed to be human-readable, efficient, parallelized, and readily extensible. We present the latest results from the SENR code.
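    For illustration only (this is not SENR source code), the sketch below implements a standard fourth-order central finite-difference first derivative of the kind referred to above.

```python
# Illustrative sketch (not SENR source code): a standard fourth-order central
# finite-difference first derivative on a uniform grid, the kind of stencil
# referred to above.
import numpy as np

def d_dx_4th_order(f, h):
    """Interior-point first derivative of sampled values f with spacing h."""
    df = np.empty_like(f)
    df[2:-2] = (-f[4:] + 8.0 * f[3:-1] - 8.0 * f[1:-3] + f[:-4]) / (12.0 * h)
    df[:2] = df[2]       # crude boundary fill, good enough for this demo
    df[-2:] = df[-3]
    return df

x = np.linspace(0.0, 2.0 * np.pi, 201)
h = x[1] - x[0]
err = np.abs(d_dx_4th_order(np.sin(x), h)[2:-2] - np.cos(x)[2:-2]).max()
print(f"max interior error vs. cos(x): {err:.2e}")   # shrinks ~16x per halving of h
```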

  17. Magneto-acoustic waves in sunspots: first results from a new 3D nonlinear magnetohydrodynamic code

    CERN Document Server

    Felipe, T; Collados, M

    2010-01-01

    Waves observed in the photosphere and chromosphere of sunspots show complex dynamics and spatial patterns. The interpretation of high-resolution sunspot wave observations requires modeling of three-dimensional non-linear wave propagation and mode transformation in the sunspot upper layers in realistic spot model atmospheres. Here we present the first results of such modeling. We have developed a 3D non-linear numerical code specially designed to calculate the response of magnetic structures in equilibrium to an arbitrary perturbation. The code solves the 3D nonlinear MHD equations for perturbations; it is stabilized by hyper-diffusivity terms and is fully parallelized. The robustness of the code is demonstrated by a number of standard tests. We analyze several simulations of a sunspot perturbed by pulses of different periods at subphotospheric level, from short periods, introduced for academic purposes, to longer and realistic periods of three and five minutes. We present a detailed description of the three-d...

  18. Offshore Code Comparison Collaboration, Continuation: Phase II Results of a Floating Semisubmersible Wind System: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, A.; Jonkman, J.; Musial, W.; Vorpahl, F.; Popko, W.

    2013-11-01

    Offshore wind turbines are designed and analyzed using comprehensive simulation tools that account for the coupled dynamics of the wind inflow, aerodynamics, elasticity, and controls of the turbine, along with the incident waves, sea current, hydrodynamics, and foundation dynamics of the support structure. The Offshore Code Comparison Collaboration (OC3), which operated under the International Energy Agency (IEA) Wind Task 23, was established to verify the accuracy of these simulation tools [1]. This work was then extended under the Offshore Code Comparison Collaboration, Continuation (OC4) project under IEA Wind Task 30 [2]. Both of these projects sought to verify the accuracy of offshore wind turbine dynamics simulation tools (or codes) through code-to-code comparison of simulated responses of various offshore structures. This paper describes the latest findings from Phase II of the OC4 project, which involved the analysis of a 5-MW turbine supported by a floating semisubmersible. Twenty-two different organizations from 11 different countries submitted results using 24 different simulation tools. The variety of organizations contributing to the project brought together expertise from both the offshore structure and wind energy communities. Twenty-one different load cases were examined, encompassing varying levels of model complexity and a variety of metocean conditions. Differences in the results demonstrate the importance and accuracy of the various modeling approaches used. Significant findings include the importance of mooring dynamics to the mooring loads, the role nonlinear hydrodynamic terms play in calculating drift forces for the platform motions, and the difference between global (at the platform level) and local (at the member level) modeling of viscous drag. The results from this project will help guide development and improvement efforts for these tools to ensure that they are providing the accurate information needed to support the design and

  19. Microscopic Diffusion in Stellar Evolution Codes: First Comparison results of ESTA-Task~3

    CERN Document Server

    Lebreton, Y; Christensen-Dalsgaard, J; Théado, S; Hui-Bon-Hoa, A; Monteiro, M J P F G; Degl'Innocenti, S; Marconi, M; Morel, P; Moroni, P G P; Weiss, A

    2007-01-01

    We present recent work undertaken by the Evolution and Seismic Tools Activity (ESTA) team of the CoRoT Seismology Working Group. The new ESTA-Task 3 aims at testing, comparing and optimising stellar evolution codes which include microscopic diffusion of the chemical elements resulting from pressure, temperature and concentration gradients. The results already obtained are globally satisfactory, but some differences between the different numerical tools appear that require further investigations.

  20. Test results from Siemens low-speed, high-torque HTS machine and description of further steps towards commercialisation of HTS machines

    Science.gov (United States)

    Nick, Wolfgang; Grundmann, Joern; Frauenhofer, Joachim

    2012-11-01

    With extensive testing of the 4 MW, 120 rpm HTS machine connected to a standard Siemens converter, this first development stage for a basically new technology is concluded. The most innovative part of the machine, the HTS-excited rotor, outperformed our expectations and demonstrated our capability to successfully design, develop and build such a technically challenging component. This could only be achieved on the basis of a thorough understanding of the innovative material and its behaviour, including practical handling experience, the ability to simulate 3D electromagnetics including transients, and finally the transfer of the scientists' knowledge to a qualified manufacturing process. Equally important are the improved capabilities of critical component suppliers, e.g. for superconducting tapes and compact cryo-refrigerators. However, the transition of a technology into highly reliable industrial products requires more than technical mastery of the machine. Based on the outstanding technical test results presented above, the next step can now be addressed: product development. Some thoughts are presented regarding the needs of application fields and market-oriented development, as the market is not "waiting for HTS". If HTS technology is seen as one key technology for a sustainable, material-saving and energy-efficient future, it certainly needs more effort, even at the 100th anniversary of superconductivity.

  1. Results of error correction techniques applied on two high accuracy coordinate measuring machines

    Energy Technology Data Exchange (ETDEWEB)

    Pace, C.; Doiron, T.; Stieren, D.; Borchardt, B.; Veale, R. (Sandia National Labs., Albuquerque, NM (USA); National Inst. of Standards and Technology, Gaithersburg, MD (USA))

    1990-01-01

    The Primary Standards Laboratory at Sandia National Laboratories (SNL) and the Precision Engineering Division at the National Institute of Standards and Technology (NIST) are in the process of implementing software error correction on two nearly identical high-accuracy coordinate measuring machines (CMMs). Both machines are Moore Special Tool Company M-48 CMMs fitted with laser positioning transducers. Although both machines were manufactured to high tolerance levels, the overall volumetric accuracy was insufficient for calibrating standards to the levels both laboratories require. The error mapping procedure was developed at NIST in the mid-1970s on an earlier but similar model. The error mapping procedure was originally very complicated and did not make any assumptions about the rigidity of the machine as it moved; each of the possible error motions was measured at each point of the error map independently. A simpler mapping procedure, which assumed rigid-body motion of the machine, was developed during the early 1980s. This method has been used to calibrate lower-accuracy machines with a high degree of success, and similar software correction schemes have been implemented by many CMM manufacturers. The rigid-body model has not yet been used on highly repeatable CMMs such as the M-48. In this report we present early mapping data for the two M-48 CMMs. The SNL CMM was manufactured in 1985 and has been in service for approximately four years, whereas the NIST CMM was delivered in early 1989. 4 refs., 5 figs.

  2. Offshore Code Comparison Collaboration within IEA Wind Annex XXIII: Phase II Results Regarding Monopile Foundation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Jonkman, J.; Butterfield, S.; Passon, P.; Larsen, T.; Camp, T.; Nichols, J.; Azcona, J.; Martinez, A.

    2008-01-01

    This paper presents an overview and describes the latest findings of the code-to-code verification activities of the Offshore Code Comparison Collaboration, which operates under Subtask 2 of the International Energy Agency Wind Annex XXIII.

  3. A multigroup radiation diffusion test problem: Comparison of code results with analytic solution

    Energy Technology Data Exchange (ETDEWEB)

    Shestakov, A I; Harte, J A; Bolstad, J H; Offner, S R

    2006-12-21

    We consider a 1D, slab-symmetric test problem for the multigroup radiation diffusion and matter energy balance equations. The test simulates diffusion of energy from a hot central region. Opacities vary with the cube of the frequency and radiation emission is given by a Wien spectrum. We compare results from two LLNL codes, Raptor and Lasnex, with tabular data that define the analytic solution.

  4. Further results on binary convolutional codes with an optimum distance profile

    DEFF Research Database (Denmark)

    Johannesson, Rolf; Paaske, Erik

    1978-01-01

    Fixed binary convolutional codes are considered which are simultaneously optimal or near-optimal according to three criteria: namely, distance profile d, free distance d_∞, and minimum number of weight-d_∞ paths. It is shown how the optimum distance profile criterion can be used to limit the search for codes with a large value of d_∞. We present extensive lists of such robustly optimal codes containing rate R = 1/2 nonsystematic codes, several with d_∞ superior to that of any previously known code of the same rate and memory; rate R = 2/3 systematic codes; and rate R = 2/3 nonsystematic codes. As a counterpart to quick-look-in (QLI) codes, which are not "transparent," we introduce rate R = 1/2 easy-look-in-transparent (ELIT) codes with a feedforward inverse (1 + D, D). In general, ELIT codes have d_∞ superior to that of QLI codes.

  5. Comparisons of the simulation results using different codes for ADS spallation target

    CERN Document Server

    Yu Hong Wei; Shen Qing Biao; Wan Jun Sheng; Zhao Zhi Xiang

    2002-01-01

    Calculations for a standard thick target were made using different codes. A thick Pb target 60 cm long and 20 cm in diameter, bombarded with 800, 1000, 1500 and 2000 MeV proton beams, was simulated. The yields and spectra of the emitted neutrons were studied. The spallation target was simulated with the SNSP, SHIELD, DCM/CEM (Dubna Cascade Model/Cascade Evaporation Model) and LAHET codes. The simulation results were compared with experiments. The comparisons show good agreement between the experiments and the SNSP-simulated leakage neutron yield. The SHIELD-simulated leakage neutron spectra are in good agreement with the LAHET- and DCM/CEM-simulated leakage neutron spectra.

  6. Machine learning methods for the classification of gliomas: Initial results using features extracted from MR spectroscopy.

    Science.gov (United States)

    Ranjith, G; Parvathy, R; Vikas, V; Chandrasekharan, Kesavadas; Nair, Suresh

    2015-04-01

    With the advent of new imaging modalities, radiologists are faced with handling increasing volumes of data for diagnosis and treatment planning. The use of automated and intelligent systems is becoming essential in such a scenario. Machine learning, a branch of artificial intelligence, is increasingly being used in medical image analysis applications such as image segmentation, registration and computer-aided diagnosis and detection. Histopathological analysis is currently the gold standard for classification of brain tumors. The use of machine learning algorithms along with extraction of relevant features from magnetic resonance imaging (MRI) holds promise of replacing conventional invasive methods of tumor classification. The aim of the study is to classify gliomas into benign and malignant types using MRI data. Retrospective data from 28 patients who were diagnosed with glioma were used for the analysis. WHO Grade II (low-grade astrocytoma) was classified as benign while Grade III (anaplastic astrocytoma) and Grade IV (glioblastoma multiforme) were classified as malignant. Features were extracted from MR spectroscopy. The classification was done using four machine learning algorithms: multilayer perceptrons, support vector machine, random forest and locally weighted learning. Three of the four machine learning algorithms gave an area under the ROC curve in excess of 0.80. Random forest gave the best performance in terms of AUC (0.911) while sensitivity was best for locally weighted learning (86.1%). The performance of different machine learning algorithms in the classification of gliomas is promising. An even better performance may be expected by integrating features extracted from other MR sequences.
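    A schematic version of such a classifier comparison on synthetic data is sketched below; locally weighted learning has no direct scikit-learn counterpart, so k-nearest neighbours is used as a rough stand-in, and no patient data are involved.

```python
# Schematic comparison of classifiers by ROC AUC on synthetic data, mirroring
# the kind of comparison reported above. Locally weighted learning (a Weka
# method) has no direct scikit-learn equivalent, so k-nearest neighbours is
# used as a rough stand-in; all data here are synthetic, not patient data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "k-NN (stand-in for LWL)": KNeighborsClassifier(n_neighbors=7),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:24s} AUC = {auc:.3f}")
```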

  7. AMENDMENTS TO THE FISCAL CODE REGARDING THE EXPENDITURES AND THE DETERMINATION OF THE EXERCISE RESULT

    Directory of Open Access Journals (Sweden)

    HOLT GHEORGHE

    2017-02-01

    Full Text Available The Fiscal Code has brought many changes to the structure of the expenses used to calculate the result of the exercise and, thus, the profits tax. It treats the three categories of expenses (deductible, with limited deductibility, and non-deductible), bringing numerous changes to the structure of each of them. Expenses are recorded as decreases in economic benefits during the accounting period, as outflows or decreases in assets and increases in the value of debt, reflected in reductions of equity other than those arising from distributions to shareholders. The Fiscal Code has brought many changes to tax legislation in Romania, all of its titles being affected, with particular importance given to the changes regarding the deductibility of expenses, the items that are the subject of this material. The basic concept regarding the deduction of expenses has been reformulated in the Fiscal Code, so that currently expenses incurred for business purposes are deductible, unlike the general rule of deductibility valid until 31 December 2015, under which only expenses incurred in order to earn taxable income were deductible.

  8. CODES AND PRACTICES OF IMPLEMENTATION OF CORPORATE GOVERNANCE IN ROMANIA AND RESULTS REPORTING

    Directory of Open Access Journals (Sweden)

    GROSU MARIA

    2011-12-01

    Full Text Available Corporate governance refers to the manner in which companies are directed and controlled. Business management has always been guided by certain principles, but the current meaning of corporate governance also concerns the contribution that companies must make to the overall development of modern society. Romania was quite late in adopting a code of good practice in corporate governance, driven, in particular, by the privatization process, but also by the transfer of control and surveillance from political organizations to the Board of Directors (BD). Adoption of corporate governance codes is necessary to harmonize internal business requirements with those of a functioning market economy. In addition, for the CEE countries, the European Commission adopted an action plan announcing measures to modernize company law and enhance corporate governance. Romania is taking steps in this direction by amending the Company Law and other regulations, although practice does not necessarily keep pace with the requirements. This study aims, on the one hand, at an analysis of the evolution of the corporate governance codes adopted in Romania and, on the other, at an empirical investigation of the implementation of corporate governance principles in a representative sample of companies listed on the Bucharest Stock Exchange (BSE). We consider the research methodology relevant because the issuer of the corporate governance codes in Romania is the BSE, which asks listed companies for their voluntary implementation. The implementation results are summarized and interpreted on the basis of the public reports of the companies studied. Most studies undertaken in this direction have been made on multinational companies, which follow the corporate governance codes of their countries of origin. In addition, many studies emphasize the fair treatment of stakeholders rather than the models of governance adopted (monist/dualist), with implications for optimizing economic as well as social objectives. The research undertaken attempts to highlight on the one

  9. The Aachen miniaturized heart-lung machine--first results in a small animal model.

    Science.gov (United States)

    Schnoering, Heike; Arens, Jutta; Sachweh, Joerg S; Veerman, Melanie; Tolba, Rene; Schmitz-Rode, Thomas; Steinseifer, Ulrich; Vazquez-Jimenez, Jaime F

    2009-11-01

    Congenital heart surgery most often incorporates extracorporeal circulation. Due to foreign surface contact and the administration of foreign blood in many children, inflammatory response and hemolysis are important matters of debate. This is particularly an issue in premature and low birth-weight newborns. Taking these considerations into account, the Aachen miniaturized heart-lung machine (MiniHLM) with a total static priming volume of 102 mL (including tubing) was developed and tested in a small animal model. Fourteen female Chinchilla Bastard rabbits were operated on using two different kinds of circuits. In eight animals, a conventional HLM with Dideco Kids oxygenator and Stöckert roller pump (Sorin group, Milan, Italy) was used, and the Aachen MiniHLM was employed in six animals. Outcome parameters were hemolysis and blood gas analysis including lactate. The rabbits were anesthetized, and a standard median sternotomy was performed. The ascending aorta and the right atrium were cannulated. After initiating cardiopulmonary bypass, the aorta was cross-clamped, and cardiac arrest was induced by blood cardioplegia. Blood samples for hemolysis and blood gas analysis were drawn before, during, and after cardiopulmonary bypass. After 1 h aortic clamp time, all animals were weaned from cardiopulmonary bypass. Blood gas analysis revealed adequate oxygenation and perfusion during cardiopulmonary bypass, irrespective of the employed perfusion system. The use of the Aachen MiniHLM resulted in a statistically significant reduced decrease in fibrinogen during cardiopulmonary bypass. A trend revealing a reduced increase in free hemoglobin during bypass in the MiniHLM group could also be observed. This newly developed Aachen MiniHLM with low priming volume, reduced hemolysis, and excellent gas transfer (O(2) and CO(2)) may reduce circuit-induced complications during heart surgery in neonates.

  10. Processing Code and Maintenance and Safety Operation for CNC Machining Center

    Institute of Scientific and Technical Information of China (English)

    罗昊

    2012-01-01

    This paper introduces the CNC machining center processing code of practice, the CNC machining center maintenance procedures, and the CNC machining center safety operation rules. It is hoped that this will be of some help to CNC machining personnel.

  11. Application of distance-coded reference measuring system on rotary swivel drive of the 3D laser cutting machine

    Institute of Scientific and Technical Information of China (English)

    翟东升; 钟昇; 洪超

    2013-01-01

    This paper introduces the application of a distance-coded reference measuring system on the rotary swivel drive of a 3D laser cutting machine. The method and conclusions provide a reference for the application of distance-coded reference measuring systems on other machine tools.

  12. Ex vivo normothermic machine perfusion is safe, simple, and reliable: results from a large animal model.

    Science.gov (United States)

    Nassar, Ahmed; Liu, Qiang; Farias, Kevin; D'Amico, Giuseppe; Tom, Cynthia; Grady, Patrick; Bennett, Ana; Diago Uso, Teresa; Eghtesad, Bijan; Kelly, Dympna; Fung, John; Abu-Elmagd, Kareem; Miller, Charles; Quintini, Cristiano

    2015-02-01

    Normothermic machine perfusion (NMP) is an emerging preservation modality that holds the potential to prevent the injury associated with low temperature and to promote organ repair that follows ischemic cell damage. While several animal studies have shown its superiority over cold storage (CS), few studies in the literature have focused on the safety, feasibility, and reliability of this technology, which represent key factors in its implementation into clinical practice. The aim of the present study is to report safety and performance data on NMP of DCD porcine livers. After 60 minutes of warm ischemia time, 20 pig livers were preserved using either NMP (n = 15; physiologic perfusion temperature) or CS (n = 5) for a preservation time of 10 hours. Livers were then tested on a transplant simulation model for 24 hours. Machine safety was assessed by measuring system failure events, the ability to monitor perfusion parameters, sterility, and vessel integrity. The ability of the machine to preserve injured organs was assessed by liver function tests, hemodynamic parameters, and histology. No system failures were recorded. Target hemodynamic parameters were easily achieved and vascular complications were not encountered. Liver function parameters as well as histology showed significant differences between the 2 groups, with NMP livers showing preserved liver function and histological architecture, while CS livers presented postreperfusion parameters consistent with unrecoverable cell injury. Our study shows that NMP is safe, reliable, and provides superior graft preservation compared to CS in our DCD porcine model.

  13. Biomarkers of Eating Disorders Using Support Vector Machine Analysis of Structural Neuroimaging Data: Preliminary Results.

    Science.gov (United States)

    Cerasa, Antonio; Castiglioni, Isabella; Salvatore, Christian; Funaro, Angela; Martino, Iolanda; Alfano, Stefania; Donzuso, Giulia; Perrotta, Paolo; Gioia, Maria Cecilia; Gilardi, Maria Carla; Quattrone, Aldo

    2015-01-01

    Presently, there are no valid biomarkers to identify individuals with eating disorders (ED). The aim of this work was to assess the feasibility of a machine learning method for extracting reliable neuroimaging features allowing individual categorization of patients with ED. Support Vector Machine (SVM) technique, combined with a pattern recognition method, was employed utilizing structural magnetic resonance images. Seventeen females with ED (six with diagnosis of anorexia nervosa and 11 with bulimia nervosa) were compared against 17 body mass index-matched healthy controls (HC). Machine learning allowed individual diagnosis of ED versus HC with an Accuracy ≥ 0.80. Voxel-based pattern recognition analysis demonstrated that voxels influencing the classification Accuracy involved the occipital cortex, the posterior cerebellar lobule, precuneus, sensorimotor/premotor cortices, and the medial prefrontal cortex, all critical regions known to be strongly involved in the pathophysiological mechanisms of ED. Although these findings should be considered preliminary given the small size investigated, SVM analysis highlights the role of well-known brain regions as possible biomarkers to distinguish ED from HC at an individual level, thus encouraging the translational implementation of this new multivariate approach in the clinical practice.

  14. Biomarkers of Eating Disorders Using Support Vector Machine Analysis of Structural Neuroimaging Data: Preliminary Results

    Directory of Open Access Journals (Sweden)

    Antonio Cerasa

    2015-01-01

    Full Text Available Presently, there are no valid biomarkers to identify individuals with eating disorders (ED). The aim of this work was to assess the feasibility of a machine learning method for extracting reliable neuroimaging features allowing individual categorization of patients with ED. Support Vector Machine (SVM) technique, combined with a pattern recognition method, was employed utilizing structural magnetic resonance images. Seventeen females with ED (six with diagnosis of anorexia nervosa and 11 with bulimia nervosa) were compared against 17 body mass index-matched healthy controls (HC). Machine learning allowed individual diagnosis of ED versus HC with an Accuracy ≥ 0.80. Voxel-based pattern recognition analysis demonstrated that voxels influencing the classification Accuracy involved the occipital cortex, the posterior cerebellar lobule, precuneus, sensorimotor/premotor cortices, and the medial prefrontal cortex, all critical regions known to be strongly involved in the pathophysiological mechanisms of ED. Although these findings should be considered preliminary given the small size investigated, SVM analysis highlights the role of well-known brain regions as possible biomarkers to distinguish ED from HC at an individual level, thus encouraging the translational implementation of this new multivariate approach in the clinical practice.

  15. Nuclear Reactor Component Code CUPID-I: Numerical Scheme and Preliminary Assessment Results

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Hyoung Kyu; Jeong, Jae Jun; Park, Ik Kyu; Kim, Jong Tae; Yoon, Han Young

    2007-12-15

    A component-scale thermal-hydraulic analysis code, CUPID (Component Unstructured Program for Interfacial Dynamics), is being developed for the analysis of components of a nuclear reactor, such as the reactor vessel, steam generator, containment, etc. It adopts a three-dimensional, transient, two-phase, three-field model. In order to develop the numerical schemes for the three-field model, various numerical schemes have been examined, including SMAC, semi-implicit ICE, SIMPLE, the Row Scheme and so on. Among them, the ICE scheme for the three-field model is presented in this report. The CUPID code uses an unstructured mesh for the simulation of the complicated geometries of nuclear reactor components. The conventional ICE scheme that was applied to RELAP5 and COBRA-TF was therefore modified for application to the unstructured mesh. Preliminary calculations for the unstructured semi-implicit ICE scheme have been conducted to verify the numerical method from a qualitative point of view. The preliminary calculation results showed that the present numerical scheme is robust and efficient for the prediction of phase changes and flow transitions due to boiling and flashing. These calculation results also showed the strong coupling between the pressure and void fraction changes. Thus, it is believed that the semi-implicit ICE scheme can be utilized for transient two-phase flows in a component of a nuclear reactor.

  16. Quantum Virtual Machine (QVM)

    Energy Technology Data Exchange (ETDEWEB)

    2016-11-18

    There is a lack of state-of-the-art HPC simulation tools for simulating general quantum computing. Furthermore, there are no real software tools that integrate current quantum computers into existing classical HPC workflows. This product, the Quantum Virtual Machine (QVM), solves this problem by providing an extensible framework for pluggable virtual, or physical, quantum processing units (QPUs). It enables the execution of low level quantum assembly codes and returns the results of such executions.

  17. Machine medical ethics

    CERN Document Server

    Pontier, Matthijs

    2015-01-01

    The essays in this book, written by researchers from both humanities and sciences, describe various theoretical and experimental approaches to adding medical ethics to a machine in medical settings. Medical machines are in close proximity with human beings, and getting closer: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. In such contexts, machines are undertaking important medical tasks that require emotional sensitivity, knowledge of medical codes, human dignity, and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What theory or theories should constrain medical machine conduct? What design features are required? Should machines share responsibility with humans for the ethical consequences of medical actions? How ought clinical relationships involving machines to be modeled? Is a capacity for e...

  18. An improved tip-loss correction based on vortex code results

    DEFF Research Database (Denmark)

    Branlard, Emmanuel; Dixon, Kristian; Gaunaa, Mac

    2012-01-01

    Standard blade element momentum(BEM) codes use Prandtl’s tip-loss correction which relies on simplified vortex theory under the assumption of optimal operating condition and no wake expansion. A new tip-loss correction for implementation in BEM codes has been developed using a lifting-line code...... to account for the effect of wake expansion, roll-up and distortion under any operating conditions. A database of tip-loss corrections is established for further use in BEM codes. Using this model a more physical representation of the flow and hence a better assessment of the performance of the turbine...
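    For reference, the standard Prandtl tip-loss factor that the improved, vortex-code-based correction is meant to replace can be written F = (2/π) arccos(exp(−B(R−r)/(2 r sin φ))); a small sketch follows, with illustrative values only.

```python
# Sketch of the standard Prandtl tip-loss factor referred to above; the radii,
# blade count and inflow angle are illustrative values only.
import numpy as np

def prandtl_tip_loss(r, R, B, phi):
    """Prandtl tip-loss factor F at radius r for B blades, tip radius R and
    local inflow angle phi (radians)."""
    f = B * (R - r) / (2.0 * r * np.sin(phi))
    return (2.0 / np.pi) * np.arccos(np.exp(-f))

r = np.linspace(0.2, 0.999, 5) * 50.0           # spanwise stations, R = 50 m
F = prandtl_tip_loss(r, R=50.0, B=3, phi=np.radians(8.0))
print(np.round(F, 3))                           # F -> 1 inboard, -> 0 at the tip
```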

  19. An improved tip-loss correction based on vortex code results

    DEFF Research Database (Denmark)

    Branlard, Emmanuel; Dixon, Kristian; Gaunaa, Mac

    Standard blade element momentum(BEM) codes use Prandtl’s tip-loss correction which relies on simplified vortex theory under the assumption of optimal operating condition and no wake expansion. A new tip-loss correction for implementation in BEM codes has been developed using a lifting-line code...... to account for the effect of wake expansion, roll-up and distortion under any operating conditions. A database of tip-loss corrections is established for further use in BEM codes. Using this model a more physical representation of the flow and hence a better assessment of the performance of the turbine...

  20. Untyped Memory in the Java Virtual Machine

    DEFF Research Database (Denmark)

    Gal, Andreas; Probst, Christian; Franz, Michael

    2005-01-01

    We have implemented a virtual execution environment that executes legacy binary code on top of the type-safe Java Virtual Machine by recompiling native code instructions to type-safe bytecode. As it is essentially impossible to infer static typing into untyped machine code, our system emulates...... untyped memory on top of Java’s type system. While this approach allows to execute native code on any off-the-shelf JVM, the resulting runtime performance is poor. We propose a set of virtual machine extensions that add type-unsafe memory objects to JVM. We contend that these JVM extensions do not relax...... Java’s type system as the same functionality can be achieved in pure Java, albeit much less efficiently....

  1. Potential Job Creation in Tennessee as a Result of Adopting New Residential Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Michael J.; Niemeyer, Jackie M.

    2013-09-01

    Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.

  2. Potential Job Creation in Rhode Island as a Result of Adopting New Residential Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Michael J.; Niemeyer, Jackie M.

    2013-09-01

    Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment, with varying degrees of success and failure. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool for creating jobs and assisting states in recovering from this economic downturn.

  3. Potential Job Creation in Nevada as a Result of Adopting New Residential Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Michael J.; Niemeyer, Jackie M.

    2013-09-01

    Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment, with varying degrees of success and failure. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool for creating jobs and assisting states in recovering from this economic downturn.

  4. Potential Job Creation in Minnesota as a Result of Adopting New Residential Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Michael J.; Niemeyer, Jackie M.

    2013-09-01

    Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment, with varying degrees of success and failure. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool for creating jobs and assisting states in recovering from this economic downturn.

  5. Simulation of plasma turbulence in scrape-off layer conditions: the GBS code, simulation results and code validation

    Science.gov (United States)

    Ricci, P.; Halpern, F. D.; Jolliet, S.; Loizu, J.; Mosetto, A.; Fasoli, A.; Furno, I.; Theiler, C.

    2012-12-01

    Based on the drift-reduced Braginskii equations, the Global Braginskii Solver, GBS, is able to model the scrape-off layer (SOL) plasma turbulence in terms of the interplay between the plasma outflow from the tokamak core, the turbulent transport, and the losses at the vessel. Model equations, the GBS numerical algorithm, and GBS simulation results are described. GBS was first developed to model turbulence in basic plasma physics devices, such as linear and simple magnetized toroidal devices, which contain some of the main elements of SOL turbulence in a simplified setting. In this paper we summarize the findings obtained from the simulations carried out in these configurations and report the first simulations of SOL turbulence. We also discuss the validation project that has been carried out together with the GBS development.

  6. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    CERN Document Server

    Blazewicz, Marek; Koppelman, David M; Brandt, Steven R; Ciznicki, Milosz; Kierzynka, Michal; Löffler, Frank; Tao, Jian

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of va...

  7. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    Directory of Open Access Journals (Sweden)

    Marek Blazewicz

    2013-01-01

    Full Text Available Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  8. Results and code predictions for ABCOVE (aerosol behavior code validation and evaluation) aerosol code validation with low concentration NaOH and NaI aerosol: CSTF test AB7

    Energy Technology Data Exchange (ETDEWEB)

    Hilliard, R.K.; McCormack, J.D.; Muhlestein, L.D.

    1985-10-01

    A program for aerosol behavior validation and evaluation (ABCOVE) has been developed in accordance with the LMFBR Safety Program Plan. The ABCOVE program is a cooperative effort between the USDOE, the USNRC, and their contractor organizations currently involved in aerosol code development, testing or application. The third large-scale test in the ABCOVE program, AB7, was performed in the 850-m³ CSTF vessel with a two-species test aerosol. The test conditions involved the release of a simulated fission product aerosol, NaI, into the containment atmosphere after the end of a small sodium pool fire. Four organizations made pretest predictions of aerosol behavior using five computer codes. Two of the codes (QUICKM and CONTAIN) were discrete, multiple species codes, while three (HAA-3, HAA-4, and HAARM-3) were log-normal codes which assume uniform coagglomeration of different aerosol species. Detailed test results are presented and compared with the code predictions for eight key aerosol behavior parameters. 11 refs., 44 figs., 35 tabs.

  9. The SNS Machine Protection System Early Commissioning Results and Future Plans

    CERN Document Server

    Sibley, Coles; Jones, Alan; Justice, Thomas A; Thompson, Dave H

    2005-01-01

    The Spallation Neutron Source under construction in Oak Ridge, TN, has commissioned low-power beam up to 187 MeV. The number of MPS inputs is about 20% of the final number envisioned. Start-up problems, including noise and false trips, have largely been overcome by replacing copper with fiber and adding filters as required. Initial recovery time from Machine Protection System (MPS) trips was slow due to a hierarchy of latched inputs in the system: at the device level, at the MPS input layer, and at the operator interface level. By reprogramming the MPS FPGA such that all resets were at the input devices, MPS availability improved to acceptable levels. For early commissioning, MPS inputs will be limited to beam line devices that will prohibit beam operation. For later operation, the number of MPS inputs will increase as both software alarms and less intrusive MPS inputs, such as steering magnets, are implemented. Two upgrades to SNS are on the horizon: a 3 MW upgrade and a second target station. Although these are yea...

  10. Effect of Micro Electrical Discharge Machining Process Conditions on Tool Wear Characteristics: Results of an Analytic Study

    DEFF Research Database (Denmark)

    Puthumana, Govindan; P., Rajeev

    2016-01-01

    Micro electrical discharge machining is one of the established techniques to manufacture high aspect ratio features on electrically conductive materials. This paper presents the results and inferences of an analytical study for estimating the effect of process conditions on tool electrode wear...... characteristics in the micro-EDM process. A new approach with two novel factors anticipated to directly control the material removal mechanism from the tool electrode is proposed, using a discharge energy factor (DEf) and a dielectric flushing factor (DFf). The results showed that the correlation between the tool wear rate...

  11. Coupling External Radiation Transport Code Results to the GADRAS Detector Response Function

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Dean J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Contraband Detection; Thoreson, Gregory G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Contraband Detection; Horne, Steven M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Contraband Detection

    2014-01-01

    Simulating gamma spectra is useful for analyzing special nuclear materials. Gamma spectra are influenced not only by the source and the detector, but also by the external, and potentially complex, scattering environment. The scattering environment can make accurate representations of gamma spectra difficult to obtain. By coupling the Monte Carlo Nuclear Particle (MCNP) code with the Gamma Detector Response and Analysis Software (GADRAS) detector response function, gamma spectrum simulations can be computed with a high degree of fidelity even in the presence of a complex scattering environment. Traditionally, GADRAS represents the external scattering environment with empirically derived scattering parameters. By modeling the external scattering environment in MCNP and using the results as input for the GADRAS detector response function, gamma spectra can be obtained with a high degree of fidelity. This method was verified with experimental data obtained in an environment with a significant amount of scattering material. The experiment used both gamma-emitting sources and moderated and bare neutron-emitting sources. The sources were modeled using GADRAS and MCNP in the presence of the external scattering environment, producing accurate representations of the experimental data.

  12. Coupling External Radiation Transport Code Results to the GADRAS Detector Response Function

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Dean J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Contraband Detection; Thoreson, Gregory G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Contraband Detection; Horne, Steven M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Contraband Detection

    2014-01-01

    Simulating gamma spectra is useful for analyzing special nuclear materials. Gamma spectra are influenced not only by the source and the detector, but also by the external, and potentially complex scattering environment. The scattering environment can make accurate representations of gamma spectra difficult to obtain. By coupling the Monte Carlo Nuclear Particle (MCNP) code with the Gamma Detector Response and Analysis Software (GADRAS) detector response function, gamma spectrum simulations can be computed with a high degree of fidelity even in the presence of a complex scattering environment. Traditionally, GADRAS represents the external scattering environment with empirically derived scattering parameters. By modeling the external scattering environment in MCNP and using the results as input for the GADRAS detector response function, gamma spectra can be obtained with a high degree of fidelity. This method was verified with experimental data obtained in an environment with a significant amount of scattering material. The experiment used both gamma-emitting sources and moderated and bare neutron-emitting sources. The sources were modeled using GADRAS and MCNP in the presence of the external scattering environment, producing accurate representations of the experimental data.

  13. Offshore Code Comparison Collaboration, Continuation within IEA Wind Task 30: Phase II Results Regarding a Floating Semisubmersible Wind System: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, A.; Jonkman, J.; Vorpahl, F.; Popko, W.; Qvist, J.; Froyd, L.; Chen, X.; Azcona, J.; Uzungoglu, E.; Guedes Soares, C.; Luan, C.; Yutong, H.; Pengcheng, F.; Yde, A.; Larsen, T.; Nichols, J.; Buils, R.; Lei, L.; Anders Nygard, T.; et al.

    2014-03-01

    Offshore wind turbines are designed and analyzed using comprehensive simulation tools (or codes) that account for the coupled dynamics of the wind inflow, aerodynamics, elasticity, and controls of the turbine, along with the incident waves, sea current, hydrodynamics, and foundation dynamics of the support structure. This paper describes the latest findings of the code-to-code verification activities of the Offshore Code Comparison Collaboration, Continuation (OC4) project, which operates under the International Energy Agency (IEA) Wind Task 30. In the latest phase of the project, participants used an assortment of simulation codes to model the coupled dynamic response of a 5-MW wind turbine installed on a floating semisubmersible in 200 m of water. Code predictions were compared from load-case simulations selected to test different model features. The comparisons have resulted in a greater understanding of offshore floating wind turbine dynamics and modeling techniques, and better knowledge of the validity of various approximations. The lessons learned from this exercise have improved the participants' codes, thus improving the standard of offshore wind turbine modeling.

  14. Offshore Code Comparison Collaboration within IEA Wind Task 23: Phase IV Results Regarding Floating Wind Turbine Modeling; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Jonkman, J.; Larsen, T.; Hansen, A.; Nygaard, T.; Maus, K.; Karimirad, M.; Gao, Z.; Moan, T.; Fylling, I.

    2010-04-01

    Offshore wind turbines are designed and analyzed using comprehensive simulation codes that account for the coupled dynamics of the wind inflow, aerodynamics, elasticity, and controls of the turbine, along with the incident waves, sea current, hydrodynamics, and foundation dynamics of the support structure. This paper describes the latest findings of the code-to-code verification activities of the Offshore Code Comparison Collaboration, which operates under Subtask 2 of the International Energy Agency Wind Task 23. In the latest phase of the project, participants used an assortment of codes to model the coupled dynamic response of a 5-MW wind turbine installed on a floating spar buoy in 320 m of water. Code predictions were compared from load-case simulations selected to test different model features. The comparisons have resulted in a greater understanding of offshore floating wind turbine dynamics and modeling techniques, and better knowledge of the validity of various approximations. The lessons learned from this exercise have improved the participants' codes, thus improving the standard of offshore wind turbine modeling.

  15. Debugging the virtual machine

    Energy Technology Data Exchange (ETDEWEB)

    Miller, P.; Pizzi, R.

    1994-09-02

    A computer program is really nothing more than a virtual machine built to perform a task. The program's source code expresses abstract constructs using low level language features. When a virtual machine breaks, it can be very difficult to debug because typical debuggers provide only low level machine implementation information to the software engineer. We believe that the debugging task can be simplified by introducing aspects of the abstract design into the source code. We introduce OODIE, an object-oriented language extension that allows programmers to specify a virtual debugging environment which includes the design and abstract data types of the virtual machine.

  16. Unsupervised nonlinear dimensionality reduction machine learning methods applied to multiparametric MRI in cerebral ischemia: preliminary results

    Science.gov (United States)

    Parekh, Vishwa S.; Jacobs, Jeremy R.; Jacobs, Michael A.

    2014-03-01

    The evaluation and treatment of acute cerebral ischemia requires a technique that can determine the total area of tissue at risk for infarction using diagnostic magnetic resonance imaging (MRI) sequences. Typical MRI data sets consist of T1- and T2-weighted imaging (T1WI, T2WI) along with advanced MRI parameters of diffusion-weighted imaging (DWI) and perfusion weighted imaging (PWI) methods. Each of these parameters has distinct radiological-pathological meaning. For example, DWI interrogates the movement of water in the tissue and PWI gives an estimate of the blood flow; both are critical measures during the evolution of stroke. In order to integrate these data and give an estimate of the tissue at risk or damaged, we have developed advanced machine learning methods based on unsupervised non-linear dimensionality reduction (NLDR) techniques. NLDR methods are a class of algorithms that use mathematically defined manifolds for statistical sampling of multidimensional classes to generate a discrimination rule of guaranteed statistical accuracy, and they can generate a two- or three-dimensional map which represents the prominent structures of the data and provides an embedded image of meaningful low-dimensional structures hidden in their high-dimensional observations. In this manuscript, we develop NLDR methods on high dimensional MRI data sets of preclinical animals and clinical patients with stroke. On analyzing the performance of these methods, we observed a high degree of similarity between the multiparametric embedded images from NLDR methods and the ADC map and perfusion map. It was also observed that the embedded scattergram of abnormal (infarcted or at-risk) tissue can be visualized and provides a mechanism for automatic methods to delineate potential stroke volumes and early tissue at risk.
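
    As a rough, hedged illustration of the unsupervised NLDR step described above, the sketch below embeds a synthetic "multiparametric" voxel matrix into two dimensions with scikit-learn. The choice of Isomap, the feature stand-ins for T1WI/T2WI/DWI/PWI, and the random data are assumptions made here for demonstration; they are not the authors' actual pipeline or data.

      # Illustrative sketch only: embed a synthetic multiparametric voxel matrix
      # (columns standing in for T1WI, T2WI, DWI and PWI values) into 2-D with a
      # nonlinear manifold method. Isomap and the random data are assumptions.
      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.manifold import Isomap

      rng = np.random.default_rng(0)
      n_voxels = 500
      X = rng.normal(size=(n_voxels, 4))   # each row is one voxel, each column one MRI parameter
      X[:100, 2] -= 2.0                    # mimic a cluster of restricted-diffusion voxels

      X_scaled = StandardScaler().fit_transform(X)
      embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X_scaled)
      print(embedding.shape)               # (500, 2): a low-dimensional map of the voxels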

  17. First results for fluid dynamics, neutronics and fission product behavior in HTR applying the HTR code package (HCP) prototype

    Energy Technology Data Exchange (ETDEWEB)

    Allelein, H.-J., E-mail: h.j.allelein@fz-juelich.de [Forschungszentrum Jülich, 52425 Jülich (Germany); Institute for Reactor Safety and Reactor Technology, RWTH Aachen University, 52064 Aachen (Germany); Kasselmann, S.; Xhonneux, A.; Tantillo, F.; Trabadela, A.; Lambertz, D. [Forschungszentrum Jülich, 52425 Jülich (Germany)

    2016-09-15

    To simulate the different aspects of High Temperature Reactor (HTR) cores, a variety of specialized computer codes have been developed at Forschungszentrum Jülich (IEK-6) and Aachen University (LRST) in the last decades. In order to preserve knowledge, to overcome present limitations and to make these codes applicable to modern computer clusters, these individual programs are being integrated into a consistent code package. The so-called HTR code package (HCP) couples the related and recently applied physics models in a highly integrated manner and therefore allows phenomena to be simulated with higher precision in space and time, while at the same time applying state-of-the-art programming techniques and standards. This paper provides an overview of the status of the HCP and reports first benchmark results for an HCP prototype which couples the fluid dynamics and time-dependent neutronics code MGT-3D, the burn-up code TNT and the fission product release code STACY. The coupling of MGT-3D and TNT constitutes a first step towards a new reactor operation and accident simulation code, in which nuclide concentrations calculated by TNT lead to new cross sections, which are fed back into MGT-3D. Selected operation scenarios of the HTR-Module 200 concept plant and the HTTR were chosen to be simulated with the HCP prototype. The fission product release during normal operation conditions will be calculated with STACY based on a core status derived from SERPENT and MGT-3D. Comparisons will be shown against data generated by SERPENT and the legacy codes VSOP99/11, NAKURE and FRESCO-II.

  18. The Research and Application of the Multi-classification Algorithm of Error-Correcting Codes Based on Support Vector Machine

    Institute of Scientific and Technical Information of China (English)

    祖文超; 苑津莎; 王峰; 刘磊

    2012-01-01

    In order to enhance the accuracy of transformer fault diagnosis, a multi-class classification algorithm based on error-correcting codes combined with SVM is proposed. The mathematical model of transformer fault diagnosis is set up according to the theory of support vector machines. First, the error-correcting code matrix is used to construct several mutually independent support vector machines, so that the accuracy of the classification model can be enhanced. Finally, dissolved gas analysis (DGA) data from the transformer oil is used as the training and testing samples of the error-correcting-code SVM to realize transformer fault diagnosis, and the algorithm is additionally checked using UCI data. The multi-class classification algorithm has been verified with VS2008 combined with Libsvm, and the results show that the method has high classification accuracy.
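
    The error-correcting-code construction described above can be sketched with off-the-shelf tools: scikit-learn's OutputCodeClassifier assigns each class a binary code word, trains one SVM per code bit, and labels a sample with the class whose code word is nearest. The iris dataset and the code_size value below are placeholders; the paper itself uses dissolved-gas-analysis samples and Libsvm, which are not reproduced here.

      # Minimal sketch of error-correcting output codes (ECOC) over binary SVMs.
      from sklearn.datasets import load_iris
      from sklearn.model_selection import train_test_split
      from sklearn.multiclass import OutputCodeClassifier
      from sklearn.svm import SVC

      X, y = load_iris(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      # One SVC per code bit; prediction picks the class with the nearest code word.
      ecoc = OutputCodeClassifier(SVC(kernel="rbf"), code_size=4, random_state=0)
      ecoc.fit(X_train, y_train)
      print("test accuracy:", ecoc.score(X_test, y_test))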

  19. A Fortran 90 code for magnetohydrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D.W.

    1992-03-01

    This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k-processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM-5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.

  20. Comparison of AMOS computer code wakefield real part impedances with analytic results

    Energy Technology Data Exchange (ETDEWEB)

    Mayhall, D J; Nelson, S D

    2000-11-30

    We have performed eleven AMOS (Azimuthal Mode Simulator) [1] code runs with a simple, right circular cylindrical accelerating cavity inserted into a circular, cylindrical, lossless beam pipe to calculate the real part of the n = 1 (dipole) transverse wakefield impedance of this structure. We have compared this wakefield impedance in units of ohms/m (Ω/m) over the frequency range of 0-1 GHz to analytic predictions from Equation (2.3.8) of Briggs et al. [2]. The results from Equation (2.3.8) were converted from the CGS units of statohms to the MKS units of ohms (Ω) and then multiplied by (2πf)/c = ω/c = 2π/λ, where f is the frequency in Hz, c is the speed of light in vacuum in m/sec, ω is the angular frequency in radians/sec, and λ is the wavelength in m. The dipole transverse wakefield impedance written to file from AMOS must be multiplied by c/ω to convert it from units of Ω/m to units of Ω. The agreement between the AMOS runs and the analytic predictions is excellent for computational grids with square cells (dz = dr) and good for grids with rectangular cells (dz < dr). The quantity dz is the fixed-size axial grid spacing, and dr is the fixed-size radial grid spacing. We have also performed one AMOS run for the same geometry to calculate the real part of the n = 0 (monopole) longitudinal wakefield impedance of this structure. We have compared this wakefield impedance in units of Ω with analytic predictions from Equation (1.4.8) of Briggs et al. [2] converted to the MKS units of Ω. The agreement between the two results is excellent in this case. For the monopole longitudinal wakefield impedance written to file from AMOS, nothing must be done to convert the results to units of Ω. In each case, the computer calculations were carried out to 50 nsec of simulation time.
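
    The unit bookkeeping described above can be written out explicitly. The numbers below are hypothetical and only illustrate the conversions: analytic impedances are taken from statohms to ohms and multiplied by ω/c, while the AMOS dipole output in Ω/m is multiplied by c/ω; the statohm-to-ohm factor is the standard CGS-to-SI value, stated here as an assumption rather than quoted from the report.

      # Worked example of the unit conversions described in the abstract.
      import math

      C = 2.99792458e8                 # speed of light in vacuum, m/s
      STATOHM_TO_OHM = 8.987551787e11  # 1 statohm expressed in ohms (standard CGS-to-SI factor)

      def analytic_dipole_ohm_per_m(z_statohm: float, f_hz: float) -> float:
          """Convert an analytic impedance from statohm to ohm, then multiply by omega/c."""
          omega = 2.0 * math.pi * f_hz
          return z_statohm * STATOHM_TO_OHM * omega / C

      def amos_dipole_ohm(z_ohm_per_m: float, f_hz: float) -> float:
          """Convert the AMOS n = 1 output from ohm/m to ohm by multiplying by c/omega."""
          omega = 2.0 * math.pi * f_hz
          return z_ohm_per_m * C / omega

      # Hypothetical values at 500 MHz, purely to show the bookkeeping:
      print(analytic_dipole_ohm_per_m(1.0e-10, 5.0e8))
      print(amos_dipole_ohm(25.0, 5.0e8))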

  1. Time-Dependent Photoionization in a Dusty Medium I Code Description and General Results

    CERN Document Server

    Perna, R; Perna, Rosalba; Lazzati, Davide

    2002-01-01

    We present a time-dependent photoionization code that self-consistently combines metal evolution and dust destruction under an intense X-ray/UV radiation field. First, we extend the mathematical formulation of the time-dependent evolution of dust grains under an intense radiation flux with the inclusion of the process of ion field emission (IFE). We determine the relative importance of IFE with respect to X-ray and UV sublimation as a function of grain size, intensity and hardness of the incident spectrum. We then combine the processes of dust destruction with a photoionization code that follows the evolution of the ionization states of the metals and the relative radiative transitions. Our code treats, self-consistently, the gradual recycling of metals into gas as dust is sublimated away; it allows for any initial dust grain distribution and follows its evolution in space and time. In this first paper, we use our code to study the time-dependent behaviour of the X-ray and optical opacities in the nearby en...

  2. Preliminary In-vivo Results For Spatially Coded Synthetic Transmit Aperture Ultrasound Based On Frequency Division

    DEFF Research Database (Denmark)

    Gran, Fredrik; Hansen, Kristoffer Lindskov; Jensen, Jørgen Arendt;

    2006-01-01

    This paper investigates the possibility of using spatial coding based on frequency division for in-vivo synthetic transmit aperture (STA) ultrasound imaging. When using spatial encoding for STA, it is possible to use several transmitters simultaneously and separate the signals at the receiver. Th...

  3. Python for probability, statistics, and machine learning

    CERN Document Server

    Unpingco, José

    2016-01-01

    This book covers the key ideas that link probability, statistics, and machine learning illustrated using Python modules in these areas. The entire text, including all the figures and numerical results, is reproducible using the Python codes and their associated Jupyter/IPython notebooks, which are provided as supplementary downloads. The author develops key intuitions in machine learning by working meaningful examples using multiple analytical methods and Python codes, thereby connecting theoretical concepts to concrete implementations. Modern Python modules like Pandas, Sympy, and Scikit-learn are applied to simulate and visualize important machine learning concepts like the bias/variance trade-off, cross-validation, and regularization. Many abstract mathematical ideas, such as convergence in probability theory, are developed and illustrated with numerical examples. This book is suitable for anyone with an undergraduate-level exposure to probability, statistics, or machine learning and with rudimentary knowl...
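
    As a small taste of the kind of material the book covers, the hedged sketch below uses scikit-learn to show how cross-validation scores respond to regularization strength; the synthetic data and the Ridge model are chosen here for brevity and are not taken from the book itself.

      # Cross-validation and regularization on a noisy polynomial problem.
      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.uniform(-1, 1, size=(80, 1))
      y = 1.5 * X[:, 0] ** 3 + 0.1 * rng.normal(size=80)     # noisy cubic relationship

      # Expand to a degree-9 polynomial so a weakly regularized fit can overfit.
      X_poly = np.hstack([X ** d for d in range(1, 10)])

      for alpha in (1e-6, 0.1, 10.0):
          scores = cross_val_score(Ridge(alpha=alpha), X_poly, y, cv=5, scoring="r2")
          print(f"alpha={alpha:<6} mean CV R^2 = {scores.mean():.3f}")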

  4. NC-code post-processor for a 4-axis machining center with a FANUC system based on NX

    Institute of Scientific and Technical Information of China (English)

    谭大庆

    2013-01-01

    A customized NC-code post-processor, based on the universal template of the NX post-processor builder, is designed to meet the requirements of a 4-axis machining center equipped with a FANUC CNC system.

  5. Phonemic coding might be a result of sensory-motor coupling dynamics

    OpenAIRE

    2003-01-01

    In the Proceedings of the 7th International Conference on the Simulation of Adaptive Behavior, pp. 406-416, eds. B. Hallam, D. Floreano, J. Hallam, G. Hayes, J-A. Meyer, MIT Press. Human sound systems are invariably phonemically coded. Furthermore, phoneme inventories follow very particular tendencies. To explain these phenomena, three kinds of approaches have existed so far: "Chomskyan"/cognitive innatism, morpho-perceptual innatism and the more recent approach of "language as a complex cu...

  6. Design of an Object-Oriented Turbomachinery Analysis Code: Initial Results

    Science.gov (United States)

    Jones, Scott M.

    2015-01-01

    Performance prediction of turbomachines is a significant part of aircraft propulsion design. In the conceptual design stage, there is an important need to quantify compressor and turbine aerodynamic performance and develop initial geometry parameters at the 2-D level prior to more extensive Computational Fluid Dynamics (CFD) analyses. The Object-oriented Turbomachinery Analysis Code (OTAC) is being developed to perform 2-D meridional flowthrough analysis of turbomachines using an implicit formulation of the governing equations to solve for the conditions at the exit of each blade row. OTAC is designed to perform meanline or streamline calculations; for streamline analyses simple radial equilibrium is used as a governing equation to solve for spanwise property variations. While the goal for OTAC is to allow simulation of physical effects and architectural features unavailable in other existing codes, it must first prove capable of performing calculations for conventional turbomachines. OTAC is being developed using the interpreted language features available in the Numerical Propulsion System Simulation (NPSS) code described by Claus et al (1991). Using the NPSS framework came with several distinct advantages, including access to the pre-existing NPSS thermodynamic property packages and the NPSS Newton-Raphson solver. The remaining objects necessary for OTAC were written in the NPSS framework interpreted language. These new objects form the core of OTAC and are the BladeRow, BladeSegment, TransitionSection, Expander, Reducer, and OTACstart Elements. The BladeRow and BladeSegment consumed the initial bulk of the development effort and required determining the equations applicable to flow through turbomachinery blade rows given specific assumptions about the nature of that flow. Once these objects were completed, OTAC was tested and found to agree with existing solutions from other codes; these tests included various meanline and streamline comparisons of axial

  7. Chiefly Symmetric: Results on the Scalability of Probabilistic Model Checking for Operating-System Code

    Directory of Open Access Journals (Sweden)

    Marcus Völp

    2012-11-01

    Full Text Available Reliability in terms of functional properties from the safety-liveness spectrum is an indispensable requirement of low-level operating-system (OS) code. However, with ever more complex and thus less predictable hardware, quantitative and probabilistic guarantees become more and more important. Probabilistic model checking is one technique to automatically obtain these guarantees. First experiences with the automated quantitative analysis of low-level operating-system code confirm the expectation that the naive probabilistic model checking approach rapidly reaches its limits when increasing the number of processes. This paper reports on our work in progress to tackle the state explosion problem for low-level OS code caused by the exponential blow-up of the model size when the number of processes grows. We studied the symmetry reduction approach and carried out our experiments with a simple test-and-test-and-set lock case study as a representative example for a wide range of protocols with natural inter-process dependencies and long-run properties. We quickly see a state-space explosion for scenarios where inter-process dependencies are insignificant. However, once inter-process dependencies dominate the picture, models with a hundred or more processes can be constructed and analysed.
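
    For readers unfamiliar with the case study, the test-and-test-and-set protocol can be sketched as follows. This is only an illustration of the locking discipline, not the probabilistic model analysed in the paper; Python's Lock.acquire(blocking=False) stands in for the atomic test-and-set instruction, and the plain held flag is the cheap read that is spun on first.

      # Sketch of a test-and-test-and-set (TTAS) lock.
      import threading

      class TTASLock:
          def __init__(self) -> None:
              self._flag = threading.Lock()   # provides the atomic test-and-set
              self.held = False               # cheaply readable state for the spin loop

          def acquire(self) -> None:
              while True:
                  while self.held:            # "test": spin on a plain read
                      pass
                  if self._flag.acquire(blocking=False):  # "test-and-set": atomic attempt
                      self.held = True
                      return

          def release(self) -> None:
              self.held = False
              self._flag.release()

      counter = 0
      lock = TTASLock()

      def worker() -> None:
          global counter
          for _ in range(1000):
              lock.acquire()
              counter += 1
              lock.release()

      threads = [threading.Thread(target=worker) for _ in range(4)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()
      print(counter)   # 4000: the lock serialises the increments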

  8. THE RESULTS OF THE STUDY OF BOILING OF THE OZONE-SAFE REFRIGERANT R410A IN THE EVAPORATORS OF REFRIGERATING MACHINES

    Directory of Open Access Journals (Sweden)

    V. G. Bukin

    2012-01-01

    Full Text Available The results of experimental research on boiling heat transfer of the ozone-friendly refrigerant R410A in the evaporators of refrigerating machines are presented, together with an assessment of the possibility of its use in place of the prohibited refrigerant R22.

  9. Offshore Code Comparison Collaboration within IEA Wind Annex XXIII: Phase III Results Regarding Tripod Support Structure Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Nichols, J.; Camp, T.; Jonkman, J.; Butterfield, S.; Larsen, T.; Hansen, A.; Azcona, J.; Martinez, A.; Munduate, X.; Vorpahl, F.; Kleinhansl, S.; Kohlmeier, M.; Kossel, T.; Boker, C.; Kaufer, D.

    2009-01-01

    Offshore wind turbines are designed and analyzed using comprehensive simulation codes. This paper describes the findings of code-to-code verification activities of the IEA Offshore Code Comparison Collaboration.

  10. Increasing asthma mortality in Denmark 1969-88 not a result of a changed coding practice

    DEFF Research Database (Denmark)

    Juel, K; Pedersen, P A

    1992-01-01

    We have studied asthma mortality in Denmark from 1969 to 1988. Age-standardized mortality rates calculated in three age groups, 10-34, 35-59, and greater than or equal to 60 years, disclosed similar trends. Increasing mortality from asthma from the mid-1970s to 1988 was seen in all three age groups...... with higher mortality in 1979-88 as compared with 1969-78 of 95%, 55%, and 69%, respectively. Since the eighth revision of the International Classification of Diseases (ICD8) was used in Denmark over the entire 20-year period, changes in coding practice due to change of classification system cannot explain...

  11. Transduplication resulted in the incorporation of two protein-coding sequences into the Turmoil-1 transposable element of C. elegans

    Directory of Open Access Journals (Sweden)

    Pupko Tal

    2008-10-01

    Full Text Available Transposable elements may acquire unrelated gene fragments into their sequences in a process called transduplication. Transduplication of protein-coding genes is common in plants, but is unknown in animals. Here, we report that the Turmoil-1 transposable element in C. elegans has incorporated two protein-coding sequences into its inverted terminal repeat (ITR) sequences. The ITRs of Turmoil-1 contain a conserved RNA recognition motif (RRM) that originated from the rsp-2 gene and a fragment from the protein-coding region of the cpg-3 gene. We further report that an open reading frame specific to C. elegans may have been created as a result of a Turmoil-1 insertion. Mutations at the 5' splice site of this open reading frame may have reactivated the transduplicated RRM motif. Reviewers: This article was reviewed by Dan Graur and William Martin. For the full reviews, please go to the Reviewers' Reports section.

  12. RSAP - A Code for Display of Neutron Cross Section Data and SAMMY Fit Results

    Energy Technology Data Exchange (ETDEWEB)

    Sayer, R.O.

    2001-02-02

    RSAP is a computer code for display of neutron cross section data and selected SAMMY output. SAMMY is a multilevel R-matrix code for fitting neutron time-of-flight cross-section data using Bayes' method. RSAP, which runs on the Digital Unix Alpha platform, reads ORELA Data Files (ODF) created by SAMMY and uses graphics routines from the PLPLOT package. In addition, RSAP can read data and/or computed values from ASCII files with a format specified by the user. Plot output may be displayed in an X window, sent to a postscript file (rsap.ps), or sent to a color postscript file (rsap.psc). Thirteen plot types are supported, allowing the user to display cross section data, transmission data, errors, theory, Bayes fits, and residuals in various combinations. In this document the designations theory and Bayes refer to the initial and final theoretical cross sections, respectively, as evaluated by SAMMY. Special plot types include Bayes/Data, Theory--Data, and Bayes--Data. Output from two SAMMY runs may be compared by plotting the ratios Theory2/Theory1 and Bayes2/Bayes1 or by plotting the differences (Theory2-Theory1) and (Bayes2-Bayes1).

  13. THE RESULTS OF MACHINE-TRACTOR UNIT’S TESTS ON THE BASIS OF A 1,4 TRACTOR HAVING A ROTARY MASS OF THE ENGINE

    Directory of Open Access Journals (Sweden)

    Kravchenko V. A.

    2014-05-01

    Full Text Available The article contains the results of research on a machine-tractor unit based on a class 1,4 tractor having a variable rotary mass of the engine. It is concluded that the use of an additional rotary mass, linked at the corresponding regimes of motion to the engine shaft or the primary transmission shaft of the tractor, furthers the improvement of the operating indexes of a machine-tractor unit.

  14. Machine vision process monitoring on a poultry processing kill line: results from an implementation

    Science.gov (United States)

    Usher, Colin; Britton, Dougl; Daley, Wayne; Stewart, John

    2005-11-01

    width information of the entire chicken and different parts such as the breast, the legs, the wings, and the neck. The system also records average color and mis-hung birds, which can cause problems in further processing. Other relevant production information is also recorded, including truck arrival and offloading times, catching crew and flock serviceman data, the grower, the breed of chicken, and the number of dead-on-arrival (DOA) birds per truck. Several interesting observations from the Georgia Tech vision system, which has been installed in a poultry processing plant for several years, are presented. Trend analysis has been performed on the performance of the catching crews and flock servicemen, and the results of the processed chicken as they relate to the bird dimensions and equipment settings in the plant. The results have allowed researchers and plant personnel to identify potential areas for improvement in the processing operation, which should result in improved efficiency and yield.

  15. Joint source channel coding using arithmetic codes

    CERN Document Server

    Bi, Dongsheng

    2009-01-01

    Based on the encoding process, arithmetic codes can be viewed as tree codes and current proposals for decoding arithmetic codes with forbidden symbols belong to sequential decoding algorithms and their variants. In this monograph, we propose a new way of looking at arithmetic codes with forbidden symbols. If a limit is imposed on the maximum value of a key parameter in the encoder, this modified arithmetic encoder can also be modeled as a finite state machine and the code generated can be treated as a variable-length trellis code. The number of states used can be reduced and techniques used fo
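
    The forbidden-symbol idea can be sketched in a few lines: a slice of probability mass is reserved for a symbol that is never transmitted, so a decoder that ever lands in that slice knows a channel error has occurred. The floating-point encoder below only illustrates that interval arithmetic, with made-up probabilities; it is not the finite-state-machine or trellis construction proposed in the monograph, and practical coders use integer arithmetic with renormalisation.

      # Minimal arithmetic encoder with a reserved "forbidden" probability slice.
      def arithmetic_encode(message, probs, eps=0.05):
          # Rescale the true probabilities so that eps is left unused at the top
          # of every interval; that unused slice is the forbidden symbol.
          scaled = {s: p * (1.0 - eps) for s, p in probs.items()}
          low, high = 0.0, 1.0
          for sym in message:
              span = high - low
              cum = 0.0
              for s in sorted(scaled):                # fixed symbol order
                  if s == sym:
                      high = low + span * (cum + scaled[s])
                      low = low + span * cum
                      break
                  cum += scaled[s]
          return (low + high) / 2.0                   # any value in [low, high) identifies the message

      print(arithmetic_encode("abba", {"a": 0.6, "b": 0.4}))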

  16. Results on the Fundamental Gain of Memory-Assisted Universal Source Coding

    CERN Document Server

    Beirami, Ahmad; Fekri, Faramarz

    2012-01-01

    Many applications require data processing to be performed on individual pieces of data which are of finite sizes, e.g., files in cloud storage units and packets in data networks. However, traditional universal compression solutions would not perform well over the finite-length sequences. Recently, we proposed a framework called memory-assisted universal compression that holds significant promise for reducing the amount of redundant data from the finite-length sequences. The proposed compression scheme is based on the observation that it is possible to learn source statistics (by memorizing previous sequences from the source) at some intermediate entities and then leverage the memorized context to reduce redundancy of the universal compression of finite-length sequences. We first present the fundamental gain of the proposed memory-assisted universal source coding over conventional universal compression (without memorization) for a single parametric source. Then, we extend and investigate the benefits of the ...
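
    A loose everyday analogy for the memorization gain, assuming nothing about the authors' actual algorithm, is a compressor primed with previously seen traffic from the same source: zlib's preset-dictionary mechanism plays the role of the memorized context, and a short packet compresses better with it than without it.

      # Rough illustration: compress one short packet with and without "memory".
      import zlib

      memory = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\nAccept: text/html\r\n" * 4
      packet = b"GET /images/logo.png HTTP/1.1\r\nHost: example.com\r\nAccept: image/png\r\n"

      plain = zlib.compress(packet, 9)

      primed = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS,
                                zlib.DEF_MEM_LEVEL, zlib.Z_DEFAULT_STRATEGY, memory)
      with_memory = primed.compress(packet) + primed.flush()

      # The dictionary-primed output is typically the smallest of the three lengths.
      print(len(packet), len(plain), len(with_memory))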

  17. Comparison of a laboratory spectrum of Eu-152 with results of simulation using the MCNP code

    Energy Technology Data Exchange (ETDEWEB)

    Rodenas, J. [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain); Gallardo, S. [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain)], E-mail: sergalbe@iqn.upv.es; Ortiz, J. [Laboratorio de Radiactividad Ambiental, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain)

    2007-09-21

    Detectors used for gamma spectrometry must be calibrated for each geometry considered in environmental radioactivity laboratories. This calibration is performed using a standard solution containing gamma emitter sources. Nevertheless, the efficiency curves obtained are periodically checked using a source such as ¹⁵²Eu, which emits many gamma rays covering a wide energy range (20-1500 keV). ¹⁵²Eu presents a problem because it has a lot of peaks affected by True Coincidence Summing (TCS). Two experimental measurements have been performed, placing the source (a Marinelli beaker) at 0 and 10 cm from the detector. Both spectra are simulated by the MCNP 4C code, where the TCS is not reproduced. Therefore, the comparison between experimental and simulated peak net areas permits one to choose the most convenient peaks to check the efficiency curves of the detector.

  18. Apar-T: code, validation, and physical interpretation of particle-in-cell results

    CERN Document Server

    Melzani, Mickaël; Walder, Rolf; Folini, Doris; Favre, Jean M; Krastanov, Stefan; Messmer, Peter

    2013-01-01

    We present the parallel particle-in-cell (PIC) code Apar-T and, more importantly, address the fundamental question of the relations between the PIC model, the Vlasov-Maxwell theory, and real plasmas. First, we present four validation tests: spectra from simulations of thermal plasmas, linear growth rates of the relativistic tearing instability and of the filamentation instability, and non-linear filamentation merging phase. For the filamentation instability we show that the effective growth rates measured on the total energy can differ by more than 50% from the linear cold predictions and from the fastest modes of the simulation. Second, we detail a new method for initial loading of Maxwell-Jüttner particle distributions with relativistic bulk velocity and relativistic temperature, and explain why the traditional method with individual particle boosting fails. Third, we scrutinize the question of what description of physical plasmas is obtained by PIC models. These models rely on two building blocks: coarse...

  19. Compiling scheme using abstract state machines

    OpenAIRE

    2003-01-01

    The project investigates the use of Abstract State Machines in the process of computer program compilation. Compilation is to produce machine code from a source program written in a high-level language. A compiler is a program written for that purpose. Machine code is the computer-readable representation of sequences of computer instructions. An Abstract State Machine (ASM) is a notional computing machine, developed by Yuri Gurevich, for accurately and easily representing the semantics of...

  20. Text and position coding of human-machine display interface

    Institute of Scientific and Technical Information of China (English)

    张磊; 庄达民

    2011-01-01

    In the process of operating an aircraft, pilots need to use large amounts of information, so reasonable coding of information can improve flight safety. According to the research requirements, a task model was developed for the ergonomics experiment. After the subjects completed the tasks, their correct rate and reaction time were measured. Combining the measured results with eye movement data, the impact of text and position coding on information identification was analyzed to provide a scientific basis for the ergonomic design of information interfaces. The experimental results show that the subjects' identification of text information is affected by position coding, and that position coding relates to the field of vision and the attention allocation strategy. Identification efficiency at the center is better than at the periphery, and the left position is better than the right position. Identification efficiency for Chinese information is better than for English; the impact of mother tongue should be considered in practical applications.

  1. Accurate discrimination of conserved coding and non-coding regions through multiple indicators of evolutionary dynamics

    Directory of Open Access Journals (Sweden)

    Pesole Graziano

    2009-09-01

    Full Text Available Background: The conservation of sequences between related genomes has long been recognised as an indication of functional significance, and recognition of sequence homology is one of the principal approaches used in the annotation of newly sequenced genomes. In the context of recent findings that the number of non-coding transcripts in higher organisms is likely to be much higher than previously imagined, discrimination between conserved coding and non-coding sequences is a topic of considerable interest. Additionally, it should be considered desirable to discriminate between coding and non-coding conserved sequences without recourse to the use of sequence similarity searches of protein databases, as such approaches exclude the identification of novel conserved proteins without characterized homologs and may be influenced by the presence in databases of sequences which are erroneously annotated as coding. Results: Here we present a machine learning-based approach for the discrimination of conserved coding sequences. Our method calculates various statistics related to the evolutionary dynamics of two aligned sequences. These features are considered by a Support Vector Machine which designates the alignment as coding or non-coding with an associated probability score. Conclusion: We show that our approach is both sensitive and accurate with respect to comparable methods and illustrate several situations in which it may be applied, including the identification of conserved coding regions in genome sequences and the discrimination of coding from non-coding cDNA sequences.

  2. Autocoding State Machine in Erlang

    DEFF Research Database (Denmark)

    Guo, Yu; Hoffman, Torben; Gunder, Nicholas

    2008-01-01

    This paper presents an autocoding tool suite which supports development of state machines in a model-driven fashion, where models are central to all phases of the development process. The tool suite, which is built on the Eclipse platform, provides facilities for the graphical specification...... of a state machine model. Once the state machine is specified, it is used as input to a code generation engine that generates source code in Erlang....
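
    To give a feel for what such generated state-machine code looks like, the sketch below shows a table-driven dispatcher in Python. The real tool emits Erlang from UML State Chart models; the states and events here are invented purely to illustrate the generated structure, not the tool's actual output.

      # Hedged sketch of a table-driven state machine of the kind a generator emits.
      TRANSITIONS = {
          ("idle",    "start"): "running",
          ("running", "pause"): "paused",
          ("paused",  "start"): "running",
          ("running", "stop"):  "idle",
          ("paused",  "stop"):  "idle",
      }

      def step(state: str, event: str) -> str:
          """Return the next state, or stay in the current state if the event is not handled."""
          return TRANSITIONS.get((state, event), state)

      state = "idle"
      for event in ["start", "pause", "start", "stop"]:
          state = step(state, event)
          print(event, "->", state)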

  3. Towards Preserving Model Coverage and Structural Code Coverage

    Directory of Open Access Journals (Sweden)

    Raimund Kirner

    2009-01-01

    Full Text Available Embedded systems are often used in safety-critical environments. Thus, thorough testing of them is mandatory. To achieve a required structural code-coverage criterion it is beneficial to derive the test data at a higher program-representation level than machine code. Higher program-representation levels include, beside the source-code level, languages of domain-specific modeling environments with automatic code generation. For a testing framework with automatic generation of test data this will enable high retargetability of the framework. In this article we address the challenge of ensuring that the structural code coverage achieved at a higher program-representation level is preserved during the code generation and code transformations down to machine code. We define the formal properties that have to be fulfilled by a code transformation to guarantee preservation of structural code coverage. Based on these properties we discuss how to preserve code coverage achieved at source-code level. Additionally, we discuss how structural code coverage at model level could be preserved. The results presented in this article are aimed toward the integration of support for preserving structural code coverage into compilers and code generators.

  4. Numerical Prediction of the Performance of Integrated Planar Solid-Oxide Fuel Cells, with Comparisons of Results from Several Codes

    Energy Technology Data Exchange (ETDEWEB)

    G. L. Hawkes; J. E. O' Brien; B. A. Haberman; A. J. Marquis; C. M. Baca; D. Tripepi; P. Costamagna

    2008-06-01

    A numerical study of the thermal and electrochemical performance of a single-tube Integrated Planar Solid Oxide Fuel Cell (IP-SOFC) has been performed. Results obtained from two finite-volume computational fluid dynamics (CFD) codes, FLUENT and SOHAB, and from a two-dimensional in-house developed finite-volume GENOA model are presented and compared. Each tool uses physical and geometric models of differing complexity, and comparisons are made to assess their relative merits. Several single-tube simulations were run using each code over a range of operating conditions. The results include polarization curves and distributions of local current density, composition and temperature. Comparisons of these results are discussed, along with their relationship to the respective embedded phenomenological models for activation losses, fluid flow and mass transport in porous media. In general, agreement between the codes was within 15% for overall parameters such as operating voltage and maximum temperature. The CFD results clearly show the effects of internal structure on the distributions of gas flows and related quantities within the electrochemical cells.

  5. Analysis of results of AZTRAN and AZKIND codes for a BWR; Analisis de resultados de los codigos AZTRAN y AZKIND para un BWR

    Energy Technology Data Exchange (ETDEWEB)

    Bastida O, G. E.; Vallejo Q, J. A.; Galicia A, J.; Francois L, J. L. [UNAM, Facultad de Ingenieria, Departamento de Sistemas Energeticos, Paseo Cuauhnahuac 8532, 62550 Jiutepec, Morelos (Mexico); Xolocostli M, J. V.; Rodriguez H, A.; Gomez T, A. M., E-mail: gbo729@yahoo.com.mx [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2016-09-15

    This paper presents an analysis of results obtained from simulations performed with the neutron transport code AZTRAN and the neutron diffusion kinetics code AZKIND, based on comparisons with models corresponding to a typical BWR, in order to verify the behavior and reliability of the values obtained with these codes at their current stage of development. For this purpose, simulations of different geometries were made using validated nuclear codes such as CASMO, MCNP5 and Serpent. The results obtained are considered adequate, since they are comparable with those obtained and reported with other codes, based mainly on the neutron multiplication factor and the power distribution. (Author)

  6. FURTHER IMPROVEMENT OF THE INVESTMENT CLIMATE IN RUSSIA AS A RESULT OF MODERNIZATION OF THE RUSSIAN CIVIL CODE

    Directory of Open Access Journals (Sweden)

    V. Musin

    2014-01-01

    Full Text Available This article traces the history of, and discusses some of the recent changes in, the Russian Federation Civil Code which result in a more favorable business climate in Russia. In particular, it discusses the development of changes related to the documentation of contracts, expansion in the durations and uses of powers of attorney, and the modernization of the statute of limitations period for bringing an action.

  7. Inrush Current Simulation of Power Transformer using Machine Parameters Estimated by Design Procedure of Winding Structure and Genetic Algorithm

    Science.gov (United States)

    Tokunaga, Yoshitaka

    This paper presents techniques for estimating the machine parameters of power transformers using the transformer winding-design procedure and a real-coded genetic algorithm. It is particularly difficult to obtain machine parameters for transformers installed in customers' facilities; using the estimation techniques, machine parameters can be calculated from only the nameplate data of these transformers. Subsequently, an EMTP-ATP simulation of the inrush current was carried out using the machine parameters estimated by the techniques developed in this study, and the simulation results reproduced the measured waveforms.
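
    The real-coded genetic algorithm mentioned above can be sketched generically: candidates are vectors of real numbers, recombined by arithmetic (blend) crossover and perturbed by Gaussian mutation. The toy quadratic fitness and the parameter values below are invented for illustration; they are not the transformer model or the nameplate data used in the paper.

      # Minimal real-coded genetic algorithm on a toy parameter-estimation problem.
      import numpy as np

      rng = np.random.default_rng(0)
      TARGET = np.array([0.03, 0.12, 4.5])          # hypothetical "true" parameters

      def fitness(candidate: np.ndarray) -> float:
          # Lower is better: plain squared error against the reference parameters.
          return float(np.sum((candidate - TARGET) ** 2))

      pop = rng.uniform(0.0, 5.0, size=(40, 3))     # 40 real-coded individuals
      for generation in range(200):
          scores = np.array([fitness(ind) for ind in pop])
          parents = pop[np.argsort(scores)[:20]]    # truncation selection: keep the best half
          children = []
          for _ in range(20):
              a, b = parents[rng.integers(20)], parents[rng.integers(20)]
              w = rng.uniform(size=3)
              child = w * a + (1.0 - w) * b         # blend (arithmetic) crossover
              child += rng.normal(scale=0.05, size=3)   # Gaussian mutation
              children.append(child)
          pop = np.vstack([parents, np.array(children)])

      best = pop[np.argmin([fitness(ind) for ind in pop])]
      print("estimated parameters:", np.round(best, 3))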

  8. A Radiation-Hydrodynamics Code Comparison for Laser-Produced Plasmas: FLASH versus HYDRA and the Results of Validation Experiments

    CERN Document Server

    Orban, Chris; Chawla, Sugreev; Wilks, Scott C; Lamb, Donald Q

    2013-01-01

    The potential for laser-produced plasmas to yield fundamental insights into high energy density physics (HEDP) and deliver other useful applications can sometimes be frustrated by uncertainties in modeling the properties and expansion of these plasmas using radiation-hydrodynamics codes. In an effort to overcome this and to corroborate the accuracy of the HEDP capabilities recently added to the publicly available FLASH radiation-hydrodynamics code, we present detailed comparisons of FLASH results to new and previously published results from the HYDRA code used extensively at Lawrence Livermore National Laboratory. We focus on two very different problems of interest: (1) an Aluminum slab irradiated by 15.3 and 76.7 mJ of "pre-pulse" laser energy and (2) a mm-long triangular groove cut in an Aluminum target irradiated by a rectangular laser beam. Because this latter problem bears a resemblance to astrophysical jets, Grava et al., Phys. Rev. E, 78, (2008) performed this experiment and compared detailed x-ray int...

  9. Validation and Verification of MCNP6 Against Intermediate and High-Energy Experimental Data and Results by Other Codes

    CERN Document Server

    Mashnik, Stepan G

    2010-01-01

    MCNP6, the latest and most advanced LANL transport code representing a recent merger of MCNP5 and MCNPX, has been Validated and Verified (V&V) against a variety of intermediate and high-energy experimental data and against results by different versions of MCNPX and other codes. In the present work, we V&V MCNP6 using mainly the latest modifications of the Cascade-Exciton Model (CEM) and of the Los Alamos version of the Quark-Gluon String Model (LAQGSM) event generators CEM03.02 and LAQGSM03.03. We found that MCNP6 describes reasonably well various reactions induced by particles and nuclei at incident energies from 18 MeV to about 1 TeV per nucleon measured on thin and thick targets and agrees very well with similar results obtained with MCNPX and calculations by CEM03.02, LAQGSM03.01 (03.03), INCL4 + ABLA, and Bertini INC + Dresner evaporation, EPAX, ABRABLA, HIPSE, and AMD, used as stand alone codes. Most of several computational bugs and more serious physics problems observed in MCNP6/X during our V...

  10. Position Paper: Applying Machine Learning to Software Analysis to Achieve Trusted, Repeatable Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Prowell, Stacy J [ORNL; Symons, Christopher T [ORNL

    2015-01-01

    Producing trusted results from high-performance codes is essential for policy and has significant economic impact. We propose combining rigorous analytical methods with machine learning techniques to achieve the goal of repeatable, trustworthy scientific computing.

  11. Machine Perfusion of Porcine Livers with Oxygen-Carrying Solution Results in Reprogramming of Dynamic Inflammation Networks

    Directory of Open Access Journals (Sweden)

    David Sadowsky

    2016-11-01

    Full Text Available Background: Ex vivo machine perfusion (MP) can better preserve organs for transplantation. We have recently reported on the first application of a MP protocol in which liver allografts were fully oxygenated, under dual pressures and subnormothermic conditions, with a new hemoglobin-based oxygen carrier solution specifically developed for ex vivo utilization. In those studies, MP improved organ function post-operatively and reduced inflammation in porcine livers. Herein, we sought to refine our knowledge regarding the impact of MP by defining dynamic networks of inflammation in both tissue and perfusate. Methods: Porcine liver allografts were preserved either with MP (n = 6) or with cold static preservation (CSP; n = 6), then transplanted orthotopically after 9 h of preservation. Fourteen inflammatory mediators were measured in both tissue and perfusate during liver preservation at multiple time points, and analyzed using Dynamic Bayesian Network (DyBN) inference to define feedback interactions, as well as Dynamic Network Analysis (DyNA) to define the time-dependent development of inflammation networks. Results: Network analyses of tissue and perfusate suggested an NLRP3 inflammasome-regulated response in both treatment groups, driven by the pro-inflammatory cytokine interleukin (IL)-18 and the anti-inflammatory mediator IL-1 receptor antagonist (IL-1RA). Both DyBN and DyNA suggested a reduced role of IL-18 and increased role of IL-1RA with MP, along with increased liver damage with CSP. DyNA also suggested divergent progression of responses over the 9 h preservation time, with CSP leading to a stable pattern of IL-18-induced liver damage and MP leading to a resolution of the pro-inflammatory response. These results were consistent with prior clinical, biochemical, and histological findings after liver transplantation. Conclusion: Our results suggest that analysis of dynamic inflammation networks in the setting of liver preservation may identify novel

  12. Description of premixing with the MC3D code including molten jet behavior modeling. Comparison with FARO experimental results

    Energy Technology Data Exchange (ETDEWEB)

    Berthoud, G.; Crecy, F. de; Meignen, R.; Valette, M. [CEA-G, DRN/DTP/SMTH, 17 rue des Martyrs, 38054 Grenoble Cedex 9 (France)

    1998-01-01

    The premixing phase of a molten fuel-coolant interaction is studied by means of mechanistic multidimensional calculations. Besides water and steam, corium droplet flow and continuous corium jet flow are calculated independently. The 4-field MC3D code and a detailed hot jet fragmentation model are presented. MC3D calculations are compared with the FARO L14 experiment results and are found to give satisfactory results; the heat transfer and jet fragmentation models still need to be improved to better predict the final debris size values. (author)

  13. When Machines Design Machines!

    DEFF Research Database (Denmark)

    2011-01-01

    Until recently we were the sole designers, alone in the driving seat making all the decisions. But, we have created a world of complexity way beyond human ability to understand, control, and govern. Machines now do more trades than humans on stock markets, they control our power, water, gas...... and food supplies, manage our elevators, microclimates, automobiles and transport systems, and manufacture almost everything. It should come as no surprise that machines are now designing machines. The chips that power our computers and mobile phones, the robots and commercial processing plants on which we...... depend, all are now largely designed by machines. So what of us - will we be totally usurped, or are we looking at a new symbiosis with human and artificial intelligences combined to realise the best outcomes possible. In most respects we have no choice! Human abilities alone cannot solve any of the major...

  14. Machine tool structures

    CERN Document Server

    Koenigsberger, F

    1970-01-01

    Machine Tool Structures, Volume 1 deals with fundamental theories and calculation methods for machine tool structures. Experimental investigations into stiffness are discussed, along with the application of the results to the design of machine tool structures. Topics covered range from static and dynamic stiffness to chatter in metal cutting, stability in machine tools, and deformations of machine tool structures. This volume is divided into three sections and opens with a discussion on stiffness specifications and the effect of stiffness on the behavior of the machine under forced vibration c

  15. A Fortran 90 code for magnetohydrodynamics. Part 1, Banded convolution

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D.W.

    1992-03-01

    This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.

  16. Computational results with a branch and cut code for the capacitated vehicle routing problem

    Energy Technology Data Exchange (ETDEWEB)

    Augerat, P.; Naddef, D. [Institut National Polytechnique, 38 - Grenoble (France); Belenguer, J.M.; Benavent, E.; Corberan, A. [Valencia Univ. (Spain); Rinaldi, G. [Consiglio Nazionale delle Ricerche, Rome (Italy)

    1995-09-01

    The Capacitated Vehicle Routing Problem (CVRP) we consider in this paper consists in the optimization of the distribution of goods from a single depot to a given set of customers with known demand, using a given number of vehicles of fixed capacity. There are many practical routing applications in the public sector, such as school bus routing, pick-up and mail delivery, and in the private sector, such as the dispatching of delivery trucks. We present a Branch and Cut algorithm to solve the CVRP which is based on a partial polyhedral description of the corresponding polytope. The valid inequalities used in our method can be found in Cornuejols and Harche (1993), Harche and Rinaldi (1991) and in Augerat and Pochet (1995). We concentrated mainly on the design of separation procedures for several classes of valid inequalities. The capacity constraints (generalized subtour elimination inequalities) happen to play a crucial role in the development of a cutting plane algorithm for the CVRP. A large number of separation heuristics have been implemented and compared for these inequalities. Heuristic separation algorithms have also been implemented for other classes of valid inequalities that also lead to significant improvements: comb and extended comb inequalities, generalized capacity inequalities and hypo-tour inequalities. The resulting cutting plane algorithm has been applied to a set of instances taken from the literature, and the lower bounds obtained are better than the ones previously known. Some branching strategies have been implemented to develop a Branch and Cut algorithm that has been able to solve large CVRP instances, some of which had never been solved before. (authors). 32 refs., 3 figs., 10 tabs.
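    Because the record highlights the rounded-capacity (generalized subtour elimination) inequalities, the following small Python sketch shows what checking one such cut against a fractional LP solution could look like; the edge-value dictionary and argument names are our assumptions, not the authors' code.

        import math

        def capacity_cut_slack(S, vertices, demand, Q, x):
            """Rounded-capacity inequality for a customer subset S:
                x(delta(S)) >= 2 * ceil(d(S) / Q)
            `x` maps an edge (u, w) with u < w to its value in the LP relaxation.
            A negative return value means the inequality is violated, so the cut
            can be added to the LP."""
            d_S = sum(demand[v] for v in S)
            rhs = 2 * math.ceil(d_S / Q)
            lhs = sum(x.get((min(u, w), max(u, w)), 0.0)
                      for u in S for w in vertices if w not in S)
            return lhs - rhs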

  17. Two or three machined vs roughened surface dental implants loaded immediately supporting total fixed prostheses: 1-year results from a randomised controlled trial.

    Science.gov (United States)

    Cannizzaro, Gioacchino; Gastaldi, Giorgio; Gherlone, Enrico; Vinci, Raffaele; Loi, Ignazio; Trullenque-Eriksson, Anna; Esposito, Marco

    2017-01-01

    both groups losing marginal bone in a statistically significant way (0.35 ± 0.23 mm for machined and 0.42 ± 0.27 mm for roughened surface). These preliminary results suggest that immediately loaded cross-arch prostheses can be supported by only two mandibular or three maxillary dental implants at least up to 1 year post-loading, independently of the type of implant surface used. Longer follow-ups are needed to understand whether one of the two implant surfaces is preferable.

  18. PyNeb: a new tool for analyzing emission lines. I. Code description and validation of results

    Science.gov (United States)

    Luridiana, V.; Morisset, C.; Shaw, R. A.

    2015-01-01

    Analysis of emission lines in gaseous nebulae yields direct measures of physical conditions and chemical abundances and is the cornerstone of nebular astrophysics. Although the physical problem is conceptually simple, its practical complexity can be overwhelming since the amount of data to be analyzed steadily increases; furthermore, results depend crucially on the input atomic data, whose determination also improves each year. To address these challenges we created PyNeb, an innovative code for analyzing emission lines. PyNeb computes physical conditions and ionic and elemental abundances and produces both theoretical and observational diagnostic plots. It is designed to be portable, modular, and largely customizable in aspects such as the atomic data used, the format of the observational data to be analyzed, and the graphical output. It gives full access to the intermediate quantities of the calculation, making it possible to write scripts tailored to the specific type of analysis one wants to carry out. In the case of collisionally excited lines, PyNeb works by solving the equilibrium equations for an n-level atom; in the case of recombination lines, it works by interpolation in emissivity tables. The code offers a choice of extinction laws and ionization correction factors, which can be complemented by user-provided recipes. It is entirely written in the python programming language and uses standard python libraries. It is fully vectorized, making it apt for analyzing huge amounts of data. The code is stable and has been benchmarked against IRAF/NEBULAR. It is public, fully documented, and has already been satisfactorily used in a number of published papers.
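    For the collisionally excited lines, the abstract states that PyNeb solves the statistical equilibrium equations of an n-level atom. The sketch below illustrates that calculation generically with NumPy; it does not use the PyNeb API, and the rate matrices would have to come from real atomic data.

        import numpy as np

        def level_populations(n_e, A, q):
            """Steady-state fractional populations of an n-level atom.
            A[i, j]: radiative decay rate i -> j in s^-1 (non-zero only for i > j)
            q[i, j]: electron collisional rate coefficient i -> j in cm^3 s^-1
            Diagonals of A and q are assumed to be zero."""
            n = A.shape[0]
            R = n_e * q + A                            # total transition rate i -> j
            M = R.T.copy()                             # inflow into level i from level j
            M[np.diag_indices(n)] = -R.sum(axis=1)     # total outflow from each level
            M[-1, :] = 1.0                             # replace one equation by normalization
            b = np.zeros(n)
            b[-1] = 1.0
            return np.linalg.solve(M, b)               # populations summing to 1

    A line emissivity then follows from the upper-level population multiplied by the transition probability and the photon energy.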

  19. 380 kW synchronous machine with HTS rotor windings--development at Siemens and first test results

    Science.gov (United States)

    Nick, W.; Nerowski, G.; Neumüller, H.-W.; Frank, M.; van Hasselt, P.; Frauenhofer, J.; Steinmeyer, F.

    2002-08-01

    Applying HTS conductors in the rotor of synchronous machines allows the design of future motors or generators that are lighter, more compact and feature an improved coefficient of performance. To address these goals a project collaboration was established within Siemens, including Automation & Drives, Large Drives as a leading supplier of electrical machines, Corporate Technology as a competence center for superconducting technology, and other partners. The main task of the project was to demonstrate the feasibility of the basic concepts. The rotor was built from racetrack coils of Bi-2223 HTS tape conductor; these were assembled on a core and fixed by a bandage of glass-fibre composite. Rotor coil cooling is performed by thermal conduction, and one end of the motor shaft is hollow to give access for the cooling system. Two cooling systems were designed and operated successfully: firstly an open circuit using cold gaseous helium from a storage vessel, and also a closed circuit system based on a cryogenerator. To take advantage of the increased rotor induction levels, the stator winding was designed as an air gap winding. This was manufactured and fitted in a standard motor housing. After assembly of the whole system in a test facility with a DC machine load, experiments have been started to prove the validity of our design, including operation with both cooling systems and driving the stator from the grid as well as by a power inverter.

  20. Analysis of quality control testing and results of the anesthesia machine

    Institute of Scientific and Technical Information of China (English)

    黄韬; 崔骊; 李向东; 云庆辉

    2013-01-01

    Objective: Through the results of four years of quality control testing of anesthesia machines, to analyze the main causes of anesthesia machine failures and to take corresponding measures to ensure their safe use. Methods: Hospital anesthesia machines were tested periodically with a VT-PLUS-HF gas flow analyzer (Fluke, USA), and the test results were statistically analyzed to assess the factors causing machines to fail the tests. Results: Through periodic testing of the hospital's anesthesia machines and analysis of the test results, corresponding measures were taken that eliminated the safety hazards of several anesthesia machines and safeguarded their safe clinical use. Conclusion: Quality control testing of anesthesia machines is an important component of medical safety; it allows safety hazards to be detected and eliminated in time, ensuring patient safety.

  1. PyNeb: a new tool for analyzing emission lines. I. Code description and validation of results

    CERN Document Server

    Luridiana, Valentina; Shaw, Richard A

    2014-01-01

    Analysis of emission lines in gaseous nebulae yields direct measures of physical conditions and chemical abundances and is the cornerstone of nebular astrophysics. Although the physical problem is conceptually simple, its practical complexity can be overwhelming since the amount of data to be analyzed steadily increases; furthermore, results depend crucially on the input atomic data, whose determination also improves each year. To address these challenges we created PyNeb, an innovative code for analyzing emission lines. PyNeb computes physical conditions and ionic and elemental abundances, and produces both theoretical and observational diagnostic plots. It is designed to be portable, modular, and largely customizable in aspects such as the atomic data used, the format of the observational data to be analyzed, and the graphical output. It gives full access to the intermediate quantities of the calculation, making it possible to write scripts tailored to the specific type of analysis one wants to carry out. I...

  2. The InterFrost benchmark of Thermo-Hydraulic codes for cold regions hydrology - first inter-comparison results

    Science.gov (United States)

    Grenier, Christophe; Roux, Nicolas; Anbergen, Hauke; Collier, Nathaniel; Costard, Francois; Ferrry, Michel; Frampton, Andrew; Frederick, Jennifer; Holmen, Johan; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Orgogozo, Laurent; Rivière, Agnès; Rühaak, Wolfram; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik

    2015-04-01

    The impacts of climate change in boreal regions have received considerable attention recently due to the warming trends that have been experienced in recent decades and are expected to intensify in the future. Large portions of these regions, corresponding to permafrost areas, are covered by water bodies (lakes, rivers) that interact with the surrounding permafrost. For example, the thermal state of the surrounding soil influences the energy and water budget of the surface water bodies. Also, these water bodies generate taliks (unfrozen zones below) that disturb the thermal regimes of permafrost and may play a key role in the context of climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is required to understand and model the past and future evolution of landscapes, rivers, lakes and associated groundwater systems in a changing climate. However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, and this lack can be partly attributed to the difficulty in verifying multi-dimensional results produced by numerical models. Numerical approaches can only be validated against analytical solutions for a purely thermal 1D equation with phase change (e.g. Neumann, Lunardini). When it comes to the coupled TH system (coupling two highly non-linear equations), the only possible approach is to compare the results from different codes on provided test cases and/or to have controlled experiments for validation. Such inter-code comparisons can propel discussions on how to improve code performance. A benchmark exercise was initialized in 2014 with a kick-off meeting in Paris in November. Participants from the USA, Canada, Germany, Sweden and France convened, representing altogether 13 simulation codes. The benchmark exercises consist of several test cases inspired by existing literature (e.g. McKenzie et al., 2007) as well as new ones. They

  3. Preliminary results of processing of Pulkovo series of photographic observations of double star 61 Cygni measured by automatic machine "Fantasy"

    Science.gov (United States)

    Gorshanov, D. L.; Shakht, N. A.; Kisselev, A. A.; Polyakov, E. V.; Bronnikova, A. A.; Kanaev, I. I.

    2003-11-01

    Two long-term series of photographic observations of one of the nearest double stars, 61 Cygni, have been obtained at Pulkovo by means of the normal astrograph in 1895-2000 (I) and by means of the 26'' refractor in 1958-2000 (II). All these observations have been measured with the automatic measuring machine "Fantasy", with mean errors of the yearly positions of 0.016'' and 0.008'' for series I and II, respectively. Periodic deviations with a period of 6.4 +/- 0.5 yr in the residuals of the relative distances between the components are noticed for series II.

  4. Binary neutron-star mergers with Whisky and SACRA: First quantitative comparison of results from independent general-relativistic hydrodynamics codes

    Science.gov (United States)

    Baiotti, Luca; Shibata, Masaru; Yamamoto, Tetsuro

    2010-09-01

    We present the first quantitative comparison of two independent general-relativistic hydrodynamics codes, the whisky code and the sacra code. We compare the output of simulations starting from the same initial data and carried out with the configuration (numerical methods, grid setup, resolution, gauges) which for each code has been found to give consistent and sufficiently accurate results, in particular, in terms of cleanness of gravitational waveforms. We focus on the quantities that should be conserved during the evolution (rest mass, total mass energy, and total angular momentum) and on the gravitational-wave amplitude and frequency. We find that the results produced by the two codes agree at a reasonable level, with variations in the different quantities but always at better than about 10%.

  5. Binary neutron-star mergers with Whisky and SACRA: First quantitative comparison of results from independent general-relativistic hydrodynamics codes

    CERN Document Server

    Baiotti, Luca; Yamamoto, Tetsuro

    2010-01-01

    We present the first quantitative comparison of two independent general-relativistic hydrodynamics codes, the Whisky code and the SACRA code. We compare the output of simulations starting from the same initial data and carried out with the configuration (numerical methods, grid setup, resolution, gauges) which for each code has been found to give consistent and sufficiently accurate results, in particular in terms of cleanness of gravitational waveforms. We focus on the quantities that should be conserved during the evolution (rest mass, total mass energy, and total angular momentum) and on the gravitational-wave amplitude and frequency. We find that the results produced by the two codes agree at a reasonable level, with variations in the different quantities but always at better than about 10%.

  6. Comparison of Analytical and Measured Performance Results on Network Coding in IEEE 802.11 Ad-Hoc Networks

    DEFF Research Database (Denmark)

    Zhao, Fang; Médard, Muriel; Hundebøll, Martin

    2012-01-01

    Network coding is a promising technology that has been shown to improve throughput in wireless mesh networks. In this paper, we compare the analytical and experimental performance of COPE-style network coding in IEEE 802.11 ad-hoc networks. In the experiments, we use a lightweight scheme called...

  7. The InterFrost benchmark of Thermo-Hydraulic codes for cold regions hydrology - first inter-comparison phase results

    Science.gov (United States)

    Grenier, Christophe; Rühaak, Wolfram

    2016-04-01

    Climate change impacts in permafrost regions have received considerable attention recently due to the pronounced warming trends experienced in recent decades and which have been projected into the future. Large portions of these permafrost regions are characterized by surface water bodies (lakes, rivers) that interact with the surrounding permafrost, often generating taliks (unfrozen zones) within the permafrost that allow for hydrologic interactions between the surface water bodies and underlying aquifers and thus influence the hydrologic response of a landscape to climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is required to understand and model the past and future evolution of such units (Kurylyk et al. 2014). However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, which can be partly attributed to the difficulty in verifying multi-dimensional results produced by numerical models. A benchmark exercise was initialized at the end of 2014. Participants convened from the USA, Canada and Europe, representing 13 simulation codes. The benchmark exercises consist of several test cases inspired by existing literature (e.g. McKenzie et al., 2007) as well as new ones (Kurylyk et al. 2014; Grenier et al. in prep.; Rühaak et al. 2015). They range from simpler, purely thermal 1D cases to more complex, coupled 2D TH cases (benchmarks TH1, TH2, and TH3). Some experimental cases conducted in a cold room complement the validation approach. A web site hosted by LSCE (Laboratoire des Sciences du Climat et de l'Environnement) serves as an interaction platform for the participants and hosts the test case databases at the following address: https://wiki.lsce.ipsl.fr/interfrost. The results of the first stage of the benchmark exercise will be presented. We will mainly focus on the inter-comparison of participant results for the coupled cases TH2 & TH3. Both cases

  8. The importance of code status discussions in the psychiatric hospital: results of a single site survey of psychiatrists.

    Science.gov (United States)

    McKean, Alastair J S; Lapid, Maria I; Geske, Jennifer R; Kung, Simon

    2015-04-01

    Documentation of code status is a requirement with hospital admission, yet this discussion may present unique challenges with psychiatric inpatients. Currently, no standards exist on conducting these discussions with psychiatric inpatients. The authors surveyed psychiatry trainees and faculty regarding their perceptions and practice to gain further insight into the types of approaches used. The authors conducted an IRB-approved, Web-based survey of psychiatry faculty and trainees using a 25-item questionnaire of demographics and opinions about code status among psychiatric inpatients. The response rate was 36.1 % (n = 30; 15 faculty and 15 trainees). Respondents felt that it was important to discuss code status with each admission. Faculty placed a higher emphasis on assessing patients with a recent suicide attempt (p = 0.024). Psychiatric faculty and trainees endorsed the importance of assessing code status with each admission. The authors suggest that educational programs are needed on strategies to conduct code status discussions properly and effectively in psychiatric populations.

  9. Polar Codes

    Science.gov (United States)

    2014-12-01

    Polar codes were introduced by E. Arikan in [1]. This report examines polar codes as a forward error correction (FEC) technique, including the capacity of the binary symmetric channel (BSC) and the AWGN channel and performance over QPSK Gaussian channels. Prepared under the authority of C. A. Wilgenbusch, Head ISR Division, it describes the results of the project "More reliable wireless...

  10. Interactive QR code beautification with full background image embedding

    Science.gov (United States)

    Lin, Lijian; Wu, Song; Liu, Sijiang; Jiang, Bo

    2017-06-01

    QR (Quick Response) code is a kind of two-dimensional barcode that was first developed in the automotive industry. Nowadays, QR codes are widely used in commercial applications such as product promotion, mobile payment and product information management. Traditional QR codes that follow the international standard are reliable and fast to decode, but lack the aesthetic appearance needed to convey visual information to customers. In this work, we present a novel interactive method to generate aesthetic QR codes. Given the information to be encoded and an image to be used as the full QR code background, our method accepts interactive user strokes as hints to remove undesired parts of the QR code modules, relying on the QR code error correction mechanism and background color thresholds. Compared to previous approaches, our method follows the intention of the QR code designer and can therefore achieve a more pleasing result while keeping high machine readability.
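    As a toy illustration of the module-removal idea (not the authors' algorithm), one can ask, per module, whether the background alone is already dark or light enough to stand in for it, while keeping any modules the user marked as essential; the thresholds and array layout below are assumptions, and a real implementation must also stay within the error correction budget.

        import numpy as np

        def modules_to_draw(qr, bg_lum, user_keep, dark_thr=0.35, light_thr=0.65):
            """qr: HxW bool array of QR modules (True = dark module)
            bg_lum: HxW mean background luminance per module, in [0, 1]
            user_keep: HxW bool mask of modules the user insists on keeping
            Returns the mask of modules that must still be drawn explicitly."""
            # a dark module may be dropped where the background is already dark,
            # a light module where the background is already light
            redundant = np.where(qr, bg_lum < dark_thr, bg_lum > light_thr)
            return user_keep | ~redundant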

  11. Visual input enhancement via essay coding results in deaf learners' long-term retention of improved English grammatical knowledge.

    Science.gov (United States)

    Berent, Gerald P; Kelly, Ronald R; Schmitz, Kathryn L; Kenney, Patricia

    2009-01-01

    This study explored the efficacy of visual input enhancement, specifically essay enhancement, for facilitating deaf college students' improvement in English grammatical knowledge. Results documented students' significant improvement immediately after a 10-week instructional intervention, a replication of recent research. Additionally, the results of delayed assessment documented students' significant retention of that improvement five and a half months beyond the instructional intervention period. Essay enhancement served to highlight, via a coding procedure, students' successful and unsuccessful production of discourse-required target grammatical structures. The procedure converted students' written communicative output into enhanced input for inducing noticing of grammatical form and, through essay revision, establishing form-meaning connections leading to acquisition. With its optimal design characteristics supported by theoretical and empirical research, essay enhancement is a highly effective methodology that can be easily implemented as primary or supplementary English instruction for deaf students. The results of this study hold great promise for facilitating deaf students' English language and literacy development and have broad implications for second-language research, teaching, and learning.

  12. Study of surface integrity AISI 4140 as result of hard, dry and high speed machining using CBN

    Science.gov (United States)

    Ginting, B.; Sembiring, R. W.; Manurung, N.

    2017-09-01

    The concepts of hard, dry and high-speed machining can be combined to achieve high productivity with lower production costs in the manufacturing industry. Hard turning can be a solution to reduce production time. In hard turning of alloy steels, problems have been reported relating to surface integrity, such as surface roughness, residual stress and the white layer. AISI 4140 is used for highly reliable hydraulic system components and is classified as a cold-work tool steel; it was selected because it can be hardened up to 55 HRC. In this research, the experiment was designed using a central composite design (CCD) with three factors, each at two levels, and six center points; the experiments were conducted with one replication.

  13. Comparison of dose estimates using the buildup-factor method and a Baryon transport code (BRYNTRN) with Monte Carlo results

    Science.gov (United States)

    Shinn, Judy L.; Wilson, John W.; Nealy, John E.; Cucinotta, Francis A.

    1990-01-01

    Continuing efforts toward validating the buildup factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are the candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for cases with various thicknesses of shields and various types of proton spectra. The results are found to be in reasonable agreement but with some overestimation by the buildup factor method when the effect of neutron production in the shield is significant. Future improvement to include neutron coupling in the buildup factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual components of doses, such as those from the secondaries and heavy particle recoils, is obtained between BRYNTRN and Monte Carlo results.
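    For readers unfamiliar with the buildup-factor method mentioned above, the textbook point-kernel form is sketched below; the attenuation coefficient, thickness and linear buildup function are illustrative values only and are not taken from the paper.

        import math

        def shielded_dose(d_unc, mu, x, buildup):
            """Uncollided dose d_unc attenuated through a shield of thickness x,
            multiplied by a buildup factor B(mu*x) that accounts for the extra dose
            delivered by scattered and secondary particles."""
            return buildup(mu * x) * d_unc * math.exp(-mu * x)

        # illustrative call with a simple linear buildup approximation
        dose = shielded_dose(d_unc=1.0, mu=0.07, x=10.0, buildup=lambda mfp: 1.0 + 0.5 * mfp)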

  14. Communication Studies of DMP and SMP Machines

    Science.gov (United States)

    Sohn, Andrew; Biswas, Rupak; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    Understanding the interplay between machines and problems is key to obtaining high performance on parallel machines. This paper investigates the interplay between programming paradigms and communication capabilities of parallel machines. In particular, we explicate the communication capabilities of the IBM SP-2 distributed-memory multiprocessor and the SGI PowerCHALLENGEarray symmetric multiprocessor. Two benchmark problems of bitonic sorting and Fast Fourier Transform are selected for experiments. Communication-efficient algorithms are developed to exploit the overlapping capabilities of the machines. Programs are written in Message-Passing Interface for portability and identical codes are used for both machines. Various data sizes and message sizes are used to test the machines' communication capabilities. Experimental results indicate that the communication performance of the multiprocessors are consistent with the size of messages. The SP-2 is sensitive to message size but yields a much higher communication overlapping because of the communication co-processor. The PowerCHALLENGEarray is not highly sensitive to message size and yields a low communication overlapping. Bitonic sorting yields lower performance compared to FFT due to a smaller computation-to-communication ratio.

  15. Thermal-hydraulic characteristics of a Westinghouse Model 51 steam generator. Volume 2. Appendix A, numerical results. Interim report. [CALIPSOS code numerical data]

    Energy Technology Data Exchange (ETDEWEB)

    Fanselau, R.W.; Thakkar, J.G.; Hiestand, J.W.; Cassell, D.

    1981-03-01

    The Comparative Thermal-Hydraulic Evaluation of Steam Generators program represents an analytical investigation of the thermal-hydraulic characteristics of four PWR steam generators. The analytical tool utilized in this investigation is the CALIPSOS code, a three-dimensional flow distribution code. This report presents the steady state thermal-hydraulic characteristics on the secondary side of a Westinghouse Model 51 steam generator. Details of the CALIPSOS model with accompanying assumptions, operating parameters, and transport correlations are identified. Comprehensive graphical and numerical results are presented to facilitate the desired comparison with other steam generators analyzed by the same flow distribution code.

  16. Electron impact excitation of N IV: calculations with the DARC code and a comparison with ICFT results

    CERN Document Server

    Aggarwal, K M; Lawson, K D

    2016-01-01

    There have been discussions in the recent literature regarding the accuracy of the available electron impact excitation rates (equivalently effective collision strengths $\Upsilon$) for transitions in Be-like ions. In the present paper we demonstrate, once again, that earlier results for $\Upsilon$ are indeed overestimated (by up to four orders of magnitude), for over 40% of transitions and over a wide range of temperatures. To do this we have performed two sets of calculations for N IV, with two different model sizes consisting of 166 and 238 fine-structure energy levels. As in our previous work, for the determination of atomic structure the GRASP (General-purpose Relativistic Atomic Structure Package) is adopted and for the scattering calculations (the standard and parallelised versions of) the Dirac Atomic R-matrix Code (DARC) are employed. Calculations for collision strengths and effective collision strengths have been performed over a wide range of energy (up to 45 Ryd) and temperature (up to 2.0$...

  17. Machine Translation

    Institute of Scientific and Technical Information of China (English)

    张严心

    2015-01-01

    As an ancillary translation tool, Machine Translation has received increasing attention and has been studied in different ways by a great number of researchers and scholars for a long time. Knowing the definition of Machine Translation and analysing its benefits and problems is significant for translators in order to make good use of it, and is helpful for developing and perfecting Machine Translation systems in the future.

  18. A new volumetric CT machine for dental imaging based on the cone-beam technique: preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Mozzo, P. [Dept. of Medical Physics, University Hospital, Verona (Italy); Procacci, C.; Tacconi, A.; Tinazzi Martini, P.; Bergamo Andreis, I.A. [Dept. of Radiology, University Hospital, Verona (Italy)

    1998-12-01

    The objective of this paper is to present a new type of volumetric CT which uses the cone-beam technique instead of traditional fan-beam technique. The machine is dedicated to the dento-maxillo-facial imaging, particularly for planning in the field of implantology. The main characteristics of the unit are presented with reference to the technical parameters as well as the software performance. Images obtained are reported as various 2D sections of a volume reconstruction. Also, measurements of the geometric accuracy and the radiation dose absorbed by the patient are obtained using specific phantoms. Absorbed dose is compared with that given off by spiral CT. Geometric accuracy, evaluated with reference to various reconstruction modalities and different spatial orientations, is 0.8-1 % for width measurements and 2.2 % for height measurements. Radiation dose absorbed during the scan shows different profiles in central and peripheral axes. As regards the maximum value of the central profile, dose from the new unit is approximately one sixth that of traditional spiral CT. The new system appears to be very promising in dento-maxillo-facial imaging and, due to the good ratio between performance and low cost, together with low radiation dose, very interesting in view of large-scale use of the CT technique in such diagnostic applications. (orig.) With 10 figs., 3 tabs., 15 refs.

  19. Sustainable machining

    CERN Document Server

    2017-01-01

    This book provides an overview on current sustainable machining. Its chapters cover the concept in economic, social and environmental dimensions. It provides the reader with proper ways to handle several pollutants produced during the machining process. The book is useful on both undergraduate and postgraduate levels and it is of interest to all those working with manufacturing and machining technology.

  20. Test results of a 40 kW Stirling engine and comparison with the NASA-Lewis computer code predictions

    Science.gov (United States)

    Allen, D.; Cairelli, J.

    1985-01-01

    A Stirling engine was tested without auxiliaries at NASA-Lewis. Three different regenerator configurations were tested with hydrogen. The test objectives were (1) to obtain steady-state and dynamic engine data, including indicated power, for validation of an existing computer model for this engine; and (2) to evaluate structurally the use of silicon carbide regenerators. This paper presents comparisons of the measured brake performance, indicated mean effective pressure, and cyclic pressure variations with those predicted by the code. The measured data tended to be lower than the computer code predictions. The silicon carbide foam regenerators appear to be structurally suitable, but the foam matrix tested severely reduced performance.

  1. Coded Random Access

    DEFF Research Database (Denmark)

    Paolini, Enrico; Stefanovic, Cedomir; Liva, Gianluigi

    2015-01-01

    The rise of machine-to-machine communications has rekindled the interest in random access protocols as a support for a massive number of uncoordinatedly transmitting devices. The legacy ALOHA approach is developed under a collision model, where slots containing collided packets are considered as waste. However, if the common receiver (e.g., base station) is capable of storing the collision slots and using them in a transmission recovery process based on successive interference cancellation, the design space for access protocols is radically expanded. We present the paradigm of coded random access, in which the structure of the access protocol can be mapped to a structure of an erasure-correcting code defined on a graph. This opens the possibility to use coding theory and tools for designing efficient random access protocols, offering markedly better performance than ALOHA. Several instances of coded...
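    The successive interference cancellation step can be sketched in a few lines for an idealized frame in which each slot records the set of users whose replicas landed in it; this toy decoder is our illustration, not the authors' implementation, and it ignores channel effects entirely.

        def sic_decode(frame):
            """frame: list of sets, one per slot, holding the ids of the users that
            transmitted a replica in that slot. Returns the set of decoded users."""
            slots = [set(s) for s in frame]
            decoded = set()
            progress = True
            while progress:
                progress = False
                for s in slots:
                    if len(s) == 1:                    # singleton slot: packet decodable
                        user = next(iter(s))
                        decoded.add(user)
                        for t in slots:                # cancel this user's replicas everywhere
                            t.discard(user)
                        progress = True
            return decoded

        # example: user 1 repeats in slots 0 and 2, user 2 in slots 1 and 2
        # sic_decode([{1}, {2}, {1, 2}]) -> {1, 2}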

  2. Offshore code comparison collaboration continuation within IEA Wind Task 30: Phase II results regarding a floating semisubmersible wind system

    DEFF Research Database (Denmark)

    Robertson, Amy; Jonkman, Jason M.; Vorpahl, Fabian

    2014-01-01

    in a greater understanding of offshore floating wind turbine dynamics and modeling techniques, and better knowledge of the validity of various approximations. The lessons learned from this exercise have improved the participants’ codes, thus improving the standard of offshore wind turbine modeling....

  3. Energy Consumption Model and Measurement Results for Network Coding-enabled IEEE 802.11 Meshed Wireless Networks

    DEFF Research Database (Denmark)

    Paramanathan, Achuthan; Rasmussen, Ulrik Wilken; Hundebøll, Martin

    2012-01-01

    This paper presents an energy model and energy measurements for network coding enabled wireless meshed networks based on IEEE 802.11 technology. The energy model and the energy measurement testbed is limited to a simple Alice and Bob scenario. For this toy scenario we compare the energy usages...

  4. Machine learning with R

    CERN Document Server

    Lantz, Brett

    2013-01-01

    Written as a tutorial to explore and understand the power of R for machine learning, this practical guide covers all of the need-to-know topics in a very systematic way. For each machine learning approach, each step in the process is detailed, from preparing the data for analysis to evaluating the results. These steps will build the knowledge you need to apply them to your own data science tasks. Intended for those who want to learn how to use R's machine learning capabilities and gain insight from your data. Perhaps you already know a bit about machine learning, but have never used R; or

  5. Research on Manufacturing Technology Based on Machine Vision

    Institute of Scientific and Technical Information of China (English)

    HU Zhanqi; ZHENG Kuijing

    2006-01-01

    The concept of machine vision-based manufacturing technology is proposed first, and the key algorithms used in two-dimensional and three-dimensional machining are discussed in detail. Machining information can be derived from binary and grey-scale images after processing and transforming the picture. A contour method and a parallel cutting method are proposed for two-dimensional machining: a polygon-approximation algorithm is used to cut the profile of the workpiece, and a fill-scanning algorithm is used to machine the inner part of a pocket. An improved Shape From Shading method with adaptive pre-processing is adopted to reconstruct the three-dimensional model, and a layer-cutting method is adopted for three-dimensional machining. The tool path is then obtained from the model, and NC code is generated subsequently. The model can be machined conveniently on a lathe, milling machine or engraver. Examples are given to demonstrate the results of the ImageCAM system, developed by the authors to implement the algorithms previously mentioned.
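    As an illustrative sketch of the contour-to-toolpath step described above (assuming OpenCV for the image processing; the G-code header, feed rate and function names are placeholders, and this is not the ImageCAM implementation):

        import cv2

        def contours_to_gcode(binary, epsilon=1.0, feed=200):
            """binary: single-channel uint8 image of the workpiece profile.
            Approximates each contour by a polygon and emits straight-line moves."""
            contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            lines = ["G21 ; millimetres", "G90 ; absolute coordinates"]
            for c in contours:
                poly = cv2.approxPolyDP(c, epsilon, True).reshape(-1, 2)
                pts = [(float(x), float(y)) for x, y in poly]
                lines.append(f"G0 X{pts[0][0]:.3f} Y{pts[0][1]:.3f}")   # rapid move to start
                for x, y in pts[1:] + [pts[0]]:                        # cut around the profile
                    lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed}")
                # a fill-scanning (raster) pass would machine the interior of a pocket
            return "\n".join(lines)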

  6. Keyboard with Universal Communication Protocol Applied to CNC Machine

    Directory of Open Access Journals (Sweden)

    Mejía-Ugalde Mario

    2014-04-01

    Full Text Available This article describes a universal communication protocol for a microcontroller-based industrial keyboard applied to computer numerically controlled (CNC) machines. The main difference among keyboard manufacturers is that each has its own source-code programming, producing a different communication protocol and generating improper interpretations of the established functions. The above results in commercial industrial keyboards that are expensive and incompatible when connected to different machines. In the present work, the protocol allows the designed universal keyboard and the standard PC keyboard to be connected at the same time; it is compatible with all computers through USB, AT or PS/2 communications, for use in CNC machines, with extension to other machines such as robots, blowing machines, injection molding machines and others. The advantages of this design include easy reprogramming, decreased costs, manipulation of various machine functions and easy expansion of input and output signals. The results of the performance tests were satisfactory, because each key can be programmed and reprogrammed in different ways, generating codes for different functions depending on the application where it is required to be used.

  7. Space Time Codes from Permutation Codes

    CERN Document Server

    Henkel, Oliver

    2006-01-01

    A new class of space time codes with high performance is presented. The code design utilizes tailor-made permutation codes, which are known to have large minimal distances as spherical codes. A geometric connection between spherical and space time codes has been used to translate them into the final space time codes. Simulations demonstrate that the performance increases with the block lengths, a result that has been conjectured already in previous work. Further, the connection to permutation codes allows for moderate complex en-/decoding algorithms.

  8. Machinability evaluation of machinable ceramics with fuzzy theory

    Institute of Scientific and Technical Information of China (English)

    YU Ai-bing; ZHONG Li-jun; TAN Ye-fa

    2005-01-01

    The property parameters and machining output parameters were selected for the machinability evaluation of machinable ceramics. Based on fuzzy evaluation theory, a two-stage fuzzy evaluation approach was applied to consider these parameters, and a two-stage fuzzy comprehensive evaluation model was proposed to evaluate the machinability of machinable ceramic materials. Ce-ZrO2/CePO4 composites were fabricated and machined for the evaluation. Material removal rates and specific normal grinding forces were measured. The parameters relevant to machinability were selected as the alternative set, and five grades were chosen for the machinability evaluation of machinable ceramics. Machinability grades of the machinable ceramics were determined through fuzzy operations. Ductile marks are observed on the machined Ce-ZrO2/CePO4 surfaces. The five prepared Ce-ZrO2/CePO4 composites are classified into three machinability grades according to the fuzzy comprehensive evaluation results. The machinability grades of the Ce-ZrO2/CePO4 composites are related to the CePO4 content.

  9. Advanced Analysis of Nontraditional Machining

    CERN Document Server

    Tsai, Hung-Yin

    2013-01-01

    Nontraditional machining utilizes thermal, chemical, electrical, mechanical and optical sources of energy to form and cut materials. Advanced Analysis of Nontraditional Machining explains in-depth how each of these advanced machining processes work, their machining system components, and process variables and industrial applications, thereby offering advanced knowledge and scientific insight. This book also documents the latest and frequently cited research results of a few key nonconventional machining processes for the most concerned topics in industrial applications, such as laser machining, electrical discharge machining, electropolishing of die and mold, and wafer processing for integrated circuit manufacturing. This book also: Fills the gap of the advanced knowledge of nonconventional machining between industry and research Documents latest and frequently cited research of key nonconventional machining processes for the most sought after topics in industrial applications Demonstrates advanced multidisci...

  10. Investigating What Undergraduate Students Know About Science: Results from Complementary Strategies to Code Open-Ended Responses

    Science.gov (United States)

    Tijerino, K.; Buxner, S.; Impey, C.; CATS

    2013-04-01

    This paper presents new findings from an ongoing study of undergraduate student science literacy. Using data drawn from a 22-year project and over 11,000 student responses, we present how students' word usage in open-ended responses relates to what it means to study something scientifically. Analysis of students' responses shows that they easily use words commonly associated with science, such as hypothesis, study, method, test, and experiment; but do these responses use scientific words knowledgeably? As with many multifaceted disciplines, demonstration of comprehension varies. This paper presents three different ways that student responses have been coded to investigate their understanding of science: 1) differentiating the quality of a response with a coding scheme; 2) using word counting as an indicator of overall response strength; 3) coding responses for the quality of the students' response. Building on previous research, comparison of science literacy and open-ended responses demonstrates that knowledge of science facts and vocabulary does not indicate a comprehension of the concepts behind these facts and vocabulary. This study employs quantitative and qualitative methods to systematically determine the frequency and meaning of responses to standardized questions, and illustrates how students are able to demonstrate a knowledge of vocabulary. However, this knowledge is not indicative of conceptual understanding and poses important questions about how we assess students' understanding of science.
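    A minimal version of the word-counting strategy listed as approach 2 could look like the following; the vocabulary set is only a guess at the kind of science-associated terms the study mentions, not the actual coding scheme.

        import re
        from collections import Counter

        SCIENCE_TERMS = {"hypothesis", "study", "method", "test", "experiment",
                         "observation", "evidence", "theory", "data"}

        def science_word_score(response):
            """Count occurrences of science-associated words in an open-ended response."""
            words = re.findall(r"[a-z]+", response.lower())
            hits = Counter(w for w in words if w in SCIENCE_TERMS)
            return sum(hits.values()), hits

        # science_word_score("You form a hypothesis and test it with an experiment.")
        # -> (3, Counter({'hypothesis': 1, 'test': 1, 'experiment': 1}))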

  11. Simple machines

    CERN Document Server

    Graybill, George

    2007-01-01

    Just how simple are simple machines? With our ready-to-use resource, they are simple to teach and easy to learn! Chocked full of information and activities, we begin with a look at force, motion and work, and examples of simple machines in daily life are given. With this background, we move on to different kinds of simple machines including: Levers, Inclined Planes, Wedges, Screws, Pulleys, and Wheels and Axles. An exploration of some compound machines follows, such as the can opener. Our resource is a real time-saver as all the reading passages, student activities are provided. Presented in s

  12. Coupling a Basin Modeling and a Seismic Code using MOAB

    KAUST Repository

    Yan, Mi

    2012-06-02

    We report on a demonstration of loose multiphysics coupling between a basin modeling code and a seismic code running on a large parallel machine. Multiphysics coupling, which is one critical capability for a high performance computing (HPC) framework, was implemented using the MOAB open-source mesh and field database. MOAB provides for code coupling by storing mesh data and input and output field data for the coupled analysis codes and interpolating the field values between different meshes used by the coupled codes. We found it straightforward to use MOAB to couple the PBSM basin modeling code and the FWI3D seismic code on an IBM Blue Gene/P system. We describe how the coupling was implemented and present benchmarking results for up to 8 racks of Blue Gene/P with 8192 nodes and MPI processes. The coupling code is fast compared to the analysis codes and it scales well up to at least 8192 nodes, indicating that a mesh and field database is an efficient way to implement loose multiphysics coupling for large parallel machines.
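    The core service the coupler relies on here is interpolating field values between the meshes of the two codes. The sketch below shows that interpolation step generically with SciPy, purely to illustrate the idea; it is not the MOAB API used in the paper.

        import numpy as np
        from scipy.interpolate import griddata

        def transfer_field(src_points, src_values, dst_points):
            """Map a nodal field from the source code's mesh points onto the target mesh."""
            vals = griddata(src_points, src_values, dst_points, method="linear")
            outside = np.isnan(vals)                 # target nodes outside the source hull
            if outside.any():
                vals[outside] = griddata(src_points, src_values, dst_points[outside],
                                         method="nearest")
            return vals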

  13. 3D Visualization of Machine Learning Algorithms with Astronomical Data

    Science.gov (United States)

    Kent, Brian R.

    2016-01-01

    We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
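    A minimal sketch of the MST-based clustering described above, using SciPy (our illustration; the dense distance matrix and the fixed cut length are simplifications suitable only for small catalogs):

        import numpy as np
        from scipy.spatial.distance import pdist, squareform
        from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

        def mst_clusters(positions, cut_length):
            """positions: (N, 3) galaxy coordinates; edges longer than cut_length are cut."""
            dist = squareform(pdist(positions))              # dense pairwise distances
            mst = minimum_spanning_tree(dist)                # sparse matrix of MST edge lengths
            mst.data[mst.data > cut_length] = 0.0            # drop the long branches
            mst.eliminate_zeros()
            n_groups, labels = connected_components(mst, directed=False)
            return n_groups, labels

    The resulting group labels can then be exported, for example to Blender, to colour each cluster in the 3D rendering.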

  14. Test results of a 40-kW Stirling engine and comparison with the NASA Lewis computer code predictions

    Science.gov (United States)

    Allen, David J.; Cairelli, James E.

    1988-01-01

    A Stirling engine was tested without auxiliaries at NASA-Lewis. Three different regenerator configurations were tested with hydrogen. The test objectives were: (1) to obtain steady-state and dynamic engine data, including indicated power, for validation of an existing computer model for this engine; and (2) to evaluate structurally the use of silicon carbide regenerators. This paper presents comparisons of the measured brake performance, indicated mean effective pressure, and cyclic pressure variations with those predicted by the code. The silicon carbide foam regenerators appear to be structurally suitable, but the foam matrix showed severely reduced performance.

  15. Description and results of a two-dimensional lattice physics code benchmark for the Canadian Pressure Tube Supercritical Water-cooled Reactor (PT-SCWR)

    Energy Technology Data Exchange (ETDEWEB)

    Hummel, D.W.; Langton, S.E.; Ball, M.R.; Novog, D.R.; Buijs, A., E-mail: hummeld@mcmaster.ca [McMaster Univ., Hamilton, Ontario (Canada)

    2013-07-01

    Discrepancies have been observed among a number of recent reactor physics studies in support of the PT-SCWR pre-conceptual design, including differences in lattice-level predictions of infinite neutron multiplication factor, coolant void reactivity, and radial power profile. As a first step to resolving these discrepancies, a lattice-level benchmark problem was designed based on the 78-element plutonium-thorium PT-SCWR fuel design under a set of prescribed local conditions. This benchmark problem was modeled with a suite of both deterministic and Monte Carlo neutron transport codes. The results of these models are presented here as the basis of a code-to-code comparison. (author)

  16. Code Generation in the Columbia Esterel Compiler

    Directory of Open Access Journals (Sweden)

    Jia Zeng

    2007-02-01

    Full Text Available The synchronous language Esterel provides deterministic concurrency by adopting a semantics in which threads march in step with a global clock and communicate in a very disciplined way. Its expressive power comes at a cost, however: it is a difficult language to compile into machine code for standard von Neumann processors. The open-source Columbia Esterel Compiler is a research vehicle for experimenting with new code generation techniques for the language. Providing a front-end and a fairly generic concurrent intermediate representation, a variety of back-ends have been developed. We present three of the most mature ones, which are based on program dependence graphs, dynamic lists, and a virtual machine. After describing the very different algorithms used in each of these techniques, we present experimental results that compare twenty-four benchmarks generated by eight different compilation techniques running on seven different processors.

  17. Electric machine

    Science.gov (United States)

    El-Refaie, Ayman Mohamed Fawzi [Niskayuna, NY; Reddy, Patel Bhageerath [Madison, WI

    2012-07-17

    An interior permanent magnet electric machine is disclosed. The interior permanent magnet electric machine comprises a rotor comprising a plurality of radially placed magnets each having a proximal end and a distal end, wherein each magnet comprises a plurality of magnetic segments and at least one magnetic segment towards the distal end comprises a high resistivity magnetic material.

  18. Offshore code comparison collaboration continuation (OC4), phase I - Results of coupled simulations of an offshore wind turbine with jacket support structure

    DEFF Research Database (Denmark)

    Popko, Wojciech; Vorpahl, Fabian; Zuga, Adam;

    2012-01-01

    In this paper, the exemplary results of the IEA Wind Task 30 "Offshore Code Comparison Collaboration Continuation" (OC4) Project - Phase I, focused on the coupled simulation of an offshore wind turbine (OWT) with a jacket support structure, are presented. The focus of this task has been...

  19. Micro-machining.

    Science.gov (United States)

    Brinksmeier, Ekkard; Preuss, Werner

    2012-08-28

    Manipulating bulk material at the atomic level is considered to be the domain of physics, chemistry and nanotechnology. However, precision engineering, especially micro-machining, has become a powerful tool for controlling the surface properties and sub-surface integrity of the optical, electronic and mechanical functional parts in a regime where continuum mechanics is left behind and the quantum nature of matter comes into play. The surprising subtlety of micro-machining results from the extraordinary precision of tools, machines and controls expanding into the nanometre range-a hundred times more precise than the wavelength of light. In this paper, we will outline the development of precision engineering, highlight modern achievements of ultra-precision machining and discuss the necessity of a deeper physical understanding of micro-machining.

  20. Design of Sugarcane Peeling Machine

    Directory of Open Access Journals (Sweden)

    Ge Xinfeng

    2015-02-01

    Full Text Available In order to solve the problems encountered in peeling sugarcane by hand, a sugarcane peeling machine is designed. The machine consists of a motor, groove wheel, cutting chamber, slider-crank mechanism and reducer (including a belt drive, chain drive and so on). The designed sugarcane peeling machine is simulated, and the results show that it can peel sugarcane successfully in a convenient, fast and uniform manner.

  1. Variability in estimation of self-reported dietary intake data from elite athletes resulting from coding by different sports dietitians.

    Science.gov (United States)

    Braakhuis, Andrea J; Meredith, Kelly; Cox, Gregory R; Hopkins, William G; Burke, Louise M

    2003-06-01

    A routine activity for a sports dietitian is to estimate energy and nutrient intake from an athlete's self-reported food intake. Decisions made by the dietitian when coding a food record are a source of variability in the data. The aim of the present study was to determine the variability in estimation of the daily energy and key nutrient intakes of elite athletes, when experienced coders analyzed the same food record using the same database and software package. Seven-day food records from a dietary survey of athletes in the 1996 Australian Olympic team were randomly selected to provide 13 sets of records, each set representing the self-reported food intake of an endurance, team, weight restricted, and sprint/power athlete. Each set was coded by 3-5 members of Sports Dietitians Australia, making a total of 52 athletes, 53 dietitians, and 1456 athlete-days of data. We estimated within- and between- athlete and dietitian variances for each dietary nutrient using mixed modeling, and we combined the variances to express variability as a coefficient of variation (typical variation as a percent of the mean). Variability in the mean of 7-day estimates of a nutrient was 2- to 3-fold less than that of a single day. The variability contributed by the coder was less than the true athlete variability for a 1-day record but was of similar magnitude for a 7-day record. The most variable nutrients (e.g., vitamin C, vitamin A, cholesterol) had approximately 3-fold more variability than least variable nutrients (e.g., energy, carbohydrate, magnesium). These athlete and coder variabilities need to be taken into account in dietary assessment of athletes for counseling and research.

  2. Neural Decoder for Topological Codes

    Science.gov (United States)

    Torlai, Giacomo; Melko, Roger G.

    2017-07-01

    We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.

  3. The Machine within the Machine

    CERN Multimedia

    Katarina Anthony

    2014-01-01

    Although Virtual Machines are widespread across CERN, you probably won't have heard of them unless you work for an experiment. Virtual machines - known as VMs - allow you to create a separate machine within your own, allowing you to run Linux on your Mac, or Windows on your Linux - whatever combination you need.   Using a CERN Virtual Machine, Linux analysis software runs on a MacBook. When it comes to LHC data, one of the primary issues collaborations face is the diversity of computing environments among collaborators spread across the world. What if an institute cannot run the analysis software because they use different operating systems? "That's where the CernVM project comes in," says Gerardo Ganis, PH-SFT staff member and leader of the CernVM project. "We were able to respond to experimentalists' concerns by providing a virtual machine package that could be used to run experiment software. This way, no matter what hardware they have ...

  4. Fundamentals of machine design

    CERN Document Server

    Karaszewski, Waldemar

    2011-01-01

    A forum of researchers, educators and engineers involved in various aspects of Machine Design provided the inspiration for this collection of peer-reviewed papers. The resultant dissemination of the latest research results, and the exchange of views concerning the future research directions to be taken in this field will make the work of immense value to all those having an interest in the topics covered. The book reflects the cooperative efforts made in seeking out the best strategies for effecting improvements in the quality and the reliability of machines and machine parts and for extending

  5. Polishing tool and the resulting TIF for three variable machine parameters as input for the removal simulation

    Science.gov (United States)

    Schneider, Robert; Haberl, Alexander; Rascher, Rolf

    2017-06-01

    The trend in the optics industry shows that it is increasingly important to be able to manufacture complex lens geometries to a high level of precision. Above a certain required shape accuracy of optical workpieces, the processing changes from two-dimensional to point-shaped processing. It is very important that the process is as stable as possible during point-shaped processing. To ensure stability, usually only one process parameter is varied during processing. It is common that this parameter is the feed rate, which corresponds to the dwell time. In the research project ArenA-FOi (Application-oriented analysis of resource-saving and energy-efficient design of industrial facilities for the optical industry), a contact-based procedure is used for point-shaped processing, and a close look is taken at whether changing several process parameters during a single processing run is meaningful. The ADAPT tool in size R20 from Satisloh AG is used, which is also commercially available. The behavior of the tool is tested under constant conditions in the MCP 250 CNC by OptoTech GmbH. A series of experiments should enable the TIF (tool influence function) to be determined using three variable parameters. Furthermore, the maximum error frequency that can be processed is calculated as an example for one parameter set and serves as an outlook for further investigations. The test results serve as the basis for the later removal simulation, which must be able to deal with a variable TIF. This topic has already been successfully implemented in another research project of the Institute for Precision Manufacturing and High-Frequency Technology (IPH) and thus this algorithm can be used. The next step is the useful implementation of the collected knowledge. The TIF must be selected on the basis of the measured data. It is important to know the error frequencies to select the optimal TIF. Thus, it is possible to compare the simulated results with real measurement
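
    A removal simulation of the kind referred to above is commonly formulated as a convolution of a dwell-time map with the TIF. The following is a minimal sketch of that idea, not the IPH algorithm; the Gaussian TIF shape, grid sizes and removal rates are illustrative assumptions.

```python
# Minimal sketch of a dwell-time based removal simulation (hypothetical, not the
# IPH algorithm): removal = dwell-time map convolved with the tool influence function.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_tif(size, sigma, peak_removal):
    """Hypothetical rotationally symmetric TIF (removal rate per unit dwell time)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    return peak_removal * np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))

# Dwell-time map over the workpiece grid (seconds per grid point), e.g. from a feed-rate plan.
dwell = np.full((200, 200), 0.5)
tif = gaussian_tif(size=21, sigma=4.0, peak_removal=1e-3)  # e.g. micrometres per second

removal = fftconvolve(dwell, tif, mode="same")  # simulated material removal map
print(removal.max(), removal.mean())
```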

  6. Are minimized perfusion circuits the better heart lung machines? Final results of a prospective randomized multicentre study.

    Science.gov (United States)

    El-Essawi, A; Hajek, T; Skorpil, J; Böning, A; Sabol, F; Ostrovsky, Y; Hausmann, H; Harringer, W

    2011-11-01

    Minimized perfusion circuits (MPCs), although aiming at minimizing the adverse effects of cardiopulmonary bypass, have not yet gained popularity. This can be attributed to concerns regarding their safety, as well as a lack of sufficient evidence of their benefit. Described is a randomized, multicentre study comparing the ROCsafeRX MPC with standard cardiopulmonary bypass in patients undergoing elective coronary artery bypass grafting and/or aortic valve replacement. Five hundred patients were included in the study (252 randomized to the ROCsafeRX group and 248 to standard cardiopulmonary bypass). Both groups were well matched for demographic characteristics and type of surgery. No operative mortality and no device-related complications were encountered. Transfusion requirement (333 ± 603 vs. 587 ± 1010 ml; p=0.001), incidence of atrial fibrillation (16.3% vs. 24.2%; p=0.03) and the incidence of major adverse events (9.1% vs. 16.5%; p=0.02) were all in favour of the MPC group. These results confirm both the safety and efficacy of the ROCsafeRX MPC for a large variety of cardiac patients. Minimized perfusion circuits should, therefore, play a greater role in daily practice so that as many patients as possible can benefit from their advantages.

  7. Perspex machine: VII. The universal perspex machine

    Science.gov (United States)

    Anderson, James A. D. W.

    2006-01-01

    The perspex machine arose from the unification of projective geometry with the Turing machine. It uses a total arithmetic, called transreal arithmetic, that contains real arithmetic and allows division by zero. Transreal arithmetic is redefined here. The new arithmetic has both a positive and a negative infinity which lie at the extremes of the number line, and a number nullity that lies off the number line. We prove that nullity, 0/0, is a number. Hence a number may have one of four signs: negative, zero, positive, or nullity. It is, therefore, impossible to encode the sign of a number in one bit, as floating-point arithmetic attempts to do, resulting in the difficulty of having both positive and negative zeros and NaNs. Transrational arithmetic is consistent with Cantor arithmetic. In an extension to real arithmetic, the product of zero, an infinity, or nullity with its reciprocal is nullity, not unity. This avoids the usual contradictions that follow from allowing division by zero. Transreal arithmetic has a fixed algebraic structure and does not admit options as IEEE, floating-point arithmetic does. Most significantly, nullity has a simple semantics that is related to zero. Zero means "no value" and nullity means "no information." We argue that nullity is as useful to a manufactured computer as zero is to a human computer. The perspex machine is intended to offer one solution to the mind-body problem by showing how the computable aspects of mind and, perhaps, the whole of mind relates to the geometrical aspects of body and, perhaps, the whole of body. We review some of Turing's writings and show that he held the view that his machine has spatial properties. In particular, that it has the property of being a 7D lattice of compact spaces. Thus, we read Turing as believing that his machine relates computation to geometrical bodies. We simplify the perspex machine by substituting an augmented Euclidean geometry for projective geometry. This leads to a general
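
    The division rules described above can be illustrated with a small sketch. This is an illustrative approximation only, using IEEE special values merely as carriers for the transreal values of positive infinity, negative infinity and nullity; it is not a faithful implementation of transreal arithmetic.

```python
# Illustrative sketch of total (transreal-style) division as described above:
# division by zero is defined, and 0/0 yields nullity ("no information").
# float('inf') and float('nan') are used only as carriers for the transreal values.
def transreal_div(a, b):
    if b != 0:
        return a / b
    if a > 0:
        return float('inf')      # positive infinity at the top of the number line
    if a < 0:
        return float('-inf')     # negative infinity at the bottom
    return float('nan')          # nullity: 0/0, lying off the number line

for a in (1.0, -1.0, 0.0):
    print(a, "/ 0 ->", transreal_div(a, 0.0))
```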

  8. Combined chirp coded tissue harmonic and fundamental ultrasound imaging for intravascular ultrasound: 20-60 MHz phantom and ex vivo results.

    Science.gov (United States)

    Park, Jinhyoung; Li, Xiang; Zhou, Qifa; Shung, K Kirk

    2013-02-01

    The application of chirp coded excitation to pulse inversion tissue harmonic imaging can increase signal to noise ratio. On the other hand, the elevation of range side lobe level, caused by leakages of the fundamental signal, has been problematic in mechanical scanners which are still the most prevalent in high frequency intravascular ultrasound imaging. Fundamental chirp coded excitation imaging can achieve range side lobe levels lower than -60dB with a Hanning window, but it yields a higher side lobe level than pulse inversion chirp coded tissue harmonic imaging (PI-CTHI). Therefore, in this paper a combined pulse inversion chirp coded tissue harmonic and fundamental imaging mode (CPI-CTHI) is proposed to retain the advantages of both chirp coded harmonic and fundamental imaging modes by demonstrating 20-60MHz phantom and ex vivo results. A simulation study shows that the range side lobe level of CPI-CTHI is 16dB lower than PI-CTHI, assuming that the transducer translates incident positions by 50μm when the two beamlines of a pulse inversion pair are acquired. CPI-CTHI is implemented for a prototype intravascular ultrasound scanner capable of combined data acquisition in real-time. A wire phantom study shows that CPI-CTHI has a 12dB lower range side lobe level and a 7dB higher echo signal to noise ratio than PI-CTHI, while the lateral resolution and side lobe level are 50μm finer and -3dB less than fundamental chirp coded excitation imaging respectively. Ex vivo scanning of a rabbit trachea demonstrates that CPI-CTHI is capable of visualizing blood vessels as small as 200μm in diameter with 6dB better tissue contrast than either PI-CTHI or fundamental chirp coded excitation imaging. These results clearly indicate that CPI-CTHI may enhance tissue contrast with a lower range side lobe level than PI-CTHI. Copyright © 2012 Elsevier B.V. All rights reserved.
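
    The cancellation that pulse inversion relies on can be sketched in a few lines. The toy model below assumes a simple quadratic tissue nonlinearity and arbitrary frequencies; it is not the paper's CPI-CTHI processing chain, only an illustration that summing echoes from a pulse and its inverted copy suppresses the fundamental while keeping second-harmonic energy.

```python
# Toy illustration of pulse inversion: echoes from a pulse and its inverted copy are
# summed, cancelling the linear (fundamental) term of a quadratic nonlinearity and
# retaining (indeed doubling) the second-harmonic term.
import numpy as np

fs, f0 = 400e6, 30e6                          # assumed sampling rate and centre frequency
t = np.arange(0, 2e-6, 1 / fs)
pulse = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)

def echo(x, a1=1.0, a2=0.2):
    """Hypothetical quadratic tissue nonlinearity."""
    return a1 * x + a2 * x**2

f = np.fft.rfftfreq(t.size, 1 / fs)
single = np.abs(np.fft.rfft(echo(pulse)))
summed = np.abs(np.fft.rfft(echo(pulse) + echo(-pulse)))   # pulse-inversion pair

i1, i2 = np.argmin(np.abs(f - f0)), np.argmin(np.abs(f - 2 * f0))
print("fundamental  single/summed:", round(single[i1], 1), round(summed[i1], 3))
print("2nd harmonic single/summed:", round(single[i2], 1), round(summed[i2], 1))
```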

  9. Investigations on CERN PSB Beam Dynamics with Strong Direct Space Charge Effects Using the PTC-Orbit Code

    CERN Document Server

    Forte, V; Carli, C; Martini, M; Metral, E; Mikulec, B; Schmidt, F; Molodozhentsev, A

    2013-01-01

    The CERN PS Booster (PSB) has the largest space charge tune spread in the LHC injector chain. As part of the LHC Injectors Upgrade (LIU) project, the machine will be upgraded. Space charge and resonances are serious issues for the good quality of the beam at injection energy. Consequently simulations are needed to track the beam in the machine taking into account space charge effects: PTC-ORBIT has been used as the tracking code. This paper presents simulation results that are compared with measurements for machine performance evaluation and code-benchmarking purposes.

  10. Machine Learning

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Machine learning, which builds on ideas in computer science, statistics, and optimization, focuses on developing algorithms to identify patterns and regularities in data, and using these learned patterns to make predictions on new observations. Boosted by its industrial and commercial applications, the field of machine learning is quickly evolving and expanding. Recent advances have seen great success in the realms of computer vision, natural language processing, and broadly in data science. Many of these techniques have already been applied in particle physics, for instance for particle identification, detector monitoring, and the optimization of computer resources. Modern machine learning approaches, such as deep learning, are only just beginning to be applied to the analysis of High Energy Physics data to approach more and more complex problems. These classes will review the framework behind machine learning and discuss recent developments in the field.

  11. WRAITH - A Computer Code for Calculating Internal and External Doses Resulting From An Atmospheric Release of Radioactive Material

    Energy Technology Data Exchange (ETDEWEB)

    Scherpelz, R. I.; Borst, F. J.; Hoenes, G. R.

    1980-12-01

    WRAITH is a FORTRAN computer code which calculates the doses received by a standard man exposed to an accidental release of radioactive material. The movement of the released material through the atmosphere is calculated using a bivariate straight-line Gaussian distribution model, with Pasquill values for standard deviations. The quantity of material in the released cloud is modified during its transit time to account for radioactive decay and daughter production. External doses due to exposure to the cloud can be calculated using a semi-infinite cloud approximation. In situations where the semi-infinite cloud approximation is not a good one, the external dose can be calculated by a "finite plume" three-dimensional point-kernel numerical integration technique. Internal doses due to acute inhalation are calculated using the ICRP Task Group Lung Model and a four-segmented gastro-intestinal tract model. Translocation of the material between body compartments and retention in the body compartments are calculated using multiple exponential retention functions. Internal doses to each organ are calculated as sums of cross-organ doses, with each target organ irradiated by radioactive material in a number of source organs. All doses are calculated in rads, with separate values determined for high-LET and low-LET radiation.
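
    For orientation, the straight-line Gaussian plume model mentioned above has a well-known closed form. The sketch below computes a ground-level time-integrated concentration with ground reflection; the release quantity, wind speed, release height and Pasquill-type dispersion parameters are placeholder values, not WRAITH inputs.

```python
# Sketch of the ground-level concentration from a straight-line bivariate Gaussian
# plume with ground reflection, the kind of dispersion model described above.
import math

def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Time-integrated air concentration (Bq s/m^3) for a release Q (Bq) at height H (m),
    wind speed u (m/s), at crosswind offset y and height z (m)."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * math.pi * sigma_y * sigma_z * u) * lateral * vertical

# Receptor on the plume centreline at ground level; sigmas taken at some downwind distance.
print(plume_concentration(Q=1e12, u=3.0, y=0.0, z=0.0, H=30.0, sigma_y=80.0, sigma_z=40.0))
```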

  12. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  13. PC Cluster Machine Equipped with High-Speed Communication Software

    CERN Document Server

    Tanaka, M

    2004-01-01

    A high performance Beowulf (PC cluster) machine installed with the Linux operating system and MPI (Message Passing Interface) for interprocessor communications has been constructed using Gigabit Ethernet and the communication software GAMMA (Genoa Active Message Machine), instead of the standard TCP/IP protocol. Fast C/Fortran compilers have been exploited with the GAMMA communication libraries. This method has eliminated the large communication overhead of TCP/IP and resulted in a significant increase in the computational performance of real application programs including the first-principles molecular dynamics simulation code. (Keywords: non TCP/IP, active messages, small latency, fast C/Fortran compilers, materials science, first-principle molecular dynamics)

  14. Verification Calculation Results to Validate the Procedures and Codes for Pin-by-Pin Power Computation in VVER Type Reactors with MOX Fuel Loading

    Energy Technology Data Exchange (ETDEWEB)

    Chizhikova, Z.N.; Kalashnikov, A.G.; Kapranova, E.N.; Korobitsyn, V.E.; Manturov, G.N.; Tsiboulia, A.A.

    1998-12-01

    One of the important problems in ensuring the safety of a VVER type reactor partially loaded with MOX fuel is the choice of appropriate physical zoning to achieve the maximum flattening of the pin-by-pin power distribution. When uranium fuel is replaced by MOX fuel, provided that the reactivity of the fuel assemblies is kept constant, the fuel enrichment slightly decreases. However, the average neutron-spectrum fission microscopic cross-section for 239Pu is approximately twice that for 235U. Therefore power peaks occur in the peripheral fuel assemblies containing MOX fuel, which are aggravated by the inter-assembly water. Physical zoning has to be applied to flatten the power peaks in fuel assemblies containing MOX fuel. Moreover, physical zoning cannot be confined to one row of fuel elements as is the case with a uniform lattice of uranium fuel assemblies. Both the water gap and the jump in neutron absorption macroscopic cross-sections which occurs at the interface of fuel assemblies with different fuels make the problem of calculating the space-energy neutron flux distribution more complicated, since they increase non-diffusibility effects. To solve this problem it is necessary to update the current codes, to develop new codes and to verify all the codes, including the nuclear-physical constants libraries employed. In so doing it is important to develop and validate codes of different levels, from design codes to benchmark ones. This paper presents the results of the burnup calculation for a multiassembly structure, consisting of MOX fuel assemblies surrounded by uranium dioxide fuel assemblies. The structure concerned can be assumed to model a fuel assembly lattice symmetry element of the VVER-1000 type reactor in which 1/4 of all fuel assemblies contain MOX fuel.

  15. A design code to study vertical-axis wind turbine control strategies

    Science.gov (United States)

    Vachon, William A.

    1987-07-01

    A computer code called ASYM is described. The code permits a wind turbine designer to examine the role of low and high wind speed cut-in and cutout control strategies on the production of energy and the consumption of fatigue life by a wind turbine. The primary goal of the code development has been to create a design tool to optimize the energy production and the fatigue life of a wind machine through optimized high wind speed control schemes. The code is also very useful in evaluating start-up algorithms. It works primarily in the time domain and simulates high-frequency random wind of specific statistical characteristics while employing energy and damage density functions to calculate the results. A modified net present value calculation of the annual machine revenues and costs over the calculated life of the wind turbine is used to compare the merits of various control algorithms. Typical results are provided to demonstrate the use of the code.
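
    The cut-in/cut-out logic that such a design code examines can be illustrated with a short time-domain sketch. The synthetic wind series, generic power curve and hysteresis margins below are illustrative assumptions, not ASYM inputs, but they show how a restart margin changes the captured energy.

```python
# Minimal time-domain sketch (not the ASYM code) of how a high-wind cut-out with a
# restart hysteresis margin changes energy capture for a synthetic wind series.
import numpy as np

rng = np.random.default_rng(0)
dt = 10.0                                                    # seconds per sample
wind = np.clip(18 + 5 * rng.standard_normal(8640), 0, None)  # one day of 10 s samples

def power(v, rated_v=12.0, rated_p=500e3, cut_in=4.0, cut_out=25.0):
    """Generic power curve: cubic below rated speed, constant at rated, zero outside limits."""
    if v < cut_in or v > cut_out:
        return 0.0
    return rated_p * min((v / rated_v) ** 3, 1.0)

def energy_kwh(wind, restart_margin, cut_out=25.0):
    online, total = True, 0.0
    for v in wind:
        if online and v > cut_out:
            online = False                       # high-wind shutdown
        elif not online and v < cut_out - restart_margin:
            online = True                        # restart once wind drops below the margin
        if online:
            total += power(v, cut_out=cut_out) * dt
    return total / 3.6e6

print("energy, 1 m/s restart margin:", round(energy_kwh(wind, 1.0)), "kWh")
print("energy, 5 m/s restart margin:", round(energy_kwh(wind, 5.0)), "kWh")
```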

  16. Prediction of Machine Tool Condition Using Support Vector Machine

    Science.gov (United States)

    Wang, Peigong; Meng, Qingfeng; Zhao, Jian; Li, Junjie; Wang, Xiufeng

    2011-07-01

    Condition monitoring and prediction for CNC machine tools are investigated in this paper. Considering that condition data for CNC machine tools are often available only in small numbers of samples, a condition prediction method for CNC machine tools based on support vector machines (SVMs) is proposed, and one-step and multi-step condition prediction models are constructed. The support vector machine prediction models are used to predict the trends of the working condition of a certain type of CNC worm wheel and gear grinding machine by applying sequence data of the vibration signal, which is collected during machining. The relationship between different eigenvalues of the CNC vibration signal and machining quality is also discussed. The test results show that the trend of the vibration signal peak-to-peak value in the surface normal direction is most closely related to the trend of the surface roughness value. In trend prediction of the working condition, the support vector machine has higher prediction accuracy in both short-term ('one-step') and long-term (multi-step) prediction compared to an autoregressive (AR) model and an RBF neural network. Experimental results show that it is feasible to apply support vector machines to CNC machine tool condition prediction.
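
    A minimal sketch of the one-step prediction setup described above uses a sliding window of past samples as SVM features. The vibration peak-to-peak series here is synthetic and the SVR kernel and hyper-parameters are illustrative assumptions, since the paper's data and settings are not reproduced.

```python
# Sketch of one-step-ahead trend prediction with a support vector machine, in the
# spirit of the method above; the feature vector is a window of past samples.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
series = 0.02 * np.arange(300) + 0.5 * np.sin(np.arange(300) / 10) + 0.1 * rng.standard_normal(300)

window = 8                                   # past samples used as features
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

split = 250                                  # train on the first part, test on the tail
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:split], y[:split])
pred = model.predict(X[split:])

rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print("one-step RMSE on held-out tail:", round(rmse, 3))
```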

  17. Developing a methodology for the evaluation of results uncertainties in CFD codes; Desarrollo de una Metodologia para la Evaluacion de Incertidumbres en los Resultados de Codigos de CFD

    Energy Technology Data Exchange (ETDEWEB)

    Munoz-cobo, J. L.; Chiva, S.; Pena, C.; Vela, E.

    2014-07-01

    In this work the development of a methodology is studied for evaluating the uncertainty in the results of CFD codes, compatible with the ASME V&V 20 Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer, developed by the American Society of Mechanical Engineers (ASME). Similarly, the existing alternatives for obtaining the uncertainty in the results are studied to see which is the best choice from the point of view of implementation and computing time. We have developed two methods for calculating the uncertainty of the results of a CFD code. The first method is based on the use of Monte Carlo techniques for the propagation of uncertainty; in this first method we think it is preferable to use order statistics to determine the number of cases to execute with the code, because this way we can always guarantee the desired confidence level for the output quantities. The second type of method we have developed is based on non-intrusive polynomial chaos. (Author)
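
    The order-statistics argument mentioned above is often applied through Wilks' formula, which gives the minimum number of code runs needed for a one-sided tolerance limit. The sketch below is a generic illustration of that calculation, not the methodology developed in the work.

```python
# Sketch of the order-statistics (Wilks) argument for choosing the number of Monte
# Carlo code runs: the smallest N such that the largest of N sampled outputs bounds
# the desired fraction of the population with the desired confidence (one-sided, first order).
import math

def wilks_runs(coverage=0.95, confidence=0.95):
    # Need 1 - coverage**N >= confidence  ->  N >= ln(1 - confidence) / ln(coverage)
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

print(wilks_runs())            # 59 runs for a 95%/95% one-sided tolerance limit
print(wilks_runs(0.99, 0.95))  # more runs are needed for 99% coverage
```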

  18. FY05 LDRD Final Report: Investigation of AAA+ protein machines that participate in DNA replication, recombination, and in response to DNA damage. LDRD Project Tracking Code: 04-LW-049

    Energy Technology Data Exchange (ETDEWEB)

    Sawicka, D; de Carvalho-Kavanagh, M S; Barsky, D; Venclovas, C

    2006-12-04

    The AAA+ proteins are remarkable macromolecules that are able to self-assemble into nanoscale machines. These protein machines play critical roles in many cellular processes, including the processes that manage a cell's genetic material, but the mechanism at the molecular level has remained elusive. We applied computational molecular modeling, combined with advanced sequence analysis and available biochemical and genetic data, to structurally characterize eukaryotic AAA+ proteins and the protein machines they form. With these models we have examined intermolecular interactions in three-dimensions (3D), including both interactions between the components of the AAA+ complexes and the interactions of these protein machines with their partners. These computational studies have provided new insights into the molecular structure and the mechanism of action for AAA+ protein machines, thereby facilitating a deeper understanding of processes involved in DNA metabolism.

  19. Metalworking and machining fluids

    Science.gov (United States)

    Erdemir, Ali; Sykora, Frank; Dorbeck, Mark

    2010-10-12

    Improved boron-based metalworking and machining fluids. Boric acid and boron-based additives, when mixed with certain carrier fluids such as water, cellulose and/or cellulose derivatives, polyhydric alcohol, polyalkylene glycol, polyvinyl alcohol, starch or dextrin, in solid and/or solvated forms, result in improved metalworking and machining of metallic workpieces. Fluids manufactured with boric acid or boron-based additives effectively reduce friction and prevent galling and severe wear problems on cutting and forming tools.

  20. Laser Marked Codes For Paperless Tracking Applications

    Science.gov (United States)

    Crater, David

    1987-01-01

    The application of laser markers for marking machine readable codes is described. Use of such codes for automatic tracking and considerations for marker performance and features are discussed. Available laser marker types are reviewed. Compatibility of laser/material combinations and material/code/reader systems are reviewed.

  1. Defending Malicious Script Attacks Using Machine Learning Classifiers

    Directory of Open Access Journals (Sweden)

    Nayeem Khan

    2017-01-01

    Full Text Available The web application has become a primary target for cyber criminals who inject malware, especially JavaScript, to perform malicious activities such as impersonation. It therefore becomes imperative to detect such malicious code in real time, before any malicious activity is performed. This study proposes an efficient method of detecting previously unknown malicious JavaScript using an interceptor at the client side, by classifying the key features of the malicious code. The feature subset was obtained by using a wrapper method for dimensionality reduction. Supervised machine learning classifiers were used on the dataset to achieve high accuracy. Experimental results show that our method can efficiently classify malicious code from benign code with promising results.
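
    The classification step of such a pipeline can be sketched with a generic supervised learner. The snippet below uses synthetic stand-in features and a random forest classifier purely for illustration; the study's actual feature set, wrapper-based feature selection and chosen classifiers are not reproduced here.

```python
# Sketch of the supervised-classification step described above, on synthetic data.
# A real system would extract features (e.g. eval/unescape counts, string entropy,
# obfuscation indicators) from intercepted scripts; here X is random stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.standard_normal((1000, 12))                 # 12 hypothetical script features
y = (X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.standard_normal(1000) > 0).astype(int)  # 1 = malicious

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```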

  2. Machine Translation Effect on Communication

    DEFF Research Database (Denmark)

    Jensen, Mika Yasuoka; Bjørn, Pernille

    2011-01-01

    Intercultural collaboration facilitated by machine translation has gradually spread in various settings, yet little is known about the practice of machine-translation mediated communication. This paper investigates how machine translation affects intercultural communication in practice. Based on communication in which a multilingual communication system is applied, we identify four communication types and their influences on stakeholders' communication processes, especially focusing on the establishment and maintenance of common ground. Different from our expectation that the quality of machine translation largely determines the communication process, our data indicate that communication relies more on a dynamic process in which participants establish common ground than on reproducibility and grammatical accuracy.

  3. Convolutional coding techniques for data protection

    Science.gov (United States)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.

  4. Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    Chikkagoudar, Satish; Chatterjee, Samrat; Thomas, Dennis G.; Carroll, Thomas E.; Muller, George

    2017-04-21

    The absence of a robust and unified theory of cyber dynamics presents challenges and opportunities for using machine learning based data-driven approaches to further the understanding of the behavior of such complex systems. Analysts can also use machine learning approaches to gain operational insights. In order to be operationally beneficial, cybersecurity machine learning based models need to have the ability to: (1) represent a real-world system, (2) infer system properties, and (3) learn and adapt based on expert knowledge and observations. Probabilistic models and Probabilistic graphical models provide these necessary properties and are further explored in this chapter. Bayesian Networks and Hidden Markov Models are introduced as an example of a widely used data driven classification/modeling strategy.

  5. From Turing machines to computer viruses.

    Science.gov (United States)

    Marion, Jean-Yves

    2012-07-28

    Self-replication is one of the fundamental aspects of computing where a program or a system may duplicate, evolve and mutate. Our point of view is that Kleene's (second) recursion theorem is essential to understand self-replication mechanisms. An interesting example of self-replication codes is given by computer viruses. This was initially explained in the seminal works of Cohen and of Adleman in the 1980s. In fact, the different variants of recursion theorems provide and explain constructions of self-replicating codes and, as a result, of various classes of malware. None of the results are new from the point of view of computability theory. We now propose a self-modifying register machine as a model of computation in which we can effectively deal with the self-reproduction and in which new offsprings can be activated as independent organisms.
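
    A benign illustration of the self-replication that the recursion theorem guarantees is a quine, a program that prints its own source. A minimal Python example:

```python
# A classic two-line quine: executing the two lines below prints exactly those two
# lines, the benign core of the self-reproduction that the recursion theorem guarantees.
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```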

  6. PCP-ML: protein characterization package for machine learning.

    Science.gov (United States)

    Eickholt, Jesse; Wang, Zheng

    2014-11-18

    Machine Learning (ML) has a number of demonstrated applications in protein prediction tasks such as protein structure prediction. To speed further development of machine learning based tools and their release to the community, we have developed a package which characterizes several aspects of a protein commonly used for protein prediction tasks with machine learning. A number of software libraries and modules exist for handling protein related data. The package we present in this work, PCP-ML, is unique in its small footprint and emphasis on machine learning. Its primary focus is on characterizing various aspects of a protein through sets of numerical data. The generated data can then be used with machine learning tools and/or techniques. PCP-ML is very flexible in how the generated data is formatted and as a result is compatible with a variety of existing machine learning packages. Given its small size, it can be directly packaged and distributed with community developed tools for protein prediction tasks. Source code and example programs are available under a BSD license at http://mlid.cps.cmich.edu/eickh1jl/tools/PCPML/. The package is implemented in C++ and accessible as a Python module.

  7. Derivation of correction factor to be applied for calculated results of PWR fuel isotopic composition by ORIGEN2 code

    Energy Technology Data Exchange (ETDEWEB)

    Suyama, Kenya; Nomura, Yasushi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Murazaki, Minoru [Tokyo Nuclear Service Inc., Tokyo (Japan); Mochizuki, Hiroki [The Japan Research Institute Ltd., Tokyo (Japan)

    2001-11-01

    For providing conservative PWR spent fuel compositions from the viewpoint of nuclear criticality safety, correction factors applicable to the results of burnup calculations by ORIGEN2 were evaluated. Their conservativeness was verified by criticality calculations using MVP. To calculate these correction factors, analyses of spent fuel isotopic composition data were performed with ORIGEN2. The maximum or minimum value of the ratio of the calculated result to the experimental data was chosen as the correction factor. These factors are given for each set of fuel assembly and ORIGEN2 library. They could be considered as a re-definition of the recommended isotopic compositions given in the Nuclear Criticality Safety Handbook. (author)
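
    The selection rule described above, taking the extreme calculated-to-experimental (C/E) ratio over the available samples, can be sketched as follows. The nuclides and ratios are invented placeholders, not the evaluated ORIGEN2 data; which extreme is conservative depends on the nuclide's role in the criticality calculation.

```python
# Sketch of the correction-factor selection rule: for each nuclide, collect the
# calculated-to-experimental (C/E) ratios over the available samples and keep the
# extreme value as a conservative multiplier. Numbers below are illustrative only.
ce_ratios = {
    "Pu-239": [0.97, 1.02, 0.99, 1.04],
    "Sm-149": [1.08, 0.95, 1.01],
}

def correction_factor(ratios, conservative="max"):
    """Return the maximum or minimum C/E ratio, whichever is conservative for the nuclide."""
    return max(ratios) if conservative == "max" else min(ratios)

for nuclide, ratios in ce_ratios.items():
    print(nuclide, "->", correction_factor(ratios))
```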

  8. Machine testing

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo

    This document is used in connection with a laboratory exercise of 3 hours' duration as a part of the course GEOMETRICAL METROLOGY AND MACHINE TESTING. The exercise includes a series of tests carried out by the student on a conventional and a numerically controlled lathe, respectively. This document

  9. Representational Machines

    DEFF Research Database (Denmark)

    Petersson, Dag; Dahlgren, Anna; Vestberg, Nina Lager

    to the enterprises of the medium. This is the subject of Representational Machines: How photography enlists the workings of institutional technologies in search of establishing new iconic and social spaces. Together, the contributions to this edited volume span historical epochs, social environments, technological...

  10. Design considerations and initial physical layer performance results for a space time coded OFDM 4G cellular network

    OpenAIRE

    Doufexi, A; Armour, SMD; Nix, AR; Beach, MA

    2002-01-01

    The exponential growth of cellular radio, WLANs and the Internet sets the context for a discussion on the role and objectives of 4G. In this paper OFDM is proposed as a leading candidate for a 4G cellular communications standard. The key design considerations and link parameters for a 4G OFDM system are identified and initial physical layer performance results are presented for a number of transmission modes and channel scenarios. Additionally, space-time techniques are considered as a means ...

  11. Comparing the Floating Point Systems, Inc. AP-190L to representative scientific computers: some benchmark results

    Energy Technology Data Exchange (ETDEWEB)

    Brengle, T.A.; Maron, N.

    1980-03-27

    Results are presented of comparative timing tests made by running a typical FORTRAN physics simulation code on the following machines: DEC PDP-10 with KI processor; DEC PDP-10, KI processor, and FPS AP-190L; CDC 7600; and CRAY-1. Factors such as DMA overhead, code size for the AP-190L, and the relative utilization of floating point functional units for the different machines are discussed. 1 table.

  12. Offshore code comparison collaboration continuation (OC4), phase I - Results of coupled simulations of an offshore wind turbine with jacket support structure

    DEFF Research Database (Denmark)

    Popko, Wojciech; Vorpahl, Fabian; Zuga, Adam

    2012-01-01

    In this paper, the exemplary results of the IEA Wind Task 30 "Offshore Code Comparison Collaboration Continuation" (OC4) Project - Phase I, focused on the coupled simulation of an offshore wind turbine (OWT) with a jacket support structure, are presented. The focus of this task has been the verif...... such as the buoyancy calculation and methods of accounting for additional masses (such as hydrodynamic added mass). Finally, recommendations concerning the modeling of the jacket are given. Copyright © 2012 by the International Society of Offshore and Polar Engineers (ISOPE)....

  13. Verification of the both hydrogeological and hydrogeochemical code results by an on-site test in granitic rocks

    Directory of Open Access Journals (Sweden)

    Michal Polák

    2007-01-01

    Full Text Available The project entitled “Methods and tools for the evaluation of the effect of engineered barriers on distant interactions in the environment of a deep repository facility” deals with validating the effect of applied engineered barriers on hydrodynamic and migration parameters in the water-bearing granite environment of a radioactive waste deep repository facility. Part of the project comprises a detailed mapping of the fracture network by means of geophysical and drilling surveys on the test site (an active granite quarry), the construction of model objects (about 100 samples in the shape of cylinders, ridges and blocks), and the mineralogical, petrological and geochemical description of the granite. All the model objects were subjected to migration and hydrodynamic tests with the use of fluorescein and NaCl as tracers. The tests were performed on samples with simple fractures, injected fractures and with an undisturbed integrity (verified by ultrasonic testing). The obtained hydrodynamic and migration parameters of the model objects were processed with the modelling software NAPSAC and FEFLOW. During the following two years, these results and parameters will be verified on the test site by means of a long-term field test, including the tuning of the software functionality.

  14. Adding machine and calculating machine

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    In 1642 the French mathematician Blaise Pascal (1623-1662) invented a machine that could add and subtract. It had wheels that each had 1 to 10 marked off along the circumference. When the wheel at the right, representing units, made one complete circle, it engaged the wheel to its left, representing tens, and moved it forward one notch.

  15. Analysis of the quench propagation along Nb3Sn Rutherford cables with the THELMA code. Part II: Model predictions and comparison with experimental results

    Science.gov (United States)

    Manfreda, G.; Bellina, F.; Bajas, H.; Perez, J. C.

    2016-12-01

    To improve the technology of the new generation of accelerator magnets, prototypes are being manufactured and tested in several laboratories. In parallel, many numerical analyses are being carried out to predict the magnets behaviour and interpret the experimental results. This paper focuses on the quench propagation velocity, which is a crucial parameter as regards the energy dissipation along the magnet conductor. The THELMA code, originally developed for cable-in-conduit conductors for fusion magnets, has been used to study such quench propagation. To this purpose, new code modules have been added to describe the Rutherford cable geometry, the material non-linear thermal properties and to describe the thermal conduction problem in transient regime. THELMA can describe the Rutherford cable at the strand level, modelling both the electrical and thermal contact resistances between strands and enabling the analysis of the effects of local hot spots and quench heaters. This paper describes the model application to a sample of Short Model Coil tested at CERN: a comparison is made between the experimental results and the model prediction, showing a good agreement. A comparison is also made with the prediction of the most common analytical models, which give large inaccuracies when dealing with low n-index cables like Nb3Sn cables.

  16. Advanced Machine learning Algorithm Application for Rotating Machine Health Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Kanemoto, Shigeru; Watanabe, Masaya [The University of Aizu, Aizuwakamatsu (Japan); Yusa, Noritaka [Tohoku University, Sendai (Japan)

    2014-08-15

    The present paper tries to evaluate the applicability of conventional sound analysis techniques and modern machine learning algorithms to rotating machine health monitoring. These techniques include support vector machines, deep learning neural networks, etc. Inner ring defect and misalignment anomaly sound data measured by a rotating machine mockup test facility are used to verify the above algorithms. Although we could not find a remarkable difference in anomaly discrimination performance, some methods give very interesting eigen-patterns corresponding to normal and abnormal states. These results will be useful for future, more sensitive and robust anomaly monitoring technology.

  17. Genesis machines

    CERN Document Server

    Amos, Martyn

    2014-01-01

    Silicon chips are out. Today's scientists are using real, wet, squishy, living biology to build the next generation of computers. Cells, gels and DNA strands are the 'wetware' of the twenty-first century. Much smaller and more intelligent, these organic computers open up revolutionary possibilities. Tracing the history of computing and revealing a brave new world to come, Genesis Machines describes how this new technology will change the way we think not just about computers - but about life itself.

  18. Impact of the floating-point precision and interpolation scheme on the results of DNS of turbulence by pseudo-spectral codes

    CERN Document Server

    Homann, Holger; Grauer, Rainer

    2007-01-01

    In this paper we investigate the impact of the floating-point precision and interpolation scheme on the results of direct numerical simulations (DNS) of turbulence by pseudo-spectral codes. Three different types of floating-point precision configurations show no differences in the statistical results. This implies that single precision computations allow for increased Reynolds numbers due to the reduced amount of memory needed. The interpolation scheme for obtaining velocity values at particle positions has a noticeable impact on the Lagrangian acceleration statistics. A tri-cubic scheme results in a slightly broader acceleration probability density function than a tri-linear scheme. Furthermore the scaling behavior obtained by the cubic interpolation scheme exhibits a tendency towards a slightly increased degree of intermittency compared to the linear one.
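
    The cheaper of the two interpolation schemes compared above, tri-linear interpolation of a gridded field at a particle position, can be written in a few lines. The sketch below is generic (a tri-cubic scheme would use a 4x4x4 stencil instead of the 2x2x2 corners); the periodic grid and position are arbitrary examples.

```python
# Sketch of tri-linear interpolation of a periodic gridded field at a particle position.
import numpy as np

def trilinear(field, pos, dx):
    """field: periodic 3-D array (cube); pos: particle position; dx: grid spacing."""
    n = field.shape[0]
    f = np.asarray(pos) / dx
    i0 = np.floor(f).astype(int)
    t = f - i0                                     # fractional offsets in [0, 1)
    val = 0.0
    for cx in (0, 1):
        for cy in (0, 1):
            for cz in (0, 1):
                w = ((1 - t[0]) if cx == 0 else t[0]) \
                    * ((1 - t[1]) if cy == 0 else t[1]) \
                    * ((1 - t[2]) if cz == 0 else t[2])
                val += w * field[(i0[0] + cx) % n, (i0[1] + cy) % n, (i0[2] + cz) % n]
    return val

grid = np.random.default_rng(0).standard_normal((32, 32, 32))
print(trilinear(grid, pos=(1.37, 0.2, 2.9), dx=2 * np.pi / 32))
```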

  19. The Implications of Virtual Machine Introspection for Digital Forensics on Nonquiescent Virtual Machines

    Science.gov (United States)

    2011-06-01

    Keywords: Virtual Machine Introspection (VMI), Virtual Machine, Forensics, Xen. Only report front matter and list-of-tables fragments are available for this record; the tables cover the time taken for a VMI memory dump utility to run under different loads and the time to compile a kernel with and without VMI.

  20. Learning thermodynamics with Boltzmann machines

    Science.gov (United States)

    Torlai, Giacomo; Melko, Roger G.

    2016-10-01

    A Boltzmann machine is a stochastic neural network that has been extensively used in the layers of deep architectures for modern machine learning applications. In this paper, we develop a Boltzmann machine that is capable of modeling thermodynamic observables for physical systems in thermal equilibrium. Through unsupervised learning, we train the Boltzmann machine on data sets constructed with spin configurations importance sampled from the partition function of an Ising Hamiltonian at different temperatures using Monte Carlo (MC) methods. The trained Boltzmann machine is then used to generate spin states, for which we compare thermodynamic observables to those computed by direct MC sampling. We demonstrate that the Boltzmann machine can faithfully reproduce the observables of the physical system. Further, we observe that the number of neurons required to obtain accurate results increases as the system is brought close to criticality.
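
    The sampling step at the heart of such a machine can be sketched as block Gibbs sampling in a binary restricted Boltzmann machine. The weights below are random rather than trained on Ising configurations, so this only illustrates the sampling mechanics, not the trained generator used for measuring observables.

```python
# Minimal sketch of block Gibbs sampling in a binary restricted Boltzmann machine.
# Weights are random placeholders; a trained machine would generate spin states from
# which thermodynamic observables could be estimated.
import numpy as np

rng = np.random.default_rng(0)
n_v, n_h = 64, 32                                # visible spins and hidden units
W = 0.1 * rng.standard_normal((n_v, n_h))
b_v, b_h = np.zeros(n_v), np.zeros(n_h)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

v = rng.integers(0, 2, n_v).astype(float)        # random initial visible state
for _ in range(1000):                            # alternate hidden/visible updates
    h = (rng.random(n_h) < sigmoid(v @ W + b_h)).astype(float)
    v = (rng.random(n_v) < sigmoid(h @ W.T + b_v)).astype(float)

print("mean visible activation after sampling:", v.mean())
```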

  1. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows us to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover we conjecture that the canonical partition satisfies such a hypothesis. Finally we consider also some relationships between coding partitions and varieties of codes.

  2. Using GTO-Velo to Facilitate Communication and Sharing of Simulation Results in Support of the Geothermal Technologies Office Code Comparison Study

    Energy Technology Data Exchange (ETDEWEB)

    White, Signe K.; Purohit, Sumit; Boyd, Lauren W.

    2015-01-26

    The Geothermal Technologies Office Code Comparison Study (GTO-CCS) aims to support the DOE Geothermal Technologies Office in organizing and executing a model comparison activity. This project is directed at testing, diagnosing differences, and demonstrating modeling capabilities of a worldwide collection of numerical simulators for evaluating geothermal technologies. Teams of researchers are collaborating in this code comparison effort, and it is important to be able to share results in a forum where technical discussions can easily take place without requiring teams to travel to a common location. Pacific Northwest National Laboratory has developed an open-source, flexible framework called Velo that provides a knowledge management infrastructure and tools to support modeling and simulation for a variety of types of projects in a number of scientific domains. GTO-Velo is a customized version of the Velo Framework that is being used as the collaborative tool in support of the GTO-CCS project. Velo is designed around a novel integration of a collaborative Web-based environment and a scalable enterprise Content Management System (CMS). The underlying framework provides a flexible and unstructured data storage system that allows for easy upload of files that can be in any format. Data files are organized in hierarchical folders and each folder and each file has a corresponding wiki page for metadata. The user interacts with Velo through a web browser based wiki technology, providing the benefit of familiarity and ease of use. High-level folders have been defined in GTO-Velo for the benchmark problem descriptions, descriptions of simulator/code capabilities, a project notebook, and folders for participating teams. Each team has a subfolder with write access limited only to the team members, where they can upload their simulation results. The GTO-CCS participants are charged with defining the benchmark problems for the study, and as each GTO-CCS Benchmark problem is

  3. Obtaining informed consent from study participants and results of field studies. Methodological problems caused by the literal treatment of codes of ethics

    Directory of Open Access Journals (Sweden)

    Grzyb Tomasz

    2017-06-01

    Full Text Available The article discusses the issue of the necessity of obtaining informed consent from an individual who is to be a participant in an experiment. Codes of ethics concerning the behaviour of a psychologist fundamentally do not permit conducting experiments without informing their participants in advance that they will be conducted. Meanwhile, the act of obtaining prior consent (and thus of informing the study participant that they will be taking part in an experiment) can have a significant impact on results. The article describes an experiment in the field of social influence psychology during which one group was asked for their informed consent to participate in a study, while the second was simply presented with the main request (to sign a letter to the mayor about reducing the number of parking spaces for the disabled). The results demonstrate the strong influence of awareness that a study is being conducted on the decisions taken in the course of the experiment.

  4. Performance investigation of capillary tubes for machine tool coolers retrofitted with HFC-407C refrigerant

    Science.gov (United States)

    Wang, Fujen; Chang, Tongbou; Chiang, Weiming; Lee, Haochung

    2012-09-01

    Machine tool coolers are the best managers of coolant temperature for avoiding deviation of the spindle centerline in machine tools. However, machine tool coolers face a compressed schedule for phasing out HCFC (hydro-chloro-fluoro-carbon) refrigerants, and little attention has been paid to comparative studies on sizing capillary tubes for retrofitted HFC (hydro-fluoro-carbon) refrigerants. In this paper, the adiabatic flow in a capillary tube is analyzed and modeled for the retrofitting of HFC-407C refrigerant in a machine tool cooler system. A computer code that determines the lengths of the sub-cooled and two-phase flow regions of the capillary tube is developed. A comparative study of HCFC-22 and HFC-407C in a capillary tube is derived and conducted to simplify the traditional trial-and-error method of predicting the length of capillary tubes. In addition, an experimental investigation is carried out through field tests to verify the simulation model and the cooling performance of the machine tool cooler system. The results from the experiments reveal that the numerical model provides an effective approach to determine the performance data of a capillary tube specifically for retrofitting an HFC-407C machine tool cooler. The developed machine tool cooler system is not only directly compatible with the new HFC-407C refrigerant, but can also provide cost-effective temperature control specifically for industrial machines.

  5. Simulating Turing machines on Maurer machines

    NARCIS (Netherlands)

    Bergstra, J.A.; Middelburg, C.A.

    2008-01-01

    In a previous paper, we used Maurer machines to model and analyse micro-architectures. In the current paper, we investigate the connections between Turing machines and Maurer machines with the purpose to gain an insight into computability issues relating to Maurer machines. We introduce ways to

  6. Environmentally Friendly Machining

    CERN Document Server

    Dixit, U S; Davim, J Paulo

    2012-01-01

    Environment-Friendly Machining provides an in-depth overview of environmentally-friendly machining processes, covering numerous different types of machining in order to identify which practice is the most environmentally sustainable. The book discusses three systems at length: machining with minimal cutting fluid, air-cooled machining and dry machining. Also covered is a way to conserve energy during machining processes, along with useful data and detailed descriptions for developing and utilizing the most efficient modern machining tools. Researchers and engineers looking for sustainable machining solutions will find Environment-Friendly Machining to be a useful volume.

  7. Dynamics of cyclic machines

    CERN Document Server

    Vulfson, Iosif

    2015-01-01

    This book focuses on modern methods of oscillation analysis in machines, including cyclic action mechanisms (linkages, cams, steppers, etc.). It presents schematization techniques and mathematical descriptions of oscillating systems, taking into account the variability of the parameters and nonlinearities, engineering evaluations of dynamic errors, and oscillation suppression methods. The majority of the book is devoted to the development of new methods of dynamic analysis and synthesis for cyclic machines that form regular oscillatory systems with multiple duplicate modules.  There are also sections examining aspects of general engineering interest (nonlinear dissipative forces, systems with non-stationary constraints, impacts and pseudo-impacts in clearances, etc.)  The examples in the book are based on the widely used results of theoretical and experimental studies as well as engineering calculations carried out in relation to machines used in the textile, light, polygraphic and other industries. Particu...

  8. Machine Transliteration

    CERN Document Server

    Knight, K; Knight, Kevin; Graehl, Jonathan

    1997-01-01

    It is challenging to translate names and technical terms across languages with different alphabets and sound inventories. These items are commonly transliterated, i.e., replaced with approximate phonetic equivalents. For example, "computer" in English comes out as "konpyuutaa" in Japanese. Translating such items from Japanese back to English is even more challenging, and of practical interest, as transliterated items make up the bulk of text phrases not found in bilingual dictionaries. We describe and evaluate a method for performing backwards transliterations by machine. This method uses a generative model, incorporating several distinct stages in the transliteration process.

  9. Man - Machine Communication

    CERN Document Server

    Petersen, Peter; Nielsen, Henning

    1984-01-01

    This report describes a Man-to-Machine Communication module which, together with a STAC, can take care of all operator inputs from the touch-screen, tracker balls and mechanical buttons. The MMC module can also contain a G64 card, which could be a GPIB driver, but many other G64 cards could be used. The software services the input devices and makes the results accessible from the CAMAC bus. NODAL functions for Man-Machine Communication are implemented in the STAC and in the ICC.

  10. Coding for Electronic Mail

    Science.gov (United States)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams are sent virtually instantaneously, between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.

  11. Are ICD-10 codes appropriate for performance assessment in asthma and COPD in general practice? Results of a cross sectional observational study

    Directory of Open Access Journals (Sweden)

    Wensing Michel

    2005-02-01

    Full Text Available Abstract Background The increasing prevalence and impact of obstructive lung diseases and new insights, reflected in clinical guidelines, have led to concerns about the diagnosis and therapy of asthma and COPD in primary care. In Germany, diagnoses written in medical records are used for reimbursement, which may influence physicians' documentation behaviour. For that reason it is unclear to what extent ICD-10 codes reflect the real problems of the patients in general practice. The aim of this study was to assess the appropriateness of the recorded diagnoses and to determine what diagnostic information is used to guide medical treatment. Methods All patients with lower airway symptoms (n = 857) who had attended six general practices between January and June 2003 were included in this cross-sectional observational study. Patients were selected from the computerised medical record systems, focusing on ICD-10 codes concerning lower airway diseases (J20-J22, J40-J47, J98 and R05). The performed diagnostic procedures and actual medication for each identified patient were extracted manually. Then we examined the associations between recorded diagnoses, diagnostic procedures and prescribed treatment for asthma and COPD in general practice. Results Spirometry was used in 30% of the patients with a recorded diagnosis of asthma and in 58% of the patients with a recorded diagnosis of COPD. Logistic regression analysis showed an improved use of spirometry when inhaled corticosteroids were prescribed for asthma (OR = 5.2; CI 2.9–9.2) or COPD (OR = 4.7; CI 2.0–10.6). Spirometry was also used more often when sympathomimetics were prescribed (asthma: OR = 2.3; CI 1.2–4.2; COPD: OR = 4.1; CI 1.8–9.4). Conclusions This study revealed that spirometry was used more often when corticosteroids or sympathomimetics were prescribed. The findings suggest that treatment was based on diagnostic test results rather than on recorded diagnoses. The documented ICD-10 codes

  12. MACHINING OPTIMISATION AND OPERATION ALLOCATION FOR NC LATHE MACHINES IN A JOB SHOP MANUFACTURING SYSTEM

    Directory of Open Access Journals (Sweden)

    MUSSA I. MGWATU

    2013-08-01

    Full Text Available Numerical control (NC machines in a job shop may not be cost and time effective if the assignment of cutting operations and optimisation of machining parameters are overlooked. In order to justify better utilisation and higher productivity of invested NC machine tools, it is necessary to determine the optimum machining parameters and realize effective assignment of cutting operations on machines. This paper presents two mathematical models for optimising machining parameters and effectively allocating turning operations on NC lathe machines in a job shop manufacturing system. The models are developed as non-linear programming problems and solved using a commercial LINGO software package. The results show that the decisions of machining optimisation and operation allocation on NC lathe machines can be simultaneously made while minimising both production cost and cycle time. In addition, the results indicate that production cost and cycle time can be minimised while significantly reducing or totally eliminating idle times among machines.

  13. Development of Fractal Pattern Making Application using L-System for Enhanced Machine Controller

    Directory of Open Access Journals (Sweden)

    Gunawan Alexander A S

    2014-03-01

    Full Text Available One big issue facing industry today is an automated machine's lack of flexibility for customization, because it is designed by the manufacturer to fixed standards. In this research, customized application software for CNC (Computer Numerically Controlled) machines is developed on an open source platform. The application enables us to create designs by means of fractal patterns using an L-System, implemented through a turtle geometry interpretation and the Python programming language. The output of the application is the G-Code of the fractal pattern formed by the L-System method. In an experiment on the CNC machine, the G-Code of a fractal pattern involving a branching structure ran successfully.
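
    The abstract above outlines the pipeline (L-System rewriting, turtle interpretation, G-Code output) without giving code. The following is a minimal, hypothetical sketch of that pipeline in Python; the axiom, rewriting rule, step length, feed rate and G-Code dialect are illustrative assumptions, not the authors' implementation.

      # Minimal sketch (not the authors' code): expand an L-system and emit G-code
      # for straight moves, using a simple turtle interpretation.
      import math

      def expand(axiom, rules, iterations):
          s = axiom
          for _ in range(iterations):
              s = "".join(rules.get(ch, ch) for ch in s)
          return s

      def to_gcode(commands, step=1.0, angle_deg=90.0, feed=200):
          x, y, heading = 0.0, 0.0, 0.0
          lines = ["G21", "G90", f"G0 X{x:.3f} Y{y:.3f}"]   # mm units, absolute coordinates
          for ch in commands:
              if ch == "F":                                  # draw forward one step
                  x += step * math.cos(math.radians(heading))
                  y += step * math.sin(math.radians(heading))
                  lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed}")
              elif ch == "+":
                  heading += angle_deg
              elif ch == "-":
                  heading -= angle_deg
          return "\n".join(lines)

      # Koch-curve-like rewriting rule: F -> F+F-F-F+F
      pattern = expand("F", {"F": "F+F-F-F+F"}, iterations=2)
      print(to_gcode(pattern))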

  14. Code Development of Three-Dimensional General Relativistic Hydrodynamics with AMR (Adaptive-Mesh Refinement) and Results From Special and General Relativistic Hydrodynamics

    CERN Document Server

    Donmez, O

    2004-01-01

    In this paper, the general procedure for solving the General Relativistic Hydrodynamical (GRH) equations with Adaptive-Mesh Refinement (AMR) is presented. To achieve this, the GRH equations are written in conservation form to exploit their hyperbolic character. The numerical solutions of the general relativistic hydrodynamic equations are obtained by High Resolution Shock Capturing (HRSC) schemes, specifically designed to solve non-linear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. The Marquina fluxes with MUSCL left and right states are used to solve the GRH equations. First, different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations are carried out to verify the second-order convergence of the code in 1D, 2D and 3D. Results from uniform and AMR grids are compared. It is found that the adaptive grid does a better job as the resolution is increased. Second, the general relativistic hydrodynamical equa...

  15. Cleaning of Free Machining Brass

    Energy Technology Data Exchange (ETDEWEB)

    Shen, T

    2005-12-29

    We have investigated four brightening treatments proposed by two cleaning vendors for cleaning free machining brass. The experimental results showed that none of the proposed brightening treatments passed the swipe test. Thus, we maintain the recommendation of not using the brightening process in the cleaning of free machining brass for NIF application.

  16. Plug into 'the modernizing machine'!

    DEFF Research Database (Denmark)

    Krejsler, John B.

    2013-01-01

    ‘The modernizing machine’ codes individual bodies, things and symbols with images from New Public Management, neoliberal and Knowledge Economy discourses. Drawing on Deleuze & Guattari’s concept of machines, this article explores how ‘the modernizing machine’ produces neo-liberal modernization...... of the public sector. Taking its point of departure in Danish university reform, the article explores how the university is transformed by this desiring-producing machine. ‘The modernizing machine’ wrestles with the so-called ‘democratic-Humboldtian machine’. The University Act of 2003 and the host of reforms...... bodies and minds simultaneously produce academic subjectivities by plugging into these transformative machinic forces and are produced as they are traversed by them. What is experienced as stressful closures vis-à-vis new opportunities depends to a great extent upon how these producing...

  17. Object-Oriented Support for Adaptive Methods on Parallel Machines

    Directory of Open Access Journals (Sweden)

    Sandeep Bhatt

    1993-01-01

    Full Text Available This article reports on experiments from our ongoing project whose goal is to develop a C++ library which supports adaptive and irregular data structures on distributed memory supercomputers. We demonstrate the use of our abstractions in implementing "tree codes" for large-scale N-body simulations. These algorithms require dynamically evolving treelike data structures, as well as load-balancing, both of which are widely believed to make the application difficult and cumbersome to program for distributed-memory machines. The ease of writing the application code on top of our C++ library abstractions (which themselves are application independent), and the low overhead of the resulting C++ code (over hand-crafted C code), supports our belief that object-oriented approaches are eminently suited to programming distributed-memory machines in a manner that (to the applications programmer) is architecture-independent. Our contribution in parallel programming methodology is to identify and encapsulate general classes of communication and load-balancing strategies useful across applications and MIMD architectures. This article reports experimental results from simulations of half a million particles using multiple methods.
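
    The "dynamically evolving treelike data structures" mentioned above are the hierarchical space partitions used by tree codes. As a purely illustrative sketch (in Python rather than the article's C++ library), a Barnes-Hut-style quadtree that grows as particles are inserted might look like this:

      # Illustrative only: a quadtree that subdivides on demand as particles are inserted.
      class Node:
          def __init__(self, cx, cy, half):
              self.cx, self.cy, self.half = cx, cy, half   # square cell: centre and half-width
              self.body = None                              # a single particle, if leaf
              self.children = None                          # four sub-cells, if internal

          def insert(self, x, y):
              if self.children is None and self.body is None:
                  self.body = (x, y)                        # empty leaf: store the particle
                  return
              if self.children is None:                     # occupied leaf: subdivide
                  h = self.half / 2
                  self.children = [Node(self.cx + dx * h, self.cy + dy * h, h)
                                   for dx in (-1, 1) for dy in (-1, 1)]
                  old, self.body = self.body, None
                  self._child_for(*old).insert(*old)        # push the old particle down
              self._child_for(x, y).insert(x, y)

          def _child_for(self, x, y):
              return self.children[2 * (x >= self.cx) + (y >= self.cy)]

      root = Node(0.0, 0.0, 1.0)
      for p in [(0.1, 0.2), (-0.4, 0.3), (0.15, 0.22)]:
          root.insert(*p)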

  18. Machine Protection

    CERN Document Server

    Schmidt, R

    2014-01-01

    The protection of accelerator equipment is as old as accelerator technology and was for many years related to high-power equipment. Examples are the protection of powering equipment from overheating (magnets, power converters, high-current cables), of superconducting magnets from damage after a quench and of klystrons. The protection of equipment from beam accidents is more recent. It is related to the increasing beam power of high-power proton accelerators such as ISIS, SNS, ESS and the PSI cyclotron, to the emission of synchrotron light by electron–positron accelerators and FELs, and to the increase of energy stored in the beam (in particular for hadron colliders such as LHC). Designing a machine protection system requires an excellent understanding of accelerator physics and operation to anticipate possible failures that could lead to damage. Machine protection includes beam and equipment monitoring, a system to safely stop beam operation (e.g. dumping the beam or stopping the beam at low energy) and an ...

  19. Diamond Measuring Machine

    Energy Technology Data Exchange (ETDEWEB)

    Krstulic, J.F.

    2000-01-27

    The fundamental goal of this project was to develop additional capabilities to the diamond measuring prototype, work out technical difficulties associated with the original device, and perform automated measurements which are accurate and repeatable. For this project, FM and T was responsible for the overall system design, edge extraction, and defect extraction and identification. AccuGem provided a lab and computer equipment in Lawrence, 3D modeling, industry expertise, and sets of diamonds for testing. The system executive software which controls stone positioning, lighting, focusing, report generation, and data acquisition was written in Microsoft Visual Basic 6, while data analysis and modeling were compiled in C/C++ DLLs. All scanning parameters and extracted data are stored in a central database and available for automated analysis and reporting. The Phase 1 study showed that data can be extracted and measured from diamond scans, but most of the information had to be manually extracted. In this Phase 2 project, all data required for geometric modeling and defect identification were automatically extracted and passed to a 3D modeling module for analysis. Algorithms were developed which automatically adjusted both light levels and stone focus positioning for each diamond-under-test. After a diamond is analyzed and measurements are completed, a report is printed for the customer which shows carat weight, summarizes stone geometry information, lists defects and their size, displays a picture of the diamond, and shows a plot of defects on a top view drawing of the stone. Initial emphasis of defect extraction was on identification of feathers, pinpoints, and crystals. Defects were plotted color-coded by industry standards for inclusions (red), blemishes (green), and unknown defects (blue). Diamonds with a wide variety of cut quality, size, and number of defects were tested in the machine. Edge extraction, defect extraction, and modeling code were tested for

  20. Quasi-Periodic Oscillations and Frequencies in an Accretion Disk and Comparison with the Numerical Results from Non-Rotating Black Hole Computed by the GRH Code

    Science.gov (United States)

    Donmez, Orhan

    The shock wave created on the accretion disk by different physical phenomena (accretion flows with pressure gradients, star-disk interaction, etc.) may be responsible for the observed Quasi-Periodic Oscillations (QPOs) in X-ray binaries. We present the set of characteristic frequencies associated with an accretion disk around rotating and non-rotating black holes for the one-particle case. These persistent frequencies are the result of the rotating pattern in an accretion disk. We compare the frequencies from two different numerical results for fluid flow around a non-rotating black hole with the one-particle case. The numerical results are taken from Refs. 1 and 2, using a fully general relativistic hydrodynamical code with a non-self-gravitating disk. While the first numerical result has a relativistic torus around the black hole, the second one includes a one-armed spiral shock wave produced by star-disk interaction. Some physical modes present in the QPOs can be excited in numerical simulations of relativistic tori and spiral waves on the accretion disk. The effects of these different dynamical structures on the accretion disk, which are responsible for the QPOs, are discussed in detail.

  1. Noisy Network Coding

    CERN Document Server

    Lim, Sung Hoon; Gamal, Abbas El; Chung, Sae-Young

    2010-01-01

    A noisy network coding scheme for sending multiple sources over a general noisy network is presented. For multi-source multicast networks, the scheme naturally extends both network coding over noiseless networks by Ahlswede, Cai, Li, and Yeung, and compress-forward coding for the relay channel by Cover and El Gamal to general discrete memoryless and Gaussian networks. The scheme also recovers as special cases the results on coding for wireless relay networks and deterministic networks by Avestimehr, Diggavi, and Tse, and coding for wireless erasure networks by Dana, Gowaikar, Palanki, Hassibi, and Effros. The scheme involves message repetition coding, relay signal compression, and simultaneous decoding. Unlike previous compress--forward schemes, where independent messages are sent over multiple blocks, the same message is sent multiple times using independent codebooks as in the network coding scheme for cyclic networks. Furthermore, the relays do not use Wyner--Ziv binning as in previous compress-forward sch...

  2. Analysis of machining and machine tools

    CERN Document Server

    Liang, Steven Y

    2016-01-01

    This book delivers the fundamental science and mechanics of machining and machine tools by presenting systematic and quantitative knowledge in the form of process mechanics and physics. It gives readers a solid command of machining science and engineering, and familiarizes them with the geometry and functionality requirements of creating parts and components in today's markets. The authors address traditional machining topics, such as: single and multiple point cutting processes; grinding; components accuracy and metrology; shear stress in cutting; cutting temperature and analysis; chatter. They also address non-traditional machining, such as: electrical discharge machining; electrochemical machining; laser and electron beam machining. A chapter on biomedical machining is also included. This book is appropriate for advanced undergraduate and graduate mechanical engineering students, manufacturing engineers, and researchers. Each chapter contains examples, exercises and their solutions, and homework problems that re...

  3. Laser machining of explosives

    Science.gov (United States)

    Perry, Michael D.; Stuart, Brent C.; Banks, Paul S.; Myers, Booth R.; Sefcik, Joseph A.

    2000-01-01

    The invention consists of a method for machining (cutting, drilling, sculpting) of explosives (e.g., TNT, TATB, PETN, RDX, etc.). By using pulses of a duration in the range of 5 femtoseconds to 50 picoseconds, extremely precise and rapid machining can be achieved with essentially no heat or shock affected zone. In this method, material is removed by a nonthermal mechanism. A combination of multiphoton and collisional ionization creates a critical density plasma in a time scale much shorter than electron kinetic energy is transferred to the lattice. The resulting plasma is far from thermal equilibrium. The material is in essence converted from its initial solid-state directly into a fully ionized plasma on a time scale too short for thermal equilibrium to be established with the lattice. As a result, there is negligible heat conduction beyond the region removed resulting in negligible thermal stress or shock to the material beyond a few microns from the laser machined surface. Hydrodynamic expansion of the plasma eliminates the need for any ancillary techniques to remove material and produces extremely high quality machined surfaces. There is no detonation or deflagration of the explosive in the process and the material which is removed is rendered inert.

  4. Quasi Periodic Oscillations (QPOs) and frequencies in an accretion disk and comparison with the numerical results from non-rotating black hole computed by the GRH code

    CERN Document Server

    Donmez, O

    2006-01-01

    The shock wave created on the accretion disk by different physical phenomena (accretion flows with pressure gradients, star-disk interaction, etc.) may be responsible for the observed Quasi-Periodic Oscillations (QPOs) in X-ray binaries. We present the set of characteristic frequencies associated with an accretion disk around rotating and non-rotating black holes for the one-particle case. These persistent frequencies are the result of the rotating pattern in an accretion disk. We compare the frequencies from two different numerical results for fluid flow around a non-rotating black hole with the one-particle case. The numerical results are taken from our papers (Refs. Donmez2 and Donmez3), using a fully general relativistic hydrodynamical code with a non-self-gravitating disk. While the first numerical result has a relativistic torus around the black hole, the second one includes a one-armed spiral shock wave produced by star-disk interaction. Some physical modes present in the QPOs can be excited in nume...

  5. Possibilities for Automatic Control of Hydro-Mechanical Transmission and Birotating Electric Machine

    Directory of Open Access Journals (Sweden)

    V. V. Mikhailov

    2014-01-01

    Full Text Available The paper presents mathematical models and results of virtual investigations pertaining to selected motion parameters of a mobile machine equipped with hydro-mechanical and modernized transmissions. The machine has been tested in similar technological cycles and has been equipped with a universal automatic control system. Changes in the structure and type of power transmission have been obtained with the help of a control algorithm that includes an extra reversible electric machine switched in at certain operational modes. Implementation of the proposed concept makes it possible to obtain and verify the improved C-code of the control system, to enhance the operational parameters of the transmission and the machine efficiency, and to reduce slippage and tire wear, while capturing braking energy for later beneficial use rather than treating it as a loss.

  6. Efficient RTL-based code generation for specified DSP C-compiler

    Science.gov (United States)

    Pan, Qiaohai; Liu, Peng; Shi, Ce; Yao, Qingdong; Zhu, Shaobo; Yan, Li; Zhou, Ying; Huang, Weibing

    2001-12-01

    A C-compiler is a basic tool for most embedded systems programmers. It is the tool by which the ideas and algorithms in an application (expressed as C source code) are transformed into machine code executable by the target processor. Our research was to develop an optimizing C-compiler for a specified 16-bit DSP. As one of the most important parts of the C-compiler, the code generator's efficiency and performance directly affect the resulting target assembly code. Thus, in order to improve the performance of the C-compiler, we constructed an efficient code generator based on RTL, an intermediate language used in GNU CC. The code generator accepts RTL as its main input, takes advantage of features specific to RTL and to the specified DSP's architecture, and generates compact assembly code for the specified DSP. In this paper, the features of RTL are first briefly introduced. Then the basic principle of constructing the code generator is presented in detail. Following this principle, the paper discusses the architecture of the code generator, including: syntax tree construction/reconstruction, basic RTL instruction extraction, behavior description at the RTL level, and instruction description at the assembly level. The optimization strategies used in the code generator for generating compact assembly code are also given. Finally, we conclude that the C-compiler using this code generator achieves the high efficiency we expected.
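
    To make the flow from an intermediate representation to target assembly concrete, here is a deliberately tiny, hypothetical sketch in Python: a toy expression tree is lowered to three-address pseudo-assembly. The mnemonics and register naming are invented for illustration and are not the RTL patterns or the DSP instruction set of the paper.

      # Toy code generator: walk an expression tree and emit three-address pseudo-assembly.
      import itertools

      regs = (f"r{i}" for i in itertools.count())   # trivial register allocator: fresh register per value

      def gen(node, out):
          """node is ('const', k), ('var', name) or (op, left, right); returns the result register."""
          kind = node[0]
          if kind == "const":
              r = next(regs); out.append(f"LD   {r}, #{node[1]}")
              return r
          if kind == "var":
              r = next(regs); out.append(f"LD   {r}, {node[1]}")
              return r
          op = {"+": "ADD", "*": "MUL"}[kind]
          a = gen(node[1], out)
          b = gen(node[2], out)
          r = next(regs); out.append(f"{op}  {r}, {a}, {b}")
          return r

      # y = (a + 3) * b
      tree = ("*", ("+", ("var", "a"), ("const", 3)), ("var", "b"))
      code = []
      result = gen(tree, code)
      code.append(f"ST   y, {result}")
      print("\n".join(code))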

  7. QR code for medical information uses.

    Science.gov (United States)

    Fontelo, Paul; Liu, Fang; Ducut, Erick G

    2008-11-06

    We developed QR code online tools, simulated and tested QR code applications for medical information uses including scanning QR code labels, URLs and authentication. Our results show possible applications for QR code in medicine.

  8. Holographic codes

    CERN Document Server

    Latorre, Jose I

    2015-01-01

    There exists a remarkable four-qutrit state that carries absolute maximal entanglement in all its partitions. Employing this state, we construct a tensor network that delivers a holographic many body state, the H-code, where the physical properties of the boundary determine those of the bulk. This H-code is made of an even superposition of states whose relative Hamming distances are exponentially large with the size of the boundary. This property makes H-codes natural states for a quantum memory. H-codes exist on tori of definite sizes and get classified in three different sectors characterized by the sum of their qutrits on cycles wrapped through the boundaries of the system. We construct a parent Hamiltonian for the H-code which is highly non local and finally we compute the topological entanglement entropy of the H-code.

  9. Sharing code

    OpenAIRE

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.

  10. Strong Normalization by Type-Directed Partial Evaluation and Run-Time Code Generation

    DEFF Research Database (Denmark)

    Balat, Vincent; Danvy, Olivier

    1997-01-01

    We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long βη-normal form. Caml uses a virtual machine and has the capability...... to load byte code at run time. Representing the long βη-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time. We...... conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language....

  11. Strong normalization by type-directed partial evaluation and run-time code generation

    DEFF Research Database (Denmark)

    Balat, Vincent; Danvy, Olivier

    1998-01-01

    We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long βη-normal form. Caml uses a virtual machine and has the capability...... to load byte code at run time. Representing the long βη-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time. We...... conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language....

  12. Large Core Code Evaluation Working Group Benchmark Problem Four: neutronics and burnup analysis of a large heterogeneous fast reactor. Part 1. Analysis of benchmark results. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Cowan, C.L.; Protsik, R.; Lewellen, J.W. (eds.)

    1984-01-01

    The Large Core Code Evaluation Working Group Benchmark Problem Four was specified to provide a stringent test of the current methods which are used in the nuclear design and analyses process. The benchmark specifications provided a base for performing detailed burnup calculations over the first two irradiation cycles for a large heterogeneous fast reactor. Particular emphasis was placed on the techniques for modeling the three-dimensional benchmark geometry, and sensitivity studies were carried out to determine the performance parameter sensitivities to changes in the neutronics and burnup specifications. The results of the Benchmark Four calculations indicated that a linked RZ-XY (Hex) two-dimensional representation of the benchmark model geometry can be used to predict mass balance data, power distributions, regionwise fuel exposure data and burnup reactivities with good accuracy when compared with the results of direct three-dimensional computations. Most of the small differences in the results of the benchmark analyses by the different participants were attributed to ambiguities in carrying out the regionwise flux renormalization calculations throughout the burnup step.

  13. Uncertainty analysis for results of thermal hydraulic codes of best-estimate-type; Analisis de incertidumbre para resultados de codigos termohidraulicos de mejor estimacion

    Energy Technology Data Exchange (ETDEWEB)

    Alva N, J.

    2010-07-01

    In this thesis, fundamental knowledge is presented about uncertainty analysis and about the diverse methodologies applied in the study of nuclear power plant transient events, particularly those related to thermal hydraulic phenomena. The concepts and methodologies mentioned in this work come from a wide bibliographical survey of the nuclear power literature. Methodologies for uncertainty analysis have been developed by quite diverse institutions, and they have been widely used worldwide for application to results from best-estimate-type computer codes in nuclear reactor thermal hydraulics and safety analysis. The main uncertainty sources, types of uncertainties, and aspects related to best-estimate modeling and methods are also introduced. Once the main bases of uncertainty analysis have been set out and some of the known methodologies have been introduced, the CSAU methodology, which is applied in the analyses, is presented in detail. The main objective of this thesis is to compare the results of an uncertainty and sensitivity analysis using the Response Surface Technique with those obtained by applying Wilks' formula, applied to a loss-of-coolant experiment and a power-rise event in a BWR. Both techniques are options for the uncertainty and sensitivity analysis step of the CSAU methodology, which was developed for the analysis of transients and accidents at nuclear power plants and is the basis of most of the methodologies used in licensing of nuclear power plants practically everywhere. Finally, the results of applying both techniques are compared and discussed. (Author)
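
    For reference, Wilks' formula mentioned above, in its standard one-sided, first-order form, relates the number of independent code runs N to the probability content gamma and confidence level beta of the resulting tolerance limit (this is the textbook form, not a detail taken from the thesis):

      1 - \gamma^{N} \ge \beta
      \quad\Longrightarrow\quad
      N \ge \frac{\ln(1 - \beta)}{\ln \gamma}

    For the usual 95%/95% criterion (gamma = beta = 0.95), the smallest N satisfying the inequality is N = 59, which is why 59 runs are commonly quoted for one-sided best-estimate-plus-uncertainty analyses.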

  14. Graph Codes with Reed-Solomon Component Codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn

    2006-01-01

    We treat a specific case of codes based on bipartite expander graphs coming from finite geometries. The code symbols are associated with the branches and the symbols connected to a given node are restricted to be codewords in a Reed-Solomon code. We give results on the parameters of the codes...

  15. A Concrete Framework for Environment Machines

    DEFF Research Database (Denmark)

    Biernacka, Malgorzata; Danvy, Olivier

    2007-01-01

    calculus with explicit substitutions), we extend it minimally so that it can also express one-step reduction strategies, and we methodically derive a series of environment machines from the specification of two one-step reduction strategies for the lambda-calculus: normal order and applicative order....... The derivation extends Danvy and Nielsen’s refocusing-based construction of abstract machines with two new steps: one for coalescing two successive transitions into one, and the other for unfolding a closure into a term and an environment in the resulting abstract machine. The resulting environment machines...... include both the Krivine machine and the original version of Krivine’s machine, Felleisen et al.’s CEK machine, and Leroy’s Zinc abstract machine....
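
    As background for readers unfamiliar with the machines named above, here is a minimal, self-contained sketch of a call-by-name Krivine machine in Python (de Bruijn indices, closures and a stack); it is only an illustration of the classical machine, not the refocusing-based derivation described in the abstract.

      # Terms: ("var", n) | ("lam", body) | ("app", fun, arg), with de Bruijn indices.
      def krivine(term, env=(), stack=()):
          while True:
              tag = term[0]
              if tag == "app":                       # push the argument as a closure
                  stack = ((term[2], env),) + stack
                  term = term[1]
              elif tag == "lam":
                  if not stack:                      # weak head normal form reached
                      return term, env
                  closure, stack = stack[0], stack[1:]
                  env = (closure,) + env             # bind the argument to index 0
                  term = term[1]
              else:                                  # variable: enter its closure
                  term, env = env[term[1]]

      # (\x. x) (\y. y)  ==>  \y. y with an empty environment
      identity = ("lam", ("var", 0))
      print(krivine(("app", identity, identity)))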

  16. Machine intelligence and signal processing

    CERN Document Server

    Vatsa, Mayank; Majumdar, Angshul; Kumar, Ajay

    2016-01-01

    This book comprises chapters on key problems in the machine learning and signal processing arenas. The contents of the book are the result of a 2014 Workshop on Machine Intelligence and Signal Processing held at the Indraprastha Institute of Information Technology. Traditionally, signal processing and machine learning were considered to be separate areas of research. However, in recent times the two communities have been getting closer. In a very abstract fashion, signal processing is the study of operator design. The contribution of signal processing has been to devise operators for restoration, compression, etc. Applied mathematicians were more interested in operator analysis. Nowadays signal processing research is gravitating towards operator learning – instead of designing operators based on heuristics (for example wavelets), the trend is to learn these operators (for example dictionary learning). Thus, the gap between signal processing and machine learning is closing fast. The 2014 Workshop on Machine Intel...

  17. Two-Level Semantics and Code Generation

    DEFF Research Database (Denmark)

    Nielson, Flemming; Nielson, Hanne Riis

    1988-01-01

    not absolutely necessary for describing the input-output semantics of programming languages, it is necessary when issues such as data flow analysis and code generation are considered. For an example stack-machine, the authors show how to generate code for the run-time computations and still perform the compile......-time computations. Using an example, it is argued that compiler-tricks such as the use of activation records suggest how to cope with certain syntactic restrictions in the metalanguage. The correctness of the code generation is proved using Kripke-like relations and using a modified machine that can be made to loop...

  18. QCD on the connection machine: beyond LISP

    Science.gov (United States)

    Brickner, Ralph G.; Baillie, Clive F.; Johnsson, S. Lennart

    1991-04-01

    We report on the status of code development for a simulation of quantum chromodynamics (QCD) with dynamical Wilson fermions on the Connection Machine model CM-2. Our original code, written in Lisp, gave performance in the near-GFLOPS range. We have rewritten the most time-consuming parts of the code in the low-level programming systems CMIS, including the matrix multiply and the communication. Current versions of the code run at approximately 3.6 GFLOPS for the fermion matrix inversion, and we expect the next version to reach or exceed 5 GFLOPS.

  19. A "Living" Machine

    Institute of Scientific and Technical Information of China (English)

    N.R.Bogatyrev

    2004-01-01

    Biomimetics (or bionics) is the engineering discipline that constructs artificial systems using biological principles. The ideal final result in biomimetics is to create a living machine. But what are the desirable and non-desirable properties of a biomimetic product? Where can natural prototypes be found? How can technical solutions be transferred from nature to technology? Can we use living nature like LEGO bricks for constructing our machines? How can biology help us? What is a living machine? In biomimetic practice only some "part" (organ, part of organ, tissue) of the observed whole organism is utilized. A possible template for future super-organism extension of biomimetic methods might be drawn from experiments in holistic ecological agriculture (ecological design, permaculture, ecological engineering, etc.). The necessary translation of these rules to practical action can be achieved with the Russian Theory of Inventive Problem Solving (TRIZ), specifically adjusted to biology. Thus, permaculture, reinforced by a TRIZ conceptual framework, might provide the basis for Super-Organismic Bionics, which is hypothesized as necessary for effective ecological engineering. This hypothesis is supported by a case study: the design of a sustainable artificial nature reserve for wild pollinators as a living machine.

  20. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  1. Updated Users' Guide for RSAP -- A Code for Display and Manipulation of Neutron Cross Section Data and SAMMY Fit Results

    Energy Technology Data Exchange (ETDEWEB)

    Sayer, R.O.

    2003-07-29

    RSAP [1] is a computer code for display and manipulation of neutron cross section data and selected SAMMY output. SAMMY [2] is a multilevel R-matrix code for fitting neutron time-of-flight cross-section data using Bayes' method. This users' guide provides documentation for the recently updated RSAP code (version 6). The code has been ported to the Linux platform, and several new features have been added, including the capability to read cross section data from ASCII pointwise ENDF files as well as double-precision PLT output from SAMMY. A number of bugs have been found and corrected, and the input formats have been improved. Input items are parsed so that items may be separated by spaces or commas.

  2. Automation of printing machine

    OpenAIRE

    Sušil, David

    2016-01-01

    Bachelor thesis is focused on the automation of the printing machine and comparing the two types of printing machines. The first chapter deals with the history of printing, typesettings, printing techniques and various kinds of bookbinding. The second chapter describes the difference between sheet-fed printing machines and offset printing machines, the difference between two representatives of rotary machines, technological process of the products on these machines, the description of the mac...

  3. Motion and Virtual Cutting Simulation System for a Five-Axis Virtual Machine Tool

    Directory of Open Access Journals (Sweden)

    Rong-Shean Lee

    2011-09-01

    Full Text Available Since five-axis machine tools are very costly and their use requires a high level of knowledge and expertise, a virtual machine tool must be used to simulate five-axis machine tool operation. Configuration code or a mechanism topology matrix must be used to describe a machine tool, and can be used as the framework for design of a virtual machine tool system. The first step is to isolate the basic motions of each element of a virtual machine tool and then establish their coordinate systems. The establishment of a node tree allows coordinate transformation matrices for virtual motion components to be derived, which are then used to simulate movements. The simulation of virtual cutting must take into consideration both accuracy and efficiency. While either a GPU or CPU can be used to perform calculations, there are currently restrictions on GPU memory use which results in relatively lower accuracy. In contrast, a CPU can perform calculations using an adaptive octree with voxels and multithreading to yield sufficient accuracy and efficiency. A five-axis virtual machine tool motion and virtual cutting simulation system was written in C/C++ with OpenGL and OpenMP, and can perform real-time cutting simulations.
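
    As a rough illustration of the coordinate transformation matrices derived from the node tree described above, the following Python sketch composes homogeneous 4x4 transforms along a simple kinematic chain; the axis layout and tool offset are assumptions made for illustration, not the paper's five-axis configuration code.

      import numpy as np

      def translate(x, y, z):
          T = np.eye(4); T[:3, 3] = (x, y, z); return T

      def rotate_z(theta):
          c, s = np.cos(theta), np.sin(theta)
          R = np.eye(4); R[:2, :2] = [[c, -s], [s, c]]; return R

      # Hypothetical node tree: base -> X slide -> Y slide -> rotary C axis -> tool
      def tool_pose(x, y, c_angle):
          return translate(x, 0, 0) @ translate(0, y, 0) @ rotate_z(c_angle)

      pose = tool_pose(10.0, 5.0, np.radians(30))
      tool_tip = pose @ np.array([0.0, 0.0, -50.0, 1.0])   # assumed tool tip 50 mm below the spindle nose
      print(tool_tip[:3])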

  4. Final report for hydrologic studies support: INTERCOMP code conversion

    Energy Technology Data Exchange (ETDEWEB)

    Ichimura, V.

    1982-01-11

    Mass and energy balance errors noted in a number of IBM-executed problems are caused by the lack of precision in computing total mass and energy values for a domain. This problem is evident in domains constructed with highly variable mesh sizes during the early time of simulation. The machine round-off was corrected by double-precisioning certain calculations for mass and energy balance. Small differences that exist between the improved INTERCOMP code operating on an IBM machine and the old version on a CDC machine seem unimportant. The noted differences are greatest at an onset of physical system perturbation. These differences diminish rapidly with each succeeding time step. Comparisons with numerical and analytical solutions appear to prove authenticity of code results. Numerical comparisons with the CCC computer code on the Mobile experiment data demonstrate the advantage of using aquifer influence functions in place of an infinitely large mesh. The one-dimensional heat transfer in the overburden and underburden appears sufficiently accurate to describe aquifer heat losses.

  5. Salinas - An implicit finite element structural dynamics code developed for massively parallel platforms

    Energy Technology Data Exchange (ETDEWEB)

    BHARDWAJ, MANLJ K.; REESE,GARTH M.; DRIESSEN,BRIAN; ALVIN,KENNETH F.; DAY,DAVID M.

    2000-04-06

    As computational needs for structural finite element analysis increase, a robust implicit structural dynamics code is needed which can handle millions of degrees of freedom in the model and produce results with quick turn around time. A parallel code is needed to avoid limitations of serial platforms. Salinas is an implicit structural dynamics code specifically designed for massively parallel platforms. It computes the structural response of very large complex structures and provides solutions faster than any existing serial machine. This paper gives a current status of Salinas and uses demonstration problems to show Salinas' performance.

  6. Motility mapping as evaluation tool for bowel motility: initial results on the development of an automated color-coding algorithm in cine MRI.

    Science.gov (United States)

    Hahnemann, Maria L; Nensa, Felix; Kinner, Sonja; Gerken, Guido; Lauenstein, Thomas C

    2015-02-01

    To develop and implement an automated algorithm for visualizing and quantifying bowel motility using cine magnetic resonance imaging (MRI). Four healthy volunteers as well as eight patients with suspected or diagnosed inflammatory bowel disease (IBD) underwent MR examinations on a 1.5T scanner. Coronal T2-weighted cine MR images were acquired in healthy volunteers without and with intravenous (i.v.) administration of butylscopolamine. In patients with IBD, cine MRI sequences were collected prior to standard bowel MRI. Bowel motility was assessed using an optical flow algorithm. The resulting motion vector magnitudes were presented as bowel motility maps. Motility changes after i.v. administration of butylscopolamine were measured in healthy volunteers. Inflamed bowel segments in patients were correlated with motility map findings. The acquisition of bowel motility maps was feasible in all subjects examined. In healthy volunteers butylscopolamine led to quantitatively measurable decrease in bowel motility (mean decrease of 59%; P = 0.171). In patients with IBD, visualization of bowel movement by color-coded motility mapping allowed for the detection of segments with abnormal bowel motility. Inflamed bowel segments could be identified by exhibiting a decreased motility. Our method is a feasible and promising approach for the assessment of bowel motility disorders. © 2014 Wiley Periodicals, Inc.
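
    The motility maps described above are, in essence, per-pixel motion-magnitude averages over the cine series. A hedged sketch of that general idea in Python follows, using OpenCV's Farneback dense optical flow as a stand-in for the paper's (unspecified) optical flow algorithm; frame loading and display are assumed.

      import cv2
      import numpy as np

      def motility_map(frames):
          """frames: list of 2-D uint8 arrays (one cine MRI slice over time)."""
          acc = np.zeros(frames[0].shape, dtype=np.float64)
          for prev, nxt in zip(frames[:-1], frames[1:]):
              flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                                  pyr_scale=0.5, levels=3, winsize=15,
                                                  iterations=3, poly_n=5, poly_sigma=1.2,
                                                  flags=0)
              acc += np.hypot(flow[..., 0], flow[..., 1])   # per-pixel motion magnitude
          return acc / (len(frames) - 1)                    # mean magnitude = motility map

      # For display, the map can be colour-coded, e.g. with cv2.applyColorMap on a normalized copy.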

  7. Excitation functions of proton induced reactions on natOs up to 65 MeV: Experiments and comparison with results from theoretical codes

    Energy Technology Data Exchange (ETDEWEB)

    Hermanne, A.; Adam Rebeles, R. [Cyclotron Laboratory, Vrije Universiteit Brussel, Brussels 1090 (Belgium); Tárkányi, F.; Takács, S. [Institute of Nuclear Research, Hungarian Academy of Science, 4026 Debrecen (Hungary)

    2015-02-15

    Activation of thin natOs targets, electrodeposited on Ni backings, was investigated for the first time in stacked-foil irradiations with 65 MeV and 34 MeV proton beams. Assessment of the produced radionuclides by high-resolution gamma-ray spectroscopy yielded excitation functions for the formation of 184Ir, 185Ir, 186m,m+gIr, 187m+gIr, 188m+gIr, 189m2+m1+gIr, 190m2,m1+gIr and 192m1+gIr, as well as 185cumOs, 191m+gOs and 183m+gRe. Where available, comparisons with the reaction cross sections obtained in two earlier studies on enriched 192Os were made. Reduced uncertainty on the cross sections is obtained by simultaneous remeasurement of the 27Al(p,x)22,24Na, natNi(p,x)57Ni and natTi(p,x)48V monitor reactions over wide relevant energy ranges. Monitoring was confirmed by assessment of the excitation functions of 61Cu, 56Ni, 55,56,57,58Co and 52Mn induced in the Ni backings and comparison with a recent compilation for most of these radionuclides. Contributing reactions and overall cross sections are discussed and were evaluated in comparison with the results of the theoretical code TALYS 1.6 (values from the on-line library TENDL-2013).

  8. Benchmarking of Decay Heat Measured Values of ITER Materials Induced by 14 MeV Neutron Activation with Calculated Results by ACAB Activation Code

    Energy Technology Data Exchange (ETDEWEB)

    Tore, C.; Ortego, P.; Rodriguez Rivada, A.

    2014-07-01

    The aim of this paper is to compare the calculated and measured decay heat of material samples which were irradiated at the Fusion Neutron Source of JAERI in Japan, with D-T production of 14 MeV neutrons. In the International Thermonuclear Experimental Reactor (ITER), neutron activation of the structural material will result in a source of heat after shutdown of the reactor. The estimation of the decay heat with qualified codes and nuclear data is an important input for the safety analyses of fusion reactors against loss-of-coolant accidents. When a loss-of-coolant and/or loss-of-flow accident happens, plasma-facing components are heated up by decay heat. If the temperature of the components exceeds the allowable temperature, the accident could escalate and compromise the integrity of ITER. Uncertainties of less than 15% in the decay heat prediction are strongly requested by the ITER designers. Additionally, accurate decay heat prediction is required for establishing reasonable shutdown scenarios for ITER. (Author)

  9. Code Development of Three-Dimensional General Relativistic Hydrodynamics with AMR (Adaptive-Mesh Refinement) and Results from Special and General Relativistic Hydrodynamics

    Science.gov (United States)

    Dönmez, Orhan

    2004-09-01

    In this paper, the general procedure for solving the general relativistic hydrodynamical (GRH) equations with adaptive-mesh refinement (AMR) is presented. To achieve this, the GRH equations are written in conservation form to exploit their hyperbolic character. The numerical solutions of the GRH equations are obtained by high-resolution shock-capturing (HRSC) schemes, specifically designed to solve nonlinear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. The Marquina fluxes with MUSCL left and right states are used to solve the GRH equations. First, different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations are carried out to verify the second-order convergence of the code in one, two and three dimensions. Results from uniform and AMR grids are compared. It is found that the adaptive grid does a better job as the resolution is increased. Second, the GRH equations are tested using two different test problems, geodesic flow and circular motion of a particle. To do this, the flux part of the GRH equations is coupled with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time.

  10. New code match strategy for wideband code division multiple access code tree management

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Orthogonal variable spreading factor (OVSF) channelization codes are widely used to provide variable data rates for supporting different bandwidth requirements in wideband code division multiple access (WCDMA) systems. A new code match scheme for WCDMA code tree management is proposed. The code match scheme is similar to the existing crowded-first scheme. When choosing a code for a user, the code match scheme only compares the layer one level above the allocated codes, unlike the crowded-first scheme, which may compare all upper layers. The operation of the code match scheme is therefore simpler, and the average time delay is decreased by 5.1%. The simulation results also show that the code match strategy can decrease the average code blocking probability by 8.4%.
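
    To make the tree terminology concrete, here is a small, purely illustrative Python sketch of an OVSF code tree in which allocating a code blocks its ancestors and descendants, and a toy selection rule prefers a free code whose sibling (one layer up) is already in use. It is meant only to convey the flavour of the comparison described above, not the paper's algorithm or its performance figures.

      # Codes are labelled (spreading_factor, index); parent of (sf, i) is (sf/2, i//2).
      def parent(code):
          sf, i = code
          return (sf // 2, i // 2) if sf > 1 else None

      def sibling(code):
          sf, i = code
          return (sf, i ^ 1)

      def blocked(code, allocated):
          # blocked if the code or any ancestor is allocated, or an allocated code descends from it
          up = code
          while up is not None:
              if up in allocated:
                  return True
              up = parent(up)
          return any(_descends(a, code) for a in allocated)

      def _descends(a, code):
          while a is not None and a[0] >= code[0]:
              if a == code:
                  return True
              a = parent(a)
          return False

      def choose(sf, allocated):
          candidates = [(sf, i) for i in range(sf) if not blocked((sf, i), allocated)]
          # one-layer-up comparison: prefer a code whose sibling is already in use
          candidates.sort(key=lambda c: sibling(c) not in allocated)
          return candidates[0] if candidates else None

      allocated = {(8, 0)}
      print(choose(8, allocated))   # -> (8, 1), packing next to the code already in use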

  11. Machine musicianship

    Science.gov (United States)

    Rowe, Robert

    2002-05-01

    The training of musicians begins by teaching basic musical concepts, a collection of knowledge commonly known as musicianship. Computer programs designed to implement musical skills (e.g., to make sense of what they hear, perform music expressively, or compose convincing pieces) can similarly benefit from access to a fundamental level of musicianship. Recent research in music cognition, artificial intelligence, and music theory has produced a repertoire of techniques that can make the behavior of computer programs more musical. Many of these were presented in a recently published book/CD-ROM entitled Machine Musicianship. For use in interactive music systems, we are interested in those which are fast enough to run in real time and that need only make reference to the material as it appears in sequence. This talk will review several applications that are able to identify the tonal center of musical material during performance. Beyond this specific task, the design of real-time algorithmic listening through the concurrent operation of several connected analyzers is examined. The presentation includes discussion of a library of C++ objects that can be combined to perform interactive listening and a demonstration of their capability.

  12. Comparison of the Results of the Whole Core Decay Power Using the ORIGEN Code and ANS-1979 for the Uljin Unit 6

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Eun Hyun; Jeong, Hae Sun; Kim, Dong Ha [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    When detailed tracking of the nuclides is not required, i.e., only the whole-core decay heat information is needed, the RN package is not activated and the DCH package is used alone, whereas both the RN and DCH packages are used when a fission product transport simulation and location information are needed. For the DCH-only mode, there are four options for calculating the whole-core decay heat after shutdown. The first is using a summation of the decay heat data from ORIGEN-based fission product inventories for representative BWRs and PWRs, which are scaled if necessary. The second is using the ANS-1979 standard for the decay heat power. The third is using a user-specified tabular function of the whole-core decay heat as a function of time. The fourth is using a user-specified control function to define the decay heat. In this research, for option 2, the ANS-1979 standard for the whole-core decay heat calculation is compared with the result of the ORIGEN calculation for Uljin Unit 6, after arranging the ORIGEN result by mass, radioactivity, and decay heat for the elements and nuclides. The MELCOR code currently uses the ANS-1979 standard; the latest ANS decay heat standards are not the main subject of this research. The goal of the examination is to determine whether the old standard needs to be replaced to improve accuracy. ANS-1979 is an old decay heat standard, so the more recent ANSI/ANS-5.1-1994 and ANSI/ANS-5.1-2005 standards should be investigated in longer-term research. A drawback of this research is that the whole-core decay heat is obtained simply by multiplying the per-assembly ORIGEN result by the number of assemblies.
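
    The "summation" option contrasted with ANS-1979 above boils down to summing activity times recoverable energy over the nuclide inventory. A minimal, hypothetical Python sketch of that idea follows; the two nuclides and their numbers are invented for illustration and are not Uljin Unit 6 data or ORIGEN output.

      # Decay power as a summation over nuclides: P(t) = sum_i N_i(0) * exp(-lambda_i t) * lambda_i * Q_i
      import math

      nuclides = [
          # (initial atoms N0, decay constant lambda [1/s], recoverable energy Q [J/decay]) - illustrative values
          (1.0e24, math.log(2) / (8.02 * 24 * 3600), 1.55e-13),   # an I-131-like nuclide
          (5.0e23, math.log(2) / (64.0 * 3600),      2.80e-13),   # a hypothetical shorter-lived nuclide
      ]

      def decay_power(t_seconds):
          return sum(n0 * math.exp(-lam * t_seconds) * lam * q for n0, lam, q in nuclides)

      for hours in (0, 1, 24, 168):
          print(f"t = {hours:4d} h : P = {decay_power(hours * 3600):.3e} W")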

  13. A language for easy and efficient modeling of Turing machines

    Institute of Scientific and Technical Information of China (English)

    Pinaki Chakraborty

    2007-01-01

    A Turing Machine Description Language (TMDL) is developed for easy and efficient modeling of Turing machines. TMDL supports formal symbolic representation of Turing machines, and the grammar for the language is provided. A fast single-pass compiler is then developed for TMDL, and the scope for code optimization in the compiler is examined. An interpreter is used to simulate the exact behavior of the compiled Turing machines. A dynamically allocated, resizable array is used to simulate the infinite tape of a Turing machine. The procedure for simulating composite Turing machines is also explained. In this paper, two sample Turing machines are designed in TMDL and their simulations are discussed. TMDL can be extended to model the different variations of the standard Turing machine.
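
    A minimal sketch (not TMDL itself) of simulating a Turing machine whose tape grows on demand, in the spirit of the "dynamically allocated and resizable array" described above, might look like this in Python; the example machine and its alphabet are invented for illustration.

      def run(transitions, tape, start="q0", accept="qa", blank="_"):
          """transitions: {(state, symbol): (new_state, write_symbol, move)} with move in {-1, +1}."""
          tape = list(tape)
          state, head = start, 0
          while state != accept:
              state, write, move = transitions[(state, tape[head])]
              tape[head] = write
              head += move
              if head == len(tape):        # extend the tape to the right on demand
                  tape.append(blank)
              elif head < 0:               # extend the tape to the left on demand
                  tape.insert(0, blank)
                  head = 0
          return "".join(tape)

      # toy machine: flip every bit, then halt on the blank symbol
      flip = {
          ("q0", "0"): ("q0", "1", +1),
          ("q0", "1"): ("q0", "0", +1),
          ("q0", "_"): ("qa", "_", +1),
      }
      print(run(flip, "10110_"))   # -> "01001__"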

  14. QR codes for dummies

    CERN Document Server

    Waters, Joe

    2012-01-01

    Find out how to effectively create, use, and track QR codes QR (Quick Response) codes are popping up everywhere, and businesses are reaping the rewards. Get in on the action with the no-nonsense advice in this streamlined, portable guide. You'll find out how to get started, plan your strategy, and actually create the codes. Then you'll learn to link codes to mobile-friendly content, track your results, and develop ways to give your customers value that will keep them coming back. It's all presented in the straightforward style you've come to know and love, with a dash of humor thrown

  15. Electrical machines mathematical fundamentals of machine topologies

    CERN Document Server

    Gerling, Dieter

    2015-01-01

    Electrical Machines and Drives play a powerful role in industry with an ever increasing importance. This fact requires the understanding of machine and drive principles by engineers of many different disciplines. Therefore, this book is intended to give a comprehensive deduction of these principles. Special attention is given to the precise mathematical derivation of the necessary formulae to calculate machines and drives and to the discussion of simplifications (if applied) with the associated limits. The book shows how the different machine topologies can be deduced from general fundamentals, and how they are linked together. This book addresses graduate students, researchers, and developers of Electrical Machines and Drives, who are interested in getting knowledge about the principles of machine and drive operation and in detecting the mathematical and engineering specialties of the different machine and drive topologies together with their mutual links. The detailed - but nevertheless compact - mat...

  16. 3D unstructured-mesh radiation transport codes

    Energy Technology Data Exchange (ETDEWEB)

    Morel, J. [Los Alamos National Lab., NM (United States)

    1997-12-31

    Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations, and is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: Sn (discrete-ordinates), Pn (spherical harmonics), and SPn (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard Sn discretization in angle in conjunction with trilinear-discontinuous spatial differencing, and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.

  17. Laser machining of advanced materials

    CERN Document Server

    Dahotre, Narendra B

    2011-01-01

    Advanced materials: Introduction; Applications; Structural ceramics; Biomaterials; Composites; Intermetallics. Machining of advanced materials: Introduction; Fabrication techniques; Mechanical machining; Chemical Machining (CM); Electrical machining; Radiation machining; Hybrid machining. Laser machining: Introduction; Absorption of laser energy and multiple reflections; Thermal effects; Laser machining of structural ceramics; Introdu...

  18. Assessing the performance of a parallel MATLAB-based 3D convection code

    Science.gov (United States)

    Kirkpatrick, G. J.; Hasenclever, J.; Phipps Morgan, J.; Shi, C.

    2008-12-01

    We are currently building 2D and 3D MATLAB-based parallel finite element codes for mantle convection and melting. The codes use the MATLAB implementation of core MPI commands (e.g. Send, Receive, Broadcast) for message passing between computational subdomains. We have found that code development and algorithm testing are much faster in MATLAB than in our previous work coding in C or FORTRAN; this code was built from scratch with only 12 man-months of effort. The one extra cost with respect to C coding on a Beowulf cluster is the cost of the parallel MATLAB license for a >4-core cluster. Here we present some preliminary results on the efficiency of MPI messaging in MATLAB on a small 4-machine, 16-core, 32 GB RAM Intel Q6600 processor-based cluster. Our code implements fully parallelized preconditioned conjugate gradients with a multigrid preconditioner. Our parallel viscous flow solver is currently 20% slower for a 1,000,000-DOF problem on a single core in 2D than the direct-solve MILAMIN MATLAB viscous flow solver. We have tested both continuous and discontinuous pressure formulations. We test with various configurations of network hardware, CPU speeds, and memory using our own and MATLAB's built-in cluster profiler. So far we have only explored relatively small (up to 1.6 GB RAM) test problems. We find that with our current code and Intel memory controller bandwidth limitations we can only get ~2.3 times the performance out of 4 cores compared with 1 core per machine. Even for these small problems the code runs faster with message passing between 4 machines with one core each than on 1 machine with 4 cores and internal messaging (1.29x slower), or 1 core (2.15x slower). It surprised us that for 2D ~1 GB-sized problems with only 3 multigrid levels, the direct solve on the coarsest mesh consumes comparable time to the iterative solve on the finest mesh - a penalty that is greatly reduced either by using a 4th multigrid level or by using an iterative solve at the coarsest grid level. We plan to
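
    For readers less familiar with the message-passing pattern being benchmarked, the following is a hedged Python/mpi4py analogue of a one-value boundary exchange between neighbouring subdomains; the original work uses MATLAB's MPI wrappers, and the array size and neighbour layout here are purely illustrative assumptions.

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      local = np.full(1000, float(rank))                 # this rank's subdomain data
      left  = rank - 1 if rank > 0 else MPI.PROC_NULL    # PROC_NULL makes edge exchanges no-ops
      right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      recv_from_left  = np.empty(1, dtype=np.float64)
      recv_from_right = np.empty(1, dtype=np.float64)

      # exchange one boundary value with each neighbour (Sendrecv avoids deadlock)
      comm.Sendrecv(sendbuf=local[-1:], dest=right, recvbuf=recv_from_left,  source=left)
      comm.Sendrecv(sendbuf=local[:1],  dest=left,  recvbuf=recv_from_right, source=right)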

  19. Gloved Human-Machine Interface

    Science.gov (United States)

    Adams, Richard (Inventor); Olowin, Aaron (Inventor); Hannaford, Blake (Inventor)

    2015-01-01

    Certain exemplary embodiments can provide a system, machine, device, manufacture, circuit, composition of matter, and/or user interface adapted for and/or resulting from, and/or a method and/or machine-readable medium comprising machine-implementable instructions for, activities that can comprise and/or relate to: tracking movement of a gloved hand of a human; interpreting a gloved finger movement of the human; and/or in response to interpreting the gloved finger movement, providing feedback to the human.

  20. Dress Codes for Teachers?

    Science.gov (United States)

    Million, June

    2004-01-01

    In this article, the author discusses an e-mail survey of principals from across the country regarding whether or not their school had a formal staff dress code. The results indicate that most did not have a formal dress code, but agreed that professional dress for teachers was not only necessary, but showed respect for the school and had a…

  1. Machine-assisted verification of latent fingerprints: first results for nondestructive contact-less optical acquisition techniques with a CWL sensor

    Science.gov (United States)

    Hildebrandt, Mario; Kiltz, Stefan; Krapyvskyy, Dmytro; Dittmann, Jana; Vielhauer, Claus; Leich, Marcus

    2011-11-01

    A machine-assisted analysis of traces from crime scenes might be possible with the advent of new high-resolution non-destructive contact-less acquisition techniques for latent fingerprints. This requires reliable techniques for the automatic extraction of fingerprint features from latent and exemplar fingerprints for matching purposes using pattern recognition approaches. Therefore, we evaluate the NIST Biometric Image Software for the feature extraction and verification of contact-lessly acquired latent fingerprints to determine potential error rates. Our exemplary test setup includes 30 latent fingerprints from 5 people in two test sets that are acquired from different surfaces using a chromatic white light sensor. The first test set includes 20 fingerprints on two different surfaces. It is used to determine the feature extraction performance. The second test set includes one latent fingerprint on 10 different surfaces and an exemplar fingerprint to determine the verification performance. This utilized sensing technique does not require a physical or chemical visibility enhancement of the fingerprint residue, thus the original trace remains unaltered for further investigations. No particular feature extraction and verification techniques have been applied to such data, yet. Hence, we see the need for appropriate algorithms that are suitable to support forensic investigations.

  2. A study of electrodischarge machining–pulse electrochemical machining combined machining for holes with high surface quality on superalloy

    Directory of Open Access Journals (Sweden)

    Ning Ma

    2015-11-01

    Full Text Available Noncircular holes on the surface of turbine rotor blades are usually machined by electrodischarge machining. A recast layer containing numerous micropores and microcracks is easily generated during the electrodischarge machining process due to the rapid heating and cooling effects, which restrict the wide applications of noncircular holes in aerospace and aircraft industries. Owing to the outstanding advantages of pulse electrochemical machining, electrodischarge machining–pulse electrochemical machining combined technique is provided to improve the overall quality of electrodischarge machining-drilled holes. The influence of pulse electrochemical machining processing parameters on the surface roughness and the influence of the electrodischarge machining–pulse electrochemical machining method on the surface quality and accuracy of holes have been studied experimentally. The results indicate that the pulse electrochemical machining processing time for complete removal of the recast layer decreases with the increase in the pulse electrochemical machining current. The low pulse electrochemical machining current results in uneven dissolution of the recast layer, while the higher pulse electrochemical machining current induces relatively homogeneous dissolution. The surface roughness is reduced from 4.277 to 0.299 µm, and the hole taper induced by top-down electrodischarge machining process was reduced from 1.04° to 0.17° after pulse electrochemical machining. On account of the advantages of electrodischarge machining and the pulse electrochemical machining, the electrodischarge machining–pulse electrochemical machining combined technique could be applied for machining noncircular holes with high shape accuracy and surface quality.

  3. Machine-to-machine communications architectures, technology, standards, and applications

    CERN Document Server

    Misic, Vojislav B

    2014-01-01

    With the number of machine-to-machine (M2M)-enabled devices projected to reach 20 to 50 billion by 2020, there is a critical need to understand the demands imposed by such systems. Machine-to-Machine Communications: Architectures, Technology, Standards, and Applications offers rigorous treatment of the many facets of M2M communication, including its integration with current technology. Presenting the work of a different group of international experts in each chapter, the book begins by supplying an overview of M2M technology. It considers proposed standards, cutting-edge applications, architectures, and traffic modeling and includes case studies that highlight the differences between traditional and M2M communications technology. It details a practical scheme for the forward error correction code design; investigates the effectiveness of the IEEE 802.15.4 low data rate wireless personal area network standard for use in M2M communications; and identifies algorithms that will ensure functionality, performance, reliability, ...

  4. Agent Based Computing Machine

    Science.gov (United States)

    2005-12-09

    Only fragments of this report are indexed: "…be used in Phase 2 to accomplish the following enhancements. Due to the speed and support of MPI for C/C++ on Beowulf clusters, these languages could…"; table-of-contents entries (ABC Machine Formal Definition; Computational Analysis; Programming Concepts; Cluster Mapping; Phase 1 Results); and "…options for hardware implementation are explored, including an emulation with a high-performance cluster, a high-performance silicon chip and the…"

  5. The deleuzian abstract machines

    DEFF Research Database (Denmark)

    Werner Petersen, Erik

    2005-01-01

    …production. In Kafka: Toward a Minor Literature, Deleuze and Guattari gave the most comprehensive explanation of the abstract machine in the work of art. Like the war-machines of Virilio, the Kafka-machine operates in three gears or speeds. Furthermore, the machine is connected to spatial diagrams…

  6. Machine Learning for Medical Imaging.

    Science.gov (United States)

    Erickson, Bradley J; Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy L

    2017-01-01

    Machine learning is a technique for recognizing patterns that can be applied to medical images. Although it is a powerful tool that can help in rendering medical diagnoses, it can be misapplied. Machine learning typically begins with the machine learning algorithm system computing the image features that are believed to be of importance in making the prediction or diagnosis of interest. The machine learning algorithm system then identifies the best combination of these image features for classifying the image or computing some metric for the given image region. There are several methods that can be used, each with different strengths and weaknesses. There are open-source versions of most of these machine learning methods that make them easy to try and apply to images. Several metrics for measuring the performance of an algorithm exist; however, one must be aware of the possible associated pitfalls that can result in misleading metrics. More recently, deep learning has started to be used; this method has the benefit that it does not require image feature identification and calculation as a first step; rather, features are identified as part of the learning process. Machine learning has been used in medical imaging and will have a greater influence in the future. Those working in medical imaging must be aware of how machine learning works. (©)RSNA, 2017.
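
    A minimal scikit-learn sketch of the classical pipeline described above (precomputed image features fed to a classifier, evaluated with a metric chosen to avoid misleading results) is shown below; the data and feature values are synthetic placeholders, not medical images.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        # Synthetic stand-in: each row is a vector of precomputed image-region
        # features (e.g. intensity statistics, texture measures); label is 0/1.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 12))
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

        # Report AUC rather than accuracy alone; as the article notes, a single
        # metric can mislead, for example with unbalanced classes.
        print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))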

  7. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low…

  8. An improved Genetic Algorithm of Bi-level Coding for Flexible Job Shop Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Ye Li

    2014-07-01

    Full Text Available The current study presents an improved genetic algorithm (GA) for the flexible job shop scheduling problem (FJSP). The coding is divided into a working-sequence level and a machine level, and effective crossover and mutation operators are designed for offspring generation and to reduce the disruptive effects of the genetic operators. The algorithm is tested on instances of 10 working sequences and 10 machines. Computational results show that the proposed GA was successfully and efficiently applied to the FJSP. The results were compared with other approaches, such as the traditional GA and a GA with a neural network. Compared to the traditional genetic algorithm, the proposed approach yields significant improvement in solution quality.
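
    The bi-level encoding can be illustrated with a small evolutionary sketch: one chromosome layer is an operation sequence (a job index repeated once per operation, so precedence is preserved by construction) and the other layer assigns a machine to every operation. The toy instance, the mutation-only reproduction, and the makespan decoder below are simplified placeholders and do not reproduce the paper's crossover operators.

        import random
        random.seed(1)

        # Toy FJSP instance: jobs[j][o] = {machine: processing_time} for operation o of job j
        jobs = [
            [{0: 3, 1: 5}, {1: 2, 2: 4}],          # job 0: 2 operations
            [{0: 4, 2: 3}, {0: 2, 1: 6}, {2: 3}],  # job 1: 3 operations
            [{1: 4, 2: 2}, {0: 5, 1: 3}],          # job 2: 2 operations
        ]
        n_machines = 3

        def random_individual():
            seq = [j for j, ops in enumerate(jobs) for _ in ops]   # sequence layer
            random.shuffle(seq)
            assign = [[random.choice(list(op)) for op in ops] for ops in jobs]  # machine layer
            return seq, assign

        def makespan(ind):
            seq, assign = ind
            next_op = [0] * len(jobs)
            job_ready = [0] * len(jobs)
            mach_ready = [0] * n_machines
            for j in seq:                    # each occurrence = next operation of job j
                o = next_op[j]; next_op[j] += 1
                m = assign[j][o]
                start = max(job_ready[j], mach_ready[m])
                job_ready[j] = mach_ready[m] = start + jobs[j][o][m]
            return max(job_ready)

        def mutate(ind):
            seq, assign = list(ind[0]), [row[:] for row in ind[1]]
            i, k = random.sample(range(len(seq)), 2)
            seq[i], seq[k] = seq[k], seq[i]                      # sequence-level mutation
            j = random.randrange(len(jobs)); o = random.randrange(len(jobs[j]))
            assign[j][o] = random.choice(list(jobs[j][o]))       # machine-level mutation
            return seq, assign

        pop = [random_individual() for _ in range(30)]
        for gen in range(200):
            pop.sort(key=makespan)
            pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]
        print("best makespan:", makespan(min(pop, key=makespan)))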

  9. Blind multiuser detector for chaos-based CDMA using support vector machine.

    Science.gov (United States)

    Kao, Johnny Wei-Hsun; Berber, Stevan Mirko; Kecman, Vojislav

    2010-08-01

    The algorithm and results of a blind multiuser detector using a machine learning technique called the support vector machine (SVM) on a chaos-based code division multiple access system are presented in this paper. Simulation results showed that the performance achieved by using the SVM is comparable to that of the existing minimum mean square error (MMSE) detector under both additive white Gaussian noise (AWGN) and Rayleigh fading conditions. However, unlike the MMSE detector, the SVM detector does not require knowledge of the spreading codes of other users in the system or an estimate of the channel noise variance. The optimization of this algorithm is considered in this paper and its complexity is compared with that of the MMSE detector. This detector is much more suitable for working in the forward link than the MMSE detector. In addition, original theoretical bit-error rate expressions for the SVM detector under both AWGN and Rayleigh fading are derived to verify the simulation results.
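
    A toy sketch of the detection idea is given below: an SVM is trained on received chip vectors using only a short pilot block for the desired user, with no knowledge of the other users' spreading sequences or the noise variance. Random binary sequences stand in for the chaotic spreading codes, and all parameters are illustrative.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n_users, spread_len, n_sym = 4, 31, 2000

        # Random +/-1 spreading sequences stand in for chaotic sequences.
        codes = rng.choice([-1.0, 1.0], size=(n_users, spread_len))
        bits = rng.choice([-1.0, 1.0], size=(n_users, n_sym))

        # Received chip vectors: sum of all users' spread bits plus AWGN.
        rx = bits.T @ codes + rng.normal(scale=1.0, size=(n_sym, spread_len))

        # Train the SVM detector for user 0 from a short pilot block only;
        # the other users' codes and the noise variance are never used.
        n_pilot = 200
        det = SVC(kernel="rbf", C=1.0, gamma="scale")
        det.fit(rx[:n_pilot], bits[0, :n_pilot])

        pred = det.predict(rx[n_pilot:])
        ber = np.mean(pred != bits[0, n_pilot:])
        print(f"BER for user 0: {ber:.4f}")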

  10. An algebraic approach to graph codes

    DEFF Research Database (Denmark)

    Pinero, Fernando

    This thesis consists of six chapters. The first chapter contains a short introduction to coding theory in which we explain the coding theory concepts we use. In the second chapter, we present the required theory for evaluation codes and also give an example of some fundamental codes in coding theory as evaluation codes. Chapter three consists of the introduction to graph based codes, such as Tanner codes and graph codes. In Chapter four, we compute the dimension of some graph based codes with a result combining graph based codes and subfield subcodes. Moreover, some codes in chapter four are optimal or best known for their parameters. In chapter five we study some graph codes with Reed–Solomon component codes. The underlying graph is well known and widely used for its good characteristics. This helps us to compute the dimension of the graph codes. We also introduce a combinatorial concept...

  11. Ontology-supported processing of clinical text using medical knowledge integration for multi-label classification of diagnosis coding

    CERN Document Server

    Waraporn, Phanu; Clayton, Gareth

    2010-01-01

    This paper discusses the knowledge integration of clinical information extracted from distributed medical ontologies in order to improve a machine learning-based multi-label coding assignment system. The proposed approach is implemented using a decision tree-based cascade hierarchical technique on university hospital data for patients with Coronary Heart Disease (CHD). The preliminary results obtained show satisfactory findings.

  12. 4th Machining Innovations Conference

    CERN Document Server

    2014-01-01

    This contributed volume contains the research results presented at the 4th Machining Innovations Conference, Hannover, September 2013. The topic of the conference is new production technologies in the aerospace industry, with a focus on energy-efficient machine tools and sustainable process planning. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.

  13. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service-provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques and is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the
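
    As a tiny concrete example of waveform coding, the sketch below applies μ-law companding (the non-uniform quantization used in G.711 telephony, with the conventional μ = 255 and 8-bit codes) to a synthetic frame and reconstructs it; it is meant only to make the notion of coding a speech waveform tangible.

        import numpy as np

        MU = 255.0  # standard mu-law parameter (G.711)

        def mu_law_encode(x, bits=8):
            """Compand a signal in [-1, 1] and quantize it to 2**bits levels."""
            y = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)
            return np.round((y + 1) / 2 * (2**bits - 1)).astype(np.int32)

        def mu_law_decode(code, bits=8):
            """Invert the quantization and the companding."""
            y = code / (2**bits - 1) * 2 - 1
            return np.sign(y) * ((1 + MU) ** np.abs(y) - 1) / MU

        t = np.linspace(0, 0.02, 160)                  # 20 ms frame at 8 kHz
        speech = 0.3 * np.sin(2 * np.pi * 440 * t)     # toy stand-in for a speech frame
        restored = mu_law_decode(mu_law_encode(speech))
        print("max reconstruction error:", np.max(np.abs(restored - speech)))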

  14. Online Dynamic Parameter Estimation of Synchronous Machines

    Science.gov (United States)

    West, Michael R.

    Traditionally, synchronous machine parameters are determined through an offline characterization procedure. The IEEE 115 standard suggests a variety of mechanical and electrical tests to capture the fundamental characteristics and behaviors of a given machine. These characteristics and behaviors can be used to develop and understand machine models that accurately reflect the machine's performance. To perform such tests, the machine is required to be removed from service. Characterizing a machine offline can result in economic losses due to down time, labor expenses, etc. Such losses may be mitigated by implementing online characterization procedures. Historically, different approaches have been taken to develop methods of calculating a machine's electrical characteristics, without removing the machine from service. Using a machine's input and response data combined with a numerical algorithm, a machine's characteristics can be determined. This thesis explores such characterization methods and strives to compare the IEEE 115 standard for offline characterization with the least squares approximation iterative approach implemented on a 20 h.p. synchronous machine. This least squares estimation method of online parameter estimation shows encouraging results for steady-state parameters, in comparison with steady-state parameters obtained through the IEEE 115 standard.
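
    The least-squares idea itself can be shown on a much simpler model than a synchronous machine: given sampled input/response data for a circuit obeying v ≈ R·i + L·di/dt, the parameters follow from one linear least-squares solve. The thesis applies the same principle, iteratively, to the machine equations; the numbers below are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        R_true, L_true, dt = 2.0, 0.05, 1e-4

        # Simulated operating data: measured current and the resulting terminal voltage.
        t = np.arange(0, 0.2, dt)
        i = 3.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
        di_dt = np.gradient(i, dt)
        v = R_true * i + L_true * di_dt + rng.normal(scale=0.05, size=t.size)  # noisy

        # Least-squares fit of v = R*i + L*di/dt from input/response data only.
        A = np.column_stack([i, di_dt])
        (R_est, L_est), *_ = np.linalg.lstsq(A, v, rcond=None)
        print(f"R ≈ {R_est:.3f} ohm, L ≈ {L_est:.4f} H")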

  15. Using Pipelined XNOR Logic to Reduce SEU Risks in State Machines

    Science.gov (United States)

    Le, Martin; Zheng, Xin; Katanyoutant, Sunant

    2008-01-01

    Single-event upsets (SEUs) pose great threats to the state-machine control logic of avionic systems, which is frequently used to control sequences of events and to qualify protocols. The risks of SEUs manifest in two ways: (a) the state machine's state information is changed, causing the state machine to unexpectedly transition to another state; (b) due to the asynchronous nature of an SEU, the state machine's state registers become metastable, consequently causing any combinational logic associated with the metastable registers to malfunction temporarily. Effect (a) can be mitigated with methods such as triple-modular redundancy (TMR). However, effect (b) cannot be eliminated and can degrade the effectiveness of any mitigation method for effect (a). Although there is no way to completely eliminate the risk of SEU-induced errors, the risk can be made very small by use of a combination of very fast state-machine logic and error-detection logic. Therefore, one of the two main elements of the present method is to design the fastest state-machine logic circuitry by basing it on the fastest generic state-machine design, which is that of a one-hot state machine. The other of the two main design elements is to design fast error-detection logic circuitry and to optimize it for implementation in a field-programmable gate array (FPGA) architecture: in the resulting design, the one-hot state machine is fitted with a multiple-input XNOR gate for detection of illegal states. The XNOR gate is implemented with lookup tables and with pipelines for high speed. In this method, the task of designing all the logic must be performed manually because no currently available logic synthesis software tool can produce optimal solutions of design problems of this type. However, some assistance is provided by a script, written for this purpose in the Python language (an object-oriented interpretive computer language), to automatically generate hardware description language (HDL) code from state
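
    The role of the illegal-state detector can be modelled behaviourally (not as HDL): in a one-hot machine exactly one flip-flop may be set, so any other pattern indicates an upset and should steer the machine to a recovery state. The Python sketch below models that check for a toy four-state machine; it is a conceptual illustration of the detector's function, not the pipelined lookup-table implementation described above.

        def is_legal_one_hot(state_bits):
            """Model of the illegal-state detector for a one-hot state register:
            a state is legal only if exactly one flip-flop is set."""
            return sum(state_bits) == 1

        def next_state(state_bits, advance):
            """Toy 4-state one-hot machine: rotate the hot bit when 'advance' is true."""
            if not is_legal_one_hot(state_bits):
                # Upset detected: force a recovery/reset state instead of propagating it.
                return [1, 0, 0, 0]
            if not advance:
                return state_bits
            hot = state_bits.index(1)
            out = [0] * len(state_bits)
            out[(hot + 1) % len(state_bits)] = 1
            return out

        s = [1, 0, 0, 0]
        s = next_state(s, advance=True)      # normal transition -> [0, 1, 0, 0]
        s[3] = 1                             # simulate an SEU flipping a second bit
        s = next_state(s, advance=True)      # detector fires, machine recovers
        print(s)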

  16. Tree-Particle-Mesh an adaptive, efficient, and parallel code for collisionless cosmological simulation

    CERN Document Server

    Bode, P; Bode, Paul; Ostriker, Jeremiah P.

    2003-01-01

    An improved implementation of an N-body code for simulating collisionless cosmological dynamics is presented. TPM (Tree-Particle-Mesh) combines the PM method on large scales with a tree code to handle particle-particle interactions at small separations. After the global PM forces are calculated, spatially distinct regions above a given density contrast are located; the tree code calculates the gravitational interactions inside these denser objects at higher spatial and temporal resolution. The new implementation includes individual particle time steps within trees, an improved treatment of tidal forces on trees, new criteria for higher force resolution and choice of time step, and parallel treatment of large trees. TPM is compared to P^3M and a tree code (GADGET) and is found to give equivalent results in significantly less time. The implementation is highly portable (requiring a Fortran compiler and MPI) and efficient on parallel machines. The source code can be found at http://astro.princeton.edu/~bode/TPM/

  17. Machine learning a probabilistic perspective

    CERN Document Server

    Murphy, Kevin P

    2012-01-01

    Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic method...

  18. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    …Speaking Code unfolds an argument to undermine the distinctions between criticism and practice, and to emphasize the aesthetic and political aspects of software studies. Not reducible to its functional aspects, program code mirrors the instability inherent in the relationship of speech… It addresses alternatives to mainstream development, from performances of the live-coding scene to the organizational forms of commons-based peer production; the democratic promise of social media and their paradoxical role in suppressing political expression; and the market's emptying out of possibilities for free expression in the public realm. The book's line of argument defends language against its invasion by economics, arguing that speech continues to underscore the human condition, however paradoxical this may seem in an era of pervasive computing.

  19. Proof-Carrying Code with Correct Compilers

    Science.gov (United States)

    Appel, Andrew W.

    2009-01-01

    In the late 1990s, proof-carrying code was able to produce machine-checkable safety proofs for machine-language programs even though (1) it was impractical to prove correctness properties of source programs and (2) it was impractical to prove correctness of compilers. But now it is practical to prove some correctness properties of source programs, and it is practical to prove correctness of optimizing compilers. We can produce more expressive proof-carrying code, that can guarantee correctness properties for machine code and not just safety. We will construct program logics for source languages, prove them sound w.r.t. the operational semantics of the input language for a proved-correct compiler, and then use these logics as a basis for proving the soundness of static analyses.

  20. Astrophysics Source Code Library

    CERN Document Server

    Allen, Alice; Berriman, Bruce; Hanisch, Robert J; Mink, Jessica; Teuben, Peter J

    2012-01-01

    The Astrophysics Source Code Library (ASCL), founded in 1999, is a free on-line registry for source codes of interest to astronomers and astrophysicists. The library is housed on the discussion forum for Astronomy Picture of the Day (APOD) and can be accessed at http://ascl.net. The ASCL has a comprehensive listing that covers a significant number of the astrophysics source codes used to generate results published in or submitted to refereed journals and continues to grow. The ASCL currently has entries for over 500 codes; its records are citable and are indexed by ADS. The editors of the ASCL and members of its Advisory Committee were on hand at a demonstration table in the ADASS poster room to present the ASCL, accept code submissions, show how the ASCL is starting to be used by the astrophysics community, and take questions on and suggestions for improving the resource.

  1. Machine Vision Implementation in Rapid PCB Prototyping

    Directory of Open Access Journals (Sweden)

    Yosafat Surya Murijanto

    2012-03-01

    Full Text Available Image processing, the heart of machine vision, has proven itself to be an essential part of industry today. Its application has opened new doorways, making more concepts in manufacturing processes viable. This paper presents an application of machine vision in designing a module with the ability to extract drill and route coordinates from an un-mounted or mounted printed circuit board (PCB). The algorithm comprises pre-capturing processes, image segmentation and filtering, edge and contour detection, coordinate extraction, and G-code creation. The OpenCV libraries and the Qt IDE are the main tools used. Through testing and experiments, it is concluded that the algorithm is able to deliver acceptable results. The drilling and routing coordinate extraction algorithm can extract on average 90% of the drills and 82% of the routes present on the scanned PCB in a total processing time of less than 3 seconds. This is achievable through proper lighting conditions, good PCB surface condition and good webcam quality.
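
    A compressed Python/OpenCV sketch of such a pipeline is shown below (the capture, calibration, and routing-extraction stages are omitted); the file names, thresholds, pad-area limits, and pixel-to-millimetre scale are placeholders, and the original module was built with the OpenCV libraries under the Qt IDE rather than in Python.

        import cv2

        PX_PER_MM = 20.0                                 # placeholder calibration scale
        img = cv2.imread("pcb.png")                      # placeholder input image
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        blur = cv2.GaussianBlur(gray, (5, 5), 0)
        edges = cv2.Canny(blur, 50, 150)                 # edge detection

        # OpenCV 4 Python bindings: findContours returns (contours, hierarchy)
        contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

        gcode = ["G21 ; mm", "G90 ; absolute"]
        for c in contours:
            area = cv2.contourArea(c)
            if 10 < area < 500:                          # assume small blobs are drill pads
                (x, y), r = cv2.minEnclosingCircle(c)
                gcode.append(f"G81 X{x / PX_PER_MM:.2f} Y{y / PX_PER_MM:.2f} ; drill")

        with open("drills.nc", "w") as f:
            f.write("\n".join(gcode))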

  2. Management of Statically Modifiable Prolog Code

    Institute of Scientific and Technical Information of China (English)

    张晨曦; 慈云桂

    1989-01-01

    The Warren Abstract Machine (WAM) is an efficient execution model for Prolog, which has become the basis of many high-performance Prolog systems. However, little support for the implementation of the non-logical components of Prolog is provided in the WAM. The original Warren code is not modifiable. In this paper, we show how static modifications of Warren code can be achieved by adding a few instructions and a little extra information to the code. The implementation of the code manager is discussed. Algorithms for some basic operations are given.

  3. Non-Pauli observables for CWS codes

    Science.gov (United States)

    Santiago, Douglas F. G.; Portugal, Renato; Melo, Nolmar

    2013-05-01

    It is known that nonadditive quantum codes can have higher code dimensions than stabilizer codes for the same length and minimum distance. The class of codeword stabilized codes (CWS) provides tools to obtain new nonadditive quantum codes by reducing the problem to finding nonlinear classical codes. In this work, we establish some results on the kind of non-Pauli operators that can be used as observables in the decoding scheme of CWS codes and propose a procedure to obtain those observables.

  4. Machinability of Green Powder Metallurgy Components: Part II. Sintered Properties of Components Machined in Green State

    Science.gov (United States)

    Robert-Perron, Etienne; Blais, Carl; Pelletier, Sylvain; Thomas, Yannig

    2007-06-01

    The green machining process is virtually a must if the powder metallurgy (PM) industries are to overcome the lower machining performance associated with PM components. This process is known for lowering the rate of tool wear. Recent improvements in binder/lubricant technologies have led to high-green-strength systems that enable green machining. Combined with the optimized cutting parameters determined in Part I of the study, the green machining of PM components seems to be a viable process for fabricating high-performance parts on a large scale and complementing other shaping processes. This second part of our study presents a comparison between the machining behaviors and the sintered properties of components machined prior to or after sintering. The results show that the radial crush strength measured on rings machined in their green state is equal to that of parts machined after sintering.

  5. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe the codes succinctly using Gröbner bases.

  6. "Mommy Blogs" and the Vaccination Exemption Narrative: Results From A Machine-Learning Approach for Story Aggregation on Parenting Social Media Sites.

    Science.gov (United States)

    Tangherlini, Timothy R; Roychowdhury, Vwani; Glenn, Beth; Crespi, Catherine M; Bandari, Roja; Wadia, Akshay; Falahi, Misagh; Ebrahimzadeh, Ehsan; Bastani, Roshan

    2016-11-22

    Social media offer an unprecedented opportunity to explore how people talk about health care at a very large scale. Numerous studies have shown the importance of websites with user forums for people seeking information related to health. Parents turn to some of these sites, colloquially referred to as "mommy blogs," to share concerns about children's health care, including vaccination. Although substantial work has considered the role of social media, particularly Twitter, in discussions of vaccination and other health care-related issues, there has been little work on describing the underlying structure of these discussions and the role of persuasive storytelling, particularly on sites with no limits on post length. Understanding the role of persuasive storytelling at Internet scale provides useful insight into how people discuss vaccinations, including exemption-seeking behavior, which has been tied to a recent diminution of herd immunity in some communities. To develop an automated and scalable machine-learning method for story aggregation on social media sites dedicated to discussions of parenting. We wanted to discover the aggregate narrative frameworks to which individuals, through their exchange of experiences and commentary, contribute over time in a particular topic domain. We also wanted to characterize temporal trends in these narrative frameworks on the sites over the study period. To ensure that our data capture long-term discussions and not short-term reactions to recent events, we developed a dataset of 1.99 million posts contributed by 40,056 users and viewed 20.12 million times indexed from 2 parenting sites over a period of 105 months. Using probabilistic methods, we determined the topics of discussion on these parenting sites. We developed a generative statistical-mechanical narrative model to automatically extract the underlying stories and story fragments from millions of posts. We aggregated the stories into an overarching narrative framework

  7. Machine Intelligence

    Science.gov (United States)

    2013-03-01

    Only fragments of this report are indexed: list-of-figures entries (16: Aggregate sample observation results; 17: Selected subset composition results) and excerpts from Section 3.2, Continuous State Space Abstraction: "…encourages exploration of the new tiles. Given a Markov Decision Problem (MDP) defined over a set of states S and … to make optimal choices in sequential decision making problems. The problem environment can be compactly expressed as a Markov Decision Process (MDP…"

  8. Active Control of Machine Tool Chatter

    OpenAIRE

    Håkansson, Lars; Claesson, Ingvar; Lagö, Thomas L.

    1999-01-01

    In the turning operation chatter or vibration is a frequent problem, which affects the result of the machining, and, in particular, the surface finish. Tool life is also influenced by vibration. Severe acoustic noise in the working environment frequently occurs as a result of dynamic motion between the cutting tool and the workpiece. By proper machine design, e.g. improved stiffness of the machine structure, the problem of relative dynamic motion between cutting tool and workpiece may be part...

  9. The Complexity of Abstract Machines

    Directory of Open Access Journals (Sweden)

    Beniamino Accattoli

    2017-01-01

    Full Text Available The lambda-calculus is a peculiar computational model whose definition does not come with a notion of machine. Unsurprisingly, implementations of the lambda-calculus have been studied for decades. Abstract machines are implementation schemas for fixed evaluation strategies that are a compromise between theory and practice: they are concrete enough to provide a notion of machine and abstract enough to avoid the many intricacies of actual implementations. There is an extensive literature about abstract machines for the lambda-calculus, and yet—quite mysteriously—the efficiency of these machines with respect to the strategy that they implement has almost never been studied. This paper provides an unusual introduction to abstract machines, based on the complexity of their overhead with respect to the length of the implemented strategies. It is conceived to be a tutorial, focusing on the case study of implementing the weak head (call-by-name) strategy, and yet it is an original re-elaboration of known results. Moreover, some of the observations contained here have never appeared in print before.

  10. An Experimental Study on Electro Chemical Machining of Microelectrode

    Institute of Scientific and Technical Information of China (English)

    ZHANG Liao-yuan; LIU Yao

    2006-01-01

    This paper puts forward a new method for machining microelectrodes by electrochemical machining (ECM) and plastic deformation theory. The procedure of this method is first to machine the microelectrode according to the basic rules of ECM theory. Then, with the change of the ECM machining parameters, a load is exerted on one end of the microelectrode. As a result, elastic and plastic deformation is produced at the machining section and the microelectrode diameter is reduced. It has been proved that the proposed method can determine the optimum machining parameters to machine a microelectrode of Cu.

  11. Plug Into "The Modernizing Machine"! Danish University Reform and Its Transformable Academic Subjectivities

    Science.gov (United States)

    Krejsler, John Benedicto

    2013-01-01

    "The modernizing machine" codes individual bodies, things, and symbols with images from New Public Management, neo-liberal, and Knowledge Economy discourses. Drawing on Deleuze and Guattari's concept of machines, this article explores how "the modernizing machine" produces neo-liberal modernization of the public sector. Taking…

  12. Classical Holographic Codes

    CERN Document Server

    Brehm, Enrico M

    2016-01-01

    In this work, we introduce classical holographic codes. These can be understood as concatenated probabilistic codes and can be represented as networks uniformly covering hyperbolic space. In particular, classical holographic codes can be interpreted as maps from bulk degrees of freedom to boundary degrees of freedom. Interestingly, they are shown to exhibit features similar to those expected from the AdS/CFT correspondence. Among these are a version of the Ryu-Takayanagi formula and intriguing properties regarding bulk reconstruction and boundary representations of bulk operations. We discuss the relation of our findings with expectations from AdS/CFT and, in particular, with recent results from quantum error correction.

  13. Design of Demining Machines

    CERN Document Server

    Mikulic, Dinko

    2013-01-01

    In a constant effort to eliminate the mine danger, the international mine action community has been improving the safety, efficiency and cost-effectiveness of clearance methods. Demining machines have become necessary in humanitarian demining, where mechanization provides greater safety and productivity. Design of Demining Machines describes the development and testing of modern demining machines in humanitarian demining. Relevant data for the design of demining machines are included to explain the machinery implemented and some innovative and inspiring development solutions. Development technologies, companies and projects are discussed to provide a comprehensive estimate of the effects of various design factors and to support the proper selection of optimal parameters for designing demining machines. Covering the dynamic processes occurring in machine assemblies and their components to give a broader understanding of the demining machine as a whole, Design of Demining Machines is primarily tailored as a tex...

  14. Applied machining technology

    CERN Document Server

    Tschätsch, Heinz

    2010-01-01

    Machining and cutting technologies are still crucial for many manufacturing processes. This reference presents all important machining processes in a comprehensive and coherent way. It includes many examples of concrete calculations, problems and solutions.

  15. Machining with abrasives

    CERN Document Server

    Jackson, Mark J

    2011-01-01

    Abrasive machining is key to obtaining the desired geometry and surface quality in manufacturing. This book discusses the fundamentals and advances in the abrasive machining processes. It provides a complete overview of developing areas in the field.

  16. Women, Men, and Machines.

    Science.gov (United States)

    Form, William; McMillen, David Byron

    1983-01-01

    Data from the first national study of technological change show that proportionately more women than men operate machines, are more exposed to machines that have alienating effects, and suffer more from the negative effects of technological change. (Author/SSH)

  17. Brain versus Machine Control.

    Directory of Open Access Journals (Sweden)

    Jose M Carmena

    2004-12-01

    Full Text Available Dr. Octopus, the villain of the movie "Spiderman 2", is a fusion of man and machine. Neuroscientist Jose Carmena examines the facts behind this fictional account of a brain-machine interface.

  18. Fingerprinting Communication and Computation on HPC Machines

    Energy Technology Data Exchange (ETDEWEB)

    Peisert, Sean

    2010-06-02

    How do we identify what is actually running on high-performance computing systems? Names of binaries, dynamic libraries loaded, or other elements in a submission to a batch queue can give clues, but binary names can be changed, and libraries provide limited insight and resolution on the code being run. In this paper, we present a method for "fingerprinting" code running on HPC machines using elements of communication and computation. We then discuss how that fingerprint can be used to determine if the code is consistent with certain other types of codes, what a user usually runs, or what the user requested an allocation to do. In some cases, our techniques enable us to fingerprint HPC codes using runtime MPI data with a high degree of accuracy.
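
    The underlying idea can be illustrated with a toy example: summarize a run as a normalized vector of MPI call frequencies (the paper also draws on computation features) and compare it against fingerprints of known codes with a similarity measure. The call profiles below are invented, not data from the paper.

        import numpy as np

        CALLS = ["MPI_Send", "MPI_Recv", "MPI_Allreduce", "MPI_Bcast", "MPI_Wait"]

        def fingerprint(call_counts):
            """Normalized vector of MPI call frequencies (a crude stand-in for the
            communication/computation features used in the paper)."""
            v = np.array([call_counts.get(c, 0) for c in CALLS], dtype=float)
            return v / (np.linalg.norm(v) or 1.0)

        def cosine(a, b):
            return float(a @ b)

        known = {
            "stencil-code": fingerprint({"MPI_Send": 900, "MPI_Recv": 900, "MPI_Wait": 850}),
            "global-reduce-heavy": fingerprint({"MPI_Allreduce": 700, "MPI_Bcast": 120}),
        }
        observed = fingerprint({"MPI_Send": 450, "MPI_Recv": 440, "MPI_Wait": 400,
                                "MPI_Allreduce": 20})

        best = max(known, key=lambda k: cosine(known[k], observed))
        print("most similar known fingerprint:", best)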

  19. Study of on-machine error identification and compensation methods for micro machine tools

    Science.gov (United States)

    Wang, Shih-Ming; Yu, Han-Jen; Lee, Chun-Yi; Chiu, Hung-Sheng

    2016-08-01

    Micro machining plays an important role in the manufacturing of miniature products which are made of various materials with complex 3D shapes and tight machining tolerance. To further improve the accuracy of a micro machining process without increasing the manufacturing cost of a micro machine tool, an effective machining error measurement method and a software-based compensation method are essential. To avoid introducing additional errors caused by the re-installment of the workpiece, the measurement and compensation method should be on-machine conducted. In addition, because the contour of a miniature workpiece machined with a micro machining process is very tiny, the measurement method should be non-contact. By integrating the image re-constructive method, camera pixel correction, coordinate transformation, the error identification algorithm, and trajectory auto-correction method, a vision-based error measurement and compensation method that can on-machine inspect the micro machining errors and automatically generate an error-corrected numerical control (NC) program for error compensation was developed in this study. With the use of the Canny edge detection algorithm and camera pixel calibration, the edges of the contour of a machined workpiece were identified and used to re-construct the actual contour of the work piece. The actual contour was then mapped to the theoretical contour to identify the actual cutting points and compute the machining errors. With the use of a moving matching window and calculation of the similarity between the actual and theoretical contour, the errors between the actual cutting points and theoretical cutting points were calculated and used to correct the NC program. With the use of the error-corrected NC program, the accuracy of a micro machining process can be effectively improved. To prove the feasibility and effectiveness of the proposed methods, micro-milling experiments on a micro machine tool were conducted, and the results

  20. Wavefront coding with adaptive optics

    Science.gov (United States)

    Agbana, Temitope E.; Soloviev, Oleg; Bezzubik, Vitalii; Patlan, Vsevolod; Verhaegen, Michel; Vdovin, Gleb

    2015-03-01

    We have implemented an extended depth of field optical system by wavefront coding with a micromachined membrane deformable mirror. This approach provides a versatile extension to standard wavefront coding based on fixed phase mask. First experimental results validate the feasibility of the use of adaptive optics for variable depth wavefront coding in imaging optical systems.

  1. Combining human and machine learning for morphological analysis of galaxy images

    CERN Document Server

    Kuminski, Evan; Wallin, John; Shamir, Lior

    2014-01-01

    The increasing importance of digital sky surveys collecting many millions of galaxy images has reinforced the need for robust methods that can perform morphological analysis of large galaxy image databases. Citizen science initiatives such as Galaxy Zoo showed that large datasets of galaxy images can be analyzed effectively by non-scientist volunteers, but since databases generated by robotic telescopes grow much faster than the processing power of any group of citizen scientists, it is clear that computer analysis is required. Here we propose to use citizen science data for training machine learning systems, and show experimental results demonstrating that machine learning systems can be trained with citizen science data. Our findings show that the performance of machine learning depends on the quality of the data, which can be improved by using samples that have a high degree of agreement between the citizen scientists. The source code of the method is publicly available.
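
    A schematic sketch of the paper's central observation, that training on high-agreement citizen-science labels yields better classifiers than training on all labels, is given below; the features, labels, and the 0.8 agreement threshold are synthetic placeholders.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 2000
        features = rng.normal(size=(n, 10))            # stand-in for image morphology features
        true_label = (features[:, 0] > 0).astype(int)  # e.g. spiral vs. elliptical
        agreement = rng.uniform(0.5, 1.0, size=n)      # fraction of volunteers who agreed

        # Volunteer label: correct with probability equal to the agreement level.
        vote = np.where(rng.uniform(size=n) < agreement, true_label, 1 - true_label)

        for thr in (0.5, 0.8):
            keep = agreement >= thr
            score = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                                    features[keep], vote[keep], cv=5).mean()
            print(f"agreement >= {thr}: cv accuracy on volunteer labels = {score:.3f}")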

  2. Doubly Fed Induction Machine Control For Wind Energy Conversion System

    Science.gov (United States)

    2009-06-01

    Only code fragments are indexed for this record in place of an abstract: MATLAB configuration code machine-generated by Xilinx's System Generator (dated 18-Dec-2008), setting the top-level language to VHDL and the entity name to 'code'.

  3. An object-oriented extension for debugging the virtual machine

    Energy Technology Data Exchange (ETDEWEB)

    Pizzi, R.G. Jr. [California Univ., Davis, CA (United States)

    1994-12-01

    A computer is nothing more than a virtual machine programmed by source code to perform a task. The program's source code expresses abstract constructs which are compiled into some lower-level target language. When a virtual machine breaks, it can be very difficult to debug because typical debuggers provide only low-level target implementation information to the software engineer. We believe that the debugging task can be simplified by introducing aspects of the abstract design and data into the source code. We introduce OODIE, an object-oriented extension to programming languages that allows programmers to specify a virtual environment by describing the meaning of the design and data of a virtual machine. This specification is translated into symbolic information such that an augmented debugger can present engineers with a programmable debugging environment specifically tailored for the virtual machine that is to be debugged.

  4. Evaluation of panel code predictions with experimental results of inlet performance for a 17-inch ducted prop/fan simulator operating at Mach 0.2

    Science.gov (United States)

    Boldman, D. R.; Iek, C.; Hwang, D. P.; Jeracki, R. J.; Larkin, M.; Sorin, G.

    1991-01-01

    An axisymmetric panel code was used to evaluate a series of ducted propeller inlets. The inlets were tested in the Lewis 9- by 15-Foot Low Speed Wind Tunnel. Three basic inlets having ratios of shroud length to propeller diameter of 0.2, 0.4, and 0.5 were tested with the Pratt and Whitney ducted prop/fan simulator. A fourth hybrid inlet consisting of the shroud from the shortest basic inlet coupled with the spinner from the largest basic inlet was also tested. This latter configuration represented the shortest overall inlet. The simulator duct diameter at the propeller face was 17.25 inches. The short and long spinners provided hub-to-tip ratios of 0.44 at the propeller face. The four inlets were tested at a nominal free stream Mach number of 0.2 and at angles of attack from 0 degrees to 35 degrees. The panel code method incorporated a simple two-part separation model which yielded conservative estimates of inlet separation.

  5. Machine Learning in Parliament Elections

    Directory of Open Access Journals (Sweden)

    Ahmad Esfandiari

    2012-09-01

    Full Text Available Parliament is considered one of the most important pillars of a country's governance. Parliamentary elections, and their prediction, have long been studied by scholars from various fields such as political science. Some important features are used to model the results of consultative parliament elections. These features are as follows: reputation and popularity, political orientation, tradesmen's support, clergymen's support, support from political wings and the type of supportive wing. Two parameters, reputation and popularity and the support of clergymen and religious scholars, which have more impact in reducing the prediction error of the election results, have been used as input parameters in the implementation. In this study, the Iranian parliamentary elections are modeled and predicted using learnable machines: a neural network and a neuro-fuzzy system. The neuro-fuzzy machine combines the knowledge-representation ability of fuzzy sets and the learning power of neural networks. In predicting the social and political behavior, the neural network is first trained by two learning algorithms using the training data set, and then this machine predicts the result on the test data. Next, the learning of the neuro-fuzzy inference machine is performed. Then the results of the two machines are compared.

  6. A Universal Reactive Machine

    DEFF Research Database (Denmark)

    Andersen, Henrik Reif; Mørk, Simon; Sørensen, Morten U.

    1997-01-01

    Turing showed the existence of a model universal for the set of Turing machines, in the sense that given an encoding of any Turing machine as input, the universal Turing machine simulates it. We introduce the concept of universality for reactive systems and construct a CCS process universal...

  7. Rapid Response Small Machining NNR Project 703025

    Energy Technology Data Exchange (ETDEWEB)

    Kanies, Tim

    2008-12-05

    This project was an effort to develop a machining area for small sized parts that is capable of delivering product with a quick response time. This entailed focusing efforts on leaning out specific work cells that would result in overall improvement to the entire machining area. This effort involved securing the most efficient available technologies for these areas. In the end, this incorporated preparing the small machining area for transformation to a new facility.

  8. Determination of Induction Machine Parameters by Simulation

    OpenAIRE

    Dr. E.A. Anazia; Engr. Samson Ugochukwu; Dr. J. C. Onuegbu; Engr. Onyedikachi S.N

    2016-01-01

    A 38.1-watt fractional-horsepower induction laboratory test motor model was created using MotorSolve 5.2, an electrical machine design application. Initial machine parameters such as voltage, speed, and main dimensions were selected and fed into the application, and simulations were run. The simulated results were presented and analyzed. Unlike other induction machine parameter identification processes, this method is simple, flexible and accurate since it does not involve rigorous mathe...

  9. POLYSHIFT Communications Software for the Connection Machine System CM-200

    Directory of Open Access Journals (Sweden)

    William George

    1994-01-01

    Full Text Available We describe the use and implementation of a polyshift function PSHIFT for circular shifts and end-off shifts. Polyshift is useful in many scientific codes using regular grids, such as finite-difference codes in several dimensions, multigrid codes, molecular dynamics computations, and lattice gauge physics computations such as quantum chromodynamics (QCD) calculations. Our implementation of the PSHIFT function on the Connection Machine systems CM-2 and CM-200 offers a speedup of up to a factor of 3–4 compared with CSHIFT when the local data motion within a node is small. The PSHIFT routine is included in the Connection Machine Scientific Software Library (CMSSL).
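
    The semantics of the two shift types can be sketched in NumPy (PSHIFT itself is a CMSSL library routine on the Connection Machine, not Python): a circular shift wraps values that leave one edge of the grid back in at the opposite edge, while an end-off shift discards them and fills with a boundary value.

        import numpy as np

        def circular_shift(a, k, axis=0):
            """Values shifted off one edge re-enter at the other (periodic grid)."""
            return np.roll(a, k, axis=axis)

        def end_off_shift(a, k, fill=0.0, axis=0):
            """Values shifted off the edge are discarded and replaced by 'fill'."""
            out = np.full_like(a, fill)
            src = np.roll(a, k, axis=axis)
            idx = [slice(None)] * a.ndim
            idx[axis] = slice(k, None) if k >= 0 else slice(None, k)
            idx = tuple(idx)
            out[idx] = src[idx]
            return out

        grid = np.arange(8.0)
        print(circular_shift(grid, 2))   # [6. 7. 0. 1. 2. 3. 4. 5.]
        print(end_off_shift(grid, 2))    # [0. 0. 0. 1. 2. 3. 4. 5.]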

  10. Parameters optimization in a fission-fusion system with a mirror machine based neutron source

    Science.gov (United States)

    Yurov, D. V.; Anikeev, A. V.; Bagryansky, P. A.; Brednikhin, S. A.; Frolov, S. A.; Lezhnin, S. I.; Prikhodko, V. V.

    2012-06-01

    The utilization of long-lived fission products is a problem of high importance for modern nuclear reactor technology. To address this task, BINP, jointly with NSI RAS, is developing a conceptual design of a hybrid sub-critical minor-actinides burner with a neutron source based on the gas dynamic mirror machine (GDT). A number of modelling tools were created to calculate the main parameters of the device. The first of the codes, GENESYS, is a zero-dimensional code designed for numerical investigation of plasma dynamics in a GDT-based neutron source. The code contains a Monte Carlo module for the determination of the linear neutron emission intensity along the machine axis. The calculation of the fuel blanket characteristics was implemented by means of the static Monte Carlo code NMC. A subcritical core, which had been previously analyzed by the OECD-NEA, was used as a template for the fuel blanket of the modelled device. This article describes the codes used and recent results of the parameter optimization of the described system. In particular, the optimum emission zone length of the source and the dependence of the core multiplicity on the buffer zone thickness were determined.

  11. Comparison and validation of the results of the AZNHEX v.1.0 code with the MCNP code simulating the core of a fast reactor cooled with sodium

    Energy Technology Data Exchange (ETDEWEB)

    Galicia A, J.; Francois L, J. L.; Bastida O, G. E. [UNAM, Facultad de Ingenieria, Departamento de Sistemas Energeticos, Ciudad Universitaria, 04510 Ciudad de Mexico (Mexico); Esquivel E, J., E-mail: blink19871@hotmail.com [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2016-09-15

    The development of the AZTLAN platform for the analysis and design of nuclear reactors is led by the Instituto Nacional de Investigaciones Nucleares (ININ) and divided into four working groups, which have well-defined activities for achieving significant progress in this project, both individually and jointly. Among these working groups is the users group, whose main task is to use the codes that make up the AZTLAN platform and provide feedback to the developers, so that the final versions of the codes are efficient and, at the same time, reliable and easy to understand. In this paper we present the results provided by the AZNHEX v.1.0 code when simulating the core of a sodium-cooled fast reactor at steady state. The validation of these results is a fundamental part of the platform development and a responsibility of the users group, so in this research the results obtained with AZNHEX are compared and analyzed with those provided by the Monte Carlo code MCNP-5, software widely used and recognized worldwide. A description of the methodology used with MCNP-5 for the calculation of the variables of interest is also presented, together with the differences obtained with respect to the values calculated with AZNHEX. (Author)

  12. Asynchronized synchronous machines

    CERN Document Server

    Botvinnik, M M

    1964-01-01

    Asynchronized Synchronous Machines focuses on the theoretical research on asynchronized synchronous (AS) machines, which are "hybrids" of synchronous and induction machines that can operate with slip. Topics covered in this book include the initial equations; the vector diagram of an AS machine; regulation in cases of deviation from the law of full compensation; parameters of the excitation system; and the schematic diagram of an excitation regulator. The possible applications of AS machines and their calculations in certain cases are also discussed. This publication is beneficial for students and indiv

  13. Quantum machine learning.

    Science.gov (United States)

    Biamonte, Jacob; Wittek, Peter; Pancotti, Nicola; Rebentrost, Patrick; Wiebe, Nathan; Lloyd, Seth

    2017-09-13

    Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not to produce efficiently, so it is reasonable to postulate that quantum computers may outperform classical computers on machine learning tasks. The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges are still considerable.

  14. Precision machine design

    CERN Document Server

    Slocum, Alexander H

    1992-01-01

    This book is a comprehensive engineering exploration of all the aspects of precision machine design - both component and system design considerations for precision machines. It addresses both theoretical analysis and practical implementation, providing many real-world design case studies as well as numerous examples of existing components and their characteristics. Fast becoming a classic, this book includes examples of analysis techniques, along with the philosophy of the solution method. It explores the physics of errors in machines, how such knowledge can be used to build an error budget for a machine, and how error budgets can be used to design more accurate machines.

  15. Experimental investigation of neutronic characteristics of the IR-8 reactor to confirm the results of calculations by MCU-PTR code

    Energy Technology Data Exchange (ETDEWEB)

    Surkov, A. V., E-mail: surkov.andrew@gmail.com; Kochkin, V. N.; Pesnya, Yu. E.; Nasonov, V. A.; Vihrov, V. I.; Erak, D. Yu. [National Research Center Kurchatov Institute (Russian Federation)

    2015-12-15

    A comparison of measured and calculated neutronic characteristics (fast neutron flux and fission rate of {sup 235}U) in the core and reflector of the IR-8 reactor is presented. The irradiation devices equipped with neutron activation detectors were prepared. The determination of fast neutron flux was performed using the {sup 54}Fe (n, p) and {sup 58}Ni (n, p) reactions. The {sup 235}U fission rate was measured using uranium dioxide with 10% enrichment in {sup 235}U. The determination of specific activities of detectors was carried out by measuring the intensity of characteristic gamma peaks using the ORTEC gamma spectrometer. Neutron fields in the core and reflector of the IR-8 reactor were calculated using the MCU-PTR code.

  16. Determination of the dead layer and full-energy peak efficiency of an HPGe detector using the MCNP code and experimental results

    Directory of Open Access Journals (Sweden)

    M Moeinifar

    2017-02-01

    Full Text Available One important factor in using a High Purity Germanium (HPGe) detector is its efficiency, which depends strongly on geometry and absorption factors, so that when the source-detector geometry is changed, the detector efficiency must be re-measured. The best way of determining the efficiency of a detector is to measure it with standard sources. However, since standard sources are hardly available and obtaining them is time consuming, determining the efficiency by simulation, which gives sufficient accuracy in less time, is important. In this study, the dead layer thickness and the full-energy peak efficiency of an HPGe detector were obtained by Monte Carlo simulation using the MCNPX code. For this, we first measured gamma-ray spectra for different sources placed at various distances from the detector and stored the measured spectra. The measurements were then simulated under similar conditions. At first, the whole volume of germanium was regarded as active, and the spectra obtained from the calculation were compared with the corresponding experimental spectra; the comparison showed considerable differences. By making small variations in the dead layer thickness of the detector (about a few hundredths of a millimeter) in the simulation program, we tried to remove these differences, and in this way a dead layer of 0.57 mm was obtained for the detector. By incorporating this value for the dead layer in the simulation program, the full-energy peak efficiency of the detector was then obtained both by experiment and by simulation, for various sources at various distances from the detector, and the two methods showed good agreement. One can thus conclude that, using the MCNP code and modeling the exact measurement system, the efficiency of an HPGe detector for various source-detector geometries can be calculated with rather good accuracy by simulation.

  17. Cerebral aneurysm treatment using flow-diverting stents: in-vivo visualization of flow alterations by parametric colour coding to predict aneurysmal occlusion: preliminary results.

    Science.gov (United States)

    Gölitz, Philipp; Struffert, Tobias; Rösch, Julie; Ganslandt, Oliver; Knossalla, Frauke; Doerfler, Arnd

    2015-02-01

    After deployment of flow-diverting stents (FDS), complete aneurysm occlusion is not predictable. This study investigated whether parametric colour coding (PCC) could allow in vivo visualization of flow alterations induced by FDS and identify favourable or adverse flow modulations. Thirty-six patients treated by FDS were analyzed. Preinterventional and postinterventional DSA-series were postprocessed by PCC and time-density curves (TDCs) were calculated. The parameters aneurysmal inflow, outflow, and relative time-to-peak (rTTP) were calculated. Preinterventional and postinterventional values were compared and related to occlusion rate. Postinterventional inflow showed a mean reduction of 37%, outflow of 51%, and rTTP a prolongation of 82%. Saccular aneurysm occlusion occurred if a reduction of at least 15% was achieved for inflow and 35% for outflow (sensitivity: 89%, specificity: 82%). Unchanged outflow and a slightly prolonged rTTP were associated with growth in one fusiform aneurysm. PCC allows visualization of flow alterations after FDS treatment, illustrating "flow diverting effects" by the TDC shape and indicating mainly aneurysmal outflow and lesser inflow changes. Quantifiable parameters (inflow, outflow, rTTP) can be obtained, thresholds for predicting aneurysm occlusion determined, and adverse flow modulations assumed. As a rapid intraprocedural tool, PCC might support the decision to implant more than one FDS. • After deployment of a flow-diverting stent, complete aneurysm occlusion is unpredictable. • Parametric colour coding offers new options for visualizing in vivo flow alterations non-invasively. • Quantifiable parameters, i.e., aneurysmal inflow/outflow can be obtained allowing prognostic stratification. • Rapid, intraprocedural application allows treatment monitoring, potentially contributing to patient safety.

  18. Classifying injury narratives of large administrative databases for surveillance-A practical approach combining machine learning ensembles and human review.

    Science.gov (United States)

    Marucci-Wellman, Helen R; Corns, Helen L; Lehto, Mark R

    2017-01-01

    Injury narratives are now available in real time and include useful information for injury surveillance and prevention. However, manual classification of the cause or events leading to injury found in large batches of narratives, such as workers compensation claims databases, can be prohibitive. In this study we compare the utility of four machine learning algorithms (Naïve Bayes single-word and bi-gram models, Support Vector Machine, and Logistic Regression) for classifying narratives into Bureau of Labor Statistics Occupational Injury and Illness event leading to injury classifications for a large workers compensation database. These algorithms are known to do well classifying narrative text and are fairly easy to implement with off-the-shelf software such as Python. We propose human-machine learning ensemble approaches which maximize the power and accuracy of the algorithms for machine-assigned codes and allow for strategic filtering of rare, emerging or ambiguous narratives for manual review. We compare human-machine approaches based on filtering on the prediction strength of the classifier vs. agreement between algorithms. Regularized Logistic Regression (LR) was the best performing algorithm alone. Using this algorithm and filtering out the bottom 30% of predictions for manual review resulted in high accuracy (overall sensitivity/positive predictive value of 0.89) of the final machine-human coded dataset. The best pairings of algorithms included Naïve Bayes with Support Vector Machine: the triple ensemble requiring agreement among the NB single-word, NB bi-gram, and SVM classifiers had very high performance (0.93 overall sensitivity/positive predictive value, with high sensitivity and positive predictive values across both large and small categories), leaving 41% of the narratives for manual review. Integrating LR into this ensemble mix improved performance only slightly. For large administrative datasets we propose incorporating methods based on human-machine pairings such as those described here.
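
    A minimal sketch of the filtering idea described above, assuming scikit-learn: a regularized logistic regression is trained on narrative text (TF-IDF unigrams and bi-grams), and the narratives whose prediction strength falls in the bottom 30% are routed to manual review. The function name, feature choices, and data fields are illustrative assumptions, not the authors' implementation.

      # Hedged sketch: keep strong machine-assigned codes, send weak ones to review.
      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      def split_for_review(narratives, labels, new_narratives, review_fraction=0.30):
          """Return machine codes plus indices kept vs. routed to manual review."""
          model = make_pipeline(
              TfidfVectorizer(ngram_range=(1, 2)),    # single-word and bi-gram features
              LogisticRegression(max_iter=1000),      # regularized LR (L2 by default)
          )
          model.fit(narratives, labels)
          proba = model.predict_proba(new_narratives)
          strength = proba.max(axis=1)                  # prediction strength per narrative
          cutoff = np.quantile(strength, review_fraction)
          review_idx = np.where(strength < cutoff)[0]   # weakest ~30% -> manual review
          keep_idx = np.where(strength >= cutoff)[0]    # rest keep the machine code
          return model.predict(new_narratives), keep_idx, review_idx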

  19. SIMULATED MACHINES FOR SOIL TREATMENT FOR FELLING AND SLASH AREAS

    Directory of Open Access Journals (Sweden)

    Alekseev A. E.

    2013-10-01

    Full Text Available The article presents the results of a simulation of functioning machines for tillage in AutoCAD Mechanical environment, the algorithm of machine work and the results of numerical experiments on the developed model

  20. Effect of Assembling and Machining Errors on Wavefront Coding Imaging Performance of Cubic Phase Mask

    Institute of Scientific and Technical Information of China (English)

    张效栋; 张林; 刘现磊; 姜丽丽

    2015-01-01

    In wavefront coding, a cubic phase mask extends the depth of field of an imaging system through optical modulation and subsequent image processing. The modulation can be described by a generalized pupil function, which characterizes how the phase of the light wave changes after it passes through the mask. As the key optical element responsible for providing the designed spatially varying path length, the cubic phase mask transfers its machining and assembling errors (MAEs) directly to the imaging performance by deviating the modulation. In this paper, by deriving the generalized pupil function under various error conditions, the influence of the assembling and machining errors of the cubic phase mask on the point spread function (PSF) and the modulation transfer function (MTF) is obtained. Assessing these relations yields the influence of the MAEs on the imaging performance of the system and provides basic guidance for the assembling and machining processes. Among the errors analyzed, the assembling error about the Z axis and the sinusoidal form error caused by vibration during machining affect the MTF most strongly. Both should therefore be avoided as far as possible, and the peak-to-valley value of the sinusoidal form error should be kept within 0.5 μm.

  1. The fast code

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)

    1996-09-01

    The FAST Code which is capable of determining structural loads on a flexible, teetering, horizontal axis wind turbine is described and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)

  2. New quantum codes constructed from quaternary BCH codes

    Science.gov (United States)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distances of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes are determined to be much larger than the results given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Secondly, we apply a combinatorial construction to the imprimitive BCH codes together with their primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  3. NOVEL BIPHASE CODE -INTEGRATED SIDELOBE SUPPRESSION CODE

    Institute of Scientific and Technical Information of China (English)

    Wang Feixue; Ou Gang; Zhuang Zhaowen

    2004-01-01

    A kind of novel binary phase code named the sidelobe suppression code is proposed in this paper. It is defined as the code whose corresponding optimal sidelobe suppression filter outputs the minimum sidelobes. It is shown that there do exist sidelobe suppression codes better than the conventional optimal codes, the Barker codes. For example, the sidelobe suppression code of length 11 with a filter of length 39 achieves a sidelobe level up to 17 dB better than that of the Barker code with the same code length and filter length.
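
    To make the idea of a sidelobe suppression filter concrete, the sketch below computes a least-squares mismatched filter whose convolution with a given binary code approximates a single peak with small sidelobes. This is a generic textbook construction offered as an illustration only; the paper's own optimality criterion (minimum peak sidelobe) and its length-11 code result are not reproduced here.

      # Hedged sketch: least-squares filter that suppresses a code's correlation sidelobes.
      import numpy as np

      def ls_sidelobe_filter(code, filt_len):
          code = np.asarray(code, dtype=float)
          n, m = len(code), filt_len
          # Convolution matrix A such that A @ h equals the convolution of code with h
          A = np.zeros((n + m - 1, m))
          for j in range(m):
              A[j:j + n, j] = code
          # Desired output: unit peak in the centre, zero sidelobes elsewhere
          d = np.zeros(n + m - 1)
          d[(n + m - 1) // 2] = 1.0
          h, *_ = np.linalg.lstsq(A, d, rcond=None)
          return h

      barker11 = [1, 1, 1, -1, -1, -1, 1, -1, -1, 1, -1]   # the length-11 Barker code
      h = ls_sidelobe_filter(barker11, filt_len=39)
      output = np.convolve(barker11, h)                    # peak with suppressed sidelobes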

  4. COMPUTER SIMULATION OF A STIRLING REFRIGERATING MACHINE

    Directory of Open Access Journals (Sweden)

    V.V. Trandafilov

    2015-10-01

    Full Text Available In the present numerical research, a mathematical model for precise performance simulation and detailed behavior of a Stirling refrigerating machine is considered. The mathematical model for an alpha Stirling refrigerating machine with helium as the working fluid will be useful in the optimization of these machines' mechanical design. A complete non-linear mathematical model of the machine is developed, including the thermodynamics of helium, heat transfer from the walls, as well as heat transfer and gas resistance in the regenerator. Non-dimensional groups are derived, and the mathematical model is solved numerically. Important design parameters are varied and their effect on Stirling refrigerating machine performance is determined. The simulation results of the Stirling refrigerating machine, which include heat transfer and coefficient of performance, are presented.

  5. High frequency group pulse electrochemical machining

    Institute of Scientific and Technical Information of China (English)

    WU Gaoyang; ZHANG Zhijing; ZHANG Weimin; TANG Xinglun

    2007-01-01

    In the process of machining ultrathin metal structure parts, the signal composition of the high frequency group pulse, the influence of frequency on reverse current, and the design of the cathode in high frequency group pulse electrochemical machining (HGPECM) are discussed. Process experiments were carried out. Results indicate that HGPECM can greatly improve the characteristics of the inter-electrode gap flow field, reduce electrode passivation, and obtain high machining quality. The machining quality is obviously improved by increasing the main pulse frequency. The dimensional accuracy reaches 30-40 μm and the roughness attained is 0.30-0.35 μm. High frequency group pulse electrochemical machining can be successfully used in machining micro-parts.

  6. X-ray machine for general radiology and mammography based on room temperature solid state detector coupled to photon-counting electronics. Evaluation of results

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez-Bayo, J.; Sentis, M.; Tortajada, M.; Ganau, S.; Tortajada, L. [UDIAT CD Sabadell, Barcelona (Spain); Chmeissani, M.; Blanchot, G.; Garcia, J.; Maiorino, M.; Puigdengoles, C. [Centre Inst. de Fisica d' altes energies, UAB Campus Bellaterra, Barcelona (Spain); Lozano, M.; Martinez, R.; Pellegrini, G.; Ullan, M. [CNM-CSIC UAB, Campus Bellaterra, Barcelona (Spain); Kainberger, F. [Univ. of Vienna, (Austria); Montage, J.P. [Hopital d' enfant Armand Trousseau, Paris (France)

    2007-06-15

    Dear-Mama (detection of early markers in mammography) is an EU-funded project (FP5) to develop an X-ray medical imaging device based on a room temperature solid-state pixel detector coupled to photon-counting readout electronics via bump bonding. The technology used allows signal-to-noise enhancing and thus enables detection of low-contrast anomalies such as micro-calcifications. In this paper we present the results of the preliminary clinical evaluation. (orig.)

  7. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  8. Parallelization of a beam dynamics code and first large scale radio frequency quadrupole simulations

    Directory of Open Access Journals (Sweden)

    J. Xu

    2007-01-01

    Full Text Available The design and operation support of hadron (proton and heavy-ion) linear accelerators require substantial use of beam dynamics simulation tools. The beam dynamics code TRACK has been originally developed at Argonne National Laboratory (ANL) to fulfill the special requirements of the rare isotope accelerator (RIA) accelerator systems. From the beginning, the code has been developed to make it useful in the three stages of a linear accelerator project, namely, the design, commissioning, and operation of the machine. To realize this concept, the code has unique features such as end-to-end simulations from the ion source to the final beam destination and automatic procedures for tuning of a multiple charge state heavy-ion beam. The TRACK code has become a general beam dynamics code for hadron linacs and has found wide applications worldwide. Until recently, the code has remained serial except for a simple parallelization used for the simulation of multiple seeds to study the machine errors. To speed up computation, the TRACK Poisson solver has been parallelized. This paper discusses different parallel models for solving the Poisson equation with the primary goal to extend the scalability of the code onto 1024 and more processors of the new generation of supercomputers known as BlueGene (BG/L). Domain decomposition techniques have been adapted and incorporated into the parallel version of the TRACK code. To demonstrate the new capabilities of the parallelized TRACK code, the dynamics of a 45 mA proton beam represented by 10^{8} particles has been simulated through the 325 MHz radio frequency quadrupole and initial accelerator section of the proposed FNAL proton driver. The results show the benefits and advantages of large-scale parallel computing in beam dynamics simulations.
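
    The domain-decomposition idea behind the parallelized Poisson solver can be caricatured with a one-dimensional toy, shown below. Each block performs a Jacobi sweep using only its own cells plus one neighbouring ("ghost") value on each side; in an MPI code these ghost values would be exchanged between processes after every sweep. This is an illustrative sketch only and makes no claim about the discretisation actually used in TRACK.

      # Hedged toy: blockwise Jacobi iteration for -u'' = f on a unit interval.
      import numpy as np

      def poisson_dd(f, h, n_blocks=4, sweeps=5000):
          n = len(f)
          u = np.zeros(n + 2)                      # includes the two boundary points
          bounds = np.linspace(1, n + 1, n_blocks + 1, dtype=int)
          for _ in range(sweeps):
              new = u.copy()
              for b in range(n_blocks):            # each block stands in for one process
                  lo, hi = bounds[b], bounds[b + 1]
                  # update uses only the block's cells plus one ghost value per side
                  new[lo:hi] = 0.5 * (u[lo - 1:hi - 1] + u[lo + 1:hi + 1]
                                      + h * h * f[lo - 1:hi - 1])
              u = new                              # "ghost exchange" before the next sweep
          return u[1:-1]

      n = 64
      u = poisson_dd(np.ones(n), h=1.0 / (n + 1))  # solves -u'' = 1 with u(0) = u(1) = 0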

  9. The cognitive approach to conscious machines

    CERN Document Server

    Haikonen, Pentti O

    2003-01-01

    Could a machine have an immaterial mind? The author argues that true conscious machines can be built, but rejects artificial intelligence and classical neural networks in favour of the emulation of the cognitive processes of the brain-the flow of inner speech, inner imagery and emotions. This results in a non-numeric meaning-processing machine with distributed information representation and system reactions. It is argued that this machine would be conscious; it would be aware of its own existence and its mental content and perceive this as immaterial. Novel views on consciousness and the mind-

  10. Efficient DS-UWB MUD Algorithm Using Code Mapping and RVM

    Directory of Open Access Journals (Sweden)

    Pingyan Shi

    2016-01-01

    Full Text Available A hybrid multiuser detection (MUD) scheme using code mapping and wrong-code recognition based on the relevance vector machine (RVM) is developed for the direct sequence ultra wide band (DS-UWB) system, to cope with multiple access interference (MAI) while keeping the computational cost low. A new MAI suppression mechanism is studied in the following steps: first, code mapping, an optimal decision function, is constructed, and the candidate code output by the matched filter is mapped to a feature space by this function. In the feature space, simulation results show that the error codes caused by MAI and the single-user mapped codes can be separated by a threshold related to the SNR of the receiver. Then, on the basis of code mapping, the RVM is used to distinguish the wrong codes from the right ones and finally correct them. Compared with traditional MUD approaches, the proposed method can considerably improve the bit error ratio (BER) performance due to its special MAI suppression mechanism. Simulation results also show that the proposed method approximately achieves the BER performance of optimal multiuser detection (OMD) while its computational complexity approximately equals that of the matched filter. Moreover, the proposed method is less sensitive to the number of users.

  11. Spectrum Assignment Algorithm for Cognitive Machine-to-Machine Networks

    Directory of Open Access Journals (Sweden)

    Soheil Rostami

    2016-01-01

    Full Text Available A novel aggregation-based spectrum assignment algorithm for Cognitive Machine-To-Machine (CM2M) networks is proposed. The introduced algorithm takes practical constraints including interference to the Licensed Users (LUs), co-channel interference (CCI) among CM2M devices, and Maximum Aggregation Span (MAS) into consideration. Simulation results show clearly that the proposed algorithm outperforms State-Of-The-Art (SOTA) algorithms in terms of spectrum utilisation and network capacity. Furthermore, the convergence analysis of the proposed algorithm verifies its high convergence rate.

  12. Remarks on generalized toric codes

    CERN Document Server

    Little, John B

    2011-01-01

    This note presents some new information on how the minimum distance of the generalized toric code corresponding to a fixed set of integer lattice points S in R^2 varies with the base field. The main results show that in some cases, over sufficiently large fields, the minimum distance of the code corresponding to a set S will be the same as that of the code corresponding to the convex hull of S. In an example, we will also discuss a [49,12,28] generalized toric code over GF(8), better than any previously known code according to M. Grassl's online tables, as of July 2011.
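
    As background, the generator matrix of a (generalized) toric code is obtained by evaluating the monomials x^a y^b, for each lattice point (a, b) in the chosen set S, at all points of the torus (F_q^*)^2. The sketch below does this over a small prime field for simplicity; the codes discussed in the note live over extension fields such as GF(8), which would require full finite-field arithmetic rather than integer arithmetic mod p.

      # Hedged sketch: generator matrix of a toy generalized toric code over GF(p), p prime.
      import numpy as np

      def toric_generator_matrix(S, p):
          torus = [(x, y) for x in range(1, p) for y in range(1, p)]   # points of (F_p^*)^2
          return np.array([[pow(x, a, p) * pow(y, b, p) % p for (x, y) in torus]
                           for (a, b) in S])

      G = toric_generator_matrix([(0, 0), (1, 0), (0, 1), (1, 1)], p=7)
      # G has |S| = 4 rows and (p - 1)^2 = 36 columns: a toy [36, 4] code over GF(7).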

  13. On constructing disjoint linear codes

    Institute of Scientific and Technical Information of China (English)

    ZHANG Weiguo; CAI Mian; XIAO Guozhen

    2007-01-01

    To produce highly nonlinear resilient functions, disjoint linear codes were originally proposed by Johansson and Pasalic in IEEE Trans. Inform. Theory, 2003, 49(2): 494-501. In this paper, an effective method for finding a set of such disjoint linear codes is presented. When n ≥ 2k, a set of [n, k] disjoint linear codes with cardinality at least 2 can be found. We also describe a result on constructing a set of [n, k] disjoint linear codes with minimum distance at least some fixed positive integer.

  14. Scalable motion vector coding

    Science.gov (United States)

    Barbarien, Joeri; Munteanu, Adrian; Verdicchio, Fabio; Andreopoulos, Yiannis; Cornelis, Jan P.; Schelkens, Peter

    2004-11-01

    Modern video coding applications require transmission of video data over variable-bandwidth channels to a variety of terminals with different screen resolutions and available computational power. Scalable video coding is needed to optimally support these applications. Recently proposed wavelet-based video codecs employing spatial domain motion compensated temporal filtering (SDMCTF) provide quality, resolution and frame-rate scalability while delivering compression performance comparable to that of the state-of-the-art non-scalable H.264-codec. These codecs require scalable coding of the motion vectors in order to support a large range of bit-rates with optimal compression efficiency. Scalable motion vector coding algorithms based on the integer wavelet transform followed by embedded coding of the wavelet coefficients were recently proposed. In this paper, a new and fundamentally different scalable motion vector codec (MVC) using median-based motion vector prediction is proposed. Extensive experimental results demonstrate that the proposed MVC systematically outperforms the wavelet-based state-of-the-art solutions. To be able to take advantage of the proposed scalable MVC, a rate allocation mechanism capable of optimally dividing the available rate among texture and motion information is required. Two rate allocation strategies are proposed and compared. The proposed MVC and rate allocation schemes are incorporated into an SDMCTF-based video codec and the benefits of scalable motion vector coding are experimentally demonstrated.
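
    The median-based motion-vector prediction at the core of the proposed MVC can be sketched as follows. The neighbour set (left, top, top-right) and the boundary handling are assumptions made for illustration; the embedded, scalable coding of the resulting residuals is not shown.

      # Hedged sketch: component-wise median prediction of motion vectors.
      import numpy as np

      def mv_prediction_residuals(mv_field):
          """mv_field: array of shape (rows, cols, 2) holding motion vectors."""
          rows, cols, _ = mv_field.shape
          residuals = np.zeros((rows, cols, 2), dtype=float)
          for r in range(rows):
              for c in range(cols):
                  neighbours = []
                  if c > 0:
                      neighbours.append(mv_field[r, c - 1])        # left neighbour
                  if r > 0:
                      neighbours.append(mv_field[r - 1, c])        # top neighbour
                  if r > 0 and c + 1 < cols:
                      neighbours.append(mv_field[r - 1, c + 1])    # top-right neighbour
                  pred = np.median(neighbours, axis=0) if neighbours else np.zeros(2)
                  residuals[r, c] = mv_field[r, c] - pred          # residual to be coded
          return residuals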

  15. Game-powered machine learning.

    Science.gov (United States)

    Barrington, Luke; Turnbull, Douglas; Lanckriet, Gert

    2012-04-24

    Searching for relevant content in a massive amount of multimedia information is facilitated by accurately annotating each image, video, or song with a large number of relevant semantic keywords, or tags. We introduce game-powered machine learning, an integrated approach to annotating multimedia content that combines the effectiveness of human computation, through online games, with the scalability of machine learning. We investigate this framework for labeling music. First, a socially-oriented music annotation game called Herd It collects reliable music annotations based on the "wisdom of the crowds." Second, these annotated examples are used to train a supervised machine learning system. Third, the machine learning system actively directs the annotation games to collect new data that will most benefit future model iterations. Once trained, the system can automatically annotate a corpus of music much larger than what could be labeled using human computation alone. Automatically annotated songs can be retrieved based on their semantic relevance to text-based queries (e.g., "funky jazz with saxophone," "spooky electronica," etc.). Based on the results presented in this paper, we find that actively coupling annotation games with machine learning provides a reliable and scalable approach to making searchable massive amounts of multimedia data.
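
    The third step of the framework, in which the learner directs the game towards the most informative clips, amounts to a standard active-learning query rule. The sketch below is a hedged stand-in using scikit-learn and least-confidence sampling; the actual system, its audio features, and the Herd It game itself are not reproduced.

      # Hedged sketch: pick the clips the current model is least sure about.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def pick_clips_for_game(X_labeled, y_labeled, X_unlabeled, n_queries=10):
          clf = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
          confidence = clf.predict_proba(X_unlabeled).max(axis=1)
          return np.argsort(confidence)[:n_queries]   # least-confident clips first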

  16. Mining discriminative class codes for multi-class classification based on minimizing generalization errors

    Science.gov (United States)

    Eiadon, Mongkon; Pipanmaekaporn, Luepol; Kamonsantiroj, Suwatchai

    2016-07-01

    Error Correcting Output Code (ECOC) has emerged as one of the promising techniques for solving multi-class classification. In the ECOC framework, a multi-class problem is decomposed into several binary ones with a coding design scheme. Despite this, finding a suitable multi-class decomposition scheme is still an open research question in machine learning. In this work, we propose a novel multi-class coding design method to mine effective and compact class codes for multi-class classification. For a given n-class problem, this method decomposes the classes into subsets by embedding a structure of binary trees. We put forward a novel splitting criterion based on minimizing generalization errors across the classes. Then, a greedy search procedure is applied to explore the optimal tree structure for the problem domain. We run experiments on many multi-class UCI datasets. The experimental results show that our proposed method can achieve better classification performance than the common ECOC design methods.
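
    For readers unfamiliar with the baseline, the ECOC framework that the paper improves upon is available off the shelf; the sketch below uses scikit-learn's OutputCodeClassifier with a random code matrix. The paper's contribution, a tree-structured code chosen to minimise generalization error, is not implemented here.

      # Hedged sketch: baseline ECOC decomposition with a random code matrix.
      from sklearn.datasets import load_iris
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.multiclass import OutputCodeClassifier

      X, y = load_iris(return_X_y=True)
      ecoc = OutputCodeClassifier(
          estimator=LogisticRegression(max_iter=1000),  # one binary learner per code bit
          code_size=2.0,                                # code length = 2 * n_classes
          random_state=0,
      )
      print(cross_val_score(ecoc, X, y, cv=5).mean())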

  17. Color-coded perfused blood volume imaging using multidetector CT: initial results of whole-brain perfusion analysis in acute cerebral ischemia

    Energy Technology Data Exchange (ETDEWEB)

    Kloska, Stephan P.; Fischer, Tobias; Fischbach, Roman; Heindel, Walter [University of Muenster, Department of Clinical Radiology, Muenster (Germany); Nabavi, Darius G.; Dittrich, Ralf; Ringelstein, E.B. [University of Muenster, Department of Neurology, Muenster (Germany); Ditt, Hendrik; Klotz, Ernst [Siemens AG, Medical Solutions, Forchheim (Germany)

    2007-09-15

    Computed tomography (CT) is still the primary imaging modality following acute stroke. To evaluate a prototype of software for the calculation of color-coded whole-brain perfused blood volume (PBV) images from CT angiography (CTA) and nonenhanced CT (NECT) scans, we studied 14 patients with suspected acute ischemia of the anterior cerebral circulation. PBV calculations were performed retrospectively. The detection rate of ischemic changes in the PBV images was compared with NECT. The volume of ischemic changes in PBV was correlated with the infarct volume on follow-up examination taking potential vessel recanalization into account. PBV demonstrated ischemic changes in 12/12 patients with proven infarction and was superior to NECT (8/12) in the detection of early ischemia. Moreover, PBV demonstrated the best correlation coefficient with the follow-up infarct volume (Pearson's R = 0.957; P = 0.003) for patients with proven recanalization of initially occluded cerebral arteries. In summary, PBV appears to be more accurate in the detection of early infarction compared to NECT and mainly visualizes the irreversibly damaged ischemic tissue. (orig.)

  18. Validation of 3D Code KATRIN For Fast Neutron Fluence Calculation of VVER-1000 Reactor Pressure Vessel by Ex-Vessel Measurements and Surveillance Specimens Results

    Science.gov (United States)

    Dzhalandinov, A.; Tsofin, V.; Kochkin, V.; Panferov, P.; Timofeev, A.; Reshetnikov, A.; Makhotin, D.; Erak, D.; Voloschenko, A.

    2016-02-01

    Usually the synthesis of two-dimensional and one-dimensional discrete ordinate calculations is used to evaluate neutron fluence on VVER-1000 reactor pressure vessel (RPV) for prognosis of radiation embrittlement. But there are some cases when this approach is not applicable. For example the latest projects of VVER-1000 have upgraded surveillance program. Containers with surveillance specimens are located on the inner surface of RPV with fast neutron flux maximum. Therefore, the synthesis approach is not suitable enough for calculation of local disturbance of neutron field in RPV inner surface behind the surveillance specimens because of their complicated and heterogeneous structure. In some cases the VVER-1000 core loading consists of fuel assemblies with different fuel height and the applicability of synthesis approach is also ambiguous for these fuel cycles. Also, the synthesis approach is not enough correct for the neutron fluence estimation at the RPV area above core top. Because of these reasons only the 3D neutron transport codes seem to be satisfactory for calculation of neutron fluence on the VVER-1000 RPV. The direct 3D calculations are also recommended by modern regulations.

  19. Validation of 3D Code KATRIN For Fast Neutron Fluence Calculation of VVER-1000 Reactor Pressure Vessel by Ex-Vessel Measurements and Surveillance Specimens Results

    Directory of Open Access Journals (Sweden)

    Dzhalandinov A.

    2016-01-01

    Full Text Available Usually the synthesis of two-dimensional and one-dimensional discrete ordinate calculations is used to evaluate neutron fluence on VVER-1000 reactor pressure vessel (RPV) for prognosis of radiation embrittlement. But there are some cases when this approach is not applicable. For example the latest projects of VVER-1000 have upgraded surveillance program. Containers with surveillance specimens are located on the inner surface of RPV with fast neutron flux maximum. Therefore, the synthesis approach is not suitable enough for calculation of local disturbance of neutron field in RPV inner surface behind the surveillance specimens because of their complicated and heterogeneous structure. In some cases the VVER-1000 core loading consists of fuel assemblies with different fuel height and the applicability of synthesis approach is also ambiguous for these fuel cycles. Also, the synthesis approach is not enough correct for the neutron fluence estimation at the RPV area above core top. Because of these reasons only the 3D neutron transport codes seem to be satisfactory for calculation of neutron fluence on the VVER-1000 RPV. The direct 3D calculations are also recommended by modern regulations.

  20. ON CLASSICAL BCH CODES AND QUANTUM BCH CODES

    Institute of Scientific and Technical Information of China (English)

    Xu Yajie; Ma Zhi; Zhang Chunyuan

    2009-01-01

    A standard way of constructing quantum error-correcting codes is via classical codes with the self-orthogonality property, and whether a classical Bose-Chaudhuri-Hocquenghem (BCH) code is self-orthogonal can be determined by its designed distance. In this paper, we give the necessary and sufficient condition for arbitrary classical BCH codes to have the self-orthogonality property through algorithms. We also give a better upper bound on the designed distance of a classical narrow-sense BCH code which contains its Euclidean dual. Besides these, we give an algorithm to compute the dimension of these codes. The complexity of all algorithms is analyzed. The results can then be applied to construct a series of quantum BCH codes via the famous CSS construction.
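
    The designed-distance criteria discussed above rest on the cyclotomic-coset description of BCH codes. The small sketch below computes the defining set of a narrow-sense BCH code and its dimension; it only illustrates this bookkeeping and does not implement the paper's self-orthogonality or dual-containment algorithms.

      # Hedged sketch: defining set and dimension of a narrow-sense BCH code.
      def cyclotomic_coset(i, q, n):
          coset, x = set(), i % n
          while x not in coset:
              coset.add(x)
              x = (x * q) % n
          return coset

      def bch_defining_set(q, n, designed_distance):
          # narrow-sense: consecutive roots alpha^1, ..., alpha^(delta - 1)
          D = set()
          for i in range(1, designed_distance):
              D |= cyclotomic_coset(i, q, n)
          return D

      D = bch_defining_set(q=2, n=15, designed_distance=5)
      k = 15 - len(D)   # dimension 7: the classical [15, 7, 5] binary BCH code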

  1. Good Codes From Generalised Algebraic Geometry Codes

    CERN Document Server

    Jibril, Mubarak; Ahmed, Mohammed Zaki; Tjhai, Cen

    2010-01-01

    Algebraic geometry codes or Goppa codes are defined with places of degree one. In constructing generalised algebraic geometry codes places of higher degree are used. In this paper we present 41 new codes over GF(16) which improve on the best known codes of the same length and rate. The construction method uses places of small degree with a technique originally published over 10 years ago for the construction of generalised algebraic geometry codes.

  2. Stacked Extreme Learning Machines.

    Science.gov (United States)

    Zhou, Hongming; Huang, Guang-Bin; Lin, Zhiping; Wang, Han; Soh, Yeng Chai

    2015-09-01

    Extreme learning machine (ELM) has recently attracted many researchers' interest due to its very fast learning speed, good generalization ability, and ease of implementation. It provides a unified solution that can be used directly to solve regression, binary, and multiclass classification problems. In this paper, we propose a stacked ELMs (S-ELMs) that is specially designed for solving large and complex data problems. The S-ELMs divides a single large ELM network into multiple stacked small ELMs which are serially connected. The S-ELMs can approximate a very large ELM network with small memory requirement. To further improve the testing accuracy on big data problems, the ELM autoencoder can be implemented during each iteration of the S-ELMs algorithm. The simulation results show that the S-ELMs even with random hidden nodes can achieve similar testing accuracy to support vector machine (SVM) while having low memory requirements. With the help of ELM autoencoder, the S-ELMs can achieve much better testing accuracy than SVM and slightly better accuracy than deep belief network (DBN) with much faster training speed.
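
    A minimal single ELM, the building block that the S-ELMs stacks, can be written in a few lines: the input-to-hidden weights are random and fixed, and only the hidden-to-output weights are obtained by a least-squares solve. This generic sketch is an assumption-level illustration, not the authors' S-ELMs or its autoencoder variant.

      # Hedged sketch: a basic extreme learning machine with a random hidden layer.
      import numpy as np

      class BasicELM:
          def __init__(self, n_hidden=200, seed=0):
              self.n_hidden = n_hidden
              self.rng = np.random.default_rng(seed)

          def fit(self, X, Y):
              d = X.shape[1]
              self.W = self.rng.normal(size=(d, self.n_hidden))   # random input weights
              self.b = self.rng.normal(size=self.n_hidden)        # random biases
              H = np.tanh(X @ self.W + self.b)                     # hidden activations
              self.beta, *_ = np.linalg.lstsq(H, Y, rcond=None)    # least-squares readout
              return self

          def predict(self, X):
              return np.tanh(X @ self.W + self.b) @ self.beta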

  3. Physics codes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Eltgroth, P.G.

    1985-12-04

    An effort is under way to develop physics codes which realize the potential of parallel machines. A new explicit algorithm for the computation of hydrodynamics has been developed which avoids global synchronization entirely. The approach, called the Independent Time Step Method (ITSM), allows each zone to advance at its own pace, determined by local information. The method, coded in FORTRAN, has demonstrated parallelism of greater than 20 on the Denelcor HEP machine. ITSM can also be used to replace current implicit treatments of problems involving diffusion and heat conduction. Four different approaches toward work distribution have been investigated and implemented for the one-dimensional code on the Denelcor HEP. They are "self-scheduled", an ASKFOR monitor, a "queue of queues" monitor, and a distributed ASKFOR monitor. The self-scheduled approach shows the lowest overhead but the poorest speedup. The distributed ASKFOR monitor shows the best speedup and the lowest execution times on the tested problems. 2 refs., 3 figs.
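
    The control flow implied by the Independent Time Step Method can be caricatured as a priority queue over zones: the zone furthest behind in time is advanced next, with a step size determined purely from local data. The physics update and the local step estimate below are placeholders; this sketch shows only the scheduling pattern, not the hydrodynamics, and is unrelated to the HEP work-distribution monitors discussed above.

      # Hedged toy: each zone advances at its own pace, chosen from local information.
      import heapq

      def advance_zone(state, dt):
          return state        # placeholder for the real local hydrodynamic update

      def local_dt(state):
          return 0.01         # placeholder for a local (e.g. CFL-like) step estimate

      def run_local_time_stepping(zone_states, t_end):
          heap = [(0.0, i) for i in range(len(zone_states))]   # (zone time, zone id)
          heapq.heapify(heap)
          while heap:
              t, i = heapq.heappop(heap)            # zone that lags furthest behind
              dt = local_dt(zone_states[i])
              zone_states[i] = advance_zone(zone_states[i], dt)
              if t + dt < t_end:
                  heapq.heappush(heap, (t + dt, i))
          return zone_states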

  4. A Machine Learning Framework for Plan Payment Risk Adjustment.

    Science.gov (United States)

    Rose, Sherri

    2016-12-01

    To introduce cross-validation and a nonparametric machine learning framework for plan payment risk adjustment and then assess whether they have the potential to improve risk adjustment. 2011-2012 Truven MarketScan database. We compare the performance of multiple statistical approaches within a broad machine learning framework for estimation of risk adjustment formulas. Total annual expenditure was predicted using age, sex, geography, inpatient diagnoses, and hierarchical condition category variables. The methods included regression, penalized regression, decision trees, neural networks, and an ensemble super learner, all in concert with screening algorithms that reduce the set of variables considered. The performance of these methods was compared based on cross-validated R². Our results indicate that a simplified risk adjustment formula selected via this nonparametric framework maintains much of the efficiency of a traditional larger formula. The ensemble approach also outperformed classical regression and all other algorithms studied. The implementation of cross-validated machine learning techniques provides novel insight into risk adjustment estimation, possibly allowing for a simplified formula, thereby reducing incentives for increased coding intensity as well as the ability of insurers to "game" the system with aggressive diagnostic upcoding. © Health Research and Educational Trust.
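
    The core of the framework is a cross-validated comparison of candidate risk-adjustment formulas. The sketch below is a hedged stand-in using scikit-learn: three of the candidate model classes named above are scored by cross-validated R² on the same design matrix. The neural-network and super-learner components, the screening algorithms, and the Truven data are not included.

      # Hedged sketch: compare candidate risk-adjustment models by cross-validated R^2.
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.linear_model import LassoCV, LinearRegression
      from sklearn.model_selection import cross_val_score

      def compare_risk_models(X, y, cv=5):
          candidates = {
              "ols": LinearRegression(),
              "lasso": LassoCV(cv=cv),                                   # penalized regression
              "forest": RandomForestRegressor(n_estimators=200, random_state=0),
          }
          return {name: cross_val_score(m, X, y, cv=cv, scoring="r2").mean()
                  for name, m in candidates.items()}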

  5. Alpha Channeling in Mirror Machines

    Energy Technology Data Exchange (ETDEWEB)

    Fisch N.J.

    2005-10-19

    Because of their engineering simplicity, high-β, and steady-state operation, mirror machines and related open-trap machines such as gas dynamic traps, are an attractive concept for achieving controlled nuclear fusion. In these open-trap machines, the confinement occurs by means of magnetic mirroring, without the magnetic field lines closing upon themselves within the region of particle confinement. Unfortunately, these concepts have not achieved to date very spectacular laboratory results, and their reactor prospects are dimmed by the prospect of a low Q-factor, the ratio of fusion power produced to auxiliary power. Nonetheless, because of its engineering promise, over the years numerous improvements have been proposed to enhance the reactor prospects of mirror fusion, such as tandem designs, end-plugging, and electric potential barriers.

  6. Aerosols generated during beryllium machining.

    Science.gov (United States)

    Martyny, J W; Hoover, M D; Mroz, M M; Ellis, K; Maier, L A; Sheff, K L; Newman, L S

    2000-01-01

    Some beryllium processes, especially machining, are associated with an increased risk of beryllium sensitization and disease. Little is known about exposure characteristics contributing to risk, such as particle size. This study examined the characteristics of beryllium machining exposures under actual working conditions. Stationary samples, using eight-stage Lovelace Multijet Cascade Impactors, were taken at the process point of operation and at the closest point that the worker would routinely approach. Paired samples were collected at the operator's breathing zone by using a Marple Personal Cascade Impactor and a 35-mm closed-faced cassette. More than 50% of the beryllium machining particles in the breathing zone were less than 10 microns in aerodynamic diameter. This small particle size may result in beryllium deposition into the deepest portion of the lung and may explain elevated rates of sensitization among beryllium machinists.

  7. Scheduling Unrelated Machines of Few Different Types

    CERN Document Server

    Bonifaci, Vincenzo

    2012-01-01

    A very well-known machine model in scheduling allows the machines to be unrelated, modelling jobs that might have different characteristics on each machine. Due to its generality, many optimization problems of this form are very difficult to tackle and typically APX-hard. However, in many applications the number of different types of machines, such as processor cores, GPUs, etc. is very limited. In this paper, we address this point and study the assignment of jobs to unrelated machines in the case that each machine belongs to one of a fixed number of types and the machines of each type are identical. We present polynomial time approximation schemes (PTASs) for minimizing the makespan for multidimensional jobs with a fixed number of dimensions and for minimizing the L_p-norm. In particular, our results subsume and generalize the existing PTASs for a constant number of unrelated machines and for an arbitrary number of identical machines for these problems. We employ a number of techniques which go beyond the pr...

  8. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    Full Text Available We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
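
    Systematic encoding with a low-density generator matrix is simple to illustrate: the codeword is the message followed by parity bits obtained from a sparse binary matrix. The sketch below shows only this encoding step under assumed parameters; the decoder and the concatenated scheme for lowering the error floor are not shown.

      # Hedged sketch: systematic LDGM encoding over GF(2).
      import numpy as np

      def random_ldgm_parity(k, m, ones_per_row=3, seed=0):
          rng = np.random.default_rng(seed)
          P = np.zeros((k, m), dtype=np.uint8)
          for i in range(k):
              P[i, rng.choice(m, size=ones_per_row, replace=False)] = 1   # sparse rows
          return P

      def ldgm_encode(u, P):
          parity = (u @ P) % 2                      # low-density parity part
          return np.concatenate([u, parity])        # systematic: message bits come first

      P = random_ldgm_parity(k=8, m=4)
      codeword = ldgm_encode(np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8), P)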

  9. Making extreme computations possible with virtual machines

    Energy Technology Data Exchange (ETDEWEB)

    Reuter, J.; Chokoufe Nejad, B. [DESY, Hamburg (Germany). Theory Group; Ohl, T. [Wuerzburg Univ. (Germany)

    2016-02-15

    State-of-the-art algorithms generate scattering amplitudes for high-energy physics at leading order for high-multiplicity processes as compiled code (in Fortran, C or C++). For complicated processes the size of these libraries can become tremendous (many GiB). We show that amplitudes can be translated to byte-code instructions, which even reduce the size by one order of magnitude. The byte-code is interpreted by a Virtual Machine with runtimes comparable to compiled code and a better scaling with additional legs. We study the properties of this algorithm, as an extension of the Optimizing Matrix Element Generator (O'Mega). The bytecode matrix elements are available as alternative input for the event generator WHIZARD. The bytecode interpreter can be implemented very compactly, which will help with a future implementation on massively parallel GPUs.

  10. Making extreme computations possible with virtual machines

    Science.gov (United States)

    Reuter, J.; Chokoufe Nejad, B.; Ohl, T.

    2016-10-01

    State-of-the-art algorithms generate scattering amplitudes for high-energy physics at leading order for high-multiplicity processes as compiled code (in Fortran, C or C++). For complicated processes the size of these libraries can become tremendous (many GiB). We show that amplitudes can be translated to byte-code instructions, which even reduce the size by one order of magnitude. The byte-code is interpreted by a Virtual Machine with runtimes comparable to compiled code and a better scaling with additional legs. We study the properties of this algorithm, as an extension of the Optimizing Matrix Element Generator (O'Mega). The bytecode matrix elements are available as alternative input for the event generator WHIZARD. The bytecode interpreter can be implemented very compactly, which will help with a future implementation on massively parallel GPUs.

  11. MODAL CONTROL OF PILOTLESS FLYING MACHINE

    Directory of Open Access Journals (Sweden)

    V. A. Antanevich

    2010-01-01

    Full Text Available The paper considers the synthesis of lateral-motion control algorithms for a pilotless flying machine, based on a modal control method that provides the required root placement of the characteristic polynomial of the closed-loop control system. Modeling results for stabilization of the lateral coordinate of the pilotless flying machine are presented.

  12. Machine learning approaches in medical image analysis

    DEFF Research Database (Denmark)

    de Bruijne, Marleen

    2016-01-01

    Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols, learning from weak labels, and interpretation and evaluation of results.

  13. Straightness measurement of large machine guideways

    Directory of Open Access Journals (Sweden)

    W. Ptaszyñski

    2011-10-01

    Full Text Available This paper shows the guideway types of large machines and describes problems with their straightness measurement. A short description of straightness measurement methods is given, and the results of an investigation of the straightness of 10-meter-long guideways of a CNC machine, carried out with the XL-10 Renishaw interferometer, are also presented.

  14. Lossy Counter Machines Decidability Cheat Sheet

    Science.gov (United States)

    Schnoebelen, Philippe

    Lossy counter machines (LCM's) are a variant of Minsky counter machines based on weak (or unreliable) counters in the sense that they can decrease nondeterministically and without notification. This model, introduced by R. Mayr [TCS 297:337-354 (2003)], is not yet very well known, even though it has already proven useful for establishing hardness results.

  15. Numerically Controlled Machine Tools and Worker Skills.

    Science.gov (United States)

    Keefe, Jeffrey H.

    1991-01-01

    Analysis of data from "Industry Wage Surveys of Machinery Manufacturers" on the skill levels of 57 machining jobs found that introduction of numerically controlled machine tools has resulted in a very small reduction in skill levels or no significant change, supporting neither the deskilling argument nor argument that skill levels…

  16. Coil Optimization for High Temperature Superconductor Machines

    DEFF Research Database (Denmark)

    Mijatovic, Nenad; Jensen, Bogi Bech; Abrahamsen, Asger Bech

    2011-01-01

    This paper presents topology optimization of HTS racetrack coils for large HTS synchronous machines. The topology optimization is used to acquire optimal coil designs for the excitation system of 3 T HTS machines. Several tapes are evaluated and the optimization results are discussed. The optimiz...

  17. Numerically Controlled Machine Tools and Worker Skills.

    Science.gov (United States)

    Keefe, Jeffrey H.

    1991-01-01

    Analysis of data from "Industry Wage Surveys of Machinery Manufacturers" on the skill levels of 57 machining jobs found that introduction of numerically controlled machine tools has resulted in a very small reduction in skill levels or no significant change, supporting neither the deskilling argument nor argument that skill levels…

  18. User's manual for seismic analysis code 'SONATINA-2V'

    Energy Technology Data Exchange (ETDEWEB)

    Hanawa, Satoshi; Iyoku, Tatsuo [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment

    2001-08-01

    The seismic analysis code, SONATINA-2V, has been developed to analyze the behavior of the HTTR core graphite components under seismic excitation. The SONATINA-2V code is a two-dimensional computer program capable of analyzing the vertical arrangement of the HTTR graphite components, such as fuel blocks, replaceable reflector blocks, and permanent reflector blocks, as well as their restraint structures. In the analytical model, each block is treated as a rigid body and is restrained by dowel pins which restrict relative horizontal movement but allow vertical and rocking motions between upper and lower blocks. Moreover, the SONATINA-2V code is capable of analyzing the core vibration behavior under simultaneous excitation in the vertical and horizontal directions. The SONATINA-2V code is composed of the main program, a pre-processor for preparing the input data for SONATINA-2V, and a post-processor for data processing and producing graphics from the analytical results. Though the SONATINA-2V code was originally developed for the MSP computer system of the Japan Atomic Energy Research Institute (JAERI), that computer system was retired as computer technology progressed. Therefore, the analysis code was improved so that it runs on JAERI's UNIX-based SR8000 computer system. The user's manual for the seismic analysis code SONATINA-2V, including the pre- and post-processors, is given in the present report. (author)

  19. Machinability of advanced materials

    CERN Document Server

    Davim, J Paulo

    2014-01-01

    Machinability of Advanced Materials addresses the level of difficulty involved in machining a material, or multiple materials, with the appropriate tooling and cutting parameters.  A variety of factors determine a material's machinability, including tool life rate, cutting forces and power consumption, surface integrity, limiting rate of metal removal, and chip shape. These topics, among others, and multiple examples comprise this research resource for engineering students, academics, and practitioners.

  20. Pattern recognition & machine learning

    CERN Document Server

    Anzai, Y

    1992-01-01

    This is the first text to provide a unified and self-contained introduction to visual pattern recognition and machine learning. It is useful as a general introduction to artificial intelligence and knowledge engineering, and no previous knowledge of pattern recognition or machine learning is necessary. It covers the basics of various pattern recognition and machine learning methods. Translated from Japanese, the book also features chapter exercises, keywords, and summaries.

  1. Support vector machines applications

    CERN Document Server

    Guo, Guodong

    2014-01-01

    Support vector machines (SVM) have both a solid mathematical background and good performance in practical applications. This book focuses on the recent advances and applications of the SVM in different areas, such as image processing, medical practice, computer vision, pattern recognition, machine learning, applied statistics, business intelligence, and artificial intelligence. The aim of this book is to create a comprehensive source on support vector machine applications, especially some recent advances.

  2. Machining of titanium alloys

    CERN Document Server

    2014-01-01

    This book presents a collection of examples illustrating the recent research advances in the machining of titanium alloys. These materials have excellent strength and fracture toughness as well as low density and good corrosion resistance; however, machinability is still poor due to their low thermal conductivity and high chemical reactivity with cutting tool materials. This book presents solutions to enhance machinability in titanium-based alloys and serves as a useful reference to professionals and researchers in aerospace, automotive and biomedical fields.

  3. Students' perspectives on promoting healthful food choices from campus vending machines: a qualitative interview study.

    Science.gov (United States)

    Ali, Habiba I; Jarrar, Amjad H; Abo-El-Enen, Mostafa; Al Shamsi, Mariam; Al Ashqar, Huda

    2015-05-28

    Increasing the healthfulness of campus food environments is an important step in promoting healthful food choices among college students. This study explored university students' suggestions on promoting healthful food choices from campus vending machines. It also examined factors influencing students' food choices from vending machines. Peer-led semi-structured individual interviews were conducted with 43 undergraduate students (33 females and 10 males) recruited from students enrolled in an introductory nutrition course in a large national university in the United Arab Emirates. Interviews were audiotaped, transcribed, and coded to generate themes using N-Vivo software. Accessibility, peer influence, and busy schedules were the main factors influencing students' food choices from campus vending machines. Participants expressed the need to improve the nutritional quality of the food items sold in the campus vending machines. Recommendations for students' nutrition educational activities included placing nutrition tips on or beside the vending machines and using active learning methods, such as competitions on nutrition knowledge. The results of this study have useful applications in improving the campus food environment and nutrition education opportunities at the university to assist students in making healthful food choices.

  4. Semantic Vector Machines

    CERN Document Server

    Vincent, Etter

    2011-01-01

    We first present our work in machine translation, during which we used aligned sentences to train a neural network to embed n-grams of different languages into a $d$-dimensional space, such that n-grams that are translations of each other are close with respect to some metric. Good n-gram-to-n-gram translation results were achieved, but full-sentence translation is still problematic. We realized that learning the semantics of sentences and documents was the key to solving a lot of natural language processing problems, and thus moved to the second part of our work: sentence compression. We introduce a flexible neural network architecture for learning embeddings of words and sentences that extract their semantics, propose an efficient implementation in the Torch framework and present embedding results comparable to the ones obtained with classical neural language models, while being more powerful.

  5. Beam Dynamics Studies in Recirculating Machines

    CERN Document Server

    Pellegrini, Dario; Latina, A

    The LHeC and the CLIC Drive Beam share not only the high-current beams that make them prone to show instabilities, but also unconventional lattice topologies and operational schemes in which the time sequence of the bunches varies along the machine. In order to assess the feasibility of these projects, realistic simulations taking into account the most worrisome effects and their interplays are crucial. These include linear and non-linear optics with time dependent elements, incoherent and coherent synchrotron radiation, short and long-range wakefields, beam-beam effect and ion cloud. In order to investigate multi-bunch effects in recirculating machines, a new version of the tracking code PLACET has been developed from scratch. PLACET2 already integrates most of the effects mentioned before and can easily receive additional physics. Its innovative design allows it to describe complex lattices and track one or more bunches according to the machine operation, reproducing the bunch train splitting and recombinat...

  6. Modeling and Simulation of Process-Machine Interaction in Grinding of Cemented Carbide Indexable Inserts

    National Research Council Canada - National Science Library

    Feng, Wei; Yao, Bin; Chen, BinQiang; Zhang, DongSheng; Zhang, XiangLei; Shen, ZhiHuang

    2015-01-01

      Interaction of process and machine in grinding of hard and brittle materials such as cemented carbide may cause dynamic instability of the machining process resulting in machining errors and a decrease in productivity...

  7. Rotating electrical machines

    CERN Document Server

    Le Doeuff, René

    2013-01-01

    In this book a general matrix-based approach to modeling electrical machines is promulgated. The model uses instantaneous quantities for key variables and enables the user to easily take into account associations between rotating machines and static converters (such as in variable speed drives).   General equations of electromechanical energy conversion are established early in the treatment of the topic and then applied to synchronous, induction and DC machines. The primary characteristics of these machines are established for steady state behavior as well as for variable speed scenarios. I

  8. Chaotic Boltzmann machines.

    Science.gov (United States)

    Suzuki, Hideyuki; Imura, Jun-ichi; Horio, Yoshihiko; Aihara, Kazuyuki

    2013-01-01

    The chaotic Boltzmann machine proposed in this paper is a chaotic pseudo-billiard system that works as a Boltzmann machine. Chaotic Boltzmann machines are shown numerically to have computing abilities comparable to conventional (stochastic) Boltzmann machines. Since no randomness is required, efficient hardware implementation is expected. Moreover, the ferromagnetic phase transition of the Ising model is shown to be characterised by the largest Lyapunov exponent of the proposed system. In general, a method to relate probabilistic models to nonlinear dynamics by derandomising Gibbs sampling is presented.
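
    For context, the conventional stochastic Boltzmann machine that the chaotic variant emulates updates binary units by Gibbs sampling. The sketch below shows only this standard stochastic update under the usual symmetric-weight assumption; the deterministic pseudo-billiard dynamics that replace the random draws in the chaotic model are not reproduced here.

      # Hedged sketch: asynchronous Gibbs sampling in a stochastic Boltzmann machine.
      import numpy as np

      def gibbs_sample(W, b, steps=1000, seed=0):
          rng = np.random.default_rng(seed)
          n = len(b)
          s = rng.integers(0, 2, size=n)                # random initial binary state
          for _ in range(steps):
              i = rng.integers(n)                       # pick one unit at random
              field = W[i] @ s - W[i, i] * s[i] + b[i]  # local field from the other units
              p_on = 1.0 / (1.0 + np.exp(-field))       # sigmoid activation probability
              s[i] = rng.random() < p_on                # stochastic update of unit i
          return s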

  9. Tribology in machine design

    CERN Document Server

    Stolarski, Tadeusz

    1999-01-01

    ""Tribology in Machine Design is strongly recommended for machine designers, and engineers and scientists interested in tribology. It should be in the engineering library of companies producing mechanical equipment.""Applied Mechanics ReviewTribology in Machine Design explains the role of tribology in the design of machine elements. It shows how algorithms developed from the basic principles of tribology can be used in a range of practical applications within mechanical devices and systems.The computer offers today's designer the possibility of greater stringen

  10. Electrical machines & drives

    CERN Document Server

    Hammond, P

    1985-01-01

    Containing approximately 200 problems (100 worked), the text covers a wide range of topics concerning electrical machines, placing particular emphasis upon electrical-machine drive applications. The theory is concisely reviewed and focuses on features common to all machine types. The problems are arranged in order of increasing levels of complexity and discussions of the solutions are included where appropriate to illustrate the engineering implications. This second edition includes an important new chapter on mathematical and computer simulation of machine systems and revised discussions o

  11. Induction machine handbook

    CERN Document Server

    Boldea, Ion

    2002-01-01

    Often called the workhorse of industry, the advent of power electronics and advances in digital control are transforming the induction motor into the racehorse of industrial motion control. Now, the classic texts on induction machines are nearly three decades old, while more recent books on electric motors lack the necessary depth and detail on induction machines.The Induction Machine Handbook fills industry's long-standing need for a comprehensive treatise embracing the many intricate facets of induction machine analysis and design. Moving gradually from simple to complex and from standard to

  12. Compensation strategy for machining optical freeform surfaces by the combined on- and off-machine measurement.

    Science.gov (United States)

    Zhang, Xiaodong; Zeng, Zhen; Liu, Xianlei; Fang, Fengzhou

    2015-09-21

    Freeform surfaces are promising candidates for the next generation of optics; however, they need high form accuracy for excellent performance. The closed loop of fabrication-measurement-compensation is necessary for improving the form accuracy. It is difficult to perform an off-machine measurement during freeform machining because the remounting inaccuracy can result in significant form deviations. On the other hand, on-machine measurement may hide the systematic errors of the machine because the measuring device is placed in situ on the machine. This study proposes a new compensation strategy based on the combination of on-machine and off-machine measurement. The freeform surface is measured in off-machine mode with nanometric accuracy, and the on-machine probe achieves an accurate relative position between the workpiece and machine after remounting. The compensation cutting path is generated according to the calculated relative position and shape errors to avoid employing extra manual adjustment or a highly accurate reference-feature fixture. Experimental results verified the effectiveness of the proposed method.

  13. Machining Complex Oriented Compensation System for Generalized Kinematic Errors

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper puts forward a machining-complex-oriented compensation strategy for generalized kinematic errors (GKEs). According to this strategy, the error map, which is constructed using the off-line measurement information of the machined workpiece, is oriented not to the machine tool but to the machining complex in order to compensate the GKEs. The error map is derived by the proposed predictive learning control algorithm (PLCA), which is supported by the information model of the machining complex. Experimental results show that the machining-complex-oriented GKE compensation strategy and the information-model-based PLCA are effective.

  14. Design and manufacturing of abrasive jet machine for drilling operation

    Directory of Open Access Journals (Sweden)

    Mittal Divyansh

    2016-01-01

    Full Text Available Wide application of the Abrasive Jet Machine (AJM) is found in machining hard and brittle materials. Machining of brittle materials by AJM occurs through brittle fracture and the removal of micro chips from the workpiece. Embedment of the abrasive particles in the brittle materials results in a decrease of machining efficiency. In this paper the design and manufacturing of an AJM have been presented. Various parts of the AJM have been designed using ANSYS 16.2 software. The parts are then manufactured indigenously as per the designed parameters. The machine fabricated in this work will be used further for process optimization of AJM parameters for the machining of glass and ceramics.

  15. High Temperature Superconductor Machine Prototype

    DEFF Research Database (Denmark)

    Mijatovic, Nenad; Jensen, Bogi Bech; Træholt, Chresten

    2011-01-01

    A versatile testing platform for a High Temperature Superconductor (HTS) machine has been constructed. The stationary HTS field winding can carry up to 10 coils and it is operated at a temperature of 77K. The rotating armature is at room temperature. Test results and performance for the HTS field...

  16. Modular optimization code package: MOZAIK

    Science.gov (United States)

    Bekar, Kursat B.

    A computational model of the existing beam port configuration of the Penn State Breazeale Reactor (PSBR) was designed to test and validate the code package in its entirety, as well as its modules separately. The selected physics code, TORT, and the requisite data such as source distribution, cross-sections, and angular quadratures were comprehensively tested with these computational models. The modular feature and the parallel performance of the code package were also examined using these computational models. Another outcome of these computational models is to provide the necessary background information for determining the optimal shape of the D2O moderator tank for the new beam tube configurations for the PSBR's beam port facility. The first mission of the code package was completed successfully by determining the optimal tank shape which was sought for the current beam tube configuration and two new beam tube configurations for the PSBR's beam port facility. The performance of the new beam tube configurations and the current beam tube configuration were evaluated with the new optimal tank shapes determined by MOZAIK. Furthermore, the performance of the code package with the two different optimization strategies was analyzed, showing that while GA is capable of achieving higher thermal beam intensity for a given beam tube setup, Min-max produces an optimal shape that is more amenable to machining and manufacturing. The optimal D2O moderator tank shape determined by MOZAIK with the current beam port configuration improves the thermal neutron beam intensity at the beam port exit end by 9.5%. Similarly, the new tangential beam port configuration (beam port near the core interface) with the optimal moderator tank shape determined by MOZAIK improves the thermal neutron beam intensity by a factor of 1.4 compared to the existing beam port configuration (with the existing D2O moderator tank). Another new beam port configuration, radial beam tube configuration, with the optimal moderator tank shape

  17. Finite-state machines as elements in control systems.

    Science.gov (United States)

    Burgin, G. H.; Walsh, M. J.

    1971-01-01

    Demonstration that approximate solutions to certain classes of differential and difference equations can be expressed in the form of finite-state machines. Based on this result, a finite-state machine model of an adaptive gain changer in an aircraft stability augmentation system is developed. Results of simulated flights using the finite-state machine gain changer are presented.
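
    The kind of finite-state gain changer described can be sketched as a small transition table driven by a measured flight condition; the states, thresholds and gain values below are hypothetical illustrations, not taken from the paper.

        # Minimal sketch of a finite-state gain changer (illustrative only).
        # States, thresholds and gain values are hypothetical, not from the paper.
        class GainChanger:
            GAINS = {"LOW_Q": 2.0, "MID_Q": 1.0, "HIGH_Q": 0.5}   # gain per state

            def __init__(self):
                self.state = "MID_Q"

            def step(self, q):
                # Transition rules on a normalised dynamic-pressure-like input q.
                if q < 0.3:
                    self.state = "LOW_Q"
                elif q < 0.7:
                    self.state = "MID_Q"
                else:
                    self.state = "HIGH_Q"
                return self.GAINS[self.state]

        fsm = GainChanger()
        for q in (0.1, 0.5, 0.9):
            print(q, fsm.step(q), fsm.state)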

  18. Dependency graph for code analysis on emerging architectures

    Energy Technology Data Exchange (ETDEWEB)

    Shashkov, Mikhail Jurievich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lipnikov, Konstantin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-08

    The directed acyclic dependency graph (DAG) is becoming the standard for modern multi-physics codes. The ideal DAG is the true block scheme of a multi-physics code. Therefore, it is a convenient object for in situ analysis of the cost of computations and of algorithmic bottlenecks related to statistically frequent data motion and the dynamical machine state.
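
    Such a dependency DAG can be represented directly as an adjacency list and ordered topologically to expose which kernels may run concurrently; the task names below are invented for illustration.

        # Illustrative task DAG for one multi-physics step; task names are invented.
        from graphlib import TopologicalSorter   # standard library, Python 3.9+

        deps = {                                  # task -> set of prerequisite tasks
            "update_eos":     {"advect"},
            "compute_forces": {"update_eos"},
            "diffuse":        {"advect"},
            "write_output":   {"compute_forces", "diffuse"},
        }

        order = list(TopologicalSorter(deps).static_order())
        print(order)   # one valid execution order; "advect" necessarily comes first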

  19. Of Mice Moths and Men Machines

    Directory of Open Access Journals (Sweden)

    Susan Schuppli

    2008-10-01

    Full Text Available In 1947, Grace Murray Hopper, a pioneer in early computing, made an unusual entry into her daily logbook: 'Relay #70 Panel F (moth) in relay. First actual case of bug being found.' Accompanying this entry is an actual celluloid-tape-encrusted bug, or more specifically a moth, fastened to the page of the logbook. According to Hopper, one of the technicians in her team solved a glitch in the Harvard Mark II computer by pulling an actual insect out from between the contacts of one of its relays. Word soon went out that they had 'debugged the machine' and the phrase quickly entered our lexicon. After languishing for years, this mythic moth was eventually transported to the Smithsonian, where it now lies in archival state. The moth's dynamic vitality had introduced a kind of surplus or aberrant code into the machine, which in effect pushed the machine towards a state of chaos and breakdown. Its failure to act as desired, to perform the coding sequences of its programmed history, suggests that even a seemingly inert or lifeless machine can become 'more and other than its history' (Elizabeth Grosz, 2005). Hopper's bug is thus a material witness to the creative co-evolution of the machine with the living matter of the moth. Moreover, as a cipher for machinic defect the bug reminds us that mutations are in fact necessary for systems to change and evolve. The crisis introduced into a biological system or machine through the virulence of the bug is terminal only to the extent that it becomes the source for another kind of order, another kind of interaction. This is used as a case study to argue that chaos is not only an animating force in the constitution of new systems but is necessary for the evolution of difference.

  20. Parametric programming of CNC machine tools

    Directory of Open Access Journals (Sweden)

    Gołębski Rafał

    2017-01-01

    Full Text Available The article presents the possibilities of parametric programming of CNC machine tools for the SINUMERIK 840D sl control system. The kinds and types of variable definitions for the control system under discussion are described. Using the example of the longitudinal cutting cycle, the possibilities of parametric programming are shown. The program's code and its implementation in the control system are described in detail. The principle of parametric programming in a high-level language is also explained.

  1. Good Applications for Crummy Machine Translation

    Science.gov (United States)

    1991-07-01

    [Only fragments of the report documentation page survive.] Keywords: multilingual, computational linguistics. "Peter Brown (personal communication) once remarked that such a super-fast typewriter ought to be possible in the monolingual case." Cited works include a multilingual system under development (Computational Linguistics, 11:2-3, pp. 155-169) and Kay, M. (1980), "The Proper Place of Men and Machines in Language ..."

  2. Virtual machine vs Real Machine: Security Systems

    Directory of Open Access Journals (Sweden)

    Dr. C. Suresh Gnana Das

    2009-08-01

    Full Text Available This paper argues that the operating system and applications currently running on a real machine should relocate into a virtual machine. This structure enables services to be added below the operating system without trusting or modifying the operating system or applications. To demonstrate the usefulness of this structure, we describe three services that take advantage of it: secure logging, intrusion prevention and detection, and environment migration. In particular, we can provide services below the guest operating system without trusting or modifying it. We believe providing services at this layer is especially useful for enhancing security and mobility. This position paper describes the general benefits and challenges that arise from running most applications in a virtual machine, and then describes some example services and alternative ways to provide those services.

  3. THE STIRLING GAS REFRIGERATING MACHINE MECHANICAL DESIGN IMPROVING

    Directory of Open Access Journals (Sweden)

    V. V. Trandafilov

    2016-02-01

    Full Text Available To improve the mechanical design of the piston Stirling gas refrigeration machine, a structural optimization of the rotary vane Stirling gas refrigeration machine is carried out. This paper presents the results of theoretical research. The analysis and prospects of the rotary vane Stirling gas refrigeration machine for domestic and industrial refrigeration purposes are presented. The results of a patent search on transformation mechanisms of rotary vane machines are discussed.

  4. THE STIRLING GAS REFRIGERATING MACHINE MECHANICAL DESIGN IMPROVING

    Directory of Open Access Journals (Sweden)

    V. V. Trandafilov

    2016-06-01

    Full Text Available To improve the mechanical design of the piston Stirling gas refrigeration machine, a structural optimization of the rotary vane Stirling gas refrigeration machine is carried out. This paper presents the results of theoretical research. The analysis and prospects of the rotary vane Stirling gas refrigeration machine for domestic and industrial refrigeration purposes are presented. The results of a patent search on transformation mechanisms of rotary vane machines are discussed.

  5. THE STIRLING GAS REFRIGERATING MACHINE MECHANICAL DESIGN IMPROVING

    OpenAIRE

    V. V. Trandafilov; M.G. Khmelniuk; O. Y.Yakovleva; A. V. Ostapenko

    2016-01-01

    To improve the mechanical design of the piston Stirling gas refrigeration machine, a structural optimization of the rotary vane Stirling gas refrigeration machine is carried out. This paper presents the results of theoretical research. The analysis and prospects of the rotary vane Stirling gas refrigeration machine for domestic and industrial refrigeration purposes are presented. The results of a patent search on transformation mechanisms of rotary vane machines are discussed.

  6. Proof-Carrying Code Based Tool for Secure Information Flow of Assembly Programs

    Directory of Open Access Journals (Sweden)

    Abdulrahman Muthana

    2009-01-01

    Full Text Available Problem statement: How a host (the code consumer) can determine with certainty that a downloaded program received from an untrusted source (the code producer) will maintain the confidentiality of the data it manipulates and is safe to install and execute. Approach: The approach adopted for verifying that a downloaded program will not leak confidential data to unauthorized parties was based on the concept of Proof-Carrying Code (PCC). A mobile program (in its assembly form) was analyzed for information flow security based on the concept of proof-carrying code. The security policy was centered on a type system for analyzing information flows within assembly programs based on the notion of noninterference. Results: A verification tool for verifying assembly programs for information flow security was built. The tool certifies SPARC assembly programs for secure information flow by statically analyzing the program based on the idea of Proof-Carrying Code (PCC). The tool operated directly on the machine code, requiring only the inputs and outputs of the code to be annotated with security levels. The tool provided a Windows user interface enabling users to control the verification process. The proofs that an untrusted program did not leak sensitive information were generated and checked on the host machine and, if valid, the untrusted program could be installed and executed safely. Conclusion: By basing the proof-carrying code infrastructure on an information-flow-analysis type system, a sufficient assurance of protecting confidential data manipulated by the mobile program can be obtained. This assurance comes from the fact that type systems provide a sufficient guarantee of protecting confidentiality.
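
    The noninterference-style check at the heart of such a type system can be illustrated with a toy two-level lattice; the instruction format, register names and program below are invented and far simpler than the SPARC-level analysis described, and a real PCC tool would emit a checkable proof rather than a boolean.

        # Toy information-flow check over a two-level lattice LOW <= HIGH.
        LEVELS = {"LOW": 0, "HIGH": 1}

        def lub(a, b):                                  # least upper bound of labels
            return a if LEVELS[a] >= LEVELS[b] else b

        def flows_securely(program, labels, out_reg="r_out", out_level="LOW"):
            # program: ("mov", dst, src) or ("add", dst, src1, src2) tuples (invented ISA)
            for ins in program:
                if ins[0] == "mov":
                    labels[ins[1]] = labels[ins[2]]
                elif ins[0] == "add":
                    labels[ins[1]] = lub(labels[ins[2]], labels[ins[3]])
            # Secure only if nothing above out_level reaches the output register.
            return LEVELS[labels[out_reg]] <= LEVELS[out_level]

        labels = {"r_secret": "HIGH", "r_pub": "LOW", "r_tmp": "LOW", "r_out": "LOW"}
        leaky = [("mov", "r_tmp", "r_secret"), ("add", "r_out", "r_tmp", "r_pub")]
        print(flows_securely(leaky, labels))   # False: HIGH data reaches the LOW output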

  7. Binary Code Disassembly for Reverse Engineering

    Directory of Open Access Journals (Sweden)

    Marius Popa

    2013-01-01

    Full Text Available The disassembly of a binary file is used to restore the software application code in a readable and understandable format for humans. Further, the assembly code file can be used in reverse engineering processes to establish the logical flows of the computer program or its vulnerabilities in a real-world running environment. The paper highlights the features of binary executable files under the x86 architecture and portable format, presents issues of the disassembly process of a machine code file and intermediate code, disassembly algorithms which can be applied to a correct and complete reconstruction of the source file written in assembly language, and techniques and tools used in binary code disassembly.
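
    One of the disassembly strategies usually discussed, a simple linear sweep, can be reproduced in a few lines with an off-the-shelf disassembler; the capstone bindings and the byte string below are illustrative assumptions, not part of the paper.

        # Linear-sweep disassembly of a few x86-64 bytes (illustrative).
        # Requires the third-party capstone package: pip install capstone
        from capstone import Cs, CS_ARCH_X86, CS_MODE_64

        code = b"\x55\x48\x89\xe5\x89\x7d\xfc\x5d\xc3"   # push rbp; mov rbp, rsp; ...
        md = Cs(CS_ARCH_X86, CS_MODE_64)

        for insn in md.disasm(code, 0x1000):             # 0x1000 is an assumed load address
            print(f"0x{insn.address:x}\t{insn.mnemonic}\t{insn.op_str}")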

  8. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding brings you a clear and comprehensive discussion of the basic principles of this field * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual
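
    The basic encoding operation that the book builds on can be shown in a few lines; the rate-1/2, constraint-length-3 code with octal generators (7, 5) below is the common textbook example, used here purely for illustration.

        # Rate-1/2 convolutional encoder, constraint length 3, octal generators (7, 5).
        G = (0b111, 0b101)                       # generator polynomials

        def conv_encode(bits):
            state = 0                            # two memory bits
            out = []
            for b in bits:
                reg = (b << 2) | state           # current input bit plus memory
                for g in G:
                    out.append(bin(reg & g).count("1") % 2)   # parity of tapped bits
                state = reg >> 1                 # shift the register
            return out

        print(conv_encode([1, 0, 1, 1]))         # -> [1, 1, 1, 0, 0, 0, 0, 1]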

  9. Training Restricted Boltzmann Machines

    DEFF Research Database (Denmark)

    Fischer, Asja

    Restricted Boltzmann machines (RBMs) are probabilistic graphical models that can also be interpreted as stochastic neural networks. Training RBMs is known to be challenging. Computing the likelihood of the model parameters or its gradient is in general computationally intensive. Thus, training...... relies on sampling based approximations of the log-likelihood gradient. I will present an empirical and theoretical analysis of the bias of these approximations and show that the approximation error can lead to a distortion of the learning process. The bias decreases with increasing mixing rate...... of the applied sampling procedure and I will introduce a transition operator that leads to faster mixing. Finally, a different parametrisation of RBMs will be discussed that leads to better learning results and more robustness against changes in the data representation....
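
    A minimal sketch of the sampling-based gradient approximation discussed, one step of contrastive divergence (CD-1) for a binary RBM, assuming numpy and toy layer sizes:

        # One step of contrastive divergence (CD-1) for a binary RBM (numpy sketch).
        import numpy as np

        rng = np.random.default_rng(0)
        n_vis, n_hid, lr = 6, 3, 0.1
        W = 0.01 * rng.standard_normal((n_vis, n_hid))
        b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def cd1_gradients(v0):
            ph0 = sigmoid(v0 @ W + b_h)                       # positive phase
            h0 = (rng.random(n_hid) < ph0).astype(float)
            pv1 = sigmoid(h0 @ W.T + b_v)                     # one Gibbs step down...
            v1 = (rng.random(n_vis) < pv1).astype(float)
            ph1 = sigmoid(v1 @ W + b_h)                       # ...and back up
            # Sampling-based (biased) approximation of the log-likelihood gradient.
            return np.outer(v0, ph0) - np.outer(v1, ph1), v0 - v1, ph0 - ph1

        v0 = np.array([1, 0, 1, 1, 0, 0], dtype=float)
        dW, db_v, db_h = cd1_gradients(v0)
        W, b_v, b_h = W + lr * dW, b_v + lr * db_v, b_h + lr * db_h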

  10. sRNASVM: A MODEL FOR PREDICTION OF SMALL NON-CODING RNAS IN E. coli USING SUPPORT VECTOR MACHINES

    Institute of Scientific and Technical Information of China (English)

    王立贵; 应晓敏; 曹源; 查磊; 李伍举

    2009-01-01

    Identification of bacterial small noncoding RNAs (sRNAs) plays an important role in understanding interactions between bacteria and their environments. Here the authors introduced a scheme for constructing models for the prediction of bacterial sRNAs by incorporating validated sRNAs into the training dataset, and Escherichia coli (E. coli) K-12 was taken as an example to demonstrate the performance of the scheme. The results indicated that the 10-fold cross-validation classification accuracy of the constructed model, sRNASVM, was as high as 92.45%, which was better than that of two existing models. Therefore, the present work provides better support for the experimental identification of bacterial sRNAs. The models and detailed results can be downloaded from the webpage http://ccb.bmi.ac.cn/smasvm/.
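
    The reported training scheme, an SVM evaluated with 10-fold cross-validation, can be sketched with scikit-learn; the feature matrix below is random placeholder data, since the actual sequence-derived features are not given here.

        # 10-fold cross-validated SVM on placeholder data (scikit-learn sketch).
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 20))       # placeholder sequence-derived features
        y = rng.integers(0, 2, 200)              # placeholder sRNA / non-sRNA labels

        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        scores = cross_val_score(clf, X, y, cv=10)
        print(f"10-fold CV accuracy: {scores.mean():.3f}")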

  11. New Codes for Spectral Amplitude Coding Optical CDMA Systems

    Directory of Open Access Journals (Sweden)

    Hassan Yousif Ahmed

    2011-03-01

    Full Text Available A new code structure with zero in-phase cross correlation for spectral amplitude coding optical code division multiple access (SAC-OCDMA) systems is proposed, called zero vectors combinatorial (ZVC). This code is constructed in a simple algebraic way using Euclidean vectors and combinatorial theories based on the relationship between the number of users N and the weight W. One of the important properties of this code is that the maximum cross correlation (CC) is always zero, which means that multi-user interference (MUI) and phase induced intensity noise (PIIN) are reduced. Bit error rate (BER) performance is compared with previously reported codes. We also demonstrate the performance of the ZVC code theoretically with the related equations. In addition, the structure of the encoder/decoder based on fiber Bragg gratings (FBGs) and the proposed system have been analyzed theoretically by taking into consideration the effects of some noises. The results characterizing BER with respect to the total number of active users show that the ZVC code offers a significantly improved performance over previously reported codes by supporting large numbers of users at BER ≥ 10^-9. A comprehensive simulation study has been carried out using a commercial optical system simulator "VPI™". Moreover, it was shown that the proposed code managed to reduce the hardware complexity and eventually the cost.

  12. Validation of numerical codes for the analysis of plasma discharges

    Energy Technology Data Exchange (ETDEWEB)

    Albanese, R. (Univ. di Salerno, Dipt. di Ingegneria Elettronica, Fisciano (Italy)); Bottura, L. (NET Team, Garching (Germany)); Chiocchio, S. (NET Team, Garching (Germany)); Coccorese, E. (Univ. di Reggio Calabria, Ist. di Ingegneria Elettronica (Italy)); Gernhardt, J. (Max Planck IPP, Garching (Germany)); Gruber, O. (Max Planck IPP, Garching (Germany)); Fresa, R. (Univ. di Salerno, Dipt. di Ingegneria Elettronica, Fisciano (Italy)); Martone, R. (Univ. di Salerno, Dipt. di Ingegneria Elettronica, Fisciano (Italy)); Portone, A. (NET Team, Garching (Germany)); Seidel, U. (Max Planck IPP, Garching (Germany))

    1994-01-01

    Electromagnetic aspects in the design of ITER-like reactors call for an extensive use of complex and advanced numerical codes. For this reason, strong attention has been paid within the NET Team to code development. In particular, through cooperation with several Italian universities, a number of numerical procedures have been developed and integrated in recent years. In order to assess the reliability of the codes and to gain confidence in their predictions for next-generation ITER-like reactors, validation of the codes against experiments has to be considered a strict requirement. The aim of this paper is to give a comprehensive presentation of this problem in the light of the results of a campaign of validation runs. The main outcome of this work is that the computational procedures, which have been developed for the NET project and then extensively used also for ITER studies, can be considered as experimentally validated in a sufficiently wide range of cases of interest. In particular, computed values are compared with experimental measurements made during some typical ASDEX-Upgrade discharges. From the electromagnetic point of view, many features of this machine are common to the ITER concept, so that the results of the validation can reasonably be extended to the ITER case. (orig.)

  13. Coded source neutron imaging

    Energy Technology Data Exchange (ETDEWEB)

    Bingham, Philip R [ORNL; Santos-Villalobos, Hector J [ORNL

    2011-01-01

    Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100 μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100 μm and 10 μm aperture hole diameters show resolutions matching the hole diameters.

  14. Coded source neutron imaging

    Science.gov (United States)

    Bingham, Philip; Santos-Villalobos, Hector; Tobin, Ken

    2011-03-01

    Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100μm and 10μm aperture hole diameters show resolutions matching the hole diameters.
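
    The final analysis step in both records, an MTF obtained from a line spread function, amounts to a normalised Fourier transform; the Gaussian LSF below is a stand-in for the simulated tilted-edge result.

        # MTF as the normalised magnitude of the Fourier transform of the LSF.
        import numpy as np

        dx = 1e-6                                    # assumed 1 micrometre sampling
        x = np.arange(-256, 256) * dx
        lsf = np.exp(-x**2 / (2 * (10e-6) ** 2))     # placeholder ~10 um wide LSF
        lsf /= lsf.sum()

        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]                                # unity at zero spatial frequency
        freqs = np.fft.rfftfreq(len(lsf), d=dx)      # cycles per metre
        print(freqs[1], mtf[1])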

  15. Stirling machine operating experience

    Energy Technology Data Exchange (ETDEWEB)

    Ross, B. [Stirling Technology Co., Richland, WA (United States); Dudenhoefer, J.E. [Lewis Research Center, Cleveland, OH (United States)

    1994-09-01

    Numerous Stirling machines have been built and operated, but the operating experience of these machines is not well known. It is important to examine this operating experience in detail, because it largely substantiates the claim that Stirling machines are capable of reliable and lengthy operating lives. The amount of data that exists is impressive, considering that many of the machines that have been built are developmental machines intended to show proof of concept, and are not expected to operate for lengthy periods of time. Some Stirling machines (typically free-piston machines) achieve long life through non-contact bearings, while other Stirling machines (typically kinematic) have achieved long operating lives through regular seal and bearing replacements. In addition to engine and system testing, life testing of critical components is also considered. The record in this paper is not complete, due to the reluctance of some organizations to release operational data and because several organizations were not contacted. The authors intend to repeat this assessment in three years, hoping for even greater participation.

  16. Perpetual Motion Machine

    Directory of Open Access Journals (Sweden)

    D. Tsaousis

    2008-01-01

    Full Text Available Ever since the first century A.D. there have been descriptions of devices, as well as actual constructions, intended as perpetual motion machines. Although physics, through the two laws of thermodynamics, has led to the conclusion that a perpetual motion machine cannot be built, inventors of every age and educational level continue to claim that they have invented something «entirely new» or improved somebody else's invention, which «will function henceforth perpetually»! The failure to build a perpetual motion machine so far does not, however, make the countless historical accounts of these fictional machines uninteresting. Discussing each version of a perpetual motion machine gives us, on the one hand, the chance to understand the level of knowledge and way of thinking of the inventors of each period and, on the other hand, to locate the points where the «perpetual motion machine» clashes with the laws of nature and therefore could never have been built or have functioned. The presentation of a new «perpetual motion machine» prompted our interest in locating its weak points. According to its designer, the machine functions on the work produced by the buoyant force

  17. Machine Intelligence and Explication

    NARCIS (Netherlands)

    Wieringa, Roelf J.

    1987-01-01

    This report is an MA ("doctoraal") thesis submitted to the department of philosophy, university of Amsterdam. It attempts to answer the question whether machines can think by conceptual analysis. Ideally. a conceptual analysis should give plausible explications of the concepts of "machine" and "inte

  18. Microsoft Azure machine learning

    CERN Document Server

    Mund, Sumit

    2015-01-01

    The book is intended for those who want to learn how to use Azure Machine Learning. Perhaps you already know a bit about Machine Learning, but have never used ML Studio in Azure; or perhaps you are an absolute newbie. In either case, this book will get you up-and-running quickly.

  19. Reactive Turing machines

    NARCIS (Netherlands)

    Baeten, J.C.M.; Luttik, B.; Tilburg, P.J.A. van

    2013-01-01

    We propose reactive Turing machines (RTMs), extending classical Turing machines with a process-theoretical notion of interaction, and use it to define a notion of executable transition system. We show that every computable transition system with a bounded branching degree is simulated modulo diverge

  20. Machine Intelligence and Explication

    NARCIS (Netherlands)

    Wieringa, Roel

    1987-01-01

    This report is an MA ("doctoraal") thesis submitted to the department of philosophy, university of Amsterdam. It attempts to answer the question whether machines can think by conceptual analysis. Ideally. a conceptual analysis should give plausible explications of the concepts of "machine" and "inte

  1. Coordinate measuring machines

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo

    This document is used in connection with three exercises of 2 hours duration as a part of the course GEOMETRICAL METROLOGY AND MACHINE TESTING. The exercises concern three aspects of coordinate measuring: 1) Measuring and verification of tolerances on coordinate measuring machines, 2) Traceability...

  2. Simple Machine Junk Cars

    Science.gov (United States)

    Herald, Christine

    2010-01-01

    During the month of May, the author's eighth-grade physical science students study the six simple machines through hands-on activities, reading assignments, videos, and notes. At the end of the month, they can easily identify the six types of simple machine: inclined plane, wheel and axle, pulley, screw, wedge, and lever. To conclude this unit,…

  3. Human Machine Learning Symbiosis

    Science.gov (United States)

    Walsh, Kenneth R.; Hoque, Md Tamjidul; Williams, Kim H.

    2017-01-01

    Human Machine Learning Symbiosis is a cooperative system where both the human learner and the machine learner learn from each other to create an effective and efficient learning environment adapted to the needs of the human learner. Such a system can be used in online learning modules so that the modules adapt to each learner's learning state both…

  4. Machine learning with R

    CERN Document Server

    Lantz, Brett

    2015-01-01

    Perhaps you already know a bit about machine learning but have never used R, or perhaps you know a little R but are new to machine learning. In either case, this book will get you up and running quickly. It would be helpful to have a bit of familiarity with basic programming concepts, but no prior experience is required.

  5. Autocatalysis, information and coding.

    Science.gov (United States)

    Wills, P R

    2001-01-01

    Autocatalytic self-construction in macromolecular systems requires the existence of a reflexive relationship between structural components and the functional operations they perform to synthesise themselves. The possibility of reflexivity depends on formal, semiotic features of the catalytic structure-function relationship, that is, the embedding of catalytic functions in the space of polymeric structures. Reflexivity is a semiotic property of some genetic sequences. Such sequences may serve as the basis for the evolution of coding as a result of autocatalytic self-organisation in a population of assignment catalysts. Autocatalytic selection is a mechanism whereby matter becomes differentiated in primitive biochemical systems. In the case of coding self-organisation, it corresponds to the creation of symbolic information. Prions are present-day entities whose replication through autocatalysis reflects aspects of biological semiotics less obvious than genetic coding.

  6. Adjoint code generator

    Institute of Scientific and Technical Information of China (English)

    CHENG Qiang; CAO JianWen; WANG Bin; ZHANG HaiBin

    2009-01-01

    The adjoint code generator (ADG) is developed to produce adjoint codes, which are used to analytically calculate gradients and Hessian-vector products at a cost independent of the number of independent variables. Different from other automatic differentiation tools, the implementation of ADG has the advantages of using the least program behavior decomposition method and several static dependence analysis techniques. In this paper we first address the relevant concepts and fundamentals, and then introduce the functionality and the features of ADG. In particular, we also discuss the design architecture of ADG and implementation details, including the recomputation and storing strategy and several techniques for code optimization. Some experimental results in several applications are presented at the end.
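
    The core idea behind an adjoint code, reverse-mode accumulation of derivatives over a recorded expression graph, can be sketched in a few lines; this is a generic illustration, not the ADG implementation.

        # Minimal reverse-mode (adjoint) differentiation sketch over an expression graph.
        class Var:
            def __init__(self, value, parents=()):
                self.value, self.parents, self.grad = value, parents, 0.0

            def __add__(self, other):
                return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

            def __mul__(self, other):
                return Var(self.value * other.value,
                           [(self, other.value), (other, self.value)])

            def backward(self, seed=1.0):
                # Propagate the adjoint to every parent via the chain rule.
                self.grad += seed
                for parent, local in self.parents:
                    parent.backward(seed * local)

        x, y = Var(3.0), Var(2.0)
        z = x * y + x                  # z = x*y + x
        z.backward()
        print(x.grad, y.grad)          # dz/dx = y + 1 = 3.0, dz/dy = x = 3.0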

  7. Code query by example

    Science.gov (United States)

    Vaucouleur, Sebastien

    2011-02-01

    We introduce code query by example for customisation of evolvable software products in general and of enterprise resource planning systems (ERPs) in particular. The concept is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the infamous upgrade problem: the conflict between the need for customisation and the need for upgrade of ERP systems. We further show how code query by example can be used as a form of lightweight static analysis, to detect automatically potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: it is often the case that programmers working in this field are not computer science specialists but more of domain experts. Hence, they require a simple language to express custom rules.

  8. 15 CFR 700.31 - Metalworking machines.

    Science.gov (United States)

    2010-01-01

    ... Drilling and tapping machines; Electrical discharge, ultrasonic and chemical erosion machines; Forging ..., power driven; Machining centers and way-type machines; Manual presses; Mechanical presses, power ...

  9. LHC Report: machine development

    CERN Multimedia

    Rogelio Tomás García for the LHC team

    2015-01-01

    Machine development weeks are carefully planned in the LHC operation schedule to optimise and further study the performance of the machine. The first machine development session of Run 2 ended on Saturday, 25 July. Despite various hiccoughs, it allowed the operators to make great strides towards improving the long-term performance of the LHC.   The main goals of this first machine development (MD) week were to determine the minimum beam-spot size at the interaction points given existing optics and collimation constraints; to test new beam instrumentation; to evaluate the effectiveness of performing part of the beam-squeezing process during the energy ramp; and to explore the limits on the number of protons per bunch arising from the electromagnetic interactions with the accelerator environment and the other beam. Unfortunately, a series of events reduced the machine availability for studies to about 50%. The most critical issue was the recurrent trip of a sextupolar corrector circuit –...

  10. Introduction to machine learning.

    Science.gov (United States)

    Baştanlar, Yalin; Ozuysal, Mustafa

    2014-01-01

    The machine learning field, which can be briefly defined as enabling computers to make successful predictions using past experiences, has exhibited impressive development recently with the help of the rapid increase in the storage capacity and processing power of computers. Together with many other disciplines, machine learning methods have been widely employed in bioinformatics. The difficulties and cost of biological analyses have led to the development of sophisticated machine learning approaches for this application area. In this chapter, we first review the fundamental concepts of machine learning such as feature assessment, unsupervised versus supervised learning and types of classification. Then, we point out the main issues of designing machine learning experiments and their performance evaluation. Finally, we introduce some supervised learning methods.

  11. Corner neutronic code

    Directory of Open Access Journals (Sweden)

    V.P. Bereznev

    2015-10-01

    An iterative solution process is used, including external iterations for the fission source and internal iterations for the scattering source. The paper presents the results of a cross-verification against the Monte Carlo MMK code [3] and on a model of the BN-800 reactor core.
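
    The external/internal iteration scheme is, in essence, a power iteration on the fission source; a schematic matrix version (with stand-in operators, not the actual CORNER discretisation) looks like this:

        # Schematic power iteration for the fission source (stand-in operators only).
        import numpy as np

        rng = np.random.default_rng(1)
        n = 4
        L = np.diag(rng.uniform(1.0, 2.0, n))       # stand-in loss/streaming operator
        F = rng.uniform(0.1, 0.3, (n, n))           # stand-in fission production operator

        phi, k = np.ones(n), 1.0
        for outer in range(200):                     # external (fission source) iterations
            phi_new = np.linalg.solve(L, F @ phi / k)   # inner solve, here done directly
            k_new = k * (F @ phi_new).sum() / (F @ phi).sum()
            if abs(k_new - k) < 1e-10:
                break
            phi, k = phi_new, k_new

        print("k-effective estimate:", k_new)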

  12. Spiritualist Writing Machines: Telegraphy, Typtology, Typewriting

    Directory of Open Access Journals (Sweden)

    Anthony Enns

    2015-09-01

    Full Text Available This paper examines how religious concepts both reflected and informed the development of new technologies for encoding, transmitting, and printing written information. While many spiritualist writing machines were based on existing technologies that were repurposed for spirit communication, others prefigured or even inspired more advanced technological innovations. The history of spiritualist writing machines thus not only represents a response to the rise of new media technologies in the nineteenth century, but it also reflects a set of cultural demands that helped to shape the development of new technologies, such as the need to replace handwriting with discrete, uniform lettering, which accelerated the speed of composition; the need to translate written information into codes, which could be transmitted across vast distances; and the need to automate the process of transmitting, translating, and transcribing written information, which seemed to endow the machines themselves with a certain degree of autonomy or even intelligence. While spiritualists and inventors were often (but not always motivated by different goals, the development of spiritualist writing machines and the development of technological writing machines were nevertheless deeply interrelated and interdependent.

  13. BILINEAR FORMS AND LINEAR CODES

    Institute of Scientific and Technical Information of China (English)

    高莹

    2004-01-01

    Abraham Lempel et al. [1] made a connection between linear codes and systems of bilinear forms over finite fields. In this correspondence, a new simple proof of a theorem in [1] is presented; in addition, the encoding process and the decoding procedure of RS codes are simplified via circulant matrices. Finally, the results show that the correspondence between bilinear forms and linear codes is not unique.

  14. Oil and gas field code master list, 1993

    Energy Technology Data Exchange (ETDEWEB)

    1993-12-16

    This document contains data collected through October 1993 and provides standardized field name spellings and codes for all identified oil and/or gas fields in the United States. Other Federal and State government agencies, as well as industry, use the EIA Oil and Gas Field Code Master List as the standard for field identification. A machine-readable version of the Oil and Gas Field Code Master List is available from the National Technical Information Service.

  15. Five-axis rough machining for impellers

    Institute of Scientific and Technical Information of China (English)

    Ruolong QI; Weijun LIU; Hongyou BIAN; Lun LI

    2009-01-01

    The most important components used in aerospace, ships, and automobiles are designed with free-form surfaces. An impeller is one of the most important components that is difficult to machine because of its twisted blades. Rough machining is recognized as the most crucial procedure influencing machining efficiency and is critical for the finishing process. An integrated rough machining course with detailed algorithms is presented in this paper. An algorithm for determining the minimum distance between two surfaces is applied to estimate the tool size. The space between two blades that will be cleared from the roughcast is divided to generate CC points. The tool axis vector is confirmed based on flank milling using a simple method that could eliminate global interference between the tool and the blades. The result proves that the machining methodology presented in this paper is useful and successful.

  16. Adaptive machine and its thermodynamic costs

    Science.gov (United States)

    Allahverdyan, Armen E.; Wang, Q. A.

    2013-03-01

    We study the minimal thermodynamically consistent model for an adaptive machine that transfers particles from a higher chemical potential reservoir to a lower one. This model describes essentials of the inhomogeneous catalysis. It is supposed to function with the maximal current under uncertain chemical potentials: if they change, the machine tunes its own structure fitting it to the maximal current under new conditions. This adaptation is possible under two limitations: (i) The degree of freedom that controls the machine's structure has to have a stored energy (described via a negative temperature). The origin of this result is traced back to the Le Chatelier principle. (ii) The machine has to malfunction at a constant environment due to structural fluctuations, whose relative magnitude is controlled solely by the stored energy. We argue that several features of the adaptive machine are similar to those of living organisms (energy storage, aging).

  17. Reverse hypothesis machine learning a practitioner's perspective

    CERN Document Server

    Kulkarni, Parag

    2017-01-01

    This book introduces the paradigm of reverse hypothesis machines (RHM), focusing on knowledge innovation and machine learning. Knowledge-acquisition-based learning is constrained by large volumes of data and is time consuming; hence knowledge-innovation-based learning is the need of the hour. Since under-learning results in cognitive inabilities and over-learning compromises freedom, there is a need for optimal machine learning. All existing learning techniques rely on mapping input and output and establishing mathematical relationships between them. Though methods change, the paradigm remains the same: the forward hypothesis machine paradigm, which tries to minimize uncertainty. The RHM, on the other hand, makes use of uncertainty for creative learning. The approach uses limited data to help identify new and surprising solutions. It focuses on improving learnability, unlike traditional approaches, which focus on accuracy. The book is useful as a reference book for machine learning researchers and professionals as ...

  18. [Psychic experience of pathological machine gamblers].

    Science.gov (United States)

    Avtonomov, D A

    2011-01-01

    The author presents the results of a study of the psychopathological phenomena and subjective experience of 38 patients with the verified diagnosis "Pathological addiction to gambling" (F63.0) without psychotic disorders. In 84.2% of cases, the patients preferred slot machine gambling. The causes of such preferences were analyzed. The phenomenology of the psychic experience of the patients who are slot machine gamblers is presented. With the formation of the addiction, the gamblers began to think about slot machines as human beings (creatures), feel attachment to them, see individuality in them, and experience slot machines as live and real partners in imaginary or even verbal dialogues. Two main "forms of contact" with slot machines were elicited and described: verbal and non-verbal. The gambler's image of himself gradually becomes depleted, and he experiences a "loss of contact" with his own features, qualities, wishes, and intentions. The data obtained may be helpful in psychotherapeutic and rehabilitative work with such patients.

  19. Quantum cloning machines and the applications

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Heng, E-mail: hfan@iphy.ac.cn [Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190 (China); Collaborative Innovation Center of Quantum Matter, Beijing 100190 (China); Wang, Yi-Nan; Jing, Li [School of Physics, Peking University, Beijing 100871 (China); Yue, Jie-Dong [Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190 (China); Shi, Han-Duo; Zhang, Yong-Liang; Mu, Liang-Zhu [School of Physics, Peking University, Beijing 100871 (China)

    2014-11-20

    No-cloning theorem is fundamental for quantum mechanics and for quantum information science that states an unknown quantum state cannot be cloned perfectly. However, we can try to clone a quantum state approximately with the optimal fidelity, or instead, we can try to clone it perfectly with the largest probability. Thus various quantum cloning machines have been designed for different quantum information protocols. Specifically, quantum cloning machines can be designed to analyze the security of quantum key distribution protocols such as BB84 protocol, six-state protocol, B92 protocol and their generalizations. Some well-known quantum cloning machines include universal quantum cloning machine, phase-covariant cloning machine, the asymmetric quantum cloning machine and the probabilistic quantum cloning machine. In the past years, much progress has been made in studying quantum cloning machines and their applications and implementations, both theoretically and experimentally. In this review, we will give a complete description of those important developments about quantum cloning and some related topics. On the other hand, this review is self-consistent, and in particular, we try to present some detailed formulations so that further study can be taken based on those results.

  20. Strong Trinucleotide Circular Codes

    Directory of Open Access Journals (Sweden)

    Christian J. Michel

    2011-01-01

    Full Text Available Recently, we identified a hierarchy relation between trinucleotide comma-free codes and trinucleotide circular codes (see our previous works. Here, we extend our hierarchy with two new classes of codes, called DLD and LDL codes, which are stronger than the comma-free codes. We also prove that no circular code with 20 trinucleotides is a DLD code and that a circular code with 20 trinucleotides is comma-free if and only if it is a LDL code. Finally, we point out the possible role of the symmetric group ∑4 in the mathematical study of trinucleotide circular codes.
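
    The comma-free property against which the new DLD and LDL classes are compared can be tested directly from its definition: no codeword may appear in a reading frame shifted by one or two positions within the concatenation of any two codewords. The trinucleotide set below is made up for illustration.

        # Check the comma-free property of a set of trinucleotides (made-up example).
        def is_comma_free(code):
            code = set(code)
            for w1 in code:
                for w2 in code:
                    pair = w1 + w2
                    # Reading frames shifted by 1 or 2 must contain no codeword.
                    if pair[1:4] in code or pair[2:5] in code:
                        return False
            return True

        X = {"AAC", "GTC", "GAT"}
        print(is_comma_free(X))        # True for this small example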

  1. Practices in Code Discoverability: Astrophysics Source Code Library

    CERN Document Server

    Allen, Alice; Nemiroff, Robert J; Shamir, Lior

    2012-01-01

    Here we describe the Astrophysics Source Code Library (ASCL), which takes an active approach to sharing astrophysical source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL now has over 340 codes in it and continues to grow. In 2011, the ASCL (http://ascl.net) has on average added 19 new codes per month. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available ei...

  2. Performance of a Folded-Strip Toroidally Wound Induction Machine

    DEFF Research Database (Denmark)

    Jensen, Bogi Bech; Jack, Alan G.; Atkinson, Glynn J.

    2011-01-01

    This paper presents the measured experimental results from a four-pole toroidally wound induction machine, where the stator is constructed as a pre-wound foldable strip. It shows that if the machine is axially restricted in length, the toroidally wound induction machine can have substantially...

  3. On Using Very Large Target Vocabulary for Neural Machine Translation

    OpenAIRE

    Jean, Sébastien; Cho, Kyunghyun; Memisevic, Roland; Bengio, Yoshua

    2014-01-01

    Neural machine translation, a recently proposed approach to machine translation based purely on neural networks, has shown promising results compared to the existing approaches such as phrase-based statistical machine translation. Despite its recent success, neural machine translation has its limitation in handling a larger vocabulary, as training complexity as well as decoding complexity increase proportionally to the number of target words. In this paper, we propose a method that allows us ...

  4. Audio Signal Generator System Based On State Machines

    Institute of Scientific and Technical Information of China (English)

    王维喜

    2009-01-01

    A state machine can make program design quicker, simpler and more efficient. This paper describes in detail the model of a state machine and the ideas behind its design, and presents the design process of a state machine through the example of an audio signal generator system based on LabVIEW. The result shows that introducing a state machine can make complex design processes clearer and the revision of programs easier.

  5. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    Science.gov (United States)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes and maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and
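
    For a convolutional code, trellis-based MLD reduces to the textbook Viterbi recursion; the sketch below performs hard-decision decoding of the standard rate-1/2, constraint-length-3 code with octal generators (7, 5), an illustrative choice rather than an example from the paper.

        # Hard-decision Viterbi decoding of the rate-1/2, K=3 code with generators (7, 5).
        G = (0b111, 0b101)

        def branch(state, bit):
            reg = (bit << 2) | state
            out = [bin(reg & g).count("1") % 2 for g in G]
            return out, reg >> 1                      # coded output bits, next state

        def viterbi(received):                        # received: flat list of coded bits
            metric, paths = {0: 0}, {0: []}           # start in the all-zero state
            for i in range(0, len(received), 2):
                r = received[i:i + 2]
                new_metric, new_paths = {}, {}
                for s, m in metric.items():
                    for bit in (0, 1):
                        out, ns = branch(s, bit)
                        bm = m + sum(a != b for a, b in zip(out, r))   # Hamming metric
                        if ns not in new_metric or bm < new_metric[ns]:
                            new_metric[ns], new_paths[ns] = bm, paths[s] + [bit]
                metric, paths = new_metric, new_paths
            return paths[min(metric, key=metric.get)]  # best surviving path

        coded = [1, 1, 1, 0, 0, 0, 0, 1]               # this code's encoding of [1, 0, 1, 1]
        print(viterbi(coded))                          # recovers [1, 0, 1, 1]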

  6. One- and two-dimensional Stirling machine simulation using experimentally generated reversing flow turbuulence models

    Energy Technology Data Exchange (ETDEWEB)

    Goldberg, L.F. [Univ. of Minnesota, Minneapolis, MN (United States)

    1990-08-01

    The activities described in this report do not constitute a continuum but rather a series of linked smaller investigations in the general area of one- and two-dimensional Stirling machine simulation. The initial impetus for these investigations was the development and construction of the Mechanical Engineering Test Rig (METR) under a grant awarded by NASA to Dr. Terry Simon at the Department of Mechanical Engineering, University of Minnesota. The purpose of the METR is to provide experimental data on oscillating turbulent flows in Stirling machine working fluid flow path components (heater, cooler, regenerator, etc.) with particular emphasis on laminar/turbulent flow transitions. Hence, the initial goals for the grant awarded by NASA were, broadly, to provide computer simulation backup for the design of the METR and to analyze the results produced. This was envisaged in two phases: first, to apply an existing one-dimensional Stirling machine simulation code to the METR and, second, to adapt a two-dimensional fluid mechanics code which had been developed for simulating high Rayleigh number buoyant cavity flows to the METR. The key aspect of this latter component was the development of an appropriate turbulence model suitable for generalized application to Stirling simulation. A final step was then to apply the two-dimensional code to an existing Stirling machine for which adequate experimental data exist. The work described herein was carried out over a period of three years on a part-time basis. Forty percent of the first year's funding was provided as a match to the NASA funds by the Underground Space Center, University of Minnesota, which also made its computing facilities available to the project at no charge.

  7. Machine Code and Metaphysics: A Perspective on Software Engineering

    OpenAIRE

    2015-01-01

    A major, but too-little-considered problem for Software Engineering (SE) is a lack of consensus concerning Computer Science (CS) and how this relates to developing unpredictable computing technology. We consider some implications for SE of computer systems' differing scientific bases, exemplified with the International Standards Organisation's Open Systems Interconnection (ISO-OSI) layered architectural model. An architectural view allows comparison of computing technology components facilitatin

  8. Bionic machines and systems

    Energy Technology Data Exchange (ETDEWEB)

    Halme, A.; Paanajaervi, J. (eds.)

    2004-07-01

    Introduction Biological systems form a versatile and complex entirety on our planet. One evolutionary branch of primates, called humans, has created an extraordinary skill, called technology, by the aid of which it nowadays dominates life on the planet. Humans use technology for producing and harvesting food, healthcare and reproduction, increasing their capability to commute and communicate, defending their territory etc., and to develop more technology. As a result of this, humans have become very dependent on technology, so much so that they have been forced to form a specialized class of humans, called engineers, who take care of the knowledge of technology, developing it further and transferring it to later generations. Until now, technology has been relatively independent from biology, although some of its branches, e.g. biotechnology and biomedical engineering, have traditionally been in close contact with it. There exists, however, an increasing interest in expanding the interface between technology and biology, either by directly utilizing biological processes or materials by combining them with 'dead' technology, or by mimicking in technological solutions the biological innovations created by evolution. The latter theme is the focus of this report, which has been written as the proceedings of the post-graduate seminar 'Bionic Machines and Systems' held at the HUT Automation Technology Laboratory in autumn 2003. The underlying idea of the seminar was to analyze biological species by considering them as 'robotic machines' having various functional subsystems, such as those for energy, motion and motion control, perception, navigation, mapping and localization. We were also interested in intelligent capabilities, such as learning and communication, and in social structures like swarming behavior and its mechanisms. The word 'bionic machine' comes from the book which was among the initial material when starting our mission to the fascinating world

  9. An Approach for Implementing State Machines with Online Testability

    Directory of Open Access Journals (Sweden)

    P. K. Lala

    2010-01-01

    Full Text Available During the last two decades, a significant amount of research has been performed to simplify the detection of transient or soft errors in VLSI-based digital systems. This paper proposes an approach for implementing state machines that uses a 2-hot code for state encoding. State machines designed using this approach allow online detection of soft errors in registers and output logic. The 2-hot code considerably reduces the number of required flip-flops and leads to a relatively straightforward implementation of next-state and output logic. A new way of designing output logic for online fault detection is also presented.
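
    The 2-hot encoding idea lends itself to a compact software illustration. The following Python sketch (an illustrative analogue, not the paper's hardware implementation) encodes each state with exactly two set bits, so that any single bit flip in the state register yields an invalid code word that can be flagged online; the state names, the transition table, and the checking routine are assumptions made for the example.

      from itertools import combinations

      # Assign each state a distinct 2-hot code word: exactly two of the four
      # register bits are set for every valid state (illustrative assignment).
      STATE_BITS = 4
      CODES = {name: (1 << i) | (1 << j)
               for name, (i, j) in zip(["IDLE", "LOAD", "RUN", "DONE"],
                                       combinations(range(STATE_BITS), 2))}
      VALID = set(CODES.values())

      # Hypothetical next-state table keyed by (state, input symbol).
      NEXT = {("IDLE", "start"): "LOAD", ("LOAD", "ready"): "RUN",
              ("RUN", "stop"): "DONE", ("DONE", "reset"): "IDLE"}

      def check(register):
          """Online check: a valid 2-hot word has exactly two bits set."""
          if bin(register).count("1") != 2 or register not in VALID:
              raise RuntimeError(f"soft error detected in state register: {register:04b}")

      def step(state, symbol, register):
          check(register)                      # detect a flipped bit before using the state
          nxt = NEXT.get((state, symbol), state)
          return nxt, CODES[nxt]

      if __name__ == "__main__":
          state, reg = "IDLE", CODES["IDLE"]
          for sym in ["start", "ready", "stop", "reset"]:
              state, reg = step(state, sym, reg)
              print(state, format(reg, "04b"))
          try:
              check(reg ^ 0b0001)              # simulate a single bit flip
          except RuntimeError as err:
              print(err)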

  10. Machine Learning and Radiology

    Science.gov (United States)

    Wang, Shijun; Summers, Ronald M.

    2012-01-01

    In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focus on six categories of applications in radiology: medical image segmentation, registration, computer-aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. PMID:22465077

  11. The basic anaesthesia machine.

    Science.gov (United States)

    Gurudatt, Cl

    2013-09-01

    After WTG Morton's first public demonstration in 1846 of the use of ether as an anaesthetic agent, anaesthesiologists did not require a machine to deliver anaesthesia to patients for many years. After the introduction of oxygen and nitrous oxide in the form of compressed gases in cylinders, there was a need to mount these cylinders on a metal frame. This stimulated many people to attempt to construct the anaesthesia machine. HEG Boyle in 1917 modified Gwathmey's machine, and this became popular as the Boyle anaesthesia machine. Though many changes have been made to the original Boyle machine, the basic structure remains the same. All subsequent changes have been made mainly to improve patient safety. Knowing the details of the basic machine will enable the trainee to understand the additional improvements. It is also important for every practicing anaesthesiologist to have a thorough knowledge of the basic anaesthesia machine for the safe conduct of anaesthesia.

  12. The basic anaesthesia machine

    Directory of Open Access Journals (Sweden)

    C L Gurudatt

    2013-01-01

    Full Text Available After WTG Morton's first public demonstration in 1846 of the use of ether as an anaesthetic agent, anaesthesiologists did not require a machine to deliver anaesthesia to patients for many years. After the introduction of oxygen and nitrous oxide in the form of compressed gases in cylinders, there was a need to mount these cylinders on a metal frame. This stimulated many people to attempt to construct the anaesthesia machine. HEG Boyle in 1917 modified Gwathmey's machine, and this became popular as the Boyle anaesthesia machine. Though many changes have been made to the original Boyle machine, the basic structure remains the same. All subsequent changes have been made mainly to improve patient safety. Knowing the details of the basic machine will enable the trainee to understand the additional improvements. It is also important for every practicing anaesthesiologist to have a thorough knowledge of the basic anaesthesia machine for the safe conduct of anaesthesia.

  13. XII International Conference on the Theory of Machines and Mechanisms

    CERN Document Server

    Bílek, Martin; Žabka, Petr

    2017-01-01

    This book presents the most recent advances in the research of machines and mechanisms. It collects 54 reviewed papers presented at the XII International Conference on the Theory of Machines and Mechanisms (TMM 2016) held in Liberec, Czech Republic, September 6-8, 2016. This volume offers an international selection of the most important new results and developments, grouped in six different parts, representing a well-balanced overview and spanning the general theory of machines and mechanisms; analysis and synthesis of planar and spatial mechanisms; linkages and cams; robots and manipulators; dynamics of machines and mechanisms; rotor dynamics; computational mechanics; vibration and noise in machines; optimization of mechanisms and machines; mechanisms of textile machines; and mechatronics and the control and monitoring systems of machines. This conference is traditionally organised every four years under the auspices of the international organisation IFToMM and the Czech Society for Mechanics.

  14. Simulation and Optimization of Turning-Milling Complex Machining

    Directory of Open Access Journals (Sweden)

    Shihong Guo

    2013-05-01

    Full Text Available In this study, a turning-milling complex machining simulation platform is established based on the VERICUT NC machining simulation and optimization platform, with the WFL M65 turning-milling complex machining center as the research object. Taking a barrel body part as an example, simulated machining and checking of related process issues are carried out, and the factors affecting processing efficiency are analyzed and optimized. The application indicates that the research results effectively realize the simulation of the turning-milling complex machining process and the verification and process optimization of the NC machining program, improve processing efficiency and quality, raise the application level of the enterprise's turning-milling complex machining center, and promote the development of turning-milling complex machining technology.

  15. Laboratory directed research and development final report: Intelligent tools for on-machine acceptance of precision machined components

    Energy Technology Data Exchange (ETDEWEB)

    Christensen, N.G.; Harwell, L.D.; Hazelton, A.

    1997-02-01

    On-Machine Acceptance (OMA) is an agile manufacturing concept being developed for machine tools at SNL. The concept behind OMA is the integration of product design, fabrication, and qualification processes by using the machining center as a fabrication and inspection tool. This report documents the final results of a Laboratory Directed Research and Development effort to qualify OMA.

  16. Parallelization and performance tuning of molecular dynamics code with OpenMP

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    An OpenMP approach was proposed to parallelize a sequential molecular dynamics (MD) code on shared-memory machines. When a code is converted from sequential to parallel form, data dependence is a main problem. A traditional sequential molecular dynamics code was dissected to find the data-dependent segments in it, and two different methods, the recover method and the backward mapping method, were used to eliminate those data dependencies in order to parallelize this sequential MD code. The performance of the parallelized MD code was analyzed using performance analysis tools. The test results show that the computing size of this code increases sharply from 1 million atoms before parallelization to 20 million atoms after parallelization, and the wall-clock time of the computation is greatly reduced. Some hot spots in the code were found and optimized with improved algorithms. The efficiency of parallel computing is 30% higher than before, computation time is saved, and larger-scale calculation problems can be solved.
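
    The abstract does not give details of the recover and backward-mapping methods, but the core obstacle it names—a data dependence on a shared array in the pairwise force loop—can be illustrated with a short sketch. The Python code below (written with a process pool rather than the OpenMP/C of the original work) gives each worker a private force accumulator and performs an explicit reduction afterwards, which is the generic way such a dependence is removed; the toy pair force and the chunking scheme are assumptions made for the example.

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def partial_forces(args):
          """Accumulate forces into a private array so no worker writes to shared state."""
          pos, lo, hi = args
          n = len(pos)
          f = np.zeros_like(pos)
          for i in range(lo, hi):
              for j in range(i + 1, n):
                  r = pos[j] - pos[i]
                  fij = r / (np.dot(r, r) + 1e-12) ** 1.5   # toy inverse-square pair force
                  f[i] += fij
                  f[j] -= fij
          return f

      def forces(pos, workers=4):
          bounds = np.linspace(0, len(pos), workers + 1, dtype=int)
          chunks = [(pos, bounds[k], bounds[k + 1]) for k in range(workers)]
          with ProcessPoolExecutor(max_workers=workers) as ex:
              partials = list(ex.map(partial_forces, chunks))
          return sum(partials)                               # reduction replaces the shared write

      if __name__ == "__main__":
          positions = np.random.default_rng(0).random((200, 3))
          print(forces(positions).sum(axis=0))               # net force ~ 0 by Newton's third law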

  17. (Almost) practical tree codes

    KAUST Repository

    Khina, Anatoly

    2016-08-15

    We consider the problem of stabilizing an unstable plant driven by bounded noise over a digital noisy communication link, a scenario at the heart of networked control. To stabilize such a plant, one needs real-time encoding and decoding with an error probability profile that decays exponentially with the decoding delay. The works of Schulman and Sahai over the past two decades have developed the notions of tree codes and anytime capacity, and provided the theoretical framework for studying such problems. Nonetheless, there has been little practical progress in this area due to the absence of explicit constructions of tree codes with efficient encoding and decoding algorithms. Recently, linear time-invariant tree codes were proposed to achieve the desired result under maximum-likelihood decoding. In this work, we take one more step towards practicality, by showing that these codes can be efficiently decoded using sequential decoding algorithms, up to some loss in performance (and with some practical complexity caveats). We supplement our theoretical results with numerical simulations that demonstrate the effectiveness of the decoder in a control system setting.

  18. Part Machinability Evaluation System

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    In the early design period, estimating the machinability of a part or of the whole product is useful for considering the function and process requirements of the product at the same time, so as to globally optimize design decisions. This paper presents a part machinability evaluation system, discusses the general restrictions on part machinability, and implements the inspection of these restrictions using the relation between the tool scan space and the part model. During system development, extensibility and understandability were considered, and an independent restriction algorithm library and a general function library were set up. Additionally, the system has an interpreter and a knowledge manager.

  19. Machine Tool Software

    Science.gov (United States)

    1988-01-01

    A NASA-developed software package has played a part in the technical education of students who major in Mechanical Engineering Technology at William Rainey Harper College. Professor Hack has been using APT (Automatically Programmed Tool) software since 1969 in his CAD/CAM (Computer Aided Design and Manufacturing) curriculum. Professor Hack teaches the use of APT programming languages for the control of metal cutting machines. Machine tool instructions are geometry definitions written in the APT language to constitute a "part program." The part program is processed by the machine tool. CAD/CAM students go from writing a program to cutting steel in the course of a semester.

  20. Analysis of synchronous machines

    CERN Document Server

    Lipo, TA

    2012-01-01

    Analysis of Synchronous Machines, Second Edition is a thoroughly modern treatment of an old subject. Courses generally teach about synchronous machines by introducing the steady-state per phase equivalent circuit without a clear, thorough presentation of the source of this circuit representation, which is a crucial aspect. Taking a different approach, this book provides a deeper understanding of complex electromechanical drives. Focusing on the terminal rather than on the internal characteristics of machines, the book begins with the general concept of winding functions, describing the placeme

  1. Database machine performance

    Energy Technology Data Exchange (ETDEWEB)

    Cesarini, F.; Salza, S.

    1987-01-01

    This book is devoted to the important problem of database machine performance evaluation. The book presents several methodological proposals and case studies, that have been developed within an international project supported by the European Economic Community on Database Machine Evaluation Techniques and Tools in the Context of the Real Time Processing. The book gives an overall view of the modeling methodologies and the evaluation strategies that can be adopted to analyze the performance of the database machine. Moreover, it includes interesting case studies and an extensive bibliography.

  2. Virtual Machine Introspection

    Directory of Open Access Journals (Sweden)

    S C Rachana

    2014-06-01

    Full Text Available Cloud computing is an Internet-based computing solution which provides resources in an effective manner. A very serious issue in cloud computing is security, which is a major obstacle to the adoption of the cloud. The most important threats to cloud computing are multitenancy, availability, loss of control, loss of data, outside attacks, DoS attacks, malicious insiders, etc. Among the many security issues in the cloud, virtual machine security is one of the most serious. Thus, monitoring of virtual machines is essential. The paper proposes a Virtual Machine Introspection [VMI] system to secure virtual machines from Distributed Denial of Service [DDoS] and Zombie attacks.

  3. Virtual Machine Introspection

    Directory of Open Access Journals (Sweden)

    S C Rachana

    2015-11-01

    Full Text Available Cloud computing is an Internet-based computing solution which provides resources in an effective manner. A very serious issue in cloud computing is security, which is a major obstacle to the adoption of the cloud. The most important threats to cloud computing are multitenancy, availability, loss of control, loss of data, outside attacks, DoS attacks, malicious insiders, etc. Among the many security issues in the cloud, virtual machine security is one of the most serious. Thus, monitoring of virtual machines is essential. The paper proposes a Virtual Machine Introspection [VMI] system to secure virtual machines from Distributed Denial of Service [DDoS] and Zombie attacks.

  4. Machine Learning for Hackers

    CERN Document Server

    Conway, Drew

    2012-01-01

    If you're an experienced programmer interested in crunching data, this book will get you started with machine learning-a toolkit of algorithms that enables computers to train themselves to automate useful tasks. Authors Drew Conway and John Myles White help you understand machine learning and statistics tools through a series of hands-on case studies, instead of a traditional math-heavy presentation. Each chapter focuses on a specific problem in machine learning, such as classification, prediction, optimization, and recommendation. Using the R programming language, you'll learn how to analyz

  5. Evaluating Arabic to English Machine Translation

    Directory of Open Access Journals (Sweden)

    Laith S. Hadla

    2014-11-01

    Full Text Available Online text machine translation systems are widely and freely used throughout the world. Most of these systems use statistical machine translation (SMT), which is based on a corpus full of translation examples from which the system learns how to translate correctly. Online text machine translation systems differ widely in their effectiveness, so their effectiveness must be evaluated fairly. Generally, manual (human) evaluation of machine translation (MT) systems is better than automatic evaluation, but it is not feasible to use. Many MT evaluation approaches are based on the distance or similarity of the MT candidate output to a set of reference translations. This study presents a comparison of the effectiveness of two free online machine translation systems (Google Translate and the Babylon machine translation system) in translating Arabic to English. Among the many automatic methods used to evaluate machine translators is the Bilingual Evaluation Understudy (BLEU) method, which is used here to evaluate the translation quality of the two free online machine translation systems under consideration. A corpus consisting of more than 1000 Arabic sentences, with two reference English translations for each Arabic sentence, is used in this study. This corpus of Arabic sentences and their English translations contains 4169 Arabic words, of which 2539 are unique, and is released online for use by researchers. The Arabic sentences are distributed among four basic sentence functions (declarative, interrogative, exclamatory, and imperative). The experimental results show that the Google machine translation system is better than the Babylon machine translation system in terms of precision of translation from Arabic to English.
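
    BLEU itself is easy to illustrate. The Python sketch below computes a simplified sentence-level BLEU score (clipped modified n-gram precisions combined with a brevity penalty) against multiple references; it is an illustration of the metric, not the authors' evaluation code, and a production implementation such as nltk.translate.bleu_score would normally be used instead.

      import math
      from collections import Counter

      def ngrams(tokens, n):
          return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

      def sentence_bleu(candidate, references, max_n=4):
          """Simplified sentence-level BLEU with multiple references (no smoothing)."""
          cand = candidate.split()
          refs = [r.split() for r in references]
          log_precisions = []
          for n in range(1, max_n + 1):
              cand_counts = ngrams(cand, n)
              if not cand_counts:
                  return 0.0
              # Clip each candidate n-gram count by its maximum count in any reference.
              max_ref = Counter()
              for ref in refs:
                  for gram, cnt in ngrams(ref, n).items():
                      max_ref[gram] = max(max_ref[gram], cnt)
              clipped = sum(min(cnt, max_ref[gram]) for gram, cnt in cand_counts.items())
              p = clipped / sum(cand_counts.values())
              if p == 0:
                  return 0.0
              log_precisions.append(math.log(p))
          # Brevity penalty against the reference closest in length to the candidate.
          ref_len = min((len(r) for r in refs), key=lambda l: (abs(l - len(cand)), l))
          bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / max(len(cand), 1))
          return bp * math.exp(sum(log_precisions) / max_n)

      print(sentence_bleu("the cat sits on the mat",
                          ["the cat is sitting on the mat",
                           "a cat sits on the mat"]))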

  6. Development of an Enhanced Machine Control Application Program with Python for the Newton Interpolation Method

    Directory of Open Access Journals (Sweden)

    Alexander Agung Santoso Gunawan

    2012-05-01

    machine is EMC (Enhanced Machine Control) with the AXIS GUI (Graphical User Interface) on the Linux Ubuntu operating system. Newton interpolation is used to create a curve based on several points specified by the user. By converting this curve into G-Code, which can be read by the CNC machine, the machine can move according to the curve designed by the user. This research is an initial study towards customizing the CNC machine and will be continued to meet user needs. The research produced a program that runs correctly for up to 4 input pairs; a higher number of inputs causes oscillation in the interpolation curve.
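
    The workflow the paper describes—fit a Newton interpolating polynomial through user-chosen points, sample it, and emit G-Code line segments—can be sketched briefly in Python. The G-Code dialect (G01 linear moves), the feed rate, and the sampling step below are assumptions for illustration, not the authors' EMC/AXIS integration.

      def divided_differences(xs, ys):
          """Newton divided-difference coefficients."""
          coef = list(ys)
          n = len(xs)
          for j in range(1, n):
              for i in range(n - 1, j - 1, -1):
                  coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
          return coef

      def newton_eval(coef, xs, x):
          """Evaluate the Newton form with Horner-like nesting."""
          result = coef[-1]
          for k in range(len(coef) - 2, -1, -1):
              result = result * (x - xs[k]) + coef[k]
          return result

      def curve_to_gcode(xs, ys, steps=50, feed=200):
          """Sample the interpolating curve and emit linear G01 moves."""
          coef = divided_differences(xs, ys)
          lines = ["G21 ; millimetres", "G90 ; absolute coordinates",
                   f"G00 X{xs[0]:.3f} Y{ys[0]:.3f}"]
          for i in range(1, steps + 1):
              x = xs[0] + (xs[-1] - xs[0]) * i / steps
              y = newton_eval(coef, xs, x)
              lines.append(f"G01 X{x:.3f} Y{y:.3f} F{feed}")
          return "\n".join(lines)

      # Four user-defined points, matching the paper's observation that the
      # program behaves well for up to four input pairs.
      print(curve_to_gcode([0.0, 10.0, 20.0, 30.0], [0.0, 5.0, 3.0, 8.0]))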

  7. Reliable Software Development for Machine Protection Systems

    CERN Document Server

    Anderson, D; Dragu, M; Fuchsberger, K; Garnier, JC; Gorzawski, AA; Koza, M; Krol, K; Misiowiec, K; Stamos, K; Zerlauth, M

    2014-01-01

    The Controls software for the Large Hadron Collider (LHC) at CERN, with more than 150 million lines of code, is among the largest known code bases in the world. Industry has been applying Agile software engineering techniques for more than two decades now, and the advantages of these techniques can no longer be ignored when managing the code base for large projects within the accelerator community. Furthermore, CERN is a particular environment due to high personnel turnover and manpower limitations, where applying Agile processes can improve both the management of the code base and its quality. This paper presents the successful application of the Agile software development process Scrum for machine protection systems at CERN, the quality standards and infrastructure introduced together with the Agile process, as well as the challenges encountered in adapting it to the CERN environment.

  8. Model Children's Code.

    Science.gov (United States)

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  9. Characterization of machining quality attributes based on spindle probe, coordinate measuring machine, and surface roughness data

    Directory of Open Access Journals (Sweden)

    Tzu-Liang Bill Tseng

    2014-04-01

    Full Text Available This study investigates the effects of machining parameters as they relate to the quality characteristics of machined features. The two most important quality characteristics are taken to be dimensional accuracy and surface roughness. Before any newly acquired machine tool is put to use for production, it is important to test the machine in a systematic way to find out how different parameter settings affect machining quality. The empirical verification was made by conducting a Design of Experiments (DOE) with 3 levels and 3 factors on a state-of-the-art Cincinnati Hawk Arrow 750 Vertical Machining Center (VMC). Data analysis revealed that the significant factor was the hardness of the material and the significant interaction effect was Hardness × Feed for dimensional accuracy, while the significant factor was Speed for surface roughness. Since the capability of the instruments used to measure the quality characteristics is equally important, a comparison was made between the VMC touch probe readings and the measurements from a Mitutoyo coordinate measuring machine (CMM) on bore diameters. A machine-mounted touch probe has gained wide acceptance in recent years, as it is more suitable for the modern manufacturing environment. The data showed that the VMC touch probe has capability suitable for the production environment. The test results can be incorporated into the process plan to help maintain machining quality in subsequent runs.
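
    A 3-level, 3-factor design of the kind described can be summarized with a short script. The Python sketch below computes main-effect means and the Hardness × Feed interaction table from a long-format results table; the factor names and response values are made up for illustration and are not the study's data.

      import pandas as pd

      # Hypothetical long-format results from a 3-level, 3-factor DOE run.
      # 'accuracy_dev' stands in for the measured dimensional deviation (mm).
      data = pd.DataFrame({
          "hardness": ["soft", "soft", "medium", "medium", "hard", "hard"],
          "feed":     ["low",  "high", "low",    "high",   "low",  "high"],
          "speed":    ["low",  "mid",  "high",   "low",    "mid",  "high"],
          "accuracy_dev": [0.012, 0.018, 0.015, 0.022, 0.020, 0.031],
      })

      # Main effects: mean response at each level of each factor.
      for factor in ["hardness", "feed", "speed"]:
          print(data.groupby(factor)["accuracy_dev"].mean(), "\n")

      # Interaction table for the Hardness x Feed effect reported as significant.
      print(data.pivot_table(values="accuracy_dev", index="hardness",
                             columns="feed", aggfunc="mean"))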

  10. Distributed space-time coding

    CERN Document Server

    Jing, Yindi

    2014-01-01

    Distributed Space-Time Coding (DSTC) is a cooperative relaying scheme that enables high reliability in wireless networks. This brief presents the basic concept of DSTC, its achievable performance, generalizations, code design, and differential use. Recent results on training design and channel estimation for DSTC and the performance of training-based DSTC are also discussed.

  11. Transversal Clifford gates on folded surface codes

    Science.gov (United States)

    Moussa, Jonathan E.

    2016-10-01

    Surface and color codes are two forms of topological quantum error correction in two spatial dimensions with complementary properties. Surface codes have lower-depth error detection circuits and well-developed decoders to interpret and correct errors, while color codes have transversal Clifford gates and better code efficiency in the number of physical qubits needed to achieve a given code distance. A formal equivalence exists between color codes and folded surface codes, but it does not guarantee the transferability of any of these favorable properties. However, the equivalence does imply the existence of constant-depth circuit implementations of logical Clifford gates on folded surface codes. We achieve and improve this result by constructing two families of folded surface codes with transversal Clifford gates. This construction is presented generally for qudits of any dimension. The specific application of these codes to universal quantum computation based on qubit fusion is also discussed.

  12. Some relations between quantum Turing machines and Turing machines

    CERN Document Server

    Sicard, A; Sicard, Andrés; Vélez, Mario

    1999-01-01

    For quantum Turing machines we present three elements: their components, their time evolution operator, and their local transition function. The components are related to deterministic Turing machines, the time evolution operator is related to reversible Turing machines, and the local transition function is related to probabilistic and reversible Turing machines.

  13. Machining of hard-to-machine materials

    OpenAIRE

    2016-01-01

    This bachelor thesis studies the machining of hard-to-machine materials. The first part classifies hard-to-machine materials and then analyses them. The next part focuses on the machinability of the individual alloys. The final part of the thesis is devoted to an experiment, its statistical processing, and the subsequent evaluation.

  14. Toward Automating HIV Identification: Machine Learning for Rapid Identification of HIV-Related Social Media Data.

    Science.gov (United States)

    Young, Sean D; Yu, Wenchao; Wang, Wei

    2017-02-01

    "Social big data" from technologies such as social media, wearable devices, and online searches continue to grow and can be used as tools for HIV research. Although researchers can uncover patterns and insights associated with HIV trends and transmission, the review process is time consuming and resource intensive. Machine learning methods derived from computer science might be used to assist HIV domain experts by learning how to rapidly and accurately identify patterns associated with HIV from a large set of social data. Using an existing social media data set that was associated with HIV and coded by an HIV domain expert, we tested whether 4 commonly used machine learning methods could learn the patterns associated with HIV risk behavior. We used the 10-fold cross-validation method to examine the speed and accuracy of these models in applying that knowledge to detect HIV content in social media data. Logistic regression and random forest resulted in the highest accuracy in detecting HIV-related social data (85.3%), whereas the Ridge Regression Classifier resulted in the lowest accuracy. Logistic regression yielded the fastest processing time (16.98 seconds). Machine learning can enable social big data to become a new and important tool in HIV research, helping to create a new field of "digital HIV epidemiology." If a domain expert can identify patterns in social data associated with HIV risk or HIV transmission, machine learning models could quickly and accurately learn those associations and identify potential HIV patterns in large social data sets.

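    The evaluation protocol described—10-fold cross-validation of several classifiers on expert-coded text—can be sketched with scikit-learn in Python. The feature extraction (TF-IDF), the classifier settings, and the toy labeled examples below are assumptions made for illustration; the study's actual data set and preprocessing are not reproduced here.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression, RidgeClassifier
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline

      # Toy stand-ins for expert-coded social media posts (1 = HIV-related).
      texts  = ["get tested for hiv today", "free hiv testing clinic downtown",
                "great weather for a picnic", "new phone arrived yesterday",
                "prep and regular testing reduce risk", "watching the game tonight"] * 10
      labels = [1, 1, 0, 0, 1, 0] * 10

      models = {
          "logistic_regression": LogisticRegression(max_iter=1000),
          "random_forest":       RandomForestClassifier(n_estimators=100, random_state=0),
          "ridge_classifier":    RidgeClassifier(),
      }

      for name, clf in models.items():
          pipe = make_pipeline(TfidfVectorizer(), clf)
          scores = cross_val_score(pipe, texts, labels, cv=10)   # 10-fold cross-validation
          print(f"{name}: mean accuracy = {scores.mean():.3f}")
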
  15. Machine (bulk) harvest

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This is a summary of machine harvesting activities on Neal Smith National Wildlife Refuge between 1991 and 2008. Information is provided for each year about...

  16. Machine Vision Handbook

    CERN Document Server

    2012-01-01

    The automation of visual inspection is becoming more and more important in modern industry as a consistent, reliable means of judging the quality of raw materials and manufactured goods. The Machine Vision Handbook equips the reader with the practical details required to engineer integrated mechanical-optical-electronic-software systems. Machine vision is first set in the context of basic information on light, natural vision, colour sensing and optics. The physical apparatus required for mechanized image capture – lenses, cameras, scanners and light sources – are discussed followed by detailed treatment of various image-processing methods including an introduction to the QT image processing system. QT is unique to this book, and provides an example of a practical machine vision system along with extensive libraries of useful commands, functions and images which can be implemented by the reader. The main text of the book is completed by studies of a wide variety of applications of machine vision in insp...

  17. Tests of Machine Intelligence

    CERN Document Server

    Legg, Shane

    2007-01-01

    Although the definition and measurement of intelligence is clearly of fundamental importance to the field of artificial intelligence, no general survey of definitions and tests of machine intelligence exists. Indeed few researchers are even aware of alternatives to the Turing test and its many derivatives. In this paper we fill this gap by providing a short survey of the many tests of machine intelligence that have been proposed.

  18. mlpy: Machine Learning Python

    CERN Document Server

    Albanese, Davide; Merler, Stefano; Riccadonna, Samantha; Jurman, Giuseppe; Furlanello, Cesare

    2012-01-01

    mlpy is a Python Open Source Machine Learning library built on top of NumPy/SciPy and the GNU Scientific Libraries. mlpy provides a wide range of state-of-the-art machine learning methods for supervised and unsupervised problems and it is aimed at finding a reasonable compromise among modularity, maintainability, reproducibility, usability and efficiency. mlpy is multiplatform, it works with Python 2 and 3 and it is distributed under GPL3 at the website http://mlpy.fbk.eu.

  19. Human-machine interactions

    Science.gov (United States)

    Forsythe, J. Chris; Xavier, Patrick G.; Abbott, Robert G.; Brannon, Nathan G.; Bernard, Michael L.; Speed, Ann E.

    2009-04-28

    Digital technology utilizing a cognitive model based on human naturalistic decision-making processes, including pattern recognition and episodic memory, can reduce the dependency of human-machine interactions on the abilities of a human user and can enable a machine to more closely emulate human-like responses. Such a cognitive model can enable digital technology to use cognitive capacities fundamental to human-like communication and cooperation to interact with humans.

  20. Machine Learning with Distances

    Science.gov (United States)

    2015-02-16

    and demonstrated their usefulness in experiments. 1 Introduction: The goal of machine learning is to find useful knowledge behind data. Many machine... [212, 172]. However, direct divergence approximators still suffer from the curse of dimensionality. A possible cure for this problem is to combine them... obtain the global optimal solution or even a good local solution without any prior knowledge. For this reason, we decided to introduce the unit-norm