WorldWideScience

Sample records for element code architecture

  1. Elements of Architecture

    DEFF Research Database (Denmark)

    Elements of Architecture explores new ways of engaging architecture in archaeology. It conceives of architecture both as the physical evidence of past societies and as existing beyond the physical environment, considering how people in the past have not just dwelled in buildings but have existed...

  2. Research and Design in Unified Coding Architecture for Smart Grids

    Directory of Open Access Journals (Sweden)

    Gang Han

    2013-09-01

    A standardized and shared information platform is the foundation of the Smart Grid. In order to improve the information integration of the power grid dispatching center and achieve efficient data exchange, sharing and interoperability, a unified coding architecture is proposed. The architecture comprises a coding management layer, a coding generation layer, an information models layer and an application system layer. This hierarchical design allows the whole coding architecture to adapt to different application environments, different interfaces and loosely coupled requirements, and realizes the integrated model management function of the power grids. A life cycle and survival evaluation method for the unified coding architecture is also proposed, ensuring the stability and availability of the coding architecture. Finally, future development directions for Smart Grid coding technology are outlined.
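
    The four layers named above lend themselves to a simple layered sketch. The following Python fragment is purely illustrative (all class and schema names are invented, not taken from the paper): a management layer registers code schemas and a generation layer issues unique object codes under them.

```python
# Hypothetical sketch of a four-layer unified coding architecture,
# loosely following the layers named in the abstract. All names are
# illustrative; the paper does not publish an implementation.

class CodingManagementLayer:
    """Registers code schemas and tracks their life cycle state."""
    def __init__(self):
        self.schemas = {}                    # schema name -> version

    def register(self, name, version):
        self.schemas[name] = version

class CodingGenerationLayer:
    """Generates unique codes for grid objects under a managed schema."""
    def __init__(self, management):
        self.management = management
        self.counter = 0

    def new_code(self, schema, object_type):
        assert schema in self.management.schemas, "unknown schema"
        self.counter += 1
        return f"{schema}-{object_type}-{self.counter:06d}"

# The information-model and application layers would consume these codes,
# e.g. mapping them onto CIM-style equipment models.
mgmt = CodingManagementLayer()
mgmt.register("SG", version=1)
gen = CodingGenerationLayer(mgmt)
print(gen.new_code("SG", "TRANSFORMER"))     # SG-TRANSFORMER-000001
```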

  3. Building code challenging the ethics behind adobe architecture in North Cyprus.

    Science.gov (United States)

    Hurol, Yonca; Yüceer, Hülya; Şahali, Öznem

    2015-04-01

    Adobe masonry is part of the vernacular architecture of Cyprus. Thus, it is possible to use this technology in a meaningful way on the island. On the other hand, although adobe architecture is more sustainable than other building technologies, its use is diminishing in North Cyprus. The application of the Turkish building code in the north of the island has created complications in respect of the use of adobe masonry, because this building code demands that reinforced concrete vertical tie-beams be used together with adobe masonry. The use of reinforced concrete elements together with adobe masonry causes problems in relation to the climatic response of the building, as well as other technical and aesthetic problems. This situation makes the design of adobe masonry complicated, and various types of ethical problems also emerge. The objective of this article is to analyse the ethical problems which arise as a consequence of the restrictive character of the building code, by analysing two case studies and conducting an interview with an architect who was involved with the use of adobe masonry in North Cyprus. According to the results of this article there are ethical problems at various levels in the design of both case studies. These problems are connected to the responsibilities of architects in respect of the social benefit, material production, aesthetics and affordability of the architecture, as well as distrustful behaviour where the obligations of architects to their clients are concerned.

  4. High Efficiency EBCOT with Parallel Coding Architecture for JPEG2000

    Directory of Open Access Journals (Sweden)

    Chiang Jen-Shiun

    2006-01-01

    This work presents a parallel context-modeling coding architecture and a matching arithmetic coder (MQ-coder) for the embedded block coding (EBCOT) unit of the JPEG2000 encoder. Tier-1 of the EBCOT consumes most of the computation time in a JPEG2000 encoding system. The proposed parallel architecture can increase the throughput rate of the context modeling. To match the high throughput rate of the parallel context-modeling architecture, an efficient pipelined architecture for the context-based adaptive arithmetic encoder is proposed. This encoder of JPEG2000 can work at 180 MHz to encode one symbol each cycle. Compared with previous context-modeling architectures, our parallel architectures can improve the throughput rate by up to 25%.

  5. High-speed architecture for the decoding of trellis-coded modulation

    Science.gov (United States)

    Osborne, William P.

    1992-01-01

    Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher level modulation (non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been an interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture, or by reducing the algorithm itself. Designs employing new architectural techniques are now in existence; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.
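
    To make the decoding principle concrete, here is a minimal hard-decision Viterbi decoder for the classic rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5). This is a toy illustration of the algorithm the report builds on, not the reported TCM decoder design.

```python
# Minimal hard-decision Viterbi decoder for the rate-1/2, K=3
# convolutional code with generators (7, 5) octal. Illustrative only.

def encode(bits, state=0):
    out = []
    for b in bits:
        s1, s0 = (state >> 1) & 1, state & 1
        out += [b ^ s1 ^ s0, b ^ s0]          # generators 111 and 101
        state = ((b << 1) | s1) & 3
    return out

def viterbi(received):
    n_states, INF = 4, float("inf")
    metric = [0] + [INF] * (n_states - 1)     # start in state 0
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            s1, s0 = (s >> 1) & 1, s & 1
            for b in (0, 1):                  # hypothesize input bit b
                expect = [b ^ s1 ^ s0, b ^ s0]
                ns = ((b << 1) | s1) & 3
                m = metric[s] + (expect[0] != r[0]) + (expect[1] != r[1])
                if m < new_metric[ns]:        # add-compare-select
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]   # best surviving path

msg = [1, 0, 1, 1, 0, 0]                      # trailing zeros flush the trellis
assert viterbi(encode(msg)) == msg
```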

  6. Reversible machine code and its abstract processor architecture

    DEFF Research Database (Denmark)

    Axelsen, Holger Bock; Glück, Robert; Yokoyama, Tetsuo

    2007-01-01

    A reversible abstract machine architecture and its reversible machine code are presented and formalized. For machine code to be reversible, both the underlying control logic and each instruction must be reversible. A general class of machine instruction sets was proven to be reversible, building...
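
    A minimal sketch of the reversibility requirement described above: if every instruction has an exact inverse, a program can be undone by running the inverse instructions in reverse order. The tiny instruction set and executor below are invented for illustration and are far simpler than the formalized architecture in the paper.

```python
# Toy reversible instruction set: every instruction has an exact inverse,
# so any program can be run backwards and no information is destroyed.
# Illustrative only; see the paper for the formalized architecture.

INVERSE = {"ADD": "SUB", "SUB": "ADD", "XOR": "XOR", "SWAP": "SWAP"}

def step(regs, op, a, b):
    if op == "ADD":    regs[a] = (regs[a] + regs[b]) % 2**32
    elif op == "SUB":  regs[a] = (regs[a] - regs[b]) % 2**32
    elif op == "XOR":  regs[a] ^= regs[b]      # self-inverse
    elif op == "SWAP": regs[a], regs[b] = regs[b], regs[a]
    return regs

def run(regs, program, reverse=False):
    seq = reversed(program) if reverse else program
    for op, a, b in seq:
        step(regs, INVERSE[op] if reverse else op, a, b)
    return regs

prog = [("ADD", 0, 1), ("XOR", 1, 0), ("SWAP", 0, 1)]
start = {0: 7, 1: 5}
end = run(dict(start), prog)
assert run(dict(end), prog, reverse=True) == start   # fully undone
```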

  7. Error Resilience in Current Distributed Video Coding Architectures

    Directory of Open Access Journals (Sweden)

    Tonoli Claudia

    2009-01-01

    In distributed video coding, signal prediction is shifted to the decoder side, therefore placing most of the computational complexity burden at the receiver. Moreover, since no prediction loop exists before transmission, an intrinsic robustness to transmission errors has been claimed. This work evaluates and compares the error resilience performance of two distributed video coding architectures. In particular, we have considered a video codec based on the Stanford architecture (DISCOVER codec) and a video codec based on the PRISM architecture. Specifically, an accurate temporal and rate/distortion based evaluation of the effects of transmission errors for both the considered DVC architectures has been performed and discussed. These approaches have also been compared with H.264/AVC, in the two cases of no error protection and simple FEC error protection. Our evaluations have highlighted in all cases a strong dependence of the behavior of the various codecs on the content of the considered video sequence. In particular, PRISM seems to be particularly well suited for low-motion sequences, whereas DISCOVER provides better performance in the other cases.

  8. Novel power saving architecture for FBG based OCDMA code generation

    Science.gov (United States)

    Osadola, Tolulope B.; Idris, Siti K.; Glesk, Ivan

    2013-10-01

    A novel architecture for generating incoherent, 2-dimensional wavelength hopping-time spreading optical CDMA codes is presented. The architecture is designed to facilitate the reuse of the optical source signal that remains unused after an OCDMA code has been generated using fiber Bragg grating based encoders. Effective utilization of the available optical power is therefore achieved by cascading several OCDMA encoders, thereby enabling 3 dB savings in optical power.

  9. Researching on knowledge architecture of design by analysis based on ASME code

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan

    2003-01-01

    The quality of a knowledge-based system's knowledge architecture is one of the decisive factors in the system's validity and rationality. For designing the ASME code knowledge-based system, this paper presents a knowledge acquisition method which extracts knowledge through document analysis combined with consultation of domain experts. The paper then describes a knowledge architecture of design by analysis based on the related rules in the ASME code. The knowledge in this architecture is divided into two categories: empirical knowledge and ASME code knowledge. As the foundation of the knowledge architecture, a general procedural process of design by analysis, which meets engineering design requirements and designers' conventional mode of working, is generalized and explained in detail in the paper. For the sake of improving the inference efficiency and concurrent computation of the KBS, a kind of knowledge Petri net (KPN) model is proposed and adopted to express the knowledge architecture. Furthermore, for validation and verification of the empirical rules, five knowledge validation and verification theorems are given in the paper. The results of this research are also applicable to designing the knowledge architecture of other ASME codes or other engineering standards. (author)

  10. Optimization and Openmp Parallelization of a Discrete Element Code for Convex Polyhedra on Multi-Core Machines

    Science.gov (United States)

    Chen, Jian; Matuttis, Hans-Georg

    2013-02-01

    We report our experiences with the optimization and parallelization of a discrete element code for convex polyhedra on multi-core machines and introduce a novel variant of the sort-and-sweep neighborhood algorithm. While in theory the code parallelizes ideally, in practice the results on different architectures, with different compilers and performance measurement tools, depend very much on the particle number and the optimization of the code. After difficulties with the interpretation of the data for speedup and efficiency were overcome, respectable parallelization speedups could be obtained.
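
    The sort-and-sweep idea mentioned above can be illustrated compactly. The sketch below is a generic 1-D sweep-and-prune broad phase (not the paper's novel variant): particle bounding intervals are reduced to sorted endpoint events, and one sweep collects all overlapping pairs.

```python
# Generic 1-D sort-and-sweep (sweep-and-prune) broad phase; illustrative
# sketch only, not the variant introduced in the paper. Each particle is
# reduced to an interval [lo, hi] on one axis; sorting the endpoints lets
# all overlapping pairs be collected in a single sweep.

def sweep_and_prune(intervals):
    events = []                               # (coordinate, is_end, id)
    for i, (lo, hi) in enumerate(intervals):
        events.append((lo, 0, i))
        events.append((hi, 1, i))
    events.sort()                             # ties: starts before ends,
    active, pairs = set(), []                 # so touching counts as overlap
    for _, is_end, i in events:
        if is_end:
            active.discard(i)
        else:
            pairs.extend((j, i) for j in active)  # i overlaps all active
            active.add(i)
    return pairs

# Three particles' bounding intervals; 0-1 and 1-2 overlap, 0-2 do not.
print(sweep_and_prune([(0.0, 1.0), (0.8, 2.0), (1.9, 3.0)]))
```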

  11. Computing element evolution towards Exascale and its impact on legacy simulation codes

    International Nuclear Information System (INIS)

    Colin de Verdiere, Guillaume J.L.

    2015-01-01

    In the light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of the next generations of supercomputers. The market analysis underlying this work shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impacts of those evolutions on legacy codes or programming methods. The problems of dissipated power and memory access are discussed and lead to a vision of what an exascale system should be. To survive, programming languages have had to respond to the hardware evolutions, either by evolving or through the creation of new ones. From the previous elements, we elaborate why vectorization, multithreading, data locality awareness and hybrid programming will be the keys to reaching the exascale, implying that it is time to start rewriting codes. (orig.)

  12. Computing element evolution towards Exascale and its impact on legacy simulation codes

    Science.gov (United States)

    Colin de Verdière, Guillaume J. L.

    2015-12-01

    In the light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of the next generations of supercomputers. The market analysis underlying this work shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impacts of those evolutions on legacy codes or programming methods. The problems of dissipated power and memory access are discussed and lead to a vision of what an exascale system should be. To survive, programming languages have had to respond to the hardware evolutions, either by evolving or through the creation of new ones. From the previous elements, we elaborate why vectorization, multithreading, data locality awareness and hybrid programming will be the keys to reaching the exascale, implying that it is time to start rewriting codes.
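
    The rewriting effort both versions of this record call for often starts with vectorization. As a minimal illustration (shown in Python/NumPy for brevity; legacy Fortran or C codes face the same pattern), an element-by-element loop is replaced by a vector expression that maps directly onto SIMD units:

```python
# The kind of rewrite the article argues for, in miniature: replacing an
# element-by-element loop with a vectorized, data-local formulation.
import numpy as np

def axpy_loop(a, x, y):            # scalar loop: one element per iteration
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

def axpy_vector(a, x, y):          # vector form: maps onto SIMD units
    return a * x + y

x = np.arange(10_000, dtype=float)
y = np.ones(10_000)
assert np.allclose(axpy_loop(2.0, x, y), axpy_vector(2.0, x, y))
```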

  13. Particle In Cell Codes on Highly Parallel Architectures

    Science.gov (United States)

    Tableman, Adam

    2014-10-01

    We describe strategies and examples of Particle-In-Cell codes running on Nvidia GPU and Intel Phi architectures. This includes basic implementations in skeleton codes and full-scale development versions (encompassing 1D, 2D, and 3D codes) in Osiris. Both the similarities and differences between Intel's and Nvidia's hardware will be examined. Work supported by grants NSF ACI 1339893, DOE DE SC 000849, DOE DE SC 0008316, DOE DE NA 0001833, and DOE DE FC02 04ER 54780.
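
    For readers unfamiliar with the method, the data-parallel core of a particle-in-cell step is easy to sketch. The fragment below is a generic 1-D electrostatic gather-push-deposit cycle with nearest-grid-point weighting, not Osiris code; it is this uniform per-particle structure that maps well onto GPUs and Intel Phi.

```python
# Minimal 1-D electrostatic particle-in-cell step (gather-push-deposit)
# with nearest-grid-point weighting. Illustrative sketch only.
import numpy as np

def pic_step(x, v, E_grid, dx, dt, qm, L):
    idx = (x / dx).astype(int)                 # cell index of each particle
    E_part = E_grid[idx]                       # gather field to particles
    v = v + qm * E_part * dt                   # push: velocity update
    x = (x + v * dt) % L                       # advance, periodic boundary
    rho = np.zeros_like(E_grid)
    np.add.at(rho, (x / dx).astype(int), 1.0)  # deposit charge to grid
    return x, v, rho

L, n_cells = 1.0, 32
dx = L / n_cells
x = np.random.rand(1000) * L                   # particle positions
v = np.zeros(1000)                             # particle velocities
E = np.sin(2 * np.pi * np.arange(n_cells) * dx)
x, v, rho = pic_step(x, v, E, dx, dt=0.01, qm=-1.0, L=L)
```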

  14. VLSI Architectures for Sliding-Window-Based Space-Time Turbo Trellis Code Decoders

    Directory of Open Access Journals (Sweden)

    Georgios Passas

    2012-01-01

    The VLSI implementation of SISO-MAP decoders used for traditional iterative turbo coding has been investigated in the literature. In this paper, a complete architectural model of a space-time turbo code receiver that includes elementary decoders is presented. These architectures are based on newly proposed building blocks such as a recursive add-compare-select-offset (ACSO) unit and A-, B-, Γ-, and LLR output calculation modules. Measurements of the complexity and decoding delay of several sliding-window-technique-based MAP decoder architectures, together with a proposed parameter set, lead to defining equations and a comparison between those architectures.

  15. High-throughput sample adaptive offset hardware architecture for high-efficiency video coding

    Science.gov (United States)

    Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin

    2018-03-01

    A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the high-efficiency video coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method for rate-distortion cost calculation is proposed to reduce the computational complexity in the mode decision of SAO. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filters architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filters architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture can reach a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meets the real-time requirement of the in-loop filters for the 8K × 4K video format at 132 fps.
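
    The SAO edge-offset operation at the heart of this architecture can be sketched in a few lines. The fragment below classifies each sample of a row against its horizontal neighbors into the standard five edge categories and adds the signaled offset; it is an illustrative software model, not the proposed hardware.

```python
# Software sketch of HEVC sample-adaptive-offset edge classification for
# one pixel row (EO class 0, horizontal neighbors). The five categories
# follow the standard's scheme; this models behavior, not the hardware.
import numpy as np

def sao_edge_offset(row, offsets):
    """offsets: signaled offsets for categories 1..4 (category 0: none)."""
    out = row.astype(int).copy()
    for i in range(1, len(row) - 1):
        a, c, b = int(row[i - 1]), int(row[i]), int(row[i + 1])
        sign = ((c > a) - (c < a), (c > b) - (c < b))
        cat = {(-1, -1): 1, (-1, 0): 2, (0, -1): 2,    # valley / corner
               (1, 0): 3, (0, 1): 3, (1, 1): 4}.get(sign, 0)  # corner / peak
        if cat:
            out[i] = np.clip(c + offsets[cat - 1], 0, 255)
    return out

row = np.array([10, 12, 9, 9, 14, 13], dtype=np.uint8)
print(sao_edge_offset(row, offsets=[2, 1, -1, -2]))
```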

  16. A unified architecture of transcriptional regulatory elements

    DEFF Research Database (Denmark)

    Andersson, Robin; Sandelin, Albin Gustav; Danko, Charles G.

    2015-01-01

    Gene expression is precisely controlled in time and space through the integration of signals that act at gene promoters and gene-distal enhancers. Classically, promoters and enhancers are considered separate classes of regulatory elements, often distinguished by histone modifications. However...... and enhancers are considered a single class of functional element, with a unified architecture for transcription initiation. The context of interacting regulatory elements and the surrounding sequences determine local transcriptional output as well as the enhancer and promoter activities of individual elements....

  17. Distributed Video Coding for Multiview and Video-plus-depth Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo

    The interest in Distributed Video Coding (DVC) systems has grown considerably in the academic world in recent years. With DVC the correlation between frames is exploited at the decoder (joint decoding). The encoder codes the frame independently, performing relatively simple operations. Therefore......, with DVC the complexity is shifted from encoder to decoder, making the coding architecture a viable solution for encoders with limited resources. DVC may empower new applications which can benefit from this reversed coding architecture. Multiview Distributed Video Coding (M-DVC) is the application...... of the to-be-decoded frame. Another key element is the Residual estimation, indicating the reliability of the SI, which is used to calculate the parameters of the correlation noise model between SI and original frame. In this thesis new methods for Inter-camera SI generation are analyzed in the Stereo...

  18. Architectural and Algorithmic Requirements for a Next-Generation System Analysis Code

    Energy Technology Data Exchange (ETDEWEB)

    V.A. Mousseau

    2010-05-01

    This document presents high-level architectural and system requirements for a next-generation system analysis code (NGSAC) to support reactor safety decision-making by plant operators and others, especially in the context of light water reactor plant life extension. The capabilities of NGSAC will be different from those of current-generation codes, not only because computers have evolved significantly in the generations since the current paradigm was first implemented, but because the decision-making processes that need the support of next-generation codes are very different from the decision-making processes that drove the licensing and design of the current fleet of commercial nuclear power reactors. The implications of these newer decision-making processes for NGSAC requirements are discussed, and resulting top-level goals for the NGSAC are formulated. From these goals, the general architectural and system requirements for the NGSAC are derived.

  19. Elements of algebraic coding systems

    CERN Document Server

    Cardoso da Rocha, Jr, Valdemar

    2014-01-01

    Elements of Algebraic Coding Systems is an introductory text to algebraic coding theory. In the first chapter, you'll gain inside knowledge of coding fundamentals, which is essential for a deeper understanding of state-of-the-art coding systems. This book is a quick reference for those who are unfamiliar with this topic, as well as for use with specific applications such as cryptography and communication. Linear error-correcting block codes through elementary principles span eleven chapters of the text. Cyclic codes, some finite field algebra, Goppa codes, algebraic decoding algorithms, and applications in public-key cryptography and secret-key cryptography are discussed, including problems and solutions at the end of each chapter. Three appendices cover the Gilbert bound and some related derivations, a derivation of the MacWilliams identities based on the probability of undetected error, and two important tools for algebraic decoding, namely the finite field Fourier transform and the Euclidean algorithm f...
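
    In the spirit of the book's early chapters on linear block codes, here is a worked example: systematic encoding and syndrome decoding of the (7,4) Hamming code over GF(2), using the standard textbook matrices.

```python
# Worked example of a linear error-correcting block code: the (7,4)
# Hamming code over GF(2), with systematic encoding and syndrome decoding.
import numpy as np

G = np.array([[1,0,0,0,1,1,0],      # generator matrix, systematic form
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],      # parity-check matrix, G @ H.T = 0
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

msg = np.array([1, 0, 1, 1])
code = msg @ G % 2                  # encode: 4 data bits -> 7-bit codeword

received = code.copy()
received[2] ^= 1                    # channel flips one bit
syndrome = H @ received % 2         # nonzero syndrome = column of H at
err_pos = [tuple(col) for col in H.T.tolist()].index(tuple(syndrome))
received[err_pos] ^= 1              # correct the located error
assert (received == code).all()
```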

  20. CONDOR: a database resource of developmentally associated conserved non-coding elements

    Directory of Open Access Journals (Sweden)

    Smith Sarah

    2007-08-01

    Background: Comparative genomics is currently one of the most popular approaches to study the regulatory architecture of vertebrate genomes. Fish-mammal genomic comparisons have proved powerful in identifying conserved non-coding elements likely to be distal cis-regulatory modules such as enhancers, silencers or insulators that control the expression of genes involved in the regulation of early development. The scientific community is showing increasing interest in characterizing the function, evolution and language of these sequences. Despite this, there remains little in the way of user-friendly access to a large dataset of such elements in conjunction with the analysis and the visualization tools needed to study them. Description: Here we present CONDOR (COnserved Non-coDing Orthologous Regions), available at: http://condor.fugu.biology.qmul.ac.uk. In an interactive and intuitive way the website displays data on > 6800 non-coding elements associated with over 120 early developmental genes and conserved across vertebrates. The database regularly incorporates results of ongoing in vivo zebrafish enhancer assays of the CNEs carried out in-house, which currently number ~100. Included and highlighted within this set are elements derived from duplication events both at the origin of vertebrates and more recently in the teleost lineage, thus providing valuable data for studying the divergence of regulatory roles between paralogs. CONDOR therefore provides a number of tools and facilities to allow scientists to progress in their own studies on the function and evolution of developmental cis-regulation. Conclusion: By providing access to data with an approachable graphics interface, the CONDOR database presents a rich resource for further studies into the regulation and evolution of genes involved in early development.

  1. NASA Lewis Steady-State Heat Pipe Code Architecture

    Science.gov (United States)

    Mi, Ye; Tower, Leonard K.

    2013-01-01

    NASA Glenn Research Center (GRC) has developed the LERCHP code. The PC-based LERCHP code can be used to predict the steady-state performance of heat pipes, including the determination of operating temperature and operating limits which might be encountered under specified conditions. The code contains a vapor flow algorithm which incorporates vapor compressibility and axially varying heat input. For the liquid flow in the wick, Darcy's formula is employed. Thermal boundary conditions and geometric structures can be defined through an interactive input interface. A variety of fluid and material options as well as user-defined options can be chosen for the working fluid, wick, and pipe materials. This report documents the current effort at GRC to update the LERCHP code for operating in a Microsoft Windows (Microsoft Corporation) environment. A detailed analysis of the model is presented. The programming architecture for the numerical calculations is explained and flowcharts of the key subroutines are given.
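
    For reference, Darcy's formula for the liquid pressure drop in the wick takes the following standard heat-pipe form (notation assumed here, not copied from the report):

```latex
\Delta P_{\ell} \;=\; \frac{\mu_{\ell}\, L_{\mathrm{eff}}\, \dot{m}}{\rho_{\ell}\, K\, A_{w}}
```

    where \mu_\ell and \rho_\ell are the liquid viscosity and density, K the wick permeability, A_w the wick cross-sectional area, L_eff the effective pipe length, and \dot{m} the mass flow rate; in standard heat-pipe theory, an operating limit is reached when this viscous loss plus the vapor pressure drop exceeds the available capillary head.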

  2. Neural Elements for Predictive Coding

    Directory of Open Access Journals (Sweden)

    Stewart SHIPP

    2016-11-01

    Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backwards in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many 'illusory' instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forwards and backwards pathways should be completely separate, given their functional distinction; this aspect of circuitry – that neurons with extrinsically bifurcating axons do not project in both directions – has only recently been confirmed. Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic 'canonical microcircuit' and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made

  3. Neural Elements for Predictive Coding.

    Science.gov (United States)

    Shipp, Stewart

    2016-01-01

    Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backward in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many 'illusory' instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forward and backward pathways should be completely separate, given their functional distinction; this aspect of circuitry - that neurons with extrinsically bifurcating axons do not project in both directions - has only recently been confirmed. Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic 'canonical microcircuit' and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made possible by transgenic neural
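
    The hierarchical exchange described in both versions of this record reduces, in its simplest form, to an iterative loop in which descending predictions are subtracted from ascending signals. The toy linear model below (invented parameters, not drawn from the reviewed literature) makes that loop explicit:

```python
# Minimal sketch of the prediction / prediction-error exchange: a higher
# level sends a prediction down, receives the error back, and nudges its
# representation to reduce that error. Toy linear model, assumed rates.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))          # generative weights: cause -> data
cause = np.zeros(2)                  # higher-level representation
data = np.array([1.0, 0.5, -0.3, 0.2])

for _ in range(200):
    prediction = W @ cause           # backward (descending) prediction
    error = data - prediction        # forward (ascending) prediction error
    cause += 0.1 * (W.T @ error)     # update representation to reduce error

# Residual error after inference: the component outside the model's span.
print(np.round(data - W @ cause, 3))
```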

  4. Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique

    Science.gov (United States)

    Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi

    Reducing the power dissipation of LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) an intermediate message compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques enables the decoder to reduce power dissipation while maintaining the decoding throughput. Simulation results show that the proposed architecture improves the power efficiency by up to 52% and 18% compared to decoders based on the overlapped schedule and the rapid convergence schedule without the proposed techniques, respectively.
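
    A widely used form of intermediate-message compression in min-sum LDPC decoders stores, per check node, only the two smallest input magnitudes, the position of the smallest, and the input signs, since every outgoing message can be reconstructed from that tuple. The sketch below illustrates this general idea; the paper's exact compression scheme may differ.

```python
# Compressed check-node state for min-sum LDPC decoding: the d outgoing
# messages are fully determined by (min1, min2, argmin, signs), so only
# that tuple needs to be stored. General illustration, not the paper's
# exact scheme.

def compress(inputs):
    mags = [abs(v) for v in inputs]
    i1 = min(range(len(mags)), key=mags.__getitem__)   # position of min1
    min1 = mags[i1]
    min2 = min(m for j, m in enumerate(mags) if j != i1)
    signs = [1 if v >= 0 else -1 for v in inputs]
    return min1, min2, i1, signs

def decompress(state, k):
    """Reconstruct the check-to-variable message for edge k."""
    min1, min2, i1, signs = state
    prod_sign = 1
    for j, s in enumerate(signs):
        if j != k:                  # product of all signs except edge k
            prod_sign *= s
    return prod_sign * (min2 if k == i1 else min1)

msgs = [0.9, -0.2, 1.5, -0.7]
state = compress(msgs)
print([decompress(state, k) for k in range(4)])
```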

  5. Spectral-Element Seismic Wave Propagation Codes for both Forward Modeling in Complex Media and Adjoint Tomography

    Science.gov (United States)

    Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.

    2015-12-01

    We present both SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, representing high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale applications. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.

  6. INGEN: a general-purpose mesh generator for finite element codes

    International Nuclear Information System (INIS)

    Cook, W.A.

    1979-05-01

    INGEN is a general-purpose mesh generator for two- and three-dimensional finite element codes. The basic parts of the code are surface and three-dimensional region generators that use linear-blending interpolation formulas. These generators are based on an i, j, k index scheme that is used to number nodal points, construct elements, and develop displacement and traction boundary conditions. This code can generate truss elements (2 nodal points); plane stress, plane strain, and axisymmetric two-dimensional continuum elements (4 to 8 nodal points); plate elements (4 to 8 nodal points); and three-dimensional continuum elements (8 to 21 nodal points). The traction loads generated are consistent with the elements generated. The expansion-contraction option is of special interest. This option makes it possible to change an existing mesh such that some regions are refined and others are made coarser than the original mesh. 9 figures
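
    The linear-blending interpolation that drives such generators is the classic transfinite (Coons-patch) formula: interior nodes are blended from the four boundary curves of a patch. The sketch below is the textbook formula, not INGEN source code.

```python
# Linear-blending (transfinite / Coons-patch) interpolation: an interior
# point of a surface patch is blended from its four boundary curves.
# Textbook formula for illustration, not code from INGEN.
import numpy as np

def coons_patch(c_bottom, c_top, c_left, c_right, u, v):
    """Boundary curves are functions of one parameter in [0, 1]."""
    lin_uv = (1 - v) * c_bottom(u) + v * c_top(u)        # blend in v
    lin_vu = (1 - u) * c_left(v) + u * c_right(v)        # blend in u
    corners = ((1 - u) * (1 - v) * c_bottom(0) + u * (1 - v) * c_bottom(1)
               + (1 - u) * v * c_top(0) + u * v * c_top(1))
    return lin_uv + lin_vu - corners      # subtract doubly counted bilinear

# Unit square with a curved top edge; evaluate one interior mesh node.
bottom = lambda u: np.array([u, 0.0])
top    = lambda u: np.array([u, 1.0 + 0.2 * np.sin(np.pi * u)])
left   = lambda v: np.array([0.0, v])
right  = lambda v: np.array([1.0, v])
print(coons_patch(bottom, top, left, right, u=0.5, v=0.5))
```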

  7. Reply to "Comments on Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes"

    Directory of Open Access Journals (Sweden)

    Rovini Massimo

    2009-01-01

    This is a reply to the comments by Gunnam et al., "Comments on 'Techniques and architectures for hazard-free semi-parallel decoding of LDPC codes'", EURASIP Journal on Embedded Systems, vol. 2009, Article ID 704174, on our recent work "Techniques and architectures for hazard-free semi-parallel decoding of LDPC codes", EURASIP Journal on Embedded Systems, vol. 2009, Article ID 723465.

  8. Porting plasma physics simulation codes to modern computing architectures using the libmrc framework

    Science.gov (United States)

    Germaschewski, Kai; Abbott, Stephen

    2015-11-01

    Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both using more cores and having more parallelism in each core, e.g. in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source libmrc framework that has been used to modularize and port three plasma physics codes: the extended MHD code MRCv3 with implicit time integration and curvilinear grids; the OpenGGCM global magnetosphere model; and the particle-in-cell code PSC. libmrc consolidates basic functionality needed for simulations based on structured grids (I/O, load balancing, time integrators), and also introduces a parallel object model that makes it possible to maintain multiple implementations of computational kernels, on e.g. conventional processors and GPUs. It handles data layout conversions and enables us to port performance-critical parts of a code to a new architecture step-by-step, while the rest of the code can remain unchanged. We will show examples of the performance gains and some physics applications.

  9. A finite element code for electric motor design

    Science.gov (United States)

    Campbell, C. Warren

    1994-01-01

    FEMOT is a finite element program for solving the nonlinear magnetostatic problem. This version uses nonlinear, Newton first order elements. The code can be used for electric motor design and analysis. FEMOT can be embedded within an optimization code that will vary nodal coordinates to optimize the motor design. The output from FEMOT can be used to determine motor back EMF, torque, cogging, and magnet saturation. It will run on a PC and will be available to anyone who wants to use it.

  10. Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes

    Directory of Open Access Journals (Sweden)

    Rovini Massimo

    2009-01-01

    The layered decoding algorithm has recently been proposed as an efficient means for the decoding of low-density parity-check (LDPC) codes, thanks to the remarkable improvement in the convergence speed (2x) of the decoding process. However, pipelined semi-parallel decoders suffer from violations or "hazards" between consecutive updates, which not only violate the layered principle but also enforce the loops in the code, thus spoiling the error correction performance. This paper describes three different techniques to properly reschedule the decoding updates, based on the careful insertion of "idle" cycles, to prevent the hazards of the pipeline mechanism. Also, different semi-parallel architectures of a layered LDPC decoder suitable for use with such techniques are analyzed. Then, taking the LDPC codes for the wireless local area network (IEEE 802.11n) as a case study, a detailed analysis of the performance attained with the proposed techniques and architectures is reported, and results of the logic synthesis on a 65 nm low-power CMOS technology are shown.

  11. Architecture proposal for the use of QR code in supply chain management

    Directory of Open Access Journals (Sweden)

    Dalton Matsuo Tavares

    2012-01-01

    Supply chain traceability and visibility are key concerns for many companies. Radio-Frequency Identification (RFID) is an enabling technology that allows identification of objects in a fully automated manner via radio waves. Nevertheless, this technology has limited acceptance and high costs. This paper presents a research effort undertaken to design a track-and-trace solution for supply chains, using the quick response code (or QR Code for short) as a less complex and cost-effective alternative to RFID in supply chain management (SCM). A first architecture proposal using open source software is presented as a proof of concept. The system architecture is presented in order to achieve tag generation, image acquisition and pre-processing, product inventory and tracking. A prototype system for tag identification is developed and discussed at the end of the paper to demonstrate its feasibility.
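
    The tag-generation stage of such an architecture is straightforward to prototype with open source software. The sketch below uses the Python qrcode package as an assumed stand-in (the paper's own software stack is not named in this abstract) to encode a hypothetical product-tracking record:

```python
# Proof-of-concept tag generation for a QR-code-based tracking system.
# The "qrcode" package and the record fields are illustrative assumptions,
# not the paper's implementation.
import json
import qrcode

record = {"sku": "PRODUCT-001", "lot": "L2012-03", "station": "WAREHOUSE-2"}
img = qrcode.make(json.dumps(record))    # encode the tracking record
img.save("tag_PRODUCT-001.png")          # printable tag for the product
```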

  12. Periodic Boundary Conditions in the ALEGRA Finite Element Code

    International Nuclear Information System (INIS)

    Aidun, John B.; Robinson, Allen C.; Weatherby, Joe R.

    1999-01-01

    This document describes the implementation of periodic boundary conditions in the ALEGRA finite element code. ALEGRA is an arbitrary Lagrangian-Eulerian multi-physics code with both explicit and implicit numerical algorithms. The periodic boundary implementation requires a consistent set of boundary input sets which are used to describe virtual periodic regions. The implementation is noninvasive to the majority of the ALEGRA coding and is based on the distributed memory parallel framework in ALEGRA. The technique involves extending the ghost element concept for interprocessor boundary communications in ALEGRA to additionally support on- and off-processor periodic boundary communications. The user interface, algorithmic details and sample computations are given.

  13. Analysis of central enterprise architecture elements in models of six eHealth projects.

    Science.gov (United States)

    Virkanen, Hannu; Mykkänen, Juha

    2014-01-01

    Large-scale initiatives for eHealth services have been established in many countries on regional or national level. The use of Enterprise Architecture has been suggested as a methodology to govern and support the initiation, specification and implementation of large-scale initiatives including the governance of business changes as well as information technology. This study reports an analysis of six health IT projects in relation to Enterprise Architecture elements, focusing on central EA elements and viewpoints in different projects.

  14. Propel: A Discontinuous-Galerkin Finite Element Code for Solving the Reacting Navier-Stokes Equations

    Science.gov (United States)

    Johnson, Ryan; Kercher, Andrew; Schwer, Douglas; Corrigan, Andrew; Kailasanath, Kazhikathra

    2017-11-01

    This presentation focuses on the development of a Discontinuous Galerkin (DG) method for application to chemically reacting flows. The in-house code, called Propel, was developed by the Laboratory of Computational Physics and Fluid Dynamics at the Naval Research Laboratory. It was designed specifically for developing advanced multi-dimensional algorithms to run efficiently on new and innovative architectures such as GPUs. For these results, Propel solves for convection and diffusion simultaneously with detailed transport and thermodynamics. Chemistry is currently solved in a time-split approach using Strang-splitting with finite element DG time integration of chemical source terms. Results presented here show canonical unsteady reacting flow cases, such as co-flow and splitter plate, and we report performance for higher order DG on CPU and GPUs.
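
    The Strang-splitting pattern mentioned above advances the stiff chemistry for half a step, the transport terms for a full step, then the chemistry for another half step, which is second-order accurate in the step size. The fragment below illustrates the pattern with stand-in operators (exact linear decay and first-order upwind advection), not Propel's solvers:

```python
# Strang operator splitting: half step of the stiff source, full step of
# transport, half step of the source again. Stand-in operators only.
import numpy as np

def chemistry(y, dt, k=5.0):
    return y * np.exp(-k * dt)                     # exact solve of y' = -k*y

def transport(y, dt, c=1.0, dx=0.1):
    return y - c * dt / dx * (y - np.roll(y, 1))   # first-order upwind step

def strang_step(y, dt):
    y = chemistry(y, dt / 2)                       # half step: chemistry
    y = transport(y, dt)                           # full step: transport
    return chemistry(y, dt / 2)                    # half step: chemistry

y = np.exp(-((np.arange(64) * 0.1 - 3.2) ** 2))    # initial Gaussian pulse
for _ in range(10):                                # CFL = c*dt/dx = 0.5
    y = strang_step(y, dt=0.05)
```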

  15. The effect of traditional architecture elements on architectural and planning forming to develop and raise the efficiency of using traditional energy (case study: Crater/Aden, Yemen)

    International Nuclear Information System (INIS)

    Ghanem, Wadee Ahmed

    2006-01-01

    This paper discusses the role of architecture in Crater, the city centre of Aden, Republic of Yemen, which has a historical traditional architecture that is a unique example, with many elements that make the buildings of this city effective helpers in conserving traditional energy sources. This architecture is notably characterized by courtyards and high ceilings used for suitable air circulation, and the main building materials used are local and environmental, such as stone, wood and limestone (pumice). The research aims at studying and analyzing the planning and architectural characteristics of this city through some examples of its buildings, to recognize the role of traditional building in saving traditional energy, by studying the building materials, ventilation systems, orientation and openings, so that these elements can be used to raise the efficiency of using traditional energy sources. The research arrives at several results, such as: 1. Urban planning side: a. elements of urban planning, represented in masses and openings, and their environmental role; b. the method of forming the urban plan; c. the sequence in the arrangement of urban planning elements. 2. Architectural side: a. the ratio between solid and void; b. opening shapes; c. internal courtyards; d. unique architectural elements (mashrabiyas (oriels), sky lines, opening coverings, etc.); e. building materials used; f. building construction methods; g. kinds of walls. (Author)

  16. Recent progress of an integrated implosion code and modeling of element physics

    International Nuclear Information System (INIS)

    Nagatomo, H.; Takabe, H.; Mima, K.; Ohnishi, N.; Sunahara, A.; Takeda, T.; Nishihara, K.; Nishiguchu, A.; Sawada, K.

    2001-01-01

    The physics of inertial fusion is based on a variety of elements such as compressible hydrodynamics, radiation transport, non-ideal equations of state, non-LTE atomic processes, and relativistic laser-plasma interaction. In addition, the implosion process is not a stationary state, and fluid dynamics, energy transport and instabilities should be solved simultaneously. In order to study such complex physics, an integrated implosion code including all physics important in the implosion process should be developed. The details of the physics elements should be studied and the resulting numerical models should be installed in the integrated code so that the implosion can be simulated with available computers within realistic CPU time. This task can therefore be separated into two parts. One is to integrate all physics elements into a code, which is strongly related to the development of the hydrodynamic equation solver. We have developed a 2-D integrated implosion code which solves mass, momentum, electron energy, ion energy, equations of state, laser ray-tracing, laser absorption, radiation, surface tracing and so on. Reasonable results in simulating Rayleigh-Taylor instability and cylindrical implosions have been obtained using this code. The other part is code development on each element of physics and verification of these codes. We have made progress in developing a nonlocal electron transport code and 2- and 3-dimensional radiation hydrodynamics codes. (author)

  17. Kine-Mould : Manufacturing technology for curved architectural elements in concrete

    NARCIS (Netherlands)

    Schipper, H.R.; Eigenraam, P.; Grünewald, S.; Soru, M.; Nap, P.; Van Overveld, B.; Vermeulen, J.

    2015-01-01

    The production of architectural elements with complex geometry is challenging for concrete manufacturers. Computer-numerically-controlled (CNC) milled foam moulds have been applied frequently in the last decades, resulting in good aesthetic performance. However, the costs are still high and a

  18. Architecture and program structures for a special purpose finite element computer

    Energy Technology Data Exchange (ETDEWEB)

    Norrie, D.H.; Norrie, C.W.

    1983-01-01

    The development of very large scale integration (VLSI) has made special-purpose computers economically possible. With such a machine, the loss of flexibility compared with a general-purpose computer can be offset by the increased speed which can be obtained by tailoring the architecture to the particular problem or class of problem. The first kind of special-purpose machine has its architecture modelled on the physical structure of the problem and the second kind has its design tailored to the computational algorithm used. The parallel finite element machine (PARFEM) being designed at the University of Calgary for the solution of finite element problems is of the second kind. Its conceptual design is described and progress to date outlined. 14 references.

  19. FINELM: a multigroup finite element diffusion code. Part I

    International Nuclear Information System (INIS)

    Davierwalla, D.M.

    1980-12-01

    The author presents a two-dimensional code for multigroup diffusion using the finite element method. It was realized that the extensive connectivity, which contributes significantly to the accuracy, results in a matrix which, although symmetric and positive definite, is wide-band and possesses an irregular profile. Hence, it was decided to introduce sparsity techniques into the code. The introduction of the R-Z geometry led to a great deal of changes in the code, since the rotational invariance of the removal matrices in X-Y geometry did not carry over to R-Z geometry. Rectangular elements were introduced to remedy the inability of the triangles to model essentially one-dimensional problems such as slab geometry. The matter is discussed briefly in the text in the section on benchmark problems. This report is restricted to the general theory of the triangular elements and to the sparsity techniques, viz. incomplete dissections. The latter make the size of the problem that can be handled independent of core memory and dependent only on disc storage capacity, which is virtually unlimited. (Auth.)

  20. FINELM: a multigroup finite element diffusion code. Part II

    International Nuclear Information System (INIS)

    Davierwalla, D.M.

    1981-05-01

    The author presents the axisymmetric case in cylindrical coordinates for the finite element multigroup neutron diffusion code, FINELM. The numerical acceleration schemes incorporated viz. the Lebedev extrapolations and the coarse mesh rebalancing, space collapsing, are discussed. A few benchmark computations are presented as validation of the code. (Auth.)

  1. A self-organized internal models architecture for coding sensory-motor schemes

    Directory of Open Access Journals (Sweden)

    Esaú eEscobar Juárez

    2016-04-01

    Cognitive robotics research draws inspiration from theories and models on cognition, as conceived by neuroscience or cognitive psychology, to investigate biologically plausible computational models in artificial agents. In this field, the theoretical framework of Grounded Cognition provides epistemological and methodological grounds for the computational modeling of cognition. It has been stressed in the literature that simulation, prediction, and multi-modal integration are key aspects of cognition and that computational architectures capable of putting them into play in a biologically plausible way are a necessity. Research in this direction has brought extensive empirical evidence suggesting that Internal Models are suitable mechanisms for sensory-motor integration. However, current Internal Models architectures show several drawbacks, mainly due to the lack of a unified substrate allowing for a true sensory-motor integration space, enabling flexible and scalable ways to model cognition under the embodiment hypothesis constraints. We propose the Self-Organized Internal Models Architecture (SOIMA), a computational cognitive architecture coded by means of a network of self-organized maps, implementing coupled internal models that allow modeling multi-modal sensory-motor schemes. Our approach integrally addresses the issues of current implementations of Internal Models. We discuss the design and features of the architecture, and provide empirical results on a humanoid robot that demonstrate the benefits and potentialities of the SOIMA concept for studying cognition in artificial agents.
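
    The self-organized maps underlying SOIMA follow the classic Kohonen learning rule: the best-matching unit and its grid neighbors are pulled toward each input. The fragment below is a generic SOM update with illustrative parameters, not the SOIMA implementation:

```python
# Generic self-organized map (Kohonen) update: find the best-matching
# unit, then move it and its grid neighbors toward the input. Parameters
# are illustrative; not the SOIMA implementation.
import numpy as np

rng = np.random.default_rng(1)
weights = rng.random((10, 10, 2))            # 10x10 map of 2-D prototypes
grid = np.stack(np.meshgrid(np.arange(10), np.arange(10),
                            indexing="ij"), axis=-1)

def som_update(x, weights, lr=0.2, sigma=1.5):
    dist = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(dist.argmin(), dist.shape)   # best-matching unit
    g = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=2)
               / (2 * sigma ** 2))                      # neighborhood kernel
    weights += lr * g[..., None] * (x - weights)        # pull toward input
    return weights

for x in rng.random((500, 2)):               # train on random 2-D inputs
    weights = som_update(x, weights)
```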

  2. Archaeometric characterization and provenance determination of sculptures and architectural elements from Gerasa, Jordan

    Science.gov (United States)

    Al-Bashaireh, Khaled

    2018-02-01

    This study aims at the identification of the provenance of white marble sculptures and architectural elements uncovered from the archaeological site of Gerasa and neighboring areas, north Jordan. Most of the marbles are probably of the Roman or Byzantine periods. Optical microscopy, X-ray diffraction, and mass spectrometry were used to investigate petrographic, mineralogical and isotopic characteristics of the samples, respectively. Analytical results were compared with the main reference databases of known Mediterranean marble quarries exploited in antiquity. The collected data show that the most likely main sources of the sculptures were the Greek marble quarries of Paros-2 (Lakkoi), Penteli (Mount Pentelikon, Attica), and Thasos-3 (Thasos Island, Cape Vathy, Aliki); the Asia Minor marble quarries of Proconessus-1 (Marmara) and Aphrodisias (Aphrodisias); and the Italian quarry of Carrara (Apuan Alps). Similarly, the Asia Minor quarries of the fine-grained Docimium (Afyon) and the coarse-grained Proconessus-1 (Marmara) and Thasos-3 are the most likely sources of the architectural elements. The results agree with published data on the wide use of these marbles for sculpture and architectural elements.

  3. Modeling Architectural Patterns Using Architectural Primitives

    NARCIS (Netherlands)

    Zdun, Uwe; Avgeriou, Paris

    2005-01-01

    Architectural patterns are a key point in architectural documentation. Regrettably, there is poor support for modeling architectural patterns, because the pattern elements are not directly matched by elements in modeling languages, and, at the same time, patterns support an inherent variability that

  4. Methodology for bus layout for topological quantum error correcting codes

    Energy Technology Data Exchange (ETDEWEB)

    Wosnitzka, Martin; Pedrocchi, Fabio L.; DiVincenzo, David P. [RWTH Aachen University, JARA Institute for Quantum Information, Aachen (Germany)

    2016-12-15

    Most quantum computing architectures can be realized as two-dimensional lattices of qubits that interact with each other. We take transmon qubits and transmission line resonators as promising candidates for qubits and couplers; we use them as basic building elements of a quantum code. We then propose a simple framework to determine the optimal experimental layout to realize quantum codes. We show that this engineering optimization problem can be reduced to the solution of standard binary linear programs. While solving such programs is an NP-hard problem, we propose a way to find scalable optimal architectures that require solving the linear program for a restricted number of qubits and couplers. We apply our methods to two celebrated quantum codes, namely the surface code and the Fibonacci code. (orig.)

  5. The architecture of cartilage: Elemental maps and scanning transmission ion microscopy/tomography

    International Nuclear Information System (INIS)

    Reinert, Tilo; Reibetanz, Uta; Schwertner, Michael; Vogt, Juergen; Butz, Tilman; Sakellariou, Arthur

    2002-01-01

    Articular cartilage is not just a jelly-like cover of the bone within the joints but a highly sophisticated architecture of hydrated macromolecules, collagen fibrils and cartilage cells. Influences on the physiological balance due to age-related or pathological changes can lead to malfunction and subsequently to degradation of the cartilage. Many activities in cartilage research are dealing with the architecture of joint cartilage but have limited access to elemental distributions. Nuclear microscopy is able to yield spatially resolved elemental concentrations, provides density information and can visualise the arrangement of the collagen fibres. The distribution of the cartilage matrix can be deduced from the elemental and density maps. The findings showed a varying content of collagen and proteoglycan between zones of different cell maturation. Zones of higher collagen content are characterised by aligned collagen fibres that can form tubular structures. Recently we focused on STIM tomography to investigate the three dimensional arrangement of the collagen structures

  6. Architecture for the Elderly and Frail People, Well-Being Elements Realizations and Outcomes

    DEFF Research Database (Denmark)

    Knudstrup, Mary-Ann

    2011-01-01

    The relationship between architecture, housing and well-being of elderly and frail people is a topic of growing interest to consultants and political decision makers working on welfare solutions for elderly citizens. The objective of the research presented here is to highlight which well-being elements in nursing home environments contribute to enhancing the well-being of the elderly, and how these elements are ensured attention during the decision-making process related to the design and establishment of nursing homes. On the basis of four representative Danish case studies, various case data from the decision-making process are collected, covering the planning, the design and the realization of four newly built nursing homes in Denmark. The case studies clearly show that the architectural well-being elements appear weak in the decision-making process when they are conflicting......

  7. Deployment of the OSIRIS EM-PIC code on the Intel Knights Landing architecture

    Science.gov (United States)

    Fonseca, Ricardo

    2017-10-01

    Electromagnetic particle-in-cell (EM-PIC) codes such as OSIRIS have found widespread use in modelling the highly nonlinear and kinetic processes that occur in several relevant plasma physics scenarios, ranging from astrophysical settings to high-intensity laser-plasma interaction. Being computationally intensive, these codes require large-scale HPC systems and a continuous effort in adapting the algorithm to new hardware and computing paradigms. In this work, we report on our efforts in deploying the OSIRIS code on the new Intel Knights Landing (KNL) architecture. Unlike the previous generation (Knights Corner), these boards are standalone systems and introduce several new features, including the new AVX-512 instructions and on-package MCDRAM. We will focus on the parallelization and vectorization strategies followed, as well as memory management, and present a detailed evaluation of code performance in comparison with the CPU code. This work was partially supported by Fundação para a Ciência e Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014.

  8. Neural codes of seeing architectural styles.

    Science.gov (United States)

    Choo, Heeyoung; Nasar, Jack L; Nikrahei, Bardia; Walther, Dirk B

    2017-01-10

    Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people's visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding suggests that the neural correlates of the visual perception of architectural styles stem from style-specific complex visual structure beyond the simple features computed in V1. Surprisingly, the network of brain regions representing architectural styles included the fusiform face area (FFA) in addition to several scene-selective regions. Hierarchical clustering of error patterns further revealed that the FFA participated to a much larger extent in the neural encoding of architectural styles than entry-level scene categories. We conclude that the FFA is involved in fine-grained neural encoding of scenes at a subordinate-level, in our case, architectural styles of buildings. This study for the first time shows how the human visual system encodes visual aspects of architecture, one of the predominant and longest-lasting artefacts of human culture.

  9. CONDOR: neutronic code for fuel elements calculation with rods

    International Nuclear Information System (INIS)

    Villarino, E.A.

    1990-01-01

    The CONDOR neutronic code is used for the calculation of fuel elements formed by fuel rods. The method employed to obtain the neutronic flux is that of collision probabilities in a multigroup scheme in two-dimensional geometry. This code utilizes new calculation algorithms and normalizations of such collision probabilities. Burn-up calculations can be made, with the alternative of applying variational methods for response flux calculations or those corresponding to collision normalization. (Author) [es]

  10. Determination of Major and Minor Elements in the Code River Sediments

    International Nuclear Information System (INIS)

    Sri Murniasih; Sukirno; Bambang Irianto

    2007-01-01

    Analysis of major and minor elements in the Code River sediments has been carried out. The aim of this research is to determine the concentration of major and minor elements in the Code River sediments from upstream to downstream. The instrument used was X-ray fluorescence with a Si(Li) detector. The results show that the major elements were Fe (1.66 ± 0.1% - 4.20 ± 0.7%) and Ca (4.43 ± 0.6% - 9.08 ± 1.3%), while the minor elements were Ba (178.791 ± 21.1 ppm - 616.56 ± 59.4 ppm), Sr (148.22 ± 21.9 ppm - 410.25 ± 30.5 ppm) and Zr (9.71 ± 1.1 ppm - 22.11 ± 3.4 ppm). The ANAVA method (confidence level α = 0.05) was used for the statistical test. It showed that there was a significant influence of sampling location on the concentration of major and minor elements in the sediment samples. (author)

  11. TACO: a finite element heat transfer code

    International Nuclear Information System (INIS)

    Mason, W.E. Jr.

    1980-02-01

    TACO is a two-dimensional implicit finite element code for heat transfer analysis. It can perform both linear and nonlinear analyses and can be used to solve either transient or steady-state problems. Either plane or axisymmetric geometries can be analyzed. TACO can handle time- or temperature-dependent material properties, and materials may be either isotropic or orthotropic. A variety of time- and temperature-dependent loadings and boundary conditions are available, including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additionally, TACO has some specialized features such as internal surface conditions (e.g., contact resistance), bulk nodes, enclosure radiation with view factor calculations, and chemically reactive kinetics. A user subprogram feature allows for any type of functional representation of any independent variable. A bandwidth and profile minimization option is also available in the code. Graphical representation of data generated by TACO is provided by a companion post-processor named POSTACO. The theory on which TACO is based is outlined, the capabilities of the code are explained, and the input data required to perform an analysis with TACO are described. Some simple examples are provided to illustrate the use of the code.
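
    TACO itself is a two-dimensional finite element code; as a much-reduced illustration of the implicit transient solve it performs, here is a one-dimensional backward-Euler conduction step on a uniform grid with fixed-temperature ends. All parameters are arbitrary and the scheme is a generic stand-in, not TACO's formulation.

```python
# One backward-Euler step of 1D heat conduction: solve (I + r*L) T_new = T_old,
# where r = alpha*dt/dx^2 and L is the discrete Laplacian (Dirichlet ends).
import numpy as np

def implicit_step(T, alpha, dx, dt):
    n = len(T)
    r = alpha * dt / dx**2
    A = np.zeros((n, n))
    np.fill_diagonal(A, 1 + 2 * r)
    A[np.arange(n - 1), np.arange(1, n)] = -r   # upper diagonal
    A[np.arange(1, n), np.arange(n - 1)] = -r   # lower diagonal
    A[0, :] = 0.0;  A[0, 0] = 1.0               # hold boundary temperatures
    A[-1, :] = 0.0; A[-1, -1] = 1.0
    return np.linalg.solve(A, T)

T = np.linspace(300.0, 400.0, 21)   # initial temperature profile [K]
for _ in range(100):                # implicit marching is stable for large dt
    T = implicit_step(T, alpha=1e-5, dx=0.01, dt=1.0)
```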

  12. Software Abstractions and Methodologies for HPC Simulation Codes on Future Architectures

    Directory of Open Access Journals (Sweden)

    Anshu Dubey

    2014-07-01

    Full Text Available Simulations with multi-physics modeling have become crucial to many science and engineering fields, and multi-physics capable scientific software is as important to these fields as instruments and facilities are to experimental sciences. The current generation of mature multi-physics codes would have sustainably served their target communities with a modest amount of ongoing investment in enhancing capabilities. However, the revolution occurring in hardware architecture has made it necessary to tackle parallelism and performance management in these codes at multiple levels. The requirements of the various levels are often at cross-purposes with one another, and therefore hugely complicate the software design. All of these considerations make it essential to approach this challenge cooperatively as a community. We conducted a series of workshops under an NSF-SI2 conceptualization grant to get input from various stakeholders and to identify broad approaches that might lead to a solution. In this position paper we detail the major concerns articulated by the application code developers and the emerging trends in the utilization of programming abstractions that we found through these workshops.

  13. Modeling of PHWR fuel elements using FUDA code

    International Nuclear Information System (INIS)

    Tripathi, Rahul Mani; Soni, Rakesh; Prasad, P.N.; Pandarinathan, P.R.

    2008-01-01

    The computer code FUDA (Fuel Design Analysis) is used to model the PHWR fuel bundle operation history and to carry out fuel element thermo-mechanical analysis. The radial temperature profile across the fuel and sheath, fission gas release, internal gas pressure, and sheath stresses and strains during the life of the fuel bundle are estimated.

  14. Mechanical modelling of PCI with FRAGEMA and CEA finite element codes

    International Nuclear Information System (INIS)

    Joseph, J.; Bernard, Ph.; Atabek, R.; Chantant, M.

    1983-01-01

    In the framework of their common program, CEA and FRAGEMA have undertaken the mechanical modelling of PCI. In the first step, two different codes, TITUS and VERDON, have been tested by FRAGEMA and CEA respectively. Whereas both codes use a finite element method to describe the thermomechanical behaviour of a fuel element, their input models differ: to take into account the presence of cracks in UO2, an axisymmetric two-dimensional mesh pattern and the Drucker-Prager criterion are used in VERDON, and a 3D equivalent method in TITUS. Two rods have been studied with these two methods: PRISCA 04bis and PRISCA 104, which were ramped in SILOE. The results show that the stresses and strains are the same with the two codes. These methods are further applied to the complete series of rods in the common ramp test program of FRAGEMA and CEA. (author)

  15. Three-dimensional modeling with finite element codes

    Energy Technology Data Exchange (ETDEWEB)

    Druce, R.L.

    1986-01-17

    This paper describes work done to model magnetostatic field problems in three dimensions. Finite element codes, available at LLNL, and pre- and post-processors were used in the solution of the mathematical model, the output from which agreed well with the experimentally obtained data. The geometry used in this work was a cylinder with ports in the periphery and no current sources in the space modeled. 6 refs., 8 figs.

  16. FPGA-Based Channel Coding Architectures for 5G Wireless Using High-Level Synthesis

    Directory of Open Access Journals (Sweden)

    Swapnil Mhaske

    2017-01-01

    Full Text Available We propose strategies to achieve a high-throughput FPGA architecture for quasi-cyclic low-density parity-check codes based on circulant-1 identity matrix construction. By splitting the node processing operation in the min-sum approximation algorithm, we achieve pipelining in the layered decoding schedule without utilizing additional hardware resources. High-level synthesis compilation is used to design and develop the architecture on the FPGA hardware platform. To validate this architecture, an IEEE 802.11n compliant 608 Mb/s decoder is implemented on the Xilinx Kintex-7 FPGA using the LabVIEW FPGA Compiler in the LabVIEW Communication System Design Suite. Architecture scalability was leveraged to accomplish a 2.48 Gb/s decoder on a single Xilinx Kintex-7 FPGA. Further, we present rapidly prototyped experimentation of an IEEE 802.16 compliant hybrid automatic repeat request system based on the efficient decoder architecture developed. In spite of the mixed nature of data processing—digital signal processing and finite-state machines—LabVIEW FPGA Compiler significantly reduced time to explore the system parameter space and to optimize in terms of error performance and resource utilization. A 4x improvement in the system throughput, relative to a CPU-based implementation, was achieved to measure the error-rate performance of the system over large, realistic data sets using accelerated, in-hardware simulation.

  17. Infill architecture: Design approaches for in-between buildings and 'bond' as integrative element

    Directory of Open Access Journals (Sweden)

    Alfirević Đorđe

    2015-01-01

    Full Text Available The aim of the paper is to draw attention to the view that the two key elements in achieving good quality of architectural infill in an immediate, existing setting are the selection of an optimal creative method of infill architecture and the adequate application of 'the bond' as an integrative element. The success and quality of architectural infill mainly depend on the assessment of various circumstances, but also on the professionalism, creativity, sensibility and, finally, innovativeness of the architect. In order for the infill procedure to be carried out adequately, it is necessary to assess the quality of the current surroundings into which the object will be integrated, and then to choose the creative approach that will allow the object to establish an optimal dialogue with its surroundings. On a wider scale, both theory and practice differentiate three main creative approaches to infill objects: (a) the mimetic approach (mimesis), (b) the associative approach and (c) the contrasting approach. Which of the stated approaches is chosen depends primarily on whether the existing physical structure into which the object is being infilled is 'distinct', 'specific' or 'indistinct', but it also depends on the inclination of the designer. 'The bond' is a term which in architecture denotes an element or zone of one object, but in some instances it can refer to a whole object articulated in a specific way, with the aim of resolving the visual conflict that often arises when existing objects clash with a newly designed or reconstructed object. This paper provides an in-depth analysis of different types of bonds, such as 'direction as bond', 'cornice as bond', 'structure as bond', 'texture as bond' and 'material as bond', which indicate the complexity and multiple layers of the design process of object interpolation.

  18. A code for obtaining temperature distribution by finite element method

    International Nuclear Information System (INIS)

    Bloch, M.

    1984-01-01

    The ELEFIB computer code, written in Fortran, uses the finite element method to calculate the temperature distribution of linear, two-dimensional problems, in the steady state or in the transient phase of heat transfer. The formulation of the equations uses the Galerkin method. Some examples are shown and the results are compared with those of other papers. The comparative evaluation shows that the code gives good values. (M.C.K.) [pt

  19. Connecting Architecture and Implementation

    Science.gov (United States)

    Buchgeher, Georg; Weinreich, Rainer

    Software architectures are still typically defined and described independently from implementation. To avoid architectural erosion and drift, architectural representation needs to be continuously updated and synchronized with system implementation. Existing approaches for architecture representation like informal architecture documentation, UML diagrams, and Architecture Description Languages (ADLs) provide only limited support for connecting architecture descriptions and implementations. Architecture management tools like Lattix, SonarJ, and Sotoarc and UML-tools tackle this problem by extracting architecture information directly from code. This approach works for low-level architectural abstractions like classes and interfaces in object-oriented systems but fails to support architectural abstractions not found in programming languages. In this paper we present an approach for linking and continuously synchronizing a formalized architecture representation to an implementation. The approach is a synthesis of functionality provided by code-centric architecture management and UML tools and higher-level architecture analysis approaches like ADLs.

  20. Cognitive Architectures for Multimedia Learning

    Science.gov (United States)

    Reed, Stephen K.

    2006-01-01

    This article provides a tutorial overview of cognitive architectures that can form a theoretical foundation for designing multimedia instruction. Cognitive architectures include a description of memory stores, memory codes, and cognitive operations. Architectures that are relevant to multimedia learning include Paivio's dual coding theory,…

  1. Porting the 3D Gyrokinetic Particle-in-cell Code GTC to the CRAY/NEC SX-6 Vector Architecture: Perspectives and Challenges

    International Nuclear Information System (INIS)

    Ethier, S.; Lin, Z.

    2003-01-01

    Several years of optimization for super-scalar architectures have made it more difficult to port the current version of the 3D particle-in-cell code GTC to the CRAY/NEC SX-6 vector architecture. This paper describes the initial work that has been done to port this code to the SX-6 computer and to optimize the most time-consuming parts. Early performance results are shown and compared to the same tests done on the IBM SP Power 3 and Power 4 machines

  2. Modeling approach for annular-fuel elements using the ASSERT-PV subchannel code

    International Nuclear Information System (INIS)

    Dominguez, A.N.; Rao, Y.

    2012-01-01

    The internally and externally cooled annular fuel (hereafter called annular fuel) is under consideration at Atomic Energy of Canada Limited (AECL) for a new high burn-up fuel bundle design for its current and its Generation IV reactors. An assessment of different options to model a bundle fuelled with annular fuel elements is presented. Two options are discussed: 1) modifying the subchannel code ASSERT-PV to handle multiple types of elements in the same bundle, and 2) coupling ASSERT-PV with an external application. Based on this assessment, the selected option is to couple ASSERT-PV with the thermalhydraulic system code CATHENA. (author)

  3. WARP3D-Release 10.8: Dynamic Nonlinear Analysis of Solids using a Preconditioned Conjugate Gradient Software Architecture

    Science.gov (United States)

    Koppenhoefer, Kyle C.; Gullerud, Arne S.; Ruggieri, Claudio; Dodds, Robert H., Jr.; Healy, Brian E.

    1998-01-01

    This report describes theoretical background material and commands necessary to use the WARP3D finite element code. WARP3D is under continuing development as a research code for the solution of very large-scale, 3-D solid models subjected to static and dynamic loads. Specific features in the code oriented toward the investigation of ductile fracture in metals include a robust finite strain formulation, a general J-integral computation facility (with inertia, face loading), an element extinction facility to model crack growth, nonlinear material models including viscoplastic effects, and the Gurson-Tvergaard dilatant plasticity model for void growth. The nonlinear, dynamic equilibrium equations are solved using an incremental-iterative, implicit formulation with full Newton iterations to eliminate residual nodal forces. The history integration of the nonlinear equations of motion is accomplished with Newmark's beta method. A central feature of WARP3D involves the use of a linear-preconditioned conjugate gradient (LPCG) solver implemented in an element-by-element format to replace a conventional direct linear equation solver. This software architecture dramatically reduces both the memory requirements and CPU time for very large, nonlinear solid models since formation of the assembled (dynamic) stiffness matrix is avoided. Analyses thus exhibit the numerical stability for large time (load) steps provided by the implicit formulation coupled with the low memory requirements characteristic of an explicit code. In addition to the much lower memory requirements of the LPCG solver, the CPU time required for solution of the linear equations during each Newton iteration is generally one-half or less of the CPU time required for a traditional direct solver. All other computational aspects of the code (element stiffnesses, element strains, stress updating, element internal forces) are implemented in the element-by-element, blocked architecture. This greatly improves

  4. FLASH: A finite element computer code for variably saturated flow

    International Nuclear Information System (INIS)

    Baca, R.G.; Magnuson, S.O.

    1992-05-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLASH computer code, is designed to simulate two-dimensional fluid flow in fractured-porous media. The code is specifically designed to model variably saturated flow in an arid site vadose zone and saturated flow in an unconfined aquifer. In addition, the code also has the capability to simulate heat conduction in the vadose zone. This report presents the following: description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for the general use of the code. The FLASH computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by the US Department of Energy Order 5820.2A

  5. Evaluation of computational endomicroscopy architectures for minimally-invasive optical biopsy

    Science.gov (United States)

    Dumas, John P.; Lodhi, Muhammad A.; Bajwa, Waheed U.; Pierce, Mark C.

    2017-02-01

    We are investigating compressive sensing architectures for applications in endomicroscopy, where the narrow diameter probes required for tissue access can limit the achievable spatial resolution. We hypothesize that the compressive sensing framework can be used to overcome the fundamental pixel number limitation in fiber-bundle based endomicroscopy by reconstructing images with more resolvable points than fibers in the bundle. An experimental test platform was assembled to evaluate and compare two candidate architectures, based on introducing a coded amplitude mask at either a conjugate image or Fourier plane within the optical system. The benchtop platform consists of a common illumination and object path followed by separate imaging arms for each compressive architecture. The imaging arms contain a digital micromirror device (DMD) as a reprogrammable mask, with a CCD camera for image acquisition. One arm has the DMD positioned at a conjugate image plane ("IP arm"), while the other arm has the DMD positioned at a Fourier plane ("FP arm"). Lenses were selected and positioned within each arm to achieve an element-to-pixel ratio of 16 (230,400 mask elements mapped onto 14,400 camera pixels). We discuss our mathematical model for each system arm and outline the importance of accounting for system non-idealities. Reconstruction of a 1951 USAF resolution target using optimization-based compressive sensing algorithms produced images with higher spatial resolution than bicubic interpolation for both system arms when system non-idealities are included in the model. Furthermore, images generated with image plane coding appear to exhibit higher spatial resolution, but more noise, than images acquired through Fourier plane coding.
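
    The abstract does not name the reconstruction algorithm beyond "optimization-based compressive sensing algorithms"; as one concrete stand-in, the sketch below recovers a sparse image x from coded measurements y = Ax with ISTA (iterative shrinkage-thresholding), a common baseline for this class of problems. Matrix sizes are scaled down from the paper's 230,400-element / 14,400-pixel system.

```python
# ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1, where A models the
# coded-mask measurement and y holds the camera pixels (toy dimensions).
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((144, 2304))       # 100x-scaled-down measurement matrix
x_true = np.zeros(A.shape[1])
x_true[rng.choice(A.shape[1], 20)] = 1.0   # sparse ground truth
x_hat = ista(A, A @ x_true)
```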

  6. Semantic enrichment of medical forms - semi-automated coding of ODM-elements via web services.

    Science.gov (United States)

    Breil, Bernhard; Watermann, Andreas; Haas, Peter; Dziuballe, Philipp; Dugas, Martin

    2012-01-01

    Semantic interoperability is an unsolved problem which occurs when working with medical forms from different information systems or institutions. Standards like ODM or CDA assure structural homogenization, but in order to compare elements from different data models it is necessary to use semantic concepts and codes at the item level of those structures. We developed and implemented a web-based tool which enables a domain expert to perform semi-automated coding of ODM files. For each item it is possible to query web services which return unique concept codes without leaving the context of the document. Although it was not feasible to perform totally automated coding, we have implemented a dialog-based method to perform efficient coding of all data elements in the context of the whole document. The proportion of codable items was comparable to results from previous studies.
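
    The paper's web services and their interfaces are not specified in the abstract; the sketch below shows only the generic pattern of such an item-level code lookup. The endpoint URL, parameters and response fields are hypothetical stand-ins, not the tool's actual API.

```python
# Generic pattern for querying a terminology service for concept codes
# matching one form item. Endpoint and schema are invented for illustration.
import requests

def suggest_codes(item_label: str, system: str = "UMLS") -> list:
    resp = requests.get(
        "https://terminology.example.org/search",   # hypothetical endpoint
        params={"term": item_label, "system": system},
        timeout=10,
    )
    resp.raise_for_status()
    # hypothetical response shape: {"candidates": [{"code": ..., "label": ...}]}
    return resp.json()["candidates"]

# In a dialog-based workflow, a domain expert would confirm one candidate
# per ODM item rather than accepting the top hit automatically.
```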

  7. Improvement of implicit finite element code performance in deep drawing simulations by dynamics contributions

    NARCIS (Netherlands)

    Meinders, Vincent T.; van den Boogaard, Antonius H.; Huetink, Han

    2003-01-01

    To intensify the use of implicit finite element codes for solving large-scale problems, the computation time of these codes has to be decreased drastically. A method is developed which decreases the computational time of implicit codes by factors. The method is based on introducing inertia effects into the implicit finite element code.

  8. A framework for developing finite element codes for multi-disciplinary applications.

    OpenAIRE

    Dadvand, Pooyan

    2007-01-01

    The world of computing simulation has experienced great progresses in recent years and requires more exigent multidisciplinary challenges to satisfy the new upcoming demands. Increasing the importance of solving multi-disciplinary problems makes developers put more attention to these problems and deal with difficulties involved in developing software in this area. Conventional finite element codes have several difficulties in dealing with multi-disciplinary problems. Many of these codes are d...

  9. Performance Analysis of an Astrophysical Simulation Code on the Intel Xeon Phi Architecture

    OpenAIRE

    Noormofidi, Vahid; Atlas, Susan R.; Duan, Huaiyu

    2015-01-01

    We have developed the astrophysical simulation code XFLAT to study neutrino oscillations in supernovae. XFLAT is designed to utilize multiple levels of parallelism through MPI, OpenMP, and SIMD instructions (vectorization). It can run on both CPU and Xeon Phi co-processors based on the Intel Many Integrated Core Architecture (MIC). We analyze the performance of XFLAT on configurations with CPU only, Xeon Phi only and both CPU and Xeon Phi. We also investigate the impact of I/O and the multi-n...

  10. Finite element methods in a simulation code for offshore wind turbines

    Science.gov (United States)

    Kurz, Wolfgang

    1994-06-01

    Offshore installation of wind turbines will become important for electricity supply in the future. Wind conditions at sea are more favorable than on land, and appropriate locations on land are limited and restricted. The dynamic behavior of advanced wind turbines is investigated with digital simulations to reduce time and cost in the development and design phase. A wind turbine can be described and simulated as a multi-body system containing rigid and flexible bodies. Simulation of the non-linear motion of such a mechanical system using a multi-body system code is much faster than using a finite element code. However, a modal representation of the deformation field has to be incorporated in the multi-body system approach. The equations of motion of flexible bodies due to deformation are generated by finite element calculations. At Delft University of Technology the simulation code DUWECS has been developed, which simulates the non-linear behavior of wind turbines in the time domain. The wind turbine is divided into subcomponents which are represented by modules (e.g. rotor, tower, etc.).

  11. Relational Architecture

    DEFF Research Database (Denmark)

    Reeh, Henrik

    2018-01-01

    ... a human and institutional development going on since around 1990, when the present PhD institution was first implemented in Denmark. To be sure, the model is centred around the PhD dissertation (element #1). But it involves four more components: the PhD candidate (element #2), his or her supervisor in a scholarly institution (element #3), as well as the certified PhD scholar (element #4) and the architectural profession, notably its labour market (element #5). This first layer outlines the contemporary context which allows architectural research to take place in a dynamic relationship to doctoral education. A second layer concerns ... interrelated fields in which history, place, and sound come to emphasize architecture's relational qualities rather than the apparent three-dimensional solidity of constructed space. A third layer of relational architecture is at stake in the professional experiences after the defence of the authors...

  12. FEAST: a two-dimensional non-linear finite element code for calculating stresses

    International Nuclear Information System (INIS)

    Tayal, M.

    1986-06-01

    The computer code FEAST calculates stresses, strains, and displacements. The code is two-dimensional. That is, either plane or axisymmetric calculations can be done. The code models elastic, plastic, creep, and thermal strains and stresses. Cracking can also be simulated. The finite element method is used to solve equations describing the following fundamental laws of mechanics: equilibrium; compatibility; constitutive relations; yield criterion; and flow rule. FEAST combines several unique features that permit large time-steps in even severely non-linear situations. The features include a special formulation for permitting many finite elements to simultaneously cross the boundary from elastic to plastic behaviour; accommodation of large drops in yield strength due to changes in local temperature; and a three-step predictor-corrector method for plastic analyses. These features reduce computing costs. Comparisons against twenty analytical solutions and against experimental measurements show that predictions of FEAST are generally accurate to ± 5%

  13. Neural codes of seeing architectural styles

    OpenAIRE

    Choo, Heeyoung; Nasar, Jack L.; Nikrahei, Bardia; Walther, Dirk B.

    2017-01-01

    Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people's visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding sugges...

  14. Low-Level Space Optimization of an AES Implementation for a Bit-Serial Fully Pipelined Architecture

    Science.gov (United States)

    Weber, Raphael; Rettberg, Achim

    A previously developed AES (Advanced Encryption Standard) implementation is optimized and described in this paper. The special architecture for which this implementation is targeted comprises synchronous and systematic bit-serial processing without a central controlling instance. In order to shrink the design in terms of logic utilization we deeply analyzed the architecture and the AES implementation to identify the most costly logic elements. We propose to merge certain parts of the logic to achieve better area efficiency. The approach was integrated into an existing synthesis tool which we used to produce synthesizable VHDL code. For testing purposes, we simulated the generated VHDL code and ran tests on an FPGA board.

  15. Photo-Modeling and Cloud Computing. Applications in the Survey of Late Gothic Architectural Elements

    Science.gov (United States)

    Casu, P.; Pisu, C.

    2013-02-01

    This work proposes the application of the latest photo-modeling methods to the study of Gothic architecture in Sardinia. The aim is to assess the versatility and ease of use of such documentation tools for studying architecture and its ornamental details. The paper illustrates a procedure of integrated survey and restitution, with the purpose of obtaining accurate 3D models of some Gothic portals. We combined the contact survey with a photographic survey oriented towards photo-modelling. The software used is 123D Catch by Autodesk, an Image Based Modelling (IBM) system available for free. It is a web-based application that requires a few simple steps to produce a mesh from a set of unoriented photos. We tested the application on four portals, working at different scales of detail: at first the whole portal and then the different architectural elements that compose it. We were able to model all the elements and to quickly extract simple sections, in order to make a comparison between the mouldings, highlighting similarities and differences. Working on different sites at different scales of detail allowed us to test the procedure under different conditions of exposure, sunshine, accessibility, degradation of surface and type of material, and with different equipment and operators, showing whether the final result could be affected by these factors. We tested a procedure, articulated in a few repeatable steps, that can be applied, with the right corrections and adaptations, to similar cases and/or larger or smaller elements.

  16. Development of three-dimensional transport code by the double finite element method

    International Nuclear Information System (INIS)

    Fujimura, Toichiro

    1985-01-01

    Development of a three-dimensional neutron transport code by the double finite element method is described. Both the Galerkin and variational methods are adopted to solve the problem, and their characteristics are compared. Computational results of the collocation method, developed as a technique for the variational one, are illustrated in comparison with those of an S_n code. (author)

  17. The DANTE Boltzmann transport solver: An unstructured mesh, 3-D, spherical harmonics algorithm compatible with parallel computer architectures

    International Nuclear Information System (INIS)

    McGhee, J.M.; Roberts, R.M.; Morel, J.E.

    1997-01-01

    A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic, transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion based preconditioner for scattering dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated
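
    DANTE's production preconditioner is diffusion-based; as a generic illustration of the preconditioned conjugate gradient pattern named as its core solver, here is a Jacobi-preconditioned CG on a small symmetric positive-definite system (our sketch, not DANTE's implementation).

```python
# Jacobi-preconditioned conjugate gradient for A x = b, A symmetric
# positive-definite. The preconditioner is the inverse of diag(A).
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=500):
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x                     # residual
    z = M_inv * r                     # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # update search direction
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
x = pcg(A, np.array([1.0, 2.0]))      # small SPD example
```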

  18. Analysis of piping systems by finite element method using code SAP-IV

    International Nuclear Information System (INIS)

    Cizelj, L.; Ogrizek, D.

    1987-01-01

    Due to the extensive and multiple use of the computer code SAP-IV, we have decided to install it on a VAX 11/750 machine. Installation required a large amount of programming due to great discrepancies between the CDC machine (the original program version) and the VAX. Testing was performed mainly in the field of pipe elements, based on a comparison between results obtained with the codes PSAFE2, DOCIJEV, PIPESD and SAP-V. Besides, a model of a reactor pressure vessel with 3-D thick shell elements was built. The capabilities show good agreement with the results of the other programs mentioned above. Along with the package installation, graphical postprocessors are being developed for mesh plotting. (author)

  19. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    Science.gov (United States)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assert their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.

  20. Software architecture evolution

    DEFF Research Database (Denmark)

    Barais, Olivier; Le Meur, Anne-Francoise; Duchien, Laurence

    2008-01-01

    Software architectures must frequently evolve to cope with changing requirements, and this evolution often implies integrating new concerns. Unfortunately, when the new concerns are crosscutting, existing architecture description languages provide little or no support for this kind of evolution. The software architect must modify multiple elements of the architecture manually, which risks introducing inconsistencies. This chapter provides an overview, comparison and detailed treatment of the various state-of-the-art approaches to describing and evolving software architectures. Furthermore, we discuss one particular framework named TranSAT, which addresses the above problems of software architecture evolution. TranSAT provides a new element in the software architecture description language, called an architectural aspect, for describing new concerns and their integration into an existing

  1. Tri-Lab Co-Design Milestone: In-Depth Performance Portability Analysis of Improved Integrated Codes on Advanced Architecture.

    Energy Technology Data Exchange (ETDEWEB)

    Hoekstra, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Simon David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Richards, David [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bergen, Ben [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-01

    This milestone is a tri-lab deliverable supporting ongoing Co-Design efforts impacting applications in the Integrated Codes (IC) and Advanced Technology Development and Mitigation (ATDM) program elements. In FY14, the tri-labs looked at porting proxy applications to technologies of interest for ATS procurements. In FY15, a milestone was completed evaluating proxy applications in multiple programming models, and in FY16, a milestone was completed focusing on the migration of lessons learned back into production code development. This year, the co-design milestone focuses on extracting the knowledge gained and/or code revisions back into production applications.

  2. Coded Network Function Virtualization

    DEFF Research Database (Denmark)

    Al-Shuwaili, A.; Simone, O.; Kliewer, J.

    2016-01-01

    Network function virtualization (NFV) prescribes the instantiation of network functions on general-purpose network devices, such as servers and switches. While yielding a more flexible and cost-effective network architecture, NFV is potentially limited by the fact that commercial off-the-shelf hardware is less reliable than the dedicated network elements used in conventional cellular deployments. The typical solution for this problem is to duplicate network functions across geographically distributed hardware in order to ensure diversity. In contrast, this letter proposes to leverage channel coding in order to enhance the robustness of NFV to hardware failure. The proposed approach targets the network function of uplink channel decoding, and builds on the algebraic structure of the encoded data frames in order to perform in-network coding on the signals to be processed at different servers...

  3. Implementation of Layered Decoding Architecture for LDPC Code using Layered Min-Sum Algorithm

    Directory of Open Access Journals (Sweden)

    Sandeep Kakde

    2017-12-01

    Full Text Available For the binary field and long code lengths, Low Density Parity Check (LDPC) codes approach Shannon-limit performance. LDPC codes provide remarkable error correction performance and therefore enlarge the design space for communication systems. In this paper, we have compared different digital modulation techniques and found that BPSK performs better than the other techniques in terms of BER. We also give the error performance of an LDPC decoder over an AWGN channel using the Min-Sum algorithm. A VLSI architecture is proposed which uses the value re-use property of the min-sum algorithm and gives high throughput. The proposed work has been implemented and tested on a Xilinx Virtex 5 FPGA. The MATLAB result of the LDPC decoder gives a bit error rate in the range of 10^-1 to 10^-3.5 at SNR = 1 to 2 for 20 iterations, so it shows good bit error rate performance. The latency of the parallel design of the LDPC decoder has also been reduced. The design achieves a maximum frequency of 141.22 MHz and a throughput of 2.02 Gbps while consuming less area.
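
    The value re-use property exploited by such architectures is visible in the min-sum check-node update: every outgoing message needs only the two smallest incoming magnitudes and the overall sign, so hardware stores just those three values per check node. A scalar sketch of the update (ours, not the paper's HDL):

```python
# Min-sum check-node update: for each edge, outgoing magnitude is the
# minimum of the OTHER incoming magnitudes; outgoing sign is the product
# of the OTHER incoming signs. Only min1, min2 and the total sign are needed.
import numpy as np

def check_node_update(llrs):
    """llrs: incoming log-likelihood ratios on the edges of one check node."""
    signs = np.sign(llrs)
    mags = np.abs(llrs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]   # two smallest magnitudes
    total_sign = np.prod(signs)
    # the edge carrying min1 receives min2; every other edge receives min1
    out = np.where(np.arange(len(llrs)) == order[0], min2, min1)
    # total_sign * signs removes each edge's own sign contribution
    return total_sign * signs * out

print(check_node_update(np.array([-1.5, 0.8, 2.3])))  # [ 0.8 -1.5 -0.8]
```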

  4. Axisym finite element code: modifications for pellet-cladding mechanical interaction analysis

    International Nuclear Information System (INIS)

    Engelman, G.P.

    1978-10-01

    Local strain concentrations in nuclear fuel rods are known to be potential sites for failure initiation. Assessment of such strain concentrations requires a two-dimensional analysis of stress and strain in both the fuel and the cladding during pellet-cladding mechanical interaction. To provide such a capability in the FRAP (Fuel Rod Analysis Program) codes, the AXISYM code (a small finite element program developed at the Idaho National Engineering Laboratory) was modified to perform a detailed fuel rod deformation analysis. This report describes the modifications which were made to the AXISYM code to adapt it for fuel rod analysis and presents comparisons made between the two-dimensional AXISYM code and the FRACAS-II code. FRACAS-II is the one-dimensional (generalized plane strain) fuel rod mechanical deformation subcode used in the FRAP codes. Predictions of these two codes should be comparable away from the fuel pellet free ends if the state of deformation at the pellet midplane is near that of generalized plane strain. The excellent agreement obtained in these comparisons checks both the correctness of the AXISYM code modifications as well as the validity of the assumption of generalized plane strain upon which the FRACAS-II subcode is based

  5. The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding.

    Science.gov (United States)

    Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco

    2017-01-01

    The recent "deep learning revolution" in artificial neural networks had strong impact and widespread deployment for engineering applications, but the use of deep learning for neurocomputational modeling has been so far limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems.

  6. STAT, GAPS, STRAIN, DRWDIM: a system of computer codes for analyzing HTGR fuel test element metrology data. User's manual

    Energy Technology Data Exchange (ETDEWEB)

    Saurwein, J.J.

    1977-08-01

    A system of computer codes has been developed to statistically reduce Peach Bottom fuel test element metrology data and to compare the material strains and fuel rod-fuel hole gaps computed from these data with HTGR design code predictions. The codes included in this system are STAT, STRAIN, GAPS, and DRWDIM. STAT statistically evaluates test element metrology data yielding fuel rod, fuel body, and sleeve irradiation-induced strains; fuel rod anisotropy; and additional data characterizing each analyzed fuel element. STRAIN compares test element fuel rod and fuel body irradiation-induced strains computed from metrology data with the corresponding design code predictions. GAPS compares test element fuel rod, fuel hole heat transfer gaps computed from metrology data with the corresponding design code predictions. DRWDIM plots the measured and predicted gaps and strains. Although specifically developed to expedite the analysis of Peach Bottom fuel test elements, this system can be applied, without extensive modification, to the analysis of Fort St. Vrain or other HTGR-type fuel test elements.

  7. Implementation of collisions on GPU architecture in the Vorpal code

    Science.gov (United States)

    Leddy, Jarrod; Averkin, Sergey; Cowan, Ben; Sides, Scott; Werner, Greg; Cary, John

    2017-10-01

    The Vorpal code contains a variety of collision operators allowing for the simulation of plasmas containing multiple charge species interacting with neutrals, background gas, and EM fields. These existing algorithms have been improved and reimplemented to take advantage of the massive parallelization allowed by GPU architecture. The use of GPUs is most effective when algorithms are single-instruction multiple-data, so particle collisions are an ideal candidate for this parallelization technique due to their nature as a series of independent processes with the same underlying operation. This refactoring required data memory reorganization and careful consideration of device/host data allocation to minimize memory access and data communication per operation. Successful implementation has resulted in an order of magnitude increase in simulation speed for a test-case involving multiple binary collisions using the null collision method. Work supported by DARPA under contract W31P4Q-16-C-0009.
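
    The null collision method named in the test case is what makes collisions SIMD-friendly: padding the true collision frequency up to a constant maximum lets every particle draw its collision decision through the same instructions. A toy NumPy sketch of the accept/reject step follows; the post-collision velocity update is a placeholder, not Vorpal's operator.

```python
# Null-collision method: all particles use the constant ceiling nu_max to
# draw candidate collisions; a second draw keeps only the "real" fraction
# nu(v)/nu_max, so the per-particle control flow is identical (SIMD-friendly).
import numpy as np

rng = np.random.default_rng(2)

def collide(velocities, nu, nu_max, dt):
    """nu(v): real collision frequency; nu_max: constant upper bound on nu."""
    p_coll = 1.0 - np.exp(-nu_max * dt)             # same for every particle
    candidates = rng.random(len(velocities)) < p_coll
    real = rng.random(len(velocities)) < nu(velocities) / nu_max
    hit = candidates & real                         # the rest are null collisions
    velocities[hit] = rng.standard_normal(hit.sum())  # toy post-collision kick
    return velocities

v = rng.standard_normal(100_000)
v = collide(v, nu=lambda v: 0.5 * np.abs(v), nu_max=5.0, dt=0.01)
```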

  8. Finite Element Analysis of Film Stack Architecture for Complementary Metal-Oxide-Semiconductor Image Sensors.

    Science.gov (United States)

    Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang

    2017-05-02

    Image sensors are the core components of computer, communication, and consumer electronic products. Complementary metal oxide semiconductor (CMOS) image sensors have become the mainstay of image-sensing developments, but are prone to leakage current. In this study, we simulate the CMOS image sensor (CIS) film stacking process by finite element analysis. To elucidate the relationship between the leakage current and stack architecture, we compare the simulated and measured leakage currents in the elements. Based on the analysis results, we further improve the performance by optimizing the architecture of the film stacks or changing the thin-film material. The material parameters are then corrected to improve the accuracy of the simulation results. The simulated and experimental results confirm a positive correlation between measured leakage current and stress. This trend is attributed to the structural defects induced by high stress, which generate leakage. Using this relationship, we can change the structure of the thin-film stack to reduce the leakage current and thereby improve the component life and reliability of the CIS components.

  9. Software architecture 2

    CERN Document Server

    Oussalah, Mourad Chabanne

    2014-01-01

    Over the past 20 years, software architectures have significantly contributed to the development of complex and distributed systems. Nowadays, it is recognized that one of the critical problems in the design and development of any complex software system is its architecture, i.e. the organization of its architectural elements. Software Architecture presents the software architecture paradigms based on objects, components, services and models, as well as the various architectural techniques and methods, the analysis of architectural qualities, models of representation of architectural templa

  10. Software architecture 1

    CERN Document Server

    Oussalah , Mourad Chabane

    2014-01-01

    Over the past 20 years, software architectures have significantly contributed to the development of complex and distributed systems. Nowadays, it is recognized that one of the critical problems in the design and development of any complex software system is its architecture, i.e. the organization of its architectural elements. Software Architecture presents the software architecture paradigms based on objects, components, services and models, as well as the various architectural techniques and methods, the analysis of architectural qualities, models of representation of architectural template

  11. SAFE: A computer code for the steady-state and transient thermal analysis of LMR fuel elements

    International Nuclear Information System (INIS)

    Hayes, S.L.

    1993-12-01

    SAFE is a computer code developed for both the steady-state and transient thermal analysis of single LMR fuel elements. The code employs a two-dimensional control-volume based finite difference methodology with fully implicit time marching to calculate the temperatures throughout a fuel element and its associated coolant channel for both the steady-state and transient events. The code makes no structural calculations or predictions whatsoever. It does, however, accept as input structural parameters within the fuel such as the distributions of porosity and fuel composition, as well as heat generation, to allow a thermal analysis to be performed on a user-specified fuel structure. The code was developed with ease of use in mind. An interactive input file generator and material property correlations internal to the code are available to expedite analyses using SAFE. This report serves as a complete design description of the code as well as a user's manual. A sample calculation made with SAFE is included to highlight some of the code's features. Complete input and output files for the sample problem are provided

  12. Eigensolution of finite element problems in a completely connected parallel architecture

    Science.gov (United States)

    Akl, Fred A.; Morel, Michael R.

    1989-01-01

    A parallel algorithm for the solution of the generalized eigenproblem in linear elastic finite element analysis, [K][Φ] = [M][Φ][Ω], where [K] and [M] are of order N and [Ω] is of order q, is presented. The parallel algorithm is based on a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm has been successfully implemented on a tightly coupled multiple-instruction-multiple-data (MIMD) parallel processing computer, Cray X-MP. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor, or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macro-tasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts and the dimension of the subspace on the performance of the algorithm are investigated. For a 64-element rectangular plate, speed-ups of 1.86, 3.13, 3.18 and 3.61 are achieved on two, four, six and eight processors, respectively.
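
    The reported speed-ups translate directly into parallel efficiencies (speed-up divided by processor count), which quantify the growing relative cost of the global fronts between domains:

```python
# Parallel efficiency implied by the abstract's measured speed-ups.
speedups = {2: 1.86, 4: 3.13, 6: 3.18, 8: 3.61}
for p, s in speedups.items():
    print(f"{p} processors: speedup {s:.2f}, efficiency {s / p:.0%}")
# 2 -> 93%, 4 -> 78%, 6 -> 53%, 8 -> 45%: efficiency drops as the work
# on the global fronts grows relative to the per-domain work.
```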

  13. Architectural slicing

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2013-01-01

    Architectural prototyping is a widely used practice, concerned with taking architectural decisions through experiments with lightweight implementations. However, many architectural decisions are only taken when systems are already (partially) implemented. This is problematic in the context of architectural prototyping since experiments with full systems are complex and expensive, and thus architectural learning is hindered. In this paper, we propose a novel technique for harvesting architectural prototypes from existing systems, "architectural slicing", based on dynamic program slicing. Given a system and a slicing criterion, architectural slicing produces an architectural prototype that contains the elements in the architecture that are dependent on the elements in the slicing criterion. Furthermore, we present an initial design and implementation of an architectural slicer for Java.

  14. A comparison of two three-dimensional shell-element transient electromagnetics codes

    International Nuclear Information System (INIS)

    Yugo, J.J.; Williamson, D.E.

    1992-01-01

    Electromagnetic forces due to eddy currents strongly influence the design of components for the next generation of fusion devices. An effort has been made to benchmark two computer programs used to generate transient electromagnetic loads: SPARK and EddyCuFF. Two simple transient field problems were analyzed, both of which had been previously analyzed by the SPARK code with results recorded in the literature. A third problem that uses an ITER inboard blanket benchmark model was analyzed as well. This problem was driven with a self-consistent, distributed multifilament plasma model generated by an axisymmetric physics code. The benchmark problems showed good agreement between the two shell-element codes. Variations in calculated eddy currents of 1-3% have been found for similar, finely meshed models. A difference of 8% was found in induced current and 20% in force for a coarse mesh and complex, multifilament field driver. Because comparisons were made to results obtained from literature, model preparation and code execution times were not evaluated

  15. Development of a three-dimensional neutron transport code DFEM based on the double finite element method

    International Nuclear Information System (INIS)

    Fujimura, Toichiro

    1996-01-01

    A three-dimensional neutron transport code, DFEM, has been developed using the double finite element method to analyze reactor cores with complex geometry, such as large fast reactors. The solution algorithm is based on the double finite element method, in which both space and angle finite elements are employed. A reactor core system can be divided into triangular and/or quadrangular prism elements, and the spatial distribution of the neutron flux in each element is approximated with linear basis functions. As for the angular variables, various basis functions are applied, and their characteristics were clarified by comparison. In order to enhance the accuracy, a general method is derived to remedy the truncation errors at reflective boundaries, which are inherent in the conventional FEM. An adaptive acceleration method and the source extrapolation method were applied to accelerate the convergence of the iterations. The code structure is outlined and explanations are given on how to prepare input data. A sample input list is shown for reference. The eigenvalues and flux distributions for real-scale fast reactors and the NEA benchmark problems are presented and discussed in comparison with the results of other transport codes. (author)

  16. Industrial applications of N3S finite element code

    International Nuclear Information System (INIS)

    Chabard, J.P.; Pot, G.; Martin, A.

    1993-12-01

    The Research and Development Division of EDF (the French utility) has been working since 1982 on N3S, a 3D finite element code for simulating turbulent incompressible flows (Chabard et al., 1992), which nowadays has many applications dealing with internal flows, thermal hydraulics (Delenne and Pot, 1993) and turbomachinery (Combes and Rieutord, 1992). The size of these applications keeps growing: calculations with up to 350,000 nodes (around 2,000,000 unknowns) are in progress. To achieve such large applications, considerable work has been done on the choice of efficient algorithms and on their implementation in order to reduce CPU time and memory allocation. The paper presents the central algorithm of the code, focusing on time and memory optimization. As an illustration, validation test cases and a recent industrial application are discussed. (authors). 11 figs., 2 tabs., 11 refs

  17. Convenience of Statistical Approach in Studies of Architectural Ornament and Other Decorative Elements Specific Application

    Science.gov (United States)

    Priemetz, O.; Samoilov, K.; Mukasheva, M.

    2017-11-01

    Ornament is a topical phenomenon in the modern theory of architecture and a common element in design and construction practice; it has been an important aspect of form-making for millennia. The description of the methods of its application occupies a large place in studies on the theory and practice of architecture. However, the saturation of compositions with ornamentation and the specificity of its themes and forms have not yet been sufficiently studied. This aspect requires the accumulation of additional knowledge. The application of quantitative methods to the types of plastic solutions and the thematic diversity of facade compositions of buildings constructed in different periods creates another tool for an objective analysis of the development of ornament. It demonstrates the application of this approach to studying the features of architectural development in Kazakhstan from the end of the XIX century to the XXI century.

  18. Calculation of normal tissue complication probability and dose-volume histogram reduction schemes for tissues with a critical element architecture

    International Nuclear Information System (INIS)

    Niemierko, Andrzej; Goitein, Michael

    1991-01-01

    The authors investigate a model of normal tissue complication probability for tissues that may be represented by a critical element architecture. They derive formulas for complication probability that apply both to a partial volume irradiation and to an arbitrary inhomogeneous dose distribution. The dose-volume isoeffect relationship which is a consequence of a critical element architecture is discussed and compared to the empirical power-law relationship. A dose-volume histogram reduction scheme for a 'pure' critical element model is derived. In addition, a point-based algorithm which does not require precomputation of a dose-volume histogram is derived. The existing published dose-volume histogram reduction algorithms are analyzed. The authors show that the existing algorithms, developed empirically without an explicit biophysical model, have a close relationship to the critical element model at low levels of complication probability. However, it is also shown that they have aspects which are not compatible with a critical element model, and the authors propose a modification to one of them to circumvent its restriction to low complication probabilities. (author). 26 refs.; 7 figs.
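
    For orientation, the closed form usually associated with a 'pure' critical element model under an inhomogeneous dose distribution is reproduced below. The notation is ours, not necessarily the authors'.

```latex
% Critical-element NTCP: the organ fails if any of its independent
% subunits fails, so survival probabilities multiply across dose bins.
\[
  \mathrm{NTCP} \;=\; 1 \;-\; \prod_{i} \bigl[\, 1 - P_{1}(D_i) \,\bigr]^{\,\Delta v_i}
\]
% Here $P_{1}(D)$ is the complication probability of the whole organ
% uniformly irradiated to dose $D$, and $\Delta v_i$ is the fractional
% volume of the dose-volume-histogram bin receiving dose $D_i$.
```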

  19. Finite Macro-Element Mesh Deformation in a Structured Multi-Block Navier-Stokes Code

    Science.gov (United States)

    Bartels, Robert E.

    2005-01-01

    A mesh deformation scheme consisting of two steps is developed for a structured multi-block Navier-Stokes code. The first step is a finite element solution of either user-defined or automatically generated macro-elements. Macro-elements are hexahedral finite elements created from a subset of points from the full mesh. When assembled, the finite element system spans the complete flow domain. Macro-element moduli vary according to the distance to the nearest surface, resulting in extremely stiff elements near a moving surface and very pliable elements away from boundaries. Solution of the finite element system for the imposed boundary deflections generally produces smoothly varying nodal deflections. The manner in which the distance to the nearest surface enters the element moduli has been found to critically influence the quality of the element deformation. The second step is a transfinite interpolation which distributes the macro-element nodal deflections to the remaining fluid mesh points. The scheme is demonstrated for several two-dimensional applications.
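
    The second step admits a compact closed form; the sketch below applies linear 2D transfinite interpolation to spread boundary deflections of one structured block into its interior. It is a simplified stand-in for the paper's scheme, with all names ours.

```python
# Linear 2D transfinite interpolation: interior deflections from the four
# boundary edges of a structured block (edges + bilinear corner correction).
import numpy as np

def tfi_2d(edge_b, edge_t, edge_l, edge_r):
    """edge_b/edge_t: shape (ni,), bottom/top; edge_l/edge_r: shape (nj,),
    left/right. Corner values must agree, e.g. edge_b[0] == edge_l[0]."""
    ni, nj = len(edge_b), len(edge_l)
    u = np.linspace(0.0, 1.0, ni)[:, None]   # index direction along i
    v = np.linspace(0.0, 1.0, nj)[None, :]   # index direction along j
    return ((1 - v) * edge_b[:, None] + v * edge_t[:, None]
            + (1 - u) * edge_l[None, :] + u * edge_r[None, :]
            - (1 - u) * (1 - v) * edge_l[0] - u * (1 - v) * edge_r[0]
            - (1 - u) * v * edge_l[-1] - u * v * edge_r[-1])

# Example: zero deflection on the bottom, unit deflection on the top.
d = tfi_2d(np.zeros(11), np.full(11, 1.0),
           np.linspace(0, 1, 6), np.linspace(0, 1, 6))   # shape (11, 6)
```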

  20. Looking back on 10 years of the ATLAS Metadata Interface. Reflections on architecture, code design and development methods

    International Nuclear Information System (INIS)

    Fulachier, J; Albrand, S; Lambert, F; Aidel, O

    2014-01-01

    The 'ATLAS Metadata Interface' framework (AMI) has been developed in the context of ATLAS, one of the largest scientific collaborations. AMI can be considered a mature application, since its basic architecture has been maintained for over 10 years. In this paper we briefly describe the architecture and the main uses of the framework within the experiment (TagCollector for release management and Dataset Discovery). These two applications, which share almost 2000 registered users, are superficially quite different; however, much of the code is shared, and they have been developed and maintained over a decade almost entirely by the same team of three people. We discuss how the architectural principles established at the beginning of the project have allowed us both to integrate new technologies and to respond to the new metadata use cases which inevitably appear over such a time period.

  1. Systemic Architecture

    DEFF Research Database (Denmark)

    Poletto, Marco; Pasquero, Claudia

    -up or tactical design, behavioural space and the boundary of the natural and the artificial realms within the city and architecture. A new kind of "real-time world-city" is illustrated in the form of an operational design manual for the assemblage of proto-architectures, the incubation of proto-gardens...... and the coding of proto-interfaces. These prototypes of machinic architecture materialize as synthetic hybrids embedded with biological life (proto-gardens), computational power, behavioural responsiveness (cyber-gardens), spatial articulation (coMachines and fibrous structures), remote sensing (FUNclouds...

  2. Iraqi architecture in the Mogul period

    Directory of Open Access Journals (Sweden)

    Hasan Shatha

    2018-01-01

    Full Text Available Iraqi architecture has passed through many periods up to the present day, each with its own architectural style. Over time these styles have also interacted, producing distinctive kinds of space-forming, spatial relationships, and architectural elements (detailed treatments). The research problem arises from these many interacting styles, which have blurred the general characteristics by which each style can be distinguished. The research therefore studies the architecture of the period of the Mogul conquest of Baghdad. Its aim is to trace the main characteristics of the architectural style of the Mogul period at the level of form, elements, and treatments. The method is descriptive and analytical, covering the buildings that belong to this period: each building is analyzed in terms of its general form, its architectural elements, and its architectural treatments. Repeating this procedure for every building yields a set of similarities, from which conclusions are drawn about the pure characteristics of the style of the period. The analysis also uncovers dissimilarities among buildings of the period, which lead to conclusions about the interaction among styles in this period. From all of this, the main characteristics of the architectural style of the Mogul conquest in Baghdad can be clearly drawn.

  3. A sliding point contact model for the finite element structures code EURDYN

    International Nuclear Information System (INIS)

    Smith, B.L.

    1986-01-01

    A method is developed by which sliding point contact between two moving deformable structures may be incorporated within a lumped-mass finite element formulation based on displacements. The method relies on a simple mechanical interpretation of the contact constraint in terms of equivalent nodal forces and avoids the use of nodal connectivity via a master-slave arrangement or a pseudo contact element. The methodology has been implemented in the 2D axisymmetric version of the EURDYN finite element program coupled to the hydro code SEURBNUK. Sample calculations are presented illustrating the use of the model in various contact situations. Effects due to separation and impact of structures are also included. (author)
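    The 'equivalent nodal forces' interpretation can be sketched with a penalty formulation: a node penetrating a two-node segment receives a restoring force along the segment normal, reacted at the segment nodes with weights given by the linear shape functions at the contact point. The Python fragment below is only an illustration of that interpretation, not the EURDYN formulation; the penalty stiffness k_pen is an assumed parameter.

      import numpy as np

      def contact_nodal_forces(xs, xa, xb, k_pen=1.0e6):
          """Penalty sketch of sliding point contact in 2-D: node at xs
          against segment (xa, xb).  Returns forces on (node, segment
          node a, segment node b); zero if there is no penetration."""
          seg = (xb - xa).astype(float)
          length = np.linalg.norm(seg)
          t = seg / length                      # unit tangent
          n = np.array([-t[1], t[0]])           # unit normal
          xi = np.dot(xs - xa, t) / length      # local coordinate in [0, 1]
          gap = np.dot(xs - xa, n)              # signed normal distance
          if gap >= 0.0 or not 0.0 <= xi <= 1.0:
              z = np.zeros(2)
              return z, z, z
          f = -k_pen * gap * n                  # push the node back out
          # React the force at the segment nodes via linear shape
          # functions: this is the 'equivalent nodal forces' picture.
          return f, -(1.0 - xi) * f, -xi * f

      fs, fa, fb = contact_nodal_forces(np.array([0.5, -0.01]),
                                        np.array([0.0, 0.0]),
                                        np.array([1.0, 0.0]))
      print(fs, fa, fb)    # the three forces sum to zero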

  4. Parallel eigenanalysis of finite element models in a completely connected architecture

    Science.gov (United States)

    Akl, F. A.; Morel, M. R.

    1989-01-01

    A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis, (K)(phi) = (M)(phi)(omega), where (K) and (M) are of order N, and (omega) is of order q. The concurrent solution of the eigenproblem is based on the multifrontal/modified subspace method and is achieved in a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm was successfully implemented on a tightly coupled multiple-instruction multiple-data parallel processing machine, the Cray X-MP. A finite element model is divided into m domains, each of which is assumed to contain n elements. Each domain is then assigned to a processor, or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macrotasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effects of the number of domains, the number of degrees-of-freedom located along the global fronts, and the dimension of the subspace on the performance of the algorithm are investigated. A parallel finite element dynamic analysis program, p-feda, is documented and the performance of its subroutines in a parallel environment is analyzed.
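    The serial kernel that such schemes parallelize, subspace (inverse) iteration with a Rayleigh-Ritz projection for (K)(phi) = (M)(phi)(omega), can be sketched densely in a few lines of Python; the multifrontal organization and the domain-to-task mapping are not reproduced, and the toy stiffness and mass matrices are assumptions for the example.

      import numpy as np
      from scipy.linalg import eigh, cho_factor, cho_solve

      def subspace_iteration(K, M, q, iters=50):
          """Dense subspace iteration: X <- K^-1 M X, re-orthonormalize,
          then solve the reduced q x q eigenproblem (Rayleigh-Ritz)."""
          n = K.shape[0]
          X = np.random.default_rng(0).standard_normal((n, q))
          c = cho_factor(K)                  # K assumed SPD here
          for _ in range(iters):
              X = cho_solve(c, M @ X)        # inverse-iteration step
              X, _ = np.linalg.qr(X)         # keep the basis well conditioned
              lam, Z = eigh(X.T @ K @ X, X.T @ M @ X)
              X = X @ Z                      # Ritz rotation
          return lam, X

      # 10-DOF toy stiffness/mass; compute the 3 lowest eigenpairs.
      n = 10
      K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      M = np.eye(n)
      lam, phi = subspace_iteration(K, M, q=3)
      print(lam)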

  5. SimTrack: A compact c++ code for particle orbit and spin tracking in accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Yun

    2015-11-21

    SimTrack is a compact C++ code for 6-d symplectic element-by-element particle tracking in accelerators, originally designed for head-on beam–beam compensation simulation studies in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. It provides 6-d symplectic orbit tracking with 4th-order symplectic integration for magnet elements and the 6-d symplectic synchro-beam map for beam–beam interaction. Since its inception in 2009, SimTrack has been used intensively for dynamic aperture calculations with beam–beam interaction for RHIC. Recently, proton spin tracking and electron energy loss due to synchrotron radiation were added. In this paper, I present the code architecture, physics models, and some selected examples of its applications to RHIC and a future electron-ion collider design, eRHIC.
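    The 4th-order symplectic integration used for magnet elements is commonly built by Yoshida's composition of drift-kick steps. The Python sketch below applies the standard coefficients to a linear focusing 'element'; the Hamiltonian and step size are illustrative, not SimTrack's element maps.

      # Yoshida coefficients for a 4th-order symplectic integrator
      # composed of drift-kick steps (standard published values).
      W = 2.0 ** (1.0 / 3.0)
      C = [0.5 / (2 - W), 0.5 * (1 - W) / (2 - W),
           0.5 * (1 - W) / (2 - W), 0.5 / (2 - W)]
      D = [1.0 / (2 - W), -W / (2 - W), 1.0 / (2 - W), 0.0]

      def step(x, p, h, k=1.0):
          """One 4th-order step for H = p^2/2 + k x^2/2; a tracking code
          composes such maps element by element around the ring."""
          for c, d in zip(C, D):
              x += c * h * p          # drift
              p += -d * h * k * x     # kick
          return x, p

      x, p = 1.0, 0.0
      for _ in range(1000):
          x, p = step(x, p, h=0.01)
      print(x, p, 0.5 * (p * p + x * x))   # energy stays near 0.5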

  6. Architecture Governance: The Importance of Architecture Governance for Achieving Operationally Responsive Ground Systems

    Science.gov (United States)

    Kolar, Mike; Estefan, Jeff; Giovannoni, Brian; Barkley, Erik

    2011-01-01

    Topics covered: (1) Why Governance and Why Now? (2) Characteristics of Architecture Governance (3) Strategic Elements (3a) Architectural Principles (3b) Architecture Board (3c) Architecture Compliance (4) Architecture Governance Infusion Process. Governance is concerned with decision making (i.e., setting directions, establishing standards and principles, and prioritizing investments). Architecture governance is the practice and orientation by which enterprise architectures and other architectures are managed and controlled at an enterprise-wide level.

  7. FEHMN 1.0: Finite element heat and mass transfer code

    International Nuclear Information System (INIS)

    Zyvoloski, G.; Dash, Z.; Kelkar, S.

    1991-04-01

    A computer code is described which can simulate non-isothermal multiphase multicomponent flow in porous media. It is applicable to natural-state studies of geothermal systems and ground-water flow. The equations of heat and mass transfer for multiphase flow in porous and permeable media are solved using the finite element method. The permeability and porosity of the medium are allowed to depend on pressure and temperature. The code also has provisions for movable air and water phases and noncoupled tracers; that is, tracer solutions that do not affect the heat and mass transfer solutions. The tracers can be passive or reactive. The code can simulate two-dimensional, two-dimensional radial, or three-dimensional geometries. A summary of the equations in the model and the numerical solution procedure are provided in this report. A user's guide and sample problems are also included. The main use of FEHMN will be to assist in the understanding of flow fields in the saturated zone below the proposed Yucca Mountain Repository. 33 refs., 27 figs., 12 tabs

  8. A surface code quantum computer in silicon

    Science.gov (United States)

    Hill, Charles D.; Peretz, Eldad; Hile, Samuel J.; House, Matthew G.; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y.; Hollenberg, Lloyd C. L.

    2015-01-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel—posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310

  9. A surface code quantum computer in silicon.

    Science.gov (United States)

    Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L

    2015-10-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel-posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited.

  10. Establishing Base Elements of Perspective in Order to Reconstruct Architectural Buildings from Photographs

    Science.gov (United States)

    Dzwierzynska, Jolanta

    2017-12-01

    The use of perspective images, and especially of historical photographs, for retrieving information about the architectural environment they depict is a rapidly developing field. A photograph is a perspective image with a secure geometrical connection to reality, so it is possible to reverse the process. The aim of the present study is to establish the requirements that a photographic perspective representation should meet for reconstruction purposes, as well as to determine the base elements of perspective, such as the horizon line and the circle of depth, which is a key issue in any reconstruction. The starting point in the reconstruction process is a geometrical analysis of the photograph, especially the determination of the kind of perspective projection applied, which is defined by the building's location relative to the projection plane. Proper constructions can then be used. The paper addresses the problem of establishing the base elements of perspective on the basis of the photographic image in the case when camera calibration is impossible. It presents different geometric construction methods, selected according to the starting assumptions; the methods described in the paper are therefore fairly universal. Moreover, they can be used even in the case of poor-quality photographs with poor perspective geometry. Such constructions can be carried out with computer aid when the photographs are in digital form, as presented in the paper. The accuracy of the applied methods depends on the accuracy of the photographic image as well as on the drawing accuracy; it is, however, sufficient for further reconstruction. Establishing the base elements of perspective as presented in the paper is especially useful in difficult cases of reconstruction, when information about the reconstructed architectural form is lacking and it is necessary to rely on solid geometry.
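    One pair of base elements, the vanishing points and the horizon line through them, can be recovered from a two-point perspective photograph by intersecting two families of traced horizontal building edges. A Python sketch using homogeneous coordinates (the edge coordinates are invented for the example):

      import numpy as np

      def intersect(l1, l2):
          """Intersection of two image lines, each given as a pair of
          points ((x1, y1), (x2, y2)), via homogeneous cross products."""
          def line(p, q):
              return np.cross([*p, 1.0], [*q, 1.0])
          h = np.cross(line(*l1), line(*l2))
          return h[:2] / h[2]

      # Two families of horizontal edges traced on the photograph.
      left_edges = (((100, 400), (300, 360)), ((100, 500), (300, 430)))
      right_edges = (((600, 350), (800, 390)), ((600, 440), (800, 500)))

      v1 = intersect(*left_edges)     # first vanishing point
      v2 = intersect(*right_edges)    # second vanishing point
      # The horizon is the line joining the two vanishing points; its
      # height in the image fixes the camera's eye level.
      slope = (v2[1] - v1[1]) / (v2[0] - v1[0])
      print("horizon: y = %.3f x + %.3f" % (slope, v1[1] - slope * v1[0]))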

  11. Toward Measures for Software Architectures

    National Research Council Canada - National Science Library

    Chastek, Gary; Ferguson, Robert

    2006-01-01

    .... Defining these architectural measures is very difficult. The software architecture deeply affects subsequent development and project management decisions, such as the breakdown of the coding tasks and the definition of the development increments...

  12. Detecting non-coding selective pressure in coding regions

    Directory of Open Access Journals (Sweden)

    Blanchette Mathieu

    2007-02-01

    Full Text Available Abstract Background Comparative genomics approaches, where orthologous DNA regions are compared and inter-species conserved regions are identified, have proven extremely powerful for identifying non-coding regulatory regions located in intergenic or intronic regions. However, non-coding functional elements can also be located within coding regions, as is common for exonic splicing enhancers, some transcription factor binding sites, and RNA secondary structure elements affecting mRNA stability, localization, or translation. Since these functional elements are located in regions that are themselves highly conserved because they code for a protein, they generally escape detection by comparative genomics approaches. Results We introduce a comparative genomics approach for detecting non-coding functional elements located within coding regions. Codon evolution is modeled as a mixture of codon substitution models, where each component of the mixture describes the evolution of codons under a specific type of coding selective pressure. We show how to compute the posterior distribution of the entropy and parsimony scores under this null model of codon evolution. The method is applied to a set of growth hormone 1 orthologous mRNA sequences and a known exonic splicing element is detected. The analysis of a set of CORTBP2 orthologous genes reveals a region of several hundred base pairs under strong non-coding selective pressure whose function remains unknown. Conclusion Non-coding functional elements, in particular those involved in post-transcriptional regulation, are likely to be much more prevalent than is currently known. With the numerous genome sequencing projects underway, comparative genomics approaches like the one proposed here are likely to become increasingly powerful at detecting such elements.
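    As a greatly simplified flavor of the scoring, the observed entropy of each codon column of an alignment can be computed directly; the method in the paper instead evaluates the posterior distribution of such entropy and parsimony scores under its mixture null model. A Python sketch with an invented three-sequence alignment:

      from collections import Counter
      from math import log2

      def codon_column_entropies(alignment):
          """Observed Shannon entropy of each codon column in a set of
          aligned orthologous coding sequences; unusually low values in
          a window can flag candidate overlapping functional elements."""
          n = len(alignment[0])
          assert n % 3 == 0 and all(len(s) == n for s in alignment)
          entropies = []
          for i in range(0, n, 3):
              column = [s[i:i + 3] for s in alignment]
              counts = Counter(column)
              total = len(column)
              entropies.append(-sum((c / total) * log2(c / total)
                                    for c in counts.values()))
          return entropies

      seqs = ["ATGGCCAAA",
              "ATGGCCAAG",
              "ATGGCTAAA"]
      print(codon_column_entropies(seqs))   # [0.0, 0.918..., 0.918...]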

  13. Implementation of the DPM Monte Carlo code on a parallel architecture for treatment planning applications.

    Science.gov (United States)

    Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J

    2004-09-01

    We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan-Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster, consisting of 800 MHz Intel Pentium III processors, shows an almost linear speedup up to 32 processors for simulating 1 x 10^8 or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors based on availability), which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors, on increasing the problem size up to 8 x 10^8 histories. For a smaller number of histories (1 x 10^8) the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1 x 10^8 histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture, Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster than the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy.
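    The efficiency figures quoted here follow from the standard strong-scaling definitions, speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p. A small Python helper (the timings are invented, chosen only to echo the reported trend of roughly 84% efficiency at 24 processors):

      def speedup_and_efficiency(t_serial, t_parallel, n_procs):
          """Strong-scaling metrics: speedup S = T(1)/T(p), E = S/p."""
          s = t_serial / t_parallel
          return s, s / n_procs

      # Efficiency drops once per-processor work no longer dominates
      # interprocessor communication.
      t1 = 1000.0                              # serial run, seconds
      for p, tp in [(8, 130.0), (16, 70.0), (24, 49.7)]:
          s, e = speedup_and_efficiency(t1, tp, p)
          print(f"p={p:2d}  speedup={s:5.1f}  efficiency={100 * e:5.1f}%")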

  14. Implementation of the DPM Monte Carlo code on a parallel architecture for treatment planning applications

    International Nuclear Information System (INIS)

    Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J.

    2004-01-01

    We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan-Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster, consisting of 800 MHz Intel Pentium III processors, shows an almost linear speedup up to 32 processors for simulating 1 x 10^8 or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors based on availability), which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors, on increasing the problem size up to 8 x 10^8 histories. For a smaller number of histories (1 x 10^8) the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1 x 10^8 histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture, Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster than the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy.

  15. A dual origin of the Xist gene from a protein-coding gene and a set of transposable elements.

    Directory of Open Access Journals (Sweden)

    Eugeny A Elisaphenko

    2008-06-01

    Full Text Available X-chromosome inactivation, which occurs in female eutherian mammals, is controlled by a complex X-linked locus termed the X-inactivation center (XIC). Previously it was proposed that genes of the XIC evolved, at least in part, as a result of pseudogenization of protein-coding genes. In this study we show that the key XIC gene Xist, which displays fragmentary homology to the protein-coding gene Lnx3, emerged de novo in early eutherians by integration of mobile elements which gave rise to simple tandem repeats. The Xist gene promoter region and four out of the ten exons found in eutherians retain homology to exons of the Lnx3 gene. The remaining six Xist exons, including those with simple tandem repeats detectable in their structure, have similarity to different transposable elements. Integration of mobile elements into Xist accompanies the overall evolution of the gene and presumably continues in contemporary eutherian species. Additionally, we show that the combination of remnants of protein-coding sequences and mobile elements is not unique to the Xist gene and is found in other XIC genes producing non-coding nuclear RNA.

  16. Self-Contained Cross-Cutting Pipeline Software Architecture

    OpenAIRE

    Patwardhan, Amol; Patwardhan, Rahul; Vartak, Sumalini

    2016-01-01

    Layered software architecture contains several intra-layer and inter-layer dependencies. Each layer depends on shared components, making it difficult to release a code change, bug fix or feature without exhaustive testing and without having to build the entire software code base. This paper proposes a self-contained, cross-cutting pipeline architecture (SCPA) that is independent of the existing layers. We chose 2 open source projects and 3 internal intern projects that used n-tier architecture and applied t...

  17. Governance of extended lifecycle in large-scale eHealth initiatives: analyzing variability of enterprise architecture elements.

    Science.gov (United States)

    Mykkänen, Juha; Virkanen, Hannu; Tuomainen, Mika

    2013-01-01

    The governance of large eHealth initiatives requires traceability of many requirements and design decisions. We provide a model which we use to conceptually analyze the variability of several enterprise architecture (EA) elements throughout the extended lifecycle of development goals, using interrelated projects related to the national ePrescription initiative in Finland.

  18. FLAME: A finite element computer code for contaminant transport in variably-saturated media

    International Nuclear Information System (INIS)

    Baca, R.G.; Magnuson, S.O.

    1992-06-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLAME computer code, is designed to simulate subsurface contaminant transport in variably-saturated media. The code can be applied to model two-dimensional contaminant transport in an arid site vadose zone or in an unconfined aquifer. In addition, the code has the capability to describe transport processes in a porous media with discrete fractures. This report presents the following: a description of the conceptual framework and mathematical theory, derivations of the finite element techniques and algorithms, computational examples that illustrate the capability of the code, and input instructions for the general use of the code. The development of the FLAME computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A

  19. FLAME: A finite element computer code for contaminant transport in variably-saturated media

    Energy Technology Data Exchange (ETDEWEB)

    Baca, R.G.; Magnuson, S.O.

    1992-06-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLAME computer code, is designed to simulate subsurface contaminant transport in variably-saturated media. The code can be applied to model two-dimensional contaminant transport in an arid site vadose zone or in an unconfined aquifer. In addition, the code has the capability to describe transport processes in a porous media with discrete fractures. This report presents the following: a description of the conceptual framework and mathematical theory, derivations of the finite element techniques and algorithms, computational examples that illustrate the capability of the code, and input instructions for the general use of the code. The development of the FLAME computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A.

  20. Fault-tolerant architectures for superconducting qubits

    International Nuclear Information System (INIS)

    DiVincenzo, David P

    2009-01-01

    In this short review, I draw attention to new developments in the theory of fault tolerance in quantum computation that may give concrete direction to future work in the development of superconducting qubit systems. The basics of quantum error-correction codes, which I will briefly review, have not significantly changed since their introduction 15 years ago. But an interesting picture has emerged of an efficient use of these codes that may put fault-tolerant operation within reach. It is now understood that two-dimensional surface codes, close relatives of the original toric code of Kitaev, can be adapted, as shown by Raussendorf and Harrington, to effectively perform logical gate operations in a very simple planar architecture, with error thresholds for fault-tolerant operation simulated to be 0.75%. This architecture uses topological ideas in its functioning, but it is not 'topological quantum computation': there are no non-abelian anyons in sight. I offer some speculations on the crucial pieces of superconducting hardware that could be demonstrated in the next couple of years that would be clear stepping stones towards this surface-code architecture.
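    Below threshold, the payoff of growing the code distance d is often summarized by the rough standard scaling p_L ~ A (p/p_th)^((d+1)/2) for the logical error rate per round. This rule of thumb is not derived in the review, and the prefactor A here is an assumption; the sketch only reuses the 0.75% threshold quoted above.

      def logical_error_rate(p_phys, d, p_th=0.0075, a=0.1):
          """Rough below-threshold scaling for a distance-d surface
          code: p_L ~ A * (p/p_th)**((d+1)/2).  A is illustrative."""
          return a * (p_phys / p_th) ** ((d + 1) // 2)

      # Physical error rate 0.1%: each distance increment buys roughly
      # another factor of (p/p_th) in logical error rate.
      for d in (3, 5, 7, 9):
          print(d, logical_error_rate(1e-3, d))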

  1. Architectural elements of hybrid navigation systems for future space transportation

    Science.gov (United States)

    Trigo, Guilherme F.; Theil, Stephan

    2017-12-01

    The fundamental limitations of inertial navigation, currently employed by most launchers, have raised interest in GNSS-aided solutions. Combining inertial measurements and GNSS outputs allows inertial calibration online, solving the issue of inertial drift. However, many challenges and design options unfold. In this work we analyse several architectural elements and design aspects of a hybrid GNSS/INS navigation system conceived for space transportation. The most fundamental architectural features, such as coupling depth, modularity between filter and inertial propagation, and the open- or closed-loop nature of the configuration, are discussed in the light of the envisaged application. The importance of the inertial propagation algorithm and of the sensor class in the overall system is investigated, and the handling of the sensor errors and uncertainties that arise with lower-grade sensors is also considered. In terms of GNSS outputs we consider receiver solutions (position and velocity) and raw measurements (pseudorange, pseudorange-rate and time-difference carrier phase). Receiver clock error handling options and atmospheric error correction schemes for these measurements are analysed under flight conditions. System performance with different GNSS measurements is estimated through covariance analysis, with the differences between loose and tight coupling emphasized through partial outage simulation. Finally, we discuss options for filter algorithm robustness against non-linearities and system/measurement errors. A possible scheme for fault detection, isolation and recovery is also proposed.

  2. Architectural elements of hybrid navigation systems for future space transportation

    Science.gov (United States)

    Trigo, Guilherme F.; Theil, Stephan

    2018-06-01

    The fundamental limitations of inertial navigation, currently employed by most launchers, have raised interest in GNSS-aided solutions. Combining inertial measurements and GNSS outputs allows inertial calibration online, solving the issue of inertial drift. However, many challenges and design options unfold. In this work we analyse several architectural elements and design aspects of a hybrid GNSS/INS navigation system conceived for space transportation. The most fundamental architectural features, such as coupling depth, modularity between filter and inertial propagation, and the open- or closed-loop nature of the configuration, are discussed in the light of the envisaged application. The importance of the inertial propagation algorithm and of the sensor class in the overall system is investigated, and the handling of the sensor errors and uncertainties that arise with lower-grade sensors is also considered. In terms of GNSS outputs we consider receiver solutions (position and velocity) and raw measurements (pseudorange, pseudorange-rate and time-difference carrier phase). Receiver clock error handling options and atmospheric error correction schemes for these measurements are analysed under flight conditions. System performance with different GNSS measurements is estimated through covariance analysis, with the differences between loose and tight coupling emphasized through partial outage simulation. Finally, we discuss options for filter algorithm robustness against non-linearities and system/measurement errors. A possible scheme for fault detection, isolation and recovery is also proposed.
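    The loosely coupled, closed-loop configuration discussed above can be caricatured in one dimension: an error-state Kalman filter propagates position, velocity and accelerometer-bias errors, and is updated with INS-minus-GNSS position residuals whose estimate would then be fed back to the inertial navigator. The Python sketch below uses invented noise figures and a random residual stream; it is a schematic of the coupling, not the authors' filter.

      import numpy as np

      dt = 0.1
      F = np.array([[1, dt, 0.5 * dt * dt],     # pos/vel/bias error model
                    [0, 1, dt],
                    [0, 0, 1]])
      Q = np.diag([1e-4, 1e-3, 1e-6])           # process noise (invented)
      H = np.array([[1.0, 0.0, 0.0]])           # GNSS fixes observe pos error
      R = np.array([[25.0]])                    # (5 m sigma)^2

      x = np.zeros(3)                           # error state after feedback
      P = np.diag([100.0, 10.0, 1e-2])
      for _ in range(100):
          x = F @ x                             # propagate error state
          P = F @ P @ F.T + Q
          z = np.array([np.random.randn() * 5.0])  # INS-minus-GNSS residual
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z - H @ x)
          P = (np.eye(3) - K @ H) @ P
          # Closed loop: feed x back to correct the navigator, reset x.
      print(np.sqrt(np.diag(P)))                # 1-sigma uncertainties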

  3. PRIAM: A self consistent finite element code for particle simulation in electromagnetic fields

    International Nuclear Information System (INIS)

    Le Meur, G.; Touze, F.

    1990-06-01

    A 2 1/2-dimensional, relativistic particle simulation code is described, together with a short review of the mixed finite element method used. The treatment of the driving terms (charge and current densities) and of the initial and boundary conditions is presented. Graphical results are shown

  4. Software Architecture Reconstruction Method, a Survey

    OpenAIRE

    Zainab Nayyar; Nazish Rafique

    2014-01-01

    Architecture reconstruction is a reverse engineering process in which we move from the code to the architecture level in order to reconstruct the architecture. Software architectures are the blueprints of projects, which depict the external overview of the software system. It is mostly maintenance and testing that cause the software to deviate from its original architecture, because sometimes, to enhance the functionality of a system, the software deviates from its documented specifications; some new modules a...

  5. A CORBA BASED ARCHITECTURE FOR ACCESSING REUSABLE SOFTWARE COMPONENTS ON THE WEB.

    Directory of Open Access Journals (Sweden)

    R. Cenk ERDUR

    2003-01-01

    Full Text Available In the very near future, as a result of the continuous growth of the Internet and advances in networking technologies, the Internet will become the common software repository for people and organizations who employ a component-based reuse approach in their software development life cycles. In order to use reusable components such as source code, analyses, designs, and design patterns during new software development processes, environments that support the identification of the components over the Internet are needed. The basic elements of such an environment are the coordinator programs which deliver user requests to appropriate component libraries, the user interfaces for querying, and the programs that wrap the component libraries. First, a CORBA-based architecture is proposed for such an environment. Then, an alternative architecture based on the Java 2 platform technologies is given for the same environment. Finally, the two architectures are compared.

  6. Architectural freedom and industrialised architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    Based on the repetitive architecture from the "building boom" 1960...... customization, telling exactly the revitalized story about the change to a contemporary, sustainable and better-performing expression in direct relation to the given context. Through the last couple of years we have in Denmark been focusing on more sustainable and low-energy building techniques, which also include...

  7. An edge-coloring approach for the design of parallel hardware interleaver architectures

    OpenAIRE

    Awais Hussein, Sani

    2012-01-01

    Nowadays, Turbo and LDPC codes are two families of codes that are extensively used in current communication standards due to their excellent error correction capabilities. However, hardware design of coders and decoders for high data rate applications is not a straightforward process. For high data rates, decoders are implemented on parallel architectures in which more than one processing elements decode the received data. To achieve high memory bandwidth, the main memory is divided into smal...
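    The architectural constraint behind such memory organizations is easy to state: a mapping is conflict-free if, at every cycle, the P processing elements address P distinct banks. The Python checker below illustrates the constraint with a toy schedule and a bit-reversal interleaver that breaks a naive modulo mapping; the edge-coloring construction itself is not reproduced.

      def is_conflict_free(schedule, bank_of, n_banks):
          """schedule[t] lists the data indices accessed by the PEs at
          cycle t; conflict-free means no bank is hit twice per cycle."""
          for t, accesses in enumerate(schedule):
              banks = [bank_of(a) % n_banks for a in accesses]
              if len(set(banks)) != len(banks):
                  return False, t
          return True, None

      def bit_reverse4(a):
          """4-bit bit-reversal permutation, a stand-in interleaver."""
          return int(format(a, '04b')[::-1], 2)

      # P = 4 PEs read a length-16 block; banks assigned by address mod P.
      P, N = 4, 16
      natural = [[t * P + i for i in range(P)] for t in range(N // P)]
      interleaved = [[bit_reverse4(a) for a in row] for row in natural]
      print(is_conflict_free(natural, lambda a: a, P))      # (True, None)
      print(is_conflict_free(interleaved, lambda a: a, P))  # (False, 0)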

  8. Architectural freedom and industrialised architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    to the building physics problems a new industrialized period has started, based on lightweight elements basically made of wooden structures, faced with different suitable materials meant for individual expression for the specific housing area. It is the purpose of this article to widen up the different design...... to this systematic thinking of the building technique we get a diverse and functional architecture. Creating a new and clearer storytelling about new and smart system-based thinking behind the architectural expression....

  9. Concepts and diagram elements for architectural knowledge management

    NARCIS (Netherlands)

    Orlic, B.; Mak, R.H.; David, I.; Lukkien, J.J.

    2011-01-01

    Capturing architectural knowledge is very important for the evolution of software products. There is increasing awareness that an essential part of this knowledge is in fact the very process of architectural reasoning and decision making, and not just its end results. Therefore, a conceptual

  10. Design and performance of coded aperture optical elements for the CESR-TA x-ray beam size monitor

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, J.P.; Chatterjee, A.; Conolly, C.; Edwards, E.; Ehrlichman, M.P. [Cornell University, Ithaca, NY 14853 (United States); Flanagan, J.W. [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); Department of Accelerator Science, Graduate University for Advanced Studies (SOKENDAI), Tsukuba (Japan); Fontes, E. [Cornell University, Ithaca, NY 14853 (United States); Heltsley, B.K., E-mail: bkh2@cornell.edu [Cornell University, Ithaca, NY 14853 (United States); Lyndaker, A.; Peterson, D.P.; Rider, N.T.; Rubin, D.L.; Seeley, R.; Shanks, J. [Cornell University, Ithaca, NY 14853 (United States)

    2014-12-11

    We describe the design and performance of optical elements for an x-ray beam size monitor (xBSM), a device measuring e+ and e- beam sizes in the CESR-TA storage ring. The device can measure vertical beam sizes of 10–100 μm on a turn-by-turn, bunch-by-bunch basis at e± beam energies of ~2–5 GeV. x-rays produced by a hard-bend magnet pass through a single- or multiple-slit (coded aperture) optical element onto a detector. The coded aperture slit pattern and the thickness of the masking material forming that pattern can both be tuned for optimal resolving power. We describe several such optical elements and show how well predictions of simple models track measured performances. - Highlights: • We characterize optical element performance of an e± x-ray beam size monitor. • We standardize beam size resolving power measurements to reference conditions. • Standardized resolving power measurements compare favorably to model predictions. • Key model features include simulation of photon-counting statistics and image fitting. • Results validate a coded aperture design optimized for the x-ray spectrum encountered.

  11. An Investigation of the Methods of Logicalizing the Code-Checking System for Architectural Design Review in New Taipei City

    Directory of Open Access Journals (Sweden)

    Wei-I Lee

    2016-12-01

    Full Text Available The New Taipei City Government developed a Code-checking System (CCS) using Building Information Modeling (BIM) technology to facilitate architectural design review in 2014. This system was intended to solve problems caused by cognitive gaps between designer and reviewer in the design review process. Along with considering information technology, the most important issue for the system's development has been the logicalization of literal building codes. Therefore, to enhance the reliability and performance of the CCS, this study uses the Fuzzy Delphi Method (FDM), on the basis of design thinking and communication theory, to investigate the semantic differences and cognitive gaps among participants in the design review process and to propose the direction of system development. Our empirical results lead us to recommend grouping multi-stage screening and weighted assisted logicalization of non-quantitative building codes to improve the operability of the CCS. Furthermore, the CCS should integrate an Expert Evaluation System (EES) to evaluate the design value under qualitative building codes.

  12. Architecture and Film

    OpenAIRE

    Mohammad Javaheri, Saharnaz

    2016-01-01

    Film does not exist without architecture. In every movie that has ever been made throughout history, the cinematic image of architecture is embedded within the picture. Throughout my studies and research, I began to see that there is no director who can consciously or unconsciously deny the use of architectural elements in his or her movies. Architecture offers a strong profile to distinguish characters and story. In the early days, films were shot in streets surrounde...

  13. Contribution of transposable elements and distal enhancers to evolution of human-specific features of interphase chromatin architecture in embryonic stem cells.

    Science.gov (United States)

    Glinsky, Gennadi V

    2018-03-01

    Transposable elements have made major evolutionary impacts on creation of primate-specific and human-specific genomic regulatory loci and species-specific genomic regulatory networks (GRNs). Molecular and genetic definitions of human-specific changes to GRNs contributing to development of unique to human phenotypes remain a highly significant challenge. Genome-wide proximity placement analysis of diverse families of human-specific genomic regulatory loci (HSGRL) identified topologically associating domains (TADs) that are significantly enriched for HSGRL and designated rapidly evolving in human TADs. Here, the analysis of HSGRL, hESC-enriched enhancers, super-enhancers (SEs), and specific sub-TAD structures termed super-enhancer domains (SEDs) has been performed. In the hESC genome, 331 of 504 (66%) of SED-harboring TADs contain HSGRL and 68% of SEDs co-localize with HSGRL, suggesting that emergence of HSGRL may have rewired SED-associated GRNs within specific TADs by inserting novel and/or erasing existing non-coding regulatory sequences. Consequently, markedly distinct features of the principal regulatory structures of interphase chromatin evolved in the hESC genome compared to mouse: the SED quantity is 3-fold higher and the median SED size is significantly larger. Concomitantly, the overall TAD quantity is increased by 42% while the median TAD size is significantly decreased (p = 9.11E-37) in the hESC genome. Present analyses illustrate a putative global role for transposable elements and HSGRL in shaping the human-specific features of the interphase chromatin organization and functions, which are facilitated by accelerated creation of novel transcription factor binding sites and new enhancers driven by targeted placement of HSGRL at defined genomic coordinates. A trend toward the convergence of TAD and SED architectures of interphase chromatin in the hESC genome may reflect changes of 3D-folding patterns of linear chromatin fibers designed to enhance both

  14. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs advanced error correcting techniques.

  15. Evaluation of finite element codes for demonstrating the performance of radioactive material packages in hypothetical accident drop scenarios

    International Nuclear Information System (INIS)

    Tso, C.F.; Hueggenberg, R.

    2004-01-01

    Drop testing and analysis are the two methods for demonstrating the performance of packages in hypothetical drop accident scenarios. The exact purpose of the tests and the analyses, and the relative prominence of the two in the license application, may depend on the Competent Authority and will vary between countries. The Finite Element Method (FEM) is a powerful analysis tool. A reliable finite element (FE) code, when used correctly and appropriately, will allow a package's behaviour to be simulated reliably. With improvements in computing power, and in the sophistication and reliability of FE codes, it is likely that FEM calculations will increasingly be used as evidence of drop test performance when seeking Competent Authority approval. What is lacking at the moment, however, is a standardised method of assessing a FE code in order to determine whether it is sufficiently reliable or pessimistic. To this end, the project Evaluation of Codes for Analysing the Drop Test Performance of Radioactive Material Transport Containers, funded by the European Commission Directorate-General XVII (now Directorate-General for Energy and Transport) and jointly performed by Arup and Gesellschaft fuer Nuklear-Behaelter mbH, was carried out in 1998. The work consisted of three components: (1) a survey of existing finite element software, with a view to finding codes that may be capable of analysing the drop test performance of radioactive material packages, and to produce an inventory of them; (2) the development of a set of benchmark problems for evaluating software used to analyse the drop test performance of packages; and (3) the evaluation of the finite element codes by testing them against the benchmarks. This paper presents a summary of this work.

  16. Evaluation of finite element codes for demonstrating the performance of radioactive material packages in hypothetical accident drop scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Tso, C.F. [Arup (United Kingdom); Hueggenberg, R. [Gesellschaft fuer Nuklear-Behaelter mbH (Germany)

    2004-07-01

    Drop testing and analysis are the two methods for demonstrating the performance of packages in hypothetical drop accident scenarios. The exact purpose of the tests and the analyses, and the relative prominence of the two in the license application, may depend on the Competent Authority and will vary between countries. The Finite Element Method (FEM) is a powerful analysis tool. A reliable finite element (FE) code, when used correctly and appropriately, will allow a package's behaviour to be simulated reliably. With improvements in computing power, and in the sophistication and reliability of FE codes, it is likely that FEM calculations will increasingly be used as evidence of drop test performance when seeking Competent Authority approval. What is lacking at the moment, however, is a standardised method of assessing a FE code in order to determine whether it is sufficiently reliable or pessimistic. To this end, the project Evaluation of Codes for Analysing the Drop Test Performance of Radioactive Material Transport Containers, funded by the European Commission Directorate-General XVII (now Directorate-General for Energy and Transport) and jointly performed by Arup and Gesellschaft fuer Nuklear-Behaelter mbH, was carried out in 1998. The work consisted of three components: (1) a survey of existing finite element software, with a view to finding codes that may be capable of analysing the drop test performance of radioactive material packages, and to produce an inventory of them; (2) the development of a set of benchmark problems for evaluating software used to analyse the drop test performance of packages; and (3) the evaluation of the finite element codes by testing them against the benchmarks. This paper presents a summary of this work.

  17. Elements of neurogeometry: functional architectures of vision

    CERN Document Server

    Petitot, Jean

    2017-01-01

    This book describes several mathematical models of the primary visual cortex, referring them to a vast ensemble of experimental data and putting forward an original geometrical model for its functional architecture, that is, the highly specific organization of its neural connections. The book spells out the geometrical algorithms implemented by this functional architecture, or put another way, the “neurogeometry” immanent in visual perception. Focusing on the neural origins of our spatial representations, it demonstrates three things: firstly, the way the visual neurons filter the optical signal is closely related to a wavelet analysis; secondly, the contact structure of the 1-jets of the curves in the plane (the retinal plane here) is implemented by the cortical functional architecture; and lastly, the visual algorithms for integrating contours from what may be rather incomplete sensory data can be modelled by the sub-Riemannian geometry associated with this contact structure. As such, it provides rea...
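    The first of the three claims, that visual neurons filter the optical signal in a wavelet-like way, is conventionally illustrated by modelling V1 simple-cell receptive profiles as Gabor functions. A Python sketch (sizes and parameters are arbitrary choices for the illustration):

      import numpy as np

      def gabor(size=32, wavelength=8.0, theta=0.0, sigma=5.0, phase=0.0):
          """Gabor receptive profile: an oriented plane wave under a
          Gaussian envelope, the standard simple-cell model."""
          half = size // 2
          y, x = np.mgrid[-half:half, -half:half]
          xr = x * np.cos(theta) + y * np.sin(theta)
          envelope = np.exp(-(x * x + y * y) / (2 * sigma ** 2))
          return envelope * np.cos(2 * np.pi * xr / wavelength + phase)

      # 'Filtering' a patch = inner product with each profile, i.e. one
      # coefficient of a wavelet-like analysis per orientation.
      patch = np.random.default_rng(1).standard_normal((32, 32))
      bank = [gabor(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
      print([float(np.sum(g * patch)) for g in bank])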

  18. Modeling in architectural-planning solutions of agrarian technoparks as elements of the infrastructure

    Science.gov (United States)

    Abdrassilova, Gulnara S.

    2017-09-01

    In the context of the development of agriculture as a driver of the economy of Kazakhstan, it is imperative to study new types of agrarian constructions (agroparks, agrotourist complexes, "vertical" farms, conservatories, greenhouses) that can be combined into complexes - agrarian technoparks. The creation of agrarian technoparks as elements of the infrastructure of the agglomeration should ensure a breakthrough in the production, storage and recycling of agrarian goods. Modeling the architectural-planning solutions of agrarian technoparks supports the development of the theory and practice of designing such objects on the basis of innovative approaches.

  19. Current status of the transient integral fuel element performance code URANUS

    International Nuclear Information System (INIS)

    Preusser, T.; Lassmann, K.

    1983-01-01

    To investigate the behavior of fuel pins during normal and off-normal operation, the integral fuel rod code URANUS has been extended to include a transient version. The paper describes the current status of the program system, including a presentation of newly developed models for hypothetical accident investigation. The main objective of current development work is to improve the modelling of fuel and clad material behavior during fast transients. URANUS allows detailed analysis of experiments up to the onset of strong material transport phenomena. Transient fission gas analysis is carried out through coupling with a special version of the LANGZEIT-KURZZEIT code (KfK). Fuel restructuring and grain growth kinetics models have been improved recently to better characterize pre-experimental steady-state operation; transient models are under development. Extensive verification of the new version has been carried out by comparison with analytical solutions, experimental evidence, and code-to-code evaluation studies. URANUS, with all these improvements, has been successfully applied to difficult fast breeder fuel rod analyses including TOP, LOF, TUCOP, local coolant blockage and specific carbide fuel experiments. The objective of further studies is the description of transient PCMI. It is expected that the results of these developments will contribute significantly to the understanding of fuel element structural behavior during severe transients. (orig.)

  20. Architecture Descriptions. A Contribution to Modeling of Production System Architecture

    DEFF Research Database (Denmark)

    Jepsen, Allan Dam; Hvam, Lars

    a proper understanding of the architecture phenomenon and the ability to describe it in a manner that allow the architecture to be communicated to and handled by stakeholders throughout the company. Despite the existence of several design philosophies in production system design such as Lean, that focus...... a diverse set of stakeholder domains and tools in the production system life cycle. To support such activities, a contribution is made to the identification and referencing of production system elements within architecture descriptions as part of the reference architecture framework. The contribution...

  1. VLSI architectures for modern error-correcting codes

    CERN Document Server

    Zhang, Xinmiao

    2015-01-01

    Error-correcting codes are ubiquitous. They are adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probing. New-generation and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of those high-gain error-correcting codes pose many challenges. They usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity. VLSI

  2. The Political Economy of Architectural Research : Dutch Architecture, Architects and the City, 2000-2012

    NARCIS (Netherlands)

    Djalali, A.

    2016-01-01

    The status of architectural research has not yet been clearly defined. Nevertheless, architectural research has surely become a core element in the profession of architecture. In fact, the tendency seems to be for architects to be less and less involved with building design and construction services, which

  3. Architectural Physics: Lighting.

    Science.gov (United States)

    Hopkinson, R. G.

    The author coordinates the many diverse branches of knowledge which have dealt with the field of lighting--physiology, psychology, engineering, physics, and architectural design. Part I, "The Elements of Architectural Physics", discusses the physiological aspects of lighting, visual performance, lighting design, calculations and measurements of…

  4. 38 CFR 39.22 - Architectural design standards.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief (2010-07-01) ... Architectural design...-16-10) Standards and Requirements for Project § 39.22 Architectural design standards. The..., Ontario, CA 91761-2816. (a) Architectural and structural requirements - (1) Life Safety Code. Standards must...

  5. Generic programming for deterministic neutron transport codes

    International Nuclear Information System (INIS)

    Plagne, L.; Poncot, A.

    2005-01-01

    This paper discusses the implementation of neutron transport codes via generic programming techniques. Two different Boltzmann equation approximations have been implemented, namely the Sn and SPn methods. This implementation experiment shows that generic programming allows us to improve maintainability and readability of source codes with no performance penalties compared to classical approaches. In the present implementation, matrices and vectors as well as linear algebra algorithms are treated separately from the rest of source code and gathered in a tool library called 'Generic Linear Algebra Solver System' (GLASS). Such a code architecture, based on a linear algebra library, allows us to separate the three different scientific fields involved in transport codes design: numerical analysis, reactor physics and computer science. Our library handles matrices with optional storage policies and thus applies both to Sn code, where the matrix elements are computed on the fly, and to SPn code where stored matrices are used. Thus, using GLASS allows us to share a large fraction of source code between Sn and SPn implementations. Moreover, the GLASS high level of abstraction allows the writing of numerical algorithms in a form which is very close to their textbook descriptions. Hence the GLASS algorithms collection, disconnected from computer science considerations (e.g. storage policy), is very easy to read, to maintain and to extend. (authors)
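    The storage-policy separation described for GLASS can be sketched outside C++ templates with duck typing: one solver written purely against a matvec 'concept' runs unchanged over a stored matrix (SPn-like) and an on-the-fly operator (Sn-like). The Python sketch below uses a textbook conjugate gradient as the solver; the GLASS API itself is not reproduced.

      import numpy as np

      class StoredMatrix:
          """SPn-style policy: elements assembled once and kept."""
          def __init__(self, a):
              self.a = np.asarray(a)
          def matvec(self, x):
              return self.a @ x

      class OnTheFlyMatrix:
          """Sn-style policy: elements recomputed in each product."""
          def __init__(self, n):
              self.n = n
          def matvec(self, x):
              y = 2 * x.copy()          # 1-D Laplacian, never stored
              y[:-1] -= x[1:]
              y[1:] -= x[:-1]
              return y

      def conjugate_gradient(op, b, iters=50):
          """Textbook CG written only against matvec, so it is
          oblivious to the storage policy."""
          x = np.zeros_like(b)
          r = b - op.matvec(x)
          p = r.copy()
          for _ in range(iters):
              if np.sqrt(r @ r) < 1e-12:
                  break
              ap = op.matvec(p)
              alpha = (r @ r) / (p @ ap)
              x += alpha * p
              r_new = r - alpha * ap
              p = r_new + ((r_new @ r_new) / (r @ r)) * p
              r = r_new
          return x

      n, b = 8, np.ones(8)
      lap = 2 * np.eye(8) - np.eye(8, k=1) - np.eye(8, k=-1)
      print(np.allclose(conjugate_gradient(StoredMatrix(lap), b),
                        conjugate_gradient(OnTheFlyMatrix(n), b)))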

  6. Impacts of traditional architecture on the use of wood as an element of facade covering in Serbian contemporary architecture

    Directory of Open Access Journals (Sweden)

    Ivanović-Šekularac Jelena

    2011-01-01

    Full Text Available The worldwide trend of re-using wood and wood products as materials for the construction and cladding of architectural structures is present not only because of the need to meet aesthetic, artistic and formal requirements, or the search for inspiration in a return to tradition and nature, but also because of wood's ecological, economic and energy feasibility. Furthermore, the use of wood fits into contemporary trends of sustainable development and the application of modern technical and technological solutions in the production of materials, maintaining a connection to nature, the environment and tradition. In this study the author focuses on wood and wood products as elements of facade cladding on buildings in Serbia, in order to extend knowledge about the possibilities and limitations of their use and to create a basis for their greater and correct application. The subject of this research is the application of wood and wood products as exterior cladding elements, in combination with other materials, in traditional and contemporary Serbian houses, with emphasis on the functional and representational roles and the various possibilities of wood. All the factors that affect the application of wood and wood products are analyzed, and conclusions are drawn about the manner of their implementation and the types of protection for wood and wood products. The development of modern technological solutions in wood processing has led to the production of wood-based composite materials that are highly resistant, stable and much longer-lasting than solid wood, while retaining, in an aesthetic sense, all the characteristics that make wood unique and inimitable. This is why modern wood-based facade claddings should be applied on the exteriors of contemporary architectural buildings in Serbia, with the use of solid wood reduced to a minimum.

  7. Criticality analysis of the storage tubes for irradiated fuel elements from the IEA-R1 with the MCNP code

    International Nuclear Information System (INIS)

    Maragni, M.G.; Moreira, J.M.L.

    1992-01-01

    A criticality safety analysis has been carried out for the storage tubes for irradiated fuel elements from the IEA-R1 research reactor. The analysis utilized the MCNP computer code, which allows exact simulation of complex geometries. Aiming at reducing the amount of input data, the fuel element cross-sections have been spatially smeared out. The earth material in the interstices between fuel elements has been approximated conservatively as concrete because its composition was unknown. The storage tubes have been found subcritical for the most adverse conditions (water flooding and un-irradiated fuel elements). A similar analysis with the KENO-IV computer code overestimated the k-eff result but still confirmed the criticality safety of the storage tubes. (author)
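
    MCNP estimates k-eff statistically by following neutron histories, but the quantity itself is the dominant eigenvalue of the fission source iteration. A minimal deterministic sketch of that iteration, with invented two-group constants rather than IEA-R1 data, shows the structure of the calculation:

        import numpy as np

        # Illustrative two-group constants (NOT IEA-R1 data): M removes and
        # downscatters neutrons, F produces fission neutrons (all born fast).
        M = np.array([[ 0.030, 0.000],   # fast-group removal
                      [-0.020, 0.080]])  # downscatter feeds thermal absorption
        F = np.array([[ 0.005, 0.120],   # nu*Sigma_f of each group
                      [ 0.000, 0.000]])

        phi = np.ones(2)
        k = 1.0
        for _ in range(200):
            source = F @ phi                      # fission source
            phi = np.linalg.solve(M, source / k)  # removal/scattering solve
            k_new = k * (F @ phi).sum() / source.sum()
            if abs(k_new - k) < 1e-10:
                k = k_new
                break
            k = k_new

        print(f"k = {k:.5f}")  # values below 1 indicate a subcritical configuration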

  8. Validation of the 3D finite element transport theory code EVENT for shielding applications

    International Nuclear Information System (INIS)

    Warner, Paul; Oliveira, R.E. de

    2000-01-01

    This paper is concerned with the validation of the 3D deterministic neutral-particle transport theory code EVENT for shielding applications. The code is based on the finite element-spherical harmonics (FE-PN) method, which has been extensively developed over the last decade. A general multi-group, anisotropic scattering formalism enables the code to address realistic steady state and time dependent, multi-dimensional coupled neutron/gamma radiation transport problems involving high scattering and deep penetration alike. The powerful geometrical flexibility and competitive computational effort make the code an attractive tool for shielding applications. In recognition of this, EVENT is currently in the process of being adopted by the UK nuclear industry. The theory behind EVENT is described and its numerical implementation is outlined. Numerical results obtained by the code are compared with predictions of the Monte Carlo code MCBEND and also with the results from benchmark shielding experiments. In particular, results are presented for the ASPIS experimental configuration for both neutron and gamma ray calculations using the BUGLE 96 nuclear data library. (author)

  9. Understanding Epistatic Interactions between Genes Targeted by Non-coding Regulatory Elements in Complex Diseases

    Directory of Open Access Journals (Sweden)

    Min Kyung Sung

    2014-12-01

    Full Text Available Genome-wide association studies have proven the highly polygenic architecture of complex diseases or traits; therefore, single-locus-based methods are usually unable to detect all involved loci, especially when individual loci exert small effects. Moreover, the majority of associated single-nucleotide polymorphisms reside in non-coding regions, making it difficult to understand their phenotypic contribution. In this work, we studied epistatic interactions associated with three common diseases using Korea Association Resource (KARE) data: type 2 diabetes mellitus (DM), hypertension (HT), and coronary artery disease (CAD). We showed that epistatic single-nucleotide polymorphisms (SNPs) were enriched in enhancers, as well as in DNase I footprints (Encyclopedia of DNA Elements [ENCODE] Project Consortium, 2012), which suggested that disruption of the regulatory regions where transcription factors bind may be involved in the disease mechanism. Accordingly, to identify the genes affected by the SNPs, we employed whole-genome multiple-cell-type enhancer data, which were discovered using DNase I profiles and Cap Analysis Gene Expression (CAGE). Assigned genes were significantly enriched in known disease-associated gene sets, which were explored based on the literature, suggesting that this approach is useful for detecting relevant affected genes. In our knowledge-based epistatic network, the three diseases share many associated genes and are also closely related with each other through many epistatic interactions. These findings elucidate the genetic basis of the close relationship between DM, HT, and CAD.

  10. Numerical experiments in finite element analysis of thermoelastoplastic behaviour of materials. Further developments of the PLASTEF code

    International Nuclear Information System (INIS)

    Basombrio, F.G.; Sarmiento, G.S.

    1980-01-01

    In a previous paper the finite element code PLASTEF for the numerical simulation of thermoelastoplastic behaviour of materials was presented in its general outline. This code employs an initial stress incremental procedure for given histories of loads and temperature. It has been formulated for medium sized computers. The present work is an extension of the previous paper to consider additional aspects of the variable temperature case. Non-trivial tests of this type of situation are described. Finally, details are given of some concrete applications to the prediction of thermoelastoplastic collapse of nuclear fuel element cladding. (author)

  11. Thermomechanical DART code improvements for LEU VHD dispersion and monolithic fuel element analysis

    International Nuclear Information System (INIS)

    Taboada, H.; Saliba, R.; Moscarda, M.V.; Rest, J.

    2005-01-01

    A collaboration agreement between ANL/US DOE and CNEA Argentina in the area of Low Enriched Uranium Advanced Fuels has been in place since October 16, 1997 under the Implementation Arrangement for Technical Exchange and Cooperation in the Area of Peaceful Uses of Nuclear Energy. An annex concerning DART code optimization has been operative since February 8, 1999. Previously, as part of this annex, a visual FASTDART version and a DART THERMAL version were presented during the RERTR 2000, 2002 and 2003 meetings. During this past year the following activities were completed: optimization of the DART-TM code Al diffusion parameters by testing predictions against reliable data from RERTR experiments, and improvements to the 3-D thermo-mechanical version of the code for modeling the irradiation behavior of LEU U-Mo monolithic fuel. Concerning the first point, by optimizing the parameters of the theoretical expression for Al diffusion through the interaction product, reasonable agreement was reached between DART temperature calculations and reliable RERTR PIE data. The 3-D thermo-mechanical code complex is based upon a finite element thermal-elastic code named TERMELAS and on the irradiation behavior provided by the DART code. An adequate and progressive process of coupling the calculations of both codes at each time step is currently being developed. Compatible thermal calculations between the two codes have been achieved. This is the first stage in benchmarking and validating the coupling process against RERTR PIE data. (author)

  12. A non-linear, finite element, heat conduction code to calculate temperatures in solids of arbitrary geometry

    International Nuclear Information System (INIS)

    Tayal, M.

    1987-01-01

    Structures often operate at elevated temperatures. Temperature calculations are needed so that the design can accommodate thermally induced stresses and material changes. A finite element computer code called FEAT has been developed to calculate temperatures in solids of arbitrary shapes. FEAT solves the classical equation for steady state conduction of heat. The solution is obtained for two-dimensional (plane or axisymmetric) or for three-dimensional problems. Gap elements are used to simulate interfaces between neighbouring surfaces. The code can model: conduction; internal generation of heat; prescribed convection to a heat sink; prescribed temperatures at boundaries; prescribed heat fluxes on some surfaces; and temperature-dependence of material properties like thermal conductivity. The user has the option of specifying the detailed variation of thermal conductivity with temperature. For convenience to the nuclear fuel industry, the user can also opt for pre-coded values of thermal conductivity, which are obtained from the MATPRO data base (sponsored by the U.S. Nuclear Regulatory Commission). The finite element method makes FEAT versatile, and enables it to accurately accommodate complex geometries. The optional link to MATPRO makes it convenient for the nuclear fuel industry to use FEAT, without loss of generality. Special numerical techniques make the code inexpensive to run for the type of material non-linearities often encountered in the analysis of nuclear fuel. The code, however, is general, and can be used for other components of the reactor, or even for non-nuclear systems. The predictions of FEAT have been compared against several analytical solutions. The agreement is usually better than 5%. Thermocouple measurements show that the FEAT predictions are consistent with measured changes in temperatures in simulated pressure tubes. FEAT was also found to predict well the axial variations in temperature in the end-pellets (UO2) of two fuel elements irradiated
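
    The combination FEAT implements, steady-state conduction with temperature-dependent conductivity, can be sketched in one dimension with linear elements and a Picard (successive substitution) iteration for the non-linearity. The k(T) law, load and geometry below are invented for illustration; FEAT's own formulation is richer:

        import numpy as np

        def solve_rod(n_el=40, length=1.0, q=1.0e4, t_left=300.0, t_right=400.0):
            """Steady 1D conduction with temperature-dependent conductivity:
            linear finite elements plus Picard iteration (a sketch, not FEAT)."""
            k = lambda t: 10.0 + 0.02 * t                 # invented k(T), W/m/K
            h = length / n_el
            t = np.linspace(t_left, t_right, n_el + 1)    # initial guess
            for _ in range(50):
                A = np.zeros((n_el + 1, n_el + 1))
                b = np.zeros(n_el + 1)
                for e in range(n_el):
                    ke = k(0.5 * (t[e] + t[e + 1])) / h   # element conductance
                    A[e:e + 2, e:e + 2] += ke * np.array([[1.0, -1.0],
                                                          [-1.0, 1.0]])
                    b[e:e + 2] += q * h / 2.0             # consistent heat load
                for node, val in ((0, t_left), (n_el, t_right)):
                    A[node, :] = 0.0                      # Dirichlet temperatures
                    A[node, node] = 1.0
                    b[node] = val
                t_new = np.linalg.solve(A, b)
                if np.max(np.abs(t_new - t)) < 1e-8:
                    return t_new
                t = t_new
            return t

        temps = solve_rod()
        print(f"peak temperature: {temps.max():.1f} K")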

  13. Enterprise architecture evaluation using architecture framework and UML stereotypes

    Directory of Open Access Journals (Sweden)

    Narges Shahi

    2014-08-01

    Full Text Available There is an increasing need for enterprise architecture in numerous organizations with complicated systems and various processes, growing support for information technology, and organizational units whose elements maintain complex relationships. Enterprise architecture is so effective that its non-use in an organization is regarded as an institutional inability to manage information technology efficiently. The enterprise architecture process generally consists of three phases: strategic programming of information technology, enterprise architecture programming, and enterprise architecture implementation. Each phase must be implemented sequentially, and a single flaw in one phase may result in a flaw in the whole architecture and, consequently, in extra costs and time. If a model of the problem is mapped and then evaluated before enterprise architecture implementation in the second phase, possible flaws in the implementation process are prevented. In this study, the processes of enterprise architecture are illustrated through UML diagrams, and the architecture is evaluated in the programming phase by transforming the UML diagrams into Petri nets. The results indicate that the high costs of the implementation phase will be reduced.

  14. A Scalable Architecture of a Structured LDPC Decoder

    Science.gov (United States)

    Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon

    2004-01-01

    We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2 dB implementation loss relative to a floating point decoder.
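
    The lifting step described above, replicating a small protograph Z times so that each edge of the base graph becomes a Z x Z permutation block of the parity-check matrix, can be sketched directly. The base graph and circulant shifts below are toy choices, not the permutations used in the paper:

        import numpy as np

        def lift_protograph(base, z, rng=np.random.default_rng(0)):
            """Expand an r x n protograph (entries = number of parallel edges)
            into a (z*r) x (z*n) binary parity-check matrix; every edge becomes
            a z x z circulant permutation block."""
            r, n = base.shape
            h = np.zeros((z * r, z * n), dtype=np.uint8)
            eye = np.eye(z, dtype=np.uint8)
            for i in range(r):
                for j in range(n):
                    for _ in range(base[i, j]):      # one permutation per edge
                        shift = rng.integers(z)
                        h[i * z:(i + 1) * z, j * z:(j + 1) * z] ^= \
                            np.roll(eye, shift, axis=1)
            return h

        # Toy base graph (2 checks x 4 variables) lifted by Z = 512; the paper's
        # (4096, 2048) code comes from a larger protograph lifted the same way.
        base = np.array([[1, 1, 1, 0],
                         [0, 1, 1, 1]])
        H = lift_protograph(base, 512)
        print(H.shape)   # (1024, 2048): a length-2048, rate-1/2 code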

  15. Implementing the Freight Transportation Data Architecture : Data Element Dictionary

    Science.gov (United States)

    2015-01-01

    NCFRP Report 9: Guidance for Developing a Freight Data Architecture articulates the value of establishing architecture for linking data across modes, subjects, and levels of geography to obtain essential information for decision making. Central to th...

  16. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, B.L. [Oak Ridge National Lab., TN (United States)]; Sartori, E. [OCDE/OECD NEA Data Bank, Issy-les-Moulineaux (France)]; Viedma, L.G. de [Consejo de Seguridad Nuclear, Madrid (Spain)]

    1997-06-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.

  17. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    International Nuclear Information System (INIS)

    Kirk, B.L.; Sartori, E.; Viedma, L.G. de

    1997-01-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management

  18. The Influence of Building Codes on Recreation Facility Design.

    Science.gov (United States)

    Morrison, Thomas A.

    1989-01-01

    Implications of building codes upon design and construction of recreation facilities are investigated (national building codes, recreation facility standards, and misperceptions of design requirements). Recreation professionals can influence architectural designers to correct past deficiencies, but they must understand architectural and…

  19. SP_Ace: a new code to derive stellar parameters and elemental abundances

    Science.gov (United States)

    Boeche, C.; Grebel, E. K.

    2016-03-01

    Context. Ongoing and future massive spectroscopic surveys will collect large numbers (10^6-10^7) of stellar spectra that need to be analyzed. Highly automated software is needed to derive stellar parameters and chemical abundances from these spectra. Aims: We developed a new method of estimating the stellar parameters Teff, log g, [M/H], and elemental abundances. This method was implemented in a new code, SP_Ace (Stellar Parameters And Chemical abundances Estimator). This is a highly automated code suitable for analyzing the spectra of large spectroscopic surveys with low or medium spectral resolution (R = 2000-20,000). Methods: After the astrophysical calibration of the oscillator strengths of 4643 absorption lines covering the wavelength ranges 5212-6860 Å and 8400-8924 Å, we constructed a library that contains the equivalent widths (EW) of these lines for a grid of stellar parameters. The EWs of each line are fit by a polynomial function that describes the EW of the line as a function of the stellar parameters. The coefficients of these polynomial functions are stored in a library called the "GCOG library". SP_Ace, a code written in FORTRAN95, uses the GCOG library to compute the EWs of the lines, constructs models of spectra as a function of the stellar parameters and abundances, and searches for the model that minimizes the χ² deviation when compared to the observed spectrum. The code has been tested on synthetic and real spectra for a wide range of signal-to-noise ratios and spectral resolutions. Results: SP_Ace derives stellar parameters such as Teff, log g, [M/H], and chemical abundances of up to ten elements for low to medium resolution spectra of FGK-type stars with precision comparable to the one usually obtained with spectra of higher resolution. Systematic errors in stellar parameters and chemical abundances are presented and identified with tests on synthetic and real spectra. Stochastic errors are automatically estimated by the code for all the parameters.
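
    The GCOG idea, tabulating each line's equivalent width as a polynomial in the stellar parameters and then searching parameter space for the model minimizing the χ² deviation, can be sketched with invented coefficients and only two parameters in place of SP_Ace's full set:

        import numpy as np

        rng = np.random.default_rng(1)
        n_lines = 30

        # 'GCOG-style' toy library: the EW of each line is a quadratic
        # polynomial in (Teff, [M/H]); all coefficients here are invented.
        coef = rng.normal(size=(n_lines, 6)) * [50.0, -8.0, 20.0, 1.0, 3.0, 5.0]

        def model_ews(teff, mh):
            t = (teff - 5500.0) / 1000.0                # scaled temperature
            basis = np.array([1.0, t, mh, t * t, t * mh, mh * mh])
            return np.clip(coef @ basis, 0.0, None)     # EWs are non-negative

        # Synthetic 'observed' star plus noise
        true_teff, true_mh = 5750.0, -0.3
        sigma = 2.0
        obs = model_ews(true_teff, true_mh) + rng.normal(scale=sigma, size=n_lines)

        # Brute-force chi-square minimization over a parameter grid
        teff_grid = np.arange(4500.0, 7000.0, 25.0)
        mh_grid = np.arange(-2.0, 0.55, 0.05)
        chi2 = np.array([[np.sum(((obs - model_ews(t, m)) / sigma) ** 2)
                          for m in mh_grid] for t in teff_grid])
        i, j = np.unravel_index(chi2.argmin(), chi2.shape)
        print(f"best fit: Teff = {teff_grid[i]:.0f} K, [M/H] = {mh_grid[j]:+.2f}")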

  20. Efficiency of High Order Spectral Element Methods on Petascale Architectures

    KAUST Repository

    Hutchinson, Maxwell; Heinecke, Alexander; Pabst, Hans; Henry, Greg; Parsani, Matteo; Keyes, David E.

    2016-01-01

    High order methods for the solution of PDEs expose a tradeoff between computational cost and accuracy on a per degree of freedom basis. In many cases, the cost increases due to higher arithmetic intensity while affecting data movement minimally. As architectures tend towards wider vector instructions and expect higher arithmetic intensities, the best order for a particular simulation may change. This study highlights preferred orders by identifying the high order efficiency frontier of the spectral element method implemented in Nek5000 and NekBox: the set of orders and meshes that minimize computational cost at fixed accuracy. First, we extract Nek's order-dependent computational kernels and demonstrate exceptional hardware utilization by hardware-aware implementations. Then, we perform production-scale calculations of the nonlinear single mode Rayleigh-Taylor instability on BlueGene/Q and Cray XC40-based supercomputers to highlight the influence of the architecture. Accuracy is defined with respect to physical observables, and computational costs are measured by the core-hour charge of the entire application. The total number of grid points needed to achieve a given accuracy is reduced by increasing the polynomial order. On the XC40 and BlueGene/Q, polynomial orders as high as 31 and 15 come at no marginal cost per timestep, respectively. Taken together, these observations lead to a strong preference for high order discretizations that use fewer degrees of freedom. From a performance point of view, we demonstrate up to 60% full application bandwidth utilization at scale and achieve ≈1 PFlop/s of compute performance in Nek's most flop-intense methods.

  1. Efficiency of High Order Spectral Element Methods on Petascale Architectures

    KAUST Repository

    Hutchinson, Maxwell

    2016-06-14

    High order methods for the solution of PDEs expose a tradeoff between computational cost and accuracy on a per degree of freedom basis. In many cases, the cost increases due to higher arithmetic intensity while affecting data movement minimally. As architectures tend towards wider vector instructions and expect higher arithmetic intensities, the best order for a particular simulation may change. This study highlights preferred orders by identifying the high order efficiency frontier of the spectral element method implemented in Nek5000 and NekBox: the set of orders and meshes that minimize computational cost at fixed accuracy. First, we extract Nek's order-dependent computational kernels and demonstrate exceptional hardware utilization by hardware-aware implementations. Then, we perform production-scale calculations of the nonlinear single mode Rayleigh-Taylor instability on BlueGene/Q and Cray XC40-based supercomputers to highlight the influence of the architecture. Accuracy is defined with respect to physical observables, and computational costs are measured by the core-hour charge of the entire application. The total number of grid points needed to achieve a given accuracy is reduced by increasing the polynomial order. On the XC40 and BlueGene/Q, polynomial orders as high as 31 and 15 come at no marginal cost per timestep, respectively. Taken together, these observations lead to a strong preference for high order discretizations that use fewer degrees of freedom. From a performance point of view, we demonstrate up to 60% full application bandwidth utilization at scale and achieve ≈1 PFlop/s of compute performance in Nek's most flop-intense methods.

  2. ABAQUS/EPGEN - a general purpose finite element code with emphasis on nonlinear applications

    International Nuclear Information System (INIS)

    Hibbitt, H.D.

    1984-01-01

    The article contains a summary description of ABAQUS, a finite element program designed for general use in nonlinear as well as linear structural problems, in the context of its application to nuclear structural integrity analysis. The article begins with a discussion of the design criteria and methods upon which the code development has been based. The engineering modelling capabilities, currently implemented in the program - elements, constitutive models and analysis procedures - are then described. Finally, a few demonstration examples are presented, to illustrate some of the program's features that are of interest in structural integrity analysis associated with nuclear power plants. (orig.)

  3. The Walk-Man Robot Software Architecture

    Directory of Open Access Journals (Sweden)

    Mirko Ferrati

    2016-05-01

    Full Text Available A software and control architecture for a humanoid robot is a complex and large project, which involves a team of developers/researchers to be coordinated and requires many hard design choices. If such a project has to be done in a very limited time, i.e., less than 1 year, more constraints are added, and concepts such as modular design, code reusability, and API definition need to be used as much as possible. In this work, we describe the software architecture developed for Walk-Man, a robot participant in the DARPA Robotics Challenge. The challenge required the robot to execute many different tasks, such as walking, driving a car, and manipulating objects. These tasks need to be solved by robotics specialists in their corresponding research fields, such as humanoid walking, motion planning, or object manipulation. The proposed architecture was developed in 10 months, provided boilerplate code for most of the functionalities required to control a humanoid robot, and allowed robotics researchers to produce their control modules for DRC tasks in a short time. Additional capabilities of the architecture include firmware and hardware management, mixing of different middlewares, management of unreliable networks, and an operator control station GUI. All the source code related to the architecture and some control modules have been released as open source projects.

  4. Ultrafast all-optical code-division multiple-access networks

    Science.gov (United States)

    Kwong, Wing C.; Prucnal, Paul R.; Liu, Yanming

    1992-12-01

    In optical code-division multiple access (CDMA), the architecture of optical encoders/decoders is another important factor that needs to be considered, besides the correlation properties of the already extensively studied optical codes. The architecture of optical encoders/decoders affects, for example, the amount of power loss and the length of optical delays associated with code sequence generation and correlation, which, in turn, affect the power budget, size, and cost of an optical CDMA system. Various CDMA coding architectures are studied in the paper. The encoders/decoders used in prime networks (i.e., prime encoders/decoders) generate, select, and correlate code sequences by a parallel combination of fiber-optic delay lines, and those in 2^n networks (i.e., 2^n encoders/decoders) generate and correlate code sequences by a serial combination of 2 × 2 passive couplers and fiber delays, with sequence selection performed in a parallel fashion. In contrast, the modified 2^n encoders/decoders generate, select, and correlate code sequences by a serial combination of directional couplers and delays. The power and delay-length requirements of the modified 2^n encoders/decoders are compared to those of the prime and 2^n encoders/decoders. A 100 Mbit/s optical CDMA experiment in free space demonstrating the feasibility of the all-serial coding architecture, using a serial combination of 50/50 beam splitters and retroreflectors at 10 Tchip/s (i.e., 100,000 chip/bit) with 100 fs laser pulses, is reported.
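
    The all-serial structure can be modeled as a cascade of two-tap impulse responses: each coupler-plus-delay stage either passes a pulse directly or delays it by a fixed number of chips, so the generated code sequence is the convolution of the stage responses. A toy model in chip-rate units, ignoring coupler losses:

        import numpy as np

        def serial_encoder(stage_delays):
            """Cascade of coupler+delay stages: each stage has the two-tap
            impulse response [1, 0, ..., 0, 1] (direct path plus delayed path),
            so the code sequence is the convolution of all stage responses."""
            seq = np.array([1.0])
            for d in stage_delays:
                stage = np.zeros(d + 1)
                stage[0] = stage[d] = 1.0
                seq = np.convolve(seq, stage)
            return seq

        # Three stages with delays of 1, 2 and 4 chips -> 2^3 = 8 chip slots;
        # choosing different delay subsets selects different sequences.
        code = serial_encoder([1, 2, 4])
        print(code)                        # [1. 1. 1. 1. 1. 1. 1. 1.]

        # Decoding is correlation against the same sequence: the matched code
        # produces a distinct autocorrelation peak.
        auto = np.correlate(code, code, mode="full")
        print(auto.max(), auto.argmax())   # peak of 8.0 at the aligned lag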

  5. Latest improvements on TRACPWR six-equations thermohydraulic code

    International Nuclear Information System (INIS)

    Rivero, N.; Batuecas, T.; Martinez, R.; Munoz, J.; Lenhardt, G.; Serrano, P.

    1999-01-01

    The paper presents the latest improvements to TRACPWR, aimed at adapting the code to present trends in computer platforms, architectures and training requirements, as well as extending the scope of the code itself and its applicability to technologies other than the Westinghouse PWR. First, the major features of TRACPWR as a best-estimate and real-time simulation code are summarized; then the areas where TRACPWR is being improved are presented. These areas comprise: (1) Architecture: integrating the TRACPWR and RELAP5 codes, (2) Code scope enhancement: modelling Mid-Loop operation, (3) Code speed-up: applying parallelization techniques, (4) Code platform downswing: porting to the Windows NT platform, (5) On-line performance: allowing simulation initialisation from a Plant Process Computer, and (6) Code scope extension: using the code for modelling VVER and PHWR technology. (author)

  6. Dynamic Weather Routes Architecture Overview

    Science.gov (United States)

    Eslami, Hassan; Eshow, Michelle

    2014-01-01

    Dynamic Weather Routes Architecture Overview presents the high-level software architecture of DWR, based on the CTAS software framework and the Direct-To automation tool. The document also covers external and internal data flows, required datasets, changes to the Direct-To software for DWR, collection of software statistics, and the code structure.

  7. Free material stiffness design of laminated composite structures using commercial finite element analysis codes

    DEFF Research Database (Denmark)

    Henrichsen, Søren Randrup; Lindgaard, Esben; Lund, Erik

    2015-01-01

    In this work optimum stiffness design of laminated composite structures is performed using the commercially available programs ANSYS and MATLAB. Within these programs a Free Material Optimization algorithm is implemented based on an optimality condition and a heuristic update scheme. The heuristic update scheme is needed because commercially available finite element analysis software is used: when using a commercial finite element analysis code it is not straightforward to implement a computationally efficient gradient-based optimization algorithm. Examples considered in this work are a clamped …, and the results are compared with those from an implementation where full access to the finite element analysis core is granted. This comparison displays the possibility of using commercially available programs for stiffness design of laminated composite structures.
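
    The optimality-condition-plus-heuristic-update pattern can be shown on a deliberately tiny stand-in problem: redistributing a fixed stiffness budget along springs in series until the strain energy density is uniform, which is the optimality condition for minimum compliance. The update rule and damping below are illustrative, not the paper's ANSYS/MATLAB implementation:

        import numpy as np

        # Toy optimality-condition loop: redistribute a fixed stiffness budget
        # along springs in series (unit load) until strain energy is uniform.
        n, budget, eta = 8, 8.0, 0.5       # eta = damping of the update
        force = 1.0
        k = np.linspace(0.5, 2.0, n)       # arbitrary starting distribution
        k *= budget / k.sum()
        for it in range(100):
            energy = force ** 2 / (2.0 * k)            # energy per spring
            if np.ptp(energy) / energy.mean() < 1e-9:  # optimality reached
                break
            k *= (energy / energy.mean()) ** eta       # heuristic update
            k *= budget / k.sum()                      # resource constraint
        print(k)                             # -> uniform stiffness (the optimum)
        print(force ** 2 * np.sum(1.0 / k))  # compliance = n^2/budget = 8.0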

  8. Architectural Prototyping in Industrial Practice

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2008-01-01

    Architectural prototyping is the process of using executable code to investigate stakeholders’ software architecture concerns with respect to a system under development. Previous work has established this as a useful and cost-effective way of exploration and learning of the design space of a system, in addressing issues regarding quality attributes, in addressing architectural risks, and in addressing the problem of knowledge transfer and conformance. Little work has been reported so far on the actual industrial use of architectural prototyping. In this paper, we report from an ethnographical study and focus group involving architects from four companies in which we have focused on architectural prototypes. Our findings conclude that architectural prototypes play an important role in resolving problems experimentally, but less so in exploring alternative solutions. Furthermore, architectural …

  9. Performance Analysis of Multiradio Transmitter with Polar or Cartesian Architectures Associated with High Efficiency Switched-Mode Power Amplifiers (invited paper

    Directory of Open Access Journals (Sweden)

    F. Robert

    2010-12-01

    Full Text Available This paper deals with wireless multiradio transmitter architectures operating in the frequency band of 800 MHz – 6 GHz. As a consequence of the constant evolution of communication systems, mobile transmitters must be able to operate at different frequency bands and modes according to existing standards specifications. The concept of a unique multiradio architecture is an evolution of the multistandard transceiver, which is characterized by a parallelization of circuits for each standard. The multiradio concept optimizes surface and power consumption. Transmitter architectures using sampling techniques and baseband ΣΔ or PWM coding of signals before their amplification appear as good candidates for multiradio transmitters for several reasons. They allow the use of high-efficiency power amplifiers such as switched-mode PAs. They are highly flexible and easy to integrate because of their digital nature. But when the transmitter efficiency is considered, many elements have to be taken into account: signal coding efficiency, PA efficiency, and the RF filter. This paper investigates the interest of these architectures for a multiradio transmitter able to support existing wireless communications standards between 800 MHz and 6 GHz. It evaluates and compares the different possible architectures for the WiMAX and LTE standards in terms of signal quality and transmitter power efficiency.

  10. The analysis of cultural architectural trends in Crisan locality

    Directory of Open Access Journals (Sweden)

    SELA Florentina

    2010-09-01

    Full Text Available The paper presents data on the identification and analysis of traditional architectural elements in Crisan locality, where tourism activity is in continuous development. The field research (during November 2007) enabled us to develop a qualitative and quantitative analysis in terms of the identification of traditional architecture elements, their conservation status, the frequency of use of traditional building materials, and the decorative elements and specific colors used in construction architecture. Further, based on the collected data, a chart of the Traditional Architecture Index (TAI) distribution versus distance from the center of Crisan locality was produced, showing that in Crisan locality houses were and are built without following any rules, thus destroying the traditional architecture.

  11. Finite element study of scaffold architecture design and culture conditions for tissue engineering.

    Science.gov (United States)

    Olivares, Andy L; Marsal, Elia; Planell, Josep A; Lacroix, Damien

    2009-10-01

    Tissue engineering scaffolds provide temporary mechanical support for tissue regeneration and transfer the global mechanical load, through their architecture, into mechanical stimuli for cells. In this study the interactions between scaffold pore morphology, mechanical stimuli developed at the cell microscopic level, and culture conditions applied at the macroscopic scale are studied on two regular scaffold structures. Gyroid and hexagonal scaffolds of 55% and 70% porosity were modeled in a finite element analysis and were submitted to an inlet fluid flow or compressive strain. A mechanoregulation theory based on scaffold shear strain and fluid shear stress was applied to determine the influence of each structure on the mechanical stimuli under initial conditions. Results indicate that the distribution of shear stress induced by fluid perfusion is very dependent on pore distribution within the scaffold. Gyroid architectures provide a better accessibility of the fluid than hexagonal structures. Based on the mechanoregulation theory, the differentiation process in these structures was more sensitive to inlet fluid flow than to axial strain of the scaffold. This study provides a computational approach to determine the mechanical stimuli at the cellular level when cells are cultured in a bioreactor and to relate mechanical stimuli with cell differentiation.

  12. Minimalism in architecture: Architecture as a language of its identity

    Directory of Open Access Journals (Sweden)

    Vasilski Dragana

    2012-01-01

    Full Text Available Every architectural work is created on the principle that it includes a meaning, and the work is then read as an artifact of that particular meaning. The resources by which the meaning is primarily built, themselves susceptible to transformation, as well as the routing of understanding (decoding) of the messages carried by a work of architecture, are the subject of semiotics and communication theories, which have played a significant role for architecture and the architect. Minimalism in architecture, as a paradigm of XXI century architecture, means searching for the essence located in the irreducible minimum. The inspired use of architectural units (archetypal elements), through the phantasm of simplicity, assumes the primary responsibility for providing the object's identity, because it participates in the formation of its language and therefore in its reading. Volume is formed by a clean language that builds the expression of fluid areas liberated from the need for additions. This reduced architectural language is appropriate to an age marked by electronic communications.

  13. Automatic code generation in practice

    DEFF Research Database (Denmark)

    Adam, Marian Sorin; Kuhrmann, Marco; Schultz, Ulrik Pagh

    2016-01-01

    Mobile robots often use a distributed architecture in which software components are deployed to heterogeneous hardware modules. Ensuring the consistency with the designed architecture is a complex task, notably if functional safety requirements have to be fulfilled. We propose to use a domain-specific language to specify those requirements and to allow for generating a safety-enforcing layer of code, which is deployed to the robot. The paper at hand reports experiences in practically applying code generation to mobile robots. For two cases, we discuss how we addressed challenges, e.g., regarding weaving code generation into proprietary development environments and testing of manually written code. We find that a DSL based on the same conceptual model can be used across different kinds of hardware modules, but a significant adaptation effort is required in practical scenarios involving different kinds …

  14. Modeling turbine-missile impacts using the HONDO finite-element code

    International Nuclear Information System (INIS)

    Schuler, K.W.

    1981-11-01

    Calculations have been performed using the dynamic finite element code HONDO to simulate a full scale rocket sled test. In the test a rocket sled was used to launch a 1527 kg (3366 lb) fragment of a steam turbine rotor disk, at a velocity of 150 m/s (490 ft/s), into a structure which was a simplified model of a steam turbine casing. In the calculations the material behavior of, and boundary conditions on, the target structure were varied to assess its energy absorbing characteristics. Comparisons are made between the calculations and observations of missile velocity and strain histories at various points of the target structure.

  15. Coded aperture subreflector array for high resolution radar imaging

    Science.gov (United States)

    Lynch, Jonathan J.; Herrault, Florian; Kona, Keerti; Virbila, Gabriel; McGuire, Chuck; Wetzel, Mike; Fung, Helen; Prophet, Eric

    2017-05-01

    HRL Laboratories has been developing a new approach for high resolution radar imaging on stationary platforms. High angular resolution is achieved by operating at 235 GHz and using a scalable tile phased array architecture that has the potential to realize thousands of elements at an affordable cost. HRL utilizes aperture coding techniques to minimize the size and complexity of the RF electronics needed for beamforming, and wafer-level fabrication and integration allow tiles containing 1024 elements to be manufactured at reasonable cost. This paper describes the results of an initial feasibility study of HRL's Coded Aperture Subreflector Array (CASA) approach for a 1024-element micromachined antenna array with integrated single-bit phase shifters. Two candidate electronic device technologies were evaluated over the 170 - 260 GHz range: GaN HEMT transistors and GaAs Schottky diodes. Array structures utilizing silicon micromachining and die bonding were evaluated for etch and alignment accuracy. Finally, the overall array efficiency was estimated to be about 37% (not including spillover losses) using full wave array simulations and measured device performance, which is a reasonable value at 235 GHz. Based on the measured data we selected GaN HEMT devices operated passively with 0 V drain bias due to their extremely low DC power dissipation.

  16. A Study on Architecture of Malicious Code Blocking Scheme with White List in Smartphone Environment

    Science.gov (United States)

    Lee, Kijeong; Tolentino, Randy S.; Park, Gil-Cheol; Kim, Yong-Tae

    Recently, interest in and demand for mobile communications have been growing fast because of the increasing prevalence of smartphones around the world. Existing feature phones have largely been replaced by smartphones, and with the explosive growth of Internet use on smartphones, including e-commerce and Internet banking transactions, the importance of protecting personal information has increased. Consequently, antivirus products for smartphones have been developed and launched in order to prevent infection by malicious code or viruses. In this paper, we propose a new scheme to protect smartphones from the malicious codes and malicious applications that are elements of security threats in the mobile environment, and to prevent information leakage from malicious code infection. The proposed scheme is based on a white list of smartphone applications: only authorized applications may be installed, which prevents the installation of malicious and untrusted mobile applications that could infect the smartphone's applications and programs.
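
    The white-list gate reduces to one check at installation time: admit a package only if its identity is on the vetted list. A minimal sketch with hypothetical digests (the paper's scheme also covers how the list is distributed and managed, which is omitted here):

        import hashlib

        # Hypothetical white list (not the paper's implementation): the
        # installer admits only packages whose digest appears in a vetted set.
        WHITE_LIST = {
            hashlib.sha256(b"trusted-banking-app-v1.2").hexdigest(),
            hashlib.sha256(b"trusted-mail-app-v3.0").hexdigest(),
        }

        def may_install(package_bytes: bytes) -> bool:
            """Allow installation only for white-listed package digests."""
            return hashlib.sha256(package_bytes).hexdigest() in WHITE_LIST

        print(may_install(b"trusted-mail-app-v3.0"))  # True  -> install allowed
        print(may_install(b"repackaged-malware"))     # False -> install blocked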

  17. Apolux : an innovative computer code for daylight design and analysis in architecture and urbanism

    Energy Technology Data Exchange (ETDEWEB)

    Claro, A.; Pereira, F.O.R.; Ledo, R.Z. [Santa Catarina Federal Univ., Florianopolis, SC (Brazil)]

    2005-07-01

    The main capabilities of a new computer program for calculating and analyzing daylighting in architectural space are discussed. Apolux 1.0 was designed to use three-dimensional files generated in graphic editors in the drawing exchange (DXF) format, and was developed to fit the characteristics of an architect's design work. An example of its use in the development of a design context is presented. The program offers fast and flexible manipulation of models under different visualization conditions. The algorithm for the physics of light is based on the radiosity method: the surfaces are represented by finite elements, divided into small triangular units of area, each of which is faced against all the others. The form factors of each triangle in relation to all others are determined in the primary calculation. Visible directions of the sky are also included, according to the modular units of a subdivided globe. Following these primary calculations, different and successive daylighting solutions can be determined under different sky conditions. The program can also change the properties of the materials to quickly recalculate the solutions. The program has been applied to an office building in Florianopolis, Brazil. The four stages of the design study were: initial discussion with the architects about the conceptual possibilities; development of a comparative study based on two architectural designs with different conceptual elements regarding daylighting exploitation, in order to compare the internal daylighting levels and distribution of the two options exposed to the same external conditions; study of the solar shading devices for specific facades; and simulations to test the performance of different designs. The program has proven to be very flexible, with reliable results. It can also incorporate real-sky situations through the input of measured sky luminance values on a spherical model. 3 refs., 14 figs.
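
    The form factors at the heart of the radiosity step have a standard small-patch approximation, F12 ≈ cosθ1·cosθ2·A2/(πr²), which conveys what a code like Apolux tabulates between its triangle pairs. A minimal sketch (not Apolux code; occlusion testing is omitted):

        import numpy as np

        def patch_form_factor(c1, n1, c2, n2, area2):
            """Small-patch approximation of the form factor from patch 1 to 2:
            F12 ~ cos(theta1) * cos(theta2) * A2 / (pi * r^2), where c* are
            patch centers and n* unit normals. A real radiosity code must
            also test visibility (occlusion) between patches."""
            v = c2 - c1
            r2 = v @ v
            cos1 = (n1 @ v) / np.sqrt(r2)
            cos2 = (n2 @ -v) / np.sqrt(r2)
            if cos1 <= 0.0 or cos2 <= 0.0:
                return 0.0            # the patches face away from each other
            return cos1 * cos2 * area2 / (np.pi * r2)

        # Two parallel 1 cm^2 patches, 1 m apart, directly facing each other
        f = patch_form_factor(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                              np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0]),
                              area2=1e-4)
        print(f)   # ~3.18e-5 = 1e-4 / pi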

  18. High performance 3D neutron transport on peta scale and hybrid architectures within APOLLO3 code

    International Nuclear Information System (INIS)

    Jamelot, E.; Dubois, J.; Lautard, J-J.; Calvin, C.; Baudron, A-M.

    2011-01-01

    APOLLO3 code is a common project of CEA, AREVA and EDF for the development of a new generation system for core physics analysis. We present here the parallelization of two deterministic transport solvers of APOLLO3: MINOS, a simplified 3D transport solver on structured Cartesian and hexagonal grids, and MINARET, a transport solver based on triangular meshes in 2D and prismatic ones in 3D. We used two different techniques to accelerate MINOS: a domain decomposition method, combined with an accelerated algorithm using GPU. The domain decomposition is based on the Schwarz iterative algorithm, with Robin boundary conditions to exchange information. The Robin parameters influence the convergence, and we detail how we optimized the choice of these parameters. MINARET parallelization is based on angular direction calculation using explicit message passing. Fine-grain parallelization is also available for each angular direction using shared memory multithreaded acceleration. Many performance results are presented on massively parallel architectures using more than 10^3 cores and on hybrid architectures using some tens of GPUs. This work contributes to the HPC development in reactor physics at the CEA Nuclear Energy Division. (author)
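
    The sensitivity to the Robin parameters is easy to reproduce on a 1D model problem. The toy below, unrelated to MINOS's actual discretization, runs a non-overlapping Schwarz iteration for -u'' = 1 on (0,1), exchanging Robin data du/dn + p·u at the interface; changing p changes the iteration count dramatically, which is the tuning the authors describe:

        import numpy as np

        def solve_subdomain(n, h, left_bc, right_bc):
            """FD solve of -u'' = 1 on one subdomain. BCs are either
            ('dirichlet', value) or ('robin', p, g) meaning du/dn + p*u = g,
            with outward normal and a first-order one-sided difference."""
            A = np.zeros((n, n))
            b = np.full(n, h * h)            # f = 1, interior rows scaled by h^2
            for i in range(1, n - 1):
                A[i, i - 1:i + 2] = [-1.0, 2.0, -1.0]
            for idx, bc in ((0, left_bc), (n - 1, right_bc)):
                inner = 1 if idx == 0 else n - 2
                A[idx, :] = 0.0
                if bc[0] == 'dirichlet':
                    A[idx, idx] = 1.0
                    b[idx] = bc[1]
                else:
                    _, p, g = bc
                    A[idx, idx] = 1.0 / h + p  # (u_idx - u_inner)/h + p*u_idx = g
                    A[idx, inner] = -1.0 / h
                    b[idx] = g
            return np.linalg.solve(A, b)

        m, h, p = 51, 0.5 / 50, 2.0          # p = Robin parameter (try 0.1 or 20)
        u1 = np.zeros(m)                     # subdomain (0, 0.5)
        u2 = np.zeros(m)                     # subdomain (0.5, 1)
        for it in range(200):
            g1 = (u2[1] - u2[0]) / h + p * u2[0]      # Robin data from domain 2
            u1 = solve_subdomain(m, h, ('dirichlet', 0.0), ('robin', p, g1))
            g2 = -(u1[-1] - u1[-2]) / h + p * u1[-1]  # Robin data from domain 1
            u2 = solve_subdomain(m, h, ('robin', p, g2), ('dirichlet', 0.0))
            if abs(u1[-1] - u2[0]) < 1e-12:           # interface continuity
                break
        x1 = np.linspace(0.0, 0.5, m)
        print(it, np.max(np.abs(u1 - 0.5 * x1 * (1.0 - x1))))  # O(h) accurate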

  19. Requirement analysis and architecture of data communication system for integral reactor

    International Nuclear Information System (INIS)

    Jeong, K. I.; Kwon, H. J.; Park, J. H.; Park, H. Y.; Koo, I. S.

    2005-05-01

    When digitalizing the Instrumentation and Control (I and C) systems in Nuclear Power Plants (NPP), a communication network is required for exchanging the digitalized data between I and C equipment in an NPP. A requirements analysis and an analysis of design elements and techniques are required for the design of a communication network. Through the requirements analysis of code and regulation documents such as NUREG/CR-6082, section 7.9 of NUREG 0800, IEEE Standard 7-4.3.2 and IEEE Standard 603, the extracted requirements can be used as a design basis and design concept for the detailed design of a communication network in the I and C system of an integral reactor. Design elements and techniques such as physical topology, protocol, transmission media and interconnection devices should be considered when designing a communication network. Each design element and technique should be analyzed and evaluated as a portion of the integrated communication network design. In this report, the basic design requirements related to the design of the communication network are investigated using the code and regulation documents, and an analysis of the design elements and techniques is performed. Based on this investigation and analysis, an overall architecture including the safety communication network and the non-safety communication network is proposed for an integral reactor

  20. Fuel element thermo-mechanical analysis during transient events using the FMS and FETMA codes

    International Nuclear Information System (INIS)

    Hernandez Lopez Hector; Hernandez Martinez Jose Luis; Ortiz Villafuerte Javier

    2005-01-01

    In the Instituto Nacional de Investigaciones Nucleares of Mexico, the Fuel Management System (FMS) software package has been used for a long time to simulate the operation of a BWR nuclear power plant in steady state as well as during transient events. To evaluate the fuel element thermo-mechanical performance during transient events, an interface between the FMS codes and our own Fuel Element Thermo Mechanical Analysis (FETMA) code is currently being developed and implemented. In this work, the results of the thermo-mechanical behavior of fuel rods in the hot channel during the simulation of transient events of a BWR nuclear power plant are shown. The transient events considered for this work are a load rejection and a feedwater control failure, which are among the most important events that can occur in a BWR. The results showed that conditions leading to fuel rod failure did not appear at any time during either event. It is also shown that a load rejection transient is more demanding in terms of safety than a feedwater controller failure. (authors)

  1. A Systematic Review of Software Architecture Visualization Techniques

    NARCIS (Netherlands)

    Shahin, M.; Liang, P.; Ali Babar, M.

    2014-01-01

    Context Given the increased interest in using visualization techniques (VTs) to help communicate and understand software architecture (SA) of large scale complex systems, several VTs and tools have been reported to represent architectural elements (such as architecture design, architectural

  2. FEHM, Finite Element Heat and Mass Transfer Code

    International Nuclear Information System (INIS)

    Zyvoloski, G.A.

    2002-01-01

    1 - Description of program or function: FEHM is a numerical simulation code for subsurface transport processes. It models 3-D, time-dependent, multiphase, multicomponent, non-isothermal, reactive flow through porous and fractured media. It can accurately represent complex 3-D geologic media and structures and their effects on subsurface flow and transport. Its capabilities include flow of gas, water, and heat; flow of air, water, and heat; multiple chemically reactive and sorbing tracers; a finite element/finite volume formulation; a coupled stress module; saturated and unsaturated media; and double porosity and double porosity/double permeability capabilities. 2 - Methods: FEHM uses a preconditioned conjugate gradient solution of coupled linear equations and a fully implicit, fully coupled Newton-Raphson solution of nonlinear equations. It has the capability of simulating transport using either an advection/diffusion solution or a particle tracking method. 3 - Restriction on the complexity of the problem: Disk space and machine memory are the only limitations.
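
    The solver structure named in the methods section, an outer fully implicit, fully coupled Newton-Raphson iteration whose linearized systems go to a preconditioned conjugate gradient solver, can be sketched on a toy two-unknown residual. Everything below is invented for illustration; only the control flow mirrors the description:

        import numpy as np

        # Toy fully coupled 'mass + energy' residual with two unknowns (the
        # physics is invented; FEHM's actual equations are far richer).
        def residual(v):
            p, t = v
            return np.array([p ** 2 + 0.1 * t - 1.0,    # mass-balance-like
                             0.2 * p + t ** 3 - 2.0])   # energy-balance-like

        def jacobian(v):
            p, t = v
            return np.array([[2.0 * p, 0.1],
                             [0.2, 3.0 * t ** 2]])

        v = np.array([1.0, 1.0])          # initial guess
        for k in range(50):
            r = residual(v)
            if np.linalg.norm(r) < 1e-12:
                break
            # FEHM hands the linearized system to preconditioned CG;
            # a dense direct solve stands in for it in this sketch.
            v -= np.linalg.solve(jacobian(v), r)
        print(k, v)                       # quadratic convergence in a few steps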

  3. Large Eddy Simulation of turbulent flows in compound channels with a finite element code

    International Nuclear Information System (INIS)

    Xavier, C.M.; Petry, A.P.; Moeller, S.V.

    2011-01-01

    This paper presents a numerical investigation of the developing flow in a compound channel formed by a rectangular main channel and a gap in one of the sidewalls. A three-dimensional Large Eddy Simulation computational code with the classic Smagorinsky model is introduced, in which the transient flow is modeled through the conservation equations of mass and momentum of a quasi-incompressible, isothermal continuous medium. The Finite Element Method, a Taylor-Galerkin scheme and linear hexahedral elements are applied. Numerical results for the velocity profile show the development of a shear layer, in agreement with experimental results obtained with a Pitot tube and hot wires. (author)
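
    The classic Smagorinsky model referred to above closes the filtered equations with an eddy viscosity nu_t = (Cs·Δ)²·|S|, where |S| = sqrt(2·S_ij·S_ij) is the resolved strain-rate magnitude. A minimal sketch of that closure evaluated on a 2D shear-layer-like velocity slice (grid, profile and constants are illustrative only):

        import numpy as np

        cs, dx = 0.17, 1.0 / 64
        x = np.linspace(0.0, 1.0, 65)
        _, Y = np.meshgrid(x, x, indexing="ij")
        u = np.tanh((Y - 0.5) / 0.05)          # shear-layer-like profile
        v = np.zeros_like(u)

        # Resolved strain-rate tensor and its magnitude |S|
        dudx, dudy = np.gradient(u, dx, edge_order=2)
        dvdx, dvdy = np.gradient(v, dx, edge_order=2)
        s11, s22 = dudx, dvdy
        s12 = 0.5 * (dudy + dvdx)
        s_mag = np.sqrt(2.0 * (s11 ** 2 + s22 ** 2 + 2.0 * s12 ** 2))

        nu_t = (cs * dx) ** 2 * s_mag          # Smagorinsky eddy viscosity
        print(f"max nu_t = {nu_t.max():.2e} (peaks inside the shear layer)")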

  4. Distribution Pattern of Fe, Sr, Zr and Ca Elements as Particle Size Function in the Code River Sediments from Upstream to Downstream

    International Nuclear Information System (INIS)

    Sri Murniasih; Muzakky

    2007-01-01

    The analysis of Fe, Sr, Zr and Ca concentrations in granular sediment from the upstream to the downstream reaches of the Code river has been carried out. The aim of this research is to determine the influence of particle size on the concentrations of the Fe, Sr, Zr and Ca elements in the Code river sediments from upstream to downstream, and their distribution patterns. The instrument used was an x-ray fluorescence spectrometer with a Si(Li) detector. The analysis results show that Fe and Sr are found mostly in the 150-90 μm particle size fraction, while Zr and Ca are found mostly in the < 90 μm fraction. The distribution pattern of the Fe, Sr, Zr and Ca elements in the Code river sediments tends to increase from upstream to downstream, following the river's conductivity. The concentrations of the Fe, Sr, Zr and Ca elements are 1.49 ± 0.03 % - 5.93 ± 0.02 %; 118.20 ± 10.73 ppm - 468.21 ± 20.36 ppm; 19.81 ± 0.86 ppm - 76.36 ± 3.02 ppm and 3.22 ± 0.25 % - 11.40 ± 0.31 %, respectively. (author)

  5. Requirements for a multifunctional code architecture

    Energy Technology Data Exchange (ETDEWEB)

    Tiihonen, O. [VTT Energy (Finland); Juslin, K. [VTT Automation (Finland)

    1997-07-01

    The present paper studies a set of requirements for a multifunctional simulation software architecture in the light of experiences gained in developing and using the APROS simulation environment. The huge steps taken in the development of computer hardware and software during the last ten years are changing the status of traditional nuclear safety analysis software. The affordable computing power on the safety analyst's table by far exceeds the possibilities offered to him/her ten years ago. At the same time, the features of everyday office software tend to set standards for the way input data and calculational results are managed.

  6. Requirements for a multifunctional code architecture

    International Nuclear Information System (INIS)

    Tiihonen, O.; Juslin, K.

    1997-01-01

    The present paper studies a set of requirements for a multifunctional simulation software architecture in the light of experiences gained in developing and using the APROS simulation environment. The huge steps taken in the development of computer hardware and software during the last ten years are changing the status of traditional nuclear safety analysis software. The affordable computing power on the safety analyst's table by far exceeds the possibilities offered to him/her ten years ago. At the same time, the features of everyday office software tend to set standards for the way input data and calculational results are managed.

  7. Vectorization and parallelization of a production reactor assembly code

    International Nuclear Information System (INIS)

    Vujic, J.L.; Martin, W.R.; Michigan Univ., Ann Arbor, MI

    1991-01-01

    In order to use the new features of supercomputers efficiently, production codes, usually written 10-20 years ago, must be tailored to modern computer architectures. We have chosen to optimize the CPM-2 code, a production reactor assembly code based on the collision probability transport method. Substantial speedup in execution times was obtained with the parallel/vector version of the CPM-2 code. In addition, we have developed a new transfer probability method, which removes some of the modelling limitations of the collision probability method encoded in the CPM-2 code, and can fully utilize the parallel/vector architecture of a multiprocessor IBM 3090. (author)

  8. Vectorization and parallelization of a production reactor assembly code

    International Nuclear Information System (INIS)

    Vujic, J.L.; Martin, W.R.

    1991-01-01

    In order to efficiently use the new features of supercomputers, production codes, usually written 10-20 years ago, must be tailored to modern computer architectures. We have chosen to optimize the CPM-2 code, a production reactor assembly code based on the collision probability transport method. Substantial speedups in execution times were obtained with the parallel/vector version of the CPM-2 code. In addition, we have developed a new transfer probability method, which removes some of the modelling limitations of the collision probability method encoded in the CPM-2 code, and can fully utilize the parallel/vector architecture of a multiprocessor IBM 3090. (author)

  9. MOVE-Pro: a low power and high code density TTA architecture

    NARCIS (Netherlands)

    He, Y.; She, D.; Mesman, B.; Corporaal, H.

    2011-01-01

    Transport Triggered Architectures (TTAs) possess many advantages, such as modularity, flexibility, and scalability. As an exposed-datapath architecture, TTAs can effectively reduce register file (RF) pressure, in both the number of accesses and the number of RF ports. However, the conventional TTAs

  10. Performance Analysis of FEM Algorithmson GPU and Many-Core Architectures

    KAUST Repository

    Khurram, Rooh

    2015-04-27

    The roadmaps of the leading supercomputer manufacturers are based on hybrid systems, which consist of a mix of conventional processors and accelerators. This trend is mainly due to the fact that the power consumption cost of future CPU-only Exascale systems would be unsustainable; thus accelerators such as graphics processing units (GPUs) and many-integrated-core (MIC) processors will likely be an integral part of the TOP500 (http://www.top500.org/) supercomputers beyond 2020. The emerging supercomputer architecture will bring new challenges for code developers. Continuum mechanics codes will be particularly affected, because the traditional synchronous implicit solvers will probably not scale on hybrid Exascale machines. In a previous study [1], we reported on the performance of a conjugate gradient based mesh motion algorithm [2] on Sandy Bridge, Xeon Phi, and K20c. In the present study we report on a comparative study of finite element codes, using PETSc and AmgX solvers on CPUs and GPUs, respectively [3,4]. We believe this study will be a good starting point for FEM code developers who are contemplating a CPU to accelerator transition.

  11. Accuracy Test of Software Architecture Compliance Checking Tools : Test Instruction

    NARCIS (Netherlands)

    Prof.dr. S. Brinkkemper; Dr. Leo Pruijt; C. Köppe; J.M.E.M. van der Werf

    2015-01-01

    Author supplied: "Abstract Software Architecture Compliance Checking (SACC) is an approach to verify conformance of implemented program code to high-level models of architectural design. Static SACC focuses on the modular software architecture and on the existence of rule violating dependencies

  12. Architectural elements and bounding surfaces in fluvial deposits: anatomy of the Kayenta formation (lower jurassic), Southwest Colorado

    Science.gov (United States)

    Miall, Andrew D.

    1988-03-01

    Three well-exposed outcrops in the Kayenta Formation (Lower Jurassic), near Dove Creek in southwestern Colorado, were studied using lateral profiles, in order to test recent ideas regarding architectural-element analysis and the classification and interpretation of internal bounding surfaces. Examination of bounding surfaces within and between elements in the Kayenta outcrops raises problems in applying the three-fold classification of Allen (1983). Enlarging this classification to a six-fold hierarchy permits the discrimination of surfaces intermediate between Allen's second- and third-order types, corresponding to the upper bounding surfaces of macroforms, and internal erosional "reactivation" surfaces within the macroforms. Examples of the first five types of surface occur in the Kayenta outcrops at Dove Creek. The new classification is offered as a general solution to the problem of describing complex, three-dimensional fluvial sandstone bodies. The Kayenta Formation at Dove Creek consists of a multistorey sandstone body, including the deposits of lateral- and downstream-accreted macroforms. The storeys show no internal cyclicity, neither within individual elements nor through the overall vertical thickness of the formation. Low paleocurrent variance indicates low-sinuosity flow, whereas macroform geometry and orientation suggest low to moderate sinuosity. The many internal minor erosion surfaces draped with mud and followed by intraclast breccias imply frequent rapid stage fluctuation, consistent with variable (seasonal? monsoonal? ephemeral?) flow. The results suggest a fluvial architecture similar to that of the South Saskatchewan River, though with a three-dimensional geometry unlike that interpreted from surface studies of that river.

  13. FTS2000 network architecture

    Science.gov (United States)

    Klenart, John

    1991-01-01

    The network architecture of FTS2000 is graphically depicted. A map of network A topology is provided, with interservice nodes. Next, the four basic elements of the architecture are laid out. Then, the FTS2000 time line is reproduced. A list of equipment supporting FTS2000 dedicated transmissions is given. Finally, access alternatives are shown.

  14. Creating a Structurally Realistic Finite Element Geometric Model of a Cardiomyocyte to Study the Role of Cellular Architecture in Cardiomyocyte Systems Biology.

    Science.gov (United States)

    Rajagopal, Vijay; Bass, Gregory; Ghosh, Shouryadipta; Hunt, Hilary; Walker, Cameron; Hanssen, Eric; Crampin, Edmund; Soeller, Christian

    2018-04-18

    With the advent of three-dimensional (3D) imaging technologies such as electron tomography, serial-block-face scanning electron microscopy and confocal microscopy, the scientific community has unprecedented access to large datasets at sub-micrometer resolution that characterize the architectural remodeling that accompanies changes in cardiomyocyte function in health and disease. However, these datasets have been under-utilized for investigating the role of cellular architecture remodeling in cardiomyocyte function. The purpose of this protocol is to outline how to create an accurate finite element model of a cardiomyocyte using high-resolution electron microscopy and confocal microscopy images. A detailed and accurate model of cellular architecture has significant potential to provide new insights into cardiomyocyte biology, more than experiments alone can provide. The power of this method lies in its ability to computationally fuse information from two disparate imaging modalities of cardiomyocyte ultrastructure to develop one unified and detailed model of the cardiomyocyte. This protocol outlines steps to integrate electron tomography and confocal microscopy images of adult male Wistar (a specific strain of albino rat) rat cardiomyocytes to develop a half-sarcomere finite element model of the cardiomyocyte. The procedure generates a 3D finite element model that contains an accurate, high-resolution depiction (on the order of ~35 nm) of the distribution of mitochondria, myofibrils and ryanodine receptor clusters that release the necessary calcium for cardiomyocyte contraction from the sarcoplasmic reticular network (SR) into the myofibril and cytosolic compartment. The model generated here as an illustration does not incorporate details of the transverse-tubule architecture or the sarcoplasmic reticular network and is therefore a minimal model of the cardiomyocyte. Nevertheless, the model can already be applied in simulation-based investigations into the

  15. Space Internet Architectures and Technologies for NASA Enterprises

    Science.gov (United States)

    Bhasin, Kul; Hayden, Jeffrey L.

    2001-01-01

    NASA's future communications services will be supplied through a space communications network that mirrors the terrestrial Internet in its capabilities and flexibility. The notional requirements for future data gathering and distribution by this Space Internet have been gathered from NASA's Earth Science Enterprise (ESE), the Human Exploration and Development in Space (HEDS), and the Space Science Enterprise (SSE). This paper describes a communications infrastructure for the Space Internet, the architectures within the infrastructure, and the elements that make up the architectures. The architectures meet the requirements of the enterprises beyond 2010 with Internet-compatible technologies and functionality. The elements of an architecture include the backbone, access, inter-spacecraft and proximity communication parts. From the architectures, technologies have been identified which have the most impact and are critical for the implementation of the architectures.

  16. Towards architectural information in implementation (NIER track)

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2011-01-01

    Agile development methods favor speed and feature-producing iterations. Software architecture, on the other hand, is ripe with techniques that are slow and not oriented directly towards implementation of customers' needs. Thus, there is a major challenge in retaining architectural information in a fast-paced agile project. We propose to embed as much architectural information as possible in the central artefact of the agile universe, the code. We argue that valuable architectural information is thereby retained for (automatic) documentation, validation, and further analysis...

  17. Modelling 3-D mechanical phenomena in a 1-D industrial finite element code: results and perspectives

    International Nuclear Information System (INIS)

    Guicheret-Retel, V.; Trivaudey, F.; Boubakar, M.L.; Masson, R.; Thevenin, Ph.

    2005-01-01

    Assessing fuel rod integrity in PWR reactors must reconcile two opposing goals: a one-dimensional finite element code (with axial revolution symmetry) is needed to provide industrial results at the scale of the reactor core, while the main risk of cladding failure [e.g. pellet-cladding interaction (PCI)] stems from fully three-dimensional phenomena. First, parametric three-dimensional elastic calculations were performed to identify the parameters relevant to PCI (fragment number, pellet-cladding contact conditions, etc.). The axial fragment number and the friction coefficient are shown to play a major role in PCI, as opposed to the other parameters. Next, the main limitations of the one-dimensional hypothesis of the finite element code CYRANO3 are identified. To overcome these limitations, both two- and three-dimensional emulations of CYRANO3 were developed. These developments are shown to significantly improve the results provided by CYRANO3. (authors)

  18. User's Manual for the FEHM Application-A Finite-Element Heat- and Mass-Transfer Code

    Energy Technology Data Exchange (ETDEWEB)

    George A. Zyvoloski; Bruce A. Robinson; Zora V. Dash; Lynn L. Trease

    1997-07-07

    This document is a manual for the use of the FEHM application, a finite-element heat- and mass-transfer computer code that can simulate nonisothermal multiphase multicomponent flow in porous media. The use of this code is applicable to natural-state studies of geothermal systems and groundwater flow. A primary use of the FEHM application will be to assist in the understanding of flow fields and mass transport in the saturated and unsaturated zones below the proposed Yucca Mountain nuclear waste repository in Nevada. The equations of heat and mass transfer for multiphase flow in porous and permeable media are solved in the FEHM application by using the finite-element method. The permeability and porosity of the medium are allowed to depend on pressure and temperature. The code also has provisions for movable air and water phases and noncoupled tracers; that is, tracer solutions that do not affect the heat- and mass-transfer solutions. The tracers can be passive or reactive. The code can simulate two-dimensional, two-dimensional radial, or three-dimensional geometries. In fact, FEHM is capable of describing flow that is dominated in many areas by fracture and fault flow, including the inherently three-dimensional flow that results from permeation to and from faults and fractures. The code can handle coupled heat and mass-transfer effects, such as boiling, dryout, and condensation that can occur in the near-field region surrounding the potential repository and the natural convection that occurs through Yucca Mountain due to seasonal temperature changes. The code is also capable of incorporating the various adsorption mechanisms, ranging from simple linear relations to nonlinear isotherms, needed to describe the very complex transport processes at Yucca Mountain. This report outlines the uses and capabilities of the FEHM application, initialization of code variables, restart procedures, and error processing. The report describes all the data files, the input data

  19. Verification of Advective Bar Elements Implemented in the Aria Thermal Response Code.

    Energy Technology Data Exchange (ETDEWEB)

    Mills, Brantley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-01-01

    A verification effort was undertaken to evaluate the implementation of the new advective bar capability in the Aria thermal response code. Several approaches to the verification process were taken: a mesh refinement study to demonstrate solution convergence in the fluid and the solid, visual examination of the mapping of the advective bar element nodes to the surrounding surfaces, and a comparison of solutions produced using the advective bars for simple geometries with solutions from commercial CFD software. The mesh refinement study showed solution convergence for simple pipe flow in both temperature and velocity. Guidelines were provided to achieve appropriate meshes between the advective bar elements and the surrounding volume. Simulations of pipe flow using advective bar elements in Aria have been compared to simulations using the commercial CFD software ANSYS Fluent(R) and provided comparable solutions in temperature and velocity, supporting proper implementation of the new capability. Acknowledgements: A special thanks goes to Dean Dobranich for his guidance and expertise through all stages of this effort; his advice and feedback were instrumental to its completion. Thanks also go to Sam Subia and Tolu Okusanya for helping to plan many of the verification activities performed in this document, to Sam, Justin Lamb and Victor Brunini for their assistance in resolving issues encountered with running the advective bar element model, and to Dean, Sam, and Adam Hetzler for reviewing the document and providing very valuable comments.

  20. Change Impact Analysis of Crosscutting in Software Architectural Design

    NARCIS (Netherlands)

    van den Berg, Klaas

    2006-01-01

    Software architectures should be amenable to changes in user requirements and implementation technology. The analysis of the impact of these changes can be based on traceability of architectural design elements. Design elements have dependencies with other software artifacts but also evolve in time.

  1. Architecture-driven Migration of Legacy Systems to Cloud-enabled Software

    DEFF Research Database (Denmark)

    Ahmad, Aakash; Babar, Muhammad Ali

    2014-01-01

    The framework leverages software reengineering concepts that aim to recover the architecture from legacy source code. It then exploits software evolution concepts to support architecture-driven migration of legacy systems to cloud-based architectures. The Legacy-to-Cloud Migration Horseshoe comprises four processes: (i) architecture migration planning, (ii) architecture recovery and consistency, (iii) architecture transformation, and (iv) architecture-based development of cloud-enabled software. We aim to discover, document and apply the migration...

  2. HEFF---A user's manual and guide for the HEFF code for thermal-mechanical analysis using the boundary-element method

    International Nuclear Information System (INIS)

    St John, C.M.; Sanjeevan, K.

    1991-12-01

    The HEFF Code combines a simple boundary-element method of stress analysis with the closed-form solutions for constant or exponentially decaying heat sources in an infinite elastic body to obtain an approximate method for analysis of underground excavations in a rock mass with heat generation. This manual describes the theoretical basis for the code, the code structure, model preparation, and the steps taken to assure that the code correctly performs its intended functions. The material contained within the report addresses the Software Quality Assurance Requirements for the Yucca Mountain Site Characterization Project. 13 refs., 26 figs., 14 tabs

  3. Product Architecture Modularity Strategies

    DEFF Research Database (Denmark)

    Mikkola, Juliana Hsuan

    2003-01-01

    The focus of this paper is to integrate various perspectives on product architecture modularity into a general framework, and also to propose a way to measure the degree of modularization embedded in product architectures. Various trade-offs between modular and integral product architectures, and how components and interfaces influence the degree of modularization, are considered. In order to gain a better understanding of product architecture modularity as a strategy, a theoretical framework and propositions are drawn from various academic literature sources. Based on the literature review, the following key elements of product architecture are identified: components (standard and new-to-the-firm), interfaces (standardization and specification), degree of coupling, and substitutability. A mathematical function, termed the modularization function, is introduced to measure the degree of modularization...
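
    The abstract introduces the modularization function without stating its closed form. One published variant of Mikkola's function is an exponential decay in the number of new-to-the-firm components; the sketch below assumes that form, and the parameter names and example values are illustrative assumptions, not taken from the paper.

        import math

        def modularization(u, N, s, delta):
            """Assumed exponential-decay modularization function.

            u     -- number of new-to-the-firm components
            N     -- total number of components
            s     -- substitutability factor
            delta -- interface constraint (coupling) factor
            Values near 1 suggest a modular architecture; near 0, integral.
            """
            return math.exp(-u**2 / (2.0 * N * s * delta))

        # Example: 10 of 60 components are new-to-the-firm.
        print(round(modularization(u=10, N=60, s=2.0, delta=1.0), 3))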

  4. An Empirical Investigation of Architectural Prototyping

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2010-01-01

    Architectural prototyping is the process of using executable code to investigate stakeholders' software architecture concerns with respect to a system under development. Previous work has established this as a useful and cost-effective way of exploring and learning the design space of a system and of addressing issues regarding quality attributes, architectural risks, and the problem of knowledge transfer and conformance. However, the actual industrial use of architectural prototyping has not been thoroughly researched so far. In this article, we report from three studies of architectural prototyping in practice. First, we report findings from an ethnographic study of practicing software architects. Secondly, we report from a focus group on architectural prototyping involving architects from four companies. And, thirdly, we report from a survey study of 20 practicing software architects and software...

  5. A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications.

    Science.gov (United States)

    Revathy, M; Saravanan, R

    2015-01-01

    Low-density parity-check (LDPC) codes have been implemented in the latest digital video broadcasting, broadband wireless access (WiMax), and fourth-generation wireless standards. In this paper, we propose a highly efficient LDPC decoder architecture for low-power applications. This study also considers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture and can be incorporated between the check and variable node architecture. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeting 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with conventional architectures.
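
    The record describes a hardware decoder, but the interplay of check nodes and variable nodes it refers to is easiest to see in software. Below is a minimal hard-decision bit-flipping LDPC decoder, a deliberately simplified stand-in for the paper's architecture; the parity-check matrix and received word are toy values.

        import numpy as np

        H = np.array([[1, 1, 0, 1, 0, 0],   # toy parity-check matrix
                      [0, 1, 1, 0, 1, 0],
                      [1, 0, 0, 0, 1, 1]])

        def decode(y, H, max_iter=20):
            x = y.copy()
            for _ in range(max_iter):
                syndrome = H @ x % 2        # check-node evaluation
                if not syndrome.any():
                    return x                # all parity checks satisfied
                fails = H.T @ syndrome      # failed checks per bit (variable-node vote)
                x[np.argmax(fails)] ^= 1    # flip the most-suspect bit
            return x

        received = np.array([1, 0, 1, 1, 0, 0])   # one bit corrupted
        print(decode(received, H))                # -> [1 0 1 1 1 0]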

  6. A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications

    Directory of Open Access Journals (Sweden)

    M. Revathy

    2015-01-01

    Full Text Available Low-density parity-check (LDPC) codes have been implemented in the latest digital video broadcasting, broadband wireless access (WiMax), and fourth-generation wireless standards. In this paper, we propose a highly efficient LDPC decoder architecture for low-power applications. This study also considers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture and can be incorporated between the check and variable node architecture. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeting 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with conventional architectures.

  7. A portable modular architecture for robotic manipulator control

    International Nuclear Information System (INIS)

    Butler, P.L.

    1993-01-01

    A control architecture has been developed to provide a framework for robotic manipulator control. This architecture, called the Modular Integrated Control Architecture (MICA), has been successfully applied to two different manipulator systems. MICA is a portable system in two respects. First, it can be used for the control of different types of manipulator systems. Second, the MICA code is portable across several operating environments. This portability allows the sharing of common control code among various systems. A major portion of MICA is the precise control of the multiple processors that have to be coordinated to control a manipulator system. By having MICA control the processor synchronization, the system developer can concentrate on the specific aspects of a new manipulator system. MICA also provides standard functions for trajectory generation that can be used for most manipulators. Custom trajectory generators can easily be added to suit the needs of a particular robotic control system. Another facility that MICA provides is a simulation of the manipulator, allowing the control code to be simulated before trying it on a manipulator system. Using this technique, one can develop code for a manipulator system without risking damage to the arm during development

  8. GAUDI-Architecture design document

    CERN Document Server

    Mato, P

    1998-01-01

    98-064 This document is the result of the architecture design phase for the LHCb event data processing applications project. The architecture of the LHCb software system includes its logical and physical structure, which has been forged by all the strategic and tactical decisions applied during development. The strategic decisions should be made explicitly, with consideration of the trade-offs of each alternative. The other purpose of this document is to serve as the main material for the scheduled architecture review that will take place in the coming weeks. The architecture review will allow us to identify the weaknesses and strengths of the proposed architecture and, we hope, to obtain a list of suggested changes to improve it, all well before the system is realized in code. It is in our interest to identify possible problems at the architecture design phase of the software project, before much of the software is implemented. Strategic decisions must be cross checked caref...

  9. Mechanical modelization of PCI with Fragema and CEA finite element codes

    International Nuclear Information System (INIS)

    Bernard, P.; Joseph, J.; Atabek, R.; Chantant, M.

    1982-03-01

    In order to model the PCI phenomenon during a power ramp test, two finite element codes, TITUS and VERDON, have been used by FRAGEMA and CEA. The results given by the 3D equivalent method developed with TITUS and by VERDON are equivalent; in particular, the strains and the equivalent Von Mises stresses at the pellet-to-pellet interface are quite similar. An evaluation was made to explain experimental ramp test results. These results come from the FRISCA 04bis and FRISCA 104 rods, which were ramp tested in SILOE. The equivalent Von Mises stress seems to be quite a good criterion to explain the failure threshold

  10. Development of a finite element code to solve thermo-hydro-mechanical coupling and simulate induced seismicity.

    Science.gov (United States)

    María Gómez Castro, Berta; De Simone, Silvia; Rossi, Riccardo; Larese De Tetto, Antonia; Carrera Ramírez, Jesús

    2015-04-01

    Coupled thermo-hydro-mechanical modeling is essential for CO2 storage because (1) large amounts of CO2 will be injected, which will cause large pressure buildups and might compromise the mechanical stability of the caprock seal, and (2) the most efficient injection technique is cold injection, which induces thermal stress changes in the reservoir and seal. These stress variations can cause mechanical failure in the caprock and can also trigger induced earthquakes. To properly assess these effects, numerical models that take into account the short- and long-term thermo-hydro-mechanical coupling are an important tool. For this purpose, there is a growing need for codes that couple these processes efficiently and accurately. This work involves the development of an open-source finite element code, written in C++, for correctly modeling the effects of thermo-hydro-mechanical coupling in the field of CO2 storage and in other fields involving these processes (geothermal energy systems, fracking, nuclear waste disposal, etc.), and capable of simulating induced seismicity. In order to simulate earthquakes, a new lower-dimensional interface element will be implemented in the code to represent preexisting fractures, where pressure continuity will be imposed across the fractures.

  11. Full scale seismic simulation of a nuclear reactor with parallel finite element analysis code for assembled structure

    International Nuclear Information System (INIS)

    Yamada, Tomonori

    2010-01-01

    Safety requirements for nuclear power plants attract much attention nowadays. With growing computing power, numerical simulation is one of the key technologies for meeting these safety requirements. The Center for Computational Science and e-Systems of the Japan Atomic Energy Agency has been developing a finite element analysis code for assembled structures to accurately evaluate the structural integrity of a nuclear power plant in its entirety under seismic events. Because a nuclear power plant is a very large assembled structure with tens of millions of mechanical components, the finite element model of each component is assembled into one structure, and non-conforming meshes of mechanical components are bonded together inside the code. The main technique used to bond these mechanical components is a triple sparse matrix multiplication involving the multiple point constraints and the global stiffness matrix. In our code, this procedure is conducted component by component, so that the working memory size and computing time for this multiplication are available in the current computing environment. As an illustrative example, a seismic simulation of a real nuclear reactor, the High Temperature engineering Test Reactor located at the O-arai research and development center of JAEA, with 80 major mechanical components was conducted. Consequently, our code successfully simulated detailed elasto-plastic deformation of the nuclear reactor, and its computational performance was investigated. (author)
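
    The central operation named in this abstract is a triple sparse matrix multiplication that enforces multiple point constraints (MPCs) when bonding non-conforming component meshes. A minimal sketch of that pattern, reducing a global stiffness matrix K through a constraint transformation T as T'KT, is given below; the sizes and slaving pattern are invented for illustration, and this is not JAEA's actual code.

        import scipy.sparse as sp

        # Illustrative MPC reduction, K_r = T' K T.
        n_full, n_master = 6, 4
        K = sp.random(n_full, n_full, density=0.5, format="csr")
        K = K + K.T                           # symmetric, like a stiffness matrix

        rows = [0, 1, 2, 3, 4, 5]
        cols = [0, 1, 2, 3, 1, 2]             # DOFs 4 and 5 slaved to masters 1 and 2
        vals = [1.0] * 6
        T = sp.csr_matrix((vals, (rows, cols)), shape=(n_full, n_master))

        K_reduced = (T.T @ K @ T).tocsr()     # the triple sparse matrix product
        print(K_reduced.shape)                # (4, 4)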

  12. Relating business intelligence and enterprise architecture - A method for combining operational data with architectural metadata

    NARCIS (Netherlands)

    Veneberg, R.K.M.; Iacob, Maria Eugenia; van Sinderen, Marten J.; Bodenstaff, L.

    Combining enterprise architecture and operational data is complex (especially when considering the actual ‘matching’ of data with enterprise architecture elements), and little has been written on how to do this. In this paper we aim to fill this gap, and propose a method to combine operational data

  13. SQA of finite element method (FEM) codes used for analyses of pit storage/transport packages

    Energy Technology Data Exchange (ETDEWEB)

    Russel, E. [Lawrence Livermore National Lab., CA (United States)

    1997-11-01

    This report contains viewgraphs on the software quality assurance of finite element method codes used for analyses of pit storage and transport projects. The methodology utilizes ISO 9000-3 (Guideline for the application of ISO 9001 to the development, supply, and maintenance of software) to establish well-defined software engineering processes that consistently maintain high-quality management approaches.

  14. Layered architecture for quantum computing

    OpenAIRE

    Jones, N. Cody; Van Meter, Rodney; Fowler, Austin G.; McMahon, Peter L.; Kim, Jungsang; Ladd, Thaddeus D.; Yamamoto, Yoshihisa

    2010-01-01

    We develop a layered quantum-computer architecture, which is a systematic framework for tackling the individual challenges of developing a quantum computer while constructing a cohesive device design. We discuss many of the prominent techniques for implementing circuit-model quantum computing and introduce several new methods, with an emphasis on employing surface-code quantum error correction. In doing so, we propose a new quantum-computer architecture based on optical control of quantum dot...

  15. The plasma automata network (PAN) architecture

    International Nuclear Information System (INIS)

    Cameron-Carey, C.M.

    1991-01-01

    Conventional neural networks consist of processing elements which are interconnected according to a specified topology. Typically, the number of processing elements and the interconnection topology are fixed. A neural network's information processing capability lies mainly in the variability of interconnection strengths, which directly influence activation patterns; these patterns represent entities and their interrelationships. Contrast this architecture, with its fixed topology and variable interconnection strengths, against one having a dynamic topology and fixed connection strengths. This paper reports on such a proposed architecture, in which there are no connections between processing elements. Instead, the processing elements form a plasma, exchanging information upon collision. A plasma can be populated with several different types of processing elements, each with their own activation function and self-modification mechanism. The activation patterns that are the plasma's response to stimulation drive natural selection among processing elements, which evolve to optimize performance

  16. On Converting Software Systems to Object Oriented Architectures

    Directory of Open Access Journals (Sweden)

    Gabriela Czibula

    2010-06-01

    Full Text Available Object-oriented concepts are useful concerning the reuse of existing software. Therefore, the transformation of procedural programs into object-oriented architectures becomes an important process for enhancing the reuse of procedural programs. Moreover, it would be useful to assist software developers by automatic methods in transforming procedural code into an equivalent object-oriented one. In this paper we aim at introducing a hierarchical clustering algorithm that can be used for assisting software developers in the process of transforming procedural code into an object-oriented architecture.
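
    To make the proposed use of hierarchical clustering concrete, the sketch below groups procedural functions into candidate classes by the global variables they access. The feature matrix, distance metric, and cut level are illustrative assumptions; the paper's actual algorithm and features may differ.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        functions = ["read_cfg", "parse_cfg", "open_db", "query_db", "close_db"]
        vars_used = np.array([          # rows: functions, cols: global variables
            [1, 1, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 1],
            [0, 0, 1, 0],
            [0, 0, 1, 0],
        ])

        Z = linkage(vars_used, method="average", metric="jaccard")
        labels = fcluster(Z, t=2, criterion="maxclust")   # cut into two classes
        for name, cls in zip(functions, labels):
            print(f"{name} -> candidate class {cls}")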

  17. Accuracy Test of Software Architecture Compliance Checking Tools – Test Instruction

    NARCIS (Netherlands)

    Pruijt, Leo; van der Werf, J.M.E.M.; Brinkkemper, Sjaak

    2015-01-01

    Software Architecture Compliance Checking (SACC) is an approach to verify conformance of implemented program code to high-level models of architectural design. Static SACC focuses on the modular software architecture and on the existence of rule violating dependencies between modules. Accurate tool

  18. Probabilistic evaluation of fuel element performance by the combined use of a fast running simplistic and a detailed deterministic fuel performance code

    International Nuclear Information System (INIS)

    Misfeldt, I.

    1980-01-01

    A comprehensive evaluation of fuel element performance requires a probabilistic fuel code supported by a well-benchmarked deterministic code. This paper presents an analysis of a SGHWR ramp experiment, where the probabilistic fuel code FRP is utilized in combination with the deterministic fuel models FFRS and SLEUTH/SEER. The statistical methods employed in FRP are Monte Carlo simulation or a low-order Taylor approximation. The fast-running simplistic fuel code FFRS is used for the deterministic simulations, whereas simulations with SLEUTH/SEER are used to verify the predictions of FFRS. The ramp test was performed with a SGHWR fuel element, where 9 of the 36 fuel pins failed. There seemed to be good agreement between the deterministic simulations and the experiment, but the statistical evaluation shows that the uncertainty on the important performance parameters is too large for this "nice" result. The analysis therefore indicates a discrepancy between the experiment and the deterministic code predictions. Possible explanations for this disagreement are discussed. (author)
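
    The scheme described here wraps a fast deterministic fuel model in Monte Carlo sampling of uncertain inputs. The sketch below shows that pattern with a made-up stand-in for a code such as FFRS; the response function, input distributions, and failure threshold are all invented for illustration.

        import numpy as np

        rng = np.random.default_rng(42)

        def clad_stress(power, gap, conductivity):
            # Toy stand-in for a deterministic fuel performance code.
            return power * 2.0 / (gap * conductivity)

        N = 10_000
        power = rng.normal(40.0, 2.0, N)          # assumed input uncertainties
        gap = rng.normal(80.0, 8.0, N)
        conductivity = rng.normal(3.0, 0.2, N)

        stress = clad_stress(power, gap, conductivity)
        limit = 0.45                              # assumed failure threshold
        print("estimated failure probability:", np.mean(stress > limit))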

  19. Portable LQCD Monte Carlo code using OpenACC

    Science.gov (United States)

    Bonati, Claudio; Calore, Enrico; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Fabio Schifano, Sebastiano; Silvi, Giorgio; Tripiccione, Raffaele

    2018-03-01

    Varying from multi-core CPUs to many-core GPUs, the present scenario of HPC architectures is extremely heterogeneous. In this context, code portability is increasingly important for easy maintainability of applications; this is relevant in scientific computing, where code changes are numerous and frequent. In this talk we present the design and optimization of a state-of-the-art, production-level LQCD Monte Carlo application using the OpenACC directives model. OpenACC aims to abstract parallel programming to a descriptive level, where programmers do not need to specify the mapping of the code onto the target machine. We describe the OpenACC implementation and show that the same code is able to target different architectures, including state-of-the-art CPUs and GPUs.

  20. Validation of finite element code DELFIN by means of the zero power experiences at the nuclear power plant of Atucha I

    International Nuclear Information System (INIS)

    Grant, C.R.

    1996-01-01

    Code DELFIN, developed at CNEA, treats the spatial discretization using heterogeneous finite elements, allowing a correct treatment of the continuity of fluxes and currents among elements and a more realistic representation of the hexagonal lattice of the reactor. It can be used for fuel management calculations, Xenon oscillation studies and spatial kinetics. Using the HUEMUL code for cell calculation (which uses a generalized two-dimensional collision probability theory and has the WIMS library incorporated in a data base), the zero power experiments performed in 1974 were calculated. (author). 8 refs., 9 figs., 3 tabs

  1. RAGE Architecture for Reusable Serious Gaming Technology Components

    Directory of Open Access Journals (Sweden)

    Wim van der Vegt

    2016-01-01

    Full Text Available To seize the potential of serious games, the RAGE project, funded by the Horizon-2020 Programme of the European Commission, will make available an interoperable set of advanced technology components (software assets) that support game studios in serious game development. This paper describes the overall software architecture and design conditions that are needed for the easy integration and reuse of such software assets in existing game platforms. Based on the component-based software engineering paradigm, the RAGE architecture takes into account the portability of assets to different operating systems, different programming languages, and different game engines. It avoids dependencies on external software frameworks and minimises code that may hinder integration with game engine code. Furthermore, it relies on a limited set of standard software patterns and well-established coding practices. The RAGE architecture has been successfully validated by implementing and testing basic software assets in four major programming languages (C#, C++, Java, and TypeScript/JavaScript, respectively). A demonstrator implementation of asset integration with an existing game engine was created and validated. The presented RAGE architecture paves the way for large-scale development and application of cross-engine reusable software assets for enhancing the quality and diversity of serious gaming.

  2. Enterprise Architecture Analysis with XML

    OpenAIRE

    Boer, Frank; Bonsangue, Marcello; Jacob, Joost; Stam, A.; Torre, Leon

    2005-01-01

    This paper shows how XML can be used for static and dynamic analysis of architectures. Our analysis is based on the distinction between symbolic and semantic models of architectures. The core of a symbolic model consists of its signature, which specifies symbolically its structural elements and their relationships. A semantic model is defined as a formal interpretation of the symbolic model. This provides a formal approach to the design of architectural description languages and a g...

  3. Transduplication resulted in the incorporation of two protein-coding sequences into the Turmoil-1 transposable element of C. elegans

    Directory of Open Access Journals (Sweden)

    Pupko Tal

    2008-10-01

    Full Text Available Abstract Transposable elements may acquire unrelated gene fragments into their sequences in a process called transduplication. Transduplication of protein-coding genes is common in plants, but is unknown in animals. Here, we report that the Turmoil-1 transposable element in C. elegans has incorporated two protein-coding sequences into its inverted terminal repeat (ITR) sequences. The ITRs of Turmoil-1 contain a conserved RNA recognition motif (RRM) that originated from the rsp-2 gene and a fragment from the protein-coding region of the cpg-3 gene. We further report that an open reading frame specific to C. elegans may have been created as a result of a Turmoil-1 insertion. Mutations at the 5' splice site of this open reading frame may have reactivated the transduplicated RRM motif. Reviewers: This article was reviewed by Dan Graur and William Martin. For the full reviews, please go to the Reviewers' Reports section.

  4. Multiprocessor architecture: Synthesis and evaluation

    Science.gov (United States)

    Standley, Hilda M.

    1990-01-01

    Multiprocessor computer architecture evaluation for structural computations is the focus of the research effort described. Results obtained are expected to lead to more efficient use of existing architectures and to suggest designs for new, application-specific architectures. The brief descriptions given outline a number of related efforts directed toward this purpose. The difficulty in analyzing an existing architecture or in designing a new computer architecture lies in the fact that the performance of a particular architecture, within the context of a given application, is determined by a number of factors. These include, but are not limited to, the efficiency of the computation algorithm, the programming language and support environment, the quality of the program written in the programming language, the multiplicity of the processing elements, the characteristics of the individual processing elements, the interconnection network connecting processors and non-local memories, and the shared memory organization, covering the spectrum from no shared memory (all local memory) to one global access memory. These performance determiners may be loosely classified as being software or hardware related, though this distinction is not clear or even appropriate in many cases. The effect of the choice of algorithm is ignored by assuming that the algorithm is specified as given. Effort directed toward removing the effect of the programming language and program resulted in the design of a high-level parallel programming language. Two characteristics of the fundamental structure of the architecture (memory organization and interconnection network) are examined.

  5. The frequency-dependent elements in the code SASSI: A bridge between civil engineers and the soil-structure interaction specialists

    International Nuclear Information System (INIS)

    Tyapin, Alexander

    2007-01-01

    After four decades of intensive studies of soil-structure interaction (SSI) effects in the field of NPP seismic analysis, there is a certain gap between SSI specialists and civil engineers. The results obtained using advanced SSI codes like SASSI are often rather far from the results obtained using general codes (though they match experimental and field data). The reasons for the discrepancies are not clear, because neither party can reproduce the results of the other party and investigate the influence of the various factors causing the difference step by step. As a result, civil engineers neither feel the SSI effects, nor control them. The author believes that the SSI specialists should take the first step forward by (a) recalling "viscous" damping in the structures versus the "material" one and (b) condensing all the SSI wave effects into the format of "soil springs and dashpots", which is more or less clear to civil engineers. The tool for both tasks could be a special finite element with frequency-dependent stiffness developed by the author for the code SASSI. This element can represent both soil and structure in the SSI model and help to separate the various factors influencing the seismic response. In the paper, the theory and some practical issues concerning the new element are presented
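
    The "soil springs and dashpots" format the author advocates amounts to a complex, frequency-dependent impedance K(w) = k(w) + i*w*c(w) attached at the foundation. The sketch below evaluates such an impedance at a few frequencies; the functional form and coefficient values are invented, not taken from SASSI.

        import numpy as np

        def impedance(omega, k_static=5.0e9, c=2.0e8, mass_like=1.0e6):
            k_dyn = k_static - mass_like * omega**2   # stiffness falls with frequency
            return k_dyn + 1j * omega * c             # complex dynamic impedance

        for f in (0.5, 2.0, 10.0):                    # Hz
            w = 2.0 * np.pi * f
            K = impedance(w)
            print(f"{f:5.1f} Hz: stiffness {K.real:.3e}, damping {K.imag:.3e}")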

  6. Using an Integrated Distributed Test Architecture to Develop an Architecture for Mars

    Science.gov (United States)

    Othon, William L.

    2016-01-01

    The creation of a crew-rated spacecraft architecture capable of sending humans to Mars requires the development and integration of multiple vehicle systems and subsystems. Important new technologies will be identified and matured within each technical discipline to support the mission. Architecture maturity also requires coordination with mission operations elements and ground infrastructure. During early architecture formulation, many of these assets will not be co-located, and integrated, distributed tests will be required to show that the technologies and systems are being developed in a coordinated way. When complete, technologies must be shown to function together to achieve mission goals. In this presentation, an architecture will be described that promotes and advances the integration of disparate systems within JSC and across NASA centers.

  7. PLASTEF: a code for the numerical simulation of thermoelastoplastic behaviour of materials using the finite element method

    International Nuclear Information System (INIS)

    Basombrio, F.G.; Sanchez Sarmiento, G.

    1978-01-01

    A general code for solving two-dimensional thermo-elastoplastic problems in geometries of arbitrary shape using the finite element method is presented. The initial-stress incremental procedure was adopted for given histories of load and temperature. Some classical applications are included. (Auth.)

  8. Prevalence of transcription promoters within archaeal operons and coding sequences.

    Science.gov (United States)

    Koide, Tie; Reiss, David J; Bare, J Christopher; Pang, Wyming Lee; Facciotti, Marc T; Schmid, Amy K; Pan, Min; Marzolf, Bruz; Van, Phu T; Lo, Fang-Yin; Pratap, Abhishek; Deutsch, Eric W; Peterson, Amelia; Martin, Dan; Baliga, Nitin S

    2009-01-01

    Despite the knowledge of complex prokaryotic transcription mechanisms, generalized rules, such as the simplified organization of genes into operons with well-defined promoters and terminators, have had a significant role in systems analysis of regulatory logic in both bacteria and archaea. Here, we have investigated the prevalence of alternate regulatory mechanisms through genome-wide characterization of transcript structures of approximately 64% of all genes, including putative non-coding RNAs, in Halobacterium salinarum NRC-1. Our integrative analysis of transcriptome dynamics and protein-DNA interaction data sets showed widespread environment-dependent modulation of operon architectures, transcription initiation and termination inside coding sequences, and extensive overlap in 3' ends of transcripts for many convergently transcribed genes. A significant fraction of these alternate transcriptional events correlate with binding locations of 11 transcription factors and regulators (TFs) inside operons and annotated genes, events usually considered spurious or non-functional. Using experimental validation, we illustrate the prevalence of overlapping genomic signals in archaeal transcription, casting doubt on the general perception of rigid boundaries between coding sequences and regulatory elements.

  9. Development of Multi-Scale Finite Element Analysis Codes for High Formability Sheet Metal Generation

    International Nuclear Information System (INIS)

    Nnakamachi, Eiji; Kuramae, Hiroyuki; Ngoc Tam, Nguyen; Nakamura, Yasunori; Sakamoto, Hidetoshi; Morimoto, Hideo

    2007-01-01

    In this study, dynamic- and static-explicit multi-scale finite element (F.E.) codes are developed by employing the homogenization method, a crystal plasticity constitutive equation, and an SEM-EBSD-measurement-based polycrystal model. These can predict the crystal morphology change and the hardening evolution at the micro level, as well as the macroscopic plastic anisotropy evolution. The codes are applied to analyze the asymmetric rolling process, which is introduced to control the crystal texture of sheet metal in order to generate a high-formability sheet metal. They can predict the yield surface and the sheet formability by analyzing strain-path-dependent yield and simple sheet forming processes, such as the limit dome height test and cylindrical deep drawing problems. It is shown that a shear-dominant rolling process, such as asymmetric rolling, generates "high formability" textures and eventually a high-formability sheet. The texture evolution and the high formability of the newly generated sheet metal were confirmed experimentally by SEM-EBSD measurement and the LDH test. It is concluded that these explicit-type crystallographic homogenized multi-scale F.E. codes could be a comprehensive tool to predict plasticity-induced texture evolution, anisotropy and formability in rolling process and limit dome height test analyses

  10. Software requirements, design, and verification and validation for the FEHM application - a finite-element heat- and mass-transfer code

    International Nuclear Information System (INIS)

    Dash, Z.V.; Robinson, B.A.; Zyvoloski, G.A.

    1997-07-01

    The requirements, design, and verification and validation of the software used in the FEHM application, a finite-element heat- and mass-transfer computer code that can simulate nonisothermal multiphase multicomponent flow in porous media, are described. The test of the DOE Code Comparison Project, Problem Five, Case A, which verifies that FEHM has correctly implemented heat and mass transfer and phase partitioning, is also covered

  11. System architectures for telerobotic research

    Science.gov (United States)

    Harrison, F. Wallace

    1989-01-01

    Several activities related to the definition and creation of telerobotic systems are described. The effort and investment required to create architectures for these complex systems can be enormous; however, the magnitude of the process can be reduced if structured design techniques are applied. A number of informal methodologies supporting certain aspects of the design process are available. More recently, prototypes of integrated tools supporting all phases of system design, from requirements analysis to code generation and hardware layout, have begun to appear. Activities related to the system architecture of telerobots are described, including current activities designed to provide a methodology for the comparison and quantitative analysis of alternative system architectures.

  12. Non-linear heat transfer computer code by finite element method

    International Nuclear Information System (INIS)

    Nagato, Kotaro; Takikawa, Noboru

    1977-01-01

    The computer code THETA-2D, for the calculation of temperature distributions by the two-dimensional finite element method, was developed for the analysis of heat transfer in high-temperature structures. A numerical experiment was performed on the numerical integration of the differential equation of heat conduction. The Runge-Kutta method produced an unstable solution; a stable solution was obtained with the β method using a β value of 0.35. In high-temperature structures, radiative heat transfer cannot be neglected. To introduce a radiative heat transfer term, a functional neglecting radiative heat transfer was derived first, and the radiative term was then added after discretization by the variational method. Five model calculations were carried out with the code. A steady heat conduction calculation was performed; with an estimated initial temperature of 1,000 degrees C, a reasonable heat balance was obtained. In the steady-unsteady temperature calculation, the time integral by THETA-2D turned out to underestimate the enthalpy change. With a one-dimensional model, the temperature distribution in a structure whose heat conductivity depends on temperature was calculated. A calculation with a model containing an internal void was performed. Finally, a model calculation for a complex system was carried out. (Kato, T.)
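
    The β method referred to above is the generalized trapezoidal rule: C (T_new - T_old)/dt + K [(1 - β) T_old + β T_new] = q, with β = 0.35 reported as stable here. A minimal sketch for a toy semi-discrete heat-conduction system follows; the matrices and load are invented for illustration.

        import numpy as np

        beta, dt, n_steps = 0.35, 0.1, 50

        C = np.eye(3)                         # lumped capacitance (toy)
        K = np.array([[ 2., -1.,  0.],
                      [-1.,  2., -1.],
                      [ 0., -1.,  2.]])       # conductance (toy)
        q = np.array([0., 0., 1.])            # heat load

        T = np.zeros(3)
        A = C / dt + beta * K                 # implicit-side matrix
        for _ in range(n_steps):
            rhs = (C / dt - (1.0 - beta) * K) @ T + q
            T = np.linalg.solve(A, rhs)
        print(T)                              # approaches the steady state K^-1 q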

  13. ELLIPT2D: A Flexible Finite Element Code Written in Python

    International Nuclear Information System (INIS)

    Pletzer, A.; Mollis, J.C.

    2001-01-01

    The use of the Python scripting language for scientific applications, and in particular to solve partial differential equations, is explored. It is shown that Python's rich data structures and object-oriented features can be exploited to write programs that are not only significantly more concise than their counterparts written in Fortran, C or C++, but are also numerically efficient. To illustrate this, a two-dimensional finite element code (ELLIPT2D) has been written. ELLIPT2D provides a flexible and easy-to-use framework for solving a large class of second-order elliptic problems. The program allows for structured or unstructured meshes. All functions defining the elliptic operator are user supplied, and so are the boundary conditions, which can be of Dirichlet, Neumann or Robbins type. ELLIPT2D makes extensive use of dictionaries (hash tables) as a way to represent sparse matrices. Other key features of the Python language that have been widely used include operator overloading, error handling, array slicing, and the Tkinter module for building graphical user interfaces. As an example of the utility of ELLIPT2D, a nonlinear solution of the Grad-Shafranov equation is computed using a Newton iterative scheme. A second application focuses on a solution of the toroidal Laplace equation coupled to a magnetohydrodynamic stability code, a problem arising in the context of magnetic fusion research
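
    The dictionary-based sparse matrix the abstract highlights is easy to illustrate. The class below is a generic dictionary-of-keys sketch in the spirit described, not ELLIPT2D's actual API; all names are invented.

        class DictSparse:
            def __init__(self):
                self.data = {}                       # (row, col) -> value

            def add(self, i, j, value):
                """Accumulate a finite element contribution into entry (i, j)."""
                self.data[(i, j)] = self.data.get((i, j), 0.0) + value

            def matvec(self, x):
                y = [0.0] * len(x)
                for (i, j), a in self.data.items():  # only stored entries cost work
                    y[i] += a * x[j]
                return y

        K = DictSparse()
        K.add(0, 0, 2.0); K.add(0, 1, -1.0)
        K.add(1, 0, -1.0); K.add(1, 1, 2.0)
        print(K.matvec([1.0, 1.0]))                  # [1.0, 1.0]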

  14. Parallel Subspace Subcodes of Reed-Solomon Codes for Magnetic Recording Channels

    Science.gov (United States)

    Wang, Han

    2010-01-01

    Read channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error floor problem, which may compromise reliability, if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code…

  15. Essential software architecture

    CERN Document Server

    Gorton, Ian

    2011-01-01

    Job titles like "Technical Architect" and "Chief Architect" nowadays abound in the software industry, yet many people suspect that "architecture" is one of the most overused and least understood terms in professional software development. Gorton's book tries to resolve this dilemma. It concisely describes the essential elements of knowledge and key skills required to be a software architect. The explanations encompass the essentials of architecture thinking, practices, and supporting technologies. They range from a general understanding of structure and quality attributes through technical i

  16. Design requirements of communication architecture of SMART safety system

    International Nuclear Information System (INIS)

    Park, H. Y.; Kim, D. H.; Sin, Y. C.; Lee, J. Y.

    2001-01-01

    To develop the communication network architecture of the SMART safety system, evaluation elements for reliability and performance factors were extracted from commercial networks and classified by importance. Predictable determinism, a static and fixed architecture, separation and isolation from other systems, high reliability, and verification and validation are introduced as the essential requirements of a safety system communication network. Based on these requirements, optical cable, a star topology, synchronous transmission, point-to-point physical links, connection-oriented logical links, and MAC (medium access control) with fixed allocation are selected as the design elements. The proposed architecture will be applied as the basic communication network architecture of the SMART safety system

  17. Rn3D: A finite element code for simulating gas flow and radon transport in variably saturated, nonisothermal porous media

    International Nuclear Information System (INIS)

    Holford, D.J.

    1994-01-01

    This document is a user's manual for the Rn3D finite element code. Rn3D was developed to simulate gas flow and radon transport in variably saturated, nonisothermal porous media. The Rn3D model is applicable to a wide range of problems involving radon transport in soil because it can simulate either steady-state or transient flow and transport in one, two, or three dimensions (including radially symmetric two-dimensional problems). The porous materials may be heterogeneous and anisotropic. This manual describes all pertinent mathematics related to the governing, boundary, and constitutive equations of the model, as well as the development of the finite element equations used in the code. Instructions are given for constructing Rn3D input files and executing the code, as well as a description of all output files generated by the code. Five verification problems are given that test various aspects of code operation, complete with example input files, FORTRAN programs for the respective analytical solutions, and plots of model results. An example simulation is presented to illustrate the type of problem Rn3D is designed to solve. Finally, instructions are given on how to convert Rn3D to simulate systems other than radon, air, and water

  18. Towards Product Lining Model-Driven Development Code Generators

    OpenAIRE

    Roth, Alexander; Rumpe, Bernhard

    2015-01-01

    A code generator systematically transforms compact models into detailed code. Today, code generation is regarded as an integral part of model-driven development (MDD). Despite its relevance, the development of code generators is an inherently complex task, and common methodologies and architectures are lacking. Additionally, reuse and extension of existing code generators exist only for individual parts. A systematic development and reuse based on a code generator product line is still in its inf...

  19. Huffman coding in advanced audio coding standard

    Science.gov (United States)

    Brzuchalski, Grzegorz

    2012-05-01

    This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations and a working implementation. Much attention has been paid to optimising the demand for hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
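
    For readers unfamiliar with Huffman noiseless coding, the sketch below builds a prefix code from symbol frequencies with the classic heap construction. It is illustrative only: AAC selects among fixed, pre-defined Huffman codebooks rather than building a tree per stream.

        import heapq
        from collections import Counter

        def huffman_codes(symbols):
            heap = [[w, [sym, ""]] for sym, w in Counter(symbols).items()]
            heapq.heapify(heap)
            while len(heap) > 1:
                lo = heapq.heappop(heap)
                hi = heapq.heappop(heap)
                for pair in lo[1:]:
                    pair[1] = "0" + pair[1]   # prepend a bit on the low branch
                for pair in hi[1:]:
                    pair[1] = "1" + pair[1]
                heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
            return dict(heap[0][1:])

        print(huffman_codes("abracadabra"))   # shorter codes for frequent symbols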

  20. Hijazi Architectural Object Library (HAOL)

    Science.gov (United States)

    Baik, A.; Boehm, J.

    2017-02-01

    As with many historical buildings around the world, building façades are of special interest; moreover, the details of their windows, stonework, and ornaments give each historic building its individual character. Each object of these buildings must be classified in an architectural object library. Recently, a number of research efforts have focused on this topic in Europe and Canada. From this standpoint, the Hijazi Architectural Objects Library (HAOL) has reproduced Hijazi elements as 3D computer models, which are modelled using a Revit Family (RFA). The HAOL is built on image survey and point cloud data. Hijazi objects such as the Roshan and Mashrabiyah have become part of the vocabulary of many Islamic cities in the Hijazi region, such as Jeddah in Saudi Arabia, and even of a number of Islamic historic cities such as Istanbul and Cairo. These architectural vocabularies are the main source of the beauty of this heritage. However, there is a big gap in both the Islamic and the Hijazi architectural libraries in providing these unique elements. Moreover, both Islamic and Hijazi architecture contain a huge amount of information which has not yet been digitally classified according to period and style. For this reason, this paper focuses on developing Heritage BIM (HBIM) standards and the HAOL library to reduce the cost and delivery time of heritage and new projects that involve Hijazi architectural styles. This paper provides the fundamentals of Hijazi architecture informatics by developing a framework for HBIM models and standards. This framework will provide schemas and critical information, for example classifying the different shapes, models, and forms of structure, construction, and ornamentation of Hijazi architecture, in order to digitalize parametric building identity.

  1. THE MEDIEVAL AND OTTOMAN HAMMAMS OF ALGERIA; ELEMENTS FOR A HISTORICAL STUDY OF BATHS ARCHITECTURE IN NORTH AFRICA

    Directory of Open Access Journals (Sweden)

    Nabila Cherif-Seffadj

    2009-03-01

    Full Text Available Algerian medinas (Islamic cities) have several traditional public baths (hammams). However, these hammams are the least known in the Maghreb countries. The first French archaeological surveys carried out on Islamic monuments and sites in Algeria found few historic baths in the medieval towns. All along the highlands route from Algiers (the capital city of Algeria, located in the north) to Tlemcen (a city in the western part of Algeria), these structures are found in all the cities founded after the Islamic religion expanded into western North Africa. These buildings are often associated with large mosques. In architectural history, these baths illustrate original spatial and organizational compositions in their proportions, methods of construction, ornamental elements, and the technical skills of their builders. The ancient bathing traditions interpreted in this building type are an undeniable legacy; they are present through architectural typology and technical implementation reflecting the important architectural heritage of the great Roman cities in Algeria. Furthermore, these traditions and buildings evolved through different eras. Master builders who left Andalusia to seek refuge in the Maghreb countries added the construction and ornamentation skills and techniques brought from Muslim Spain, while the Ottoman contribution to the history of many urban cities is important. Hence, the dual appellation of the hammam as “Moorish bath” and “Turkish bath” in Algeria is the perfect illustration of the evolution of bath architecture in Algeria.

  2. An efficient interpolation filter VLSI architecture for HEVC standard

    Science.gov (United States)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard, High-Efficiency Video Coding (HEVC), is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40 % of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. First, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed; it saves 19.7 % of processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware implementation area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which saves about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture can support real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
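
    The interpolation at issue is HEVC's 8-tap fractional-sample filter; for the half-sample luma position the standard's tap set is (-1, 4, -11, 40, 40, -11, 4, -1) with a divide by 64. The 1-D sketch below applies those taps to a toy row of samples (the rounding offset is omitted for brevity); a real encoder applies the filter separably over 2-D blocks.

        import numpy as np

        COEF = np.array([-1, 4, -11, 40, 40, -11, 4, -1])   # half-pel luma taps

        def half_pel(samples, i):
            """Interpolate the half-sample position between samples[i] and [i+1]."""
            window = samples[i - 3:i + 5]            # 8 integer-pel neighbours
            value = int(np.dot(COEF, window)) >> 6   # divide by 64
            return int(np.clip(value, 0, 255))       # 8-bit clipping

        row = np.array([10, 12, 15, 20, 60, 100, 110, 112, 113, 114])
        print(half_pel(row, 4))                      # between row[4] and row[5]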

  3. Lean Architecture for Agile Software Development

    CERN Document Server

    Coplien, James O

    2010-01-01

    More and more Agile projects are seeking architectural roots as they struggle with complexity and scale - and they're seeking lightweight ways to do it. Still seeking? In this book the authors help you to find your own path. Taking cues from Lean development, they can help steer your project toward practices with longstanding track records. Up-front architecture? Sure. You can deliver an architecture as code that compiles and that concretely guides development without bogging it down in a mass of documents and guesses about the implementation. Documentation? Even a whiteboard diagram, or a CRC

  4. A "Language Lab" for Architectural Design.

    Science.gov (United States)

    Mackenzie, Arch; And Others

    This paper discusses a "language lab" strategy in which traditional studio learning may be supplemented by language lessons using computer graphics techniques to teach architectural grammar, a body of elements and principles that govern the design of buildings belonging to a particular architectural theory or style. Two methods of…

  5. Parallel Fast Multipole Boundary Element Method for crustal dynamics

    International Nuclear Information System (INIS)

    Quevedo, Leonardo; Morra, Gabriele; Mueller, R Dietmar

    2010-01-01

    Crustal faults and sharp material transitions in the crust are usually represented as triangulated surfaces in structural geological models. The complex range of volumes separating such surfaces is typically meshed three-dimensionally in order to solve the equations that describe crustal deformation with the finite-difference (FD) or finite-element (FEM) methods. We show here how the Boundary Element Method, combined with the multipole approach, can revolutionise the calculation of stress and strain, solving the problem of computational scalability from reservoir to basin scales. The Fast Multipole Boundary Element Method (Fast BEM) tackles the difficulty of handling the intricate volume meshes and high resolution of crustal data that has put classical 3D finite approaches in a performance crisis. The two main performance enhancements of this method, the reduction of required mesh elements from cubic to quadratic growth with linear size and a linear-logarithmic runtime, cut memory and runtime requirements and allow the treatment of a new scale of geodynamic models. This approach was recently tested and applied in a series of papers by [1, 2, 3] for regional and global geodynamics, using KD trees for fast identification of near- and far-field interacting elements and MPI-parallelised code on distributed-memory architectures, and is now in active development for crustal dynamics. As the method is based on free surfaces, it allows easy data transfer to geological visualisation tools, where only changes in boundaries and material properties are required as input parameters. In addition, easy volume mesh sampling of physical quantities enables direct integration with existing FD/FEM code.
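
    A minimal sketch of the near/far-field classification idea mentioned in the record (KD-tree identification of interacting elements), in Python with SciPy; a real fast-multipole BEM would traverse a cluster tree rather than enumerate all pairs, so this is illustrative only.

        import numpy as np
        from scipy.spatial import cKDTree

        def split_near_far(centroids, radius):
            """Split element-pair interactions into near field (direct
            evaluation) and far field (multipole-approximation candidates)."""
            tree = cKDTree(centroids)
            near = tree.query_pairs(radius)          # fast neighbour search
            n = len(centroids)
            all_pairs = {(i, j) for i in range(n) for j in range(i + 1, n)}
            return near, all_pairs - near

        centroids = np.random.rand(200, 3)           # boundary-element centroids
        near, far = split_near_far(centroids, radius=0.2)
        print(len(near), "near-field pairs,", len(far), "far-field pairs")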

  6. Implementation of thermo-viscoplastic constitutive equations into the finite element code ABAQUS

    International Nuclear Information System (INIS)

    Youn, Sam Son; Lee, Soon Bok; Kim, Jong Bum; Lee, Hyeong Yeon; Yoo, Bong

    1998-01-01

    Sophisticated viscoplastic constitutive laws describing material behaviour at high temperature have been implemented in the general-purpose finite element code ABAQUS to predict the viscoplastic response of structures to cyclic loading. Because of the complexity of viscoplastic constitutive equations, general implementation methods are developed. The solution of the non-linear system of algebraic equations arising from time discretization is determined using line search and backtracking in combination with Newton's method. The time integration of the constitutive equations is based on a semi-implicit method with efficient time-step control. As numerical examples, the viscoplastic model proposed by Chaboche is implemented and several applications are illustrated
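
    The solution strategy the record describes (Newton's method with line search and backtracking on the time-discretized equations) can be sketched as follows; the residual and Jacobian here are a toy stand-in, not the Chaboche model.

        import numpy as np

        def newton_backtracking(residual, jacobian, x0, tol=1e-10, max_iter=50):
            """Solve R(x) = 0 by Newton's method with a backtracking line search."""
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                r = residual(x)
                if np.linalg.norm(r) < tol:
                    break
                dx = np.linalg.solve(jacobian(x), -r)    # full Newton step
                alpha = 1.0
                # Backtrack until the residual norm actually decreases.
                while (np.linalg.norm(residual(x + alpha * dx))
                       >= np.linalg.norm(r) and alpha > 1e-8):
                    alpha *= 0.5
                x = x + alpha * dx
            return x

        # Toy nonlinear system standing in for the discretized equations.
        R = lambda x: np.array([x[0]**3 - x[1], x[0] + x[1] - 2.0])
        J = lambda x: np.array([[3 * x[0]**2, -1.0], [1.0, 1.0]])
        print(newton_backtracking(R, J, x0=[2.0, 2.0]))  # converges to [1.0, 1.0]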

  7. Analysis of experiments of the University of Hannover with the Cathare code on fluid dynamic effects in the fuel element top nozzle area during refilling and reflooding

    International Nuclear Information System (INIS)

    Bestion, D.

    1989-11-01

    The CATHARE code is used to calculate the experiments of the University of Hannover concerning the flooding limit at the fuel element top nozzle area. Qualitative and quantitative observations are reported both on the actual fluid dynamics observed in the experiments and on the corresponding code behaviour. Shortcomings of the present models are clearly identified, and new developments are proposed which should extend the code capabilities

  8. ABAQUS-EPGEN: a general-purpose finite element code. Volume 3. Example problems manual

    International Nuclear Information System (INIS)

    Hibbitt, H.D.; Karlsson, B.I.; Sorensen, E.P.

    1983-03-01

    This volume is the Example and Verification Problems Manual for ABAQUS/EPGEN. Companion volumes are the User's, Theory and Systems Manuals. This volume contains two major parts. The bulk of the manual (Sections 1-8) contains worked examples that are discussed in detail, while Appendix A documents a large set of basic verification cases that provide the fundamental check of the elements in the code. The examples in Sections 1-8 illustrate and verify significant aspects of the program's capability. Most of these problems provide verification, but they have also been chosen to allow discussion of modeling and analysis techniques. Appendix A contains basic verification cases. Each of these cases verifies one element in the program's library. The verification consists of applying all possible load or flux types (including thermal loading of stress elements), and all possible foundation or film/radiation conditions, and checking the resulting force and stress solutions or flux and temperature results. This manual provides program verification. All of the problems described in the manual are run and the results checked, for each release of the program, and these verification results are made available

  9. A free surface algorithm in the N3S finite element code for turbulent flows

    International Nuclear Information System (INIS)

    Nitrosso, B.; Pot, G.; Abbes, B.; Bidot, T.

    1995-08-01

    In this paper, we present a free surface algorithm which was implemented in the N3S code. Free surfaces are represented by marker particles which move through a mesh. It is assumed that the free surface is located inside each element that contains markers and is surrounded by at least one element with no markers inside. The mesh is then locally adjusted to coincide with the free surface, which is well defined by the forefront marker particles. After describing the governing equations and the N3S solving methods, we present the free surface algorithm. Results obtained for two-dimensional and three-dimensional industrial problems of mould filling are presented. (authors). 5 refs., 2 figs
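
    The marker-based surface detection rule can be illustrated on a structured grid; this is a simplification of the N3S unstructured mesh, showing only the cell classification and not the local mesh adjustment.

        import numpy as np

        def classify_cells(markers, nx, ny, lx=1.0, ly=1.0):
            """Tag a cell as a free-surface cell when it contains markers
            while at least one neighbouring cell holds none."""
            counts = np.zeros((nx, ny), dtype=int)
            ix = np.clip((markers[:, 0] / lx * nx).astype(int), 0, nx - 1)
            iy = np.clip((markers[:, 1] / ly * ny).astype(int), 0, ny - 1)
            np.add.at(counts, (ix, iy), 1)
            surface = np.zeros((nx, ny), dtype=bool)
            for i in range(nx):
                for j in range(ny):
                    if counts[i, j] == 0:
                        continue
                    surface[i, j] = any(
                        0 <= a < nx and 0 <= b < ny and counts[a, b] == 0
                        for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)))
            return counts, surface

        # Half-filled cavity: markers occupy the lower half of a unit square.
        markers = np.random.rand(5000, 2) * np.array([1.0, 0.5])
        counts, surface = classify_cells(markers, nx=10, ny=10)
        print(surface.T[::-1])   # the surface band appears near y = 0.5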

  10. Energy and architecture. [Denmark]; Energi + arkitektur

    Energy Technology Data Exchange (ETDEWEB)

    Lehrskov, H. [Ingenioerhoejskolen i Aarhus, Aarhus (Denmark); Oehlenschlaeger, R. [AplusB, Aarhus (Denmark); Kappel, K. [Solar City Copenhagen, Copenhagen (Denmark); Kleis, B. [Arkitekturformidling.dk, Vanloese (Denmark); Klint, J. [Kuben Management, Aarhus (Denmark); Vejsig Pedersen, P. [Cenergia, Herlev (Denmark)

    2011-07-01

    The book presents the best examples of Danish energy-oriented architecture, focusing on architectural and energy measures in the integrated design process that result in architectural quality. It consists of two parts: an introduction to the challenges and tools of low-energy building, followed by a catalogue of a wide range of building projects in the categories of housing, business, education, institutions and sports. The book contains examples of new buildings that at minimum meet the requirements of building-code class LE1 (BR08). It also contains suggestions for renovation projects that meet LE2 (BR08), as energy optimization of the existing building stock is an imminent task with great constructional and aesthetic challenges. The selected projects were designed and built between 2009 and 2011 and include both everyday architecture, created under highly competitive economic conditions, and more exclusive development projects. The objectives of the projects are often more ambitious than the building code requires, and in many projects measurements were made to find out what works. These examples show that stricter energy requirements can serve as inspiration for holistic architecture and contribute to a paradigm shift in the cooperation between project parties. But it is also clear that a focused commitment from all players in the construction industry - clients, consultants, contractors and building-product manufacturers - is required to reduce energy consumption significantly. (LN)

  11. MT-ADRES: Multithreading on Coarse-Grained Reconfigurable Architecture

    DEFF Research Database (Denmark)

    Wu, Kehuai; Kanstein, Andreas; Madsen, Jan

    2007-01-01

    The coarse-grained reconfigurable architecture ADRES (Architecture for Dynamically Reconfigurable Embedded Systems) and its compiler offer high instruction-level parallelism (ILP) to applications by means of a sparsely interconnected array of functional units and register files. As high-ILP architectures achieve only low parallelism when executing partially sequential code segments, which is also known as Amdahl’s law, this paper proposes to extend ADRES to MT-ADRES (Multi-Threaded ADRES) to also exploit thread-level parallelism. On MT-ADRES architectures, the array can be partitioned in multiple...

  12. A Framework for Reverse Engineering Large C++ Code Bases

    NARCIS (Netherlands)

    Telea, Alexandru; Byelas, Heorhiy; Voinea, Lucian

    2009-01-01

    When assessing the quality and maintainability of large C++ code bases, tools are needed for extracting several facts from the source code, such as: architecture, structure, code smells, and quality metrics. Moreover, these facts should be presented in such a way that one can correlate them and

  13. A Framework for Reverse Engineering Large C++ Code Bases

    NARCIS (Netherlands)

    Telea, Alexandru; Byelas, Heorhiy; Voinea, Lucian

    2008-01-01

    When assessing the quality and maintainability of large C++ code bases, tools are needed for extracting several facts from the source code, such as: architecture, structure, code smells, and quality metrics. Moreover, these facts should be presented in such a way that one can correlate them and

  14. ArchE - An Architecture Design Assistant

    Science.gov (United States)

    2007-08-02

    ArchE - An Architecture Design Assistant. Len Bass, August 2, 2007. ArchE is a software architecture design assistant, which takes quality and

  15. Performance evaluation of scientific programs on advanced architecture computers

    International Nuclear Information System (INIS)

    Walker, D.W.; Messina, P.; Baille, C.F.

    1988-01-01

    Recently a number of advanced architecture machines have become commercially available. These new machines promise better cost-performance than traditional computers, and some of them have the potential of competing with current supercomputers, such as the Cray X-MP, in terms of maximum performance. This paper describes an on-going project to evaluate a broad range of advanced architecture computers using a number of complete scientific application programs. The computers to be evaluated include distributed-memory machines such as the NCUBE, INTEL and Caltech/JPL hypercubes and the MEIKO computing surface; shared-memory, bus-architecture machines such as the Sequent Balance and the Alliant; very long instruction word machines such as the Multiflow Trace 7/200; traditional supercomputers such as the Cray X-MP and Cray-2; and SIMD machines such as the Connection Machine. Currently 11 application codes from a number of scientific disciplines have been selected, although it is not intended to run all codes on all machines. Results are presented for two of the codes (QCD and missile tracking), and future work is proposed.

  16. Enterprise Architecture Evaluation

    DEFF Research Database (Denmark)

    Andersen, Peter; Carugati, Andrea

    2014-01-01

    By being holistically preoccupied with coherency among organizational elements such as organizational strategy, business needs and the IT function's role in supporting the business, enterprise architecture (EA) has grown to become a core competitive advantage. Though EA is a maturing research area...

  17. The application of diagrams in architectural design

    Directory of Open Access Journals (Sweden)

    Dulić Olivera

    2014-01-01

    Full Text Available Diagrams in architecture represent the visualization of the thinking process, or a selective abstraction of concepts or ideas translated into the form of drawings. In addition, they provide insight into ways of thinking about and in architecture, thus creating a balance between the visual and the conceptual. The subject of the research presented in this paper is diagrams as a specific kind of architectural representation, and the possibilities and importance of their application in the design process. Diagrams are almost as old as architecture itself, and they are an element of some of the most important studies of architecture throughout history - which results in a large number of different definitions of diagrams, but also very different conceptualizations of their features, functions and applications. Diagrams became part of contemporary architectural discourse during the eighties and nineties of the twentieth century, especially through the work of architects such as Bernard Tschumi, Peter Eisenman, Rem Koolhaas, SANAA and others. The use of diagrams in the design process allows the unification of some essential aspects of the profession: architectural representation and the design process, as well as the question of the concept of architectural and urban design at a time of rapid change at all levels of contemporary society. The aim of the research is to analyse the diagram as a specific medium for processing the large amounts of information that the architect must consider and incorporate into the architectural work. On that basis, it is assumed that an architectural diagram allows its creator to identify and analyse specific elements or ideas of physical form, thereby constantly maintaining the integrity of the architectural work as a concept.

  18. The computer code EURDYN - 1 M (release 1) for transient dynamic fluid-structure interaction. Pt.1: governing equations and finite element modelling

    International Nuclear Information System (INIS)

    Donea, J.; Fasoli-Stella, P.; Giuliani, S.; Halleux, J.P.; Jones, A.V.

    1980-01-01

    This report describes the governing equations and the finite element modelling used in the computer code EURDYN - 1 M. The code is a non-linear transient dynamic program for the analysis of coupled fluid-structure systems; it is designed for safety studies on LMFBR components (primary containment and fuel subassemblies)

  19. The Simulation Intranet Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Holmes, V.P.; Linebarger, J.M.; Miller, D.J.; Vandewart, R.L.

    1998-12-02

    The Simulation Intranet (SI) is a term which is being used to describe one element of a multidisciplinary distributed and distance computing initiative known as DisCom2 at Sandia National Laboratory. The Simulation Intranet is an architecture for satisfying Sandia's long-term goal of providing an end-to-end set of services for high fidelity full physics simulations in a high performance, distributed, and distance computing environment. The Intranet Architecture group was formed to apply current distributed object technologies to this problem. For the hardware architectures and software models involved with the current simulation process, a CORBA-based architecture is best suited to meet Sandia's needs. This paper presents the initial design and implementation of this Intranet based on a three-tier Network Computing Architecture (NCA). The major parts of the architecture include: the Web Client, the Business Objects, and Data Persistence.

  20. From structure from motion to historical building information modeling: populating a semantic-aware library of architectural elements

    Science.gov (United States)

    Santagati, Cettina; Lo Turco, Massimiliano

    2017-01-01

    In recent years, we have witnessed a huge diffusion of building information modeling (BIM) approaches in the field of architectural design, although very little research has been undertaken to explore the value, criticalities, and advantages attributable to the application of these methodologies in the cultural heritage domain. Furthermore, the latest developments in digital photogrammetry allow the easy generation of reliable low-cost three-dimensional textured models that could be used in BIM platforms to create semantic-aware objects composing a specific library of historical architectural elements. In this case, the transfer between the point cloud and its corresponding parametric model is not trivial, and the level of geometrical abstraction may not suit the scope of the BIM. The aim of this paper is to explore and retrace the milestone works on this crucial topic in order to identify the unsolved issues, and to propose and test a simple, practitioner-centred workflow based on the use of the latest available solutions for managing point clouds in commercial BIM platforms.

  1. Development of dynamic explicit crystallographic homogenization finite element analysis code to assess sheet metal formability

    International Nuclear Information System (INIS)

    Nakamura, Yasunori; Tam, Nguyen Ngoc; Ohata, Tomiso; Morita, Kiminori; Nakamachi, Eiji

    2004-01-01

    The crystallographic texture evolution induced by plastic deformation in the sheet metal forming process has a great influence on formability. In the present study, a dynamic explicit finite element (FE) analysis code is newly developed by introducing a crystallographic homogenization method to estimate polycrystalline sheet metal formability, such as extreme thinning and 'earing'. The code can predict the plastic-deformation-induced texture evolution at the micro scale and the plastic anisotropy at the macro scale simultaneously. This multi-scale analysis couples the microscopic inhomogeneous crystal-plasticity deformation with the macroscopic continuum deformation. In this homogenization process, the stress at the macro scale is defined by the volume average of the stresses of the corresponding microscopic crystal aggregations, satisfying the equation of motion and the compatibility condition in the microscopic 'unit cell', where periodicity of deformation holds. This homogenization algorithm is implemented in a conventional dynamic explicit finite element code employing the updated Lagrangian formulation and a rate-type elastic/viscoplastic constitutive equation. First, texture evolution analyses for typical deformation modes confirm that Taylor's 'constant strain homogenization algorithm' yields extreme concentration toward the preferred crystal orientations compared with the present homogenization. Second, we study the effects of plastic anisotropy on 'earing' in the hemispherical-cup deep drawing of a pure ferrite-phase sheet metal. Comparison of the analytical results with those under Taylor's assumption leads to the conclusion that the newly developed dynamic explicit crystallographic homogenization FEM gives a more reasonable prediction of deformation-induced texture evolution and plastic anisotropy at the macro scale
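
    The homogenization step described, macroscopic stress as the volume average of the microscopic crystal stresses over the unit cell, reduces to a weighted tensor average; a minimal sketch with hypothetical two-grain data follows.

        import numpy as np

        def macro_stress(micro_stresses, volumes):
            """Macroscopic stress as the volume average of microscopic
            crystal stresses over the unit cell.
            micro_stresses: (n, 3, 3) stress tensors; volumes: (n,)."""
            v = np.asarray(volumes, dtype=float)
            return np.einsum('n,nij->ij', v / v.sum(),
                             np.asarray(micro_stresses))

        # Two-grain toy cell with uniaxial stresses of different magnitudes.
        s1 = np.diag([100.0, 0.0, 0.0])
        s2 = np.diag([300.0, 0.0, 0.0])
        print(macro_stress([s1, s2], volumes=[2.0, 1.0]))  # xx component ~166.7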

  2. Exploring Heterogeneous Multicore Architectures for Advanced Embedded Uncertainty Quantification.

    Energy Technology Data Exchange (ETDEWEB)

    Phipps, Eric T.; Edwards, Harold C.; Hu, Jonathan J.

    2014-09-01

    We explore rearrangements of classical uncertainty quantification methods with the aim of achieving higher aggregate performance for uncertainty quantification calculations on emerging multicore and manycore architectures. We show that a rearrangement of the stochastic Galerkin method leads to improved performance and scalability on several computational architectures, whereby uncertainty information is propagated at the lowest levels of the simulation code, improving memory access patterns, exposing new dimensions of fine-grained parallelism, and reducing communication. We also develop a general framework for implementing such rearrangements for a diverse set of uncertainty quantification algorithms as well as the computational simulation codes to which they are applied.
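
    The idea of propagating uncertainty at the lowest levels of the simulation can be illustrated by carrying a whole sample ensemble through a deterministic kernel as one vector operation; this is a simplified stand-in for the report's stochastic Galerkin rearrangement, with a hypothetical heat-flux kernel.

        import numpy as np

        def heat_flux(k, dT, dx):
            """Deterministic kernel: Fourier heat flux through a slab."""
            return -k * dT / dx

        # The uncertain conductivity is carried as a whole sample ensemble
        # through the lowest level of the computation, turning an outer loop
        # over independent runs into unit-stride vector arithmetic.
        k_samples = np.random.lognormal(mean=0.0, sigma=0.1, size=1024)
        flux = heat_flux(k_samples, dT=10.0, dx=0.01)
        print(flux.mean(), flux.std())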

  3. KIN SP: A boundary element method based code for single pile kinematic bending in layered soil

    Directory of Open Access Journals (Sweden)

    Stefano Stacul

    2018-02-01

    Full Text Available In high-seismicity areas, it is important to consider kinematic effects to properly design pile foundations. Kinematic effects are due to the interaction between pile and soil deformations induced by seismic waves. One of these effects is the rise of significant strains in weak soils that induce bending moments on piles; these moments can be significant in the presence of a high stiffness contrast within a soil deposit. The single-pile kinematic interaction problem is generally solved with beam-on-dynamic-Winkler-foundation (BDWF) approaches or using continuum models. In this work, a new boundary element method (BEM) based computer code (KIN SP) is presented, in which the kinematic analysis is preceded by a free-field response analysis. The results of this method, in terms of bending moments at the pile head and at the interface of a two-layered soil, are influenced by many factors, including the soil-pile interface discretization. A parametric study is presented with the aim of suggesting the minimum number of boundary elements needed to guarantee the accuracy of a BEM solution, for typical pile-soil relative stiffness values, as a function of the pile diameter, the location of the interface of the two-layered soil and the stiffness contrast. KIN SP results have been compared with simplified solutions from the literature and with those obtained using a quasi-three-dimensional (3D) finite element code.

  4. Temporal Architecture: Poetic Dwelling in Japanese buildings

    Directory of Open Access Journals (Sweden)

    Michael Lazarin

    2014-07-01

    Full Text Available Heidegger’s thinking about poetic dwelling and Derrida’s impressions of Freudian estrangement are employed to provide a constitutional analysis of the experience of Japanese architecture, in particular the Japanese vestibule (genkan). This analysis is supplemented by writings by Japanese architects and poets. The principal elements of Japanese architecture are (1) ma and (2) en. Ma is usually translated as 'interval' because, like the English word, it applies to both space and time. However, in Japanese thinking, it is not so much an either/or, but rather a both/and. In other words, Japanese architecture emphasises the temporal aspect of dwelling in a way that Western architectural thinking usually does not. En means 'joint, edge, the in-between': an ambiguous, often asymmetrical spanning of interior and exterior, rather than a demarcation of these regions. Both elements are aimed at producing an experience of temporality and transiency.

  5. Look and Do Ancient Greece. Teacher's Manual: Primary Program, Greek Art & Architecture [and] Workbook: The Art and Architecture of Ancient Greece [and] K-4 Videotape. History through Art and Architecture.

    Science.gov (United States)

    Luce, Ann Campbell

    This resource, containing a teacher's manual, reproducible student workbook, and a color teaching poster, is designed to accompany a 21-minute videotape program, but may be adapted for independent use. Part 1 of the program, "Greek Architecture," looks at elements of architectural construction as applied to Greek structures, and…

  6. User's manual for the FEHM application - A finite-element heat- and mass-transfer code

    International Nuclear Information System (INIS)

    Zyvoloski, G.A.; Robinson, B.A.; Dash, Z.V.; Trease, L.L.

    1997-07-01

    The use of this code is applicable to natural-state studies of geothermal systems and groundwater flow. A primary use of the FEHM application will be to assist in the understanding of flow fields and mass transport in the saturated and unsaturated zones below the proposed Yucca Mountain nuclear waste repository in Nevada. The equations of heat and mass transfer for multiphase flow in porous and permeable media are solved in the FEHM application by using the finite-element method. The permeability and porosity of the medium are allowed to depend on pressure and temperature. The code also has provisions for movable air and water phases and noncoupled tracers; that is, tracer solutions that do not affect the heat- and mass-transfer solutions. The tracers can be passive or reactive. The code can simulate two-dimensional, two-dimensional radial, or three-dimensional geometries. In fact, FEHM is capable of describing flow that is dominated in many areas by fracture and fault flow, including the inherently three-dimensional flow that results from permeation to and from faults and fractures. The code can handle coupled heat and mass-transfer effects, such as boiling, dryout, and condensation that can occur in the near-field region surrounding the potential repository and the natural convection that occurs through Yucca Mountain due to seasonal temperature changes. This report outlines the uses and capabilities of the FEHM application, initialization of code variables, restart procedures, and error processing. The report describes all the data files, the input data, including individual input records or parameters, and the various output files. The system interface is described, including the software environment and installation instructions

  7. A design of a wavelength-hopping time-spreading incoherent optical code division multiple access system

    International Nuclear Information System (INIS)

    Glesk, I.; Baby, V.

    2005-01-01

    We present the architecture and code design for a highly scalable, 2.5 Gb/s-per-user optical code division multiple access (OCDMA) system. The system is scalable to 100 potential and more than 10 simultaneous users, each with a bit error rate (BER) of less than 10^-9. The system architecture uses fast wavelength-hopping, time-spreading codes. Unlike frequency- and phase-sensitive coherent OCDMA systems, this architecture utilizes standard on-off-keyed optical pulses allocated in the time and wavelength dimensions. This incoherent OCDMA approach is compatible with existing WDM optical networks and utilizes off-the-shelf components. We discuss the novel optical subsystem design for encoders and decoders that enables the realization of a highly scalable incoherent OCDMA system with rapid reconfigurability. A detailed analysis of the scalability of the two-dimensional code is presented, and select network deployment architectures for OCDMA are discussed (Authors)
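
    One classical wavelength-hopping construction over a prime field conveys the flavour of such two-dimensional (wavelength-time) codes; it is shown here only as a generic example and is not necessarily the code family designed in the paper.

        def prime_hop_pattern(user, p):
            """Hopping pattern for one user: in time slot j the pulse sits on
            wavelength (user * j) mod p. Distinct non-zero users collide in at
            most one slot, which bounds the multiple-access interference."""
            return [(user * j) % p for j in range(p)]

        p = 7                      # 7 wavelengths x 7 time slots
        for user in range(1, 4):
            print(f"user {user}: {prime_hop_pattern(user, p)}")

        # Cross-correlation check: two distinct users share at most one slot.
        a, b = prime_hop_pattern(1, p), prime_hop_pattern(2, p)
        print("collisions:", sum(x == y for x, y in zip(a, b)))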

  8. Cryogenic Pupil Alignment Test Architecture for Aberrated Pupil Images

    Science.gov (United States)

    Bos, Brent; Kubalak, David A.; Antonille, Scott; Ohl, Raymond; Hagopian, John G.

    2009-01-01

    A document describes the cryogenic test architecture for the James Webb Space Telescope (JWST) integrated science instrument module (ISIM). The ISIM element primarily consists of a mechanical metering structure, three science instruments, and a fine guidance sensor. One of the critical optomechanical alignments is the co-registration of the optical telescope element (OTE) exit pupil with the entrance pupils of the ISIM instruments. The test architecture has been developed to verify that the ISIM element will be properly aligned with the nominal OTE exit pupil when the two elements come together. The architecture measures three of the most critical pupil degrees of freedom during optical testing of the ISIM element. The pupil measurement scheme makes use of specularly reflective pupil alignment references located inside the JWST instruments, ground support equipment that contains a pupil imaging module, an OTE simulator, and pupil-viewing channels in two of the JWST flight instruments. Pupil alignment references (PARs) are introduced into the instrument, and their reflections are checked using the instrument's mirrors. After the pupil imaging module (PIM) captures a reflected PAR image, the image is analyzed to determine the relative alignment offset. The instrument pupil alignment references are specularly reflective mirrors with non-reflective fiducials, which makes the test architecture feasible. The instrument channels have fairly large fields of view, allowing PAR tip/tilt tolerances on the order of 0.5deg.

  9. A code for quantitative analysis of light elements in thick samples by PIGE

    International Nuclear Information System (INIS)

    Mateus, R.; Jesus, A.P.; Ribeiro, J.P.

    2005-01-01

    This work presents a code developed for the quantitative analysis of light elements in thick samples by PIGE. The new method avoids the use of standards in the analysis, using a formalism similar to that used for PIXE analysis, where the excitation function of the nuclear reaction related to the gamma-ray emission is integrated along the depth of the sample. In order to check the validity of the code, we present results for the analysis of lithium, boron, fluorine and sodium in thick samples. For this purpose, the experimental values of the excitation functions of the reactions 7Li(p,p'γ)7Li, 10B(p,αγ)7Be, 19F(p,p'γ)19F and 23Na(p,p'γ)23Na were used as input. For stopping-power cross-section calculations, the semi-empirical equations of Ziegler et al. and Bragg's rule were used. Agreement between the experimental and calculated gamma-ray yields was always better than 7.5%
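
    The standard-free formalism described amounts to converting the depth integral of the excitation function into an energy integral through the stopping power; a numerical sketch with hypothetical fits for sigma(E) and S(E) follows.

        import numpy as np

        def gamma_yield(sigma, stopping_power, e_beam, e_min, n=2000):
            """Thick-target gamma yield (arbitrary units): integrating the
            excitation function over depth and substituting dE/dx = S(E)
            gives Y ~ integral of sigma(E)/S(E) dE up to the beam energy."""
            e = np.linspace(e_min, e_beam, n)
            return np.trapz(sigma(e) / stopping_power(e), e)

        # Hypothetical smooth fits standing in for the measured data.
        sigma = lambda e: 0.05 * np.exp(-((e - 2.0) / 0.5) ** 2)  # barn
        S = lambda e: 120.0 / np.sqrt(e)                          # MeV/cm
        print(gamma_yield(sigma, S, e_beam=2.4, e_min=1.0))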

  10. Advances in Architectural Elements For Future Missions to Titan

    Science.gov (United States)

    Reh, Kim; Coustenis, Athena; Lunine, Jonathan; Matson, Dennis; Lebreton, Jean-Pierre; Vargas, Andre; Beauchamp, Pat; Spilker, Tom; Strange, Nathan; Elliott, John

    2010-05-01

    The future exploration of Titan is of high priority for the solar system exploration community, as recommended by the 2003 National Research Council (NRC) Decadal Survey [1] and ESA's Cosmic Vision Program themes. Recent Cassini-Huygens discoveries continue to emphasize that Titan is a complex world with very many Earth-like features. Titan has a dense nitrogen atmosphere, an active climate and meteorological cycles where conditions are such that the working fluid, methane, plays the role that water does on Earth. Titan's surface, with lakes and seas, broad river valleys, sand dunes and mountains, was formed by processes like those that have shaped the Earth. Supporting this panoply of Earth-like processes is an ice crust that floats atop what might be a liquid water ocean. Furthermore, Titan is rich in very many different organic compounds, more so than any place in the solar system except Earth. The Titan Saturn System Mission (TSSM) concept that followed the 2007 TandEM ESA CV proposal [2] and the 2007 Titan Explorer NASA Flagship study [3] was examined [4,5] and prioritized by NASA and ESA in February 2009 as a mission to follow the Europa Jupiter System Mission. The TSSM study, like others before it, again concluded that an orbiter, a montgolfiere hot-air balloon and a surface package (e.g. lake lander, Geosaucer (instrumented heat shield), …) are very high priority elements for any future mission to Titan. Such missions could be conceived as Flagship/Cosmic Vision L-Class or as individual smaller missions that could possibly fit into NASA New Frontiers or ESA Cosmic Vision M-Class budgets. As a result of a multitude of Titan mission studies, a clear blueprint has been laid out for the work needed to reduce the risks inherent in such missions, and the areas where advances would benefit elements critical to future Titan missions have been identified. The purpose of this paper is to provide a brief overview of the flagship mission architecture and

  11. Constructional Efficiency in Al_Ahwaar Traditional Architecture

    Directory of Open Access Journals (Sweden)

    Usama Abdul-Mun'em Khuraibet

    2016-03-01

    Full Text Available Constructional efficiency in architecture is one of the most important measures of success for any structure, and a gauge of its continuity and relevance across time and space. The Al-Ahwaar environment, with its spatial, environmental, economic and social elements, has had a prominent impact on the creation of architectural patterns, producing a distinctive architectural and structural environment whose many qualities and ingredients contributed to its continuity and existence over the years. Since man and his environment are the main concern of any architectural style, the research problem focuses on the lack of clarity in previous literature regarding the role of the architectural styles of Al-Ahwaar in achieving constructional efficiency; despite the large number of studies on Al-Ahwaar architecture, most are marked by vagueness and a lack of attention to constructional and technical aspects. The research goal is therefore to clarify the impact of the techniques used in the formation of traditional Al-Ahwaar architecture in order to achieve constructional efficiency in its various aspects: technical, material, economic, and expressional. It is assumed that achieving constructional efficiency in traditional Al-Ahwaar architecture depends on the characteristics and elements that contributed to the continuity of its patterns across time. The research applies an analytical method to a model of traditional Al-Ahwaar architecture to reach these goals, as the study of these components aims to deepen the designer's understanding of the requirements of each component so that they integrate rather than conflict, during and after the design process, and emerge as a creative architectural product. Keywords: Al-Ahwaar architecture, constructional efficiency, technical and material

  12. MT-ADRES: multi-threading on coarse-grained reconfigurable architecture

    DEFF Research Database (Denmark)

    Wu, Kehuai; Kanstein, Andreas; Madsen, Jan

    2008-01-01

    The coarse-grained reconfigurable architecture ADRES (architecture for dynamically reconfigurable embedded systems) and its compiler offer high instruction-level parallelism (ILP) to applications by means of a sparsely interconnected array of functional units and register files. As high-ILP architectures achieve only low parallelism when executing partially sequential code segments, which is also known as Amdahl's law, this article proposes to extend ADRES to MT-ADRES (multi-threaded ADRES) to also exploit thread-level parallelism. On MT-ADRES architectures, the array can be partitioned...

  13. Converter of a continuous code into the Grey code

    International Nuclear Information System (INIS)

    Gonchar, A.I.; TrUbnikov, V.R.

    1979-01-01

    Described is a converter of a continuous code into the Grey code, used in a 12-bit precision amplitude-to-digital converter to decrease the digital component of the spectrometer differential nonlinearity to ±0.7% over 98% of the measured range. To convert a continuous code corresponding to the input signal amplitude into the Grey code, the converter exploits the regular alternation of ones and zeroes in each bit of the Grey code as the count of continuous-code pulses changes continuously. The converter is built from elements of the 155 series; the continuous-code pulse rate at the converter input is 25 MHz
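
    The underlying binary-to-Grey (Gray) mapping, in which adjacent counts differ in exactly one bit, is compactly expressed in software, although the record's converter operates on pulse trains in 155-series logic rather than on stored words.

        def binary_to_gray(n):
            """Grey code of n: adjacent counts differ in exactly one bit."""
            return n ^ (n >> 1)

        def gray_to_binary(g):
            """Inverse mapping by cascading XORs down the bit positions."""
            n = 0
            while g:
                n ^= g
                g >>= 1
            return n

        for n in range(6):                      # single-bit transitions
            print(f"{n:04b} -> {binary_to_gray(n):04b}")
        assert all(gray_to_binary(binary_to_gray(n)) == n
                   for n in range(1 << 12))     # full 12-bit range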

  14. OpenCL code generation for low energy wide SIMD architectures with explicit datapath.

    NARCIS (Netherlands)

    She, D.; He, Y.; Waeijen, L.J.W.; Corporaal, H.; Jeschke, H.; Silvén, O.

    2013-01-01

    Energy efficiency is one of the most important aspects in designing embedded processors. The use of a wide SIMD processor architecture is a promising approach to building energy-efficient, high-performance embedded processors. In this paper, we propose a configurable wide SIMD architecture that utilizes

  15. FINEDAN - an explicit finite-element calculation code for two-dimensional analyses of fast dynamic transients in nuclear reactor technology

    International Nuclear Information System (INIS)

    Adamik, V.; Matejovic, P.

    1989-01-01

    Problems of non-stationary, non-linear continuum dynamics are discussed, and a survey of calculation methods in this area is presented, with emphasis on impact problems. The explicit finite element method and its application to two-dimensional Cartesian and cylindrical configurations are described. Using this method, the explicit calculation code FINEDAN was written and tested in a series of verification calculations for different configurations and different types of continuum. The main characteristics of the code and some of its practical applications are presented, together with envisaged trends of its development and its possible applications in nuclear reactor technology. (author). 9 figs., 4 tabs., 10 refs
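
    Explicit finite element codes of this kind typically advance the solution with a central-difference (leapfrog) scheme and a lumped mass matrix; a minimal sketch on a two-degree-of-freedom system follows, illustrating the scheme rather than the FINEDAN implementation itself.

        import numpy as np

        def explicit_dynamics(m, k, u0, v0, dt, n_steps, f_ext=None):
            """Central-difference time integration for M a + K u = f.
            A lumped (diagonal) mass matrix m makes each step a cheap vector
            update; dt must stay below the stability limit ~ 2/omega_max."""
            u, v = np.array(u0, float), np.array(v0, float)
            f_ext = np.zeros_like(u) if f_ext is None else f_ext
            a = (f_ext - k @ u) / m              # initial acceleration
            for _ in range(n_steps):
                v += 0.5 * dt * a                # half-step velocity
                u += dt * v                      # full-step displacement
                a = (f_ext - k @ u) / m          # new acceleration
                v += 0.5 * dt * a                # complete the velocity step
            return u, v

        # Two-DOF spring chain released from a stretched state.
        m = np.array([1.0, 1.0])
        k = np.array([[2.0, -1.0], [-1.0, 1.0]])
        u, v = explicit_dynamics(m, k, u0=[0.0, 0.1], v0=[0.0, 0.0],
                                 dt=0.01, n_steps=1000)
        print(u, v)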

  16. Parallel Finite Element Particle-In-Cell Code for Simulations of Space-charge Dominated Beam-Cavity Interactions

    International Nuclear Information System (INIS)

    Candel, A.; Kabel, A.; Ko, K.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.

    2007-01-01

    Over the past years, SLAC's Advanced Computations Department (ACD) has developed the parallel finite element (FE) particle-in-cell codes Pic3P and Pic2P for simulations of beam-cavity interactions dominated by space-charge effects. As opposed to standard space-charge-dominated beam transport codes, which are based on the electrostatic approximation, Pic3P and Pic2P include space-charge, retardation and boundary effects, self-consistently solving the complete set of Maxwell-Lorentz equations using higher-order FE methods on conformal meshes. The use of efficient, large-scale parallel processing allows the modeling of photoinjectors with unprecedented accuracy, aiding the design and operation of the next generation of accelerator facilities. Applications to the Linac Coherent Light Source (LCLS) RF gun are presented
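
    The generic particle-in-cell cycle behind such codes (scatter charge to the mesh, solve for fields, gather, push) can be sketched in one dimension; note that a periodic electrostatic solve stands in here for Pic3P's full finite-element Maxwell-Lorentz solver with retardation and boundary effects.

        import numpy as np

        def pic_step(x, v, q_m, n_cells, dt, length):
            """One generic PIC cycle: scatter, field solve, gather, push."""
            dx = length / n_cells
            # Scatter: nearest-grid-point charge deposition.
            idx = (x / dx).astype(int) % n_cells
            rho = np.bincount(idx, minlength=n_cells) / dx
            rho = rho - rho.mean()               # neutralizing background
            # Field solve: dE/dx = rho (1D Gauss's law), via FFT.
            k = 2 * np.pi * np.fft.fftfreq(n_cells, d=dx)
            k[0] = 1.0                           # avoid divide-by-zero mode
            E = np.real(np.fft.ifft(np.fft.fft(rho) / (1j * k)))
            # Gather and push (leapfrog).
            v = v + q_m * E[idx] * dt
            x = (x + v * dt) % length
            return x, v

        rng = np.random.default_rng(1)
        x, v = rng.random(10000), rng.normal(0.0, 0.01, 10000)
        x, v = pic_step(x, v, q_m=-1.0, n_cells=64, dt=0.05, length=1.0)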

  17. Minimalism in architecture: Abstract conceptualization of architecture

    Directory of Open Access Journals (Sweden)

    Vasilski Dragana

    2015-01-01

    Full Text Available Minimalism in architecture contains the idea of the minimum as a leading creative trend, to be considered and interpreted through the phenomena of empathy and abstraction. In Western culture, the root of this idea is found in the empathy of Wilhelm Worringer and the abstraction of Kasimir Malevich. In his dissertation 'Abstraction and Empathy', Worringer presented his thesis on the psychology of style, through which he explained the two opposing basic forms: abstraction and empathy. His conclusion on empathy as a psychological basis of the observation of expression is significant due to its verbal congruence with contemporary minimalist expression. His intuition was furthermore enhanced by the figure of Malevich. Abstraction, as an expression of inner unfettered inspiration, played a crucial role in the development of modern art and architecture in the twentieth century. Abstraction, one of the basic methods of learning in psychology (separating the relevant from the irrelevant, Carl Jung), is used to discover ideas. Minimalism in architecture emphasizes the level of abstraction to which the individual functions are reduced. Different types of abstraction are present, in the form as well as the function of the basic elements: walls and windows. The case study is the example of Sou Fujimoto, who is unequivocal in his commitment to the autonomy of abstract conceptualization of architecture.

  18. Layered Architecture for Quantum Computing

    Directory of Open Access Journals (Sweden)

    N. Cody Jones

    2012-07-01

    Full Text Available We develop a layered quantum-computer architecture, which is a systematic framework for tackling the individual challenges of developing a quantum computer while constructing a cohesive device design. We discuss many of the prominent techniques for implementing circuit-model quantum computing and introduce several new methods, with an emphasis on employing surface-code quantum error correction. In doing so, we propose a new quantum-computer architecture based on optical control of quantum dots. The time scales of physical-hardware operations and logical, error-corrected quantum gates differ by several orders of magnitude. By dividing functionality into layers, we can design and analyze subsystems independently, demonstrating the value of our layered architectural approach. Using this concrete hardware platform, we provide resource analysis for executing fault-tolerant quantum algorithms for integer factoring and quantum simulation, finding that the quantum-dot architecture we study could solve such problems on the time scale of days.

  19. An Evaluation of Automated Code Generation with the PetriCode Approach

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge

    2014-01-01

    Automated code generation is an important element of model driven development methodologies. We have previously proposed an approach for code generation based on Coloured Petri Net models annotated with textual pragmatics for the network protocol domain. In this paper, we present and evaluate three important properties of our approach: platform independence, code integratability, and code readability. The evaluation shows that our approach can generate code for a wide range of platforms which is integratable and readable.

  20. Description and applicability of the BEFEM-CODE

    Energy Technology Data Exchange (ETDEWEB)

    Groth, T.

    1980-05-15

    The BEFEM-CODE, developed for rock mechanics problems in hard, jointed rock, is a simple FEM code built from triangular and quadrilateral elements. As an option, a joint element of the Goodman type may be used. The Cook-Pian quadrilateral stress-hybrid element was introduced into the version of the code used for the Naesliden project, replacing the constant-stress quadrilateral elements. This hybrid element, derived with assumed stress distributions, simplifies the excavation process in non-linear models. The shear behaviour of the Goodman 1976 joint element has been replaced by Goodman's 1968 formulation. This element makes it possible to take dilation into account, but it was not considered necessary to use dilation to simulate proper joint behaviour in the Naesliden project. The code uses Barton's shear-strength criterion. Excessive nodal forces due to failure and non-linearities in the joint elements are redistributed with stress-transfer iterations. Convergence can be speeded up by dividing each excavation sequence into several load steps in which the stiffness matrix is recalculated.

  1. The application of Malay wood carving on contemporary architecture in Malaysia

    Directory of Open Access Journals (Sweden)

    Nila Inangda Manyam Keumala Daud

    2012-12-01

    Full Text Available Malay wood carving has been identified as one of the most important elements in traditional Malay architecture. The application of this special element has its own philosophy, and its purpose was to enrich the architectural character and values. Concerns have recently been raised that the application of Malay wood carving in contemporary architecture in Malaysia has ignored its unique original concept and philosophy. This indicates that the development of Malay wood carving requires attention and encouragement in order to catch up with the rapid development of architecture in Malaysia, and to overcome the problem of introducing more meaningful Malay wood carving into contemporary buildings. Research has been conducted to investigate the thread of development and application of Malay wood carving in contemporary architecture, based on a case study of a five-star hotel in Kuala Lumpur.

  2. Sending policies in dynamic wireless mesh using network coding

    DEFF Research Database (Denmark)

    Pandi, Sreekrishna; Fitzek, Frank; Pihl, Jeppe

    2015-01-01

    This paper demonstrates the quick prototyping capabilities of the Python-Kodo library for network-coding-based performance evaluation and investigates the problem of data redundancy in a network-coded wireless mesh with opportunistic overhearing. By means of several wireless meshed architectures ...

  3. Investigation of coolant thermal mixing within 28-element CANDU fuel bundles using the ASSERT-PV thermal hydraulics code

    International Nuclear Information System (INIS)

    Lightston, M.F.; Rock, R.

    1996-01-01

    This paper presents the results of a study of the thermal mixing of single-phase coolant in 28-element CANDU fuel bundles under steady-state conditions. The study, based on simulations performed using the ASSERT-PV thermal hydraulics code, consists of two main parts. In the first part, the various physical mechanisms that contribute to coolant mixing are identified and their impact is isolated via ASSERT-PV simulations. The second part is concerned with the development of a preliminary model, suitable for use in the fuel and fuel channel code FACTAR, to predict the thermal mixing that occurs between flow annuli. (author)

  4. A Reference Architecture for Space Information Management

    Science.gov (United States)

    Mattmann, Chris A.; Crichton, Daniel J.; Hughes, J. Steven; Ramirez, Paul M.; Berrios, Daniel C.

    2006-01-01

    We describe a reference architecture for space information management systems that elegantly overcomes the rigid design of common information systems in many domains. The reference architecture consists of a set of flexible, reusable, independent models and software components that function in unison but remain separately managed entities. The main guiding principle of the reference architecture is to separate the various models of information (e.g., data, metadata, etc.) from implemented system code, allowing each to evolve independently. System modularity, systems interoperability, and dynamic evolution of information system components are the primary benefits of the design of the architecture. The architecture requires the use of information models that are substantially more advanced than those used by the vast majority of information systems. These models are more expressive and can be more easily modularized, distributed and maintained than simpler models, e.g., configuration files and data dictionaries. Our current work focuses on formalizing the architecture within a CCSDS Green Book and evaluating the architecture within the context of the C3I initiative.

  5. The Walk-Man Robot Software Architecture

    OpenAIRE

    Mirko Ferrati; Alessandro Settimi; Alessandro Settimi; Luca Muratore; Alberto Cardellino; Alessio Rocchi; Enrico Mingo Hoffman; Corrado Pavan; Dimitrios Kanoulas; Nikos G. Tsagarakis; Lorenzo Natale; Lucia Pallottino

    2016-01-01

    A software and control architecture for a humanoid robot is a complex and large project, which requires a coordinated team of developers/researchers and involves many hard design choices. If such a project has to be completed in a very limited time, i.e., less than 1 year, more constraints are added, and concepts such as modular design, code reusability, and API definition need to be used as much as possible. In this work, we describe the software architecture developed for Walk-Man, a robot ...

  6. Code conforming determination of cumulative usage factors for general elastic-plastic finite element analyses

    International Nuclear Information System (INIS)

    Rudolph, Juergen; Goetz, Andreas; Hilpert, Roland

    2012-01-01

    The fatigue analysis procedures of several relevant nuclear and conventional design codes for power plant components (ASME, KTA, EN, AD) differentiate between an elastic, a simplified elastic-plastic and an elastic-plastic fatigue check. As a rule, operational load levels will exclude the purely elastic fatigue check. The application of the code procedure of the simplified elastic-plastic fatigue check is common practice; nevertheless, the resulting cumulative usage factors may be overly conservative, mainly due to high code-based plastification penalty factors Ke. As a consequence, the more complex, still code-conforming general elastic-plastic fatigue analysis methodology based on non-linear finite element analysis (FEA) is applied for fatigue design as an alternative. The requirements of the FEA and of the material law to be applied have to be clarified in a first step; current design codes give only rough guidelines on these relevant items. While the procedure for the simplified elastic-plastic fatigue analysis and the associated code passages are based on stress-related cycle counting and the determination of pseudo-elastic equivalent stress ranges, an adaptation to elastic-plastic strains and strain ranges is required for the elastic-plastic fatigue check. The associated requirements are explained in detail in the paper. If the established and implemented evaluation mechanism (cycle counting according to the peak-and-valley or the rainflow method, calculation of stress ranges from arbitrary load-time histories, and determination of cumulative usage factors based on all load events) is to be retained, a conversion of elastic-plastic strains and strain ranges into pseudo-elastic stress ranges is required. The algorithm to be applied is described in the paper. It has to be implemented as an extended post-processing operation of the FEA, e.g. by APDL scripts in ANSYS. Variations of principal stress (strain) directions during the loading
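
    The conversion the record calls for, elastic-plastic strain ranges recast as pseudo-elastic stress ranges so the stress-based counting machinery can be reused, combines with Miner's rule roughly as follows; the design fatigue curve used here is hypothetical, not a code curve.

        import numpy as np

        def cumulative_usage(strain_ranges, counts, e_mod, allowable_cycles):
            """Cumulative usage factor per Miner's rule after converting
            elastic-plastic strain ranges to pseudo-elastic stress ranges
            (delta_sigma = E * delta_eps).
            allowable_cycles: callable mapping a stress range (MPa) to the
            fatigue curve's permissible cycle count."""
            usage = 0.0
            for d_eps, n_i in zip(strain_ranges, counts):
                pseudo_stress = e_mod * d_eps        # pseudo-elastic range
                usage += n_i / allowable_cycles(pseudo_stress)
            return usage

        # Hypothetical power-law design curve: N = (4000 / dS)^3.
        curve = lambda ds: (4000.0 / ds) ** 3
        u = cumulative_usage(strain_ranges=[0.001, 0.002], counts=[1000, 500],
                             e_mod=2.0e5, allowable_cycles=curve)
        print(f"cumulative usage factor U = {u:.3f}")  # design requires U <= 1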

  7. Optimized Method for Generating and Acquiring GPS Gold Codes

    Directory of Open Access Journals (Sweden)

    Khaled Rouabah

    2015-01-01

    Full Text Available We propose a simpler and faster Gold code generator which can be efficiently initialized to any desired code with minimum delay. Its principle consists of generating only one sequence (code number 1), from which all the other signal codes can be produced simply by shifting this sequence by different delays, judiciously determined using the characteristics of the bicorrelation function. This contrasts with the classical linear feedback shift register (LFSR) based Gold code generator, which requires, in addition to the shift process, a significant number of XOR logic gates and a phase selector to change the code. The presence of all these XOR gates in the classical LFSR-based generator consumes additional time in the generation and acquisition processes. In addition to its simplicity and rapidity, the proposed architecture, due to the total absence of XOR gates, uses fewer resources than the conventional Gold generator and can thus be produced at lower cost. Digital signal processing (DSP) implementations have shown that the proposed architecture offers a solution for acquiring Global Positioning System (GPS) satellite signals optimally and in a parallel way.
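
    For GPS C/A signals, Gold codes are conventionally formed from two 1023-chip m-sequences, and every code can indeed be obtained by shifting one stored pair, which is the principle the paper builds on. A sketch using the published G1/G2 generator polynomials follows; the standard mapping of PRN numbers to particular delays is not reproduced.

        import numpy as np

        def m_sequence(taps, n_bits=10):
            """Length 2**n_bits - 1 m-sequence from a Fibonacci LFSR with the
            given feedback taps (1-indexed register stages, all-ones seed)."""
            reg = [1] * n_bits
            seq = []
            for _ in range(2 ** n_bits - 1):
                seq.append(reg[-1])              # output taken from last stage
                fb = 0
                for t in taps:
                    fb ^= reg[t - 1]
                reg = [fb] + reg[:-1]            # shift feedback into stage 1
            return np.array(seq, dtype=np.uint8)

        # GPS C/A polynomials: G1 = 1+x^3+x^10, G2 = 1+x^2+x^3+x^6+x^8+x^9+x^10.
        g1 = m_sequence([3, 10])
        g2 = m_sequence([2, 3, 6, 8, 9, 10])

        def gold_code(delay):
            """Gold code as G1 XOR a circularly delayed G2: one stored pair of
            sequences yields every code by shifting."""
            return g1 ^ np.roll(g2, delay)

        print(gold_code(5)[:16])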

  8. Application of Protocol-Oriented MVVM Architecture in iOS Development

    OpenAIRE

    Luong Nguyen, Khoi Nguyen

    2017-01-01

    The mobile application industry is fast-paced. Requirements change and new features are added on a daily basis, demanding frequent adjustment of the code structure. Thus, a flexible and maintainable software architecture is often a key factor in an application's success. The major objective of this thesis is to propose a practical use case of Protocol-Oriented Model-View-ViewModel, an architecture inspired by the Protocol-Oriented Programming paradigm. This thesis explains the architectur...

  9. A new three-tier architecture design for multi-sphere neutron spectrometer with the FLUKA code

    Science.gov (United States)

    Huang, Hong; Yang, Jian-Bo; Tuo, Xian-Guo; Liu, Zhi; Wang, Qi-Biao; Wang, Xu

    2016-07-01

    The current commercially available Bonner sphere neutron spectrometer (BSS) has high sensitivity to neutrons below 20 MeV, which leaves it poorly placed to measure neutrons ranging from a few MeV to 100 MeV. In this work, moderator layers and an auxiliary material layer were added around 3He proportional counters with the FLUKA code, with a view to improvement. The results showed that the response peaks for neutrons below 20 MeV gradually shift to higher energies and decrease slightly with increasing moderator thickness. In contrast, the response to neutrons above 20 MeV remained very low until auxiliary materials such as copper (Cu), lead (Pb) or tungsten (W) were embedded in the moderator layers. The most suitable auxiliary material, Pb, was chosen to design a three-tier multi-sphere neutron spectrometer (NBSS). Calculation and comparison show that the NBSS is advantageous in terms of response to 5-100 MeV neutrons, with the highest response being 35.2 times that of a polyethylene (PE) ball of the same PE thickness.

  10. Genesis and Evolution of Interfaces in Product Architecture

    DEFF Research Database (Denmark)

    Donmez, Mehmet; Hsuan, Juliana

    Interfaces are elements of the product architecture that facilitate innovation and enable an organization to leverage the trade-off between cost and performance of its products. Despite the importance of interfaces for organizations, little is known about their genesis and evolution. In this st...

  11. Analysis of lower head failure with simplified models and a finite element code

    Energy Technology Data Exchange (ETDEWEB)

    Koundy, V. [CEA-IPSN-DPEA-SEAC, Service d' Etudes des Accidents, Fontenay-aux-Roses (France); Nicolas, L. [CEA-DEN-DM2S-SEMT, Service d' Etudes Mecaniques et Thermiques, Gif-sur-Yvette (France); Combescure, A. [INSA-Lyon, Lab. Mecanique des Solides, Villeurbanne (France)

    2001-07-01

    The objective of the OLHF (OECD lower head failure) experiments is to characterize the timing, mode and size of lower head failure under high-temperature loading and reactor coolant system pressure due to a postulated core melt scenario. Four tests have been performed at Sandia National Laboratories (USA) in the frame of an OECD project. The experimental results have been used to develop and validate predictive analysis models. Within the framework of this project, several finite element calculations were performed. In parallel, two simplified semi-analytical methods were developed in order to get a better understanding of the role of various parameters in the creep phenomenon, e.g. the behaviour of the lower head material and its geometrical characteristics, on the timing, mode and location of failure. Three-dimensional modelling of crack opening and crack propagation has also been carried out using the finite element code Castem 2000. The aim of this paper is to present the two simplified semi-analytical approaches and to report the status of the 3D crack propagation calculations. (authors)

  12. The architecture of a modern military health information system.

    Science.gov (United States)

    Mukherji, Raj J; Egyhazy, Csaba J

    2004-06-01

    This article describes a melding of a government-sponsored architecture for complex systems with the open systems engineering architecture developed by the Institute of Electrical and Electronics Engineers (IEEE). Our experience in using these two architectures to build a complex healthcare system is described. The work shows that it is possible to combine the two architectural frameworks in describing the systems, operational, and technical views of a complex automation system. The advantage of combining the two frameworks lies in the simplicity of implementation and the ease with which medical professionals can understand the architectural elements of the automation system.

  13. Collaborative production indicators in information architecture

    Directory of Open Access Journals (Sweden)

    Zayr Claudio Gomes da Silva

    2017-04-01

    Full Text Available Information architecture is considered a strategic domain of the collaborative production of Information Science. We describe the conditions of collaborative production in information architecture, considering it a sub-area of study within Information Science. To do so, we specifically address indicators of scientific production, including topics of study, typology and authorship, and the postgraduate programs and areas to which it is linked, among others. This is an exploratory and descriptive study. The scientific production of the National Meeting of Information Science Research (ENANCIB), from 2003 to 2013, is mapped in the "Network Matters" repository. Bibliometrics is used to identify the paratextual and textual elements that constitute evidence of collaborative production in information architecture. We verified the plurality in the academic formation of the researchers who approach information architecture, the sharing of languages, some indications of disciplinary convergences arising from collaboration in co-authorship, as well as a plexus of relations, through indirect citations, that represents the sharing of theoretical-methodological elements in interdisciplinary production. In addition, the academic training of the researchers with the highest productivity index is mainly related to Librarianship and Computer Science. Collaborative production in information architecture is thus a multidisciplinary production process, constituting a convergent domain that enables effective interdisciplinary practices in Information Science.

  14. Monte Carlo simulations on SIMD computer architectures

    International Nuclear Information System (INIS)

    Burmester, C.P.; Gronsky, R.; Wille, L.T.

    1992-01-01

    In this paper algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique on single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures
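
    The lattice partitioning and geometric parallelism described above can be made concrete with a small sketch. The following is not the MasPar implementation; it is a minimal NumPy rendering of the checkerboard (two-colour) decomposition for nearest-neighbour Ising updates, in which data-parallel array operations stand in for the SIMD processor array, and all parameters are illustrative.

    ```python
    import numpy as np

    def checkerboard_sweep(spins, beta, rng):
        """One Metropolis sweep using a two-colour (checkerboard) partition.

        Sites of one colour have no nearest-neighbour dependencies on each
        other, so they can be updated simultaneously -- the same geometric
        parallelism exploited on SIMD processor arrays.
        """
        L = spins.shape[0]
        ii, jj = np.indices((L, L))
        for colour in (0, 1):
            mask = (ii + jj) % 2 == colour
            # Sum of the four nearest neighbours with periodic boundaries.
            nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
                   np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
            dE = 2.0 * spins * nbr  # energy cost of flipping each site (J = 1)
            accept = rng.random((L, L)) < np.exp(-beta * dE)
            spins = np.where(mask & accept, -spins, spins)
        return spins

    rng = np.random.default_rng(0)
    spins = rng.choice([-1, 1], size=(64, 64))
    for _ in range(100):
        spins = checkerboard_sweep(spins, beta=0.44, rng=rng)
    print("magnetisation per site:", spins.mean())
    ```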

  15. National Positioning, Navigation, and Timing Architecture

    National Research Council Canada - National Science Library

    Huested, Patrick; Popejoy, Paul D

    2008-01-01

    .... The strategy is supported by vectors, or enterprise architecture elements, for using multiple PNT-related phenomenologies and interchangeable PNT solutions, PNT and Communications synergy, and co...

  16. Coupling Computer Codes for The Analysis of Severe Accident Using A Pseudo Shared Memory Based on MPI

    International Nuclear Information System (INIS)

    Cho, Young Chul; Park, Chang-Hwan; Kim, Dong-Min

    2016-01-01

    For the analysis of severe accidents there are four codes: the in-vessel analysis code (CSPACE), the ex-vessel analysis code (SACAP), the corium behavior analysis code (COMPASS), and a fission product behavior analysis code. It is therefore complex to implement the coupling of these codes with methodologies similar to those used for RELAP and CONTEMPT or SPACE and CAP. Because of that, an efficient coupling, the so-called pseudo shared memory architecture, was introduced. In this paper, coupling methodologies are compared and the methodology used for the analysis of severe accidents is discussed in detail. The barrier between in-vessel and ex-vessel has been removed for the analysis of severe accidents with the implementation of coupled computer codes using a pseudo shared memory architecture based on MPI. What remains is the proper choice and checking of variables and values for the selected severe accident scenarios, e.g., the TMI accident. Even though it is possible to couple more than two computer codes with the pseudo shared memory architecture, the methodology should be revised to couple parallel codes, especially when they are programmed using MPI.
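
    The abstract gives no implementation details of the pseudo shared memory layer, so the following is only a minimal sketch of the general idea, under stated assumptions: one rank hosts a memory region exposed through MPI one-sided operations (here via mpi4py), so that coupled codes running as separate ranks can read and write shared variables without matching send/receive pairs. The rank roles, variable layout and names are hypothetical.

    ```python
    # A sketch of the general idea, not the CSPACE/SACAP coupling itself.
    # Run with, e.g.: mpiexec -n 2 python pseudo_shared.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    nvars = 8                              # e.g. coupled boundary variables
    itemsize = MPI.DOUBLE.Get_size()

    # Rank 0 hosts the "shared" region; every rank may Put/Get into it.
    win = MPI.Win.Allocate(nvars * itemsize if rank == 0 else 0,
                           disp_unit=itemsize, comm=comm)

    win.Fence()
    if rank == 1:                          # one code writes its results...
        buf = np.arange(nvars, dtype=float)
        win.Put(buf, 0)
    win.Fence()
    out = np.zeros(nvars)
    if rank == 0:                          # ...the other reads them back
        win.Get(out, 0)
    win.Fence()
    if rank == 0:
        print("shared variables seen by rank 0:", out)
    win.Free()
    ```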

  17. Coupling Computer Codes for The Analysis of Severe Accident Using A Pseudo Shared Memory Based on MPI

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Young Chul; Park, Chang-Hwan; Kim, Dong-Min [FNC Technology Co., Yongin (Korea, Republic of)]

    2016-10-15

    For the analysis of severe accidents there are four codes: the in-vessel analysis code (CSPACE), the ex-vessel analysis code (SACAP), the corium behavior analysis code (COMPASS), and a fission product behavior analysis code. It is therefore complex to implement the coupling of these codes with methodologies similar to those used for RELAP and CONTEMPT or SPACE and CAP. Because of that, an efficient coupling, the so-called pseudo shared memory architecture, was introduced. In this paper, coupling methodologies are compared and the methodology used for the analysis of severe accidents is discussed in detail. The barrier between in-vessel and ex-vessel has been removed for the analysis of severe accidents with the implementation of coupled computer codes using a pseudo shared memory architecture based on MPI. What remains is the proper choice and checking of variables and values for the selected severe accident scenarios, e.g., the TMI accident. Even though it is possible to couple more than two computer codes with the pseudo shared memory architecture, the methodology should be revised to couple parallel codes, especially when they are programmed using MPI.

  18. Evaluating the performance of the particle finite element method in parallel architectures

    Science.gov (United States)

    Gimenez, Juan M.; Nigro, Norberto M.; Idelsohn, Sergio R.

    2014-05-01

    This paper presents a high performance implementation of the particle-mesh based method called the particle finite element method two (PFEM-2). It consists of a material-derivative based formulation of the equations with a hybrid spatial discretization which uses an Eulerian mesh and Lagrangian particles. The main aim of PFEM-2 is to solve transport equations as fast as possible while keeping some level of accuracy. The method was found to be competitive with classical Eulerian alternatives for these targets, even in their range of optimal application. To evaluate the quality of the method on large simulations, it is imperative to use parallel environments. Parallel strategies for the finite element method have been widely studied and many libraries can be used to solve the Eulerian stages of PFEM-2. However, Lagrangian stages, such as streamline integration, must be developed with the selected parallel strategy in mind. The main drawback of PFEM-2 is the large amount of memory needed, which limits its application to large problems on a single computer. Therefore, a distributed-memory implementation is urgently needed. Unlike a shared-memory approach, with domain decomposition the memory is automatically isolated, thus avoiding race conditions; however, new issues appear due to data distribution over the processes. Thus, a domain decomposition strategy for both particles and mesh is adopted, which minimizes the communication between processes. Finally, performance analyses running over multicore and multinode architectures are presented. The Courant-Friedrichs-Lewy number used influences the efficiency of the parallelization and, in some cases, a weighted partitioning can be used to improve the speed-up. However, the total CPU time for the cases presented is lower than that obtained when using classical Eulerian strategies.

  19. Tests of a 3D Self Magnetic Field Solver in the Finite Element Gun Code MICHELLE

    CERN Document Server

    Nelson, Eric M

    2005-01-01

    We have recently implemented a prototype 3D self magnetic field solver in the finite-element gun code MICHELLE. The new solver computes the magnetic vector potential on unstructured grids. The solver employs edge basis functions in the curl-curl formulation of the finite-element method. A novel current accumulation algorithm takes advantage of the unstructured grid particle tracker to produce a compatible source vector, for which the singular matrix equation is easily solved by the conjugate gradient method. We present some test cases demonstrating the capabilities of the prototype 3D self magnetic field solver. One test case is the self magnetic field in a square drift tube. Another is a relativistic axisymmetric beam freely expanding in a round pipe.
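
    The compatible source vector mentioned above is what makes the singular curl-curl system amenable to the conjugate gradient method. As a reminder of what that solver does, here is a textbook conjugate gradient in Python applied to a small symmetric positive definite stand-in matrix; it is not MICHELLE's solver, and the test problem is illustrative.

    ```python
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, maxiter=1000):
        """Textbook CG for a symmetric (semi-)definite system A x = b.

        For a singular A, convergence requires b to lie in the range of A,
        i.e. a compatible source vector."""
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for _ in range(maxiter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    # Small SPD test problem standing in for the assembled system.
    n = 50
    A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
         + np.diag(-np.ones(n - 1), -1))
    x = conjugate_gradient(A, np.ones(n))
    print("residual norm:", np.linalg.norm(np.ones(n) - A @ x))
    ```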

  20. Adaptive Code Division Multiple Access Protocol for Wireless Network-on-Chip Architectures

    Science.gov (United States)

    Vijayakumaran, Vineeth

    Massive levels of integration following Moore's Law have ushered in a paradigm shift in the way on-chip interconnections are designed. With a higher and higher number of cores on the same die, traditional bus based interconnections are no longer a scalable communication infrastructure. On-chip networks were proposed to enable a scalable plug-and-play mechanism for interconnecting hundreds of cores on the same chip. Wired interconnects between the cores in a traditional Network-on-Chip (NoC) system become a bottleneck as the number of cores increases, raising the latency and energy needed to transmit signals over them. Hence, many alternative emerging interconnect technologies have been proposed, namely 3D, photonic and multi-band RF interconnects. Although they provide better connectivity, higher speed and higher bandwidth compared to wired interconnects, they also face challenges with heat dissipation and manufacturing difficulties. On-chip wireless interconnects are another proposed alternative, which needs no physical interconnection layout since data travels over the wireless medium. They are integrated into a hybrid NoC architecture consisting of both wired and wireless links, which provides higher bandwidth, lower latency, less area overhead and reduced energy dissipation in communication. However, as the bandwidth of the wireless channels is limited, an efficient media access control (MAC) scheme is required to enhance the utilization of the available bandwidth. This thesis proposes using a multiple access mechanism such as Code Division Multiple Access (CDMA) to enable multiple transmitter-receiver pairs to send data over the wireless channel simultaneously. It will be shown that such a hybrid wireless NoC with an efficient CDMA based MAC protocol can significantly increase the performance of the system while lowering the energy dissipation in data transfer. In this work it is shown that the wireless NoC with the proposed CDMA based MAC protocol
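
    The abstract does not spell out the MAC protocol itself; the sketch below only illustrates the CDMA principle it builds on: mutually orthogonal Walsh (Hadamard) spreading codes let several transmitter-receiver pairs share one wireless channel at the same time. The pair indices and bit patterns are arbitrary.

    ```python
    import numpy as np

    def walsh_codes(n):
        """n x n Sylvester-Hadamard matrix; rows are orthogonal spreading codes."""
        H = np.array([[1]])
        while H.shape[0] < n:
            H = np.block([[H, H], [H, -H]])
        return H

    codes = walsh_codes(4)                  # one row per transmitter-receiver pair
    bits = {0: [1, -1, 1], 2: [-1, -1, 1]}  # antipodal data of two active pairs

    # Each transmitter spreads its bits with its own code; the shared
    # wireless channel simply sums the overlapping transmissions.
    channel = np.zeros((3, 4))
    for pair, data in bits.items():
        channel += np.outer(data, codes[pair])

    # Each receiver despreads by correlating with the matching code;
    # orthogonality makes the other pair's contribution vanish.
    for pair in bits:
        recovered = np.sign(channel @ codes[pair]).astype(int)
        print(f"pair {pair} recovered bits: {recovered}")
    ```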

  1. Architecture of vagal motor units controlling striated muscle of esophagus: peripheral elements patterning peristalsis?

    Science.gov (United States)

    Powley, Terry L; Mittal, Ravinder K; Baronowsky, Elizabeth A; Hudson, Cherie N; Martin, Felecia N; McAdams, Jennifer L; Mason, Jacqueline K; Phillips, Robert J

    2013-12-01

    Little is known about the architecture of the vagal motor units that control esophageal striated muscle, in spite of the fact that these units are necessary, and responsible, for peristalsis. The present experiment was designed to characterize the motor neuron projection fields and terminal arbors forming esophageal motor units. Nucleus ambiguus compact formation neurons of the rat were labeled by bilateral intracranial injections of the anterograde tracer dextran biotin. After tracer transport, thoracic and abdominal esophagi were removed and prepared as whole mounts of muscle wall without mucosa or submucosa. Labeled terminal arbors of individual vagal motor neurons (n=78) in the esophageal wall were inventoried, digitized and analyzed morphometrically. The size of individual vagal motor units innervating striated muscle, throughout thoracic and abdominal esophagus, averaged 52 endplates per motor neuron, a value indicative of fine motor control. A majority (77%) of the motor terminal arbors also issued one or more collateral branches that contacted neurons, including nitric oxide synthase-positive neurons, of local myenteric ganglia. Individual motor neuron terminal arbors co-innervated, or supplied endplates in tandem to, both longitudinal and circular muscle fibers in roughly similar proportions (i.e., two endplates to longitudinal for every three endplates to circular fibers). Both the observation that vagal motor unit collaterals project to myenteric ganglia and the fact that individual motor units co-innervate longitudinal and circular muscle layers are consistent with the hypothesis that elements contributing to peristaltic programming inhere, or are "hardwired," in the peripheral architecture of esophageal motor units. © 2013.

  2. QCA Gray Code Converter Circuits Using LTEx Methodology

    Science.gov (United States)

    Mukherjee, Chiradeep; Panda, Saradindu; Mukhopadhyay, Asish Kumar; Maji, Bansibadan

    2018-04-01

    The Quantum-dot Cellular Automata (QCA) is a prominent nanotechnology paradigm considered capable of continuing computation at the deep sub-micron regime. QCA realizations of several multilevel circuits of the arithmetic logic unit have been introduced in recent years. However, although high fan-in Binary to Gray (B2G) and Gray to Binary (G2B) Converters exist in processor based architectures, no attention has been paid to the QCA instantiation of the Gray Code Converters, which are anticipated to be used in 8-bit, 16-bit, 32-bit or even wider addressable machines with Gray Code Addressing schemes. In this work the two-input Layered T module is presented to exploit the operation of an Exclusive-OR gate (the LTEx module) as an elemental block. The defect-tolerant analysis of the two-input LTEx module has been carried out to establish the scalability and reproducibility of the LTEx module in complex circuits. Novel formulations exploiting the operability of the LTEx module are proposed to instantiate area-delay efficient B2G and G2B Converters which can be used exclusively in Gray Code Addressing schemes. Moreover, this work formulates QCA design metrics such as O-Cost, Effective area, Delay and Cost α for the n-bit converter layouts.
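
    The converters realized by the LTEx layouts compute a simple bitwise function; as a behavioural reference (the logic only, not the QCA layout), each Gray bit is the XOR of adjacent binary bits:

    ```python
    def binary_to_gray(b: int) -> int:
        """B2G: XOR each binary bit with its higher-order neighbour."""
        return b ^ (b >> 1)

    def gray_to_binary(g: int) -> int:
        """G2B: fold the higher-order bits back in with successive XORs."""
        b = g
        while g := g >> 1:
            b ^= g
        return b

    for n in range(8):
        g = binary_to_gray(n)
        assert gray_to_binary(g) == n       # the two converters are inverses
        print(f"{n:03b} -> {g:03b}")
    ```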

  3. Introduction of polycrystal constitutive laws in a finite element code with applications to zirconium forming

    International Nuclear Information System (INIS)

    Maudlin, P.J.; Tome, C.N.; Kaschner, G.C.; Gray, G.T. III

    1998-01-01

    In this work the authors simulate the compressive deformation of heavily textured zirconium sheet using a finite element code with the constitutive response given by a polycrystal self-consistent model. They show that the strong anisotropy of the response can be explained in terms of the texture and the relative activity of prismatic (easy) and pyramidal (hard) slip modes. The simulations capture the yield anisotropy observed in so-called through-thickness and in-plane compression tests in terms of the loading curves and final specimen geometries

  4. Architectural Theory and Graphical Criteria for Modelling Certain Late Gothic Projects by Hernan Ruiz "the Elder"

    Directory of Open Access Journals (Sweden)

    Antonio Luis Ampliato Briones

    2014-10-01

    Full Text Available This paper primarily reflects on the need to create graphical codes for producing images intended to communicate architecture. Each step of the drawing needs to be a deliberate process in which the proposed code highlights the relationship between architectural theory and graphic action. Our aim is not to draw the result of the architectural process but the design structure of the actual process; to draw as we design; to draw as we build. This analysis of the work of the Late Gothic architect Hernan Ruiz the Elder, from Cordoba, addresses two aspects: the historical and architectural investigation, and the graphical project for communication purposes.

  5. Elemental ABAREX -- a user's manual

    International Nuclear Information System (INIS)

    Smith, A.B.

    1999-01-01

    ELEMENTAL ABAREX is an extended version of the spherical optical-statistical model code ABAREX, designed for the interpretation of neutron interactions with elemental targets consisting of up to ten isotopes. The contributions from each of the isotopes of the element are explicitly dealt with, and combined for comparison with the elemental observables. Calculations and statistical fitting of experimental data are considered. The code is written in FORTRAN-77 and arranged for use on the IBM-compatible personal computer (PC), but it should operate effectively on a number of other systems, particularly VAX/VMS and IBM work stations. Effort is taken to make the code user friendly. With this document a reasonably skilled individual should become fluent with the use of the code in a brief period of time
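
    Combining the isotopic contributions into elemental observables amounts, for a cross section, to an abundance-weighted sum. The snippet below is only a schematic illustration of that step; the isotope names and numbers are hypothetical, not ABAREX input or output.

    ```python
    # Elemental cross section as the abundance-weighted sum of isotopic
    # cross sections (the natural abundances must sum to 1).
    isotopes = {                      # hypothetical two-isotope element
        "X-63": {"abundance": 0.69, "sigma_b": 5.2},
        "X-65": {"abundance": 0.31, "sigma_b": 6.1},
    }

    assert abs(sum(i["abundance"] for i in isotopes.values()) - 1.0) < 1e-9

    sigma_element = sum(i["abundance"] * i["sigma_b"] for i in isotopes.values())
    print(f"elemental cross section: {sigma_element:.3f} b")
    ```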

  6. Code compression for VLIW embedded processors

    Science.gov (United States)

    Piccinelli, Emiliano; Sannino, Roberto

    2004-04-01

    The implementation of processors for embedded systems involves various issues: the main constraints are cost, power dissipation and die area. On the other side, new terminals perform functions that require more computational flexibility and effort. Long code streams must be loaded into memories, which are expensive and power consuming, to run on DSPs or CPUs. To overcome this issue, the "SlimCode" proprietary algorithm presented in this paper (patent pending technology) can reduce the size of the program memory. It runs offline and works directly on the binary code the compiler generates, compressing it and creating a new binary file, about 40% smaller than the original one, to be loaded into the program memory of the processor. The decompression unit is a small ASIC, placed between the memory controller and the system bus of the processor, keeping the internal CPU architecture unchanged: this implies that the methodology is completely transparent to the core. We present comparisons versus the state-of-the-art IBM CodePack algorithm, along with its architectural implementation into the ST200 VLIW family core.
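
    SlimCode itself is proprietary and its algorithm is not reproduced here. As a generic, hypothetical illustration of the same idea, offline dictionary compression of a binary instruction stream, the toy sketch below replaces frequent 32-bit instruction words with one-byte indices; its compression ratio is unrelated to the 40% figure cited above.

    ```python
    from collections import Counter

    def compress(words, dict_size=8):
        """Toy dictionary compression for a stream of 32-bit instruction words.

        The most frequent words go into a dictionary; each occurrence is
        encoded as tag 0x00 + a 1-byte index, anything else as tag 0x01 +
        the raw 4-byte word."""
        table = [w for w, _ in Counter(words).most_common(dict_size)]
        index = {w: i for i, w in enumerate(table)}
        out = bytearray()
        for w in words:
            if w in index:
                out += bytes([0x00, index[w]])
            else:
                out += bytes([0x01]) + w.to_bytes(4, "little")
        return table, bytes(out)

    words = [0xE1A00000] * 50 + [0xE5901000] * 30 + list(range(20))
    table, blob = compress(words)
    print(f"original {4 * len(words)} B, compressed {len(blob)} B "
          f"(+ {4 * len(table)} B dictionary)")
    ```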

  7. User's manual for the FEHM application -- A finite-element heat- and mass-transfer code

    Energy Technology Data Exchange (ETDEWEB)

    Zyvoloski, G.A.; Robinson, B.A.; Dash, Z.V.; Trease, L.L.

    1997-07-01

    The use of this code is applicable to natural-state studies of geothermal systems and groundwater flow. A primary use of the FEHM application will be to assist in the understanding of flow fields and mass transport in the saturated and unsaturated zones below the proposed Yucca Mountain nuclear waste repository in Nevada. The equations of heat and mass transfer for multiphase flow in porous and permeable media are solved in the FEHM application by using the finite-element method. The permeability and porosity of the medium are allowed to depend on pressure and temperature. The code also has provisions for movable air and water phases and noncoupled tracers; that is, tracer solutions that do not affect the heat- and mass-transfer solutions. The tracers can be passive or reactive. The code can simulate two-dimensional, two-dimensional radial, or three-dimensional geometries. In fact, FEHM is capable of describing flow that is dominated in many areas by fracture and fault flow, including the inherently three-dimensional flow that results from permeation to and from faults and fractures. The code can handle coupled heat and mass-transfer effects, such as boiling, dryout, and condensation that can occur in the near-field region surrounding the potential repository and the natural convection that occurs through Yucca Mountain due to seasonal temperature changes. This report outlines the uses and capabilities of the FEHM application, initialization of code variables, restart procedures, and error processing. The report describes all the data files, the input data, including individual input records or parameters, and the various output files. The system interface is described, including the software environment and installation instructions.

  8. Tandem Mirror Reactor Systems Code (Version I)

    International Nuclear Information System (INIS)

    Reid, R.L.; Finn, P.A.; Gohar, M.Y.

    1985-09-01

    A computer code was developed to model a Tandem Mirror Reactor. This is the first Tandem Mirror Reactor model to couple, in detail, the highly linked physics, magnetics, and neutronic analysis into a single code. This report describes the code architecture, provides a summary description of the modules comprising the code, and includes an example execution of the Tandem Mirror Reactor Systems Code. Results from this code for two sensitivity studies are also included. These studies are: (1) to determine the impact of center cell plasma radius, length, and ion temperature on reactor cost and performance at constant fusion power; and (2) to determine the impact of reactor power level on cost

  9. Lightgrid-an agile distributed computing architecture for Geant4

    International Nuclear Information System (INIS)

    Young, Jason; Perry, John O.; Jevremovic, Tatjana

    2010-01-01

    A lightweight grid based computing architecture has been developed to accelerate Geant4 computations on a variety of network architectures. This new software is called LightGrid. LightGrid has a variety of features designed to overcome the current limitations of other grid based computing platforms on smaller network architectures. By focusing on smaller, local grids, LightGrid is able to simplify the grid computing process with minimal changes to existing Geant4 code. LightGrid allows for integration between Geant4 and MySQL, which both increases flexibility in the grid and provides a faster, more reliable, and more portable method for accessing results than traditional data storage systems. This unique method of data acquisition allows for more fault tolerant runs as well as instant results from simulations as they occur. The performance increases brought by using LightGrid allow simulation times to be decreased linearly. LightGrid also allows for pseudo-parallelization with minimal Geant4 code changes.

  10. Writing analytic element programs in Python.

    Science.gov (United States)

    Bakker, Mark; Kelson, Victor A

    2009-01-01

    The analytic element method is a mesh-free approach for modeling ground water flow at both the local and the regional scale. With the advent of the Python object-oriented programming language, it has become relatively easy to write analytic element programs. In this article, an introduction is given of the basic principles of the analytic element method and of the Python programming language. A simple, yet flexible, object-oriented design is presented for analytic element codes using multiple inheritance. New types of analytic elements may be added without the need for any changes in the existing part of the code. The presented code may be used to model flow to wells (with either a specified discharge or drawdown) and streams (with a specified head). The code may be extended by any hydrogeologist with a healthy appetite for writing computer code to solve more complicated ground water flow problems. Copyright © 2009 The Author(s). Journal Compilation © 2009 National Ground Water Association.
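
    In the spirit of the design described, a minimal sketch of such an object-oriented analytic element code: a base class, a uniform-flow element, and a well with a specified discharge, superposed by a model object. It illustrates the approach rather than reproducing the authors' code, and the numbers are arbitrary.

    ```python
    import numpy as np

    class Element:
        """Base class: every analytic element contributes a potential."""
        def potential(self, x, y):
            raise NotImplementedError

    class UniformFlow(Element):
        def __init__(self, gradient, angle=0.0):
            self.Qx = gradient * np.cos(angle)
            self.Qy = gradient * np.sin(angle)
        def potential(self, x, y):
            return -self.Qx * x - self.Qy * y

    class Well(Element):
        """Well with a specified discharge Q at (xw, yw)."""
        def __init__(self, xw, yw, Q):
            self.xw, self.yw, self.Q = xw, yw, Q
        def potential(self, x, y):
            r = np.hypot(x - self.xw, y - self.yw)
            return self.Q / (2.0 * np.pi) * np.log(r)

    class Model:
        """Superposition: the total potential is the sum over all elements."""
        def __init__(self, elements):
            self.elements = elements
        def head(self, x, y, T=100.0):    # T: transmissivity, e.g. in m^2/d
            return sum(e.potential(x, y) for e in self.elements) / T

    m = Model([UniformFlow(gradient=2.0), Well(0.0, 0.0, Q=500.0)])
    print("head at (50, 0):", m.head(50.0, 0.0))
    ```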

  11. Transverse pumped laser amplifier architecture

    Science.gov (United States)

    Bayramian, Andrew James; Manes, Kenneth; Deri, Robert; Erlandson, Al; Caird, John; Spaeth, Mary

    2013-07-09

    An optical gain architecture includes a pump source and a pump aperture. The architecture also includes a gain region including a gain element operable to amplify light at a laser wavelength. The gain region is characterized by a first side intersecting an optical path, a second side opposing the first side, a third side adjacent the first and second sides, and a fourth side opposing the third side. The architecture further includes a dichroic section disposed between the pump aperture and the first side of the gain region. The dichroic section is characterized by low reflectance at a pump wavelength and high reflectance at the laser wavelength. The architecture additionally includes a first cladding section proximate to the third side of the gain region and a second cladding section proximate to the fourth side of the gain region.

  12. Improvements, verifications and validations of the BOW code

    International Nuclear Information System (INIS)

    Yu, S.D.; Tayal, M.; Singh, P.N.

    1995-01-01

    The BOW code calculates the lateral deflections of a fuel element, consisting of sheath and pellets, due to temperature gradients, hydraulic drag and gravity. The fuel element is subjected to restraint from endplates, neighbouring fuel elements and the pressure tube. Many new features have been added to the BOW code since its original release in 1985. This paper outlines the major improvements made to the code and the verification/validation results. (author)

  13. Architecture of absurd (forms, positions, apposition)

    Directory of Open Access Journals (Sweden)

    Fedorov Viktor Vladimirovich

    2014-04-01

    Full Text Available In everyday life we constantly encounter absurd things, which seem to lack common sense. The notion of the absurd acts as: (a) an aesthetic category; (b) an element of logic; (c) a metaphysical phenomenon. The possibility of overcoming it is achieved through understanding the situation, faith in the existence of sense, and hope of grasping it. The architecture of the absurd should be considered as a loss of sense in a part of the architectural landscape (urban environment). The ways of organizing the architecture of the absurd are the exaggerated forms and proportions and the unnatural position and apposition of various objects. These are usually small-scale facilities that have local spatial and temporal value. There are no large absurd architectural spaces, as the natural architectural environment dampens perturbations of the sense-sphere. The architecture of the absurd is considered a «pathology» of the environment. «Nonsense» objects and the hope (or even faith) of detecting sense generate the fruitful paradox of the presence of the architecture of the absurd in the world.

  14. Implementation of second moment closure turbulence model for incompressible flows in the industrial finite element code N3S

    International Nuclear Information System (INIS)

    Pot, G.; Laurence, D.; Rharif, N.E.; Leal de Sousa, L.; Compe, C.

    1995-12-01

    This paper deals with the introduction of a second moment closure turbulence model (Reynolds Stress Model) into an industrial finite element code, N3S, developed at Electricite de France. The numerical implementation of the model in N3S is detailed in 2D and 3D. Some details are given concerning the finite element computations and solvers. Then some results are given, including a comparison between the standard k-ε model, the R.S.M. model and experimental data for some test cases. (authors). 22 refs., 3 figs

  15. Verifying Architectural Design Rules of the Flight Software Product Line

    Science.gov (United States)

    Ganesan, Dharmalingam; Lindvall, Mikael; Ackermann, Chris; McComas, David; Bartholomew, Maureen

    2009-01-01

    This paper presents experiences of verifying architectural design rules of the NASA Core Flight Software (CFS) product line implementation. The goal of the verification is to check whether the implementation is consistent with the CFS architectural rules derived from the developer's guide. The results indicate that consistency checking helps (a) identify architecturally significant deviations that eluded code reviews, (b) clarify the design rules to the team, and (c) assess the overall implementation quality. Furthermore, it helps connect business goals to architectural principles and to the implementation. This paper is the first step in the definition of a method for analyzing and evaluating product line implementations from an architecture-centric perspective.

  16. Non-Binary Protograph-Based LDPC Codes: Analysis,Enumerators and Designs

    OpenAIRE

    Sun, Yizeng

    2013-01-01

    Non-binary LDPC codes can outperform binary LDPC codes under the sum-product algorithm, at the price of higher computational complexity. Non-binary LDPC codes based on protographs have the advantage of a simple hardware architecture. In the first part of this thesis, we use EXIT chart analysis to compute the thresholds of different protographs over GF(q). Based on the threshold computation, some non-binary protograph-based LDPC codes are designed and their frame error rates are compared with binary LDPC codes. ...

  17. FEMAXI-III. An axisymmetric finite element computer code for the analysis of fuel rod performance

    International Nuclear Information System (INIS)

    Ichikawa, M.; Nakajima, T.; Okubo, T.; Iwano, Y.; Ito, K.; Kashima, K.; Saito, H.

    1980-01-01

    For the analysis of local deformation of fuel rods, which is closely related to PCI failure in LWRs, FEMAXI-III has been developed as an improved version based on the essential models of the FEMAXI-II, MIPAC, and FEAST codes. The major features of FEMAXI-III are as follows: elasto-plasticity, creep, pellet cracking, relocation, densification, hot pressing, swelling, fission gas release, and their interrelated effects are considered. Contact conditions between pellet and cladding are treated exactly, where sliding or sticking is determined by iteration. Special emphasis is placed on creep and pellet cracking. In the former, an implicit algorithm is applied to improve numerical stability. In the latter, the pellet is assumed to be a non-tension material. The recovery of pellet stiffness under compression is related to initial relocation. Quadratic isoparametric elements are used. The skyline method is applied to solve the linear stiffness equation to reduce the required core memory. The basic performance of the code has proven to be satisfactory. (author)

  18. Executable Architecture Research at Old Dominion University

    Science.gov (United States)

    Tolk, Andreas; Shuman, Edwin A.; Garcia, Johnny J.

    2011-01-01

    Executable architectures allow the evaluation of system architectures not only regarding their static, but also their dynamic behavior. However, the systems engineering community does not agree on a common formal specification of executable architectures. Closing this gap and identifying the necessary elements of an executable architecture, a modeling language, and a modeling formalism is the topic of ongoing PhD research. In addition, systems are generally defined and applied in an operational context to provide capabilities and enable missions. To maximize the benefits of executable architectures, a second PhD effort introduces the idea of creating an executable context in addition to the executable architecture. The results move the validation of architectures from the current information domain into the knowledge domain and improve the reliability of such validation efforts. The paper presents the research and results of both doctoral efforts and puts them into a common context of state-of-the-art systems engineering methods supporting more agility.

  19. Design of complex architectures using a three dimension approach : the crosswork case

    NARCIS (Netherlands)

    Seguel Pérez, R.E.; Grefen, P.W.P.J.; Eshuis, H.

    2010-01-01

    In this paper, we present a three-dimensional design approach for complex information systems architectures. A key element of this approach is the model transformation cube, which consists of three dimensions along which architecture models can be positioned. Industry architecture frameworks to guide

  20. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Science.gov (United States)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  1. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Directory of Open Access Journals (Sweden)

    DeTar Carleton

    2018-01-01

    Full Text Available With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  2. Elements of regional architecture in the works of architect Ivan Antić

    Directory of Open Access Journals (Sweden)

    Milašinović-Marić Dijana

    2017-01-01

    Full Text Available The body of work of Ivan Antić (Belgrade, 1923-2005), one of the most important Serbian architects, who created his works in the period from 1955 to 1990, represents almost a reification of the ideals of the times he lived in, both in terms of form and in structural and substantive terms. His work is placed within a rationalistic concept which is essentially experienced as an undisturbed harmony between his personality and contemporary architectural expression. Besides this line of interpretation, however, his architecture also includes examples that indicate thinking about folk tradition, architectural heritage, the primordial and the archetypal, typical of a region. In the context of the body of work of architect Ivan Antić, this paper places particular accent on tracking such threads of thinking, which are expressed, in an obvious or transparent sense, in a series of realized solutions and designs such as the Guard's Home in Dedinje (Belgrade, 1957-1958), the Children's Home (Jermenovci, 1956-1957), the Museum of the Genocide in Šumarice, which he designed together with I. Raspopović (Kragujevac, 1968-1975), the 'Politika' Cultural Centre (Krupanj, 1976-1981), the '25th May' Sports Center (Belgrade, 1971-1973), and his own house in Lisović near Belgrade. All the abovementioned buildings, which belong at the top of Serbian architecture, reflect the spirit of the time in which he created them and clearly indicate the unbreakable bond which exists in architecture between the inherited, the vernacular, the contemporary and the architect's personal attitude.

  3. High efficiency video coding (HEVC) algorithms and architectures

    CERN Document Server

    Budagavi, Madhukar; Sullivan, Gary

    2014-01-01

    This book provides developers, engineers, researchers and students with detailed knowledge about the High Efficiency Video Coding (HEVC) standard. HEVC is the successor to the widely successful H.264/AVC video compression standard, and it provides around twice as much compression as H.264/AVC for the same level of quality. The applications for HEVC will not only cover the space of the well-known current uses and capabilities of digital video – they will also include the deployment of new services and the delivery of enhanced video quality, such as ultra-high-definition television (UHDTV) and video with higher dynamic range, wider range of representable color, and greater representation precision than what is typically found today. HEVC is the next major generation of video coding design – a flexible, reliable and robust solution that will support the next decade of video applications and ease the burden of video on world-wide network traffic. This book provides a detailed explanation of the various parts ...

  4. Home networking architecture for IPv6

    OpenAIRE

    Arkko, Jari; Weil, Jason; Troan, Ole; Brandt, Anders

    2012-01-01

    This text describes evolving networking technology within increasingly large residential home networks. The goal of this document is to define an architecture for IPv6-based home networking while describing the associated principles, considerations and requirements. The text briefly highlights the specific implications of the introduction of IPv6 for home networking, discusses the elements of the architecture, and suggests how standard IPv6 mechanisms and addressing can be employed in home ne...

  5. THE CASE FOR DAYLIGHTING IN ARCHITECTURE

    Directory of Open Access Journals (Sweden)

    Barrett Richard

    2009-07-01

    Full Text Available The paper discusses the reasons for using daylight in the design of architectural form and space. These reasons extend from those of a practical nature, including energy conservation, cost factors, and health and wellbeing, to those of a more intangible, aesthetic nature. Some historical precedents are offered as examples of projects in which designing to maximize daylighting was crucial in the mind of the architect. By contrast there is also discussion relating to the 'lost art' of using natural lighting in architecture, and some of the reasons for this loss of conviction and expertise are considered. The place of national building codes and other statutory requirements is examined, as is the role of the architect and his/her relationship with other professionals involved in daylighting design in architecture.

  6. Homeowner's Architectural Responses to Crime in Dar es Salaam: Its impacts and implications to urban architecture, urban design and urban management

    OpenAIRE

    Bulamile, Ludigija Boniface

    2009-01-01

    This study is about homeowners' architectural responses to crime in Dar es Salaam, Tanzania: its impacts and implications for urban architecture, urban design and urban management. The study explores and examines the processes through which homeowners respond to crimes of burglary, home robbery and the fear of them using architectural or physical elements. The processes are explored and examined using case study methodology in three cases in Dar es Salaam. The cases are residentia...

  7. Nucleoporins as components of the nuclear pore complex core structure and Tpr as the architectural element of the nuclear basket.

    Science.gov (United States)

    Krull, Sandra; Thyberg, Johan; Björkroth, Birgitta; Rackwitz, Hans-Richard; Cordes, Volker C

    2004-09-01

    The vertebrate nuclear pore complex (NPC) is a macromolecular assembly of protein subcomplexes forming a structure of eightfold radial symmetry. The NPC core consists of globular subunits sandwiched between two coaxial ring-like structures of which the ring facing the nuclear interior is capped by a fibrous structure called the nuclear basket. By postembedding immunoelectron microscopy, we have mapped the positions of several human NPC proteins relative to the NPC core and its associated basket, including Nup93, Nup96, Nup98, Nup107, Nup153, Nup205, and the coiled coil-dominated 267-kDa protein Tpr. To further assess their contributions to NPC and basket architecture, the genes encoding Nup93, Nup96, Nup107, and Nup205 were posttranscriptionally silenced by RNA interference (RNAi) in HeLa cells, complementing recent RNAi experiments on Nup153 and Tpr. We show that Nup96 and Nup107 are core elements of the NPC proper that are essential for NPC assembly and docking of Nup153 and Tpr to the NPC. Nup93 and Nup205 are other NPC core elements that are important for long-term maintenance of NPCs but initially dispensable for the anchoring of Nup153 and Tpr. Immunogold-labeling for Nup98 also results in preferential labeling of NPC core regions, whereas Nup153 is shown to bind via its amino-terminal domain to the nuclear coaxial ring linking the NPC core structures and Tpr. The position of Tpr in turn is shown to coincide with that of the nuclear basket, with different Tpr protein domains corresponding to distinct basket segments. We propose a model in which Tpr constitutes the central architectural element that forms the scaffold of the nuclear basket.

  8. Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l'Energie Atomique, Centre d'Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)]

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady state or transient behavior; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flowrate in parallel channels, coupled or not by conduction across the plates, is computed for imposed conditions of pressure drop or flowrate, variable or not with respect to time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, containing a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code, and has as its complement FLID, a one-channel, two-dimensional code. (authors)

  9. Development of a computer code 'CRACK' for elastic and elastoplastic fracture mechanics analysis of 2-D structures by finite element technique

    International Nuclear Information System (INIS)

    Dutta, B.K.; Kakodkar, A.; Maiti, S.K.

    1986-01-01

    The fracture mechanics analysis of nuclear components is required to ensure the prevention of sudden failure due to dynamic loadings. Linear elastic analysis near a crack tip shows the presence of a stress singularity at the crack tip. Simulating this singularity in numerical methods enhances convergence capability. In the finite element technique this can be achieved by placing the mid-side nodes of 8-noded or 6-noded isoparametric elements at one-fourth distance from the crack tip. The present report details this characteristic of the finite element, the implementation of this element in the code 'CRACK', the implementation of the J-integral to compute the stress intensity factor, and the solution of a number of cases for elastic and elastoplastic fracture mechanics analysis. 6 refs., 6 figures. (author)
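
    Where the report mentions using the J-integral to compute the stress intensity factor, the linear elastic conversion is the standard relation K_I = sqrt(J E'), with E' = E in plane stress and E' = E/(1 - nu^2) in plane strain. A small sketch with illustrative material values:

    ```python
    import math

    def k_from_j(J, E, nu, plane_strain=True):
        """Convert a J-integral value to a mode-I stress intensity factor
        under linear elastic conditions: K_I = sqrt(J * E')."""
        E_eff = E / (1.0 - nu**2) if plane_strain else E
        return math.sqrt(J * E_eff)

    J = 20e3        # J-integral [N/m]; illustrative value
    E = 200e9       # Young's modulus [Pa], a typical steel
    nu = 0.3        # Poisson's ratio
    K = k_from_j(J, E, nu)
    print(f"K_I = {K / 1e6:.1f} MPa*sqrt(m)")
    ```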

  10. A Dual Launch Robotic and Human Lunar Mission Architecture

    Science.gov (United States)

    Jones, David L.; Mulqueen, Jack; Percy, Tom; Griffin, Brand; Smitherman, David

    2010-01-01

    This paper describes a comprehensive lunar exploration architecture developed by Marshall Space Flight Center's Advanced Concepts Office that features a science-based surface exploration strategy and a transportation architecture that uses two launches of a heavy lift launch vehicle to deliver human and robotic mission systems to the moon. The principal advantage of the dual launch lunar mission strategy is the reduced cost and risk resulting from the development of just one launch vehicle system. The dual launch lunar mission architecture may also enhance opportunities for commercial and international partnerships by using expendable launch vehicle services for robotic missions or development of surface exploration elements. Furthermore, this architecture is particularly suited to the integration of robotic and human exploration to maximize science return. For surface operations, an innovative dual-mode rover is presented that is capable of performing robotic science exploration as well as transporting human crew conducting surface exploration. The dual-mode rover can be deployed to the lunar surface to perform precursor science activities, collect samples, scout potential crew landing sites, and meet the crew at a designated landing site. With this approach, the crew is able to evaluate the robotically collected samples to select the best samples for return to Earth to maximize the scientific value. The rovers can continue robotic exploration after the crew leaves the lunar surface. The transportation system for the dual launch mission architecture uses a lunar-orbit-rendezvous strategy. Two heavy lift launch vehicles depart from Earth within a six hour period to transport the lunar lander and crew elements separately to lunar orbit. In lunar orbit, the crew transfer vehicle docks with the lander and the crew boards the lander for descent to the surface. After the surface mission, the crew returns to the orbiting transfer vehicle for the return to the Earth. This

  11. Code Modernization of VPIC

    Science.gov (United States)

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite breakthroughs in the areas of mini-app development, portable performance, and cache-oblivious algorithms the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsics-based vectorisation with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally we include lessons learnt. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  12. Network Coding Parallelization Based on Matrix Operations for Multicore Architectures

    DEFF Research Database (Denmark)

    Wunderlich, Simon; Cabrera, Juan; Fitzek, Frank

    2015-01-01

    such as the Raspberry Pi2 with four cores in the order of up to one full magnitude. The speed increase gain is even higher than the number of cores of the Raspberry Pi2 since the newly introduced approach exploits the cache architecture way better than by-the-book matrix operations. Copyright © 2015 by the Institute...
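
    The excerpt above does not include the paper's parallelization details, but the core observation, that network coding reduces to matrix operations, is easy to sketch: encoding a generation of packets is one matrix multiplication and decoding is Gaussian elimination, shown here over GF(2) with NumPy. The coefficient matrix is a fixed full-rank example rather than a random draw.

    ```python
    import numpy as np

    def gf2_solve(A, B):
        """Solve A X = B over GF(2) by Gauss-Jordan elimination (A invertible)."""
        A = A.copy() % 2
        B = B.copy() % 2
        n = A.shape[0]
        for col in range(n):
            pivot = col + int(np.argmax(A[col:, col]))  # first row with a 1
            if A[pivot, col] == 0:
                raise ValueError("matrix is singular over GF(2)")
            A[[col, pivot]] = A[[pivot, col]]
            B[[col, pivot]] = B[[pivot, col]]
            for r in range(n):
                if r != col and A[r, col]:
                    A[r] ^= A[col]
                    B[r] ^= B[col]
        return B

    rng = np.random.default_rng(1)
    packets = rng.integers(0, 2, size=(4, 64), dtype=np.uint8)  # one generation

    C = np.array([[1, 0, 1, 1],     # coding coefficients, full rank over GF(2)
                  [0, 1, 1, 0],
                  [1, 1, 0, 0],
                  [0, 0, 1, 1]], dtype=np.uint8)

    coded = (C @ packets) % 2       # encoding: one matrix multiplication
    decoded = gf2_solve(C, coded)   # decoding: elimination on the same matrix
    assert np.array_equal(decoded, packets)
    print("decoded all", packets.shape[0], "packets correctly")
    ```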

  13. Lunar Navigation Architecture Design Considerations

    Science.gov (United States)

    D'Souza, Christopher; Getchius, Joel; Holt, Greg; Moreau, Michael

    2009-01-01

    The NASA Constellation Program is aiming to establish a long-term presence on the lunar surface. The Constellation elements (Orion, Altair, Earth Departure Stage, and Ares launch vehicles) will require a lunar navigation architecture for navigation state updates during lunar-class missions. Orion in particular has baselined earth-based ground direct tracking as the primary source for much of its absolute navigation needs. However, due to the uncertainty in the lunar navigation architecture, the Orion program has had to make certain assumptions about the capabilities of such architectures in order to adequately scale the vehicle design trade space. The following paper outlines lunar navigation requirements, the Orion program assumptions, and the impacts of these assumptions on the lunar navigation architecture design. The selection of potential sites was based upon geometric baselines, logistical feasibility, redundancy, and abort support capability. Simulated navigation covariances mapped to entry-interface flight-path-angle uncertainties were used to evaluate knowledge errors. A minimum ground station architecture was identified consisting of Goldstone, Madrid, Canberra, Santiago, Hartebeeshoek, Dongora, Hawaii, Guam, and Ascension Island (or the geometric equivalent).

  14. Monte Carlo method implemented in a finite element code with application to dynamic vacuum in particle accelerators

    CERN Document Server

    Garion, C

    2009-01-01

    Modern particle accelerators require UHV conditions during their operation. In the accelerating cavities, breakdowns can occur, releasing large amounts of gas into the vacuum chamber. To determine the pressure profile along the cavity as a function of time, the time-dependent behaviour of the gas has to be simulated. To do that, it is useful to apply an accurate three-dimensional method, such as test-particle Monte Carlo. In this paper, a time-dependent test-particle Monte Carlo method is used. It has been implemented in a finite element code, CASTEM. The principle is to track a sample of molecules over time. The complex geometry of the cavities can be created either in the FE code or in a CAD software (CATIA in our case). The interface between the two softwares, to export the geometry from CATIA to CASTEM, is given. The algorithm of particle tracking for collisionless flow in the FE code is shown. Thermal outgassing, pumping surfaces and electron and/or ion stimulated desorption can all be generated as well as differ...
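
    As a simplified, self-contained illustration of collisionless test-particle tracking with diffuse (cosine-law) wall re-emission, the sketch below estimates the transmission probability of a unit-radius cylindrical tube. The CASTEM implementation works on finite element geometry and is not reproduced here; the geometry, sample size and seed are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def cosine_emission(normal):
        """Sample a direction with a cosine distribution about a unit normal."""
        a = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        t1 = np.cross(normal, a)
        t1 /= np.linalg.norm(t1)
        t2 = np.cross(normal, t1)
        u, phi = rng.random(), 2.0 * np.pi * rng.random()
        st = np.sqrt(u)  # sin(theta) for the cosine law
        return st * np.cos(phi) * t1 + st * np.sin(phi) * t2 + np.sqrt(1.0 - u) * normal

    def transmission_probability(L, n_particles=20000):
        """Fraction of molecules entering at z=0 that leave through z=L."""
        transmitted = 0
        for _ in range(n_particles):
            r, phi = np.sqrt(rng.random()), 2.0 * np.pi * rng.random()
            pos = np.array([r * np.cos(phi), r * np.sin(phi), 0.0])
            d = cosine_emission(np.array([0.0, 0.0, 1.0]))
            while True:
                # Free flight: distance to the wall (x^2 + y^2 = 1) and both ends.
                a = d[0]**2 + d[1]**2
                b = 2.0 * (pos[0] * d[0] + pos[1] * d[1])
                c = pos[0]**2 + pos[1]**2 - 1.0
                t_wall = np.inf
                if a > 0.0:
                    t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
                    if t > 1e-12:
                        t_wall = t
                t_out = (L - pos[2]) / d[2] if d[2] > 0 else np.inf
                t_back = -pos[2] / d[2] if d[2] < 0 else np.inf
                t = min(t_wall, t_out, t_back)
                if t == t_out:
                    transmitted += 1
                    break
                if t == t_back:
                    break
                pos = pos + t * d                       # hit the wall...
                pos[:2] /= np.linalg.norm(pos[:2])      # ...stay exactly on it
                d = cosine_emission(np.array([-pos[0], -pos[1], 0.0]))
        return transmitted / n_particles

    print("transmission probability for L/R = 1:", transmission_probability(1.0))
    ```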

  15. Architectures Toward Reusable Science Data Systems

    Science.gov (United States)

    Moses, John

    2015-01-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building systems for satellite data processing since the first Earth observing satellites launched, and is continuing development of systems to support NASA science research and NOAA's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, and data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience we expect to find architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  16. Information architecture for building digital library | Obuh ...

    African Journals Online (AJOL)

    The paper provided an overview of constituent elements of a digital library and explained the underlying information architecture and building blocks for a digital library. It specifically proffered meaning to the various elements or constituents of a digital library system. The paper took a look at the structure of information as a ...

  17. Memristor-Based Synapse Design and Training Scheme for Neuromorphic Computing Architecture

    Science.gov (United States)

    2012-06-01

    system level built upon the conventional Von Neumann computer architecture [2][3]. Developing the neuromorphic architecture at chip level by... creation of memristor-based neuromorphic computing architecture. Rather than the existing crossbar-based neuron network designs, we focus on memristor

  18. Early modern architecture : how to prolong a lifespan

    NARCIS (Netherlands)

    Jonge, de W.

    1993-01-01

    In Modern Movement architecture, the structural elements of a building - mostly concrete or steel frames - form an indissoluble part of the original design approach. Therefore, such elements are part of the historic value of these buildings. At the same time, modern architects strove after minimal

  19. A Systematic Hardware Sharing Method for Unified Architecture Design of H.264 Transforms

    Directory of Open Access Journals (Sweden)

    Po-Hung Chen

    2015-01-01

    Full Text Available Multitransform techniques have been widely used in modern video coding and have better compression efficiency than the conventionally used single-transform technique. However, every transform needs a corresponding hardware implementation, which results in a high hardware cost for multiple transforms. A novel method that includes a five-step operation-sharing synthesis and architecture-unification techniques is proposed to systematically share the hardware and reduce the cost of multitransform coding. In order to demonstrate the effectiveness of the method, a unified architecture is designed using the method for all six transforms involved in the H.264 video codec: the 2D 4 × 4 forward and inverse integer transforms, the 2D 4 × 4 and 2 × 2 Hadamard transforms, and the 1D 8 × 8 forward and inverse integer transforms. Firstly, the six H.264 transform architectures are designed at low cost using the proposed five-step operation-sharing synthesis technique. Secondly, the proposed architecture-unification technique further unifies these six transform architectures into a low-cost hardware-unified architecture. The unified architecture requires only 28 adders, 16 subtractors, 40 shifters, and a proposed mux-based routing network, and the gate count is only 16308. The unified architecture processes 8 pixels/clock-cycle at up to 275 MHz, which is equal to 707 Full-HD 1080p frames/second.
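
    Among the six transforms unified above, the 2D 4 × 4 forward integer transform is defined in the H.264/AVC standard as Y = Cf X Cf^T with a small integer matrix, which is what makes adder/shifter sharing possible in the first place. A minimal NumPy check of that definition (the normalisation, folded into quantisation in the codec, is omitted):

    ```python
    import numpy as np

    # H.264/AVC 4x4 forward integer "core" transform matrix.
    Cf = np.array([[1,  1,  1,  1],
                   [2,  1, -1, -2],
                   [1, -1, -1,  1],
                   [1, -2,  2, -1]])

    def forward_4x4(X):
        """Y = Cf @ X @ Cf.T, computed in integer arithmetic only."""
        return Cf @ X @ Cf.T

    rng = np.random.default_rng(3)
    X = rng.integers(-128, 128, size=(4, 4))    # a residual block
    Y = forward_4x4(X)

    # The transform is exactly invertible up to the known scaling.
    X_back = np.linalg.inv(Cf) @ Y @ np.linalg.inv(Cf.T)
    assert np.allclose(X_back, X)
    print(Y)
    ```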

  20. Computer modelling of the WWER fuel elements under high burnup conditions by the computer codes PIN-W and RODQ2D

    Energy Technology Data Exchange (ETDEWEB)

    Valach, M; Zymak, J; Svoboda, R [Nuclear Research Inst. Rez plc, Rez (Czech Republic)

    1997-08-01

    This paper presents the development status of the computer codes for modelling the thermomechanical behaviour of WWER fuel elements under high-burnup conditions at the Nuclear Research Institute Rez. The emphasis is on the analysis of the results of the parametric calculations performed with the programmes PIN-W and RODQ2D, rather than on their detailed theoretical description. Several new optional correlations for the UO2 thermal conductivity, including the degradation effect caused by burnup, were implemented into both codes. Examples of the performed calculations document the differences between the previous and new versions of both programmes. Some recommendations for further development of the codes are given in the conclusion. (author). 6 refs, 9 figs.

  1. Computer modelling of the WWER fuel elements under high burnup conditions by the computer codes PIN-W and RODQ2D

    International Nuclear Information System (INIS)

    Valach, M.; Zymak, J.; Svoboda, R.

    1997-01-01

    This paper presents the development status of the computer codes for modelling the thermomechanical behaviour of WWER fuel elements under high-burnup conditions at the Nuclear Research Institute Rez. The emphasis is on the analysis of the results of the parametric calculations performed with the programmes PIN-W and RODQ2D, rather than on their detailed theoretical description. Several new optional correlations for the UO2 thermal conductivity, including the degradation effect caused by burnup, were implemented into both codes. Examples of the performed calculations document the differences between the previous and new versions of both programmes. Some recommendations for further development of the codes are given in the conclusion. (author). 6 refs, 9 figs
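
    The record does not give the new conductivity correlations themselves, but the general shape such models take can be sketched: a phonon-conduction term 1/(A + B·T) whose thermal resistivity grows with burnup. The coefficients below are illustrative placeholders, not the PIN-W or RODQ2D correlations.

```python
def uo2_thermal_conductivity(T, burnup):
    """Illustrative UO2 thermal conductivity lambda(T, Bu) in W/(m.K).

    Shape only: a phonon-conduction term 1/(A + B*T) whose thermal
    resistivity grows with burnup. The coefficients below are placeholder
    values for demonstration, NOT the PIN-W or RODQ2D correlations.
    T in kelvin, burnup in MW.d/kg(HM).
    """
    A = 0.0375              # m.K/W, lattice resistivity term (placeholder)
    B = 2.165e-4            # m/W, temperature coefficient (placeholder)
    dA_per_burnup = 0.0016  # added resistivity per MW.d/kg (placeholder)
    return 1.0 / (A + dA_per_burnup * burnup + B * T)

# Degradation with burnup at 1000 K: fresh fuel vs 50 MW.d/kg
print(uo2_thermal_conductivity(1000.0, 0.0), uo2_thermal_conductivity(1000.0, 50.0))
```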

  2. Space and Architecture's Current Line of Research? A Lunar Architecture Workshop With An Architectural Agenda.

    Science.gov (United States)

    Solomon, D.; van Dijk, A.

    space context that will be useful on Earth on a conceptual and practical level? * In what ways could architecture's field of reference offer building on the Moon (and other celestial bodies) a paradigm shift? In addition to their models and designs, workshop participants will begin authoring a design recommendation for the building of (infra-)structures and habitats on celestial bodies, in particular the Moon and Mars. The design recommendation, a substantiated aesthetic code of conduct (not legally binding), will address long-term planning and incorporate issues of sustainability, durability, bio-diversity, infrastructure, change, and techniques that lend themselves to Earth-bound applications. It will also address the cultural implications that architectural design might have within the context of space exploration. The design recommendation will ultimately be presented for peer review to both the space and architecture communities. What would the endorsement of such a document by the architectural community mean to the space community? The Lunar Architecture Workshop is conceptualised, produced and organised by (in alphabetical order): Alexander van Dijk, Art Race in Space; Barbara Imhof, ESCAPE*spHERE, Vienna University of Technology, Institute for Design and Building Construction, Vienna; Bernard Foing, ESA SMART1 Project Scientist; Susmita Mohanty, MoonFront, LLC; Hans Schartner, Vienna University of Technology, Institute for Design and Building Construction; Debra Solomon, Art Race in Space, Dutch Art Institute; Paul van Susante, Lunar Explorers Society. Workshop locations: ESTEC, Noordwijk, NL and V2_Lab, Rotterdam, NL. Workshop dates: June 3-16, 2002 (a Call for Participation will be made in March-April 2002.)

  3. Modelling the attenuation in the ATHENA finite elements code for the ultrasonic testing of austenitic stainless steel welds.

    Science.gov (United States)

    Chassignole, B; Duwig, V; Ploix, M-A; Guy, P; El Guerjouma, R

    2009-12-01

    Multipass welds made in austenitic stainless steel in the primary circuit of nuclear power plants with pressurized water reactors are characterized by an anisotropic and heterogeneous structure that disturbs ultrasonic propagation and makes ultrasonic non-destructive testing difficult. The ATHENA 2D finite element simulation code was developed to help understand the various physical phenomena at play. In this paper, we describe the attenuation model implemented in this code to account for the wave scattering phenomenon in polycrystalline materials. This model is based in particular on the optimization of two tensors that characterize the material, using experimental values of ultrasonic velocities and attenuation coefficients. Three experimental configurations, two of which are representative of the industrial weld-assessment case, are studied with a view to validating the model through comparison with the simulation results. We thus provide quantitative proof that taking the attenuation into account in the ATHENA code dramatically improves the results in terms of echo amplitude. The association of the code with a detailed characterization of a weld's structure constitutes a remarkable breakthrough in the interpretation of ultrasonic testing of this type of component.

  4. Language-based support for service oriented architectures

    DEFF Research Database (Denmark)

    Giambiagi, Pablo; Owe, Olaf; Ravn, Anders Peter

    2006-01-01

    The fast evolution of the Internet has popularized service-oriented architectures (SOA) with their promise of dynamic IT-supported inter-business collaborations. Yet this popularity is not reflected in the number of actual applications using the architecture. Programming models in use today make...... a poor match for the distributed, loosely-coupled, document-based nature of SOA. The gap is actually increasing. For example, interoperability between different organizations requires contracts to reduce risks. Thus, high-level models of contracts are making their way into service-oriented architectures......, but application developers are still left to their own devices when it comes to writing code that will comply with a contract. This paper surveys existing and future directions regarding language-based solutions to the above problem....

  5. Fundamentals of computer architecture and design

    CERN Document Server

    Bindal, Ahmet

    2017-01-01

    This textbook provides semester-length coverage of computer architecture and design, providing a strong foundation for students to understand modern computer system architecture and to apply these insights and principles to future computer designs. It is based on the author's decades of industrial experience with computer architecture and design, as well as with teaching students focused on pursuing careers in computer engineering. Unlike a number of existing textbooks for this course, this one focuses not only on CPU architecture but also covers in great detail system buses, peripherals and memories. This book teaches every element in a computing system in two steps. First, it introduces the functionality of each topic (and its subtopics) and then goes into "from-scratch design" of a particular digital block from its architectural specifications using timing diagrams. The author describes how the data-path of a certain digital block is generated using timing diagrams, a method which most textbo...

  6. Future city architecture for optimal living

    CERN Document Server

    Pardalos, Panos

    2015-01-01

    This book offers a wealth of interdisciplinary approaches to urbanization strategies in architecture centered on growing concerns about the future of cities and their impacts on essential elements of architectural optimization, livability, energy consumption and sustainability. It portrays the urban condition in architectural terms, as well as the living condition in human terms, both of which can be optimized by mathematical modeling as well as mathematical calculation and assessment. Special features include: new research on the construction of future cities and smart cities; and discussions of sustainability and new technologies designed to advance ideas to future city developments. Graduate students and researchers in architecture, engineering, mathematical modeling, and building physics will be engaged by the contributions written by eminent international experts from a variety of disciplines including architecture, engineering, modeling, optimization, and relat...

  7. An empirical study of software architectures' effect on product quality

    DEFF Research Database (Denmark)

    Hansen, Klaus Marius; Jonasson, Kristjan; Neukirchen, Helmut

    2011-01-01

    Software architectures shift the focus of developers from lines-of-code to coarser-grained components and their interconnection structure. Unlike fine-grained objects, these components typically encompass business functionality and need to be aware of the underlying business processes. Hence......, the interface of a component should reflect relevant parts of the business process and the software architecture should emphasize the coordination among components. To shed light on these issues, we provide a framework for component-based software architectures focusing on the process perspective. The interface...

  8. The use of the MCNP code for the quantitative analysis of elements in geological formations

    Energy Technology Data Exchange (ETDEWEB)

    Cywicka-Jakiel, T.; Woynicka, U. [The Henryk Niewodniczanski Institute of Nuclear Physics, Krakow (Poland); Zorski, T. [University of Mining and Metallurgy, Faculty of Geology, Geophysics and Environmental Protection, Krakow (Poland)

    2003-07-01

    Monte Carlo modelling calculations supporting spectrometric neutron-gamma (SNGL) borehole logging have been performed using the MCNP code. SNGL enables lithology identification through the quantitative analysis of the elements in geological formations and can thus be very useful for the oil and gas industry as well as for prospecting potential host rocks for radioactive waste disposal. In the SNGL experiment, gamma-rays induced by neutron interactions with the nuclei of the rock elements are detected using a gamma-ray probe of complex mechanical and electronic construction. The probe has to be calibrated over a wide range of elemental concentrations to assure proper quantitative analysis. The Polish Calibration Station in Zielona Gora is equipped with a limited number of calibration standards. An extension of the experimental calibration, and an evaluation of the effect of so-called side effects (for example, borehole and formation salinity variation) on the accuracy of the SNGL method, can be performed with the MCNP code. Preliminary MCNP results showing the effect of borehole and formation fluid salinity variations on the accuracy of silicon (Si), calcium (Ca) and iron (Fe) content determination are presented in the paper. The main effort has been focused on modelling the complex SNGL probe situated in a fluid-filled borehole, surrounded by a geological formation. A track-length estimate of the photon flux from the (n,gamma) interactions as a function of gamma-ray energy was used. Calculations were run on a PC with an AMD Athlon 1.33 GHz processor. Neutron and photon cross-section libraries were taken from the MCNP4c package, based mainly on the ENDF/B-6, ENDF/B-5 and MCPLIB02 data. The results of the simulated experiment are in conformity with the results of the real experiment performed with the main lithology models (sandstones, limestones and dolomite). (authors)

  9. The use of the MCNP code for the quantitative analysis of elements in geological formations

    International Nuclear Information System (INIS)

    Cywicka-Jakiel, T.; Woynicka, U.; Zorski, T.

    2003-01-01

    Monte Carlo modelling calculations supporting spectrometric neutron-gamma (SNGL) borehole logging have been performed using the MCNP code. SNGL enables lithology identification through the quantitative analysis of the elements in geological formations and can thus be very useful for the oil and gas industry as well as for prospecting potential host rocks for radioactive waste disposal. In the SNGL experiment, gamma-rays induced by neutron interactions with the nuclei of the rock elements are detected using a gamma-ray probe of complex mechanical and electronic construction. The probe has to be calibrated over a wide range of elemental concentrations to assure proper quantitative analysis. The Polish Calibration Station in Zielona Gora is equipped with a limited number of calibration standards. An extension of the experimental calibration, and an evaluation of the effect of so-called side effects (for example, borehole and formation salinity variation) on the accuracy of the SNGL method, can be performed with the MCNP code. Preliminary MCNP results showing the effect of borehole and formation fluid salinity variations on the accuracy of silicon (Si), calcium (Ca) and iron (Fe) content determination are presented in the paper. The main effort has been focused on modelling the complex SNGL probe situated in a fluid-filled borehole, surrounded by a geological formation. A track-length estimate of the photon flux from the (n,gamma) interactions as a function of gamma-ray energy was used. Calculations were run on a PC with an AMD Athlon 1.33 GHz processor. Neutron and photon cross-section libraries were taken from the MCNP4c package, based mainly on the ENDF/B-6, ENDF/B-5 and MCPLIB02 data. The results of the simulated experiment are in conformity with the results of the real experiment performed with the main lithology models (sandstones, limestones and dolomite). (authors)
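
    The key estimator named in the record is the track-length estimate of photon flux: each history contributes the length of track it lays down inside the tally cell, and the flux is the summed track length per unit volume per history. A toy 1-D slab version of that estimator follows (illustrative cross sections and geometry, not the MCNP model of the SNGL probe).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D slab: photons start at x=0 moving in +x with total cross section
# sigma_t (1/cm); we score the track length each history deposits inside a
# "tally cell" [x0, x1]. Flux estimate = total track length / (V * N).
sigma_t = 0.2          # illustrative total macroscopic cross section, 1/cm
absorption_prob = 0.5  # illustrative absorption fraction per collision
x0, x1, volume = 5.0, 6.0, 1.0   # tally cell bounds and volume (unit area)
N = 100_000

total_track = 0.0
for _ in range(N):
    x, mu = 0.0, 1.0                         # position and direction cosine
    while True:
        s = rng.exponential(1.0 / sigma_t)   # distance to next collision
        x_new = x + mu * s
        # Overlap of this flight path with the tally cell:
        lo, hi = sorted((x, x_new))
        total_track += max(0.0, min(hi, x1) - max(lo, x0))
        x = x_new
        if x < 0.0 or x > 20.0:              # escaped the slab
            break
        if rng.random() < absorption_prob:   # absorbed
            break
        mu = rng.choice([-1.0, 1.0])         # isotropic scatter in 1-D

print("track-length flux estimate:", total_track / (volume * N))
```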

  10. Proposed hardware architectures of particle filter for object tracking

    Science.gov (United States)

    Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED

    2012-12-01

    In this article, efficient hardware architectures for the particle filter (PF) are presented. We propose three different architectures for Sequential Importance Resampling Filter (SIRF) implementation. The first architecture is a two-step sequential PF machine, in which particle sampling, weighting, and output calculations are carried out in parallel during the first step, followed by sequential resampling in the second step. For the weight computation step, a piecewise-linear function is used instead of the classical exponential function; this decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than a serial, architecture and targets a balance between hardware resources and speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and a central unit. All the proposed architectures are captured in VHDL, synthesized using the Xilinx environment, and verified using the ModelSim simulator. Synthesis results confirmed the resource reduction and speed-up advantages of our architectures.
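
    As a software reference for the steps these architectures implement, namely sampling, weighting, and resampling, the sketch below runs a 1-D Sequential Importance Resampling filter with a piecewise-linear stand-in for the exponential likelihood, in the spirit of the paper's simplification. The random-walk model and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def piecewise_linear_weight(err, scale=3.0):
    """Triangle-shaped stand-in for exp(-err^2/2): cheap in hardware."""
    return np.maximum(0.0, 1.0 - np.abs(err) / scale)

def sirf_step(particles, z, sigma_proc=0.5):
    """One Sequential Importance Resampling step for a 1-D random-walk model."""
    # 1) Sample: propagate particles through the process model.
    particles = particles + rng.normal(0.0, sigma_proc, particles.shape)
    # 2) Weight: piecewise-linear likelihood of the measurement z.
    w = piecewise_linear_weight(z - particles)
    w = np.full_like(w, 1.0) if w.sum() == 0.0 else w   # degenerate case
    w = w / w.sum()
    # 3) Output: weighted mean state estimate.
    estimate = float(np.dot(w, particles))
    # 4) Resample (systematic): duplicate high-weight particles.
    positions = (rng.random() + np.arange(len(w))) / len(w)
    idx = np.clip(np.searchsorted(np.cumsum(w), positions), 0, len(w) - 1)
    return particles[idx], estimate

particles = rng.normal(0.0, 1.0, 512)
for z in (0.2, 0.5, 0.9, 1.4):          # synthetic measurements
    particles, est = sirf_step(particles, z)
    print(f"measurement {z:.1f} -> estimate {est:.2f}")
```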

  11. Guided waves dispersion equations for orthotropic multilayered pipes solved using standard finite elements code.

    Science.gov (United States)

    Predoi, Mihai Valentin

    2014-09-01

    The dispersion curves for hollow multilayered cylinders are prerequisites in any practical guided-wave application on such structures. The equations for homogeneous isotropic materials were established more than 120 years ago, yet the difficulties in finding numerical solutions to the analytic expressions remain considerable, especially if the materials are orthotropic and visco-elastic, as in the composites used for pipes in recent decades. Among other numerical techniques, the semi-analytical finite element method has proven its capability of solving this problem. Two possibilities exist for modelling the finite element eigenvalue problem: a two-dimensional cross-section model of the pipe, or a radial segment model intersecting the layers between the inner and the outer radius of the pipe. The latter possibility is adopted here, and distinct differential problems are deduced for the longitudinal L(0,n), torsional T(0,n) and flexural F(m,n) modes. Eigenvalue problems are deduced for the three mode classes, offering explicit forms of each coefficient of the matrices used in an available general-purpose finite element code. Comparisons with existing solutions for pipes filled with non-linear viscoelastic fluid, for visco-elastic coatings, and for a fully orthotropic hollow cylinder all prove the reliability and ease of use of this method. Copyright © 2014 Elsevier B.V. All rights reserved.
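
    In semi-analytical finite element formulations of this kind, the dispersion relation at each angular frequency ω typically reduces to a quadratic eigenvalue problem in the axial wavenumber k, of the form (K1 + ikK2 + k²K3 − ω²M)U = 0. A minimal sketch of solving it by companion linearization follows; the matrices here are random placeholders, whereas the paper's matrices come from its radial-segment elements.

```python
import numpy as np
from scipy.linalg import eig

def safe_wavenumbers(K1, K2, K3, M, omega):
    """Solve (K1 + i*k*K2 + k^2*K3 - omega^2*M) U = 0 for complex wavenumbers k
    by companion linearization of the quadratic eigenvalue problem."""
    n = K1.shape[0]
    A0, A1, A2 = K1 - omega**2 * M, 1j * K2, K3
    Z, I = np.zeros((n, n), dtype=complex), np.eye(n)
    # With z = [U; k U], the quadratic problem becomes A z = k B z.
    A = np.block([[Z, I], [-A0, -A1]])
    B = np.block([[I, Z], [Z, A2]])
    k, z = eig(A, B)
    return k, z[:n, :]        # wavenumbers and the corresponding mode shapes U

# Small random placeholder matrices, only to exercise the solver.
rng = np.random.default_rng(0)
n = 4

def sym_pos(n):
    a = rng.normal(size=(n, n))
    return a @ a.T + n * np.eye(n)

K1, K3, M = sym_pos(n), sym_pos(n), sym_pos(n)
S = rng.normal(size=(n, n))
K2 = S - S.T                  # K2 is typically skew-symmetric in SAFE models
ks, modes = safe_wavenumbers(K1, K2, K3, M, omega=2.0)
print(np.round(np.sort_complex(ks)[:4], 3))
```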

  12. Validations and applications of the FEAST code

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Z.; Tayal, M.; Lau, J.H.; Evinou, D. [Atomic Energy of Canada Limited, Mississauga, Ontario (Canada); Jun, J.S. [Korea Atomic Energy Research Inst. (Korea, Republic of)

    1999-07-01

    The FEAST (Finite Element Analysis for STresses) code is part of a suite of computer codes that are used to assess the structural integrity of CANDU fuel elements and bundles. A detailed validation of the FEAST code was recently performed. The FEAST calculations are in good agreement with a variety of analytical solutions (18 cases) for stresses, strains and displacements. This consistency shows that the FEAST code correctly incorporates the fundamentals of stress analysis. Further, the calculations of the FEAST code match the variations in axial and hoop strain profiles measured by strain gauges near the sheath-endcap weld during an out-reactor compression test. The code calculations are also consistent with photoelastic measurements in simulated endcaps. (author)

  13. Validations and applications of the FEAST code

    International Nuclear Information System (INIS)

    Xu, Z.; Tayal, M.; Lau, J.H.; Evinou, D.; Jun, J.S.

    1999-01-01

    The FEAST (Finite Element Analysis for STresses) code is part of a suite of computer codes that are used to assess the structural integrity of CANDU fuel elements and bundles. A detailed validation of the FEAST code was recently performed. The FEAST calculations are in good agreement with a variety of analytical solutions (18 cases) for stresses, strains and displacements. This consistency shows that the FEAST code correctly incorporates the fundamentals of stress analysis. Further, the calculations of the FEAST code match the variations in axial and hoop strain profiles measured by strain gauges near the sheath-endcap weld during an out-reactor compression test. The code calculations are also consistent with photoelastic measurements in simulated endcaps. (author)

  14. Network topology exploration of mesh-based coarse-grain reconfigurable architectures

    NARCIS (Netherlands)

    Bansal, N.; Gupta, S.; Dutt, N.D.; Nicolau, A.; Gupta, R.

    2004-01-01

    Several coarse-grain reconfigurable architectures proposed recently consist of a large number of processing elements (PEs) connected in a mesh-like network topology. We study the effects of three aspects of network topology exploration on the performance of applications on these architectures: (a)

  15. Utilizing elements of the CSAU phenomena identification and ranking table (PIRT) to qualify a PWR non-LOCA transients system code

    Energy Technology Data Exchange (ETDEWEB)

    Greene, K.R.; Fletcher, C.D.; Gottula, R.C.; Lindquist, T.R.; Stitt, B.D. [Framatome ANP, Richland, WA (United States)

    2001-07-01

    Licensing analyses of Nuclear Regulatory Commission (NRC) Standard Review Plan (SRP) Chapter 15 non-LOCA transients are an important part of establishing operational safety limits and design limits for nuclear power plants. The applied codes and methods are generally qualified using traditional methods of benchmarking and assessment, sample problems, and demonstration of conservatism. Rigorous formal methods for developing codes and methodology have been created and applied to qualify realistic methods for Large Break Loss-of-Coolant Accidents (LBLOCAs). This methodology, Code Scaling, Applicability, and Uncertainty (CSAU), is a very demanding, resource-intensive process to apply. It would be challenging to apply a comprehensive and complete CSAU level of analysis, individually, to each of the more than 30 non-LOCA transients that comprise Chapter 15 events. However, certain elements of the process can be easily adapted to improve the quality of the codes and methods used to analyze non-LOCA transients. One of these elements is the Phenomena Identification and Ranking Table (PIRT). This paper presents the results of an informally constructed PIRT that applies to non-LOCA transients for Pressurized Water Reactors (PWRs) of the Westinghouse and Combustion Engineering designs. A group of experts in thermal-hydraulics and safety analysis identified and ranked the phenomena. To begin the process, the PIRT was performed individually by each expert. Then, through group interaction and discussion, a consensus was reached on both the significant phenomena and the appropriate ranking. The paper also discusses using the PIRT as an aid to qualify a 'conservative' system code and methodology. Once agreement was obtained on the phenomena and ranking, the table was divided into six functional groups, by the nature of the transients, along the same lines as Chapter 15. Then, assessment and disposition of the significant phenomena was performed. The PIRT and

  16. NCEL: two dimensional finite element code for steady-state temperature distribution in seven rod-bundle

    International Nuclear Information System (INIS)

    Hrehor, M.

    1979-01-01

    The paper deals with an application of the finite element method to the heat transfer study in seven-pin models of an LMFBR fuel subassembly. The developed code NCEL solves the two-dimensional steady-state heat conduction equation over the whole subassembly model cross-section and enables analysis of the thermal behaviour in both normal and accidental operational conditions, such as eccentricity of the central rod, or full or partial (porous) blockage of part of the cross-flow area. Heat removal is simulated by heat sinks in the coolant under a subchannel slug-flow approximation.
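
    A minimal numerical counterpart of what such a code solves is the 2-D steady-state conduction equation −k∇²T = q, with positive sources for the pins and negative sinks for the coolant. For brevity the sketch below uses a finite-difference discretization on a unit square rather than NCEL's finite elements, and all geometry and constants are illustrative.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

# -k * laplacian(T) = q on a unit square with T = 0 on the boundary.
# Positive q models heat generation (a fuel pin); negative q models a
# coolant heat sink, as in the slug-flow approximation described above.
n, k = 41, 5.0                     # grid points per side, conductivity W/(m.K)
h = 1.0 / (n - 1)
q = np.zeros((n, n))
q[n // 2 - 2 : n // 2 + 3, n // 2 - 2 : n // 2 + 3] = 2.0e4   # central "pin"
q[5:10, 5:10] = -1.0e4                                        # coolant sink

A = lil_matrix((n * n, n * n))
b = np.zeros(n * n)
idx = lambda i, j: i * n + j
for i in range(n):
    for j in range(n):
        m = idx(i, j)
        if i in (0, n - 1) or j in (0, n - 1):   # Dirichlet boundary T = 0
            A[m, m] = 1.0
        else:                                    # 5-point Laplacian stencil
            A[m, m] = 4.0 * k / h**2
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                A[m, idx(i + di, j + dj)] = -k / h**2
            b[m] = q[i, j]

T = spsolve(csr_matrix(A), b).reshape(n, n)
print("peak temperature rise:", T.max())
```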

  17. A data acquisition architecture for the SSC

    International Nuclear Information System (INIS)

    Partridge, R.

    1990-01-01

    An SSC data acquisition architecture applicable to high-p_T detectors is described. The architecture is based upon a small set of design principles that were chosen to simplify communication between data acquisition elements while providing the required level of flexibility and performance. The architecture features an integrated system for data collection, event building, and communication with a large processing farm. The interface to the front-end electronics system is also discussed. A set of design parameters is given for a data acquisition system that should meet the needs of high-p_T detectors at the SSC

  18. Weight-4 Parity Checks on a Surface Code Sublattice with Superconducting Qubits

    Science.gov (United States)

    Takita, Maika; Corcoles, Antonio; Magesan, Easwar; Bronn, Nicholas; Hertzberg, Jared; Gambetta, Jay; Steffen, Matthias; Chow, Jerry

    We present a superconducting qubit quantum processor design amenable to the surface code architecture. In such an architecture, parity checks on the data qubits, performed by measuring their X- and Z-syndrome qubits, constitute a critical aspect. Here we show fidelities and outcomes of X- and Z-parity measurements performed on a syndrome qubit in a full plaquette consisting of one syndrome qubit coupled via bus resonators to four code qubits. Parities are measured after the four code qubits are prepared in each of sixteen initial states in each basis. The results show a strong dependence on the ZZ interaction between qubits on the same bus resonators. This work is supported by IARPA under Contract W911NF-10-1-0324.
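
    The textbook form of the weight-4 Z-parity check is four CNOTs that accumulate the joint Z-parity of the data qubits onto the syndrome qubit, which is then measured. A sketch using Qiskit (assumed available) is shown below; it is the idealized circuit, not the hardware-calibrated sequence of this work.

```python
from qiskit import QuantumCircuit

def z_parity_plaquette() -> QuantumCircuit:
    """Weight-4 Z-parity check: data qubits 0-3, syndrome qubit 4.
    The syndrome measurement returns (z0 + z1 + z2 + z3) mod 2."""
    qc = QuantumCircuit(5, 1)
    qc.reset(4)                # syndrome starts in |0>
    for data in range(4):
        qc.cx(data, 4)         # each CNOT folds one qubit's Z-parity in
    qc.measure(4, 0)
    return qc

qc = z_parity_plaquette()
print(qc.draw())

# An X-parity check is the same idea conjugated by Hadamards on the
# syndrome qubit, with the CNOT direction reversed (syndrome as control).
```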

  19. RURAL CHURCHES, „PEARLS” OF RURAL ARCHITECTURE IN CRIȘANA AND MARAMUREȘ

    Directory of Open Access Journals (Sweden)

    Alexandru ILIEȘ

    2013-12-01

    In Romania in general, and in Crişana and Maramureș in particular, wooden churches are markers of local identity. Using tools and methods specific to geography and complementary fields, in conjunction with an interdisciplinary approach to this architectural heritage element, the wooden churches are analyzed from a tourism planning perspective. Each "land" and ethnographic area from the Tisa in the north to the Mureș in the south has a specific, identifiable architectural fingerprint in these "pearls" of Romanian folk architecture. This diversity favours tourism diversification and increases the attractiveness of a region or locality.

  20. Architectures for wrist-worn energy harvesting

    Science.gov (United States)

    Rantz, R.; Halim, M. A.; Xue, T.; Zhang, Q.; Gu, L.; Yang, K.; Roundy, S.

    2018-04-01

    This paper reports the simulation-based analysis of six dynamical structures with respect to their wrist-worn vibration energy harvesting capability. This work approaches the problem of maximizing energy harvesting potential at the wrist by considering multiple mechanical substructures; rotational and linear motion-based architectures are examined. Mathematical models are developed and experimentally corroborated. An optimization routine is applied to the proposed architectures to maximize average power output and allow for comparison. The addition of a linear spring element to the structures has the potential to improve power output; for example, in the case of rotational structures, a 211% improvement in power output was estimated under real walking excitation. The analysis concludes that a sprung rotational harvester architecture outperforms a sprung linear architecture by 66% when real walking data is used as input to the simulations.
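
    The figure of merit being optimized is the average electrical power extracted by a damped spring-mass structure under wrist motion. The sketch below computes it for a base-excited sprung linear architecture, lumping the electrical load into a viscous damper; the excitation and every constant are illustrative placeholders rather than the paper's models.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k, c_e, c_m = 0.005, 20.0, 0.05, 0.01   # kg, N/m, electrical/mechanical damping

def base_accel(t):
    """Stand-in for wrist acceleration during walking (~2 Hz arm swing)."""
    return 15.0 * np.sin(2 * np.pi * 2.0 * t)

def rhs(t, y):
    x, v = y                                # relative displacement, velocity
    a = -((c_e + c_m) / m) * v - (k / m) * x - base_accel(t)
    return [v, a]

t_end = 20.0
sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0], max_step=1e-3, dense_output=True)
t = np.linspace(5.0, t_end, 20_000)         # skip the start-up transient
v = sol.sol(t)[1]
# Power dissipated in the electrical damper approximates harvested power.
print("average harvested power [mW]:", 1e3 * np.mean(c_e * v**2))
```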

  1. The application of enterprise reference architecture in the financial industry

    NARCIS (Netherlands)

    Harmsen van der Beek - Hamer, ten W.; Trienekens, J.J.M.; Grefen, P.W.P.J.; Aier, S.

    2012-01-01

    Financial institutions are facing enormous challenges in business/IT alignment. Enterprise architecture (EA) is seen as key in addressing these challenges. Major issues still exist in EA design and realization. The concept of reference architecture is explored as one of the elements that

  2. Islamic Architecture and Arch

    Directory of Open Access Journals (Sweden)

    Mohammed Mahbubur Rahman

    2015-01-01

    The arch, an essential architectural element since the early civilizations, permitted the construction of lighter walls and vaults, often covering a large span. Visually it was an important decorative feature that was transmitted from architectural decoration to other forms of art worldwide. In the early Islamic period, Muslims absorbed elements from many civilizations, which they improved and re-introduced, helping to bring about the Renaissance. Arches appeared in the Mesopotamian, Indus, Egyptian, Babylonian, Greek and Assyrian civilizations, but the Romans applied the technique to a wide range of structures. The Muslims mastered the use and design of the arch, employing it for structural and functional purposes and progressively meeting decorative and symbolic purposes. Islamic architecture is characterized by arches employed in all types of buildings, the most common use being in arcades. This paper discusses this process of assimilation and charts how the arch contributed to other civilizations.

  3. Fast underdetermined BSS architecture design methodology for real time applications.

    Science.gov (United States)

    Mopuri, Suresh; Reddy, P Sreenivasa; Acharyya, Amit; Naik, Ganesh R

    2015-01-01

    In this paper, we propose a high-speed architecture design methodology for the Under-determined Blind Source Separation (UBSS) algorithm using our recently proposed high-speed Discrete Hilbert Transform (DHT), targeting real-time applications. In the UBSS algorithm, unlike typical BSS, the number of sensors is less than the number of sources, which is of more interest in real-time applications. The DHT architecture has been implemented based on a sub-matrix multiplication method to compute an M-point DHT, which uses the N-point architecture recursively, where M is an integer multiple of N. The DHT architecture and the state-of-the-art architecture are coded in VHDL for a 16-bit word length, and ASIC implementation is carried out using UMC 90-nm technology at VDD = 1 V and a 1 MHz clock frequency. The implementation and experimental comparison results show that the proposed DHT design is two times faster than the state-of-the-art architecture.
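
    As a functional reference for what the DHT hardware computes, the discrete Hilbert transform of a real sequence can be obtained from the imaginary part of the analytic signal. The numpy sketch below uses that frequency-domain construction (the same one scipy.signal.hilbert uses), not the paper's sub-matrix multiplication architecture.

```python
import numpy as np

def discrete_hilbert(x: np.ndarray) -> np.ndarray:
    """Discrete Hilbert transform via the analytic signal.

    Builds the analytic signal by zeroing negative frequencies (doubling
    positive ones) and returns its imaginary part, which is the DHT of x.
    """
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0              # Nyquist bin kept once for even lengths
        h[1 : n // 2] = 2.0
    else:
        h[1 : (n + 1) // 2] = 2.0
    return np.fft.ifft(X * h).imag

# Sanity check: the DHT of a cosine is the matching sine.
t = np.arange(256)
x = np.cos(2 * np.pi * 8 * t / 256)
print(np.allclose(discrete_hilbert(x), np.sin(2 * np.pi * 8 * t / 256), atol=1e-9))
```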

  4. A parallel 3-D discrete wavelet transform architecture using pipelined lifting scheme approach for video coding

    Science.gov (United States)

    Hegde, Ganapathi; Vaya, Pukhraj

    2013-10-01

    This article presents a parallel architecture for the 3-D discrete wavelet transform (3-DDWT). The proposed design is based on the 1-D pipelined lifting scheme. The architecture is fully scalable beyond the present coherent Daubechies (9, 7) filter bank. This 3-DDWT architecture has advantages such as no group-of-pictures restriction and reduced memory referencing, and it offers low power consumption, low latency and high throughput. The computing technique is based on the concept that the lifting scheme minimises the storage requirement. The application-specific integrated circuit implementation of the proposed architecture is synthesised using a 65 nm Taiwan Semiconductor Manufacturing Company standard cell library. It offers a speed of 486 MHz with a power consumption of 2.56 mW. This architecture is suitable for real-time video compression even with large frame dimensions.
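
    The lifting scheme referred to factors the (9, 7) filter bank into short predict/update steps that operate in place, which is what minimises storage. Below is a one-level 1-D sketch using the commonly quoted CDF 9/7 lifting coefficients; boundary handling is simplified to edge clamping rather than full symmetric extension.

```python
import numpy as np

# Commonly quoted CDF 9/7 lifting coefficients (as used in JPEG2000).
ALPHA, BETA = -1.586134342, -0.05298011854
GAMMA, DELTA = 0.8829110762, 0.4435068522
ZETA = 1.149604398

def dwt97_1d(x: np.ndarray):
    """One level of the (9,7) forward DWT via lifting; len(x) must be even."""
    s, d = x[0::2].astype(float).copy(), x[1::2].astype(float).copy()

    def nxt(a):   # neighbour at index n+1, clamped at the right edge
        return np.concatenate([a[1:], a[-1:]])

    def prv(a):   # neighbour at index n-1, clamped at the left edge
        return np.concatenate([a[:1], a[:-1]])

    d += ALPHA * (s + nxt(s))    # predict 1
    s += BETA * (d + prv(d))     # update 1
    d += GAMMA * (s + nxt(s))    # predict 2
    s += DELTA * (d + prv(d))    # update 2
    return ZETA * s, d / ZETA    # scaled approximation and detail bands

approx, detail = dwt97_1d(np.sin(np.linspace(0, 4 * np.pi, 64)))
print(len(approx), len(detail))  # 32 coefficients in each band
```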

  5. Development and Evaluation of Mould for Double Curved Concrete Elements

    DEFF Research Database (Denmark)

    Jepsen, Christian Raun; Kristensen, Mathias Kræmmergaard; Kirkegaard, Poul Henning

    2011-01-01

    Complex freeform architecture is one of the most striking trends in contemporary architecture. Architecture differs from traditional target industries of CAD/CAM technology in many ways, including aesthetics, statics, structural aspects, scale and manufacturing technologies. Designing a piece of freeform architecture in a CAD program is fairly easy, but the translation to a real piece of architecture can be difficult and expensive, and as traditional production methods for free-form architecture prove costly, architects and engineers are forced to simplify designs. Today, methods for manufacturing freeform concrete formwork are available, and more are being developed [1-4]. The common way of producing moulds for unique elements today is to manufacture one mould for each unique element using CNC milling in cheaper materials, but since the method is still labour intensive and produces a lot of waste...

  6. Defining The Energy Saving Potential of Architectural Design

    DEFF Research Database (Denmark)

    Naboni, Emanuele; Malcangi, Antonio; Zhang, Yi

    2015-01-01

    Designers, in response to codes or voluntary "green building" programs, are increasingly concerned with building energy demand reduction, but they are not fully aware of the energy saving potential of architectural design. According to the literature, building form, construction and material choices...... on sustainable design: "Design With Climate" by Olgyay (1963), which discussed strategies for climate-adapted architecture, and Lechner's "Heating, Cooling and Lighting" (1991), on how to reduce building energy needs by as much as 60-80 percent with proper architectural design decisions. Both books used...... behaviour. The research shows the best solution for each of the climates and compares them with Olgyay's findings. Finally, for each climate the energy saving potential is defined and then compared to Lechner's conclusions.

  7. A Model of Trusted Connection Architecture

    Directory of Open Access Journals (Sweden)

    Zhang Xun

    2017-01-01

    Because the traditional trusted network connection architecture (TNC) has limitations in dynamic network environments and in its support for user behavior, we extend TCA to propose a trusted connection architecture supporting behavior measurement (TCA-SBM), and give the structure diagram of the network architecture. By introducing user behavior measurement elements, TCA-SBM can measure the whole network periodically in the time dimension, and refine the measurement of network behavior in the measure dimension to conduct fine-grained dynamic trusted measurement. As a result, TCA-SBM enhances TCA's ability to adapt to dynamic changes in the network and makes up for the deficiency of the trusted computing framework in network connection.

  8. Self-Organizing Maps on the Cell Broadband Engine Architecture

    International Nuclear Information System (INIS)

    McConnell, Sabine M

    2010-01-01

    We present and evaluate novel parallel implementations of Self-Organizing Maps for the Cell Broadband Engine architecture. Motivated by the interactive nature of the data-mining process, we evaluate the scalability of the implementations on two clusters with different network characteristics and incarnations (PS3 console and PowerXCell 8i) of the architecture. Our implementations use varying combinations of the Power Processing Elements (PPEs) and Synergistic Processing Elements (SPEs) found in the Cell architecture. For a single processor, our implementation scaled well with the number of SPEs regardless of the incarnation. When combining multiple PS3 consoles, the synchronization over the slower network resulted in poor speedups and demonstrated that the use of such a low-cost cluster may be severely restricted, even without the use of SPEs. When using multiple SPEs on the PowerXCell 8i cluster, the speedup grew linearly with increasing number of SPEs for a given number of processors, and linearly up to a maximum with the number of processors for a given number of SPEs. Our implementation achieved a worst-case efficiency of 67% for the maximum number of processing elements involved in the computation, but consistently higher values for smaller numbers of processing elements, with speedups of up to 70.
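
    The kernel being parallelized across SPEs is the SOM training step: find the best-matching unit (BMU) for an input vector, then pull the BMU's neighbourhood toward the input. A serial numpy reference of that step follows; map size, learning rate and radius are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols, dim = 10, 10, 3
weights = rng.random((rows, cols, dim))   # the map's codebook vectors
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), -1)

def som_train_step(x, lr=0.1, radius=2.0):
    """One SOM update: locate the BMU, then update its neighbourhood."""
    # 1) Best-matching unit: smallest Euclidean distance to the input.
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # 2) Gaussian neighbourhood function on the map grid.
    grid_dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=2)
    h = np.exp(-grid_dist2 / (2.0 * radius**2))
    # 3) Pull weights toward the input, weighted by the neighbourhood.
    weights[...] += lr * h[..., None] * (x - weights)

for _ in range(1000):
    som_train_step(rng.random(dim))
```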

  9. Business model driven service architecture design for enterprise application integration

    OpenAIRE

    Gacitua-Decar, Veronica; Pahl, Claus

    2008-01-01

    Increasingly, organisations are using a Service-Oriented Architecture (SOA) as an approach to Enterprise Application Integration (EAI), which is required for the automation of business processes. This paper presents an architecture development process which guides the transition from business models to a service-based software architecture. The process is supported by business reference models and patterns. Firstly, the business process models are enhanced with domain model elements, applicat...

  10. Analysis of Damage in Laminated Architectural Glazing Subjected to Wind Loading and Windborne Debris Impact

    Directory of Open Access Journals (Sweden)

    Daniel S. Stutts

    2013-05-01

    Wind loading and windborne debris (missile) impact are the two primary mechanisms that result in window glazing damage during hurricanes. Windborne debris is categorized into two types: small hard missiles, such as roof gravel, and large soft missiles representing lumber from wood-framed buildings. Laminated architectural glazing (LAG) may be used in buildings where impact resistance is needed. The glass plies in LAG undergo internal damage before total failure. The bulk of the published work on this topic deals either with the stress and dynamic analyses of undamaged LAG or with the total failure of LAG. Here, the pre-failure damage response of LAG due to the combination of wind loading and windborne debris impact is studied. A continuum damage mechanics (CDM) based constitutive model is developed and implemented via an axisymmetric finite element code to study the failure and damage behavior of laminated architectural glazing subjected to the combined loading of wind and windborne debris impact. The effect of geometric and material properties on the damage pattern is studied parametrically.
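
    In CDM-based models of this kind, an internal damage variable D ∈ [0, 1] degrades the effective stiffness, σ = (1 − D)Eε, and grows irreversibly once a strain threshold is exceeded. The scalar sketch below illustrates such a law; the linear-softening evolution and its constants are placeholders, not the paper's glass model.

```python
import numpy as np

E = 70e9            # glass Young's modulus, Pa
eps_0 = 5e-4        # damage initiation strain (placeholder)
eps_f = 2e-3        # strain at full damage (placeholder)

def damage(eps):
    """Linear-softening damage variable D(eps) in [0, 1]."""
    return np.clip((eps - eps_0) / (eps_f - eps_0), 0.0, 1.0)

def stress(eps, D_history):
    """Effective stress with irreversible damage: D never decreases."""
    D = max(D_history, damage(eps))
    return (1.0 - D) * E * eps, D

D = 0.0
for eps in np.linspace(0.0, 1.5e-3, 7):   # monotonic loading ramp
    sigma, D = stress(eps, D)
    print(f"eps={eps:.1e}  D={D:.2f}  sigma={sigma/1e6:7.1f} MPa")
```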

  11. High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering

    Directory of Open Access Journals (Sweden)

    Nelson Eduardo Diaz

    2015-09-01

    The coded aperture snapshot spectral imaging system (CASSI) is an imaging architecture which senses the three-dimensional information of a scene with two-dimensional (2D) focal plane array (FPA) coded projection measurements. A reconstruction algorithm takes advantage of the sparsity of the compressive measurements to recover the underlying 3D data cube. Traditionally, CASSI uses block-unblock coded apertures (BCA) to spatially modulate the light. In CASSI the quality of the reconstructed images depends on the design of these coded apertures and the FPA dynamic range. This work presents a new CASSI architecture based on grayscale coded apertures (GCA) which reduce FPA saturation and increase the dynamic range of the reconstructed images. The set of GCA is calculated in a real-time adaptive manner, exploiting the information from the FPA compressive measurements. Extensive simulations show the improvement attained in the quality of the reconstructed images when GCA are employed. In addition, a comparison between traditional coded apertures and GCA is presented with respect to noise tolerance.
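
    The CASSI measurement can be summarized as follows: each spectral band of the data cube is masked by the (grayscale) coded aperture, sheared by the disperser by one pixel per band, and summed onto the 2D detector. A numpy sketch of that forward model follows; the sizes and the random grayscale code are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, L = 64, 64, 8                 # rows, cols, spectral bands
cube = rng.random((H, W, L))        # synthetic 3D scene
code = rng.random((H, W))           # grayscale coded aperture in [0, 1]

def cassi_measurement(cube, code):
    """2D FPA measurement: code each band, shear by its band index, sum."""
    H, W, L = cube.shape
    y = np.zeros((H, W + L - 1))    # detector is wider to hold the shear
    for l in range(L):
        y[:, l : l + W] += code * cube[:, :, l]
    return y

y = cassi_measurement(cube, code)
print(y.shape)                      # (64, 71): one 2D snapshot of the cube
```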

  12. Development of a modular integrated control architecture for flexible manipulators. Final report

    International Nuclear Information System (INIS)

    Burks, B.L.; Battiston, G.

    1994-01-01

    In April 1994, ORNL and SPAR completed the joint development of a manipulator controls architecture for flexible structure controls under a CRADA between the two organizations. The CRADA project entailed design and development of a new architecture based upon the Modular Integrated Control Architecture (MICA) previously developed by ORNL. The new architecture, dubbed MICA-II, uses an object-oriented coding philosophy to provide a highly modular and expandable architecture for robotic manipulator control. This architecture can be readily ported to control of many different manipulator systems. The controller also provides a user friendly graphical operator interface and display of many forms of data including system diagnostics. The capabilities of MICA-II were demonstrated during oscillation damping experiments using the Flexible Beam Experimental Test Bed at Hanford

  13. Coding as literacy metalithikum IV

    CERN Document Server

    Bühlmann, Vera; Moosavi, Vahid

    2015-01-01

    Recent developments in computer science, particularly "data-driven procedures" have opened a new level of design and engineering. This has also affected architecture. The publication collects contributions on Coding as Literacy by computer scientists, mathematicians, philosophers, cultural theorists, and architects. "Self-Organizing Maps" (SOM) will serve as the concrete reference point for all further discussions.

  14. Technology of Oak Architectural and Decorative Elements Manufacturing for Iconostasis Recreating in Krestovozdvizhensky Temple in Village of Syrostan, Chelyabinsk region

    Science.gov (United States)

    Yudin, V.

    2017-11-01

    Due to the historical peculiarities of Russia, by the end of the 20th century many temples had been destroyed or had lost their iconostases, which most often were made of wood. When it became necessary to revive the traditional craft, it turned out that it had been lost almost completely, which negatively affects the quality of wooden iconostasis restoration and new construction. The article aims to fill this loss of knowledge and skills, which constitute one of the most interesting types of architectural, monumental and decorative art, through a study of the forms of preserved fragments of a once very rich historical and cultural heritage. Similar studies of wooden iconostases aimed at recreating oak decorative elements in restoration practice have not been performed so far, which makes this work particularly relevant for architectural science. New and relevant technological improvements are not rejected but skillfully introduced into the arsenal of techniques and means of modern restorers and carvers, to facilitate the recovery of iconostasis construction from a crisis state and the subsequent continuation of the tradition's development. Deep knowledge of the research subject allowed the use of oak decorative elements in recreating the iconostasis of the Krestovozdvizhensky temple in the village of Syrostan, the Chelyabinsk region. This material is of scientific and reference value, as well as of economic benefit, for all those who wish to join the noble art of traditional iconostasis making.

  15. Summary Report for ASC L2 Milestone #4782: Assess Newly Emerging Programming and Memory Models for Advanced Architectures on Integrated Codes

    Energy Technology Data Exchange (ETDEWEB)

    Neely, J. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hornung, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Black, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Robinson, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-09-29

    This document serves as a detailed companion to the PowerPoint slides presented as part of the ASC L2 milestone review for Integrated Codes milestone #4782, titled "Assess Newly Emerging Programming and Memory Models for Advanced Architectures on Integrated Codes", due on 9/30/2014 and presented for formal program review on 9/12/2014. The program review committee is represented by Mike Zika (A Program Project Lead for Kull), Brian Pudliner (B Program Project Lead for Ares), Scott Futral (DEG Group Lead in LC), and Mike Glass (Sierra Project Lead at Sandia). This document, along with the presentation materials and a letter of completion signed by the review committee, will act as proof of completion for this milestone.

  16. Tourists' Transformation Experience: From Destination Architecture to Identity Formation

    DEFF Research Database (Denmark)

    Ye, Helen Yi; Tussyadiah, Iis

    2010-01-01

    Today's tourists seek unique destinations that they can associate with their self-identity in a profound way. It is meaningful for destinations to design unique physical elements that offer transformational travel experiences. This study aims at identifying how tourists encounter architecture...... in a destination and whether architecture facilitates tourists' self-transformation. Based on narrative structure analysis through the deconstruction of travel blog posts, the results suggest that tourists perceive the architectural landscape as an important feature that reflects a destination's identity. Four different interaction

  17. Light in Architecture as an Inspired Theme

    Science.gov (United States)

    Dębowska, Danuta

    2017-10-01

    The theme of the article is the important role of natural light in architecture. Natural light, solar radiation perceived by our sense of sight, has been a strong inspiration since ancient times. Originally it constituted a link between heaven and earth, and it played a major role in shaping places of worship such as Stonehenge. In church architecture it was, and still is, the guiding element, the main matrix around which an architectural narrative is built. Over the centuries, the study of the role of light in architecture, and in fact of chiaroscuro, culminated in solutions full of fantasy and "quirks" in the Baroque era (Baroque, from the Italian barocco: strange, exaggerated). Crowning this love of sculpted mass and versatile ornament was the discovery of the concave-convex facade, the parete ondulata, created by Francesco Borromini. Conscious manipulation of light developed, at the time, into a fully fledged art of illusion and optical effects in architectural buildings. The changing perception of the privilege of detail and the introduction of the principle that "beauty comes from functionality" in the modernist period meant that architects began to seek the most extreme simplicity. Sincerity of form, and thus the lack of ornamentation, did not, however, result in a lack of interest in light. On the contrary, light became the detail, the eye-catching element against the smooth surface of a wall. The continuation of this concept, expressed in the strong slogan of Mies van der Rohe's "less is more", was taken over by the architecture of minimalism. Minimizing detail and introducing large glazing strengthened the effect of opening up the flow of light and letting interior and exterior interpenetrate. The principle of deep reflection on light is certainly applied in the design of monumental buildings such as galleries and museums. It could be used more widely in common architecture, noting the heritage and

  18. Evaluation of a server-client architecture for accelerator modeling and simulation

    International Nuclear Information System (INIS)

    Bowling, B.A.; Akers, W.; Shoaee, H.; Watson, W.; Zeijts, J. van; Witherspoon, S.

    1997-01-01

    Traditional approaches to computational modeling and simulation often utilize a batch method for code execution, using file-formatted input/output. This method of code implementation was generally chosen because of several factors, including CPU throughput and availability, the complexity of the required modeling problem, and the presentation of computation results. With the advent of faster computer hardware and advances in networking and software techniques, other program architectures for accelerator modeling have recently been employed. Jefferson Laboratory has implemented a client/server solution for accelerator beam transport modeling utilizing query-based I/O. The goal of this code is to provide modeling information for control system applications and to serve as a computation engine for general modeling tasks, such as machine studies. This paper performs a comparison between the batch execution and server/client architectures, focusing on design and implementation issues, performance, and general utility towards accelerator modeling demands

  19. Community participation towards the value of traditional architecture resilience, on the settlements’ patters in Tenganan village, Amlapura

    Science.gov (United States)

    Febriani, Listyana; Gede Wyana Lokantara, I.

    2017-12-01

    Ecological conditions such as landslides, floods, and global warming are disasters that should be anticipated. The resilience value of traditional architecture gives a city a role as cultural heritage. Given this influence, architecture has a role in fostering an environment that is able to survive and be sustained, as an architectural concept that considers human needs and natural balance. The purpose of this study is to analyse the concept of traditional architecture and community participation in maintaining it, so that it can have a value of sustainability. The research method used is a mixed method, starting from observation and analysis of macro elements (the main buildings and public facilities) and micro elements (residents' houses) to analyse community participation in realizing traditional architectural resilience in Tenganan Village, Amlapura. The results of this study show that the traditional settlements in Amlapura, Karangasem, Bali are a form of architecture that can survive sustainably, with macro and micro elements oriented to ecological conditions. This happens because the community participates highly in maintaining it, in the form of customary rules.

  20. Deep Learning Methods for Improved Decoding of Linear Codes

    Science.gov (United States)

    Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair

    2018-02-01

    The problem of low-complexity, close-to-optimal channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results; the advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close-to-optimal decoder of short BCH codes.
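
    The baseline these networks learn to improve is iterative message passing on the code's Tanner graph. The sketch below is the plain min-sum decoder on a toy parity-check matrix; the neural variants of the paper attach learned weights to exactly these messages.

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],      # toy parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])

def min_sum_decode(llr, H, iters=10):
    """Plain min-sum decoding; llr are channel log-likelihood ratios."""
    m, n = H.shape
    msg = np.zeros((m, n))                       # check-to-variable messages
    for _ in range(iters):
        total = llr + msg.sum(axis=0)            # current variable beliefs
        v2c = (total - msg) * H                  # extrinsic variable-to-check
        for i in range(m):
            idx = np.flatnonzero(H[i])
            v = v2c[i, idx]
            sign = np.prod(np.sign(v + 1e-12))
            for t, j in enumerate(idx):          # exclude the edge's own message
                others = np.delete(v, t)
                msg[i, j] = sign * np.sign(v[t] + 1e-12) * np.min(np.abs(others))
        hard = (llr + msg.sum(axis=0)) < 0       # hard bit decisions
        if not np.any(H @ hard % 2):             # all parity checks satisfied
            break
    return hard.astype(int)

# All-zero codeword over an AWGN-like channel: positive LLRs favour "0".
llr = np.array([2.1, -0.4, 1.7, 1.2, 0.9, 1.5])  # one noisy received vector
print(min_sum_decode(llr, H))                    # recovers [0 0 0 0 0 0]
```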

  1. Service-Oriented Architecture Approach to MAGTF Logistics Support Systems

    Science.gov (United States)

    2013-09-01

    [Extraction fragments from the thesis acronym list and body; recoverable terms: IT (Information Technology), KPI (Key Performance Indicators), LCE (Logistics Command Element), ITV (In-transit Visibility). The body fragments concern architectural building blocks, options, design decisions, the physical attributes and the KPIs that they impact, and Layer 8 (Information Architecture), in which the business intelligence layer and information architecture safeguard the inclusion of KPIs.]

  2. Type VI secretion systems of human gut Bacteroidales segregate into three genetic architectures, two of which are contained on mobile genetic elements.

    Science.gov (United States)

    Coyne, Michael J; Roelofs, Kevin G; Comstock, Laurie E

    2016-01-15

    Type VI secretion systems (T6SSs) are contact-dependent antagonistic systems employed by Gram-negative bacteria to intoxicate other bacteria or eukaryotic cells. T6SSs were recently discovered in a few Bacteroidetes strains, thereby extending the presence of these systems beyond the Proteobacteria. The present study was designed to analyze globally the diversity, abundance, and properties of T6SSs in the Bacteroidales, the most predominant Gram-negative bacterial order of the human gut. By performing extensive bioinformatic analyses and creating hidden Markov models for Bacteroidales Tss proteins, we identified 130 T6SS loci in 205 human gut Bacteroidales genomes. Of the 13 core T6SS proteins of Proteobacteria, human gut Bacteroidales T6SS loci encode orthologs of nine, plus an additional five core proteins not present in Proteobacterial T6SSs. The Bacteroidales T6SS loci segregate into three distinct genetic architectures with extensive DNA identity between loci of a given genetic architecture. We found that the divergent DNA regions of a genetic architecture encode numerous types of effector and immunity proteins and likely include new classes of these proteins. The T6SS loci of genetic architecture 1 are contained on highly similar integrative conjugative elements (ICEs), as are the T6SS loci of genetic architecture 2, whereas the T6SS loci of genetic architecture 3 are not, and are confined to Bacteroides fragilis. Using collections of co-resident Bacteroidales strains from human subjects, we provide evidence for the transfer of genetic architecture 1 T6SS loci among co-resident Bacteroidales species in the human gut. However, we also found that established ecosystems can harbor strains with distinct T6SSs of all genetic architectures. This is the first study to comprehensively analyze the presence and diversity of T6SS loci within an order of bacteria and to analyze T6SSs of bacteria from a natural community. These studies demonstrate that more than

  3. A benchmark comparison of the Canadian Supercritical Water-Cooled Reactor (SCWR) 64-element fuel lattice cell parameters using various computer codes

    Energy Technology Data Exchange (ETDEWEB)

    Sharpe, J.; Salaun, F.; Hummel, D.; Moghrabi, A., E-mail: sharpejr@mcmaster.ca [McMaster University, Hamilton, ON (Canada); Nowak, M. [McMaster University, Hamilton, ON (Canada); Institut National Polytechnique de Grenoble, Phelma, Grenoble (France); Pencer, J. [McMaster University, Hamilton, ON (Canada); Canadian Nuclear Laboratories, Chalk River, ON, (Canada); Novog, D.; Buijs, A. [McMaster University, Hamilton, ON (Canada)

    2015-07-01

    Discrepancies in key lattice physics parameters have been observed between various deterministic (e.g., DRAGON and WIMS-AECL) and stochastic (MCNP, KENO) neutron transport codes in modeling previous versions of the Canadian SCWR lattice cell. Further, inconsistencies in these parameters have also been observed when using different nuclear data libraries. In this work, the predictions of k∞, various reactivity coefficients, and relative ring-averaged pin powers have been re-evaluated using these codes and libraries with the most recent 64-element fuel assembly geometry. A benchmark problem has been defined to quantify the dissimilarities between code results for a number of responses along the fuel channel under prescribed hot full power (HFP), hot zero power (HZP) and cold zero power (CZP) conditions and at several fuel burnups (0, 25 and 50 MW·d·kg⁻¹ [HM]). Results from deterministic (TRITON, DRAGON) and stochastic (MCNP6, KENO V.a and KENO-VI) codes are presented. (author)

  4. A benchmark comparison of the Canadian Supercritical Water-Cooled Reactor (SCWR) 64-element fuel lattice cell parameters using various computer codes

    International Nuclear Information System (INIS)

    Sharpe, J.; Salaun, F.; Hummel, D.; Moghrabi, A.; Nowak, M.; Pencer, J.; Novog, D.; Buijs, A.

    2015-01-01

    Discrepancies in key lattice physics parameters have been observed between various deterministic (e.g., DRAGON and WIMS-AECL) and stochastic (MCNP, KENO) neutron transport codes in modeling previous versions of the Canadian SCWR lattice cell. Further, inconsistencies in these parameters have also been observed when using different nuclear data libraries. In this work, the predictions of k∞, various reactivity coefficients, and relative ring-averaged pin powers have been re-evaluated using these codes and libraries with the most recent 64-element fuel assembly geometry. A benchmark problem has been defined to quantify the dissimilarities between code results for a number of responses along the fuel channel under prescribed hot full power (HFP), hot zero power (HZP) and cold zero power (CZP) conditions and at several fuel burnups (0, 25 and 50 MW·d·kg⁻¹ [HM]). Results from deterministic (TRITON, DRAGON) and stochastic (MCNP6, KENO V.a and KENO-VI) codes are presented. (author)

  5. Digital visual communications using a Perceptual Components Architecture

    Science.gov (United States)

    Watson, Andrew B.

    1991-01-01

    The next era of space exploration will generate extraordinary volumes of image data, and management of this image data is beyond current technical capabilities. We propose a strategy for coding visual information that exploits the known properties of early human vision. This Perceptual Components Architecture codes images and image sequences in terms of discrete samples from limited bands of color, spatial frequency, orientation, and temporal frequency. This spatiotemporal pyramid offers efficiency (low bit rate), variable resolution, device independence, error-tolerance, and extensibility.

  6. Scaling Gysela code beyond 32K cores on Blue Gene/Q

    Directory of Open Access Journals (Sweden)

    Bigot J.

    2013-12-01

    Full Text Available Gyrokinetic simulations lead to huge computational needs. Up to now, the semi-Lagrangian code Gysela performed large simulations using a few thousand cores (typically 8k cores). Simulations with finer resolutions and with kinetic electrons are expected to increase those needs by a huge factor, providing a good example of applications requiring Exascale machines. This paper presents our work to improve Gysela in order to target an architecture that presents one possible way towards Exascale: the Blue Gene/Q. After analyzing the limitations of the code on this architecture, we have implemented three kinds of improvement: computational performance improvements, memory consumption improvements and disk I/O improvements. As a result, we show that the code now scales beyond 32k cores with much improved performance. This will make it possible to target the most powerful machines available and thus handle much larger physical cases.

  7. MARS code manual volume I: code structure, system models, and solution methods

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu; Yoon, Churl

    2010-02-01

    The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal-hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF code. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-Of-State (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This theory manual provides complete overall information on the code structure and major functions of MARS, including the code architecture, hydrodynamic model, heat structures, trip/control system and point reactor kinetics model, and should therefore be very useful for code users. The overall structure of the manual is modeled on that of the RELAP5 manual, and as such its layout is very similar to that of RELAP. This similarity to the RELAP5 input is intentional, as this input scheme allows minimal modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible.

  8. Publicity and identity in the public space architecture

    Directory of Open Access Journals (Sweden)

    Maria de Lourdes Carneiro da Cunha Nóbrega

    2009-12-01

    Full Text Available This article aims at showing the relationship between publicity elements, such as posters and signs, and the city architecture, so that the contribution of these elements to the identity of urban sites, especially historical ones, can be understood. To that end, the focus is on Rua da Palma, located in the city of Recife (Pernambuco, Brazil). For the development of this research, which was based on a morphological analysis of the site, a survey of photographs and of the use of the buildings in the street from 2006 to 2009 was carried out, and the existing urban legislation was analyzed. Studies undertaken by authors such as Certeau (1994), Venturi (1977) and Koolhaas (2004), among others, together with concepts related to retail marketing, supported this analysis of the urban space, which presents architecture as a publicity medium, often transforming the identity of the area. A starting point is presented here for future investigation of the role of urban laws and urban control, which deal with the placement of publicity elements in the architecture of the city and contribute to the formation of the urban landscape, a landscape considered an integral part of a cultural identity.

  9. A new 3-D integral code for computation of accelerator magnets

    International Nuclear Information System (INIS)

    Turner, L.R.; Kettunen, L.

    1991-01-01

    For computing accelerator magnets, integral codes have several advantages over finite element codes: far-field boundaries are treated automatically, and the computed fields in the bore region satisfy Maxwell's equations exactly. A new integral code employing edge elements rather than nodal elements has overcome the difficulties associated with earlier integral codes. By the use of field integrals (potential differences) as solution variables, the number of unknowns is reduced to one less than the number of nodes. Two examples, a hollow iron sphere and the dipole magnet of the Advanced Photon Source injector synchrotron, show the capability of the code. The CPU time requirements are comparable to those of three-dimensional (3-D) finite-element codes. Experiments show that in practice the code can realize much of the potential CPU time saving that parallel processing makes possible. 8 refs., 4 figs., 1 tab

  10. DOD Business Systems Modernization: Military Departments Need to Strengthen Management of Enterprise Architecture Programs

    National Research Council Canada - National Science Library

    Hite, Randolph C; Johnson, Tonia; Eagle, Timothy; Epps, Elena; Holland, Michael; Lakhmani, Neela; LaPaza, Rebecca; Le, Anh; Paintsil, Freda

    2008-01-01

    .... Our framework for managing and evaluating the status of architecture programs consists of 31 core elements related to architecture governance, content, use, and measurement that are associated...

  11. Initial validation of ATLAS software on the ARM architecture

    Energy Technology Data Exchange (ETDEWEB)

    Kawamura, Gen; Quadt, Arnulf; Smith, Joshua Wyatt [II. Physikalisches Institut, Georg-August Universitaet Goettingen (Germany); Seuster, Rolf [TRIUMF (Canada); Stewart, Graeme [University of Glasgow (United Kingdom)

    2016-07-01

    In the early 2000s the introduction of the multi-core era of computing helped industry and experiments such as ATLAS realize even more computing power. This was necessary, as the limits of what a single-core processor could do were quickly being reached. Our current model of computing is to increase the number of multi-core nodes in a server farm in order to handle the increased influx of data. As power costs and our need for more computing power increase, this model will eventually become unrealistic. Once again a paradigm shift has to take place. One option is to look at alternative architectures for large-scale server farms. ARM processors are one such example. Making up approximately 95% of the smartphone and tablet market, these processors are widely available, very power conservative and constantly becoming faster. The ATLAS software code base (Athena) is extremely complex, comprising more than 6.5 million lines of code. It has very recently been ported to the 64-bit ARM architecture. The porting process as well as the first validation plots are presented and compared to the traditional x86 architecture.

  12. High-Efficient Parallel CAVLC Encoders on Heterogeneous Multicore Architectures

    Directory of Open Access Journals (Sweden)

    H. Y. Su

    2012-04-01

    Full Text Available This article presents two highly efficient parallel realizations of context-based adaptive variable length coding (CAVLC) based on heterogeneous multicore processors. By optimizing the architecture of the CAVLC encoder, three kinds of dependences are eliminated or weakened: the context-based data dependence, the memory-access dependence and the control dependence. The CAVLC pipeline is divided into three stages: two scans, coding, and lag packing, and is implemented on two typical heterogeneous multicore architectures. One is a block-based SIMD parallel CAVLC encoder on the multicore stream processor STORM. The other is a component-oriented SIMT parallel encoder on a massively parallel architecture GPU. Both exploit rich data-level parallelism. Experimental results show that, compared with the CPU version, a speedup of more than 70 times can be obtained on STORM and over 50 times on the GPU. The implementation on STORM achieves real-time processing for 1080p @ 30 fps, and the GPU-based version satisfies the requirements for 720p real-time encoding. The throughput of the presented CAVLC encoders is more than 10 times higher than that of published software encoders on DSP and multicore platforms.

  13. Computing on Knights and Kepler Architectures

    International Nuclear Information System (INIS)

    Bortolotti, G; Caberletti, M; Ferraro, A; Giacomini, F; Manzali, M; Maron, G; Salomoni, D; Crimi, G; Zanella, M

    2014-01-01

    A recent trend in scientific computing is the increasingly important role of co-processors, originally built to accelerate graphics rendering and now used for general high-performance computing. The INFN Computing On Knights and Kepler Architectures (COKA) project focuses on assessing the suitability of co-processor boards for scientific computing in a wide range of physics applications, and on studying the best programming methodologies for these systems. Here we present a comparative account of our results in porting a Lattice Boltzmann code to two state-of-the-art accelerators: the NVIDIA K20X and the Intel Xeon Phi. We describe our implementations, analyze the results and compare them with a baseline architecture adopting Intel Sandy Bridge CPUs.

  14. FINELM: a multigroup finite element diffusion code

    International Nuclear Information System (INIS)

    Higgs, C.E.; Davierwalla, D.M.

    1981-06-01

    FINELM is a FORTRAN IV program to solve the neutron diffusion equation in X-Y, R-Z, R-theta, X-Y-Z and R-theta-Z geometries using the method of finite elements. Lagrangian elements of linear or higher degree are provided to approximate the spatial flux distribution. The method of dissections, coarse mesh rebalancing and Chebyshev acceleration techniques are available. Simple user-defined input is achieved through extensive input subroutines. The input preparation is described, followed by a description of the program structure. Sample test cases are provided. (Auth.)
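    To make the finite-element machinery concrete, here is a minimal sketch (emphatically not FINELM itself) of a one-group, one-dimensional diffusion solve with linear Lagrangian elements; the cross-section values, uniform source and zero-flux boundaries are illustrative assumptions.

        import numpy as np

        D, sig_a, S = 1.0, 0.02, 1.0     # diffusion coeff., absorption, source (illustrative)
        L, n = 100.0, 50                 # slab width [cm] and number of linear elements
        h = L / n
        A = np.zeros((n + 1, n + 1))
        b = np.zeros(n + 1)
        for e in range(n):               # assemble element stiffness/mass and load
            sl = slice(e, e + 2)
            A[sl, sl] += D / h * np.array([[1, -1], [-1, 1]]) \
                       + sig_a * h / 6 * np.array([[2, 1], [1, 2]])
            b[e:e + 2] += S * h / 2
        A[0, :], A[0, 0], b[0] = 0, 1, 0       # zero-flux Dirichlet boundaries
        A[-1, :], A[-1, -1], b[-1] = 0, 1, 0
        phi = np.linalg.solve(A, b)
        print(phi.max())                 # approaches S/sig_a = 50 at mid-slab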

  15. Neptune: An astrophysical smooth particle hydrodynamics code for massively parallel computer architectures

    Science.gov (United States)

    Sandalski, Stou

    Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU-accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP-parallelized C++ and OpenCL and includes octree-based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.
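    The core of any SPH code is the kernel-weighted density summation; the sketch below is a hedged, brute-force stand-in for the octree-accelerated version (kernel choice, smoothing length and particle setup are illustrative assumptions, not Neptune's defaults).

        import numpy as np

        def w_cubic(r, h):
            # standard M4 cubic-spline SPH kernel in 3D
            q = r / h
            w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
            return w / (np.pi * h**3)

        def densities(pos, mass, h):
            # O(N^2) summation; a production code would traverse the octree instead
            r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
            return (mass[None, :] * w_cubic(r, h)).sum(axis=1)

        rng = np.random.default_rng(0)
        pos = rng.random((500, 3))           # 500 particles in a unit box
        mass = np.full(500, 1.0 / 500)       # unit total mass => mean density ~ 1
        print(densities(pos, mass, h=0.1).mean())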

  16. Solution of 2D and 3D hexagonal geometry benchmark problems by using the finite element diffusion code DIFGEN

    International Nuclear Information System (INIS)

    Gado, J.

    1986-02-01

    The four-group, 2D and 3D hexagonal geometry HTGR benchmark problems and a 2D hexagonal geometry PWR (WWER) benchmark problem have been solved using the finite element diffusion code DIFGEN. The hexagons (or hexagonal prisms) were subdivided into first-order or second-order triangles or quadrilaterals (or triangular or quadrilateral prisms). In the 2D HTGR case, the number of inserted absorber rods was also varied (7, 6, 0 or 37 rods). The calculational results are in good agreement with the results of other calculations. The large systematic series of DIFGEN calculations has given a quantitative picture of the convergence properties of various finite element modellings of hexagonal grids in DIFGEN. (orig.)

  17. Contribution to comprehending symbolism and meaning of architectural form

    Directory of Open Access Journals (Sweden)

    Alihodžić Rifat

    2017-01-01

    Full Text Available From the very beginning of their creation, architectural form and space were not only elements reflecting the mere act of building; as human creations, they also carried a symbolic presentation of their creator's perception of the world. The starting point is that every physical form, and therefore every architectural form, speaks of more than its mere purpose and can carry symbolic meanings, as the history of architecture has long demonstrated. When observing an architectural form, two questions arise. The first concerns the usable purpose of the particular building, in other words its function. The second asks what that building reminds us of. This second question represents the search for meaning in every form, something mankind instinctively longs to identify in order to comprehend the world we live in. Whether in a natural or a built environment, everything that surrounds us has specific forms that recall certain associations. The aim of this paper is to show that the images evoked by similar forms and designs are associations and should not be confused with symbols. The goal of this research is to contribute to a clearer view of the symbolism of architectural form, of the situations in which it exists, and of whether it exists in contemporary architectural forms. This work is based on elements of the Gestalt theory of perception.

  18. Dynamic code block size for JPEG 2000

    Science.gov (United States)

    Tsai, Ping-Sing; LeCornec, Yann

    2008-02-01

    Since its standardization, JPEG 2000 has found its way into many different applications such as DICOM (Digital Imaging and Communications in Medicine), satellite photography, military surveillance, the digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high-quality real-time compression possible, even in video mode, i.e. Motion JPEG 2000. In this paper, we present a study of the impact on compression of using a dynamic code block size instead of the fixed code block size specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.

  19. Non-technical approach to the challenges of ecological architecture: Learning from Van der Laan

    Directory of Open Access Journals (Sweden)

    María-Jesús González-Díaz

    2016-06-01

    Full Text Available Up to now, ecology has had a strong influence on the development of the technical and instrumental aspects of architecture, such as the renewable and efficient use of resources and energy, CO2 emissions, air quality, water reuse, and some social and economic aspects. These concepts define the physical keys and codes of current 'sustainable' architecture, normally instrumental but rarely and insufficiently theorised. But is there not another way of bringing us closer to nature? We need a theoretical referent. This is where we place Van der Laan's thought: he considers that art completes nature, and he builds his theoretical discourse on this idea, trying to better understand many aspects of architecture. From a conceptual point of view, we find in his works a sense of timelessness, universality, special attention to the 'locus', and a strict sense of proportion and use of materials according to nature. Could these concepts complement our current sustainable architecture? How did Van der Laan apply the current codes of ecology in his architecture? His work may help us to reach a theoretical, and not only physical, interpretation of nature. This paper develops this idea through a comparison of the thoughts and works of Van der Laan with the current technical approach to 'sustainable' architecture.

  20. New binary linear codes which are dual transforms of good codes

    NARCIS (Netherlands)

    Jaffe, D.B.; Simonis, J.

    1999-01-01

    If C is a binary linear code, one may choose a subset S of C, and form a new code CST which is the row space of the matrix having the elements of S as its columns. One way of picking S is to choose a subgroup H of Aut(C) and let S be some H-stable subset of C. Using (primarily) this method for
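    The construction is easy to reproduce. Below is an assumed reading of it in Python, using the [7,4] Hamming code for C, taking S to be all nonzero codewords, and extracting a basis of the row space of the matrix whose columns are the elements of S via GF(2) row reduction; the choice of C and S here is purely illustrative.

        import numpy as np
        from itertools import product

        def rref_gf2(M):
            # Gaussian elimination over GF(2); returns a basis of the row space
            M, r = M.copy() % 2, 0
            for c in range(M.shape[1]):
                piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
                if piv is None:
                    continue
                M[[r, piv]] = M[[piv, r]]
                for i in range(M.shape[0]):
                    if i != r and M[i, c]:
                        M[i] ^= M[r]
                r += 1
            return M[:r]

        G = np.array([[1,0,0,0,1,1,0],        # one generator of the [7,4] Hamming code
                      [0,1,0,0,1,0,1],
                      [0,0,1,0,0,1,1],
                      [0,0,0,1,1,1,1]], dtype=np.uint8)
        codewords = {tuple(np.dot(m, G) % 2) for m in product([0, 1], repeat=4)}
        S = sorted(codewords)[1:]             # a subset S of C (here all nonzero words)
        M = np.array(S, dtype=np.uint8).T     # elements of S as the columns
        print(rref_gf2(M).shape)              # (4, 15): parameters of the new code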

  1. Insulator function and topological domain border strength scale with architectural protein occupancy

    Science.gov (United States)

    2014-01-01

    Background Chromosome conformation capture studies suggest that eukaryotic genomes are organized into structures called topologically associating domains. The borders of these domains are highly enriched for architectural proteins with characterized roles in insulator function. However, a majority of architectural protein binding sites localize within topological domains, suggesting sites associated with domain borders represent a functionally different subclass of these regulatory elements. How topologically associating domains are established and what differentiates border-associated from non-border architectural protein binding sites remain unanswered questions. Results By mapping the genome-wide target sites for several Drosophila architectural proteins, including previously uncharacterized profiles for TFIIIC and SMC-containing condensin complexes, we uncover an extensive pattern of colocalization in which architectural proteins establish dense clusters at the borders of topological domains. Reporter-based enhancer-blocking insulator activity as well as endogenous domain border strength scale with the occupancy level of architectural protein binding sites, suggesting co-binding by architectural proteins underlies the functional potential of these loci. Analyses in mouse and human stem cells suggest that clustering of architectural proteins is a general feature of genome organization, and conserved architectural protein binding sites may underlie the tissue-invariant nature of topologically associating domains observed in mammals. Conclusions We identify a spectrum of architectural protein occupancy that scales with the topological structure of chromosomes and the regulatory potential of these elements. Whereas high occupancy architectural protein binding sites associate with robust partitioning of topologically associating domains and robust insulator function, low occupancy sites appear reserved for gene-specific regulation within topological domains. PMID

  2. A Reference Architecture for Provisioning of Tools as a Service: Meta-Model, Ontologies and Design Elements

    DEFF Research Database (Denmark)

    Chauhan, Muhammad Aufeef; Babar, Muhammad Ali; Sheng, Quan Z.

    2016-01-01

    Software Architecture (SA) plays a critical role in designing, developing and evolving cloud-based platforms that can be used to provision different types of services to consumers on demand. In this paper, we present a Reference Architecture (RA) for designing cloud-based Tools as a service SPACE...... (TSPACE) for provisioning a bundled suite of tools by following the Software as a Service (SaaS) model. The reference architecture has been designed by leveraging information structuring approaches and by using well-known architecture design principles and patterns. The RA has been documented using view...

  3. Coding theory on the m-extension of the Fibonacci p-numbers

    International Nuclear Information System (INIS)

    Basu, Manjusri; Prasad, Bandhu

    2009-01-01

    In this paper, we introduce a new Fibonacci G_{p,m} matrix for the m-extension of the Fibonacci p-numbers, where p (≥0) is an integer and m (>0). We discuss various properties of the G_{p,m} matrix and the coding theory that follows from it. We establish the relations among the code elements for all values of p (nonnegative integer) and m (>0). We also show that the relation among the code matrix elements for all values of p and m=1 coincides with the relation among the code matrix elements for all values of p [Basu M, Prasad B. The generalized relations among the code elements for Fibonacci coding theory. Chaos, Solitons and Fractals (2008). doi: 10.1016/j.chaos.2008.09.030]. In general, the error-correcting ability of the method increases as p increases, but it is independent of m.
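    A hedged sketch of the classical special case (p = 1, m = 1, i.e. Stakhov-style Q-matrix coding, which the G_{p,m} construction generalizes): encoding multiplies a 2x2 message block by Q^n, decoding multiplies by its exactly integral inverse, and the determinant relation det(E) = det(M)·(-1)^n supplies the error check. The message values and n below are illustrative.

        import numpy as np

        Q = np.array([[1, 1], [1, 0]], dtype=object)   # Fibonacci Q-matrix, exact big ints

        def q_power(n):
            # Q^n has Fibonacci-number entries and det(Q^n) = (-1)^n
            P = np.eye(2, dtype=object)
            for _ in range(n):
                P = P.dot(Q)
            return P

        def encode(M, n):
            return M.dot(q_power(n))

        def decode(E, n):
            (a, b), (c, d) = q_power(n)
            det = a * d - b * c                        # +1 or -1, so the inverse is integral
            return E.dot(np.array([[d, -b], [-c, a]], dtype=object) * det)

        n = 8
        M = np.array([[12, 7], [30, 5]], dtype=object)  # a 2x2 block of message integers
        E = encode(M, n)
        assert (decode(E, n) == M).all()
        detE = E[0, 0] * E[1, 1] - E[0, 1] * E[1, 0]
        detM = M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]
        print(detE == detM * (-1) ** n)                 # determinant relation used for checks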

  4. DEPUTY: analysing architectural structures and checking style

    International Nuclear Information System (INIS)

    Gorshkov, D.; Kochelev, S.; Kotegov, S.; Pavlov, I.; Pravilnikov, V.; Wellisch, J.P.

    2001-01-01

    DepUty (dependencies utility) can be classified as a project and process management tool. The main goal of DepUty is to assist, by means of source code analysis and graphical representation using UML, in understanding the dependencies of sub-systems and packages in CMS Object Oriented software, in understanding the architectural structure, and in scheduling code releases in modularised integration. It also allows a newcomer to more easily understand the global structure of CMS software, and to avoid circular dependencies up-front or re-factor the code in case it was already too close to the edge of non-maintainability. The authors discuss the various views DepUty provides to analyse package dependencies, and illustrate both the metrics and the style-checking facilities it provides.

  5. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (material behaviour, large deformations, specific loads, unloading and loss-of-load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and the metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  6. Marginally Stable Triangular Recurrent Neural Network Architecture for Time Series Prediction.

    Science.gov (United States)

    Sivakumar, Seshadri; Sivakumar, Shyamala

    2017-09-25

    This paper introduces a discrete-time recurrent neural network architecture using triangular feedback weight matrices that allows a simplified approach to ensuring network and training stability. The triangular structure of the weight matrices is exploited to readily ensure that the eigenvalues of the feedback weight matrix represented by the block diagonal elements lie on the unit circle in the complex z-plane by updating these weights based on the differential of the angular error variable. Such placement of the eigenvalues together with the extended close interaction between state variables facilitated by the nondiagonal triangular elements, enhances the learning ability of the proposed architecture. Simulation results show that the proposed architecture is highly effective in time-series prediction tasks associated with nonlinear and chaotic dynamic systems with underlying oscillatory modes. This modular architecture with dual upper and lower triangular feedback weight matrices mimics fully recurrent network architectures, while maintaining learning stability with a simplified training process. While training, the block-diagonal weights (hence the eigenvalues) of the dual triangular matrices are constrained to the same values during weight updates aimed at minimizing the possibility of overfitting. The dual triangular architecture also exploits the benefit of parsing the input and selectively applying the parsed inputs to the two subnetworks to facilitate enhanced learning performance.
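    A small numpy sketch of the eigenvalue-placement idea as described (this is not the authors' training code): placing 2x2 rotation blocks on the diagonal of a block upper-triangular feedback matrix pins every eigenvalue to the unit circle, whatever the off-diagonal coupling; the block angles and coupling scale are illustrative.

        import numpy as np

        def triangular_feedback(thetas, rng):
            # block upper-triangular matrix whose 2x2 diagonal blocks are rotations
            n = 2 * len(thetas)
            W = np.triu(0.1 * rng.standard_normal((n, n)), k=2)   # upper coupling only
            for i, th in enumerate(thetas):
                c, s = np.cos(th), np.sin(th)
                W[2*i:2*i+2, 2*i:2*i+2] = [[c, -s], [s, c]]       # eigenvalues e^{+-i th}
            return W

        W = triangular_feedback([0.3, 0.9, 1.7], np.random.default_rng(1))
        print(np.abs(np.linalg.eigvals(W)))   # all 1.0: marginally stable by construction

    Because the eigenvalues of a block-triangular matrix are exactly those of its diagonal blocks, training can update the off-diagonal weights freely while only the angles of the rotation blocks need constraining.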

  7. Data Element Registry Services

    Data.gov (United States)

    U.S. Environmental Protection Agency — Data Element Registry Services (DERS) is a resource for information about value lists (aka code sets / pick lists), data dictionaries, data elements, and EPA data...

  8. Ferroelectric tunneling element and memory applications which utilize the tunneling element

    Science.gov (United States)

    Kalinin, Sergei V [Knoxville, TN; Christen, Hans M [Knoxville, TN; Baddorf, Arthur P [Knoxville, TN; Meunier, Vincent [Knoxville, TN; Lee, Ho Nyung [Oak Ridge, TN

    2010-07-20

    A tunneling element includes a thin-film layer of ferroelectric material and a pair of dissimilar electrically-conductive layers disposed on opposite sides of the ferroelectric layer. Because of the dissimilarity in composition or construction between the electrically-conductive layers, the electron transport behavior of the electrically-conductive layers is polarization dependent when the tunneling element is below the Curie temperature of the layer of ferroelectric material. The element can be used as the basis of a compact 1R-type non-volatile random access memory (RAM). The advantages include an extremely simple architecture, ultimate scalability and the fast access times generic to all ferroelectric memories.

  9. Level-2 Milestone 5588: Deliver Strategic Plan and Initial Scalability Assessment by Advanced Architecture and Portability Specialists Team

    Energy Technology Data Exchange (ETDEWEB)

    Draeger, Erik W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-09-30

    This report documents that the work of creating a strategic plan and beginning customer engagements has been completed. The milestone description is: the newly formed advanced architecture and portability specialists (AAPS) team will develop a strategic plan to meet the goals of 1) sharing knowledge and experience with code teams to ensure that ASC codes run well on new architectures, and 2) supplying skilled computational scientists to put the strategy into practice. The plan will be delivered to ASC management in the first quarter. By the fourth quarter, the team will identify their first customers within PEM and IC, perform an initial assessment of scalability and performance bottlenecks for next-generation architectures, and embed AAPS team members with customer code teams to assist with initial portability development within standalone kernels or proxy applications.

  10. Neural network decoder for quantum error correcting codes

    Science.gov (United States)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
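    As a toy stand-in for such a decoder (the abstract does not spell out the network in detail), the sketch below trains a plain softmax classifier to map the two stabilizer parities of the 3-bit repetition code to the most likely single-bit error; the architecture, sizes and learning rate are illustrative assumptions.

        import numpy as np

        errors = np.array([[0,0,0], [1,0,0], [0,1,0], [0,0,1]])   # weight <= 1 errors
        syndromes = np.stack([errors[:, 0] ^ errors[:, 1],        # stabilizer parities
                              errors[:, 1] ^ errors[:, 2]], axis=1).astype(float)
        targets = np.eye(4)                                       # one class per error

        rng = np.random.default_rng(0)
        W, b = 0.1 * rng.standard_normal((2, 4)), np.zeros(4)
        for _ in range(5000):                                     # plain gradient descent
            logits = syndromes @ W + b
            p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
            grad = (p - targets) / len(errors)                    # softmax cross-entropy
            W -= 0.5 * syndromes.T @ grad
            b -= 0.5 * grad.sum(axis=0)

        pred = (syndromes @ W + b).argmax(axis=1)
        print((errors[pred] == errors).all())                     # decoder learned the map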

  11. Do Performance-Based Codes Support Universal Design in Architecture?

    DEFF Research Database (Denmark)

    Grangaard, Sidse; Frandsen, Anne Kathrine

    2016-01-01

    – Universal Design (UD). The empirical material consists of input from six workshops to which all 700 Danish architectural firms were invited, as well as eight group interviews. The analysis shows that the current prescriptive requirements are criticized for being too homogeneous and possibilities...... for differentiation and zoning are required. Therefore, a majority of professionals are interested in a performance-based model because they think that such a model will support 'accessibility zoning', achieving flexibility because of different levels of accessibility in a building due to its performance. The common...... of educational objectives is suggested as a tool for such a boost. The research project has been financed by the Danish Transport and Construction Agency....

  12. MC 68020 μp architecture

    International Nuclear Information System (INIS)

    Casals, O.; Dejuan, E.; Labarta, J.

    1988-01-01

    The MC68020 is a 32-bit microprocessor, object-code compatible with the earlier MC68000 and MC68010. In this paper we describe its architecture and two coprocessors: the MC68851 paged memory management unit and the MC68882 floating-point coprocessor. Among its most important characteristics we can point out: addressing mode extensions for enhanced support of high-level languages, an on-chip instruction cache and full support of virtual memory. (Author)

  13. VLSI Architectures for Computing DFT's

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Reed, I. S.; Pei, D. Y.

    1986-01-01

    Simplifications result from use of residue Fermat number systems. System of finite arithmetic over residue Fermat number systems enables calculation of discrete Fourier transform (DFT) of series of complex numbers with reduced number of multiplications. Computer architectures based on approach suitable for design of very-large-scale integrated (VLSI) circuits for computing DFT's. General approach not limited to DFT's; applicable to decoding of error-correcting codes and other transform calculations. System readily implemented in VLSI.
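    A minimal sketch of the underlying arithmetic, assuming the usual number-theoretic-transform formulation over a Fermat prime (here F3 = 2^8 + 1 = 257, for which 3 is a primitive root): every multiplication and the inverse transform are exact modular integer operations, with no floating-point rounding.

        import numpy as np

        P, N = 257, 8                      # Fermat prime F3 and transform length
        w = pow(3, 256 // N, P)            # primitive N-th root of unity mod P

        def ntt(x, root=w):
            k = np.arange(N)
            W = np.array([[pow(root, int(i * j), P) for j in k] for i in k])
            return (W @ x) % P

        def intt(X):
            inv_w = pow(w, P - 2, P)       # modular inverses via Fermat's little theorem
            inv_n = pow(N, P - 2, P)
            return (ntt(X, inv_w) * inv_n) % P

        x = np.array([5, 0, 200, 17, 42, 1, 0, 99])
        print((intt(ntt(x)) == x).all())   # exact round trip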

  14. A new coding concept for fast ultrasound imaging using pulse trains

    DEFF Research Database (Denmark)

    Misaridis, T.; Jensen, Jørgen Arendt

    2002-01-01

    Frame rate in ultrasound imaging can be increased by simultaneous transmission of multiple beams using coded waveforms. However, the achievable degree of orthogonality among coded waveforms is limited in ultrasound, and the image quality degrades unacceptably due to interbeam interference.... In this paper, an alternative combined time-space coding approach is undertaken. In the new method all transducer elements are excited with short pulses and the high time-bandwidth (TB) product waveforms are generated acoustically. Each element transmits a short pulse spherical wave with a constant transmit...... delay from element to element, long enough to assure no pulse overlapping for all depths in the image. Frequency shift keying is used for "per element" coding. The received signals from a point scatterer are staggered pulse trains which are beamformed for all beam directions and further processed
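    A small synthesis sketch of the transmit scheme as described (all parameter values are illustrative assumptions, not those of the paper): each element fires one short pulse, delayed by a constant inter-element lag and carried at an element-specific frequency.

        import numpy as np

        fs = 100e6                          # sample rate [Hz] (illustrative)
        n_elem, T = 8, 4e-6                 # elements and inter-element transmit delay [s]
        f0, df = 3e6, 0.25e6                # base carrier and per-element FSK step [Hz]
        t = np.arange(0, n_elem * T + 2e-6, 1 / fs)

        def element_pulse(i):
            tau = t - i * T                              # staggered transmit delay
            env = np.exp(-0.5 * (tau / 0.3e-6) ** 2)     # short Gaussian envelope
            return env * np.sin(2 * np.pi * (f0 + i * df) * tau)

        waves = np.stack([element_pulse(i) for i in range(n_elem)])
        print(waves.shape)                  # one staggered, frequency-coded pulse per element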

  15. Proceedings of the OECD/CSNI workshop on transient thermal-hydraulic and neutronic codes requirements

    Energy Technology Data Exchange (ETDEWEB)

    Ebert, D.

    1997-07-01

    This is a report on the CSNI Workshop on Transient Thermal-Hydraulic and Neutronic Codes Requirements held at Annapolis, Maryland, USA, November 5-8, 1996. This experts' meeting consisted of 140 participants from 21 countries; 65 invited papers were presented. The meeting was divided into five areas: (1) current and prospective plans of thermal-hydraulic code development; (2) current and anticipated uses of thermal-hydraulic codes; (3) advances in modeling of thermal-hydraulic phenomena and associated additional experimental needs; (4) numerical methods in multi-phase flows; and (5) programming languages, code architectures and user interfaces. The workshop consensus identified the following important action items to be addressed by the international community in order to maintain and improve the calculational capability: (a) preserve current code expertise and institutional memory; (b) preserve the ability to use the existing investment in plant transient analysis codes; (c) maintain essential experimental capabilities; (d) develop advanced measurement capabilities to support future code validation work; (e) integrate existing analytical capabilities so as to improve performance and reduce operating costs; (f) exploit the proven advances in code architecture, numerics, graphical user interfaces, and modularization in order to improve code performance and scrutability; and (g) more effectively utilize user experience in modifying and improving the codes.

  16. Proceedings of the OECD/CSNI workshop on transient thermal-hydraulic and neutronic codes requirements

    International Nuclear Information System (INIS)

    Ebert, D.

    1997-07-01

    This is a report on the CSNI Workshop on Transient Thermal-Hydraulic and Neutronic Codes Requirements held at Annapolis, Maryland, USA, November 5-8, 1996. This experts' meeting consisted of 140 participants from 21 countries; 65 invited papers were presented. The meeting was divided into five areas: (1) current and prospective plans of thermal-hydraulic code development; (2) current and anticipated uses of thermal-hydraulic codes; (3) advances in modeling of thermal-hydraulic phenomena and associated additional experimental needs; (4) numerical methods in multi-phase flows; and (5) programming languages, code architectures and user interfaces. The workshop consensus identified the following important action items to be addressed by the international community in order to maintain and improve the calculational capability: (a) preserve current code expertise and institutional memory; (b) preserve the ability to use the existing investment in plant transient analysis codes; (c) maintain essential experimental capabilities; (d) develop advanced measurement capabilities to support future code validation work; (e) integrate existing analytical capabilities so as to improve performance and reduce operating costs; (f) exploit the proven advances in code architecture, numerics, graphical user interfaces, and modularization in order to improve code performance and scrutability; and (g) more effectively utilize user experience in modifying and improving the codes.

  17. Three-Dimensional Terahertz Coded-Aperture Imaging Based on Single Input Multiple Output Technology

    Directory of Open Access Journals (Sweden)

    Shuo Chen

    2018-01-01

    Full Text Available As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, staring imaging by producing spatiotemporally independent signals with coded apertures. In this paper, we propose a three-dimensional (3D) TCAI architecture based on single-input multiple-output (SIMO) technology, which can sharply reduce the coding and sampling times. The coded aperture applied in the proposed TCAI architecture loads either a purposive or a random phase modulation factor. In the transmitting process, the purposive phase modulation factor drives the terahertz beam to scan the divided 3D imaging cells. In the receiving process, the random phase modulation factor is adopted to modulate the terahertz wave to be spatiotemporally independent, for high resolution. Considering human-scale targets, images of each 3D imaging cell are reconstructed one by one to decompose the global computational complexity, and are then synthesized together to obtain the complete high-resolution image. For each imaging cell, a multi-resolution imaging method helps to reduce the computational burden of the large-scale reference-signal matrix. The experimental results demonstrate that the proposed architecture can achieve high-resolution imaging of 3D targets in much less time and has great potential in applications such as security screening, nondestructive detection, medical diagnosis, etc.
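    A toy numerical sketch of the measurement model implied by the abstract (matrix sizes, noise level and the plain least-squares reconstruction are illustrative assumptions; the paper's multi-resolution method is not reproduced here): each random phase pattern of the aperture contributes one row of the sensing matrix, and enough spatiotemporally independent rows make the scene recoverable.

        import numpy as np

        rng = np.random.default_rng(0)
        n_cells, n_meas = 64, 96                                    # scene cells, patterns
        A = np.exp(1j * 2 * np.pi * rng.random((n_meas, n_cells)))  # random phase codes
        x = np.zeros(n_cells)
        x[[7, 20, 41]] = [1.0, 0.6, 0.9]                            # sparse scatterers
        y = A @ x + 0.01 * rng.standard_normal(n_meas)              # noisy measurements
        x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
        print(np.abs(x_hat[[7, 20, 41]]).round(2))                  # scatterers recovered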

  18. Analysis of Defenses Against Code Reuse Attacks on Modern and New Architectures

    Science.gov (United States)

    2015-09-01

    [Table-of-contents excerpt] 5 Hardware Architecture: Background; RISC-V Extensions; RISC-V Tag Policies (including a Basic Return Pointer Policy); RISC-V Policy Test Framework. 6 Compiler Support: LLVM.

  19. Code package to analyse behavior of the WWER fuel rods in normal operation: TOPRA's code

    International Nuclear Information System (INIS)

    Scheglov, A.; Proselkov, V.

    2001-01-01

    This paper briefly describes a code package intended for the analysis of WWER fuel rod characteristics. The package includes two computer codes, TOPRA-1 and TOPRA-2, for full-scale fuel rod analyses, and the MRZ and MKK codes for analyzing separate sections of fuel rods in r-z and r-j geometry. The TOPRA codes are developed on the basis of the PIN-mod2 version and verified against experimental results obtained in the MR, MIR and Halden research reactors (in the framework of the SOFIT, FGR-2 and FUMEX experimental programs). A comparative analysis of calculation results and results from post-reactor examination of WWER-440 and WWER-1000 fuel rods is also made as additional verification of these codes. To avoid enlarging the uncertainties in fuel behavior prediction as a result of simplifying the fuel geometry, the MKK and MRZ codes are developed on the basis of the finite element method with the use of three-node finite elements. Results obtained in the course of the code verification indicate the applicability of the method and of the TOPRA codes for simplified engineering calculations of the thermal-physical parameters of WWER fuel rods. An analysis of the maximum relative errors in predicting the fuel rod characteristics in the range of the accepted parameter values is also presented in the paper

  20. Accuracy of finite-element models for the stress analysis of multiple-holed moderator blocks

    International Nuclear Information System (INIS)

    Smith, P.D.; Sullivan, R.M.; Lewis, A.C.; Yu, H.J.

    1981-01-01

    Two steps have been taken to quantify and improve the accuracy in the analysis. First, the limitations of various approximation techniques have been studied with the aid of smaller benchmark problems containing fewer holes. Second, a new family of computer programs has been developed for handling such large problems. This paper describes the accuracy studies and the benchmark problems. A review is given of some proposed modeling techniques including local mesh refinement, homogenization, a special-purpose finite element, and substructuring. Some limitations of these approaches are discussed. The new finite element programs and the features that contribute to their efficiency are discussed. These include a standard architecture for out-of-core data processing and an equation solver that operates on a peripheral array processor. The central conclusions of the paper are: (1) modeling approximation methods such as local mesh refinement and homogenization tend to be unreliable, and they should be justified by a fine mesh benchmark analysis; and (2) finite element codes are now available that can achieve accurate solutions at a reasonable cost, and there is no longer a need to employ modeling approximations in the two-dimensional analysis of HTGR fuel elements. 10 figures

  1. A CABAC codec of H.264AVC with secure arithmetic coding

    Science.gov (United States)

    Neji, Nihel; Jridi, Maher; Alfalou, Ayman; Masmoudi, Nouri

    2013-02-01

    This paper presents an optimized H.264/AVC coding system for HDTV displays based on a typical flow with high coding efficiency and statistics-adaptivity features. For high-quality streaming, the codec uses a high-complexity binary arithmetic encoding/decoding algorithm and a JVCE (joint video compression and encryption) scheme. In fact, particular attention is given to simultaneous compression and encryption applications, to gain security without compromising the speed of transactions [1]. The proposed design allows us to encrypt the information using a pseudo-random number generator (PRNG). We thus achieve the two operations (compression and encryption) simultaneously and in a dependent manner, which is a novelty in this kind of architecture. Moreover, we investigated the hardware implementation of the CABAC (context-based adaptive binary arithmetic coding) codec. The proposed architecture is based on an optimized binarizer/de-binarizer to handle videos with significant pixel rates at low cost and with high performance for the most frequent SEs (syntax elements). This was checked using HD video frames. The synthesis results obtained using an FPGA (Xilinx ISE) show that our design is suitable for coding main-profile video streams.

  2. Information Architecture: The Data Warehouse Foundation.

    Science.gov (United States)

    Thomas, Charles R.

    1997-01-01

    Colleges and universities are initiating data warehouse projects to provide integrated information for planning and reporting purposes. A survey of 40 institutions with active data warehouse projects reveals the kinds of tools, contents, data cycles, and access currently used. Essential elements of an integrated information architecture are…

  3. Towards the optimization of a gyrokinetic Particle-In-Cell (PIC) code on large-scale hybrid architectures

    International Nuclear Information System (INIS)

    Ohana, N; Lanti, E; Tran, T M; Brunner, S; Hariri, F; Villard, L; Jocksch, A; Gheller, C

    2016-01-01

    With the aim of enabling state-of-the-art gyrokinetic PIC codes to benefit from the performance of recent multithreaded devices, we developed an application from a platform called the “PIC-engine” [1, 2, 3] embedding simplified basic features of the PIC method. The application solves the gyrokinetic equations in a sheared plasma slab using B-spline finite elements up to fourth order to represent the self-consistent electrostatic field. Preliminary studies of the so-called Particle-In-Fourier (PIF) approach, which uses Fourier modes as basis functions in the periodic dimensions of the system instead of the real-space grid, show that this method can be faster than PIC for simulations with a small number of Fourier modes. Similarly to the PIC-engine, multiple levels of parallelism have been implemented using MPI+OpenMP [2] and MPI+OpenACC [1], the latter exploiting the computational power of GPUs without requiring complete code rewriting. It is shown that sorting particles [3] can lead to performance improvement by increasing data locality and vectorizing grid memory access. Weak scalability tests have been successfully run on the GPU-equipped Cray XC30 Piz Daint (at CSCS) up to 4,096 nodes. The reduced time-to-solution will enable more realistic and thus more computationally intensive simulations of turbulent transport in magnetic fusion devices. (paper)
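    The particle-sorting idea of [3] is easy to illustrate in isolation; the sketch below (grid size, particle count and nearest-grid-point deposition are arbitrary choices, not the B-spline scheme of the paper) reorders the particle arrays by owning cell so that deposition streams through memory cell by cell.

        import numpy as np

        rng = np.random.default_rng(2)
        nx, n_part = 128, 100_000
        pos = rng.random(n_part)                   # particle positions in [0, 1)
        weight = np.full(n_part, 1.0 / n_part)

        cell = (pos * nx).astype(np.int64)         # owning cell of each particle
        order = np.argsort(cell, kind="stable")    # sort particles by cell index
        pos, cell, weight = pos[order], cell[order], weight[order]

        # nearest-grid-point charge deposition now has contiguous memory access
        rho = np.bincount(cell, weights=weight, minlength=nx)
        print(rho.sum())                           # total charge conserved: 1.0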

  4. From green architecture to architectural green

    DEFF Research Database (Denmark)

    Earon, Ofri

    2011-01-01

    The paper investigates the topic of green architecture from an architectural point of view and not an energy point of view. The purpose of the paper is to establish a debate about the architectural language and spatial characteristics of green architecture. In this light, green becomes an adjective that describes the architectural exclusivity of this particular architecture genre. The adjective green expresses architectural qualities differentiating green architecture from non-green architecture. Currently, adding trees and vegetation to the building’s facade is the main architectural characteristic...... they have overshadowed the architectural potential of green architecture. The paper questions how a green space should perform, look and function. Two examples are chosen to demonstrate thorough integrations between green and space. The examples are public buildings categorized as pavilions. One......

  5. ISLAMIZATION OF CONTEMPORARY ARCHITECTURE: SHIFTING THE PARADIGM OF ISLAMIC ARCHITECTURE

    Directory of Open Access Journals (Sweden)

    Mustapha Ben- Hamouche

    2012-03-01

    Full Text Available Islamic architecture is often taught as a history course and thus finds its material limited to the cataloguing and study of the legacies of successive empires or the various geographic regions of the Islamic world. In practice, adherent professionals tend to reproduce high styles such as Umayyad, Abbasid, Fatimid, Ottoman, etc., or recycle well-known elements such as minarets, courtyards, and mashrabiyyahs. This approach, endorsed by the present comprehensive Islamic revival, is believed to be the way to defend and revitalize the identity of Muslim societies that was initially affected by colonization and is now being offended by globalization. However, this approach often clashes with contemporary trends in architecture that do not necessarily oppose the essence of Islamic architecture. Furthermore, it sometimes leads to an erroneous belief that consists of relating a priori forms to Islam and that clashes with the timeless and universal character of the Islamic religion. The key question to be asked, then, is: beyond this historicist view, what would an “Islamic architecture” of nowadays be, one that originates from the essence of Islam and that responds to the contemporary conditions, needs and aspirations of present Muslim societies and individuals? To what extent can Islamic architecture benefit from modern progress and contemporary thought in resurrecting itself without losing its essence? The hypothesis of the study is that, just as early Muslim architecture started from the adoption, use and re-use of pre-Islamic architectures before reaching originality, this process, called Islamization, could also take place nowadays with contemporary thought, which is mostly developed in Western and non-Islamic environments. Mechanisms in Islam that allowed the “absorption” of pre-existing civilizations should thus structure the Islamization approach and serve scholars and professionals in reaching the new Islamic architecture. The

  6. Development of MCNP interface code in HFETR

    International Nuclear Information System (INIS)

    Qiu Liqing; Fu Rong; Deng Caiyu

    2007-01-01

    In order to describe the HFETR core with the MCNP method, the MCNPIP interface code between HFETR and the MCNP code has been developed. This paper introduces the core DXSY and the flowchart of the MCNPIP code, the handling of the compositions of fuel elements, and the requirements on hardware and software. Finally, the MCNPIP code is validated against practical application. (authors)

  7. Unified transform architecture for AVC, AVS, VC-1 and HEVC high-performance codecs

    Science.gov (United States)

    Dias, Tiago; Roma, Nuno; Sousa, Leonel

    2014-12-01

    A unified architecture for fast and efficient computation of the set of two-dimensional (2-D) transforms adopted by the most recent state-of-the-art digital video standards is presented in this paper. Contrasting to other designs with similar functionality, the presented architecture is supported on a scalable, modular and completely configurable processing structure. This flexible structure not only allows to easily reconfigure the architecture to support different transform kernels, but it also permits its resizing to efficiently support transforms of different orders (e.g. order-4, order-8, order-16 and order-32). Consequently, not only is it highly suitable to realize high-performance multi-standard transform cores, but it also offers highly efficient implementations of specialized processing structures addressing only a reduced subset of transforms that are used by a specific video standard. The experimental results that were obtained by prototyping several configurations of this processing structure in a Xilinx Virtex-7 FPGA show the superior performance and hardware efficiency levels provided by the proposed unified architecture for the implementation of transform cores for the Advanced Video Coding (AVC), Audio Video coding Standard (AVS), VC-1 and High Efficiency Video Coding (HEVC) standards. In addition, such results also demonstrate the ability of this processing structure to realize multi-standard transform cores supporting all the standards mentioned above and that are capable of processing the 8k Ultra High Definition Television (UHDTV) video format (7,680 × 4,320 at 30 fps) in real time.
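    For concreteness, the order-4 core transform of H.264/AVC computed separably as Y = C·X·C^T; this row-column structure is exactly the kind of 1-D datapath reuse that lets a single configurable processing structure cover different kernels and transform orders (the sketch shows standard AVC arithmetic, not the proposed hardware).

        import numpy as np

        C = np.array([[1,  1,  1,  1],     # order-4 forward core transform of H.264/AVC
                      [2,  1, -1, -2],
                      [1, -1, -1,  1],
                      [1, -2,  2, -1]])

        def forward_2d(X):
            # 1-D transform applied to columns, then to rows (separable 2-D transform)
            return C @ X @ C.T

        X = np.arange(16).reshape(4, 4)    # a 4x4 residual block (illustrative data)
        print(forward_2d(X))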

  8. Properties of non-coding DNA and identification of putative cis-regulatory elements in Theileria parva

    Directory of Open Access Journals (Sweden)

    Guo Xiang

    2008-12-01

    Full Text Available Abstract Background Parasites in the genus Theileria cause lymphoproliferative diseases in cattle, resulting in enormous socio-economic losses. The availability of the genome sequences and annotation for T. parva and T. annulata has facilitated the study of parasite biology and their relationship with host cell transformation and tropism. However, the mechanism of transcriptional regulation in this genus, which may be key to understanding fundamental aspects of its parasitology, remains poorly understood. In this study, we analyze the evolution of non-coding sequences in the Theileria genome and identify conserved sequence elements that may be involved in gene regulation of these parasitic species. Results Intergenic regions and introns in Theileria are short, and their length distributions are considerably right-skewed. Intergenic regions flanked by genes in 5'-5' orientation tend to be longer and slightly more AT-rich than those flanked by two stop codons; intergenic regions flanked by genes in 3'-5' orientation have intermediate values of length and AT composition. Intron position is negatively correlated with intron length, and positively correlated with GC content. Using stringent criteria, we identified a set of high-quality orthologous non-coding sequences between T. parva and T. annulata, and determined the distribution of selective constraints across regions, which are shown to be higher close to translation start sites. A positive correlation between constraint and length in both intergenic regions and introns suggests a tight control over length expansion of non-coding regions. Genome-wide searches for functional elements revealed several conserved motifs in intergenic regions of Theileria genomes. Two such motifs are preferentially located within the first 60 base pairs upstream of transcription start sites in T. parva, are preferentially associated with specific protein functional categories, and have significant similarity to know

  9. Structural elements design manual

    CERN Document Server

    Draycott, Trevor

    2012-01-01

    Gives clear explanations of the logical design sequence for structural elements. The Structural Engineer says: 'The book explains, in simple terms, and with many examples, Code of Practice methods for sizing structural sections in timber, concrete, masonry and steel. It is the combination into one book of section sizing methods in each of these materials that makes this text so useful... Students will find this an essential support text to the Codes of Practice in their study of element sizing'.

  10. Traceability of Requirements and Software Architecture for Change Management

    NARCIS (Netherlands)

    Göknil, Arda

    2011-01-01

    At present, software systems are getting more and more complex. The requirements of software systems change continuously and new requirements emerge frequently. New and/or modified requirements are integrated with the existing ones, and adaptations to the architecture and source code of the system

  11. Objective Oriented Design of Architecture for TH System Safety Analysis Code and Verification

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Bub Dong

    2008-03-15

    In this work, an object-oriented design of a generic system analysis code has been attempted, based on previous work at KAERI on a two-phase, three-field pilot code. Input and output design, the TH solver, the component model, special TH models, the heat structure solver, the general table, trip and control, and on-line graphics have all been implemented. All essential features for system analysis have been designed and implemented in the final product, the SYSTF code. The C computer language was used for implementation in the Visual Studio 2008 IDE (Integrated Development Environment), since it is easier and lighter than C++. The code has simple and essential models and correlations, special components, special TH models and a heat structure model. The input features, however, are able to simulate various scenarios, such as steady state, non-LOCA transients and LOCA accidents. The structural validity has been tested through various verification tests, and it has been shown that the developed code can treat non-LOCA and LOCA simulations. However, more detailed design and implementation of models are required to achieve physical validity of SYSTF code simulations.

  12. Objective Oriented Design of Architecture for TH System Safety Analysis Code and Verification

    International Nuclear Information System (INIS)

    Chung, Bub Dong

    2008-03-01

    In this work, an object-oriented design of a generic system analysis code has been attempted, based on previous work at KAERI on a two-phase, three-field pilot code. Input and output design, the TH solver, the component model, special TH models, the heat structure solver, the general table, trip and control, and on-line graphics have all been implemented. All essential features for system analysis have been designed and implemented in the final product, the SYSTF code. The C computer language was used for implementation in the Visual Studio 2008 IDE (Integrated Development Environment), since it is easier and lighter than C++. The code has simple and essential models and correlations, special components, special TH models and a heat structure model. The input features, however, are able to simulate various scenarios, such as steady state, non-LOCA transients and LOCA accidents. The structural validity has been tested through various verification tests, and it has been shown that the developed code can treat non-LOCA and LOCA simulations. However, more detailed design and implementation of models are required to achieve physical validity of SYSTF code simulations.

  13. Collision detection of convex polyhedra on the NVIDIA GPU architecture for the discrete element method

    CSIR Research Space (South Africa)

    Govender, Nicolin

    2015-09-01

    Full Text Available consideration due to the architectural differences between CPU and GPU platforms. This paper describes the DEM algorithms and heuristics that are optimized for the parallel NVIDIA Kepler GPU architecture in detail. This includes a GPU optimized collision...

  14. Applications of the ARGUS code in accelerator physics

    International Nuclear Information System (INIS)

    Petillo, J.J.; Mankofsky, A.; Krueger, W.A.; Kostas, C.; Mondelli, A.A.; Drobot, A.T.

    1993-01-01

    ARGUS is a three-dimensional, electromagnetic, particle-in-cell (PIC) simulation code that is being distributed to U.S. accelerator laboratories in a collaboration between SAIC and the Los Alamos Accelerator Code Group. It uses a modular architecture that allows multiple physics modules to share common utilities for grid and structure input, memory management, disk I/O, and diagnostics. Physics modules are in place for electrostatic and electromagnetic field solutions, frequency-domain (eigenvalue) solutions, time-dependent PIC, and steady-state PIC simulations. All of the modules are implemented with a domain-decomposition architecture that allows large problems to be broken up into pieces that fit in core and that facilitates the adaptation of ARGUS for parallel processing. ARGUS operates on either Cray or workstation platforms, and a MOTIF-based user interface is available for X-windows terminals. Applications of ARGUS in accelerator physics and design are described in this paper

  15. Architecture on Architecture

    DEFF Research Database (Denmark)

    Olesen, Karen

    2016-01-01

    This paper will discuss the challenges faced by architectural education today. It takes as its starting point the double commitment of any school of architecture: on the one hand the task of preserving the particular knowledge that belongs to the discipline of architecture, and on the other hand ... that is not scientific or academic but is more like a latent body of data that we find embedded in existing works of architecture. This information, it is argued, is not limited by the historical context of the work. It can be thought of as a virtual capacity – a reservoir of spatial configurations that can ... correlation between the study of existing architectures and the training of competences to design for present-day realities.

  16. The NIMROD Code

    Science.gov (United States)

    Schnack, D. D.; Glasser, A. H.

    1996-11-01

    NIMROD is a new code system that is being developed for the analysis of modern fusion experiments. It is being designed from the beginning to make the maximum use of massively parallel computer architectures and computer graphics. The NIMROD physics kernel solves the three-dimensional, time-dependent two-fluid equations with neo-classical effects in toroidal geometry of arbitrary poloidal cross section. The NIMROD system also includes a pre-processor, a grid generator, and a post processor. User interaction with NIMROD is facilitated by a modern graphical user interface (GUI). The NIMROD project is using Quality Function Deployment (QFD) team management techniques to minimize re-engineering and reduce code development time. This paper gives an overview of the NIMROD project. Operation of the GUI is demonstrated, and the first results from the physics kernel are given.

  17. Cellulose as an Architectural Element in Spatially Structured Escherichia coli Biofilms

    Science.gov (United States)

    Serra, Diego O.; Richter, Anja M.

    2013-01-01

    Morphological form in multicellular aggregates emerges from the interplay of genetic constitution and environmental signals. Bacterial macrocolony biofilms, which form intricate three-dimensional structures, such as large and often radially oriented ridges, concentric rings, and elaborate wrinkles, provide a unique opportunity to understand this interplay of “nature and nurture” in morphogenesis at the molecular level. Macrocolony morphology depends on self-produced extracellular matrix components. In Escherichia coli, these are stationary phase-induced amyloid curli fibers and cellulose. While the widely used “domesticated” E. coli K-12 laboratory strains are unable to generate cellulose, we could restore cellulose production and macrocolony morphology of E. coli K-12 strain W3110 by “repairing” a single chromosomal SNP in the bcs operon. Using scanning electron and fluorescence microscopy, cellulose filaments, sheets and nanocomposites with curli fibers were localized in situ at cellular resolution within the physiologically two-layered macrocolony biofilms of this “de-domesticated” strain. As an architectural element, cellulose confers cohesion and elasticity, i.e., tissue-like properties that—together with the cell-encasing curli fiber network and geometrical constraints in a growing colony—explain the formation of long and high ridges and elaborate wrinkles of wild-type macrocolonies. In contrast, a biofilm matrix consisting of the curli fiber network only is brittle and breaks into a pattern of concentric dome-shaped rings separated by deep crevices. These studies now set the stage for clarifying how regulatory networks and in particular c-di-GMP signaling operate in the three-dimensional space of highly structured and “tissue-like” bacterial biofilms. PMID:24097954

  18. Structural evaluation method for class 1 vessels by using elastic-plastic finite element analysis in code case of JSME rules on design and construction

    International Nuclear Information System (INIS)

    Asada, Seiji; Hirano, Takashi; Nagata, Tetsuya; Kasahara, Naoto

    2008-01-01

    A structural evaluation method using elastic-plastic finite element analysis has been developed and published as a code case of the Rules on Design and Construction for Nuclear Power Plants (The First Part: Light Water Reactor Structural Design Standard) in the JSME Codes for Nuclear Power Generation Facilities. Its title is 'Alternative Structural Evaluation Criteria for Class 1 Vessels Based on Elastic-Plastic Finite Element Analysis' (NC-CC-005). This code case applies elastic-plastic analysis to the evaluation of such failure modes as plastic collapse, thermal ratchet, fatigue and so on. The advantages of this evaluation method are freedom from stress classification, consistent use of Mises stress, and applicability to complex 3-dimensional structures that are hard to treat by the conventional stress classification method. The evaluation method for plastic collapse has such variations as the Lower Bound Approach Method, the Twice-Elastic-Slope Method and the Elastic Compensation Method. The Cyclic Yield Area (CYA) based on elastic analysis is applied to the screening evaluation of thermal ratchet instead of secondary stress evaluation, and elastic-plastic analysis is performed when the CYA screening criterion is not satisfied. Strain concentration factors can be directly calculated based on elastic-plastic analysis. (author)

  19. Theoretical Interpretation of Modular Artistic Forms Based on the Example of Contemporarylithuanian Architecture

    Directory of Open Access Journals (Sweden)

    Aušra Černauskienė

    2015-05-01

    Full Text Available The article analyses modular artistic forms that emerge in all scale structures of contemporary architecture. The module, as a standard unit of measure, has been in use since antiquity. It gained even more significance amid the innovative building and computing technologies of the 20th and 21st centuries. Static and fixed perceptions of a module were supplemented with concepts of dynamic and adaptable modular units, such as fractals, parameters and algorithms. Various expressions and trends of modular design appear in the contemporary architecture of Lithuania, where modular forms consist of repetitive spatial and planar elements. Spatial modules such as blocks or flats and planar modular wall elements are a characteristic expression of contemporary architecture in Lithuania.

  20. Effective coding with VHDL principles and best practice

    CERN Document Server

    Jasinski, Ricardo

    2016-01-01

    A guide to applying software design principles and coding practices to VHDL to improve the readability, maintainability, and quality of VHDL code. This book addresses an often-neglected aspect of the creation of VHDL designs. A VHDL description is also source code, and VHDL designers can use the best practices of software development to write high-quality code and to organize it in a design. This book presents this unique set of skills, teaching VHDL designers of all experience levels how to apply the best design principles and coding practices from the software world to the world of hardware. The concepts introduced here will help readers write code that is easier to understand and more likely to be correct, with improved readability, maintainability, and overall quality. After a brief review of VHDL, the book presents fundamental design principles for writing code, discussing such topics as design, quality, architecture, modularity, abstraction, and hierarchy. Building on these concepts, the book then int...

  1. Preserving urban objects of historicaland architectural heritage

    Directory of Open Access Journals (Sweden)

    Bal'zannikova Ekaterina Mikhailovna

    2014-01-01

    structural elements, delivering building materials, preparing the construction site and the basic period when condemned structures are demolished, new design elements are formed and assembled, interior finishing work is performed and the object facade is restored. In contrast to it, our method includes additional periods and a performance list. In particular, it is proposed to carry out a research period prior to the preparatory period, and after the basic period there should be the ending period.Thus, during the research period it is necessary to study urban development features in architectural and town-planning environment, to identify the historical and architectural value of the object, to estimate its ramshackle state and whether it is habitable, to determine the relationship of the object with the architectural and aesthetic image of surrounding objects and to develop a conservation program; and during the ending period it is proposed to assess the historical and architectural significance of the reconstructed object in relation to the aesthetic and architectural image of the surrounding area. The proposed complex method will increase the attractiveness of a historical and architectural heritage object and its surrounding area for tourists and, consequently, raise the cultural level of the visitors. Furthermore, the method will ensure the construction of recreation zones, their more frequent usage and visiting surrounding objects of social infrastructure, because more opportunities for cultural and aesthetic pastime will be offered. The method will also provide a more reasonable and effective use of available funding due to the careful analysis and proper choice of the methods to preserve objects of historical and architectural heritage.

  2. An Architectural Modelfor Intelligent Network Management

    Institute of Scientific and Technical Information of China (English)

    罗军舟; 顾冠群; 费翔

    2000-01-01

    The traditional network management approach involves the management of each vendor's equipment and network segments in isolation through its own proprietary element management system. It is necessary to set up a new network management architecture that calls for operation consolidation across vendor and technology boundaries. In this paper, an architectural model for Intelligent Network Management (INM) is presented. The INM system includes a manager system, which controls all subsystems and coordinates different management tasks; an expert system, which is responsible for handling particularly difficult problems; and intelligent agents, which bring the management closer to applications and user requirements by spreading intelligent agents through network segments or domains. In the proposed expert system model, an intelligent fault management system in particular is given. The architectural model is intended to build the INM system to meet the needs of managing modern network systems.

  3. Simply architecture or bioclimatic architecture?; Arquitectura bioclimatica o simplemente Arquitectura?

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Torres, Juan Manuel [Universidad de Guanajuato (Mexico)

    2006-10-15

    Bioclimatic architecture is the architecture that profits from its position in the environment and from its architectonic elements to take advantage of the climate, with the aim of reaching internal thermal comfort without using mechanical systems. This article relates the history of this singular kind of architecture over the centuries, and also emphasizes the use of sunlight as a very efficient means through which buildings can be designed to achieve the desired thermal well-being.

  4. Continuous Materiality: Through a Hierarchy of Computational Codes

    Directory of Open Access Journals (Sweden)

    Jichen Zhu

    2008-01-01

    Full Text Available The legacy of Cartesian dualism inherent in linguistic theory deeply influences current views on the relation between natural language, computer code, and the physical world. However, the oversimplified distinction between mind and body falls short of capturing the complex interaction between the material and the immaterial. In this paper, we posit a hierarchy of codes to delineate a wide spectrum of continuous materiality. Our research suggests that diagrams in architecture provide a valuable analog for approaching computer code in emergent digital systems. After commenting on ways that Cartesian dualism continues to haunt discussions of code, we turn our attention to diagrams and design morphology. Finally we notice the implications a material understanding of code bears for further research on the relation between human cognition and digital code. Our discussion concludes by noticing several areas that we have projected for ongoing research.

  5. Art of Film – A Way of Architectural Communication

    Directory of Open Access Journals (Sweden)

    Liliana Petrovici

    2009-01-01

    Full Text Available The art of film, the most popular art of the 20th century, can represent for architecture a means of teaching and promoting its specific values, an inspirational source and a good example of efficient and accessible cultural communication. Architecture presents many resemblances to the world of film regarding the concept and the exploration of space for communicating ideas or concepts. Both film and architecture have narrative qualities, work with the world of illusions and representations, and compose various elements in order to convey certain significances. Film makers draw on the suggestive and semantic potential of architecture to render states and attitudes, to outline certain meanings or to emit opinions and comments on political, psychological and social issues.

  6. Performance Analysis of Faulty Gallager-B Decoding of QC-LDPC Codes with Applications

    Directory of Open Access Journals (Sweden)

    O. Al Rasheed

    2014-06-01

    Full Text Available In this paper we evaluate the performance of the Gallager-B algorithm, used for decoding low-density parity-check (LDPC) codes, under unreliable message computation. Our analysis is restricted to LDPC codes constructed from circulant matrices (QC-LDPC codes). Using Monte Carlo simulation we investigate the effects of different code parameters on coding system performance, under a binary symmetric communication channel and an independent transient faults model. One possible application of the presented analysis to designing memory architectures with unreliable components is considered.
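
    The record does not reproduce the simulation code, but the hard-decision message-passing iteration it studies is compact. Below is a minimal, illustrative NumPy sketch of Gallager-B decoding; the flipping threshold b, the decision rule and all names are assumptions for the example, not taken from the paper:

```python
import numpy as np

def gallager_b(H, y, b=2, max_iter=30):
    """Toy hard-decision Gallager-B decoder (illustrative, not optimized).
    H: (m, n) parity-check matrix of 0/1 ints; y: hard bits from a BSC;
    b: flipping threshold on extrinsic check votes (code-dependent)."""
    H = H.astype(int)
    y = np.asarray(y, dtype=int)
    m, n = H.shape
    v2c = np.tile(y, (m, 1)) * H                  # variable-to-check messages
    x = y.copy()
    for _ in range(max_iter):
        parity = v2c.sum(axis=1) % 2              # parity of each check node
        c2v = ((parity[:, None] + v2c) % 2) * H   # exclude own contribution
        votes = ((c2v != y) & (H == 1)).sum(axis=0)   # checks disagreeing with y
        x = np.where(votes > H.sum(axis=0) / 2, 1 - y, y)  # majority decision
        if ((H @ x) % 2 == 0).all():
            break                                 # every check satisfied
        # send the complement of y wherever at least b *other* checks disagree
        extrinsic = votes[None, :] - ((c2v != y) & (H == 1))
        v2c = np.where((extrinsic >= b) & (H == 1), 1 - y, y) * H
    return x
```

    A fault-injection study in the spirit of the paper would additionally flip each computed message with some small probability inside the iteration loop.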

  7. Paraxial diffractive elements for space-variant linear transforms

    Science.gov (United States)

    Teiwes, Stephan; Schwarzer, Heiko; Gu, Ben-Yuan

    1998-06-01

    Optical linear transform architectures bear good potential for future developments of very powerful hybrid vision systems and neural network classifiers. The optical modules of such systems could be used as pre-processors to solve complex linear operations at very high speed in order to simplify an electronic data post-processing. However, the applicability of linear optical architectures is strongly connected with the fundamental question of how to implement a specific linear transform by optical means and physical imitations. The large majority of publications on this topic focusses on the optical implementation of space-invariant transforms by the well-known 4f-setup. Only few papers deal with approaches to implement selected space-variant transforms. In this paper, we propose a simple algebraic method to design diffractive elements for an optical architecture in order to realize arbitrary space-variant transforms. The design procedure is based on a digital model of scalar, paraxial wave theory and leads to optimal element transmission functions within the model. Its computational and physical limitations are discussed in terms of complexity measures. Finally, the design procedure is demonstrated by some examples. Firstly, diffractive elements for the realization of different rotation operations are computed and, secondly, a Hough transform element is presented. The correct optical functions of the elements are proved in computer simulation experiments.

  8. Implementation of Layered Decoding Architecture for LDPC Code using Layered Min-Sum Algorithm

    OpenAIRE

    Sandeep Kakde; Atish Khobragade; Shrikant Ambatkar; Pranay Nandanwar

    2017-01-01

    For the binary field and long code lengths, Low Density Parity Check (LDPC) codes approach Shannon-limit performance. LDPC codes provide remarkable error correction performance and therefore enlarge the design space for communication systems. In this paper, we have compared different digital modulation techniques and found that the BPSK modulation technique is better than other modulation techniques in terms of BER. It also gives the error performance of the LDPC decoder over an AWGN channel using the Min-Sum algori...
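
    The abstract is truncated, but the check-node update at the heart of layered min-sum decoding is standard and small. A schematic NumPy version is sketched below, with row-per-layer scheduling; the matrix, the LLR sign convention and all names are illustrative assumptions:

```python
import numpy as np

def layered_min_sum(H, llr_in, max_iter=10):
    """Sketch of layered (row-by-row) min-sum LDPC decoding.
    H: (m, n) 0/1 parity-check matrix; llr_in: channel LLRs, positive
    values favouring bit 0 (a common but not universal convention)."""
    m, n = H.shape
    L = np.asarray(llr_in, dtype=float).copy()   # posterior LLRs
    R = np.zeros((m, n))                         # stored check messages
    x = (L < 0).astype(int)
    for _ in range(max_iter):
        for i in range(m):                       # one "layer" per check row
            cols = np.flatnonzero(H[i])
            Q = L[cols] - R[i, cols]             # extrinsic inputs to check i
            sgn = np.prod(np.sign(Q))
            mags = np.abs(Q)
            k1 = mags.argmin()
            m1 = mags[k1]
            m2 = np.partition(mags, 1)[1] if len(mags) > 1 else m1
            for t, j in enumerate(cols):
                mag = m2 if t == k1 else m1      # min over the *other* edges
                R[i, j] = sgn * np.sign(Q[t]) * mag
            L[cols] = Q + R[i, cols]             # immediate (layered) update
        x = (L < 0).astype(int)
        if ((H @ x) % 2 == 0).all():
            break
    return x
```

    A hardware layered decoder would hold R in per-row memories and replace the inner Python loops with parallel compare-select logic.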

  9. HiMoP: A three-component architecture to create more human-acceptable social-assistive robots : Motivational architecture for assistive robots.

    Science.gov (United States)

    Rodríguez-Lera, Francisco J; Matellán-Olivera, Vicente; Conde-González, Miguel Á; Martín-Rico, Francisco

    2018-05-01

    Generation of autonomous behavior for robots is a general unsolved problem. Users perceive robots as repetitive tools that do not respond to dynamic situations. This research deals with the generation of natural behaviors in assistive service robots for dynamic domestic environments, particularly, a motivational-oriented cognitive architecture to generate more natural behaviors in autonomous robots. The proposed architecture, called HiMoP, is based on three elements: a Hierarchy of needs to define robot drives; a set of Motivational variables connected to robot needs; and a Pool of finite-state machines to run robot behaviors. The first element is inspired in Alderfer's hierarchy of needs, which specifies the variables defined in the motivational component. The pool of finite-state machine implements the available robot actions, and those actions are dynamically selected taking into account the motivational variables and the external stimuli. Thus, the robot is able to exhibit different behaviors even under similar conditions. A customized version of the "Speech Recognition and Audio Detection Test," proposed by the RoboCup Federation, has been used to illustrate how the architecture works and how it dynamically adapts and activates robots behaviors taking into account internal variables and external stimuli.
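
    The three components named in the abstract map naturally onto a small object model. The following sketch is a loose illustration of the idea (needs, motivational variables, and a pool of behaviors with slightly randomized selection); all class and variable names are invented for the example and are not the published HiMoP API:

```python
import random

class Behavior:
    """Toy stand-in for one finite-state machine in the pool."""
    def __init__(self, name, satisfies):
        self.name, self.satisfies = name, satisfies
    def run(self, robot):
        robot.needs[self.satisfies] = 0.0   # running a behavior resets its drive

class HiMoPLikeRobot:
    """Minimal motivational action-selection loop (illustrative only)."""
    def __init__(self):
        # hierarchy of needs -> motivational variables (0 = fully satisfied)
        self.needs = {"energy": 0.2, "social": 0.5, "task": 0.9}
        self.pool = [Behavior("recharge", "energy"),
                     Behavior("greet_user", "social"),
                     Behavior("fetch_item", "task")]
    def step(self, stimuli):
        # external stimuli bias the motivational variables
        for need, boost in stimuli.items():
            self.needs[need] = min(1.0, self.needs[need] + boost)
        # pick the behavior serving the most pressing need, with a little
        # randomness so identical conditions can yield different behaviors
        weights = [self.needs[b.satisfies] + random.uniform(0.0, 0.1)
                   for b in self.pool]
        chosen = self.pool[weights.index(max(weights))]
        chosen.run(self)
        return chosen.name

robot = HiMoPLikeRobot()
print(robot.step({"social": 0.6}))   # may differ between runs by design
```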

  10. HETERO code, heterogeneous procedure for reactor calculation

    International Nuclear Information System (INIS)

    Jovanovic, S.M.; Raisic, N.M.

    1966-11-01

    This report describes a procedure for calculating the parameters of a heterogeneous reactor system, taking into account the interaction between fuel elements in an established geometry. The first part contains the analysis of a single fuel element in a diffusion medium, and the criticality condition of the reactor system described by superposition of the element interactions. The possibility of performing such an analysis by determination of the heterogeneous system lattice is described in the second part. The computer code HETERO, with the code KETAP (calculation of the criticality factor η_n and of the flux distribution), is part of this report, together with the example of the RB reactor square lattice.

  11. Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications

    Science.gov (United States)

    OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)

    1998-01-01

    This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).

  12. High-Speed Soft-Decision Decoding of Two Reed-Muller Codes

    Science.gov (United States)

    Lin, Shu; Uehara, Gregory T.

    1996-01-01

    In this research, we have proposed the (64, 40, 8) subcode of the third-order Reed-Muller (RM) code to NASA for high-speed satellite communications. This RM subcode can be used either alone or as an inner code of a concatenated coding system with the NASA standard (255, 223, 33) Reed-Solomon (RS) code as the outer code to achieve high performance (or low bit-error rate) with reduced decoding complexity. It can also be used as a component code in a multilevel bandwidth-efficient coded modulation system to achieve reliable bandwidth-efficient data transmission. This report will summarize the key progress we have made toward achieving our eventual goal of implementing a decoder system based upon this code. In the first phase of study, we investigated the complexities of various sectionalized trellis diagrams for the proposed (64, 40, 8) RM subcode. We found a specific 8-section trellis diagram for this code which requires the least decoding complexity with a high possibility of achieving a decoding speed of 600 Mbits per second (Mbps). The combination of a large number of states and a high data rate will be made possible due to the utilization of a high degree of parallelism throughout the architecture. This trellis diagram will be presented and briefly described. In the second phase of study, which was carried out through the past year, we investigated circuit architectures to determine the feasibility of VLSI implementation of a high-speed Viterbi decoder based on this 8-section trellis diagram. We began to examine specific design and implementation approaches to implement a fully custom integrated circuit (IC) which will be a key building block for a decoder system implementation. The key results will be presented in this report. This report will be divided into three primary sections. First, we will briefly describe the system block diagram in which the proposed decoder is assumed to be operating and present some of the key architectural approaches being used to

  13. Optimized reversible binary-coded decimal adders

    DEFF Research Database (Denmark)

    Thomsen, Michael Kirkedal; Glück, Robert

    2008-01-01

    Abstract Babu and Chowdhury [H.M.H. Babu, A.R. Chowdhury, Design of a compact reversible binary coded decimal adder circuit, Journal of Systems Architecture 52 (5) (2006) 272-282] recently proposed, in this journal, a reversible adder for binary-coded decimals. This paper corrects and optimizes...... their design. The optimized 1-decimal BCD full-adder, a 13 × 13 reversible logic circuit, is faster, and has lower circuit cost and less garbage bits. It can be used to build a fast reversible m-decimal BCD full-adder that has a delay of only m + 17 low-power reversible CMOS gates. For a 32-decimal (128-bit....... Keywords: Reversible logic circuit; Full-adder; Half-adder; Parallel adder; Binary-coded decimal; Application of reversible logic synthesis...
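
    The underlying arithmetic of a BCD full-adder is ordinary decimal addition with the classic +6 correction whenever a digit leaves the 0-9 range. The sketch below is plain, non-reversible Python, purely to illustrate the arithmetic that the reversible circuit implements:

```python
def bcd_add(a, b):
    """Add two BCD-encoded unsigned integers digit by digit.
    Each decimal digit occupies one 4-bit nibble; when a digit sum
    exceeds 9, the +6 correction pushes the carry into the next nibble.
    (Illustrates the arithmetic only, not the reversible-logic circuit.)"""
    result, carry, shift = 0, 0, 0
    while a or b or carry:
        da, db = a & 0xF, b & 0xF          # low nibbles = current digits
        s = da + db + carry
        if s > 9:                          # out of BCD range: correct by +6
            s += 6
        result |= (s & 0xF) << shift
        carry = s >> 4
        a, b, shift = a >> 4, b >> 4, shift + 4
    return result

assert bcd_add(0x0739, 0x0288) == 0x1027   # 739 + 288 = 1027 in BCD
```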

  14. ISTTOK real-time architecture

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, Ivo S., E-mail: ivoc@ipfn.ist.utl.pt; Duarte, Paulo; Fernandes, Horácio; Valcárcel, Daniel F.; Carvalho, Pedro J.; Silva, Carlos; Duarte, André S.; Neto, André; Sousa, Jorge; Batista, António J.N.; Hekkert, Tiago; Carvalho, Bernardo B.

    2014-03-15

    Highlights: • All real-time diagnostics and actuators were integrated in the same control platform. • A 100 μs control cycle was achieved under the MARTe framework. • Time-windows based control with several event-driven control strategies implemented. • AC discharges with exception handling on iron core flux saturation. • An HTML discharge configuration was developed for configuring the MARTe system. - Abstract: The ISTTOK tokamak was upgraded with a plasma control system based on the Advanced Telecommunications Computing Architecture (ATCA) standard. This control system was designed to improve the discharge stability and to extend the operational space to the alternate plasma current (AC) discharges as part of the ISTTOK scientific program. In order to accomplish these objectives all ISTTOK diagnostics and actuators relevant for real-time operation were integrated in the control system. The control system was programmed in C++ over the Multi-threaded Application Real-Time executor (MARTe) which provides, among other features, a real-time scheduler, an interrupt handler, an intercommunications interface between code blocks and a clearly bounded interface with the external devices. As a complement to the MARTe framework, the BaseLib2 library provides the foundations for data and code introspection and also a Hypertext Transfer Protocol (HTTP) server service. Taking advantage of the modular nature of MARTe, the algorithms of each diagnostic data processing, discharge timing, context switch, control and actuators output reference generation run on well-defined blocks of code named Generic Application Modules (GAMs). This approach allows reusability of the code, simplified simulation, replacement or editing without changing the remaining GAMs. The ISTTOK control system GAMs run sequentially each 100 μs cycle on an Intel® Q8200 4-core processor running at 2.33 GHz located in the ATCA crate. Two boards (inside the ATCA crate) with 32 analog

  15. ISTTOK real-time architecture

    International Nuclear Information System (INIS)

    Carvalho, Ivo S.; Duarte, Paulo; Fernandes, Horácio; Valcárcel, Daniel F.; Carvalho, Pedro J.; Silva, Carlos; Duarte, André S.; Neto, André; Sousa, Jorge; Batista, António J.N.; Hekkert, Tiago; Carvalho, Bernardo B.

    2014-01-01

    Highlights: • All real-time diagnostics and actuators were integrated in the same control platform. • A 100 μs control cycle was achieved under the MARTe framework. • Time-windows based control with several event-driven control strategies implemented. • AC discharges with exception handling on iron core flux saturation. • An HTML discharge configuration was developed for configuring the MARTe system. - Abstract: The ISTTOK tokamak was upgraded with a plasma control system based on the Advanced Telecommunications Computing Architecture (ATCA) standard. This control system was designed to improve the discharge stability and to extend the operational space to the alternate plasma current (AC) discharges as part of the ISTTOK scientific program. In order to accomplish these objectives all ISTTOK diagnostics and actuators relevant for real-time operation were integrated in the control system. The control system was programmed in C++ over the Multi-threaded Application Real-Time executor (MARTe) which provides, among other features, a real-time scheduler, an interrupt handler, an intercommunications interface between code blocks and a clearly bounded interface with the external devices. As a complement to the MARTe framework, the BaseLib2 library provides the foundations for data and code introspection and also a Hypertext Transfer Protocol (HTTP) server service. Taking advantage of the modular nature of MARTe, the algorithms of each diagnostic data processing, discharge timing, context switch, control and actuators output reference generation run on well-defined blocks of code named Generic Application Modules (GAMs). This approach allows reusability of the code, simplified simulation, replacement or editing without changing the remaining GAMs. The ISTTOK control system GAMs run sequentially each 100 μs cycle on an Intel® Q8200 4-core processor running at 2.33 GHz located in the ATCA crate. Two boards (inside the ATCA crate) with 32 analog
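
    Both records describe the same MARTe-based design, whose key idea is a chain of Generic Application Modules executed sequentially once per 100 μs cycle. A drastically simplified Python sketch of that pattern follows; the module names, the shared-data dictionary and the driver stubs are illustrative assumptions (the real framework is C++ and hard real time):

```python
import time

def acquire_adc():            # stub standing in for the real ADC driver
    return 0.8

def send_dac(value):          # stub standing in for the real DAC driver
    pass

class GAM:
    """Stand-in for a MARTe Generic Application Module: a self-contained
    block of code with a single entry point, chained with other GAMs."""
    def execute(self, data: dict) -> None:
        raise NotImplementedError

class ReadDiagnostics(GAM):
    def execute(self, data):
        data["plasma_current"] = acquire_adc()

class ControlLaw(GAM):
    def execute(self, data):
        data["coil_ref"] = -0.5 * data["plasma_current"]   # toy P controller

class WriteActuators(GAM):
    def execute(self, data):
        send_dac(data["coil_ref"])

def control_loop(gams, cycle_s=100e-6, cycles=1000):
    """Run the GAM chain sequentially once per fixed control cycle."""
    for _ in range(cycles):
        t0 = time.perf_counter()
        data = {}                     # shared blackboard between modules
        for gam in gams:
            gam.execute(data)         # each GAM is independently swappable
        while time.perf_counter() - t0 < cycle_s:
            pass                      # soft busy-wait out the cycle

control_loop([ReadDiagnostics(), ControlLaw(), WriteActuators()])
```

    The point of the pattern is the one visible here: each module can be replaced, simulated or edited without touching the rest of the chain.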

  16. AN ARCHITECTURE FOR AUTISM: CONCEPTS OF DESIGN INTERVENTION FOR THE AUTISTIC USER

    Directory of Open Access Journals (Sweden)

    Magda Mostafa

    2008-03-01

    Full Text Available One in every 150 children is estimated to fall within the autistic spectrum, regardless of socio-cultural and economic aspects, with a 4:1 prevalence of males over females (ADDM, 2007). Architecture, as a profession, is responsible for creating environments that accommodate the needs of all types of users. Special-needs individuals should not be exempt from such accommodation. Despite this high incidence of autism, architectural design guidelines catering specifically to the scope of autistic needs have yet to be developed. The primary goal of this research is to correct this exclusion by developing a preliminary framework of architectural design guidelines for autism. This will be done through a two-phase study. The first phase will determine, through a questionnaire of first-hand caregivers of autistic children, the impact of architectural design elements on autistic behaviour, to determine the most influential. The second phase, based on the findings of the first, will test the conclusive highest-ranking architectural elements in an intervention study on autistic children in their school environment. Specific behavioural indicators, namely attention span, response time and behavioural temperament, will be tracked to determine each child's progress pre- and post-intervention, for a control and a study group. This study concludes by outlining the findings of both phases of the study, the first being the determination of the most influential architectural design elements on autistic behaviour, according to the sample surveyed. The second group of findings outlines design strategies for autism in three points. The first is the presentation of a "sensory design matrix" which matches architectural elements with autistic sensory issues and is used to generate suggested design guidelines. The second is the presentation of these hypothetical guidelines, two of which are tested in the presented study. These guidelines are presented as possible

  17. User's manual for DYNA2D: an explicit two-dimensional hydrodynamic finite-element code with interactive rezoning

    Energy Technology Data Exchange (ETDEWEB)

    Hallquist, J.O.

    1982-02-01

    This revised report provides an updated user's manual for DYNA2D, an explicit two-dimensional axisymmetric and plane strain finite element code for analyzing the large deformation dynamic and hydrodynamic response of inelastic solids. A contact-impact algorithm permits gaps and sliding along material interfaces. By a specialization of this algorithm, such interfaces can be rigidly tied to admit variable zoning without the need for transition regions. Spatial discretization is achieved by the use of 4-node solid elements, and the equations of motion are integrated by the central difference method. An interactive rezoner eliminates the need to terminate the calculation when the mesh becomes too distorted. Rather, the mesh can be rezoned and the calculation continued. The command structure for the rezoner is described and illustrated by an example.
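
    The central difference integration named here is generic enough to sketch. Below is a minimal explicit time-stepping loop of that family, applied to a single spring-mass oscillator; all names and values are illustrative assumptions, and this is in no way DYNA2D source:

```python
import numpy as np

def central_difference(M, f_int, f_ext, u0, v0, dt, steps):
    """Generic explicit central-difference time integration sketch.
    M: lumped mass vector; f_int(u): internal force; f_ext(t): load."""
    u = u0.copy()
    a = (f_ext(0.0) - f_int(u)) / M
    v_half = v0 + 0.5 * dt * a            # stagger velocity by half a step
    history = [u.copy()]
    for k in range(1, steps + 1):
        u = u + dt * v_half               # displacement update
        a = (f_ext(k * dt) - f_int(u)) / M
        v_half = v_half + dt * a          # velocity at the next half step
        history.append(u.copy())
    return np.array(history)

# usage: one linear spring-mass DOF; dt is well below the stability
# limit 2/omega = 0.2 for this stiffness and mass
M, k_spring = np.array([1.0]), 100.0
sol = central_difference(M, lambda u: k_spring * u, lambda t: np.zeros(1),
                         np.array([0.01]), np.zeros(1), dt=0.01, steps=100)
```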

  18. Flexible digital modulation and coding synthesis for satellite communications

    Science.gov (United States)

    Vanderaar, Mark; Budinger, James; Hoerig, Craig; Tague, John

    1991-01-01

    An architecture and a hardware prototype of a flexible trellis modem/codec (FTMC) transmitter are presented. The theory of operation is built upon a pragmatic approach to trellis-coded modulation that emphasizes power and spectral efficiency. The system incorporates programmable modulation formats, variations of trellis-coding, digital baseband pulse-shaping, and digital channel precompensation. The modulation formats examined include (uncoded and coded) binary phase shift keying (BPSK), quaternary phase shift keying (QPSK), octal phase shift keying (8PSK), 16-ary quadrature amplitude modulation (16-QAM), and quadrature quadrature phase shift keying (Q squared PSK) at programmable rates up to 20 megabits per second (Mbps). The FTMC is part of the developing test bed to quantify modulation and coding concepts.

  19. Enhancer-derived lncRNAs regulate genome architecture: fact or fiction?

    CSIR Research Space (South Africa)

    Fanucchi, Stephanie

    2017-06-01

    Full Text Available How does the non-coding portion of the genome contribute to the regulation of genome architecture? A recent paper by Tan et al. focuses on the relationship between cis-acting complex-trait-associated lincRNAs and the formation of chromosomal...

  20. Enterprise Architecture in the Company Management Framework

    Directory of Open Access Journals (Sweden)

    Bojinov Bojidar Violinov

    2016-11-01

    Full Text Available The study aims to explore the role and importance of the concept of enterprise architecture in modern company management. For this purpose it clarifies the nature, scope and components of enterprise architecture and the relationships within it, using the Zachman model. Based on a critical analysis of works by leading scientists, a definition of enterprise architecture is presented as a general description of all elements of strategic management of the company combined with a description of its organizational, functional and operational structure, including the relationships between all tangible and intangible resources essential for its normal functioning and development. This in turn enables IT enterprise architecture to be defined as a set of corporate IT resources (hardware, software and technology), their interconnection and integration within the overall architecture of the company, as well as their formal description and the methods and tools for their modeling and management in order to achieve the strategic business goals of the organization. In conclusion, the article summarizes the significance and role of enterprise architecture in the strategic management of the company in today's digital economy. The study underlines the importance of an integrated multidisciplinary approach to the work of a contemporary company, and the need for adequate matching and alignment of IT with the business priorities and objectives of the company.

  1. Finite element code FENIA verification and application for 3D modelling of thermal state of radioactive waste deep geological repository

    Science.gov (United States)

    Butov, R. A.; Drobyshevsky, N. I.; Moiseenko, E. V.; Tokarev, U. N.

    2017-11-01

    The verification of the FENIA finite element code on some problems and an example of its application are presented in the paper. The code is being developed for 3D modelling of thermal, hydraulic and mechanical (THM) problems related to the functioning of deep geological repositories. Verification of the code for two analytical problems has been performed. The first one is a point heat source with exponentially decreasing heat output; the second one is a linear heat source with similar behavior. Analytical solutions have been obtained by the authors. The problems have been chosen because they reflect the processes influencing the thermal state of a deep geological repository of radioactive waste. Verification was performed for several meshes with different resolutions. Good convergence between analytical and numerical solutions was achieved. The application of the FENIA code is illustrated by 3D modelling of the thermal state of a prototypic deep geological repository of radioactive waste. The repository is designed for disposal of radioactive waste in a rock at a depth of several hundred meters with no intention of later retrieval. Vitrified radioactive waste is placed in containers, which are placed in vertical boreholes. The residual decay heat of the radioactive waste leads to heating of the containers, engineered safety barriers and host rock. Maximum temperatures and the corresponding times of their establishment have been determined.
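
    The first analytical benchmark, a point heat source whose output decays exponentially, can be evaluated by superposing instantaneous point-source Green's functions in time. A small numerical sketch is given below, assuming an infinite homogeneous conducting medium; the material values and names are illustrative, not those of the FENIA benchmark:

```python
import numpy as np

def point_source_temperature(r, t, Q0=1.0e3, lam=1e-8, rho=2600.0,
                             c=900.0, k=2.5, n=2000):
    """Temperature rise at radius r [m] and time t [s] for a point source
    whose power decays as Q0*exp(-lam*tau), obtained by integrating the
    instantaneous point-source Green's function over the emission time
    (infinite homogeneous medium; all parameter values are illustrative)."""
    kappa = k / (rho * c)                    # thermal diffusivity [m^2/s]
    tau = np.linspace(0.0, t, n, endpoint=False) + t / (2 * n)  # midpoints
    age = t - tau                            # time since each heat pulse
    g = (np.exp(-r**2 / (4 * kappa * age))
         / (rho * c * (4 * np.pi * kappa * age) ** 1.5))
    dtau = t / n
    return np.sum(Q0 * np.exp(-lam * tau) * g) * dtau   # midpoint rule

print(point_source_temperature(r=5.0, t=3.15e7))  # ~1 year, 5 m away
```

    A mesh-convergence check like the one described would compare the finite element temperature at sampled radii against this quadrature for successively refined meshes.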

  2. Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?

    Science.gov (United States)

    Moore, Greg; Chainyk, Mike; Schiermeier, John

    2004-01-01

    The viewgraph presentation examines optimal design for precision, large aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and collaborative area suggestions. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solutions and process, and optical modes generation.

  3. The relationship between 3D bone architectural parameters and elastic moduli of three orthogonal directions predicted from finite elements analysis

    International Nuclear Information System (INIS)

    Park, Kwan Soo; Lee, Sam Sun; Huh, Kyung Hoe; Yi, Wan Jin; Heo, Min Suk; Choi, Soon Chul

    2008-01-01

    To investigate the relationship between 3D bone architectural parameters and direction-related elastic moduli of the cancellous bone of the mandibular condyle. Two micro-pigs (Micro-pigR, PWG Genetics Korea) were used. Each pig was about 12 months old and weighed around 44 kg. 31 cylindrical bone specimens were obtained from the cancellous bone of the condyles for 3D analysis and measured by micro-computed tomography. The six parameters were trabecular thickness (Tb.Th), bone specific surface (BS/BV), percent bone volume (BV/TV), structure model index (SMI), degree of anisotropy (DA) and 3-dimensional fractal dimension (3DFD). Elastic moduli in three orthogonal directions (superior-inferior (SI), medial-lateral (ML) and anterior-posterior (AP)) were calculated through finite element analysis. The elastic modulus in the superior-inferior direction was higher than those in the other directions. The elastic moduli of the 3 orthogonal directions showed different correlations with the 3D architectural parameters. The elastic moduli in the SI and ML directions showed significant strong to moderate correlations with BV/TV, SMI and 3DFD. The elastic modulus of the cancellous bone of the pig mandibular condyle was highest in the SI direction, and it is supposed that the change to a plate-like trabecular structure was mainly driven by an increase of trabeculae in the SI and ML directions.

  4. Current and anticipated uses of thermal-hydraulic codes in Germany

    Energy Technology Data Exchange (ETDEWEB)

    Teschendorff, V.; Sommer, F.; Depisch, F.

    1997-07-01

    In Germany, one third of the electrical power is generated by nuclear plants. ATHLET and S-RELAP5 are successfully applied for safety analyses of the existing PWR and BWR reactors and possible future reactors, e.g. EPR. Continuous development and assessment of thermal-hydraulic codes are necessary in order to meet present and future needs of licensing organizations, utilities, and vendors. Desired improvements include thermal-hydraulic models, multi-dimensional simulation, computational speed, interfaces to coupled codes, and code architecture. Real-time capability will be essential for application in full-scope simulators. Comprehensive code validation and quantification of uncertainties are prerequisites for future best-estimate analyses.

  5. Current and anticipated uses of thermal-hydraulic codes in Germany

    International Nuclear Information System (INIS)

    Teschendorff, V.; Sommer, F.; Depisch, F.

    1997-01-01

    In Germany, one third of the electrical power is generated by nuclear plants. ATHLET and S-RELAP5 are successfully applied for safety analyses of the existing PWR and BWR reactors and possible future reactors, e.g. EPR. Continuous development and assessment of thermal-hydraulic codes are necessary in order to meet present and future needs of licensing organizations, utilities, and vendors. Desired improvements include thermal-hydraulic models, multi-dimensional simulation, computational speed, interfaces to coupled codes, and code architecture. Real-time capability will be essential for application in full-scope simulators. Comprehensive code validation and quantification of uncertainties are prerequisites for future best-estimate analyses

  6. Development of chemical equilibrium analysis code 'CHEEQ'

    International Nuclear Information System (INIS)

    Nagai, Shuichiro

    2006-08-01

    The 'CHEEQ' code, which calculates the partial pressures and masses of a system consisting of ideal gases and pure condensed-phase compounds, was developed. The characteristics of the 'CHEEQ' code are as follows. All chemical equilibrium equations are described by formation reactions from the mono-atomic gases, in order to simplify the code structure and input preparation. The chemical equilibrium conditions Σν_i μ_i = 0 for gaseous compounds and precipitated condensed-phase compounds, and Σν_i μ_i > 0 for non-precipitated condensed-phase compounds, are applied, where ν_i and μ_i are the stoichiometric coefficient and chemical potential of component i. A virtual solid model was introduced to perform calculations under a constant-partial-pressure condition. 'CHEEQ' consists of the following 3 parts: (1) the analysis code, zc132.f; (2) the thermodynamic data base, zmdb01; and (3) the input data file, zindb. The 'CHEEQ' code can calculate a system consisting of elements (max. 20), condensed-phase compounds (max. 100) and gaseous compounds (max. 200). The thermodynamic data base zmdb01 contains about 1000 elements and compounds, 200 of which are actinide elements and their compounds. This report describes the basic equations, the outline of the solution procedure, and instructions for preparing the input data and evaluating the calculation results. (author)
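
    The equilibrium condition Σν_i μ_i = 0 can be demonstrated on a toy formation reaction. The sketch below solves A(g) + B(g) ⇌ AB(g) for the equilibrium extent by bisection on the reaction affinity; the species, standard potentials and all numbers are invented for illustration and have nothing to do with the zmdb01 data base:

```python
import math

R, T = 8.314, 1000.0                      # J/(mol K), temperature in K
mu0 = {"A": 0.0, "B": 0.0, "AB": -1.5e5}  # illustrative standard potentials
nu = {"A": -1, "B": -1, "AB": 1}          # formation reaction A + B <-> AB

def affinity(x, nA0=1.0, nB0=1.0, p_tot=1.0):
    """sum(nu_i * mu_i) at reaction extent x, with ideal-gas chemical
    potentials mu_i = mu0_i + R*T*ln(p_i); zero exactly at equilibrium."""
    n = {"A": nA0 - x, "B": nB0 - x, "AB": x}
    ntot = sum(n.values())
    return sum(nu[s] * (mu0[s] + R * T * math.log(p_tot * n[s] / ntot))
               for s in n)

# bisection on the extent; for this toy the affinity increases with x
lo, hi = 1e-9, 1.0 - 1e-9
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if affinity(mid) < 0 else (lo, mid)
print(f"equilibrium extent x = {lo:.4f} mol of AB formed")
```

    A condensed phase would enter the same framework with a pressure-independent potential, precipitating only while its Σν_i μ_i can reach zero, which is the screening rule quoted in the abstract.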

  7. Scattering and/or diffusing elements in a variety of recently completed music auditoria

    Science.gov (United States)

    McKay, Ronald L.

    2002-11-01

    Architectural elements which provide effective acoustic scattering and/or diffusion in a variety of recently completed auditoria for music performance will be presented. Color slides depicting the various elements will be shown. Each will be discussed with respect to its acoustic performance and architectural logic. Measured time-energy reflection patterns will be presented in many cases.

  8. Decoding the function of nuclear long non-coding RNAs.

    Science.gov (United States)

    Chen, Ling-Ling; Carmichael, Gordon G

    2010-06-01

    Long non-coding RNAs (lncRNAs) are mRNA-like, non-protein-coding RNAs that are pervasively transcribed throughout eukaryotic genomes. Rather than silently accumulating in the nucleus, many of these are now known or suspected to play important roles in nuclear architecture or in the regulation of gene expression. In this review, we highlight some recent progress in how lncRNAs regulate these important nuclear processes at the molecular level. Copyright 2010 Elsevier Ltd. All rights reserved.

  9. Automatic Generation of Agents using Reusable Soft Computing Code Libraries to develop Multi Agent System for Healthcare

    OpenAIRE

    Priti Srinivas Sajja

    2015-01-01

    This paper illustrates an architecture for a multi-agent system in the healthcare domain. The architecture is generic and designed in the form of multiple layers. One of the layers of the architecture contains many proactive, co-operative and intelligent agents, such as a resource management agent, a query agent, a pattern detection agent and a patient management agent. Another layer of the architecture is a collection of libraries to auto-generate code for agents using soft computing techni...

  10. Ellipse and Oval in Baroque Sacral Architecture in Slovakia

    Science.gov (United States)

    Grúňová, Zuzana; Holešová, Michaela

    2017-06-01

    Oval, circular and elliptic forms have appeared in architecture from the very beginning. The basic problem of the geometric analysis of spaces with an elliptic or oval ground plan is the great sensitivity of the outcome calculations to the plan's precision, mainly in distinguishing between an oval and an ellipse. Sebastiano Serlio and Guarino Guarini belong to those architects and theoreticians who analysed the potential of circular or oval forms, and some of their ideas are analysed in the paper. Elliptical or oval plans were also used in Slovak baroque architecture and interior elements, and the paper introduces some of the best-known examples as a connection to ideas in world architecture.

  11. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed; Ghanem, Bernard; Wonka, Peter

    2018-01-01

    coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements

  12. Advanced Architectures for Astrophysical Supercomputing

    Science.gov (United States)

    Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.

    2010-12-01

    Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding-up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.

  13. On User Interface Architectures and Implementation

    OpenAIRE

    KUKOLA, TERO

    2008-01-01

    The definition of MVC model has become distorted. Many MVC adaptations use a mediating controller between model and view layers, which is not part of the original MVC/80 model. While the separation of model and view has benefits, the mediating controller leads to excessive redundancy in code and should be avoided. Removing the mediating controller simplifies UI architectures. This simplification can be continued further by adopting dynamic features and ultimately by adopting dynamic languages...

  14. Business process architectures: overview, comparison and framework

    Science.gov (United States)

    Dijkman, Remco; Vanderfeesten, Irene; Reijers, Hajo A.

    2016-02-01

    With the uptake of business process modelling in practice, the demand grows for guidelines that lead to consistent and integrated collections of process models. The notion of a business process architecture has been explicitly proposed to address this. This paper provides an overview of the prevailing approaches to design a business process architecture. Furthermore, it includes evaluations of the usability and use of the identified approaches. Finally, it presents a framework for business process architecture design that can be used to develop a concrete architecture. The use and usability were evaluated in two ways. First, a survey was conducted among 39 practitioners, in which the opinion of the practitioners on the use and usefulness of the approaches was evaluated. Second, four case studies were conducted, in which process architectures from practice were analysed to determine the approaches or elements of approaches that were used in their design. Both evaluations showed that practitioners have a preference for using approaches that are based on reference models and approaches that are based on the identification of business functions or business objects. At the same time, the evaluations showed that practitioners use these approaches in combination, rather than selecting a single approach.

  15. 76 FR 11432 - Coding of Design Marks in Registrations

    Science.gov (United States)

    2011-03-02

    ...] Coding of Design Marks in Registrations AGENCY: United States Patent and Trademark Office, Commerce... practice of coding newly registered trademarks that include a design element with design mark codes based... notice and request for comments at 75 FR 81587, proposing to discontinue a secondary system of coding...

  16. KENO-V code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The KENO-V code is the current release of the Oak Ridge multigroup Monte Carlo criticality code development. The original KENO, with 16-group Hansen-Roach cross sections and P_1 scattering, was one of the first multigroup Monte Carlo codes, and it and its successors have always been a much-used research tool for criticality studies. KENO-V is able to accept large neutron cross section libraries (a 218-group set is distributed with the code) and has a general P_N scattering capability. A supergroup feature allows execution of large problems on small computers, but at the expense of increased calculation time and system input/output operations. This supergroup feature is activated automatically by the code in a manner which utilizes as much computer memory as is available. The primary purpose of KENO-V is to calculate the system k_eff, from small bare critical assemblies to large reflected arrays of differing fissile and moderator elements. In this respect KENO-V neither has nor requires the many options and sophisticated biasing techniques of general Monte Carlo codes.
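
    The quantity such a code estimates can be illustrated with a deliberately physics-free toy that tracks a neutron population generation by generation and reports the population ratio as k_eff. Every number below is an invented illustration; there is no geometry, energy-group or cross-section treatment:

```python
import random

def toy_keff(p_fission=0.4, nu=2.5, n_start=5000, generations=30, skip=10):
    """Physics-free toy: estimate k_eff as the neutron population ratio
    between successive generations. p_fission: chance a tracked neutron
    causes fission; nu: mean neutron yield per fission (both invented)."""
    random.seed(1)
    n, history = n_start, []
    for _ in range(generations):
        produced = 0
        for _ in range(n):
            if random.random() < p_fission:               # fission event
                produced += int(nu) + (random.random() < nu % 1)
        history.append(produced / n)                      # k of this generation
        n = max(1000, min(produced, 20000))               # renormalize the bank
    return sum(history[skip:]) / len(history[skip:])      # skip settling phase

print(f"k_eff ≈ {toy_keff():.3f}")   # expectation here: p_fission * nu = 1.0
```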

  17. Computer Assessed Design – A Vehicle of Architectural Communication and a Design Tool

    OpenAIRE

    Petrovici, Liliana-Mihaela

    2012-01-01

    In comparison with the limits of traditional representation tools, the development of computer graphics constitutes an opportunity to assert architectural values. The differences between the communication codes of architects and the public are diminished; architectural ideas can be represented in a coherent, intelligible and attractive way, so that they get more chances to be materialized according to the thinking of their creator. Concurrently, the graphic software has been improving ...

  18. A high throughput architecture for a low complexity soft-output demapping algorithm

    Science.gov (United States)

    Ali, I.; Wasenmüller, U.; Wehn, N.

    2015-11-01

    Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance, and therefore they are part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits, with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high-order modulation systems, and therefore low-complexity demapping algorithms are indispensable in low-power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with the hardware implementation of the demapper is to achieve a very high throughput in doubly iterative systems, for instance MIMO and Code-Aided Synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low-complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high-throughput, flexible, and area-efficient architecture. We describe architectures to execute the investigated algorithms. We implement these architectures on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture, based on the best of the investigated low-complexity algorithms, delivering a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
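
    The core computation, turning one received complex symbol into per-bit LLRs, is small enough to sketch. Below is a max-log soft demapper for Gray-mapped QPSK; the constellation, noise model and sign convention are illustrative assumptions, and the paper's architecture targets higher-order QAM as well:

```python
import numpy as np

# Gray-mapped QPSK used for illustration (bit pair -> unit-energy point)
CONST = {(0, 0): 1+1j, (0, 1): 1-1j, (1, 1): -1-1j, (1, 0): -1+1j}
POINTS = np.array(list(CONST.values())) / np.sqrt(2)
BITS = np.array(list(CONST.keys()))

def max_log_llr(r, noise_var):
    """Max-log LLRs for one received symbol r:
    LLR_b ≈ (min_{s: b=1} |r-s|^2 - min_{s: b=0} |r-s|^2) / noise_var,
    so a positive LLR favours bit 0 (a convention, not a standard)."""
    d2 = np.abs(r - POINTS) ** 2          # squared distance to each point
    llrs = []
    for b in range(BITS.shape[1]):
        d0 = d2[BITS[:, b] == 0].min()    # best hypothesis with bit b = 0
        d1 = d2[BITS[:, b] == 1].min()    # best hypothesis with bit b = 1
        llrs.append((d1 - d0) / noise_var)
    return llrs

print(max_log_llr(0.9 + 0.8j, noise_var=0.1))  # strongly favours bits (0, 0)
```

    The max-log form replaces the exact log-sum-exp over constellation points with a minimum search, which is the usual complexity reduction that hardware demappers exploit.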

  19. High performance computer code for molecular dynamics simulations

    International Nuclear Information System (INIS)

    Levay, I.; Toekesi, K.

    2007-01-01

    Complete text of publication follows. Molecular Dynamics (MD) simulation is a widely used technique for modeling complicated physical phenomena. Since 2005 we have been developing an MD simulation code for PC computers. The computer code is written in the object-oriented C++ programming language. The aim of our work is twofold: a) to develop a fast computer code for the study of the random walk of guest atoms in a Be crystal, b) 3-dimensional (3D) visualization of the particles' motion. In this case we mimic the motion of the guest atoms in the crystal (diffusion-type motion) and the motion of atoms in the crystal lattice (crystal deformation). Nowadays it is common to use graphics devices for intensive computational problems. There are several ways to use this extreme processing performance, but never before has it been so easy to program these devices. The CUDA (Compute Unified Device Architecture) introduced by the nVidia Corporation in 2007 is very useful for every processor-hungry application. A unified-architecture GPU includes 96-128 or more stream processors, so the raw calculation performance is 576(!) GFLOPS. It is ten times faster than the fastest dual-core CPU [Fig. 1]. Our improved MD simulation software uses this new technology, which speeds up the code so that it runs 10 times faster in the critical calculation segment. Although the GPU is a very powerful tool, it has a strongly parallel structure. This means that we have to create an algorithm which works on several processors without deadlock. Our code currently uses 256 threads and shared and constant on-chip memory instead of global memory, which is 100 times slower. It is possible to implement the whole algorithm on the GPU, so we do not need to download and upload the data in every iteration. For maximal throughput, every thread runs with the same instructions
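
    The record does not show the code, but the structure of a minimal MD step, including the all-pairs force loop that makes GPUs attractive, looks roughly like the serial NumPy sketch below; the Lennard-Jones potential and every parameter are illustrative assumptions, not the Be-crystal model of the paper:

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """All-pairs Lennard-Jones forces, O(N^2): the GPU-friendly hot spot."""
    d = pos[:, None, :] - pos[None, :, :]        # pairwise displacements
    r2 = (d ** 2).sum(-1) + np.eye(len(pos))     # +eye avoids self-division
    inv6 = (sigma ** 2 / r2) ** 3
    fmag = 24 * eps * (2 * inv6 ** 2 - inv6) / r2
    np.fill_diagonal(fmag, 0.0)                  # no self-force
    return (fmag[:, :, None] * d).sum(axis=1)

def velocity_verlet(pos, vel, force_fn, mass, dt, steps):
    """Generic velocity-Verlet integrator sketch (serial reference version)."""
    f = force_fn(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass
        pos += dt * vel
        f = force_fn(pos)
        vel += 0.5 * dt * f / mass
    return pos, vel

# 27 atoms on a small cubic lattice, spaced 1.5 sigma apart
g = np.arange(3, dtype=float) * 1.5
pos = np.stack(np.meshgrid(g, g, g), axis=-1).reshape(-1, 3)
vel = np.zeros_like(pos)
pos, vel = velocity_verlet(pos, vel, lj_forces, mass=1.0, dt=1e-3, steps=100)
```

    A CUDA port would assign one thread per atom in the force routine and stage positions through shared memory, which is the optimization the abstract alludes to.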

  20. An innovative methodology for the non-destructive diagnosis of architectural elements of ancient historical buildings.

    Science.gov (United States)

    Fais, Silvana; Casula, Giuseppe; Cuccuru, Francesco; Ligas, Paola; Bianchi, Maria Giovanna

    2018-03-12

    In the following we present a new non-invasive methodology aimed at the diagnosis of the stone building materials used in historical buildings and architectural elements. This methodology consists of the integrated, sequential application of in situ proximal sensing methods, such as the 3D Terrestrial Laser Scanner for the 3D modelling of investigated objects, together with laboratory and in situ non-invasive multi-technique acoustic data, preceded by an accurate petrographical study of the investigated stone materials by optical and scanning electron microscopy. The increasing necessity to integrate different types of techniques in the safeguarding of cultural heritage is the result of the following two interdependent factors: 1) the diagnostic process on the building stone materials of monuments is increasingly focused on difficult targets in critical situations; in these cases, a diagnosis using only one type of non-invasive technique may not be sufficient to investigate the conservation status of the stone materials of the superficial and inner parts of the studied structures; 2) recent technological and scientific developments in the field of non-invasive diagnostic techniques for different types of materials favor and support the acquisition, processing and interpretation of huge multidisciplinary datasets.

  1. Development of finite element code for the analysis of coupled thermo-hydro-mechanical behaviors of a saturated-unsaturated medium

    International Nuclear Information System (INIS)

    Ohnishi, Y.; Shibata, H.; Kobayashi, A.

    1987-01-01

    A model is presented which describes the fully coupled thermo-hydro-mechanical behavior of a porous geologic medium. The mathematical formulation utilizes the Biot theory of consolidation and the energy balance equation. If the medium is in a saturated-unsaturated flow condition, the free surfaces are taken into consideration in the model. The model, incorporated in a finite element numerical procedure, was implemented in a two-dimensional computer code. The code was developed under the assumptions that the medium is poro-elastic and in the plane strain condition, that the groundwater does not change its phase, and that heat is transferred by conductive and convective flow. Analytical solutions from the consolidation theory for soils and rocks, from the thermoelasticity of solids and from hydrothermal convection theory provided verification of the stress and fluid flow couplings in the coupled model. Several types of problems are analyzed.
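
    The consolidation benchmark used for verification can be sketched compactly. The following Python fragment solves the classical 1D Terzaghi consolidation problem with explicit finite differences; it is a hedged stand-in for the kind of verification runs mentioned above, and all material parameters are illustrative.

        import numpy as np

        # 1D Terzaghi consolidation: dp/dt = cv * d2p/dz2 for the excess pore
        # pressure p, drained at the top, impermeable at the base.
        cv, H, nz, dt, nt = 1.0e-7, 1.0, 51, 50.0, 2000   # toy parameters
        z = np.linspace(0.0, H, nz)
        dz = z[1] - z[0]
        p = np.ones(nz)              # normalized excess pore pressure p/p0
        p[0] = 0.0                   # drained top boundary
        assert cv * dt / dz ** 2 < 0.5, "explicit scheme stability limit"
        for _ in range(nt):
            p[1:-1] += cv * dt / dz ** 2 * (p[2:] - 2.0 * p[1:-1] + p[:-2])
            p[-1] = p[-2]            # no-flow condition at the base
        print("average degree of consolidation:", 1.0 - p.mean())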

  2. Architectural design of the science complex at Elizabeth City State University

    Science.gov (United States)

    Jahromi, Soheila

    1993-01-01

    This paper gives an overall view of the architectural design process and of the elements involved in taking an idea from conception to execution; the project presented is an example of this process. Once the need for a new structure is established, an architect studies the requirements, opinions and limits in creating a structure that people will exist in, move through, and use. Elements in designing a building include factors such as volume and surface, light and form, changes of scale and view, movement and stasis. Other factors are the functions and physical conditions of construction. Based on experience, intuition, and boundaries, an architect utilizes all of these elements in creating a new building. In general, the design process begins with studying the spatial needs, which develop into an architectural program. A comprehensive and accurate architectural program is essential for a successful building: the most attractive building which does not meet the functional needs of its users has failed at the primary reason for its existence. To produce a good program an architect must have a full understanding of the daily functions that will take place in the building. The architectural program, along with the site characteristics, is among the important guidelines in studying the form, adjacencies, and circulation of the structure itself and its relation to adjacent structures. Conceptual studies are part of the schematic design, the first milestone in the design process; the other reference points are design development and construction documents. At each milestone, review and coordination with all the consultants is established, and the user is essential in refining the project. In the design development phase, conceptual diagrams take shape, and the architectural, structural, mechanical, and electrical systems are developed. The final phase, the construction documents, conveys all the information required to construct the building. The design process and its elements are illustrated here through the design of the science complex at Elizabeth City State University.

  3. Ethical codes. Fig leaf argument, ballast or cultural element for radiation protection?

    International Nuclear Information System (INIS)

    Gellermann, Rainer

    2014-01-01

    The International Radiation Protection Association (IRPA) adopted a Code of Ethics in May 2004 so that its members can maintain an adequate professional standard of ethical conduct. Based on this code of ethics, the radiation protection professional society (Fachverband für Strahlenschutz) developed its own ethical code and adopted it in 2005.

  4. Computational methods for predicting the response of critical as-built infrastructure to dynamic loads (architectural surety)

    Energy Technology Data Exchange (ETDEWEB)

    Preece, D.S.; Weatherby, J.R.; Attaway, S.W.; Swegle, J.W.; Matalucci, R.V.

    1998-06-01

    Coupled blast-structural computational simulations using supercomputer capabilities will significantly advance the understanding of how complex structures respond to dynamic loads caused by explosives and earthquakes, an understanding with application to the surety of both federal and nonfederal buildings. Simulating the effects of explosives on structures is a challenge because the explosive response is best simulated using Eulerian computational techniques, while structural behavior is best modeled using Lagrangian methods. Because of the different methodologies of the two computational techniques and their code architecture requirements, they are usually implemented in different computer programs, and modeling the explosive and the structure in two different codes makes coupled explosive/structure interaction simulations difficult or next to impossible. Sandia National Laboratories has developed two techniques for solving this problem. The first is Smoothed Particle Hydrodynamics (SPH), a relatively new gridless method, comparable to Eulerian approaches, that is especially suited to treating liquids and gases such as those produced by an explosive. The SPH capability has been fully implemented into the transient-dynamics finite element (Lagrangian) codes PRONTO-2D and -3D. A PRONTO-3D/SPH simulation of the effect of a blast on a protective-wall barrier is presented in this paper. The second technique employs a relatively new code called ALEGRA, an ALE (Arbitrary Lagrangian-Eulerian) wave code with specific emphasis on large deformation and shock propagation. ALEGRA can solve many shock-wave physics problems, but it is especially suited to modeling the interaction of decoupled explosives with structures.
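
    As a minimal sketch of the SPH idea, the density at each particle can be written as a kernel-weighted sum over its neighbours. The Python fragment below uses the standard cubic spline kernel; it illustrates the gridless estimate only and is not the PRONTO/SPH implementation.

        import numpy as np

        def w_cubic(r, h):
            """Standard cubic spline SPH kernel in 3D, support radius 2h."""
            q = r / h
            sigma = 1.0 / (np.pi * h ** 3)
            w = np.where(q < 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
                np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
            return sigma * w

        def density(pos, m, h):
            """rho_i = sum_j m_j W(|r_i - r_j|, h), summed over all particles."""
            diff = pos[:, None, :] - pos[None, :, :]
            r = np.linalg.norm(diff, axis=-1)
            return (m * w_cubic(r, h)).sum(axis=1)

        pos = np.random.default_rng(1).uniform(0.0, 1.0, (200, 3))
        rho = density(pos, m=1.0e-3, h=0.1)      # toy masses and smoothing length
        print(rho.mean())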

  5. System Level Evaluation of Innovative Coded MIMO-OFDM Systems for Broadcasting Digital TV

    Directory of Open Access Journals (Sweden)

    Y. Nasser

    2008-01-01

    Single-frequency networks (SFNs) for broadcasting digital TV are a topic of theoretical and practical interest for future broadcasting systems. Although progress has been made in characterizing such networks, there are still considerable gaps in their deployment with MIMO techniques. The contribution of this paper is threefold. First, we investigate the possibility of applying a space-time (ST) encoder between the antennas of two sites in an SFN. Then, we introduce a 3D space-time-space block code for future terrestrial digital TV in an SFN architecture. The proposed 3D code is based on a double-layer structure designed for intercell and intracell space-time-coded transmissions. Finally, we adapt a technique called effective exponential signal-to-noise ratio (SNR) mapping (EESM) to predict the bit error rate (BER) at the output of the channel decoder in MIMO systems. The EESM technique and the simulation results are used to double-check the efficiency of our 3D code. This efficiency is obtained for equal and unequal received powers, whatever the location of the receiver, by adequately combining ST codes. The 3D code is therefore a very promising candidate for SFN architectures with MIMO transmission.
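
    A minimal sketch of the EESM prediction step, assuming the standard exponential mapping: the per-subcarrier SNRs are compressed into one effective SNR from which the BER can be read off a reference curve. The beta calibration constant and the SNR values below are purely illustrative.

        import numpy as np

        def eesm(snr_linear, beta):
            """Effective SNR = -beta * ln( mean( exp(-SNR_k / beta) ) )."""
            return -beta * np.log(np.mean(np.exp(-snr_linear / beta)))

        snr_db = np.array([3.0, 10.0, 6.5, 1.0])       # per-subcarrier SNRs (toy)
        snr_eff = eesm(10.0 ** (snr_db / 10.0), beta=5.0)
        print(f"effective SNR: {10.0 * np.log10(snr_eff):.2f} dB")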

  6. Coded diffraction system in X-ray crystallography using a boolean phase coded aperture approximation

    Science.gov (United States)

    Pinilla, Samuel; Poveda, Juan; Arguello, Henry

    2018-03-01

    Phase retrieval is a problem present in many applications such as optics, astronomical imaging, computational biology and X-ray crystallography. Recent work has shown that the phase can be recovered better when the acquisition architecture includes a coded aperture, which modulates the signal before diffraction, so that the underlying signal is recovered from coded diffraction patterns. This kind of modulation, applied before the diffraction operation, can be obtained with a phase coded aperture placed just after the sample under study. However, a practical implementation of a phase coded aperture in an X-ray application is not feasible, because it is computationally modeled as a matrix with complex entries, which requires changing the phase of the diffracted beams. Changing the phase implies finding a material that can deviate the direction of an X-ray beam, which considerably increases implementation costs. Hence, this paper describes a low-cost coded X-ray diffraction system based on block-unblock coded apertures that enables phase reconstruction. The proposed system approximates the phase coded aperture with a block-unblock coded aperture using the detour-phase method. The SAXS/WAXS X-ray crystallography software was used to simulate the diffraction patterns of a real crystal structure called the Rhombic Dodecahedron, and several simulations were carried out to analyze how well the block-unblock approximations recover the phase from the simulated diffraction patterns. The quality of the reconstructions was measured in terms of the peak signal-to-noise ratio (PSNR). Results show that the performance of the block-unblock approximation decreases by at most 12.5% compared with the phase coded aperture, and the quality of the reconstructions using the boolean approximation is at most 2.5 dB of PSNR lower than that of the phase coded aperture reconstructions.
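
    The measurement model can be sketched in a few lines: the object is masked by a block-unblock (0/1) aperture, far-field diffraction is modelled by a Fourier transform, and only intensities are recorded; PSNR is the quality metric. The object and mask below are random stand-ins, not the simulated crystal data.

        import numpy as np

        rng = np.random.default_rng(2)
        x = rng.uniform(0.0, 1.0, (64, 64))              # hypothetical object
        mask = rng.integers(0, 2, x.shape)               # block-unblock aperture
        y = np.abs(np.fft.fft2(mask * x)) ** 2           # coded diffraction intensities

        def psnr(ref, est):
            """Peak signal-to-noise ratio in dB."""
            mse = np.mean((ref - est) ** 2)
            return 10.0 * np.log10(ref.max() ** 2 / mse)

        noisy = x + 0.01 * rng.standard_normal(x.shape)  # stand-in "reconstruction"
        print(y.shape, f"{psnr(x, noisy):.1f} dB")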

  7. Critical state and magnetization loss in multifilamentary superconducting wire solved through the commercial finite element code ANSYS

    Science.gov (United States)

    Farinon, S.; Fabbricatore, P.; Gömöry, F.

    2010-11-01

    The commercially available finite element code ANSYS has been adapted to solve for the critical state of single strips and multifilamentary tapes. We study a special algorithm that approaches the critical state through an iterative adjustment of the material resistivity, and prove its validity by comparing the results obtained for a thin strip with the Brandt theory for the transport-current and magnetization cases. The challenging calculation of the magnetization loss of a real multifilamentary BSCCO tape also shows the usefulness of our method. Finally, we developed several methods to speed up convergence, making the proposed procedure quite competitive among existing approaches to AC loss simulation.
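
    A schematic of the iterative resistivity adjustment, under the assumption that each iteration wraps one linear FE solve: wherever the computed current density exceeds the critical value Jc, the local resistivity is raised and the solve is repeated until |J| <= Jc everywhere. The solver below is a toy placeholder, not ANSYS.

        import numpy as np

        Jc = 1.0e8                           # critical current density (A/m^2)

        def solve_current(rho):
            """Stand-in for one linear FE solve (an ANSYS run in the paper),
            returning |J| per element for the given element resistivities."""
            return Jc * 1.2 / (1.0 + rho)    # toy response, illustration only

        rho = np.full(100, 1.0e-3)           # element resistivities (toy units)
        for it in range(200):
            J = solve_current(rho)
            over = J > Jc * (1.0 + 1e-6)     # elements still above critical state
            if not over.any():
                break
            rho[over] *= J[over] / Jc        # raise resistivity where |J| > Jc
        print(f"{it + 1} iterations, max |J|/Jc = {J.max() / Jc:.6f}")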

  8. Visual product architecture modelling for structuring data in a PLM system

    DEFF Research Database (Denmark)

    Bruun, Hans Peter Lomholt; Mortensen, Niels Henrik

    2012-01-01

    The goal of this paper is to determine the role of a product architecture model in supporting communication and in forming the basis for developing and maintaining information about product structures in a PLM system. The paper describes a modelling tool for representing a product architecture and discusses how the sometimes intangible elements and phenomena within an architecture model can be modelled visually in order to form the basis for a data model in a PLM system. © 2012 International Federation for Information Processing.

  9. Benchmarking and tuning the MILC code on clusters and supercomputers

    International Nuclear Information System (INIS)

    Gottlieb, Steven

    2002-01-01

    Recently, we have benchmarked and tuned the MILC code on a number of architectures including Intel Itanium and Pentium IV (PIV), dual-CPU Athlon, and the latest Compaq Alpha nodes. Results are presented for many of these, and we discuss some simple code changes that can result in a very dramatic speedup of the Kogut-Susskind (KS) conjugate gradient on processors with more advanced memory systems, such as the PIV, IBM SP and Alpha.

  12. Efficient Power Allocation for Video over Superposition Coding

    KAUST Repository

    Lau, Chun Pong

    2013-03-01

    In this paper we consider a wireless multimedia system that maps a scalable video coded (SVC) bit stream onto superposition coded (SPC) signals, referred to as the SVC-SPC architecture. Empirical experiments using a software-defined radio (SDR) emulator are conducted to gain a better understanding of its efficiency, specifically the impact of different power allocation ratios on the received signal. Our experimental results show that to maintain high video quality, the power allocated to the base layer should be approximately four times higher than the power allocated to the enhancement layer.
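
    A minimal sketch of that power split, assuming unit total transmit power and QPSK symbols for both layers: with a 4:1 ratio the base layer gets 0.8 of the power and the enhancement layer 0.2. All names and values are illustrative, not the SDR emulator configuration.

        import numpy as np

        rng = np.random.default_rng(3)

        def qpsk(n):
            """n unit-power QPSK symbols."""
            return (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)

        n, ratio = 1000, 4.0                    # base:enhancement power ratio
        p_base = ratio / (ratio + 1.0)          # fraction of total unit power
        p_enh = 1.0 - p_base
        x = np.sqrt(p_base) * qpsk(n) + np.sqrt(p_enh) * qpsk(n)   # superposed signal
        print(f"base {p_base:.2f}, enhancement {p_enh:.2f}, "
              f"mean tx power {np.mean(np.abs(x) ** 2):.2f}")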

  13. Combining Topological Hardware and Topological Software: Color-Code Quantum Computing with Topological Superconductor Networks

    Science.gov (United States)

    Litinski, Daniel; Kesselring, Markus S.; Eisert, Jens; von Oppen, Felix

    2017-07-01

    We present a scalable architecture for fault-tolerant topological quantum computation using networks of voltage-controlled Majorana Cooper pair boxes and topological color codes for error correction. Color codes have a set of transversal gates which coincides with the set of topologically protected gates in Majorana-based systems, namely, the Clifford gates. In this way, we establish color codes as providing a natural setting in which advantages offered by topological hardware can be combined with those arising from topological error-correcting software for full-fledged fault-tolerant quantum computing. We provide a complete description of our architecture, including the underlying physical ingredients. We start by showing that in topological superconductor networks, hexagonal cells can be employed to serve as physical qubits for universal quantum computation, and we present protocols for realizing topologically protected Clifford gates. These hexagonal-cell qubits allow for a direct implementation of open-boundary color codes with ancilla-free syndrome read-out and logical T gates via magic-state distillation. For concreteness, we describe how the necessary operations can be implemented using networks of Majorana Cooper pair boxes, and we give a feasibility estimate for error correction in this architecture. Our approach is motivated by nanowire-based networks of topological superconductors, but it could also be realized in alternative settings such as quantum-Hall-superconductor hybrids.

  15. Gray Code for Cayley Permutations

    Directory of Open Access Journals (Sweden)

    J.-L. Baril

    2003-10-01

    A length-n Cayley permutation p over a totally ordered set S is a length-n sequence of elements from S, subject to the condition that if an element x appears in p then all elements y < x also appear in p. In this paper, we give a Gray code list for the set of length-n Cayley permutations: two successive permutations in this list differ in at most two positions.
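
    For concreteness, the defining condition is easy to check by brute force. The sketch below enumerates length-n Cayley permutations over {1, ..., n} in no particular order; the paper's contribution is the Gray-code ordering, in which successive entries differ in at most two positions.

        from itertools import product

        def cayley(n):
            """Yield all length-n Cayley permutations over {1, ..., n}."""
            for p in product(range(1, n + 1), repeat=n):
                m = max(p)
                if set(p) == set(range(1, m + 1)):   # x in p => all y < x in p
                    yield p

        perms = list(cayley(3))
        print(len(perms))        # 13 Cayley permutations of length 3
        print(perms[:5])
        # A Gray code would order `perms` so that consecutive entries
        # differ in at most two positions.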

  16. Architectural Analysis of Systems Based on the Publisher-Subscriber Style

    Science.gov (United States)

    Ganesan, Dharmalingam; Lindvall, Mikael; Ruley, Lamont; Wiegand, Robert; Ly, Vuong; Tsui, Tina

    2010-01-01

    Architectural styles impose constraints on both the topology and the interaction behavior of the involved parties. In this paper, we propose an approach for analyzing implemented systems based on the publisher-subscriber architectural style. From the style definition we derive a set of reusable questions and show that some of them can be answered statically whereas others are best answered using dynamic analysis; the paper explains how the results of static analysis can be used to orchestrate the dynamic analysis. The proposed method was successfully applied to NASA's Goddard Mission Services Evolution Center (GMSEC) software product line. The results show that GMSEC has a) a novel, reusable, vendor-independent middleware abstraction layer that allows NASA missions to configure the middleware of interest without changing the publishers' or subscribers' source code, and b) some high-priority bugs due to behavioral discrepancies among different implementations of the same APIs for different vendors, which had eluded testing and code reviews.
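
    A toy sketch of the constraint the style imposes: publishers and subscribers know only topics on a broker, never each other. The class and subject names below are hypothetical, loosely echoing GMSEC-style subjects; this is not the GMSEC API.

        from collections import defaultdict
        from typing import Callable, Dict, List

        class Broker:
            """Toy broker: the only point of contact between parties."""
            def __init__(self) -> None:
                self._subs: Dict[str, List[Callable[[str, dict], None]]] = defaultdict(list)

            def subscribe(self, topic: str, handler: Callable[[str, dict], None]) -> None:
                self._subs[topic].append(handler)

            def publish(self, topic: str, message: dict) -> None:
                for handler in self._subs[topic]:    # publisher never sees subscribers
                    handler(topic, message)

        bus = Broker()
        bus.subscribe("MISSION.SAT1.MSG.LOG", lambda t, m: print(t, m))
        bus.publish("MISSION.SAT1.MSG.LOG", {"severity": "INFO", "text": "boot"})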

  17. Modelling Approach In Islamic Architectural Designs

    Directory of Open Access Journals (Sweden)

    Suhaimi Salleh

    2014-06-01

    Architectural design is one of the main factors to consider in minimizing negative impacts in the planning and structural development of buildings such as mosques. In this paper, the ergonomics perspective is revisited, focusing on conditional factors involving the organisation, psychology, society and the population as a whole. The paper highlights the functional and architectural integration of aesthetic elements in the form of decorative and ornamental outlays, incorporated in building structures such as walls, domes and gates, and then focuses on the mathematical aspects of the architectural designs, such as polar equations and the golden ratio. These designs are modelled as mathematical equations of various forms, while the golden ratio in mosques is verified using two techniques, namely geometric construction and a numerical method. The exemplary designs are taken from the Sabah Bandaraya Mosque in Likas, Kota Kinabalu and the Sarawak State Mosque in Kuching, while the Universiti Malaysia Sabah Mosque is used for the golden ratio. Results show that Islamic architectural buildings and designs have long had mathematical concepts and techniques underlying their foundations; hence, a modelling approach is needed to rejuvenate these Islamic designs.
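
    The numerical-method side of the golden-ratio check can be sketched as follows: phi is the positive root of x**2 = x + 1, Fibonacci ratios converge to it, and a measured dimension ratio is compared against it. The sample dimensions below are made-up placeholders, not survey data from the mosques.

        phi = (1.0 + 5.0 ** 0.5) / 2.0             # positive root of x**2 = x + 1

        a, b = 1, 1
        for _ in range(30):                         # Fibonacci ratios converge to phi
            a, b = b, a + b
        print(b / a, abs(b / a - phi) < 1e-9)

        width, height = 34.0, 21.0                  # hypothetical facade dimensions
        print(f"measured ratio {width / height:.4f} vs phi {phi:.4f}")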

  18. Enabling Tussle-Agile Inter-networking Architectures by Underlay Virtualisation

    Science.gov (United States)

    Dianati, Mehrdad; Tafazolli, Rahim; Moessner, Klaus

    In this paper, we propose an underlay inter-network virtualisation framework to enable tussle-agile, flexible networking over existing inter-network infrastructures. We discuss the functionalities that inter-networking elements (transit nodes, access networks, etc.) need to support in order to enable virtualisation, and we propose base architectures for each of the abstract elements to support the required inter-network virtualisation functionalities.

  19. Deploying electromagnetic particle-in-cell (EM-PIC) codes on Xeon Phi accelerator boards

    Science.gov (United States)

    Fonseca, Ricardo

    2014-10-01

    The complexity of the phenomena involved in several relevant plasma physics scenarios, where highly nonlinear and kinetic processes dominate, makes purely theoretical descriptions impossible. Further understanding of these scenarios requires detailed numerical modeling, but fully relativistic particle-in-cell codes such as OSIRIS are computationally intensive. The quest for exaflop computer systems has led to the development of HPC systems based on add-on accelerator cards, such as GPGPUs and, more recently, the Xeon Phi accelerators that power the current number-one system in the world. These cards, also referred to as the Intel Many Integrated Core (MIC) architecture, offer peak theoretical performances of >1 TFlop/s for general-purpose calculations on a single board, and are receiving significant attention as an attractive alternative to CPUs for plasma modeling. In this work we report on our efforts towards the deployment of an EM-PIC code on a Xeon Phi system. We focus on the parallelization and vectorization strategies followed, and present a detailed evaluation of code performance in comparison with the CPU code.
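
    A rough sketch of the data-layout idea behind such ports: a structure-of-arrays particle store lets the whole push run as long vector operations that map naturally onto wide SIMD units. The leapfrog update and toy field below are illustrative only; OSIRIS itself uses a full relativistic particle push.

        import numpy as np

        n = 200_000                                   # macro-particles
        q_over_m, dt = -1.0, 0.01                     # normalized charge/mass, step
        rng = np.random.default_rng(4)
        x = rng.uniform(0.0, 1.0, n)                  # SoA: one array per quantity
        v = np.zeros(n)

        def efield(x):
            """Toy electric field on a periodic domain (illustration only)."""
            return np.sin(2.0 * np.pi * x)

        for _ in range(10):
            v += q_over_m * efield(x) * dt            # vectorized momentum update
            x = (x + v * dt) % 1.0                    # vectorized periodic push
        print(x[:3], v[:3])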

  20. Motion estimation for video coding efficient algorithms and architectures

    CERN Document Server

    Chakrabarti, Indrajit; Chatterjee, Sumit Kumar

    2015-01-01

    The need for video compression in the modern age of visual communication cannot be over-emphasized. This monograph provides useful information for postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. The book offers an in-depth discussion of several motion estimation algorithms and their VLSI implementations as conceived and developed by the authors, recording an account of research on the fast three-step search, successive elimination, one-bit transformation and its effective combination with diamond search, and dynamic pixel truncation techniques. Two appendices provide a number of proof-of-concept instances through Matlab and Verilog program segments; in this respect, the book can be considered the first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances, including video camcorders and smartphones.
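
    As a taste of the algorithms covered, here is a hedged sketch of the classic three-step search (TSS): the step size halves each round and the search re-centres on the best sum-of-absolute-differences (SAD) match. The synthetic frames are illustrative, not examples from the book.

        import numpy as np

        def sad(a, b):
            """Sum of absolute differences between two blocks."""
            return np.abs(a - b).sum()

        def tss(ref, cur, bx, by, bs=16):
            """Three-step search: motion vector (dx, dy) for the bs-sized
            block at (bx, by) in the current frame."""
            block = cur[by:by + bs, bx:bx + bs]
            mx = my = 0
            for step in (4, 2, 1):                   # step size halves each round
                best_cost, best_off = None, (0, 0)
                for dy in (-step, 0, step):
                    for dx in (-step, 0, step):
                        y0, x0 = by + my + dy, bx + mx + dx
                        if 0 <= y0 <= ref.shape[0] - bs and 0 <= x0 <= ref.shape[1] - bs:
                            cost = sad(ref[y0:y0 + bs, x0:x0 + bs], block)
                            if best_cost is None or cost < best_cost:
                                best_cost, best_off = cost, (dx, dy)
                mx += best_off[0]                    # re-centre on the best match
                my += best_off[1]
            return mx, my

        yy, xx = np.mgrid[0:64, 0:64]
        ref = np.sin(xx / 5.0) + np.cos(yy / 7.0)    # smooth synthetic frame
        cur = np.roll(ref, (2, 3), axis=(0, 1))      # shift rows by 2, cols by 3
        print(tss(ref, cur, 24, 24))                 # expect (-3, -2) here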