WorldWideScience

Sample records for element code architecture

  1. Delta: An object-oriented finite element code architecture for massively parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Weatherby, J.R.; Schutt, J.A.; Peery, J.S.; Hogan, R.E.

    1996-02-01

    Delta is an object-oriented code architecture based on the finite element method which enables simulation of a wide range of engineering mechanics problems in a parallel processing environment. Written in C++, Delta is a natural framework for algorithm development and for research involving coupling of mechanics from different Engineering Science disciplines. To enhance flexibility and encourage code reuse, the architecture provides a clean separation of the major aspects of finite element programming. Spatial discretization, temporal discretization, and the solution of linear and nonlinear systems of equations are each implemented separately, independent from the governing field equations. Other attractive features of the Delta architecture include support for constitutive models with internal variables, reusable "matrix-free" equation solvers, and support for region-to-region variations in the governing equations and the active degrees of freedom. A demonstration code built from the Delta architecture has been used in two-dimensional and three-dimensional simulations involving dynamic and quasi-static solid mechanics, transient and steady heat transport, and flow in porous media.
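
    The sketch below is a hypothetical illustration (not Delta's C++ API) of the separation of concerns this abstract describes: the spatial discretization, the time integrator, and a matrix-free solver are independent objects that can be combined freely. The 1-D periodic diffusion operator and all class names are assumptions made for the example.

```python
# Illustrative only: mimics the "clean separation" idea, not Delta's actual classes.
import numpy as np

class PeriodicDiffusion1D:
    """Spatial discretization: only knows how to apply the discrete operator."""
    def apply(self, u):
        return np.roll(u, 1) - 2.0 * u + np.roll(u, -1)

class MatrixFreeCG:
    """Conjugate-gradient solver that needs only the action of the operator, never its matrix."""
    def solve(self, apply_A, b, tol=1e-10, iters=500):
        x = np.zeros_like(b)
        r = b - apply_A(x)
        p = r.copy()
        for _ in range(iters):
            Ap = apply_A(p)
            alpha = (r @ r) / (p @ Ap)
            x += alpha * p
            r_new = r - alpha * Ap
            if np.linalg.norm(r_new) < tol:
                break
            p = r_new + (r_new @ r_new) / (r @ r) * p
            r = r_new
        return x

class BackwardEuler:
    """Time integrator: independent of both the physics and the linear solver."""
    def __init__(self, disc, solver, dt):
        self.disc, self.solver, self.dt = disc, solver, dt
    def step(self, u):
        # solve (I - dt*A) u_new = u_old without ever forming the matrix
        return self.solver.solve(lambda v: v - self.dt * self.disc.apply(v), u)

u0 = np.sin(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
u1 = BackwardEuler(PeriodicDiffusion1D(), MatrixFreeCG(), dt=0.1).step(u0)
```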

  2. Elements of Architecture

    DEFF Research Database (Denmark)

    Elements of Architecture explores new ways of engaging architecture in archaeology. It conceives of architecture both as the physical evidence of past societies and as existing beyond the physical environment, considering how people in the past have not just dwelled in buildings but have existed...

  3. Elements of Architecture

    DEFF Research Database (Denmark)

    Elements of Architecture explores new ways of engaging architecture in archaeology. It conceives of architecture both as the physical evidence of past societies and as existing beyond the physical environment, considering how people in the past have not just dwelled in buildings but have existed ... and affective impacts, of these material remains. The contributions in this volume investigate the way time, performance and movement, both physically and emotionally, are central aspects of understanding architectural assemblages. It is a book about the constellations of people, places and things that emerge...

  4. Elements of algebraic coding systems

    CERN Document Server

    Cardoso da Rocha, Jr, Valdemar

    2014-01-01

    Elements of Algebraic Coding Systems is an introductory text to algebraic coding theory. In the first chapter, you'll gain inside knowledge of coding fundamentals, which is essential for a deeper understanding of state-of-the-art coding systems. This book is a quick reference for those who are unfamiliar with this topic, as well as for use with specific applications such as cryptography and communication. Linear error-correcting block codes through elementary principles span eleven chapters of the text. Cyclic codes, some finite field algebra, Goppa codes, algebraic decoding algorithms, and applications in public-key cryptography and secret-key cryptography are discussed, including problems and solutions at the end of each chapter. Three appendices cover the Gilbert bound and some related derivations, a derivation of the MacWilliams identities based on the probability of undetected error, and two important tools for algebraic decoding, namely the finite field Fourier transform and the Euclidean algorithm f...
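
    As a concrete instance of the linear block code ideas the book covers, here is a minimal, self-contained sketch (not taken from the book) of generator-matrix encoding and single-error correction via syndrome decoding, using the classic Hamming (7,4) code over GF(2).

```python
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],   # generator matrix in systematic form [I | P]
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],   # parity-check matrix [P^T | I]; G @ H.T = 0 (mod 2)
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

msg = np.array([1, 0, 1, 1])
codeword = msg @ G % 2

received = codeword.copy()
received[2] ^= 1                        # flip one bit to simulate a channel error

syndrome = received @ H.T % 2
if syndrome.any():                      # the syndrome equals the column of H at the error position
    err_pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
    received[err_pos] ^= 1              # correct the single-bit error

assert np.array_equal(received, codeword)
```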

  5. Neural Elements for Predictive Coding

    Directory of Open Access Journals (Sweden)

    Stewart SHIPP

    2016-11-01

    Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backwards in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many ‘illusory’ instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forwards and backwards pathways should be completely separate, given their functional distinction; this aspect of circuitry – that neurons with extrinsically bifurcating axons do not project in both directions – has only recently been confirmed. Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic ‘canonical microcircuit’ and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made
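
    For readers who want the hierarchical exchange of prediction and prediction error in symbols, one common simplified textbook form (not taken from this article; the symbols are assumptions) is:

```latex
% mu_i: representation at level i; g: top-down (generative) prediction mapping;
% Pi_i: precision (inverse variance) of the error at level i.
\begin{aligned}
\varepsilon_i &= \Pi_i\left(\mu_i - g(\mu_{i+1})\right)
  && \text{precision-weighted prediction error at level } i\\[2pt]
\dot{\mu}_{i+1} &= \varepsilon_i \, g'(\mu_{i+1}) - \varepsilon_{i+1}
  && \text{update driven by error from below minus error at the level's own representation}
\end{aligned}
```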

  6. Neural Elements for Predictive Coding.

    Science.gov (United States)

    Shipp, Stewart

    2016-01-01

    Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backward in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many 'illusory' instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forward and backward pathways should be completely separate, given their functional distinction; this aspect of circuitry - that neurons with extrinsically bifurcating axons do not project in both directions - has only recently been confirmed. Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic 'canonical microcircuit' and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made possible by transgenic neural

  7. Research and Design in Unified Coding Architecture for Smart Grids

    Directory of Open Access Journals (Sweden)

    Gang Han

    2013-09-01

    A standardized and shared information platform is the foundation of the Smart Grids. In order to improve the information integration of the power grid dispatching centers and achieve efficient data exchange, sharing and interoperability, a unified coding architecture is proposed. The architecture includes a coding management layer, a coding generation layer, an information models layer and an application system layer. This hierarchical design allows the whole coding architecture to adapt to different application environments, different interfaces and loose-coupling requirements, and realizes the integrated model management function of the power grids. A life cycle and survival evaluation method for the unified coding architecture is also proposed, which ensures the stability and availability of the coding architecture. Finally, future directions for Smart Grid coding technology are outlined.

  8. Reversible machine code and its abstract processor architecture

    DEFF Research Database (Denmark)

    Axelsen, Holger Bock; Glück, Robert; Yokoyama, Tetsuo

    2007-01-01

    A reversible abstract machine architecture and its reversible machine code are presented and formalized. For machine code to be reversible, both the underlying control logic and each instruction must be reversible. A general class of machine instruction sets was proven to be reversible, building on our concept of reversible updates. The presentation is abstract and can serve as a guideline for a family of reversible processor designs. By example, we illustrate programming principles for the abstract machine architecture formalized in this paper.
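
    A toy illustration of the reversible-update idea (an assumption-laden sketch, not the paper's formalization): if every instruction changes its destination register only through an invertible operation, the whole program can be undone by running it backwards with each operation inverted.

```python
# Toy reversible register machine: ADD/SUB/XOR with dst != src are all invertible updates.
def run(program, regs, forward=True):
    ops = program if forward else list(reversed(program))
    inverse = {"ADD": "SUB", "SUB": "ADD", "XOR": "XOR"}
    for op, dst, src in ops:
        if not forward:
            op = inverse[op]                      # invert each update when running backwards
        if op == "ADD":
            regs[dst] = (regs[dst] + regs[src]) % 2**32
        elif op == "SUB":
            regs[dst] = (regs[dst] - regs[src]) % 2**32
        elif op == "XOR":
            regs[dst] ^= regs[src]
    return regs

prog = [("ADD", "r0", "r1"), ("XOR", "r2", "r0"), ("SUB", "r1", "r2")]
state = {"r0": 7, "r1": 3, "r2": 12}
after = run(prog, dict(state))
assert run(prog, dict(after), forward=False) == state   # running backwards restores the state
```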

  9. Novel power saving architecture for FBG based OCDMA code generation

    Science.gov (United States)

    Osadola, Tolulope B.; Idris, Siti K.; Glesk, Ivan

    2013-10-01

    A novel architecture for generating incoherent, 2-dimensional wavelength hopping-time spreading optical CDMA codes is presented. The architecture is designed to facilitate the reuse of the optical source signal that remains unused after an OCDMA code has been generated using fiber Bragg grating based encoders. Effective utilization of the available optical power is therefore achieved by cascading several OCDMA encoders, thereby enabling 3 dB savings in optical power.

  10. Requirements for a multifunctional code architecture

    Energy Technology Data Exchange (ETDEWEB)

    Tiihonen, O. [VTT Energy (Finland); Juslin, K. [VTT Automation (Finland)

    1997-07-01

    The present paper studies a set of requirements for a multifunctional simulation software architecture in the light of experiences gained in developing and using the APROS simulation environment. The huge steps taken in the development of computer hardware and software during the last ten years are changing the status of traditional nuclear safety analysis software. The affordable computing power on the safety analyst's table by far exceeds the possibilities offered to him/her ten years ago. At the same time the features of everyday office software tend to set standards for the way the input data and calculational results are managed.

  11. Requirements for a multifunctional code architecture

    International Nuclear Information System (INIS)

    Tiihonen, O.; Juslin, K.

    1997-01-01

    The present paper studies a set of requirements for a multifunctional simulation software architecture in the light of experiences gained in developing and using the APROS simulation environment. The huge steps taken in the development of computer hardware and software during the last ten years are changing the status of traditional nuclear safety analysis software. The affordable computing power on the safety analyst's table by far exceeds the possibilities offered to him/her ten years ago. At the same time the features of everyday office software tend to set standards for the way the input data and calculational results are managed.

  12. Reflections on agranular architecture: predictive coding in the motor cortex

    OpenAIRE

    Shipp, Stewart; Adams, Rick A.; Friston, Karl J.

    2013-01-01

    The agranular architecture of motor cortex lacks a functional interpretation. Here, we consider a 'predictive coding' account of this unique feature based on asymmetries in hierarchical cortical connections. In sensory cortex, layer 4 (the granular layer) is the target of ascending pathways. We theorise that the operation of predictive coding in the motor system (a process termed 'active inference') provides a principled rationale for the apparent recession of the ascending pathway in motor c...

  13. NASA Lewis Steady-State Heat Pipe Code Architecture

    Science.gov (United States)

    Mi, Ye; Tower, Leonard K.

    2013-01-01

    NASA Glenn Research Center (GRC) has developed the LERCHP code. The PC-based LERCHP code can be used to predict the steady-state performance of heat pipes, including the determination of operating temperature and operating limits which might be encountered under specified conditions. The code contains a vapor flow algorithm which incorporates vapor compressibility and axially varying heat input. For the liquid flow in the wick, Darcy's formula is employed. Thermal boundary conditions and geometric structures can be defined through an interactive input interface. A variety of fluid and material options as well as user-defined options can be chosen for the working fluid, wick, and pipe materials. This report documents the current effort at GRC to update the LERCHP code for operating in a Microsoft Windows (Microsoft Corporation) environment. A detailed analysis of the model is presented. The programming architecture for the numerical calculations is explained and flowcharts of the key subroutines are given.
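
    For reference, Darcy's formula for the liquid pressure drop in the wick takes the standard heat-pipe form below (written from the general literature, not copied from the report; the symbol choices are assumptions):

```latex
% \dot{m}: liquid mass flow rate, \mu_l: liquid viscosity, L_{eff}: effective pipe length,
% \rho_l: liquid density, K: wick permeability, A_w: wick cross-sectional area.
\Delta p_l \;=\; \frac{\mu_l \, L_{\mathrm{eff}} \, \dot{m}}{\rho_l \, K \, A_w}
```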

  14. Error Resilience in Current Distributed Video Coding Architectures

    Directory of Open Access Journals (Sweden)

    Tonoli Claudia

    2009-01-01

    In distributed video coding, the signal prediction is shifted to the decoder side, therefore placing most of the computational complexity burden at the receiver. Moreover, since no prediction loop exists before transmission, an intrinsic robustness to transmission errors has been claimed. This work evaluates and compares the error resilience performance of two distributed video coding architectures. In particular, we have considered a video codec based on the Stanford architecture (DISCOVER codec) and a video codec based on the PRISM architecture. Specifically, an accurate temporal and rate/distortion based evaluation of the effects of transmission errors for both of the considered DVC architectures has been performed and discussed. These approaches have also been compared with H.264/AVC, in both cases of no error protection and simple FEC error protection. Our evaluations have highlighted in all cases a strong dependence of the behavior of the various codecs on the content of the considered video sequence. In particular, PRISM seems to be particularly well suited for low-motion sequences, whereas DISCOVER provides better performance in the other cases.

  15. Elements of neurogeometry functional architectures of vision

    CERN Document Server

    Petitot, Jean

    2017-01-01

    This book describes several mathematical models of the primary visual cortex, referring them to a vast ensemble of experimental data and putting forward an original geometrical model for its functional architecture, that is, the highly specific organization of its neural connections. The book spells out the geometrical algorithms implemented by this functional architecture, or put another way, the “neurogeometry” immanent in visual perception. Focusing on the neural origins of our spatial representations, it demonstrates three things: firstly, the way the visual neurons filter the optical signal is closely related to a wavelet analysis; secondly, the contact structure of the 1-jets of the curves in the plane (the retinal plane here) is implemented by the cortical functional architecture; and lastly, the visual algorithms for integrating contours from what may be rather incomplete sensory data can be modelled by the sub-Riemannian geometry associated with this contact structure. As such, it provides rea...

  16. Reflections on agranular architecture: predictive coding in the motor cortex.

    Science.gov (United States)

    Shipp, Stewart; Adams, Rick A; Friston, Karl J

    2013-12-01

    The agranular architecture of motor cortex lacks a functional interpretation. Here, we consider a 'predictive coding' account of this unique feature based on asymmetries in hierarchical cortical connections. In sensory cortex, layer 4 (the granular layer) is the target of ascending pathways. We theorise that the operation of predictive coding in the motor system (a process termed 'active inference') provides a principled rationale for the apparent recession of the ascending pathway in motor cortex. The extension of this theory to interlaminar circuitry also accounts for a sub-class of 'mirror neuron' in motor cortex--whose activity is suppressed when observing an action--explaining how predictive coding can gate hierarchical processing to switch between perception and action. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. A unified architecture of transcriptional regulatory elements

    DEFF Research Database (Denmark)

    Andersson, Robin; Sandelin, Albin Gustav; Danko, Charles G.

    2015-01-01

    Gene expression is precisely controlled in time and space through the integration of signals that act at gene promoters and gene-distal enhancers. Classically, promoters and enhancers are considered separate classes of regulatory elements, often distinguished by histone modifications. However, re...

  18. Building code challenging the ethics behind adobe architecture in North Cyprus.

    Science.gov (United States)

    Hurol, Yonca; Yüceer, Hülya; Şahali, Öznem

    2015-04-01

    Adobe masonry is part of the vernacular architecture of Cyprus. Thus, it is possible to use this technology in a meaningful way on the island. On the other hand, although adobe architecture is more sustainable in comparison to other building technologies, its use is diminishing in North Cyprus. The application of the Turkish building code in the north of the island has created complications in respect of the use of adobe masonry, because this building code demands that reinforced concrete vertical tie-beams are used together with adobe masonry. The use of reinforced concrete elements together with adobe masonry causes problems in relation to the climatic response of the building, as well as other technical and aesthetic problems. This situation makes the design of adobe masonry complicated, and various types of ethical problems also emerge. The objective of this article is to analyse the ethical problems which arise as a consequence of the restrictive character of the building code, by analysing two case studies and conducting an interview with an architect who was involved with the use of adobe masonry in North Cyprus. According to the results of this article, there are ethical problems at various levels in the design of both case studies. These problems are connected to the responsibilities of architects in respect of the social benefit, material production, aesthetics and affordability of the architecture, as well as presenting distrustful behaviour where the obligations of architects to their clients are concerned.

  19. Implementation of collisions on GPU architecture in the Vorpal code

    Science.gov (United States)

    Leddy, Jarrod; Averkin, Sergey; Cowan, Ben; Sides, Scott; Werner, Greg; Cary, John

    2017-10-01

    The Vorpal code contains a variety of collision operators allowing for the simulation of plasmas containing multiple charge species interacting with neutrals, background gas, and EM fields. These existing algorithms have been improved and reimplemented to take advantage of the massive parallelization allowed by GPU architecture. The use of GPUs is most effective when algorithms are single-instruction multiple-data, so particle collisions are an ideal candidate for this parallelization technique due to their nature as a series of independent processes with the same underlying operation. This refactoring required data memory reorganization and careful consideration of device/host data allocation to minimize memory access and data communication per operation. Successful implementation has resulted in an order of magnitude increase in simulation speed for a test-case involving multiple binary collisions using the null collision method. Work supported by DARPA under contract W31P4Q-16-C-0009.
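
    To make the test case concrete, here is a minimal sketch of the generic null-collision technique named above (an illustration with assumed toy parameters, not Vorpal's implementation). Because every particle performs the same accept/reject steps against a single maximum collision frequency, the loop is single-instruction multiple-data and maps well onto GPUs.

```python
import numpy as np

rng = np.random.default_rng(0)

def nu(speed):                      # velocity-dependent collision frequency (toy model)
    return 1.0e6 * speed / (1.0 + speed)

speeds = rng.uniform(0.1, 10.0, size=100_000)
nu_max = 1.0e6                      # upper bound on nu(speed) for all particles
dt = 1.0e-7

candidates = rng.random(speeds.size) < (1.0 - np.exp(-nu_max * dt))   # candidate collisions
accepted = rng.random(speeds.size) < (nu(speeds) / nu_max)            # reject "null" collisions
real_collisions = candidates & accepted
print(real_collisions.mean(), "fraction of particles that really collided this step")
```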

  20. The coding and noncoding architecture of the Caulobacter crescentus genome.

    Directory of Open Access Journals (Sweden)

    Jared M Schrader

    2014-07-01

    Caulobacter crescentus undergoes an asymmetric cell division controlled by a genetic circuit that cycles in space and time. We provide a universal strategy for defining the coding potential of bacterial genomes by applying ribosome profiling, RNA-seq, global 5'-RACE, and liquid chromatography coupled with tandem mass spectrometry (LC-MS) data to the 4-megabase C. crescentus genome. We mapped transcript units at single base-pair resolution using RNA-seq together with global 5'-RACE. Additionally, using ribosome profiling and LC-MS, we mapped translation start sites and coding regions with near complete coverage. We found most start codons lacked corresponding Shine-Dalgarno sites, although ribosomes were observed to pause at internal Shine-Dalgarno sites within the coding DNA sequence (CDS). These data suggest a more prevalent use of the Shine-Dalgarno sequence for ribosome pausing rather than translation initiation in C. crescentus. Overall, 19% of the transcribed and translated genomic elements were newly identified or significantly improved by this approach, providing a valuable genomic resource to elucidate the complete C. crescentus genetic circuitry that controls asymmetric cell division.
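
    As a toy illustration of the kind of question asked above (does a Shine-Dalgarno-like motif sit upstream of a start codon?), the sketch below scans a fixed window upstream of an annotated start. The motif core, window size and match threshold are assumptions, not the paper's pipeline.

```python
SD_CORE = "AGGAGG"

def has_sd(genome, start_pos, window=20, min_match=4):
    """True if >= min_match consecutive bases of the SD core occur in the `window`
    nucleotides upstream of the start codon at `start_pos` (0-based)."""
    upstream = genome[max(0, start_pos - window):start_pos]
    return any(SD_CORE[i:i + min_match] in upstream
               for i in range(len(SD_CORE) - min_match + 1))

genome = "GCTAAGGAGGTTAACCATGGCTAAA"   # tiny made-up sequence with an SD site before ATG
start = genome.find("ATG")
print(has_sd(genome, start))           # True: AGGAGG lies upstream of the start codon
```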

  1. Periodic Boundary Conditions in the ALEGRA Finite Element Code

    International Nuclear Information System (INIS)

    Aidun, John B.; Robinson, Allen C.; Weatherby, Joe R.

    1999-01-01

    This document describes the implementation of periodic boundary conditions in the ALEGRA finite element code. ALEGRA is an arbitrary Lagrangian-Eulerian multi-physics code with both explicit and implicit numerical algorithms. The periodic boundary implementation requires a consistent set of boundary input sets which are used to describe virtual periodic regions. The implementation is noninvasive to the majority of the ALEGRA coding and is based on the distributed memory parallel framework in ALEGRA. The technique involves extending the ghost element concept for interprocessor boundary communications in ALEGRA to additionally support on- and off-processor periodic boundary communications. The user interface, algorithmic details and sample computations are given.
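
    A hedged sketch of the basic bookkeeping behind periodic boundaries (illustrative only, not ALEGRA's ghost-element implementation): nodes on one face of the mesh are paired with the node at the same in-plane position on the opposite face, so field values can be exchanged much as ghost values are exchanged across processor boundaries.

```python
import numpy as np

def pair_periodic_nodes(coords, axis=0, tol=1e-9):
    """Return (low_face_node, high_face_node) index pairs along the given axis."""
    lo, hi = coords[:, axis].min(), coords[:, axis].max()
    low_nodes = [i for i, x in enumerate(coords) if abs(x[axis] - lo) < tol]
    high_nodes = [i for i, x in enumerate(coords) if abs(x[axis] - hi) < tol]
    pairs = []
    for i in low_nodes:
        key = np.delete(coords[i], axis)            # in-plane coordinates of the node
        j = min(high_nodes,
                key=lambda k: np.linalg.norm(np.delete(coords[k], axis) - key))
        pairs.append((i, j))
    return pairs

coords = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.], [0.5, 0.5]])
print(pair_periodic_nodes(coords, axis=0))          # [(0, 2), (1, 3)]
```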

  2. Binary morphology with spatially variant structuring elements: algorithm and architecture.

    Science.gov (United States)

    Hedberg, Hugo; Dokladal, Petr; Owall, Viktor

    2009-03-01

    Mathematical morphology with spatially variant structuring elements outperforms translation-invariant structuring elements in various applications and has been studied in the literature over the years. However, supporting a variable structuring element shape imposes an overwhelming computational complexity, dramatically increasing with the size of the structuring element. Limiting the supported class of structuring elements to rectangles has allowed for a fast algorithm to be developed, which is efficient in terms of number of operations per pixel, has a low memory requirement, and a low latency. These properties make this algorithm useful in both software and hardware implementations, not only for spatially variant, but also translation-invariant morphology. This paper also presents a dedicated hardware architecture intended to be used as an accelerator in embedded system applications, with corresponding implementation results when targeted for both field programmable gate arrays and application specific integrated circuits.
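
    A naive reference sketch of binary erosion with a spatially variant rectangular structuring element (illustrative only, not the paper's fast algorithm): each pixel carries its own rectangle half-sizes, and the output is foreground only if every pixel inside that rectangle is foreground.

```python
import numpy as np

def erode_variant_rect(img, half_h, half_w):
    out = np.zeros_like(img)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            hh, hw = half_h[r, c], half_w[r, c]
            window = img[max(0, r - hh):r + hh + 1, max(0, c - hw):c + hw + 1]
            out[r, c] = window.min()       # 1 only if the whole rectangle is foreground
    return out

img = np.ones((5, 5), dtype=np.uint8)
img[0, 0] = 0
half_h = np.full((5, 5), 1)                # 3x3 rectangles everywhere ...
half_w = np.full((5, 5), 1)
half_w[2, 2] = 2                           # ... except a wider one at the centre
print(erode_variant_rect(img, half_h, half_w))
```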

  3. A code for obtaining temperature distribution by finite element method

    International Nuclear Information System (INIS)

    Bloch, M.

    1984-01-01

    The ELEFIB computer code, written in Fortran, which uses the finite element method to calculate the temperature distribution for linear and two-dimensional problems, in the steady-state regime or in the transient phase of heat transfer, is presented. The formulation of the equations uses the Galerkin method. Some examples are shown and the results are compared with other papers. The comparative evaluation shows that the elaborated code gives good values. (M.C.K.) [pt]

  4. VLSI Architectures for Sliding-Window-Based Space-Time Turbo Trellis Code Decoders

    Directory of Open Access Journals (Sweden)

    Georgios Passas

    2012-01-01

    The VLSI implementation of SISO-MAP decoders used for traditional iterative turbo coding has been investigated in the literature. In this paper, a complete architectural model of a space-time turbo code receiver that includes elementary decoders is presented. These architectures are based on newly proposed building blocks such as a recursive add-compare-select-offset (ACSO) unit and A-, B-, Γ-, and LLR output calculation modules. Measurements of the complexity and decoding delay of several sliding-window-technique-based MAP decoder architectures and a proposed parameter set lead to defining equations and a comparison between those architectures.

  5. Computing element evolution towards Exascale and its impact on legacy simulation codes

    International Nuclear Information System (INIS)

    Colin de Verdiere, Guillaume J.L.

    2015-01-01

    In the light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of the next generations of supercomputers. The market analysis underlying this work shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impacts of those evolutions on legacy codes or programming methods. The problems of dissipated power and memory access are discussed and lead to a vision of what an exascale system should be. To survive, programming languages had to respond to the hardware evolutions, either by evolving or through the creation of new ones. From the previous elements, we elaborate on why vectorization, multithreading, data locality awareness and hybrid programming will be the keys to reach the exascale, implying that it is time to start rewriting codes. (orig.)

  6. FLASH: A finite element computer code for variably saturated flow

    International Nuclear Information System (INIS)

    Baca, R.G.; Magnuson, S.O.

    1992-05-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLASH computer code, is designed to simulate two-dimensional fluid flow in fractured-porous media. The code is specifically designed to model variably saturated flow in an arid site vadose zone and saturated flow in an unconfined aquifer. In addition, the code also has the capability to simulate heat conduction in the vadose zone. This report presents the following: a description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for the general use of the code. The FLASH computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by the US Department of Energy Order 5820.2A.

  7. FINELM: a multigroup finite element diffusion code. Part I

    International Nuclear Information System (INIS)

    Davierwalla, D.M.

    1980-12-01

    The author presents a two-dimensional code for multigroup diffusion using the finite element method. It was realized that the extensive connectivity, which contributes significantly to the accuracy, results in a matrix which, although symmetric and positive definite, is wide-band and possesses an irregular profile. Hence, it was decided to introduce sparsity techniques into the code. The introduction of the R-Z geometry led to a great deal of changes in the code, since the rotational invariance of the removal matrices in X-Y geometry did not carry over to R-Z geometry. Rectangular elements were introduced to remedy the inability of the triangles to model essentially one-dimensional problems such as slab geometry. The matter is discussed briefly in the text in the section on benchmark problems. This report is restricted to the general theory of the triangular elements and to the sparsity techniques, viz. incomplete dissections. The latter makes the size of the problem that can be handled independent of core memory and dependent only on disc storage capacity, which is virtually unlimited. (Auth.)

  8. Network Coding Parallelization Based on Matrix Operations for Multicore Architectures

    DEFF Research Database (Denmark)

    Wunderlich, Simon; Cabrera, Juan; Fitzek, Frank

    2015-01-01

    such as the Raspberry Pi2 with four cores in the order of up to one full magnitude. The speed increase gain is even higher than the number of cores of the Raspberry Pi2 since the newly introduced approach exploits the cache architecture way better than by-the-book matrix operations. Copyright © 2015 by the Institute...

  9. VLSI architectures for modern error-correcting codes

    CERN Document Server

    Zhang, Xinmiao

    2015-01-01

    Error-correcting codes are ubiquitous. They are adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probing. New-generation and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of those high-gain error-correcting codes pose many challenges. They usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity. VLSI

  10. Architectural elements of hybrid navigation systems for future space transportation

    Science.gov (United States)

    Trigo, Guilherme F.; Theil, Stephan

    2017-12-01

    The fundamental limitations of inertial navigation, currently employed by most launchers, have raised interest for GNSS-aided solutions. Combination of inertial measurements and GNSS outputs allows inertial calibration online, solving the issue of inertial drift. However, many challenges and design options unfold. In this work we analyse several architectural elements and design aspects of a hybrid GNSS/INS navigation system conceived for space transportation. The most fundamental architectural features, such as coupling depth, modularity between filter and inertial propagation, and the open-/closed-loop nature of the configuration, are discussed in the light of the envisaged application. The importance of the inertial propagation algorithm and sensor class in the overall system is investigated, with the handling of sensor errors and uncertainties that arise with lower grade sensors also considered. In terms of GNSS outputs we consider receiver solutions (position and velocity) and raw measurements (pseudorange, pseudorange-rate and time-difference carrier phase). Receiver clock error handling options and atmospheric error correction schemes for these measurements are analysed under flight conditions. System performance with different GNSS measurements is estimated through covariance analysis, with the differences between loose and tight coupling emphasized through partial outage simulation. Finally, we discuss options for filter algorithm robustness against non-linearities and system/measurement errors. A possible scheme for fault detection, isolation and recovery is also proposed.

  11. Optimization and Openmp Parallelization of a Discrete Element Code for Convex Polyhedra on Multi-Core Machines

    Science.gov (United States)

    Chen, Jian; Matuttis, Hans-Georg

    2013-02-01

    We report our experiences with the optimization and parallelization of a discrete element code for convex polyhedra on multi-core machines and introduce a novel variant of the sort-and-sweep neighborhood algorithm. While in theory the whole code in itself parallelizes ideally, in practice the results on different architectures with different compilers and performance measurement tools depend very much on the particle number and the optimization of the code. After difficulties with the interpretation of the data for speedup and efficiency were overcome, respectable parallelization speedups could be obtained.
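
    For context, here is a compact sketch of the textbook sort-and-sweep broad phase on which the novel variant builds (the generic version, not the authors' variant): axis-aligned bounding intervals are sorted along one axis, and only overlapping intervals are reported as candidate contacts.

```python
def sort_and_sweep(intervals):
    """intervals: list of (min_x, max_x) per particle; returns candidate contact pairs."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    active, pairs = [], []
    for i in order:
        lo_i, _ = intervals[i]
        active = [j for j in active if intervals[j][1] >= lo_i]   # drop intervals that ended
        pairs.extend((j, i) for j in active)                      # remaining ones overlap on this axis
        active.append(i)
    return pairs

print(sort_and_sweep([(0.0, 1.0), (0.9, 2.0), (3.0, 4.0)]))       # [(0, 1)]
```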

  12. The Intertwining of Transposable Elements and Non-Coding RNAs

    Directory of Open Access Journals (Sweden)

    Nicholas Delihas

    2013-06-01

    Growing evidence shows a close association of transposable elements (TEs) with non-coding RNAs (ncRNAs), and a significant number of small ncRNAs originate from TEs. Further, ncRNAs linked with TE sequences participate in a wide range of regulatory functions. Alu elements in particular are critical players in gene regulation and molecular pathways. Alu sequences embedded in both long non-coding RNAs (lncRNAs) and mRNAs form the basis of targeted mRNA decay via short imperfect base-pairing. Imperfect pairing is prominent in most ncRNA/target RNA interactions and found throughout all biological kingdoms. The piRNA-Piwi complex is multifunctional, but plays a major role in protection against invasion by transposons. This is an RNA-based genetic immune system similar to the one found in prokaryotes, the CRISPR system. Thousands of long intergenic non-coding RNAs (lincRNAs) are associated with endogenous retrovirus LTR transposable elements in human cells. These TEs can provide regulatory signals for lincRNA genes. A surprisingly large number of long circular ncRNAs have been discovered in human fibroblasts. These serve as “sponges” for miRNAs. Alu sequences, encoded in introns that flank exons, are proposed to participate in RNA circularization via Alu/Alu base-pairing. Diseases are increasingly found to have a TE/ncRNA etiology. A single point mutation in a SINE/Alu sequence in a human long non-coding RNA leads to brainstem atrophy and death. On the other hand, genomic clusters of repeat sequences as well as lncRNAs function in epigenetic regulation. Some clusters are unstable, which can lead to the formation of diseases such as facioscapulohumeral muscular dystrophy. The future may hold more surprises regarding diseases associated with ncRNAs and TEs.

  13. Do Performance-Based Codes Support Universal Design in Architecture?

    DEFF Research Database (Denmark)

    Grangaard, Sidse; Frandsen, Anne Kathrine

    2016-01-01

    The research project ‘An analysis of the accessibility requirements’ studies how Danish architectural firms experience the accessibility requirements of the Danish Building Regulations and it examines their opinions on how future regulative models can support innovative and inclusive design ... understanding of accessibility and UD is directly related to buildings like hospitals and care centers. When the objective is both innovative and inclusive architecture, the request of a performance-based model should be followed up by a knowledge enhancement effort in the building sector. Bloom's taxonomy of educational objectives is suggested as a tool for such a boost. The research project has been financed by the Danish Transport and Construction Agency.

  14. Researching on knowledge architecture of design by analysis based on ASME code

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan

    2003-01-01

    The quality of a knowledge-based system's knowledge architecture is one of the decisive factors in the system's validity and rationality. For designing the ASME code knowledge-based system, this paper presents a knowledge acquisition method that extracts knowledge through document analysis combined with consultation of domain experts' knowledge. The paper then describes a knowledge architecture for design by analysis based on the related rules in the ASME code. The knowledge in this architecture is divided into two categories: empirical knowledge and ASME code knowledge. As the basis of the knowledge architecture, a general procedural process for design by analysis that meets the engineering design requirements and designers' conventional mode of working is generalized and explained in detail in the paper. For the sake of improving the inference efficiency and concurrent computation of the KBS, a kind of knowledge Petri net (KPN) model is proposed and adopted to express the knowledge architecture. Furthermore, for the validation and verification of the empirical rules, five knowledge validation and verification theorems are given in the paper. Moreover, the research results are applicable to designing the knowledge architecture of ASME codes or other engineering standards. (author)

  15. Efficiency of High Order Spectral Element Methods on Petascale Architectures

    KAUST Repository

    Hutchinson, Maxwell

    2016-06-14

    High order methods for the solution of PDEs expose a tradeoff between computational cost and accuracy on a per degree of freedom basis. In many cases, the cost increases due to higher arithmetic intensity while affecting data movement minimally. As architectures tend towards wider vector instructions and expect higher arithmetic intensities, the best order for a particular simulation may change. This study highlights preferred orders by identifying the high order efficiency frontier of the spectral element method implemented in Nek5000 and NekBox: the set of orders and meshes that minimize computational cost at fixed accuracy. First, we extract Nek’s order-dependent computational kernels and demonstrate exceptional hardware utilization by hardware-aware implementations. Then, we perform production-scale calculations of the nonlinear single mode Rayleigh-Taylor instability on BlueGene/Q and Cray XC40-based supercomputers to highlight the influence of the architecture. Accuracy is defined with respect to physical observables, and computational costs are measured by the core-hour charge of the entire application. The total number of grid points needed to achieve a given accuracy is reduced by increasing the polynomial order. On the XC40 and BlueGene/Q, polynomial orders as high as 31 and 15 come at no marginal cost per timestep, respectively. Taken together, these observations lead to a strong preference for high order discretizations that use fewer degrees of freedom. From a performance point of view, we demonstrate up to 60% full application bandwidth utilization at scale and achieve ≈1 PFlop/s of compute performance in Nek’s most flop-intense methods.

  16. Implementing the Freight Transportation Data Architecture : Data Element Dictionary

    Science.gov (United States)

    2015-01-01

    NCFRP Report 9: Guidance for Developing a Freight Data Architecture articulates the value of establishing architecture for linking data across modes, subjects, and levels of geography to obtain essential information for decision making. Central to th...

  17. Motion estimation for video coding efficient algorithms and architectures

    CERN Document Server

    Chakrabarti, Indrajit; Chatterjee, Sumit Kumar

    2015-01-01

    The need for video compression in the modern age of visual communication cannot be over-emphasized. This monograph will provide useful information to postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records an account of research done involving fast three-step search, successive elimination, one-bit transformation and its effective combination with diamond search and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this respect, the book can be considered the first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances including video camcorders and smartphones.
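
    To make the first of those algorithms concrete, here is a minimal sketch of the classic three-step search (the standard textbook version, not the authors' optimized variant); the synthetic frames, block position and block size are assumptions made for the demo.

```python
import numpy as np

def sad(cur, ref, bx, by, dx, dy, n=8):
    """Sum of absolute differences between a block of `cur` and a displaced block of `ref`."""
    blk = cur[by:by + n, bx:bx + n].astype(int)
    cand = ref[by + dy:by + dy + n, bx + dx:bx + dx + n].astype(int)
    return np.abs(blk - cand).sum()

def three_step_search(cur, ref, bx, by, n=8):
    best, step = (0, 0), 4
    while step >= 1:
        candidates = [(best[0] + dx, best[1] + dy)
                      for dx in (-step, 0, step) for dy in (-step, 0, step)]
        best = min(candidates, key=lambda d: sad(cur, ref, bx, by, d[0], d[1], n))
        step //= 2
    return best                                    # motion vector (dx, dy) for the block

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (32, 32), dtype=np.uint8)
cur = np.roll(ref, shift=(-4, -4), axis=(0, 1))    # current frame = reference displaced by (4, 4)
print(three_step_search(cur, ref, bx=12, by=12))   # (4, 4)
```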

  18. High efficiency video coding (HEVC) algorithms and architectures

    CERN Document Server

    Budagavi, Madhukar; Sullivan, Gary

    2014-01-01

    This book provides developers, engineers, researchers and students with detailed knowledge about the High Efficiency Video Coding (HEVC) standard. HEVC is the successor to the widely successful H.264/AVC video compression standard, and it provides around twice as much compression as H.264/AVC for the same level of quality. The applications for HEVC will not only cover the space of the well-known current uses and capabilities of digital video – they will also include the deployment of new services and the delivery of enhanced video quality, such as ultra-high-definition television (UHDTV) and video with higher dynamic range, wider range of representable color, and greater representation precision than what is typically found today. HEVC is the next major generation of video coding design – a flexible, reliable and robust solution that will support the next decade of video applications and ease the burden of video on world-wide network traffic. This book provides a detailed explanation of the various parts ...

  19. The evolving grid paradigm and code "tuning" for modern architectures - are the two mutually exclusive?

    Science.gov (United States)

    Long, Robin

    2015-12-01

    With the data output from the LHC increasing, many of the LHC experiments have made significant improvements to their code to take advantage of modern CPU architecture and the accompanying advanced features. With the grid environment changing to heavily include virtualisation and cloud services, we look at whether these two systems can be compatible, or whether improvements in code are lost through virtualisation. We compare the runtime speed improvements achieved in more recent versions of ATLAS code and see if these improvements hold up on various grid paradigms.

  20. Verdon: code of mechanical and thermal behavior of fuel element

    International Nuclear Information System (INIS)

    Courtois, C.; Truffert, J.

    1979-01-01

    The Verdon code is intended for the analysis and simulation of the mechanical, two-dimensional thermal and physico-chemical behavior of an oxide fuel pin in steady-state and transient conditions. Calculations can be done in plane or axisymmetric geometry. The radial, one-dimensional thermal analysis works with finite differences. It takes into account the evolution of the fissile material (radial redistribution, flux deepening, ...) and the main physico-chemical properties of the fuel (conductivity, migration, fission gas release, ...). Only the thermal consequences of the fuel's mechanical behavior (fuel-cladding gap width, crack formation, creep, ...) are submitted to a two-dimensional analysis. The mechanical analysis works with two-dimensional finite elements in plane or axisymmetric geometry. The mesh represents a part of the fuel pin and its cladding. [fr]

  1. Finite element code development for modeling detonation of HMX composites

    Science.gov (United States)

    Duran, Adam V.; Sundararaghavan, Veera

    2017-01-01

    In this work, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable, efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well-calibrated equations of state for the solid unreacted material and gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Gruneisen equation of state, calibrated from experiments, was employed for the unreacted HMX. The JWL form was used to model the EOS of the gaseous reaction products. It is assumed that the unreacted explosive and reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy were computed using the rule of mixtures. An Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that induces a non-oscillatory stabilized scheme for the shock front. The FE model was validated using analytical solutions for the Sod shock tube and ZND strong detonation models. Benchmark problems are presented for geometries in which a single HMX crystal is subjected to a shock condition.
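
    For reference, the JWL equation of state mentioned above has the standard literature form below (not copied from this abstract); V is the relative volume v/v0, E the internal energy per unit initial volume, and A, B, R1, R2, ω are calibration constants.

```latex
p \;=\; A\!\left(1-\frac{\omega}{R_1 V}\right)e^{-R_1 V}
   \;+\; B\!\left(1-\frac{\omega}{R_2 V}\right)e^{-R_2 V}
   \;+\; \frac{\omega E}{V}
```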

  2. ELLIPT2D: A Flexible Finite Element Code Written in Python

    International Nuclear Information System (INIS)

    Pletzer, A.; Mollis, J.C.

    2001-01-01

    The use of the Python scripting language for scientific applications and in particular to solve partial differential equations is explored. It is shown that Python's rich data structures and object-oriented features can be exploited to write programs that are not only significantly more concise than their counterparts written in Fortran, C or C++, but are also numerically efficient. To illustrate this, a two-dimensional finite element code (ELLIPT2D) has been written. ELLIPT2D provides a flexible and easy-to-use framework for solving a large class of second-order elliptic problems. The program allows for structured or unstructured meshes. All functions defining the elliptic operator are user supplied and so are the boundary conditions, which can be of Dirichlet, Neumann or Robbins type. ELLIPT2D makes extensive use of dictionaries (hash tables) as a way to represent sparse matrices. Other key features of the Python language that have been widely used include: operator overloading, error handling, array slicing, and the Tkinter module for building graphical user interfaces. As an example of the utility of ELLIPT2D, a nonlinear solution of the Grad-Shafranov equation is computed using a Newton iterative scheme. A second application focuses on a solution of the toroidal Laplace equation coupled to a magnetohydrodynamic stability code, a problem arising in the context of magnetic fusion research.
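
    A small sketch of the dictionary-as-sparse-matrix idea highlighted above (illustrative only; ELLIPT2D's own classes may differ): nonzero entries are keyed by (row, col), which makes finite element assembly a simple accumulate-on-key operation.

```python
class SparseMatrix(dict):
    def add(self, i, j, value):
        self[(i, j)] = self.get((i, j), 0.0) + value

    def matvec(self, x):
        y = [0.0] * len(x)
        for (i, j), a in self.items():
            y[i] += a * x[j]
        return y

A = SparseMatrix()
# assemble a tiny 1-D stiffness matrix for 3 nodes (2 linear elements of unit length)
for e in range(2):
    for a, b, k in [(e, e, 1.0), (e, e + 1, -1.0), (e + 1, e, -1.0), (e + 1, e + 1, 1.0)]:
        A.add(a, b, k)

print(A.matvec([0.0, 1.0, 0.0]))   # [-1.0, 2.0, -1.0], the familiar 1-D Laplacian stencil
```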

  3. Architectural and Algorithmic Requirements for a Next-Generation System Analysis Code

    Energy Technology Data Exchange (ETDEWEB)

    V.A. Mousseau

    2010-05-01

    This document presents high-level architectural and system requirements for a next-generation system analysis code (NGSAC) to support reactor safety decision-making by plant operators and others, especially in the context of light water reactor plant life extension. The capabilities of NGSAC will be different from those of current-generation codes, not only because computers have evolved significantly in the generations since the current paradigm was first implemented, but because the decision-making processes that need the support of next-generation codes are very different from the decision-making processes that drove the licensing and design of the current fleet of commercial nuclear power reactors. The implications of these newer decision-making processes for NGSAC requirements are discussed, and resulting top-level goals for the NGSAC are formulated. From these goals, the general architectural and system requirements for the NGSAC are derived.

  4. Propel: A Discontinuous-Galerkin Finite Element Code for Solving the Reacting Navier-Stokes Equations

    Science.gov (United States)

    Johnson, Ryan; Kercher, Andrew; Schwer, Douglas; Corrigan, Andrew; Kailasanath, Kazhikathra

    2017-11-01

    This presentation focuses on the development of a Discontinuous Galerkin (DG) method for application to chemically reacting flows. The in-house code, called Propel, was developed by the Laboratory of Computational Physics and Fluid Dynamics at the Naval Research Laboratory. It was designed specifically for developing advanced multi-dimensional algorithms to run efficiently on new and innovative architectures such as GPUs. For these results, Propel solves for convection and diffusion simultaneously with detailed transport and thermodynamics. Chemistry is currently solved in a time-split approach using Strang-splitting with finite element DG time integration of chemical source terms. Results presented here show canonical unsteady reacting flow cases, such as co-flow and splitter plate, and we report performance for higher order DG on CPU and GPUs.

  5. CONDOR: a database resource of developmentally associated conserved non-coding elements

    Directory of Open Access Journals (Sweden)

    Smith Sarah

    2007-08-01

    Background: Comparative genomics is currently one of the most popular approaches to study the regulatory architecture of vertebrate genomes. Fish-mammal genomic comparisons have proved powerful in identifying conserved non-coding elements likely to be distal cis-regulatory modules such as enhancers, silencers or insulators that control the expression of genes involved in the regulation of early development. The scientific community is showing increasing interest in characterizing the function, evolution and language of these sequences. Despite this, there remains little in the way of user-friendly access to a large dataset of such elements in conjunction with the analysis and the visualization tools needed to study them. Description: Here we present CONDOR (COnserved Non-coDing Orthologous Regions), available at: http://condor.fugu.biology.qmul.ac.uk. In an interactive and intuitive way the website displays data on > 6800 non-coding elements associated with over 120 early developmental genes and conserved across vertebrates. The database regularly incorporates results of ongoing in vivo zebrafish enhancer assays of the CNEs carried out in-house, which currently number ~100. Included and highlighted within this set are elements derived from duplication events both at the origin of vertebrates and more recently in the teleost lineage, thus providing valuable data for studying the divergence of regulatory roles between paralogs. CONDOR therefore provides a number of tools and facilities to allow scientists to progress in their own studies on the function and evolution of developmental cis-regulation. Conclusion: By providing access to data with an approachable graphics interface, the CONDOR database presents a rich resource for further studies into the regulation and evolution of genes involved in early development.

  6. Analysis of central enterprise architecture elements in models of six eHealth projects.

    Science.gov (United States)

    Virkanen, Hannu; Mykkänen, Juha

    2014-01-01

    Large-scale initiatives for eHealth services have been established in many countries on regional or national level. The use of Enterprise Architecture has been suggested as a methodology to govern and support the initiation, specification and implementation of large-scale initiatives including the governance of business changes as well as information technology. This study reports an analysis of six health IT projects in relation to Enterprise Architecture elements, focusing on central EA elements and viewpoints in different projects.

  7. Architecture proposal for the use of QR code in supply chain management

    Directory of Open Access Journals (Sweden)

    Dalton Matsuo Tavares

    2012-01-01

    Supply chain traceability and visibility are key concerns for many companies. Radio-Frequency Identification (RFID) is an enabling technology that allows identification of objects in a fully automated manner via radio waves. Nevertheless, this technology has limited acceptance and high costs. This paper presents a research effort undertaken to design a track and trace solution in supply chains, using the quick response code (or QR Code for short) as a less complex and cost-effective alternative to RFID in supply chain management (SCM). A first architecture proposal using open source software will be presented as a proof of concept. The system architecture is presented in order to achieve tag generation, image acquisition and pre-processing, product inventory and tracking. A prototype system for tag identification is developed and discussed at the end of the paper to demonstrate its feasibility.

  8. Bridging The Gap Between The Past And The Present: A Reconsideration Of Mosque Architectural Elements

    Directory of Open Access Journals (Sweden)

    Omar S Asfour

    2016-12-01

    Mosques are among the most important building types for any community, where Muslims gather for their prayers and social activities. Mosque architecture has developed over history and faced several dramatic changes. This raises a question regarding the reality of mosque architecture and how it should look today. This paper discusses this issue through a historical overview and some critical observations. Firstly, the paper discusses the historical functional role of the basic elements of the mosque. The validity of these elements within the context of modern architecture is then argued, considering the contemporary inputs that have a significant impact on mosque architecture. Several cases are presented and discussed in this regard. The study concluded that there is a great symbolic and spiritual value in these elements that should be maintained. The analysis of several contemporary cases revealed that there is a wide margin to revive and reintroduce these elements in the light of modern architectural trends. In addition to their functional roles, mosque architectural elements could be used as identity elements of the Islamic city, as microclimatic modifiers, and as linking tools between the past and the present.

  9. PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)

    Science.gov (United States)

    Vincenti, Henri

    2016-03-01

    The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current Petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving Exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce energy consumption related to data movement by using more and more cores on each compute node ('fat nodes') that will have a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process Multiple Instructions on Multiple Data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for Multicore/Manycore CPUs) to fully take advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high performance skeleton PIC code PICSAR to both achieve good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba python compiler.
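
    A hedged sketch (not PICSAR's implementation) of the vectorization-friendly style the abstract argues for: particle data held as contiguous arrays (structure of arrays), so a simple push becomes a handful of whole-array operations that map naturally onto SIMD units. The toy field and parameters are assumptions.

```python
import numpy as np

n = 1_000_000
rng = np.random.default_rng(0)
x = rng.random(n)                                  # positions, one contiguous array
v = rng.standard_normal(n)                         # velocities, another contiguous array
E = np.sin(2.0 * np.pi * x)                        # toy electric field evaluated at the particles
q_over_m, dt = -1.0, 1.0e-3

v += q_over_m * E * dt                             # whole-array updates vectorize well
x += v * dt
x %= 1.0                                           # periodic box of unit length
```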

  10. A self-organized internal models architecture for coding sensory-motor schemes

    Directory of Open Access Journals (Sweden)

    Esaú eEscobar Juárez

    2016-04-01

    Full Text Available Cognitive robotics research draws inspiration from theories and models on cognition, as conceived by neuroscience or cognitive psychology, to investigate biologically plausible computational models in artificial agents. In this field, the theoretical framework of Grounded Cognition provides epistemological and methodological grounds for the computational modeling of cognition. It has been stressed in the literature that simulation, prediction, and multi-modal integration are key aspects of cognition and that computational architectures capable of putting them into play in a biologically plausible way are a necessity. Research in this direction has brought extensive empirical evidence suggesting that Internal Models are suitable mechanisms for sensory-motor integration. However, current Internal Models architectures show several drawbacks, mainly due to the lack of a unified substrate allowing for a true sensory-motor integration space, enabling flexible and scalable ways to model cognition under the embodiment hypothesis constraints. We propose the Self-Organized Internal Models Architecture (SOIMA), a computational cognitive architecture coded by means of a network of self-organized maps, implementing coupled internal models that allow modeling multi-modal sensory-motor schemes. Our approach addresses integrally the issues of current implementations of Internal Models. We discuss the design and features of the architecture, and provide empirical results on a humanoid robot that demonstrate the benefits and potentialities of the SOIMA concept for studying cognition in artificial agents.
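
    As a rough illustration of the kind of building block such an architecture is coded from, the following sketch (my own simplification, not the SOIMA implementation) trains a small self-organizing map on 2-D inputs: the best-matching unit and its grid neighbours are pulled toward each sample. Grid size, learning rate and neighbourhood width are invented.

```python
import numpy as np

# Minimal self-organizing map (SOM) sketch; parameters are illustrative.
rng = np.random.default_rng(1)
grid_w, grid_h, dim = 8, 8, 2
weights = rng.random((grid_w * grid_h, dim))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], float)

def train_step(x, lr=0.2, sigma=1.5):
    """Move the best-matching unit and its grid neighbours toward sample x."""
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
    grid_dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))        # neighbourhood function
    weights[:] += lr * h[:, None] * (x - weights)

for x in rng.random((500, dim)):                      # toy sensory samples
    train_step(x)
print("first unit after training:", weights[0])
```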

  11. FPGA-Based Channel Coding Architectures for 5G Wireless Using High-Level Synthesis

    Directory of Open Access Journals (Sweden)

    Swapnil Mhaske

    2017-01-01

    Full Text Available We propose strategies to achieve a high-throughput FPGA architecture for quasi-cyclic low-density parity-check codes based on circulant-1 identity matrix construction. By splitting the node processing operation in the min-sum approximation algorithm, we achieve pipelining in the layered decoding schedule without utilizing additional hardware resources. High-level synthesis compilation is used to design and develop the architecture on the FPGA hardware platform. To validate this architecture, an IEEE 802.11n compliant 608 Mb/s decoder is implemented on the Xilinx Kintex-7 FPGA using the LabVIEW FPGA Compiler in the LabVIEW Communication System Design Suite. Architecture scalability was leveraged to accomplish a 2.48 Gb/s decoder on a single Xilinx Kintex-7 FPGA. Further, we present rapidly prototyped experimentation of an IEEE 802.16 compliant hybrid automatic repeat request system based on the efficient decoder architecture developed. In spite of the mixed nature of data processing—digital signal processing and finite-state machines—LabVIEW FPGA Compiler significantly reduced time to explore the system parameter space and to optimize in terms of error performance and resource utilization. A 4x improvement in the system throughput, relative to a CPU-based implementation, was achieved to measure the error-rate performance of the system over large, realistic data sets using accelerated, in-hardware simulation.
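
    For readers unfamiliar with the decoder kernel being pipelined here, this is a minimal scalar sketch of the min-sum check-node update (a generic textbook form, not the paper's FPGA design): each outgoing message takes the sign product and the minimum magnitude of all the other incoming variable-to-check messages.

```python
import numpy as np

def min_sum_check_node(msgs_in):
    """Min-sum check-node update: for each edge, combine all *other* incoming
    LLR messages by sign product and minimum magnitude (illustrative sketch)."""
    msgs_in = np.asarray(msgs_in, dtype=float)
    out = np.empty_like(msgs_in)
    for i in range(len(msgs_in)):
        others = np.delete(msgs_in, i)
        sign = np.prod(np.sign(others))
        out[i] = sign * np.min(np.abs(others))
    return out

# Toy variable-to-check LLRs on one check node's edges.
print(min_sum_check_node([+1.8, -0.4, +2.5, -3.1]))
# Each output sign follows the parity of the other edges; each magnitude
# equals the smallest |LLR| among the others.
```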

  12. Receiver architecture of the thousand-element array (THEA)

    NARCIS (Netherlands)

    Kant, G.W.; Kokkeler, Andre B.J.; Smolders, A.B.; Gunst, A.W.

    2000-01-01

    As part of the development of a new international radio-telescope SKA (Square Kilometre Array), an outdoor phased-array prototype, the THousand Element Array (THEA), is being developed at NFRA. THEA is a phased array with 1024 active elements distributed on a regular grid over a surface of

  13. A Novel Architecture for Adaptive Traffic Control in Network on Chip using Code Division Multiple Access Technique

    OpenAIRE

    Fatemeh. Dehghani; Shahram. Darooei

    2016-01-01

    Network on chip has emerged as a long-term and effective method in Multiprocessor System-on-Chip communications in order to overcome the bottleneck in bus-based communication architectures. The efficiency and performance of a network on chip are strongly dependent on the architecture and structure of the network. In this paper a new structure and architecture for adaptive traffic control in network on chip using the Code Division Multiple Access technique is presented. To solve the problem of synchronous acce...

  14. Improvement of implicit finite element code performance in deep drawing simulations by dynamics contributions

    NARCIS (Netherlands)

    Meinders, Vincent T.; van den Boogaard, Antonius H.; Huetink, Han

    2003-01-01

    To intensify the use of implicit finite element codes for solving large scale problems, the computation time of these codes has to be decreased drastically. A method is developed which decreases the computational time of implicit codes by factors. The method is based on introducing inertia effects

  15. Implementing Scientific Simulation Codes Highly Tailored for Vector Architectures Using Custom Configurable Computing Machines

    Science.gov (United States)

    Rutishauser, David

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters

  16. Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes

    Directory of Open Access Journals (Sweden)

    Luca Fanucci

    2009-01-01

    Full Text Available The layered decoding algorithm has recently been proposed as an efficient means for the decoding of low-density parity-check (LDPC codes, thanks to the remarkable improvement in the convergence speed (2x of the decoding process. However, pipelined semi-parallel decoders suffer from violations or “hazards” between consecutive updates, which not only violate the layered principle but also enforce the loops in the code, thus spoiling the error correction performance. This paper describes three different techniques to properly reschedule the decoding updates, based on the careful insertion of “idle” cycles, to prevent the hazards of the pipeline mechanism. Also, different semi-parallel architectures of a layered LDPC decoder suitable for use with such techniques are analyzed. Then, taking the LDPC codes for the wireless local area network (IEEE 802.11n as a case study, a detailed analysis of the performance attained with the proposed techniques and architectures is reported, and results of the logic synthesis on a 65 nm low-power CMOS technology are shown.

  17. Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes

    Directory of Open Access Journals (Sweden)

    Rovini Massimo

    2009-01-01

    Full Text Available The layered decoding algorithm has recently been proposed as an efficient means for the decoding of low-density parity-check (LDPC codes, thanks to the remarkable improvement in the convergence speed (2x of the decoding process. However, pipelined semi-parallel decoders suffer from violations or "hazards" between consecutive updates, which not only violate the layered principle but also enforce the loops in the code, thus spoiling the error correction performance. This paper describes three different techniques to properly reschedule the decoding updates, based on the careful insertion of "idle" cycles, to prevent the hazards of the pipeline mechanism. Also, different semi-parallel architectures of a layered LDPC decoder suitable for use with such techniques are analyzed. Then, taking the LDPC codes for the wireless local area network (IEEE 802.11n as a case study, a detailed analysis of the performance attained with the proposed techniques and architectures is reported, and results of the logic synthesis on a 65 nm low-power CMOS technology are shown.

  18. Architecture of villas as an element of identity of Vrnjačka Banja

    Directory of Open Access Journals (Sweden)

    Marić Igor

    2009-01-01

    Full Text Available The architecture of villas stands out as one of the recognizable characteristics of Vrnjačka Banja's physical structure and as a distinctive type of object within the tourist offer. Analyses of the position, function and importance of villa architecture in shaping the identity of Vrnjačka Banja are based on the conviction that organized space and the built environment, as results of design and planning processes, have a decisive role in forming the identity of places and making them recognizable. In this paper, the architectural elements that characterize this type of object are analyzed. The role that the singled-out elements play at higher urban levels, first in creating characteristic, recognizable architectural and urban ensembles, and consequently in forming identity, is also pointed out. These issues are considered in terms of tourism development, as a precondition for making the city recognizable and open to branding, which in turn improves the exploitation of its tourist potential.

  19. Deployment of the OSIRIS EM-PIC code on the Intel Knights Landing architecture

    Science.gov (United States)

    Fonseca, Ricardo

    2017-10-01

    Electromagnetic particle-in-cell (EM-PIC) codes such as OSIRIS have found widespread use in modelling the highly nonlinear and kinetic processes that occur in several relevant plasma physics scenarios, ranging from astrophysical settings to high-intensity laser plasma interaction. Being computationally intensive, these codes require large scale HPC systems, and a continuous effort in adapting the algorithm to new hardware and computing paradigms. In this work, we report on our efforts in deploying the OSIRIS code on the new Intel Knights Landing (KNL) architecture. Unlike the previous generation (Knights Corner), these boards are standalone systems, and introduce several new features, including the new AVX-512 instructions and on-package MCDRAM. We will focus on the parallelization and vectorization strategies followed, as well as memory management, and present a detailed evaluation of code performance in comparison with the CPU code. This work was partially supported by Fundação para a Ciência e a Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014.

  20. Review and Evaluation of a Turbomachinery Throughflow Finite Element Code

    Science.gov (United States)

    1989-06-01

    Abstract not recoverable: the record text consists of garbled extraction residue from the report documentation page and the list of figures (Figure 26, Rotor Tip Section; Figure 27, Computational Mesh for the Blade-to-Blade Solution; Figure 28, iso-pressure lines). The only recoverable statement is that results are presented for Case 5 at a flow coefficient of 0.61, including computed iso-pressure lines on the axisymmetric stream surface.

  1. INGEN: a general-purpose mesh generator for finite element codes

    International Nuclear Information System (INIS)

    Cook, W.A.

    1979-05-01

    INGEN is a general-purpose mesh generator for two- and three-dimensional finite element codes. The basic parts of the code are surface and three-dimensional region generators that use linear-blending interpolation formulas. These generators are based on an i, j, k index scheme that is used to number nodal points, construct elements, and develop displacement and traction boundary conditions. This code can generate truss elements (2 nodal points); plane stress, plane strain, and axisymmetric two-dimensional continuum elements (4 to 8 nodal points); plate elements (4 to 8 nodal points); and three-dimensional continuum elements (8 to 21 nodal points). The traction loads generated are consistent with the elements generated. The expansion-contraction option is of special interest. This option makes it possible to change an existing mesh such that some regions are refined and others are made coarser than the original mesh. 9 figures
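
    The linear-blending interpolation that such surface generators use can be sketched in a few lines; the version below is a generic 2-D Coons-style blending on the unit square with invented boundary curves, not INGEN's actual formulation.

```python
import numpy as np

def coons(u, v, bottom, top, left, right):
    """Linear-blending (Coons) interpolation of an interior point from four
    boundary curves; each curve maps a parameter in [0, 1] to an (x, y) point."""
    b, t, l, r = bottom(u), top(u), left(v), right(v)
    c00, c10 = bottom(0.0), bottom(1.0)
    c01, c11 = top(0.0), top(1.0)
    ruled = (1 - v) * b + v * t + (1 - u) * l + u * r
    corner = ((1 - u) * (1 - v) * c00 + u * (1 - v) * c10
              + (1 - u) * v * c01 + u * v * c11)
    return ruled - corner

# Toy boundaries: a square region with a bulged top edge.
bottom = lambda u: np.array([u, 0.0])
top    = lambda u: np.array([u, 1.0 + 0.2 * np.sin(np.pi * u)])
left   = lambda v: np.array([0.0, v])
right  = lambda v: np.array([1.0, v])

# Generate a small structured set of nodes for this region.
nodes = [[coons(u, v, bottom, top, left, right)
          for u in np.linspace(0, 1, 5)] for v in np.linspace(0, 1, 5)]
print(nodes[2][2])  # node at the centre of the patch
```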

  2. Spectral-Element Seismic Wave Propagation Codes for both Forward Modeling in Complex Media and Adjoint Tomography

    Science.gov (United States)

    Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.

    2015-12-01

    We present both SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, representing high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale applications. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.

  3. MESHJET. A mesh generation package for finite element MHD equilibrium codes at JET

    International Nuclear Information System (INIS)

    Springmann, E.; Taroni, A.

    1984-01-01

    MESHJET is a fairly general package and can be used to generate meshes for any finite element code in two space dimensions. These finite element codes are widely used at JET. The first code is for the identification of the plasma boundary and internal flux surfaces from measurements of external fluxes and fields under the assumption that the plasma toroidal density can be represented within a given class of functions. The second code computes plasma equilibrium configurations taking into account a two-dimensional model of the transformer iron core in JET. (author)

  4. Understanding Epistatic Interactions between Genes Targeted by Non-coding Regulatory Elements in Complex Diseases

    Directory of Open Access Journals (Sweden)

    Min Kyung Sung

    2014-12-01

    Full Text Available Genome-wide association studies have proven the highly polygenic architecture of complex diseases or traits; therefore, single-locus-based methods are usually unable to detect all involved loci, especially when individual loci exert small effects. Moreover, the majority of associated single-nucleotide polymorphisms resides in non-coding regions, making it difficult to understand their phenotypic contribution. In this work, we studied epistatic interactions associated with three common diseases using Korea Association Resource (KARE) data: type 2 diabetes mellitus (DM), hypertension (HT), and coronary artery disease (CAD). We showed that epistatic single-nucleotide polymorphisms (SNPs) were enriched in enhancers, as well as in DNase I footprints (the Encyclopedia of DNA Elements [ENCODE] Project Consortium, 2012), which suggested that the disruption of the regulatory regions where transcription factors bind may be involved in the disease mechanism. Accordingly, to identify the genes affected by the SNPs, we employed whole-genome multiple-cell-type enhancer data which were discovered using DNase I profiles and Cap Analysis Gene Expression (CAGE). Assigned genes were significantly enriched in known disease-associated gene sets, which were explored based on the literature, suggesting that this approach is useful for detecting relevant affected genes. In our knowledge-based epistatic network, the three diseases share many associated genes and are also closely related with each other through many epistatic interactions. These findings elucidate the genetic basis of the close relationship between DM, HT, and CAD.
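
    Enrichment claims of this kind typically reduce to a contingency-table test. The sketch below, a generic illustration with made-up counts (not the KARE analysis) and assuming SciPy is available, uses Fisher's exact test to ask whether epistatic SNPs fall in enhancer regions more often than the remaining SNPs do.

```python
from scipy.stats import fisher_exact

# Hypothetical counts, for illustration only:
#                 in enhancer   not in enhancer
# epistatic SNPs        40             160
# other SNPs           300            4500
table = [[40, 160], [300, 4500]]

odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.2e}")
```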

  5. Implementation of the DPM Monte Carlo code on a parallel architecture for treatment planning applications.

    Science.gov (United States)

    Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J

    2004-09-01

    We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan-Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster consisting of 800 MHz Intel Pentium III processors shows an almost linear speedup up to 32 processors for simulating 1 x 10^8 or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors based on availability) which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors on increasing the problem size up to 8 x 10^8 histories. For a smaller number of histories (1 x 10^8) the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1 x 10^8 histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron Cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster as compared to the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy.
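
    The speedup and efficiency figures quoted in records like this follow directly from the usual strong-scaling definitions; a small sketch (with invented timings, not the paper's measurements) makes the arithmetic explicit.

```python
def speedup_and_efficiency(t_serial, t_parallel, n_procs):
    """Classic strong-scaling metrics: S = T1 / Tp, E = S / p."""
    s = t_serial / t_parallel
    return s, s / n_procs

# Invented timings: a run that is roughly 20x faster on 24 processors.
t1, t24 = 1000.0, 49.66
s, e = speedup_and_efficiency(t1, t24, 24)
print(f"speedup = {s:.1f}x, efficiency = {e:.1%}")
# An efficiency around 83.9% on 24 processors (the figure reported in the
# record) indicates that communication overhead is no longer negligible.
```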

  6. Implementation of the DPM Monte Carlo code on a parallel architecture for treatment planning applications

    International Nuclear Information System (INIS)

    Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J.

    2004-01-01

    We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan-Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster consisting of 800 MHz Intel Pentium III processors shows an almost linear speedup up to 32 processors for simulating 1x10^8 or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors based on availability) which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors on increasing the problem size up to 8x10^8 histories. For a smaller number of histories (1x10^8) the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1x10^8 histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron Cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster as compared to the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy

  7. Recent progress of an integrated implosion code and modeling of element physics

    International Nuclear Information System (INIS)

    Nagatomo, H.; Takabe, H.; Mima, K.; Ohnishi, N.; Sunahara, A.; Takeda, T.; Nishihara, K.; Nishiguchu, A.; Sawada, K.

    2001-01-01

    The physics of inertial fusion is based on a variety of elements such as compressible hydrodynamics, radiation transport, non-ideal equation of state, non-LTE atomic processes, and relativistic laser plasma interaction. In addition, the implosion process is not in a stationary state, and fluid dynamics, energy transport and instabilities should be solved simultaneously. In order to study such complex physics, an integrated implosion code including all physics important in the implosion process should be developed. The details of the physics elements should be studied and the resultant numerical modeling should be installed in the integrated code so that the implosion can be simulated with available computers within realistic CPU time. Therefore, this task can be basically separated into two parts. One is to integrate all physics elements into a code, which is strongly related to the development of a hydrodynamic equation solver. We have developed a 2-D integrated implosion code which solves mass, momentum, electron energy, ion energy, equation of state, laser ray-trace, laser absorption, radiation, surface tracing and so on. Reasonable results in simulating Rayleigh-Taylor instability and cylindrical implosion have been obtained using this code. The other is code development for each element of physics and verification of these codes. We have made progress in developing a nonlocal electron transport code and 2- and 3-dimensional radiation hydrodynamic codes. (author)

  8. A Study on Architecture of Malicious Code Blocking Scheme with White List in Smartphone Environment

    Science.gov (United States)

    Lee, Kijeong; Tolentino, Randy S.; Park, Gil-Cheol; Kim, Yong-Tae

    Recently, interest in and demand for mobile communications have been growing rapidly because of the increasing prevalence of smartphones around the world. Existing feature phones have largely been replaced by smartphones, and the explosive growth of Internet use on smartphones, e-commerce and Internet banking transactions has increased the importance of protecting personal information. Smartphone antivirus products have therefore been developed and launched in order to prevent malicious code or virus infection. In this paper, we propose a new scheme to protect the smartphone from malicious codes and malicious applications, which are elements of the security threats in the mobile environment, and to prevent information leakage caused by malicious code infection. The proposed scheme is based on a white list of smartphone applications which only allows authorized applications to be installed, preventing the installation of malicious and untrusted mobile applications which could possibly infect the applications and programs of the smartphone.
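
    A white-list check of the kind described reduces to comparing a fingerprint of the application package against a set of approved fingerprints. The sketch below is a generic, hypothetical illustration (file names and the example digest are invented), not the authors' scheme.

```python
import hashlib

# Hypothetical white list: SHA-256 digests of approved application packages.
WHITE_LIST = {
    "3f79bb7b435b05321651daefd374cdc681dc06faa65e374e38337b88ca046dea",  # example digest
}

def package_digest(path):
    """Hash the application package file in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def may_install(path):
    """Allow installation only if the package digest is on the white list."""
    return package_digest(path) in WHITE_LIST

# Usage (illustrative): may_install("some_app.apk") -> True or False
```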

  9. Kine-Mould : Manufacturing technology for curved architectural elements in concrete

    NARCIS (Netherlands)

    Schipper, H.R.; Eigenraam, P.; Grünewald, S.; Soru, M.; Nap, P.; Van Overveld, B.; Vermeulen, J.

    2015-01-01

    The production of architectural elements with complex geometry is challenging for concrete manufacturers. Computer-numerically-controlled (CNC) milled foam moulds have been applied frequently in the last decades, resulting in good aesthetical performance. However, still the costs are high and a

  10. Software Abstractions and Methodologies for HPC Simulation Codes on Future Architectures

    Directory of Open Access Journals (Sweden)

    Anshu Dubey

    2014-07-01

    Full Text Available Simulations with multi-physics modeling have become crucial to many science and engineering fields, and multi-physics capable scientific software is as important to these fields as instruments and facilities are to experimental sciences. The current generation of mature multi-physics codes would have sustainably served their target communities with a modest amount of ongoing investment for enhancing capabilities. However, the revolution occurring in the hardware architecture has made it necessary to tackle the parallelism and performance management in these codes at multiple levels. The requirements of various levels are often at cross-purposes with one another, and therefore hugely complicate the software design. All of these considerations make it essential to approach this challenge cooperatively as a community. We conducted a series of workshops under an NSF-SI2 conceptualization grant to get input from various stakeholders, and to identify broad approaches that might lead to a solution. In this position paper we detail the major concerns articulated by the application code developers, and emerging trends in utilization of programming abstractions that we found through these workshops.

  11. An Interactive Preprocessor Program with Graphics for a Three-Dimensional Finite Element Code.

    Science.gov (United States)

    Hamilton, Claude Hayden, III

    The development and capabilities of an interactive preprocessor program with graphics for an existing three-dimensional finite element code is presented. This preprocessor program, EDGAP3D, is designed to be used in conjunction with the Texas Three Dimensional Grain Analysis Program (TXCAP3D). The code presented in this research is capable of the…

  12. APPLICATION FOR DESIGN OF STRUCTURAL ELEMENT USING VISUAL BASIC CODING

    OpenAIRE

    T. Thenmozhi; K. Nithya; M. Arun Kumar; M. Ravichandran

    2017-01-01

    The increasing reliance of engineers on computer software in the performance of their tasks requires that engineers, the future professional engineers, be knowledgeable of sound engineering concepts, updated on the latest computer technology used in the industry, and aware of the limitations and capabilities of the computer in solving engineering problems. Computer methods in civil engineering were used to develop a structural design program for the design of structural elements using Visual Basic. By creati...

  13. The architecture of cartilage: Elemental maps and scanning transmission ion microscopy/tomography

    International Nuclear Information System (INIS)

    Reinert, Tilo; Reibetanz, Uta; Schwertner, Michael; Vogt, Juergen; Butz, Tilman; Sakellariou, Arthur

    2002-01-01

    Articular cartilage is not just a jelly-like cover of the bone within the joints but a highly sophisticated architecture of hydrated macromolecules, collagen fibrils and cartilage cells. Influences on the physiological balance due to age-related or pathological changes can lead to malfunction and subsequently to degradation of the cartilage. Many activities in cartilage research are dealing with the architecture of joint cartilage but have limited access to elemental distributions. Nuclear microscopy is able to yield spatially resolved elemental concentrations, provides density information and can visualise the arrangement of the collagen fibres. The distribution of the cartilage matrix can be deduced from the elemental and density maps. The findings showed a varying content of collagen and proteoglycan between zones of different cell maturation. Zones of higher collagen content are characterised by aligned collagen fibres that can form tubular structures. Recently we focused on STIM tomography to investigate the three dimensional arrangement of the collagen structures

  14. Non-coding sequence retrieval system for comparative genomic analysis of gene regulatory elements

    Directory of Open Access Journals (Sweden)

    Temple Matthew H

    2007-03-01

    Full Text Available Abstract Background Completion of the human genome sequence, along with that of other species, allows for greater understanding of the biochemical mechanisms and processes that govern healthy as well as diseased states. The large size of the genome sequences has made them difficult to study using traditional methods. There are many studies focusing on the protein-coding sequences; however, not much is known about the function of non-coding regions of the genome. It has been demonstrated that parts of the non-coding region play a critical role as gene regulatory elements. Enhancers that regulate transcription processes have been found in intergenic regions. Furthermore, it is observed that regulatory elements found in non-coding regions are highly conserved across different species. However, the analysis of these regulatory elements is not as straightforward as it may first seem. The development of a centralized resource that allows for the quick and easy retrieval of non-coding sequences from multiple species and is capable of handling multi-gene queries is critical for the analysis of non-coding sequences. Here we describe the development of a web-based non-coding sequence retrieval system. Results This paper presents a Non-Coding Sequences Retrieval System (NCSRS). The NCSRS is a web-based bioinformatics tool that performs fast and convenient retrieval of non-coding and coding sequences from multiple species related to a specific gene or set of genes. This tool has compiled resources from multiple sources into one easy to use and convenient web based interface. With no software installation necessary, the user needs only internet access to use this tool. Conclusion The unique features of this tool will be very helpful for those studying gene regulatory elements that exist in non-coding regions. The web based application can be accessed on the internet at: http://cell.rutgers.edu/ncsrs/.
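
    Retrieving the non-coding flank of a gene is, at its core, a coordinate-arithmetic operation on the genome sequence. The sketch below is a toy example with an invented mini-genome and gene table, not the NCSRS code; it returns the upstream region of a gene while respecting strand orientation.

```python
# Toy genome and gene annotations (invented for illustration).
genome = {"chr1": "ACGT" * 250}                     # 1 kb dummy chromosome
genes = {"geneA": ("chr1", 400, 600, "+"),          # (chrom, start, end, strand)
         "geneB": ("chr1", 700, 900, "-")}

def upstream_noncoding(gene, flank=100):
    """Return the sequence immediately upstream of a gene's transcription start,
    taking strand into account (reverse-complement for minus-strand genes)."""
    chrom, start, end, strand = genes[gene]
    seq = genome[chrom]
    if strand == "+":
        region = seq[max(0, start - flank):start]
    else:
        region = seq[end:end + flank]
        comp = str.maketrans("ACGT", "TGCA")
        region = region.translate(comp)[::-1]
    return region

print(upstream_noncoding("geneA")[:20])
print(upstream_noncoding("geneB")[:20])
```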

  15. High performance 3D neutron transport on peta scale and hybrid architectures within APOLLO3 code

    International Nuclear Information System (INIS)

    Jamelot, E.; Dubois, J.; Lautard, J-J.; Calvin, C.; Baudron, A-M.

    2011-01-01

    APOLLO3 code is a common project of CEA, AREVA and EDF for the development of a new generation system for core physics analysis. We present here the parallelization of two deterministic transport solvers of APOLLO3: MINOS, a simplified 3D transport solver on structured Cartesian and hexagonal grids, and MINARET, a transport solver based on triangular meshes in 2D and prismatic ones in 3D. We used two different techniques to accelerate MINOS: a domain decomposition method, combined with an accelerated algorithm using GPU. The domain decomposition is based on the Schwarz iterative algorithm, with Robin boundary conditions to exchange information. The Robin parameters influence the convergence and we detail how we optimized the choice of these parameters. MINARET parallelization is based on angular directions calculation using explicit message passing. Fine grain parallelization is also available for each angular direction using shared memory multithreaded acceleration. Many performance results are presented on massively parallel architectures using more than 10^3 cores and on hybrid architectures using some tens of GPUs. This work contributes to the HPC development in reactor physics at the CEA Nuclear Energy Division. (author)

  16. Promiscuity of enhancer, coding and non-coding transcription functions in ultraconserved elements

    Directory of Open Access Journals (Sweden)

    Sanges Remo

    2010-03-01

    Full Text Available Abstract Background Ultraconserved elements (UCEs are highly constrained elements of mammalian genomes, whose functional role has not been completely elucidated yet. Previous studies have shown that some of them act as enhancers in mouse, while some others are expressed in both normal and cancer-derived human tissues. Only one UCE element so far was shown to present these two functions concomitantly, as had been observed in other isolated instances of single, non-ultraconserved enhancer elements. Results We used a custom microarray to assess the levels of UCE transcription during mouse development and integrated these data with published microarray and next-generation sequencing datasets as well as with newly produced PCR validation experiments. We show that a large fraction of non-exonic UCEs is transcribed across all developmental stages examined from only one DNA strand. Although the nature of these transcripts remains a mystery, our meta-analysis of RNA-Seq datasets indicates that they are unlikely to be short RNAs and that some of them might encode nuclear transcripts. In the majority of cases this function overlaps with the already established enhancer function of these elements during mouse development. Utilizing several next-generation sequencing datasets, we were further able to show that the level of expression observed in non-exonic UCEs is significantly higher than in random regions of the genome and that this is also seen in other regions which act as enhancers. Conclusion Our data shows that the concurrent presence of enhancer and transcript function in non-exonic UCE elements is more widespread than previously shown. Moreover through our own experiments as well as the use of next-generation sequencing datasets, we were able to show that the RNAs encoded by non-exonic UCEs are likely to be long RNAs transcribed from only one DNA strand.

  17. Convenience of Statistical Approach in Studies of Architectural Ornament and Other Decorative Elements Specific Application

    Science.gov (United States)

    Priemetz, O.; Samoilov, K.; Mukasheva, M.

    2017-11-01

    An ornament is an actual phenomenon of the modern theory of architecture and a common element in the practice of design and construction. It has been an important aspect of shaping for millennia. The description of the methods of its application occupies a large place in studies on the theory and practice of architecture. However, the problem of the saturation of compositions with ornamentation and the specificity of its themes and forms have not yet been sufficiently studied. This aspect requires the accumulation of additional knowledge. The application of quantitative methods to the types of plastic solutions and the thematic diversity of facade compositions of buildings constructed in different periods creates another tool for an objective analysis of ornament development. The paper demonstrates the application of this approach to studying the features of architectural development in Kazakhstan from the end of the XIX century to the XXI century.

  18. The effect of traditional architecture elements on architectural and planning forming to develop and raise the efficiency of using traditional energy (study case Crater/Aden, Yemen)

    International Nuclear Information System (INIS)

    Ghanem, Wadee Ahmed

    2006-01-01

    This paper discusses the role of architecture in the Crater district, the historic city centre of Aden, Republic of Yemen, whose traditional architecture is a unique example with many elements that make the buildings of this city effective in conserving traditional energy sources. This architecture is characterised by courtyards and high ceilings for suitable air circulation, and the main building materials used are local and environmental, such as stone, wood and lime stone (pumice). The research aims at studying and analysing the planning and architectural characteristics of this city by studying some examples of its buildings, in order to recognise the role of traditional building in saving traditional energy through the building materials, ventilation system, orientation and openings, and to use these elements to raise the efficiency of using traditional energy resources. The research is summarised in several results, such as: 1. Urban planning side: a. Elements of urban planning represented in the mass and openings and their environmental role. b. Method of forming the urban plan. c. Sequence in the arrangement of the urban planning elements. 2. Architectural side: a. Ratio between solid and void. b. Opening shapes. c. Internal courtyards. d. Unique architectural elements (Mashrabiyas (oriels), sky lines, opening coverings, etc.). e. Building materials used. f. Building construction methods. g. Kinds of walls. (Author)

  19. Impacts of traditional architecture on the use of wood as an element of facade covering in Serbian contemporary architecture

    Directory of Open Access Journals (Sweden)

    Ivanović-Šekularac Jelena

    2011-01-01

    Full Text Available The world trend of re-use of wood and wood products as materials for construction and covering of architectural structures is present not only because of the need to meet the aesthetic, artistic and formal requirements or to seek inspiration in the return to tradition and nature, but also because of its ecological, economic and energetic feasibility. Furthermore, the use of wood fits into contemporary trends of sustainable development and the application of modern technical and technological solutions in the production of materials, in order to maintain a connection to nature, environment and tradition. In this study the author focuses on wood and wood products as an element of facade covering on buildings in our country, in order to extend knowledge about the possibilities and limitations of their use and create a base for their greater and correct application. The subject of this research is to examine the application of wood and wood products as an element covering the exterior in combination with other materials applied in our traditional and contemporary homes, with the emphasis on the functional and representational possibilities of wood. In this study all the factors that affect the application of wood and wood products have been analyzed and conclusions have been drawn about the manner of their implementation and the types of wood and wood-product protection. The development of modern technological solutions in wood processing led to the production of composite materials based on wood that are highly resistant, stable and much longer lasting than wood. Those materials have maintained in an aesthetic sense all the characteristics of wood that make it unique and inimitable. This is why modern facade coatings based on wood should be applied as facade coverings in the exterior of modern architectural buildings in Serbia, and the use of solid wood reduced to a minimum.

  20. Apolux : an innovative computer code for daylight design and analysis in architecture and urbanism

    Energy Technology Data Exchange (ETDEWEB)

    Claro, A.; Pereira, F.O.R.; Ledo, R.Z. [Santa Catarina Federal Univ., Florianopolis, SC (Brazil)

    2005-07-01

    The main capabilities of a new computer program for calculating and analyzing daylighting in architectural space were discussed. Apolux 1.0 was designed to use three-dimensional files generated in graphic editors in the data exchange file (DXF) format and was developed to integrate an architect's design characteristics. An example of its use in a design development context was presented. The program offers fast and flexible manipulation of video card models in different visualization conditions. The algorithm for the physics of light is based on the radiosity method, representing the surfaces through finite elements divided into small triangular units of area which are fully confronted with each other. The form factors of each triangle are determined in relation to all others in the primary calculation. Visible directions of the sky are also included according to the modular units of a subdivided globe. Following these primary calculations, the different and successive daylighting solutions can be determined under different sky conditions. The program can also change the properties of the materials to quickly recalculate the solutions. The program has been applied to an office building in Florianopolis, Brazil. The four stages of design include initial discussion with the architects about the conceptual possibilities; development of a comparative study based on 2 architectural designs with different conceptual elements regarding daylighting exploitation, in order to compare the internal daylighting levels and distribution of the 2 options exposed to the same external conditions; study of the solar shading devices for specific facades; and simulations to test the performance of different designs. The program has proven to be very flexible with reliable results. It has the possibility of incorporating situations of the real sky through the input of the spherical model of real sky luminance values. 3 refs., 14 figs.
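
    The radiosity formulation mentioned here amounts to solving a linear balance B = E + rho * F * B over the surface patches. Below is a compact, generic sketch of the iterative solution with an invented three-patch form-factor matrix; nothing here is taken from Apolux itself.

```python
import numpy as np

# Radiosity balance: B = E + rho * F @ B, solved by simple fixed-point iteration.
# Three hypothetical patches: one emitter (a window) and two reflecting walls.
F = np.array([[0.0, 0.4, 0.4],     # form factors between patches (invented)
              [0.4, 0.0, 0.3],
              [0.4, 0.3, 0.0]])
rho = np.array([0.1, 0.6, 0.6])    # patch reflectances
E = np.array([100.0, 0.0, 0.0])    # emitted luminous exitance of each patch

B = E.copy()
for _ in range(50):                # iterate until the light exchange converges
    B = E + rho * (F @ B)
print("patch radiosities:", np.round(B, 2))
```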

  1. Photo-Modeling and Cloud Computing. Applications in the Survey of Late Gothic Architectural Elements

    Science.gov (United States)

    Casu, P.; Pisu, C.

    2013-02-01

    This work proposes the application of the latest methods of photo-modeling to the study of Gothic architecture in Sardinia. The aim is to consider the versatility and ease of use of such documentation tools in order to study architecture and its ornamental details. The paper illustrates a procedure of integrated survey and restitution, with the purpose of obtaining an accurate 3D model of some Gothic portals. We combined the contact survey and the photographic survey oriented to photo-modelling. The software used is 123D Catch by Autodesk, an Image-Based Modelling (IBM) system available for free. It is a web-based application that requires a few simple steps to produce a mesh from a set of unoriented photos. We tested the application on four portals, working at different scales of detail: first the whole portal and then the different architectural elements that compose it. We were able to model all the elements and to quickly extrapolate simple sections, in order to make a comparison between the moldings, highlighting similarities and differences. Working on different sites at different scales of detail allowed us to test the procedure under different conditions of exposure, sunshine, accessibility, surface degradation and type of material, and with different equipment and operators, showing whether the final result could be affected by these factors. We tested a procedure, articulated in a few repeatable steps, that can be applied, with the right corrections and adaptations, to similar cases and/or larger or smaller elements.

  2. Archaeometric characterization and provenance determination of sculptures and architectural elements from Gerasa, Jordan

    Science.gov (United States)

    Al-Bashaireh, Khaled

    2018-02-01

    This study aims at the identification of the provenance of white marble sculptures and architectural elements uncovered from the archaeological site of Gerasa and neighboring areas, north Jordan. Most of the marbles are probably of the Roman or Byzantine periods. Optical microscopy, X-ray diffraction, and mass spectrometry were used to investigate petrographic, mineralogical and isotopic characteristics of the samples, respectively. Analytical results were compared with the main reference databases of known Mediterranean marble quarries exploited in antiquity. The collected data show that the most likely main sources of the sculptures were the Greek marble quarries of Paros-2 (Lakkoi), Penteli (Mount Pentelikon, Attica), and Thasos-3 (Thasos Island, Cape Vathy, Aliki); the Asia Minor marble quarries of Proconessus-1 (Marmara) and Aphrodisias (Aphrodisias); and the Italian quarry of Carrara (Apuan Alps). Similarly, the Asia Minor quarries of the fine-grained Docimium (Afyon) and the coarse-grained Proconessus-1 (Marmara) and Thasos-3 are the most likely sources of the architectural elements. The results agree with published data on the wide use of these marbles for sculpture and architectural elements.

  3. On the influence of microscopic architecture elements to the global viscoelastic properties of soft biological tissue

    Science.gov (United States)

    Posnansky, Oleg P.

    2014-12-01

    In this work we introduce a 2D minimal model of random scale-invariant network structures embedded in a matrix to study the influence of microscopic architecture elements on the viscoelastic behavior of soft biological tissue. Viscoelastic properties at a microscale are modeled by a cohort of basic elements with varying complexity integrated into multi-hierarchic lattice obeying self-similar geometry. It is found that this hierarchy of structure elements yields a global nonlinear frequency dependent complex-valued shear modulus. In the dynamic range of external frequency load, the modeled shear modulus proved sensitive to the network concentration and viscoelastic characteristics of basic elements. The proposed model provides a theoretical framework for the interpretation of dynamic viscoelastic parameters in the context of microstructural variations under different conditions.
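
    As a worked illustration of how a frequency-dependent complex shear modulus arises from simple micro-elements, the sketch below evaluates the standard Maxwell element, G*(w) = G * i*w*tau / (1 + i*w*tau), over a frequency sweep. This is a textbook building block with invented parameters, not the paper's hierarchical network model.

```python
import numpy as np

def maxwell_complex_modulus(omega, G=1.0e3, tau=0.1):
    """Complex shear modulus of a single Maxwell element (spring G in series
    with a dashpot of relaxation time tau): G*(w) = G * i*w*tau / (1 + i*w*tau)."""
    iwt = 1j * omega * tau
    return G * iwt / (1.0 + iwt)

omega = np.logspace(-1, 3, 5)               # external load frequencies [rad/s]
G_star = maxwell_complex_modulus(omega)
for w, g in zip(omega, G_star):
    print(f"w = {w:8.2f}  storage G' = {g.real:8.2f}  loss G'' = {g.imag:8.2f}")
```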

  4. Architecture

    OpenAIRE

    Clear, Nic

    2014-01-01

    When discussing science fiction’s relationship with architecture, the usual practice is to look at the architecture “in” science fiction—in particular, the architecture in SF films (see Kuhn 75-143) since the spaces of literary SF present obvious difficulties as they have to be imagined. In this essay, that relationship will be reversed: I will instead discuss science fiction “in” architecture, mapping out a number of architectural movements and projects that can be viewed explicitly as scien...

  5. Profiling of conserved non-coding elements upstream of SHOX and functional characterisation of the SHOX cis-regulatory landscape.

    Science.gov (United States)

    Verdin, Hannah; Fernández-Miñán, Ana; Benito-Sanz, Sara; Janssens, Sandra; Callewaert, Bert; De Waele, Kathleen; De Schepper, Jean; François, Inge; Menten, Björn; Heath, Karen E; Gómez-Skarmeta, José Luis; De Baere, Elfride

    2015-12-03

    Genetic defects such as copy number variations (CNVs) in non-coding regions containing conserved non-coding elements (CNEs) outside the transcription unit of their target gene, can underlie genetic disease. An example of this is the short stature homeobox (SHOX) gene, regulated by seven CNEs located downstream and upstream of SHOX, with proven enhancer capacity in chicken limbs. CNVs of the downstream CNEs have been reported in many idiopathic short stature (ISS) cases, however, only recently have a few CNVs of the upstream enhancers been identified. Here, we set out to provide insight into: (i) the cis-regulatory role of these upstream CNEs in human cells, (ii) the prevalence of upstream CNVs in ISS, and (iii) the chromatin architecture of the SHOX cis-regulatory landscape in chicken and human cells. Firstly, luciferase assays in human U2OS cells, and 4C-seq both in chicken limb buds and human U2OS cells, demonstrated cis-regulatory enhancer capacities of the upstream CNEs. Secondly, CNVs of these upstream CNEs were found in three of 501 ISS patients. Finally, our 4C-seq interaction map of the SHOX region reveals a cis-regulatory domain spanning more than 1 Mb and harbouring putative new cis-regulatory elements.

  6. A framework for developing finite element codes for multi-disciplinary applications.

    OpenAIRE

    Dadvand, Pooyan

    2007-01-01

    The world of computing simulation has experienced great progress in recent years and faces more exigent multidisciplinary challenges to satisfy new upcoming demands. The increasing importance of solving multi-disciplinary problems makes developers pay more attention to these problems and deal with the difficulties involved in developing software in this area. Conventional finite element codes have several difficulties in dealing with multi-disciplinary problems. Many of these codes are d...

  7. Development of three-dimensional transport code by the double finite element method

    International Nuclear Information System (INIS)

    Fujimura, Toichiro

    1985-01-01

    Development of a three-dimensional neutron transport code by the double finite element method is described. Both the Galerkin and variational methods are adopted to solve the problem, and their characteristics are then compared. Computational results of the collocation method, developed as a technique for the variational one, are illustrated in comparison with those of an Ssub(n) code. (author)

  8. Engine dynamic analysis with general nonlinear finite element codes. II - Bearing element implementation, overall numerical characteristics and benchmarking

    Science.gov (United States)

    Padovan, J.; Adams, M.; Lam, P.; Fertis, D.; Zeid, I.

    1982-01-01

    Second-year efforts within a three-year study to develop and extend finite element (FE) methodology to efficiently handle the transient/steady state response of rotor-bearing-stator structure associated with gas turbine engines are outlined. The two main areas aim at (1) implanting the squeeze film damper element into a general purpose FE code for testing and evaluation; and (2) determining the numerical characteristics of the FE-generated rotor-bearing-stator simulation scheme. The governing FE field equations are set out and the solution methodology is presented. The choice of ADINA as the general-purpose FE code is explained, and the numerical operational characteristics of the direct integration approach of FE-generated rotor-bearing-stator simulations is determined, including benchmarking, comparison of explicit vs. implicit methodologies of direct integration, and demonstration problems.
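
    The explicit branch of the direct-integration comparison mentioned here can be illustrated with the classic central-difference update for M*u'' + C*u' + K*u = f(t). The single-degree-of-freedom sketch below is a generic textbook scheme with invented parameters, not the ADINA implementation.

```python
# Central-difference (explicit) direct integration of M*u'' + C*u' + K*u = f(t)
# for a single degree of freedom; all values are illustrative only.
M, C, K = 1.0, 0.2, 100.0
dt, n_steps = 0.005, 400
f = lambda t: 1.0 if t < 0.05 else 0.0      # short force pulse

u_prev, u = 0.0, 0.0                        # u_{n-1}, u_n (system starts at rest)
a_eff = M / dt**2 + C / (2 * dt)            # effective coefficient of u_{n+1}
history = []
for n in range(n_steps):
    t = n * dt
    rhs = f(t) - (K - 2 * M / dt**2) * u - (M / dt**2 - C / (2 * dt)) * u_prev
    u_next = rhs / a_eff
    u_prev, u = u, u_next
    history.append(u)

print("peak displacement:", max(history))
```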

  9. Infill architecture: Design approaches for in-between buildings and 'bond' as integrative element

    Directory of Open Access Journals (Sweden)

    Alfirević Đorđe

    2015-01-01

    Full Text Available The aim of the paper is to draw attention to the view that the two key elements in achieving good quality of architectural infill into the immediate, existing surroundings are the selection of an optimal creative method of infill architecture and the adequate application of 'the bond' as an integrative element. The success and quality of architectural infill mainly depend on the assessment of various circumstances, but also on the professionalism, creativity, sensibility and, finally, innovativeness of the architect. In order for the infill procedure to be carried out adequately, it is necessary to assess the quality of the current surroundings into which the object will be integrated, and then to choose the creative approach that will allow the object to establish an optimal dialogue with its surroundings. On a wider scale, both theory and practice differentiate three main creative approaches to infill objects: (a) the mimetic approach (mimesis), (b) the associative approach and (c) the contrasting approach. Which of the stated approaches will be chosen depends primarily on whether the existing physical structure into which the object is being infilled is 'distinct', 'specific' or 'indistinct', but it also depends on the inclination of the designer. 'The bond' is a term which in architecture denotes an element or zone of one object, but in some instances it can refer to the whole object which has been articulated in a specific way, with the aim of resolving the visual conflict that often arises when there is a clash between the existing objects and the newly designed or reconstructed object. This paper provides an in-depth analysis of different types of bonds, such as 'direction as bond', 'cornice as bond', 'structure as bond', 'texture as bond' and 'material as bond', which indicate the complexity and multiple layers of the design process of object interpolation.

  10. Validation of a tetrahedral spectral element code for solving the Navier Stokes equation

    International Nuclear Information System (INIS)

    Niewiadomski, C.; Paraschivoiu, M.

    2004-01-01

    The tetrahedral spectral element method is considered for solving the incompressible Navier-Stokes equations because it is capable of capturing complex geometries and obtaining highly accurate solutions. This method allows accuracy improvements both by decreasing the spatial discretization and by increasing the expansion order. The method is presented herein as a modification of a standard finite element code. Some recent improvements to the baseline spectral element method for the tetrahedron described in References 3 and 2 are presented. These improvements include the continuity enforcement procedure, which avoids the need to change the global assembly operation, and the removal of the reference coordinate system from the elemental evaluations, which greatly simplifies the method. A study is performed on the Stokes and Navier-Stokes equations to validate the method and the resulting code. (author)

  11. Architecture for the Elderly and Frail People, Well-Being Elements Realizations and Outcomes

    DEFF Research Database (Denmark)

    Knudstrup, Mary-Ann

    2011-01-01

    The relationship between architecture, housing and the well-being of elderly and frail people is a topic of growing interest to consultants and political decision makers working on welfare solutions for elderly citizens. The objective of the research presented here is to highlight which well-being elements in nursing home environments contribute to enhancing the well-being of the elderly, and how attention to these elements is ensured during the decision-making process related to the design and establishment of nursing homes. With basis in four Danish representative case studies, various case...... knowledge and scientific evidence and use them as design drivers through the design process, to secure well-being elements in the coming residential care facilities for the elderly.

  12. Tri-Lab Co-Design Milestone: In-Depth Performance Portability Analysis of Improved Integrated Codes on Advanced Architecture.

    Energy Technology Data Exchange (ETDEWEB)

    Hoekstra, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Simon David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Richards, David [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bergen, Ben [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-01

    This milestone is a tri-lab deliverable supporting ongoing Co-Design efforts impacting applications in the Integrated Codes (IC) program element and the Advanced Technology Development and Mitigation (ATDM) program element. In FY14, the tri-labs looked at porting proxy applications to technologies of interest for ATS procurements. In FY15, a milestone was completed evaluating proxy applications in multiple programming models, and in FY16 a milestone was completed focusing on the migration of lessons learned back into production code development. This year, the co-design milestone focuses on feeding the knowledge gained and/or code revisions back into production applications.

  13. Finite Macro-Element Mesh Deformation in a Structured Multi-Block Navier-Stokes Code

    Science.gov (United States)

    Bartels, Robert E.

    2005-01-01

    A mesh deformation scheme consisting of two steps is developed for a structured multi-block Navier-Stokes code. The first step is a finite element solution of either user-defined or automatically generated macro-elements. Macro-elements are hexagonal finite elements created from a subset of points from the full mesh. When assembled, the finite element system spans the complete flow domain. Macro-element moduli vary according to the distance to the nearest surface, resulting in extremely stiff elements near a moving surface and very pliable elements away from boundaries. Solution of the finite element system for the imposed boundary deflections generally produces smoothly varying nodal deflections. The manner in which the distance to the nearest surface is defined has been found to critically influence the quality of the element deformation. The second step is a transfinite interpolation which distributes the macro-element nodal deflections to the remaining fluid mesh points. The scheme is demonstrated for several two-dimensional applications.
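
    The second step above lends itself to a compact illustration. The sketch below shows one way a two-dimensional transfinite (Coons) interpolation can distribute edge deflections into the interior of a structured block; the array layout and edge naming are assumptions made for this example, not the data structures of the actual code.

```python
import numpy as np

def transfinite_interpolation(d_edges, ni, nj):
    """Distribute edge deflections into the interior of a structured (ni x nj)
    block using 2-D transfinite (Coons) interpolation.

    d_edges: dict with arrays 'i0', 'i1' (shape (nj, 2)) and 'j0', 'j1'
             (shape (ni, 2)) holding x/y deflections on the four block edges.
             Names and layout are illustrative only.
    """
    s = np.linspace(0.0, 1.0, ni)[:, None, None]   # parametric coordinate in i
    t = np.linspace(0.0, 1.0, nj)[None, :, None]   # parametric coordinate in j

    di0 = d_edges['i0'][None, :, :]   # edge i = 0
    di1 = d_edges['i1'][None, :, :]   # edge i = ni-1
    dj0 = d_edges['j0'][:, None, :]   # edge j = 0
    dj1 = d_edges['j1'][:, None, :]   # edge j = nj-1

    # Boolean sum: linear blends in each direction minus the corner term.
    d = (1 - s) * di0 + s * di1 + (1 - t) * dj0 + t * dj1 \
        - ((1 - s) * (1 - t) * di0[:, :1, :] + (1 - s) * t * di0[:, -1:, :]
           + s * (1 - t) * di1[:, :1, :] + s * t * di1[:, -1:, :])
    return d

# Example with invented edge deflections on a 5 x 4 block.
ramp = np.linspace(0.0, 0.1, 5)[:, None] * np.ones((1, 2))
edges = {'i0': np.zeros((4, 2)), 'i1': 0.1 * np.ones((4, 2)),
         'j0': ramp, 'j1': ramp}
print(transfinite_interpolation(edges, ni=5, nj=4)[2, 2])
```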

  14. Axisym finite element code: modifications for pellet-cladding mechanical interaction analysis

    International Nuclear Information System (INIS)

    Engelman, G.P.

    1978-10-01

    Local strain concentrations in nuclear fuel rods are known to be potential sites for failure initiation. Assessment of such strain concentrations requires a two-dimensional analysis of stress and strain in both the fuel and the cladding during pellet-cladding mechanical interaction. To provide such a capability in the FRAP (Fuel Rod Analysis Program) codes, the AXISYM code (a small finite element program developed at the Idaho National Engineering Laboratory) was modified to perform a detailed fuel rod deformation analysis. This report describes the modifications which were made to the AXISYM code to adapt it for fuel rod analysis and presents comparisons between the two-dimensional AXISYM code and the FRACAS-II code. FRACAS-II is the one-dimensional (generalized plane strain) fuel rod mechanical deformation subcode used in the FRAP codes. Predictions of these two codes should be comparable away from the fuel pellet free ends if the state of deformation at the pellet midplane is near that of generalized plane strain. The excellent agreement obtained in these comparisons confirms both the correctness of the AXISYM code modifications and the validity of the assumption of generalized plane strain upon which the FRACAS-II subcode is based.

  15. SQA of finite element method (FEM) codes used for analyses of pit storage/transport packages

    Energy Technology Data Exchange (ETDEWEB)

    Russel, E. [Lawrence Livermore National Lab., CA (United States)

    1997-11-01

    This report contains viewgraphs on the software quality assurance of finite element method codes used for analyses of pit storage and transport packages. The methodology utilizes ISO 9000-3 (guidelines for the application of ISO 9001 to the development, supply, and maintenance of software) to establish well-defined software engineering processes that consistently maintain high-quality management approaches.

  16. FEAST: a two-dimensional non-linear finite element code for calculating stresses

    International Nuclear Information System (INIS)

    Tayal, M.

    1986-06-01

    The computer code FEAST calculates stresses, strains, and displacements. The code is two-dimensional: either plane or axisymmetric calculations can be done. The code models elastic, plastic, creep, and thermal strains and stresses. Cracking can also be simulated. The finite element method is used to solve equations describing the following fundamental laws of mechanics: equilibrium; compatibility; constitutive relations; yield criterion; and flow rule. FEAST combines several unique features that permit large time-steps in even severely non-linear situations. The features include a special formulation permitting many finite elements to simultaneously cross the boundary from elastic to plastic behaviour; accommodation of large drops in yield strength due to changes in local temperature; and a three-step predictor-corrector method for plastic analyses. These features reduce computing costs. Comparisons against twenty analytical solutions and against experimental measurements show that predictions of FEAST are generally accurate to ± 5%.

  17. A sliding point contact model for the finite element structures code EURDYN

    International Nuclear Information System (INIS)

    Smith, B.L.

    1986-01-01

    A method is developed by which sliding point contact between two moving deformable structures may be incorporated within a lumped-mass finite element formulation based on displacements. The method relies on a simple mechanical interpretation of the contact constraint in terms of equivalent nodal forces and avoids the use of nodal connectivity via a master-slave arrangement or a pseudo contact element. The methodology has been implemented in the 2D axisymmetric version of the EURDYN finite element program coupled to the hydro code SEURBNUK. Sample calculations are presented illustrating the use of the model in various contact situations. Effects due to separation and impact of structures are also included. (author)

  18. Free material stiffness design of laminated composite structures using commercial finite element analysis codes

    DEFF Research Database (Denmark)

    Henrichsen, Søren Randrup; Lindgaard, Esben; Lund, Erik

    2015-01-01

    In this work optimum stiffness design of laminated composite structures is performed using the commercially available programs ANSYS and MATLAB. Within these programs a Free Material Optimization algorithm is implemented based on an optimality condition and a heuristic update scheme. The heuristic...... update scheme is needed because commercially available finite element analysis software is used. When using a commercial finite element analysis code it is not straightforward to implement a computationally efficient gradient-based optimization algorithm. Examples considered in this work are a clamped......, where full access to the finite element analysis core is granted. This comparison displays the possibility of using commercially available programs for stiffness design of laminated composite structures.

  19. Adaptive Code Division Multiple Access Protocol for Wireless Network-on-Chip Architectures

    Science.gov (United States)

    Vijayakumaran, Vineeth

    Massive levels of integration following Moore's Law ushered in a paradigm shift in the way on-chip interconnections were designed. With an ever higher number of cores on the same die, traditional bus-based interconnections are no longer a scalable communication infrastructure. On-chip networks were proposed to enable a scalable plug-and-play mechanism for interconnecting hundreds of cores on the same chip. Wired interconnects between the cores in a traditional Network-on-Chip (NoC) system become a bottleneck as the number of cores increases, raising the latency and energy needed to transmit signals over them. Hence, many alternative emerging interconnect technologies have been proposed, namely 3D, photonic and multi-band RF interconnects. Although they provide better connectivity, higher speed and higher bandwidth compared to wired interconnects, they also face challenges with heat dissipation and manufacturing difficulties. On-chip wireless interconnects are another proposed alternative that does not need a physical interconnection layout, as data travels over the wireless medium. They are integrated into a hybrid NoC architecture consisting of both wired and wireless links, which provides higher bandwidth, lower latency, lower area overhead and reduced energy dissipation in communication. However, as the bandwidth of the wireless channels is limited, an efficient media access control (MAC) scheme is required to enhance the utilization of the available bandwidth. This thesis proposes using a multiple access mechanism such as Code Division Multiple Access (CDMA) to enable multiple transmitter-receiver pairs to send data over the wireless channel simultaneously. It will be shown that such a hybrid wireless NoC with an efficient CDMA based MAC protocol can significantly increase the performance of the system while lowering the energy dissipation in data transfer. In this work it is shown that the wireless NoC with the proposed CDMA based MAC protocol
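
    To make the CDMA idea concrete, the sketch below spreads the bit streams of two transmitter-receiver pairs with orthogonal Walsh codes, superimposes them on a shared channel and recovers each stream by correlation. It is a minimal illustration of the multiple-access principle the thesis builds on, not the proposed MAC protocol or its hardware implementation.

```python
import numpy as np

def walsh_codes(n):
    """Return an n x n matrix of mutually orthogonal Walsh spreading codes
    (n must be a power of two); rows take values +1/-1."""
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def spread(bits, code):
    """Spread a +/-1 bit stream with one Walsh code (one chip sequence per bit)."""
    return np.concatenate([b * code for b in bits])

def despread(channel, code):
    """Recover a transmitter's bits from the superimposed channel signal by
    correlating against its spreading code."""
    chips = channel.reshape(-1, code.size)
    return np.sign(chips @ code)

# Illustrative use: two transmitter-receiver pairs share the wireless channel.
codes = walsh_codes(8)
tx0, tx1 = np.array([1, -1, 1]), np.array([-1, -1, 1])
channel = spread(tx0, codes[1]) + spread(tx1, codes[2])   # simultaneous transmission
assert np.array_equal(despread(channel, codes[1]), tx0)
assert np.array_equal(despread(channel, codes[2]), tx1)
```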

  20. Modeling in architectural-planning solutions of agrarian technoparks as elements of the infrastructure

    Science.gov (United States)

    Abdrassilova, Gulnara S.

    2017-09-01

    In the context of the development of agriculture as a driver of the economy of Kazakhstan, it is imperative to study new types of agrarian constructions (agroparks, agrotourism complexes, "vertical" farms, conservatories, greenhouses) that can be combined into complexes - agrarian technoparks. The creation of agrarian technoparks as elements of the infrastructure of the agglomeration should ensure a breakthrough in the production, storage and recycling of agrarian goods. Modeling of architectural-planning solutions for agrarian technoparks supports the development of the theory and practice of designing such objects on the basis of innovative approaches.

  1. Looking back on 10 years of the ATLAS Metadata Interface. Reflections on architecture, code design and development methods.

    CERN Document Server

    Fulachier, J; The ATLAS collaboration; Albrand, S; Lambert, F

    2014-01-01

    The “ATLAS Metadata Interface” framework (AMI) has been developed in the context of ATLAS, one of the largest scientific collaborations. AMI can be considered to be a mature application, since its basic architecture has been maintained for over 10 years. In this paper we will briefly describe the architecture and the main uses of the framework within the experiment (TagCollector for release management and Dataset Discovery). These two applications, which share almost 2000 registered users, are superficially quite different, however much of the code is shared and they have been developed and maintained over a decade almost completely by the same team of 3 people. We will discuss how the architectural principles established at the beginning of the project have allowed us to continue both to integrate the new technologies and to respond to the new metadata use cases which inevitably appear over such a time period.

  2. Looking back on 10 years of the ATLAS Metadata Interface. Reflections on architecture, code design and development methods

    Science.gov (United States)

    Fulachier, J.; Aidel, O.; Albrand, S.; Lambert, F.; Atlas Collaboration

    2014-06-01

    The "ATLAS Metadata Interface" framework (AMI) has been developed in the context of ATLAS, one of the largest scientific collaborations. AMI can be considered to be a mature application, since its basic architecture has been maintained for over 10 years. In this paper we describe briefly the architecture and the main uses of the framework within the experiment (TagCollector for release management and Dataset Discovery). These two applications, which share almost 2000 registered users, are superficially quite different, however much of the code is shared and they have been developed and maintained over a decade almost completely by the same team of 3 people. We discuss how the architectural principles established at the beginning of the project have allowed us to continue both to integrate the new technologies and to respond to the new metadata use cases which inevitably appear over such a time period.

  3. Looking back on 10 years of the ATLAS Metadata Interface. Reflections on architecture, code design and development methods

    International Nuclear Information System (INIS)

    Fulachier, J; Albrand, S; Lambert, F; Aidel, O

    2014-01-01

    The 'ATLAS Metadata Interface' framework (AMI) has been developed in the context of ATLAS, one of the largest scientific collaborations. AMI can be considered to be a mature application, since its basic architecture has been maintained for over 10 years. In this paper we describe briefly the architecture and the main uses of the framework within the experiment (TagCollector for release management and Dataset Discovery). These two applications, which share almost 2000 registered users, are superficially quite different, however much of the code is shared and they have been developed and maintained over a decade almost completely by the same team of 3 people. We discuss how the architectural principles established at the beginning of the project have allowed us to continue both to integrate the new technologies and to respond to the new metadata use cases which inevitably appear over such a time period.

  4. Engine dynamic analysis with general nonlinear finite-element codes. I Overall approach and development of bearing damper element

    Science.gov (United States)

    Adams, M. L.; Padovan, J.; Fertis, D. G.

    1981-01-01

    NASA-sponsored research on engine dynamic simulation using general finite element nonlinear time-transient computer codes available on the open market is reviewed. The approach taken was to develop software packages to model engine components which are not typically found on dynamical structures and are therefore not already available in such computer codes. The software package developed for squeeze-film bearing dampers is outlined, and the results of a parametric study of damper pressure for a variety of specified circular orbits are presented for both long-bearing and short-bearing solutions. The data from a four-degree-of-freedom rotor-damper-stator model under conditions ranging from small rotor unbalance to large rotor unbalance are also given.

  5. Finite Element Analysis of Film Stack Architecture for Complementary Metal-Oxide-Semiconductor Image Sensors.

    Science.gov (United States)

    Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang

    2017-05-02

    Image sensors are the core components of computer, communication, and consumer electronic products. Complementary metal oxide semiconductor (CMOS) image sensors have become the mainstay of image-sensing developments, but are prone to leakage current. In this study, we simulate the CMOS image sensor (CIS) film stacking process by finite element analysis. To elucidate the relationship between the leakage current and stack architecture, we compare the simulated and measured leakage currents in the elements. Based on the analysis results, we further improve the performance by optimizing the architecture of the film stacks or changing the thin-film material. The material parameters are then corrected to improve the accuracy of the simulation results. The simulated and experimental results confirm a positive correlation between measured leakage current and stress. This trend is attributed to the structural defects induced by high stress, which generate leakage. Using this relationship, we can change the structure of the thin-film stack to reduce the leakage current and thereby improve the component life and reliability of the CIS components.

  6. FLAME: A finite element computer code for contaminant transport in variably saturated media

    International Nuclear Information System (INIS)

    Baca, R.G.; Magnuson, S.O.

    1992-06-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLAME computer code, is designed to simulate subsurface contaminant transport in variably saturated media. The code can be applied to model two-dimensional contaminant transport in an arid site vadose zone or in an unconfined aquifer. In addition, the code has the capability to describe transport processes in a porous medium with discrete fractures. This report presents the following: a description of the conceptual framework and mathematical theory, derivations of the finite element techniques and algorithms, computational examples that illustrate the capability of the code, and input instructions for the general use of the code. The development of the FLAME computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A

  7. FLAME: A finite element computer code for contaminant transport in variably saturated media

    Energy Technology Data Exchange (ETDEWEB)

    Baca, R.G.; Magnuson, S.O.

    1992-06-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLAME computer code, is designed to simulate subsurface contaminant transport in variably saturated media. The code can be applied to model two-dimensional contaminant transport in an arid site vadose zone or in an unconfined aquifer. In addition, the code has the capability to describe transport processes in a porous medium with discrete fractures. This report presents the following: a description of the conceptual framework and mathematical theory, derivations of the finite element techniques and algorithms, computational examples that illustrate the capability of the code, and input instructions for the general use of the code. The development of the FLAME computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A.

  8. Determination of Major and Minor Elements in the Code River Sediments

    International Nuclear Information System (INIS)

    Sri Murniasih; Sukirno; Bambang Irianto

    2007-01-01

    Analysis of major and minor elements in the Code river sediments has been done. The aim of this research is to determine the concentration of major and minor elements in the Code river sediments from upstream to downstream. The instrument used was X-ray fluorescence with a Si(Li) detector. The results show that the major elements were Fe (1.66 ± 0.1% - 4.20 ± 0.7%) and Ca (4.43 ± 0.6% - 9.08 ± 1.3%), while the minor elements were Ba (178.791 ± 21.1 ppm - 616.56 ± 59.4 ppm); Sr (148.22 ± 21.9 ppm - 410.25 ± 30.5 ppm); and Zr (9.71 ± 1.1 ppm - 22.11 ± 3.4 ppm). The ANOVA method (significance level α = 0.05) was used for the statistical test. It showed that there was a significant influence of the sampling location on the concentration of major and minor elements in the sediment samples. (author)
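
    As a concrete illustration of the statistical test mentioned above, the following sketch runs a one-way ANOVA on element concentrations grouped by sampling location. The numbers are invented for the example; they are not the measured values from the paper.

```python
# Hypothetical illustration of a one-way ANOVA: does the sampling location
# significantly affect the measured Fe concentration?
from scipy.stats import f_oneway

# Made-up Fe concentrations (%) at three sampling locations along the river.
upstream   = [1.66, 1.72, 1.80]
midstream  = [2.95, 3.10, 3.02]
downstream = [4.05, 4.20, 4.11]

f_stat, p_value = f_oneway(upstream, midstream, downstream)
if p_value < 0.05:           # significance level alpha = 0.05
    print(f"Location matters: F = {f_stat:.1f}, p = {p_value:.3g}")
```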

  9. ABAQUS/EPGEN - a general purpose finite element code with emphasis on nonlinear applications

    International Nuclear Information System (INIS)

    Hibbitt, H.D.

    1984-01-01

    The article contains a summary description of ABAQUS, a finite element program designed for general use in nonlinear as well as linear structural problems, in the context of its application to nuclear structural integrity analysis. The article begins with a discussion of the design criteria and methods upon which the code development has been based. The engineering modelling capabilities, currently implemented in the program - elements, constitutive models and analysis procedures - are then described. Finally, a few demonstration examples are presented, to illustrate some of the program's features that are of interest in structural integrity analysis associated with nuclear power plants. (orig.)

  10. DANIO-CODE: Toward an Encyclopedia of DNA Elements in Zebrafish.

    Science.gov (United States)

    Tan, Haihan; Onichtchouk, Daria; Winata, Cecilia

    2016-02-01

    The zebrafish has emerged as a model organism for genomics studies. The symposium "Toward an encyclopedia of DNA elements in zebrafish" held in London in December 2014, was coorganized by Ferenc Müller and Fiona Wardle. This meeting is a follow-up of a similar previous workshop held 2 years earlier and represents a push toward the formalization of a community effort to annotate functional elements in the zebrafish genome. The meeting brought together zebrafish researchers, bioinformaticians, as well as members of established consortia, to exchange scientific findings and experience, as well as to discuss the initial steps toward the formation of a DANIO-CODE consortium. In this study, we provide the latest updates on the current progress of the consortium's efforts, opening up a broad invitation to researchers to join in and contribute to DANIO-CODE.

  11. Analysis of piping systems by finite element method using code SAP-IV

    International Nuclear Information System (INIS)

    Cizelj, L.; Ogrizek, D.

    1987-01-01

    Due to the extensive and repeated use of the computer code SAP-IV, we decided to install it on a VAX 11/750 machine. The installation required a large amount of programming due to the great discrepancies between the CDC machine (for which the original program version was written) and the VAX. Testing was performed mainly in the field of pipe elements, based on a comparison between results obtained with the codes PSAFE2, DOCIJEV, PIPESD and SAP-V. In addition, a model of a reactor pressure vessel with 3-D thick shell elements was built. The capabilities show good agreement with the results of the other programs mentioned above. Along with the package installation, graphical postprocessors are being developed for mesh plotting. (author)

  12. Establishing Base Elements of Perspective in Order to Reconstruct Architectural Buildings from Photographs

    Science.gov (United States)

    Dzwierzynska, Jolanta

    2017-12-01

    The use of perspective images, especially historical photographs, for retrieving information about the architectural environment they present has recently become a fast-developing field. A photograph is a perspective image with a reliable geometrical connection to reality, and it is therefore possible to reverse this process. The aim of the present study is to establish the requirements which a photographic perspective representation should meet for reconstruction purposes, as well as to determine the base elements of perspective, such as the horizon line and the circle of depth, which is a key issue in any reconstruction. The starting point in the reconstruction process is a geometrical analysis of the photograph, especially determination of the kind of perspective projection applied, which is defined by the building's location with respect to the projection plane. Next, proper constructions can be used. The paper addresses the problem of establishing the base elements of perspective on the basis of the photographic image in the case when camera calibration is impossible to establish. It presents different geometric construction methods selected depending on the starting assumptions. Therefore, the methods described in the paper seem to be universal. Moreover, they can be used even in the case of poor quality photographs with poor perspective geometry. Such constructions can be realized with computer aid when the photographs are in digital form, as presented in the paper. The accuracy of the applied methods depends on the accuracy of the photographic image as well as on the drawing accuracy; however, it is sufficient for further reconstruction. Establishing the base elements of perspective as presented in the paper is especially useful in difficult cases of reconstruction, when one lacks information about the reconstructed architectural form and it is necessary to rely on solid geometry.
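
    One of the constructions mentioned above, locating the horizon line, can be sketched numerically: two vanishing points are found as intersections of image lines that are parallel and horizontal in reality, and the horizon is the line through them. The pixel coordinates below are hypothetical and purely illustrative; this is not the construction procedure from the paper itself.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points given as (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(l1, l2):
    """Intersection of two homogeneous lines, returned as an (x, y) point."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

# Hypothetical pixel coordinates of two pairs of building edges that are
# horizontal and mutually parallel in reality (e.g. window sills).
v1 = intersection(line_through((100, 420), (800, 390)),
                  line_through((120, 610), (790, 540)))   # first vanishing point
v2 = intersection(line_through((900, 400), (1500, 450)),
                  line_through((910, 560), (1480, 660)))  # second vanishing point

horizon = line_through(v1, v2)   # the horizon passes through both vanishing points
print("vanishing points:", v1, v2)
print("horizon line (ax + by + c = 0):", horizon)
```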

  13. Finite element study of scaffold architecture design and culture conditions for tissue engineering.

    Science.gov (United States)

    Olivares, Andy L; Marsal, Elia; Planell, Josep A; Lacroix, Damien

    2009-10-01

    Tissue engineering scaffolds provide temporary mechanical support for tissue regeneration and, through their architecture, transfer the global mechanical load into mechanical stimuli for cells. In this study the interactions between scaffold pore morphology, the mechanical stimuli developed at the microscopic cell level, and the culture conditions applied at the macroscopic scale are studied on two regular scaffold structures. Gyroid and hexagonal scaffolds of 55% and 70% porosity were modeled in a finite element analysis and were submitted to an inlet fluid flow or a compressive strain. A mechanoregulation theory based on scaffold shear strain and fluid shear stress was applied to determine the influence of each structure on the mechanical stimuli under initial conditions. Results indicate that the distribution of shear stress induced by fluid perfusion is very dependent on the pore distribution within the scaffold. Gyroid architectures provide better accessibility of the fluid than hexagonal structures. Based on the mechanoregulation theory, the differentiation process in these structures was more sensitive to inlet fluid flow than to axial strain of the scaffold. This study provides a computational approach to determine the mechanical stimuli at the cellular level when cells are cultured in a bioreactor and to relate the mechanical stimuli with cell differentiation.
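
    For readers unfamiliar with mechanoregulation theories of this kind, the sketch below computes a Prendergast-type biophysical stimulus combining shear strain and interstitial fluid velocity. The constants are the values usually quoted in that literature and are assumptions of this sketch, not values reported in the study above.

```python
def mechanoregulation_stimulus(shear_strain, fluid_velocity_um_s,
                               a=0.0375, b=3.0):
    """Prendergast-type biophysical stimulus S = gamma/a + v/b.

    gamma is the octahedral shear strain in the scaffold material and v the
    interstitial fluid velocity (micrometres per second).  The constants a and
    b are the commonly quoted literature values, used here as assumptions.
    """
    return shear_strain / a + fluid_velocity_um_s / b

# Example: one Gauss point of the micro-FE model (hypothetical numbers).
S = mechanoregulation_stimulus(shear_strain=0.02, fluid_velocity_um_s=1.5)
print(f"stimulus S = {S:.2f}")  # thresholds on S are then used to predict
                                # resorption, bone, cartilage or fibrous tissue
```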

  14. The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding.

    Science.gov (United States)

    Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco

    2017-01-01

    The recent "deep learning revolution" in artificial neural networks has had a strong impact and widespread deployment in engineering applications, but the use of deep learning for neurocomputational modeling has so far been limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of the distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to adhere more closely to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems.
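
    As background for the contrastive-divergence training mentioned above, the sketch below shows one CD-1 update for a small binary RBM in NumPy. It is a generic textbook illustration with invented sizes and data, not the simulation code used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b_vis, b_hid, lr=0.05):
    """One contrastive-divergence (CD-1) step for a binary RBM.

    v0: batch of visible vectors (n_samples x n_visible)."""
    # Positive phase: hidden probabilities and samples given the data.
    ph0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one step of Gibbs sampling (reconstruction).
    pv1 = sigmoid(h0 @ W.T + b_vis)
    ph1 = sigmoid(pv1 @ W + b_hid)
    # Parameter updates from the difference of data and model correlations.
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / v0.shape[0]
    b_vis += lr * (v0 - pv1).mean(axis=0)
    b_hid += lr * (ph0 - ph1).mean(axis=0)
    return W, b_vis, b_hid

# Toy usage: 16 binary "retinal" inputs, 8 hidden units.
W = 0.01 * rng.standard_normal((16, 8))
b_vis, b_hid = np.zeros(16), np.zeros(8)
data = (rng.random((32, 16)) < 0.3).astype(float)
for _ in range(100):
    W, b_vis, b_hid = cd1_update(data, W, b_vis, b_hid)
```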

  15. Recurrent Coding Sequence Variation Explains Only A Small Fraction of the Genetic Architecture of Colorectal Cancer

    NARCIS (Netherlands)

    Timofeeva, Maria N.; Ben Kinnersley, [Unknown; Farrington, Susan M.; Whiffin, Nicola; Palles, Claire; Svinti, Victoria; Lloyd, Amy; Gorman, Maggie; Ooi, Li-Yin; Hosking, Fay; Barclay, Ella; Zgaga, Lina; Dobbins, Sara; Martin, Lynn; Theodoratou, Evropi; Broderick, Peter; Tenesa, Albert; Smillie, Claire; Grimes, Graeme; Hayward, Caroline; Campbell, Archie; Porteous, David; Deary, Ian J.; Harris, Sarah E.; Northwood, Emma L.; Barrett, Jennifer H.; Smith, Gillian; Wolf, Roland; Forman, David; Morreau, Hans; Ruano, Dina; Tops, Carli; Wijnen, Juul; Schrumpf, Melanie; Boot, Arnoud; Vasen, Hans F. A.; Hes, Frederik J.; van Wezel, Tom; Franke, Andre; Lieb, Wolgang; Schafmayer, Clemens; Hampe, Jochen; Buch, Stephan; Propping, Peter; Hemminki, Kari; Foersti, Asta; Westers, Helga; Hofstra, Robert; Pinheiro, Manuela; Pinto, Carla; Teixeira, Manuel; Ruiz-Ponte, Clara; Fernandez-Rozadilla, Ceres; Carracedo, Angel; Castells, Antoni; Castellvi-Bel, Sergi; Campbell, Harry; Bishop, D. Timothy; Tomlinson, Ian P. M.; Dunlop, Malcolm G.; Houlston, Richard S.

    2015-01-01

    Whilst common genetic variation in many non-coding genomic regulatory regions is known to impart risk of colorectal cancer (CRC), much of the heritability of CRC remains unexplained. To examine the role of recurrent coding sequence variation in CRC aetiology, we genotyped 12,638 CRC cases and

  16. Current status of the transient integral fuel element performance code URANUS

    International Nuclear Information System (INIS)

    Preusser, T.; Lassmann, K.

    1983-01-01

    To investigate the behavior of fuel pins during normal and off-normal operation, the integral fuel rod code URANUS has been extended to include a transient version. The paper describes the current status of the program system including a presentation of newly developed models for hypothetical accident investigation. The main objective of current development work is to improve the modelling of fuel and clad material behavior during fast transients. URANUS allows detailed analysis of experiments until the onset of strong material transport phenomena. Transient fission gas analysis is carried out due to the coupling with a special version of the LANGZEIT-KURZZEIT-code (KfK). Fuel restructuring and grain growth kinetics models have been improved recently to better characterize pre-experimental steady-state operation; transient models are under development. Extensive verification of the new version has been carried out by comparison with analytical solutions, experimental evidence, and code-to-code evaluation studies. URANUS, with all these improvements, has been successfully applied to difficult fast breeder fuel rod analysis including TOP, LOF, TUCOP, local coolant blockage and specific carbide fuel experiments. Objective of further studies is the description of transient PCMI. It is expected that the results of these developments will contribute significantly to the understanding of fuel element structural behavior during severe transients. (orig.)

  17. A comparison of two three-dimensional shell-element transient electromagnetics codes

    International Nuclear Information System (INIS)

    Yugo, J.J.; Williamson, D.E.

    1992-01-01

    Electromagnetic forces due to eddy currents strongly influence the design of components for the next generation of fusion devices. An effort has been made to benchmark two computer programs used to generate transient electromagnetic loads: SPARK and EddyCuFF. Two simple transient field problems were analyzed, both of which had been previously analyzed by the SPARK code with results recorded in the literature. A third problem that uses an ITER inboard blanket benchmark model was analyzed as well. This problem was driven with a self-consistent, distributed multifilament plasma model generated by an axisymmetric physics code. The benchmark problems showed good agreement between the two shell-element codes. Variations in calculated eddy currents of 1--3% have been found for similar, finely meshed models. A difference of 8% was found in induced current and 20% in force for a coarse mesh and complex, multifilament field driver. Because comparisons were made to results obtained from literature, model preparation and code execution times were not evaluated

  18. Genomic context analysis reveals dense interaction network between vertebrate ultraconserved non-coding elements.

    Science.gov (United States)

    Dimitrieva, Slavica; Bucher, Philipp

    2012-09-15

    Genomic context analysis, also known as phylogenetic profiling, is widely used to infer functional interactions between proteins but is rarely applied to non-coding cis-regulatory DNA elements. We were wondering whether this approach could provide insights about ultraconserved non-coding elements (UCNEs). These elements are organized as large clusters, so-called gene regulatory blocks (GRBs), around key developmental genes. Their molecular functions and the reasons for their high degree of conservation remain enigmatic. In a special setting of genomic context analysis, we analyzed the fate of GRBs after a whole-genome duplication event in five fish genomes. We found that in most cases all UCNEs were retained together as a single block, whereas the corresponding target genes were often retained in two copies, one completely devoid of UCNEs. This 'winner-takes-all' pattern suggests that UCNEs of a GRB function in a highly cooperative manner. We propose that the multitude of interactions between UCNEs is the reason for their extreme sequence conservation. Supplementary data are available at Bioinformatics online and at http://ccg.vital-it.ch/ucne/

  19. Verification of Advective Bar Elements Implemented in the Aria Thermal Response Code.

    Energy Technology Data Exchange (ETDEWEB)

    Mills, Brantley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-01-01

    A verification effort was undertaken to evaluate the implementation of the new advective bar capability in the Aria thermal response code. Several approaches to the verification process were taken: a mesh refinement study to demonstrate solution convergence in the fluid and the solid, visual examination of the mapping of the advective bar element nodes to the surrounding surfaces, and a comparison of solutions produced using the advective bars for simple geometries with solutions from commercial CFD software. The mesh refinement study showed solution convergence for simple pipe flow in both temperature and velocity. Guidelines were provided to achieve appropriate meshes between the advective bar elements and the surrounding volume. Simulations of pipe flow using advective bar elements in Aria have been compared to simulations using the commercial CFD software ANSYS Fluent and provided comparable solutions in temperature and velocity, supporting proper implementation of the new capability.

  20. Simulation of Aircraft Landing Gears with a Nonlinear Dynamic Finite Element Code

    Science.gov (United States)

    Lyle, Karen H.; Jackson, Karen E.; Fasanella, Edwin L.

    2000-01-01

    Recent advances in computational speed have made aircraft and spacecraft crash simulations using an explicit, nonlinear, transient-dynamic, finite element analysis code more feasible. This paper describes the development of a simple landing gear model, which accurately simulates the energy absorbed by the gear without adding substantial complexity to the model. For a crash model, the landing gear response is approximated with a spring where the force applied to the fuselage is computed in a user-written subroutine. Helicopter crash simulations using this approach are compared with previously acquired experimental data from a full-scale crash test of a composite helicopter.
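
    The user-written spring approximation described above can be illustrated with a few lines of code. The sketch below returns a vertical gear force from stroke and stroke rate; the stiffness and damping constants are invented placeholders, not values from the crash model in the paper.

```python
def gear_force(stroke, stroke_rate, k=1.2e5, c=8.0e3, preload=0.0):
    """Hypothetical user routine for the landing-gear spring: returns the
    vertical force (N) applied to the fuselage node as a function of gear
    stroke (m) and stroke rate (m/s).  A real gear model would typically use
    measured oleo-pneumatic load-stroke data instead of constants."""
    if stroke <= 0.0:          # gear not in contact: no force transmitted
        return 0.0
    return preload + k * stroke + c * stroke_rate

# Example: force at one instant of a hypothetical stroke history.
print(gear_force(stroke=0.05, stroke_rate=0.8))
```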

  1. Introduction of polycrystal constitutive laws in a finite element code with applications to zirconium forming

    International Nuclear Information System (INIS)

    Maudlin, P.J.; Tome, C.N.; Kaschner, G.C.; Gray, G.T. III

    1998-01-01

    In this work the authors simulate the compressive deformation of heavily textured zirconium sheet using a finite element code with the constitutive response given by a polycrystal self-consistent model. They show that the strong anisotropy of the response can be explained in terms of the texture and the relative activity of prismatic (easy) and pyramidal (hard) slip modes. The simulations capture the yield anisotropy observed for the so-called through-thickness and in-plane compression tests in terms of the loading curves and final specimen geometries.

  2. Implementation of thermo-viscoplastic constitutive equations into the finite element code ABAQUS

    International Nuclear Information System (INIS)

    Youn, Sam Son; Lee, Soon Bok; Kim, Jong Bum; Lee, Hyeong Yeon; Yoo, Bong

    1998-01-01

    Sophisticated viscoplastic constitutive laws describing material behavior at high temperature have been implemented in the general-purpose finite element code ABAQUS to predict the viscoplastic response of structures to cyclic loading. Because of the complexity of the viscoplastic constitutive equations, general implementation methods are developed. The solution of the non-linear system of algebraic equations arising from the time discretization is determined using line search and backtracking in combination with the Newton method. The time integration method for the constitutive equations is based on a semi-implicit method with efficient time step control. As numerical examples, the viscoplastic model proposed by Chaboche is implemented and several applications are illustrated.
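
    The Newton iteration with a backtracking line search mentioned above can be sketched generically. The routine below is a schematic illustration of that solution strategy on an artificial two-unknown system; it is not the ABAQUS implementation described in the paper.

```python
import numpy as np

def newton_backtracking(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Newton iteration with a backtracking line search for a nonlinear
    algebraic system, of the kind arising from the time discretization of a
    viscoplastic constitutive law."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x
        dx = np.linalg.solve(jacobian(x), -r)       # full Newton step
        step, r0 = 1.0, np.linalg.norm(r)
        # Backtrack until the residual norm actually decreases.
        while np.linalg.norm(residual(x + step * dx)) > (1.0 - 1e-4 * step) * r0:
            step *= 0.5
            if step < 1e-6:
                break
        x = x + step * dx
    return x

# Tiny usage example on an artificial 2-unknown system.
f = lambda x: np.array([x[0]**3 + x[1] - 1.0, x[1]**3 - x[0] + 1.0])
J = lambda x: np.array([[3 * x[0]**2, 1.0], [-1.0, 3 * x[1]**2]])
print(newton_backtracking(f, J, [0.5, 0.5]))   # converges to (1, 0)
```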

  3. An Investigation of the Methods of Logicalizing the Code-Checking System for Architectural Design Review in New Taipei City

    Directory of Open Access Journals (Sweden)

    Wei-I Lee

    2016-12-01

    Full Text Available The New Taipei City Government developed a Code-checking System (CCS using Building Information Modeling (BIM technology to facilitate an architectural design review in 2014. This system was intended to solve problems caused by cognitive gaps between designer and reviewer in the design review process. Along with considering information technology, the most important issue for the system’s development has been the logicalization of literal building codes. Therefore, to enhance the reliability and performance of the CCS, this study uses the Fuzzy Delphi Method (FDM on the basis of design thinking and communication theory to investigate the semantic difference and cognitive gaps among participants in the design review process and to propose the direction of system development. Our empirical results lead us to recommend grouping multi-stage screening and weighted assisted logicalization of non-quantitative building codes to improve the operability of CCS. Furthermore, CCS should integrate the Expert Evaluation System (EES to evaluate the design value under qualitative building codes.

  4. SP_Ace: a new code to derive stellar parameters and elemental abundances

    Science.gov (United States)

    Boeche, C.; Grebel, E. K.

    2016-03-01

    Context. Ongoing and future massive spectroscopic surveys will collect large numbers (106-107) of stellar spectra that need to be analyzed. Highly automated software is needed to derive stellar parameters and chemical abundances from these spectra. Aims: We developed a new method of estimating the stellar parameters Teff, log g, [M/H], and elemental abundances. This method was implemented in a new code, SP_Ace (Stellar Parameters And Chemical abundances Estimator). This is a highly automated code suitable for analyzing the spectra of large spectroscopic surveys with low or medium spectral resolution (R = 2000-20 000). Methods: After the astrophysical calibration of the oscillator strengths of 4643 absorption lines covering the wavelength ranges 5212-6860 Å and 8400-8924 Å, we constructed a library that contains the equivalent widths (EW) of these lines for a grid of stellar parameters. The EWs of each line are fit by a polynomial function that describes the EW of the line as a function of the stellar parameters. The coefficients of these polynomial functions are stored in a library called the "GCOG library". SP_Ace, a code written in FORTRAN95, uses the GCOG library to compute the EWs of the lines, constructs models of spectra as a function of the stellar parameters and abundances, and searches for the model that minimizes the χ2 deviation when compared to the observed spectrum. The code has been tested on synthetic and real spectra for a wide range of signal-to-noise and spectral resolutions. Results: SP_Ace derives stellar parameters such as Teff, log g, [M/H], and chemical abundances of up to ten elements for low to medium resolution spectra of FGK-type stars with precision comparable to the one usually obtained with spectra of higher resolution. Systematic errors in stellar parameters and chemical abundances are presented and identified with tests on synthetic and real spectra. Stochastic errors are automatically estimated by the code for all the parameters
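
    The core loop of such a pipeline, evaluating polynomial equivalent-width models and minimizing the χ2 against an observed spectrum, can be sketched as follows. All coefficients, line profiles and spectra below are invented placeholders; this is not the GCOG library or the FORTRAN95 code itself.

```python
import numpy as np
from scipy.optimize import minimize

def model_ew(params, coeffs):
    """EW of one line as a polynomial in (Teff, log g, [M/H])."""
    teff, logg, met = params
    x = np.array([1.0, teff / 5777.0, logg / 4.44, met,
                  met * teff / 5777.0, (teff / 5777.0) ** 2])
    return coeffs @ x

def model_spectrum(params, line_coeffs, line_profiles):
    """Unit continuum minus each line profile scaled by its EW."""
    ews = np.array([model_ew(params, c) for c in line_coeffs])
    return 1.0 - line_profiles.T @ ews

def chi2(params, observed, sigma, line_coeffs, line_profiles):
    resid = (observed - model_spectrum(params, line_coeffs, line_profiles)) / sigma
    return float(np.sum(resid ** 2))

rng = np.random.default_rng(1)
line_profiles = rng.random((3, 200)) * 0.4        # 3 lines on a 200-pixel grid
line_coeffs = rng.random((3, 6)) * 0.2            # invented GCOG-like coefficients
observed = model_spectrum((5600.0, 4.4, -0.2), line_coeffs, line_profiles)

best = minimize(chi2, x0=(5400.0, 4.2, 0.0),
                args=(observed, 0.01, line_coeffs, line_profiles),
                method="Nelder-Mead")
print(best.x)   # best-fit Teff, log g, [M/H]
```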

  5. Neptune: An astrophysical smooth particle hydrodynamics code for massively parallel computer architectures

    Science.gov (United States)

    Sandalski, Stou

    Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP parallelized C++ and OpenCL and includes octree based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.

  6. Solution of the neutronics code dynamic benchmark by finite element method

    Science.gov (United States)

    Avvakumov, A. V.; Vabishchevich, P. N.; Vasilev, A. O.; Strizhov, V. F.

    2016-10-01

    The objective is to analyze the dynamic benchmark developed by Atomic Energy Research for the verification of best-estimate neutronics codes. The benchmark scenario includes the asymmetrical ejection of a control rod in a water-type hexagonal reactor at hot zero power. A simple Doppler feedback mechanism assuming adiabatic fuel temperature heating is proposed. The finite element method on triangular calculation grids is used to solve the three-dimensional neutron kinetics problem. The software has been developed using the engineering and scientific calculation library FEniCS. The matrix spectral problem is solved using the scalable and flexible toolkit SLEPc. The solution accuracy of the dynamic benchmark is analyzed by refining the calculation grid and varying the degree of the finite elements.
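
    The matrix spectral problem mentioned above is the lambda-mode generalized eigenvalue problem used to define the initial steady state. The sketch below sets up a toy version with SciPy instead of SLEPc, on small invented matrices, simply to show the shape of the computation; it is not the FEniCS/SLEPc implementation from the paper.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigs

# Lambda-mode problem: A*phi = (1/k_eff) * F*phi, where A collects leakage and
# removal terms and F the fission source.  Matrices below are invented 1-D
# stand-ins for the assembled finite element operators.
n = 100
A = diags([-1.0, 2.2, -1.0], offsets=[-1, 0, 1], shape=(n, n)).tocsc()
F = diags([0.25], offsets=[0], shape=(n, n)).tocsc()

# Rewritten as F*phi = k_eff * A*phi, the dominant eigenvalue is k_eff.
vals, vecs = eigs(F, k=1, M=A, which="LM")
k_eff = vals[0].real
phi = np.abs(vecs[:, 0].real)
phi /= phi.max()          # normalized fundamental flux mode
print(f"k_eff = {k_eff:.5f}")
```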

  7. Objective Oriented Design of Architecture for TH System Safety Analysis Code and Verification

    International Nuclear Information System (INIS)

    Chung, Bub Dong

    2008-03-01

    In this work, an object-oriented design of a generic system analysis code has been attempted, based on previous work at KAERI on a two-phase, three-field pilot code. Input and output design, the TH solver, component models, special TH models, the heat structure solver, general tables, trips and controls, and on-line graphics have been designed and implemented. All essential features for system analysis have been designed and implemented in the final product, the SYSTF code. The C language was used for the implementation in the Visual Studio 2008 IDE (Integrated Development Environment), since it is simpler and lighter than C++. The code has simple and essential features for models and correlations, special components, special TH models and the heat structure model. However, the input features are able to simulate various scenarios, such as steady state, non-LOCA transients and LOCA accidents. The structural validity has been tested through various verification tests, and it has been shown that the developed code can treat non-LOCA and LOCA simulations. However, more detailed design and implementation of models are required to establish the physical validity of SYSTF code simulations.

  8. Governance of extended lifecycle in large-scale eHealth initiatives: analyzing variability of enterprise architecture elements.

    Science.gov (United States)

    Mykkänen, Juha; Virkanen, Hannu; Tuomainen, Mika

    2013-01-01

    The governance of large eHealth initiatives requires traceability of many requirements and design decisions. We provide a model which we use to conceptually analyze variability of several enterprise architecture (EA) elements throughout the extended lifecycle of development goals using interrelated projects related to the national ePrescription in Finland.

  9. Turbulence statistics in a spectral element code: a toolbox for High-Fidelity Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Vinuesa, Ricardo [KTH Mechanics, Stockholm (Sweden); Swedish e-Science Research Center (SeRC), Stockholm (Sweden); Fick, Lambert [Argonne National Lab. (ANL), Argonne, IL (United States); Negi, Prabal [KTH Mechanics, Stockholm (Sweden); Swedish e-Science Research Center (SeRC), Stockholm (Sweden); Marin, Oana [Argonne National Lab. (ANL), Argonne, IL (United States); Merzari, Elia [Argonne National Lab. (ANL), Argonne, IL (United States); Schlatter, Phillip [KTH Mechanics, Stockholm (Sweden); Swedish e-Science Research Center (SeRC), Stockholm (Sweden)

    2017-02-01

    In the present document we describe a toolbox for the spectral-element code Nek5000, aimed at computing turbulence statistics. The toolbox is presented for a small test case, namely a square duct with Lx = 2h, Ly = 2h and Lz = 4h, where x, y and z are the horizontal, vertical and streamwise directions, respectively. The number of elements in the xy-plane is 16 x 16 = 256, and the number of elements in z is 4, leading to a total of 1,024 spectral elements. A polynomial order of N = 5 is chosen, and the mesh is generated using the Nek5000 tool genbox. The toolbox presented here allows the computation of mean-velocity components, the Reynolds-stress tensor, as well as turbulent kinetic energy (TKE) and Reynolds-stress budgets. Note that the present toolbox allows the computation of turbulence statistics in turbulent flows with one homogeneous direction (where the statistics are based on time-averaging as well as averaging in the homogeneous direction), as well as in fully three-dimensional flows (with no periodic directions, where only time-averaging is considered).
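
    The kind of averaging such a toolbox performs can be illustrated compactly. The NumPy sketch below time-averages invented velocity snapshots, additionally averages along the homogeneous streamwise direction, and assembles the Reynolds-stress tensor and TKE; it is a stand-in for the Nek5000 post-processing, not the toolbox itself.

```python
import numpy as np

nt, nx, ny, nz = 50, 16, 16, 8
rng = np.random.default_rng(0)
u = rng.standard_normal((nt, 3, nx, ny, nz))   # velocity snapshots (t, component, x, y, z)

U = u.mean(axis=(0, 4))                        # mean velocity, averaged in t and z
fluct = u - U[None, :, :, :, None]             # fluctuations u' = u - <u>

# Reynolds stresses <u_i' u_j'> averaged over time and the homogeneous direction.
R = np.einsum('tixyz,tjxyz->ijxy', fluct, fluct) / (nt * nz)
tke = 0.5 * np.trace(R, axis1=0, axis2=1)      # turbulent kinetic energy field
print(R.shape, tke.shape)
```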

  10. Authentic reverse transcriptase is coded by jockey, a mobile Drosophila element related to mammalian LINEs.

    Science.gov (United States)

    Ivanov, V A; Melnikov, A A; Siunov, A V; Fodor, I I; Ilyin, Y V

    1991-01-01

    The mobile element jockey is similar in structural organization and coding potential to the LINEs of various organisms. It is transcribed at different stages of Drosophila ontogenesis. The Drosophila LINE family includes active transposable elements. Current models for the mechanism of transposition involve reverse transcription of an RNA intermediate and utilization of element-encoded proteins. As demonstrated here, a 2.23 kb DNA fragment from the region of jockey encoding the putative reverse transcriptase was stably introduced into an expression system under inducible control of the Escherichia coli lac regulatory elements. We describe the expression of the 92 kDa protein and identify this polypeptide alone as the authentic jockey reverse transcriptase based on some of its physical and enzymic properties. The jockey polymerase demonstrates RNA- and DNA-directed DNA polymerase activities but lacks detectable RNase H, has a temperature optimum at 26 degrees C, requires Mg2+ or Mn2+ as a cofactor and is inactivated by sulphydryl reagent. The enzyme prefers poly(rC) and poly(rA) as templates, and 'activated' DNA is not effective. PMID:1714378

  11. Impacts of traditional architecture on the use of wood as an element of facade covering in Serbian contemporary architecture

    OpenAIRE

    Ivanović-Šekularac Jelena; Šekularac Nenad

    2011-01-01

    The world trend of re-use of wood and wood products as materials for construction and covering of architectural structures is present not only because of the need to meet the aesthetic, artistic and formal requirements or to seek inspiration in the return to the tradition and nature, but also because of its ecological, economic and energetic feasibility. Furthermore, the use of wood fits into contemporary trends of sustainable development and application of...

  12. Fuel element thermo-mechanical analysis during transient events using the FMS and FETMA codes

    International Nuclear Information System (INIS)

    Hernandez Lopez Hector; Hernandez Martinez Jose Luis; Ortiz Villafuerte Javier

    2005-01-01

    In the Instituto Nacional de Investigaciones Nucleares of Mexico, the Fuel Management System (FMS) software package has been used for a long time to simulate the operation of a BWR nuclear power plant in steady state, as well as during transient events. To evaluate the fuel element thermo-mechanical performance during transient events, an interface between the FMS codes and our own Fuel Element Thermo Mechanical Analysis (FETMA) code is currently being developed and implemented. In this work, the results of the thermo-mechanical behavior of fuel rods in the hot channel during the simulation of transient events of a BWR nuclear power plant are shown. The transient events considered for this work are a load rejection and a feedwater control failure, which are among the most important events that can occur in a BWR. The results showed that conditions leading to fuel rod failure did not appear at any time for either event. Also, it is shown that the load rejection transient is more demanding in terms of safety than the feedwater controller failure. (authors)

  13. The use of the MCNP code for the quantitative analysis of elements in geological formations

    Energy Technology Data Exchange (ETDEWEB)

    Cywicka-Jakiel, T.; Woynicka, U. [The Henryk Niewodniczanski Institute of Nuclear Physics, Krakow (Poland); Zorski, T. [University of Mining and Metallurgy, Faculty of Geology, Geophysics and Environmental Protection, Krakow (Poland)

    2003-07-01

    Monte Carlo modelling calculations using the MCNP code have been performed in support of spectrometric neutron-gamma (SNGL) borehole logging. The SNGL enables lithology identification through the quantitative analysis of the elements in geological formations and thus can be very useful for the oil and gas industry as well as for prospecting of potential host rocks for radioactive waste disposal. In the SNGL experiment, gamma-rays induced by neutron interactions with the nuclei of the rock elements are detected using a gamma-ray probe of complex mechanical and electronic construction. The probe has to be calibrated for a wide range of elemental concentrations to assure proper quantitative analysis. The Polish Calibration Station in Zielona Gora is equipped with a limited number of calibration standards. An extension of the experimental calibration and an evaluation of the so-called side effects (for example, variations of the borehole and formation salinity) on the accuracy of the SNGL method can be done by the use of the MCNP code. The preliminary MCNP results showing the effect of borehole and formation fluid salinity variations on the accuracy of silicon (Si), calcium (Ca) and iron (Fe) content determination are presented in the paper. The main effort has been focused on modelling the complex SNGL probe situated in a fluid-filled borehole, surrounded by a geological formation. A track-length estimate of the photon flux from the (n,gamma) interactions as a function of gamma-ray energy was used. Calculations were run on a PC with an AMD Athlon 1.33 GHz processor. Neutron and photon cross-section libraries were taken from the MCNP4c package and based mainly on the ENDF/B-6, ENDF/B-5 and MCPLIB02 data. The results of the simulated experiment are in conformity with the results of the real experiment performed with the main lithology models (sandstones, limestones and dolomite). (authors)

  14. The use of the MCNP code for the quantitative analysis of elements in geological formations

    International Nuclear Information System (INIS)

    Cywicka-Jakiel, T.; Woynicka, U.; Zorski, T.

    2003-01-01

    Monte Carlo modelling calculations using the MCNP code have been performed in support of spectrometric neutron-gamma (SNGL) borehole logging. The SNGL enables lithology identification through the quantitative analysis of the elements in geological formations and thus can be very useful for the oil and gas industry as well as for prospecting of potential host rocks for radioactive waste disposal. In the SNGL experiment, gamma-rays induced by neutron interactions with the nuclei of the rock elements are detected using a gamma-ray probe of complex mechanical and electronic construction. The probe has to be calibrated for a wide range of elemental concentrations to assure proper quantitative analysis. The Polish Calibration Station in Zielona Gora is equipped with a limited number of calibration standards. An extension of the experimental calibration and an evaluation of the so-called side effects (for example, variations of the borehole and formation salinity) on the accuracy of the SNGL method can be done by the use of the MCNP code. The preliminary MCNP results showing the effect of borehole and formation fluid salinity variations on the accuracy of silicon (Si), calcium (Ca) and iron (Fe) content determination are presented in the paper. The main effort has been focused on modelling the complex SNGL probe situated in a fluid-filled borehole, surrounded by a geological formation. A track-length estimate of the photon flux from the (n,gamma) interactions as a function of gamma-ray energy was used. Calculations were run on a PC with an AMD Athlon 1.33 GHz processor. Neutron and photon cross-section libraries were taken from the MCNP4c package and based mainly on the ENDF/B-6, ENDF/B-5 and MCPLIB02 data. The results of the simulated experiment are in conformity with the results of the real experiment performed with the main lithology models (sandstones, limestones and dolomite). (authors)

  15. Code conforming determination of cumulative usage factors for general elastic-plastic finite element analyses

    International Nuclear Information System (INIS)

    Rudolph, Juergen; Goetz, Andreas; Hilpert, Roland

    2012-01-01

    The procedures for fatigue analyses in several relevant nuclear and conventional design codes (ASME, KTA, EN, AD) for power plant components differentiate between an elastic, a simplified elastic-plastic and an elastic-plastic fatigue check. As a rule, operational load levels will exclude the purely elastic fatigue check. The application of the code procedure of the simplified elastic-plastic fatigue check is common practice. Nevertheless, the resulting cumulative usage factors may be overly conservative, mainly due to high code-based plastification penalty factors Ke. As a consequence, the more complex and still code-conforming general elastic-plastic fatigue analysis methodology based on non-linear finite element analysis (FEA) is applied for fatigue design as an alternative. The requirements of the FEA and of the material law to be applied have to be clarified in a first step. Current design codes only give rough guidelines on these relevant items. While the procedure for the simplified elastic-plastic fatigue analysis and the associated code passages are based on stress-related cycle counting and the determination of pseudo-elastic equivalent stress ranges, an adaptation to elastic-plastic strains and strain ranges is required for the elastic-plastic fatigue check. The associated requirements are explained in detail in the paper. If the established and implemented evaluation mechanism (cycle counting according to the peak-and-valley or the rainflow method, calculation of stress ranges from arbitrary load-time histories and determination of cumulative usage factors based on all load events) is to be retained, a conversion of elastic-plastic strains and strain ranges into pseudo-elastic stress ranges is required. The algorithm to be applied is described in the paper. It has to be implemented in the sense of an extended post-processing operation of the FEA, e.g. by APDL scripts in ANSYS. Variations of principal stress (strain) directions during the loading
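
    The conversion step described above amounts to multiplying an elastic-plastic strain range by the elastic modulus to obtain a pseudo-elastic stress range, which can then be fed into the existing stress-based usage-factor machinery. The sketch below illustrates this with a made-up S-N relation and Miner-rule accumulation; the constants are placeholders, not design-code fatigue curves.

```python
E = 182000.0   # assumed Young's modulus in MPa at operating temperature

def pseudo_stress_range(strain_range):
    """Pseudo-elastic stress range from an elastic-plastic strain range."""
    return E * strain_range

def allowable_cycles(stress_range_mpa):
    """Placeholder S-N relation N = C / (delta_sigma)^m, purely illustrative."""
    return 1.0e12 / stress_range_mpa ** 3

def cumulative_usage(strain_ranges, counted_cycles):
    """Miner-rule accumulation over the counted load events."""
    return sum(n / allowable_cycles(pseudo_stress_range(eps))
               for eps, n in zip(strain_ranges, counted_cycles))

# Two counted load events with their elastic-plastic strain ranges (invented).
print(cumulative_usage(strain_ranges=[0.004, 0.002], counted_cycles=[150, 2000]))
```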

  16. Parallelization of the molecular dynamics code GROMOS87 for distributed memory parallel architectures

    NARCIS (Netherlands)

    Green, DG; Meacham, KE; vanHoesel, F; Hertzberger, B; Serazzi, G

    1995-01-01

    This paper describes the techniques and methodologies employed during parallelization of the Molecular Dynamics (MD) code GROMOS87, with the specific requirement that the program run efficiently on a range of distributed-memory parallel platforms. We discuss the preliminary results of our parallel

  17. ABAQUS-EPGEN: a general-purpose finite element code. Volume 3. Example problems manual

    International Nuclear Information System (INIS)

    Hibbitt, H.D.; Karlsson, B.I.; Sorensen, E.P.

    1983-03-01

    This volume is the Example and Verification Problems Manual for ABAQUS/EPGEN. Companion volumes are the User's, Theory and Systems Manuals. This volume contains two major parts. The bulk of the manual (Sections 1-8) contains worked examples that are discussed in detail, while Appendix A documents a large set of basic verification cases that provide the fundamental check of the elements in the code. The examples in Sections 1-8 illustrate and verify significant aspects of the program's capability. Most of these problems provide verification, but they have also been chosen to allow discussion of modeling and analysis techniques. Appendix A contains basic verification cases. Each of these cases verifies one element in the program's library. The verification consists of applying all possible load or flux types (including thermal loading of stress elements), and all possible foundation or film/radiation conditions, and checking the resulting force and stress solutions or flux and temperature results. This manual provides program verification. All of the problems described in the manual are run and the results checked, for each release of the program, and these verification results are made available

  18. Guided waves dispersion equations for orthotropic multilayered pipes solved using standard finite elements code.

    Science.gov (United States)

    Predoi, Mihai Valentin

    2014-09-01

    The dispersion curves for hollow multilayered cylinders are prerequisites for any practical guided-wave application on such structures. The equations for homogeneous isotropic materials were established more than 120 years ago. The difficulties in finding numerical solutions to the analytic expressions remain considerable, especially if the materials are orthotropic and visco-elastic, as in the composites used for pipes in recent decades. Among other numerical techniques, the semi-analytical finite element method has proven its capability of solving this problem. Two possibilities exist to model a finite element eigenvalue problem: a two-dimensional cross-section model of the pipe, or a radial segment model intersecting the layers between the inner and the outer radius of the pipe. The latter possibility is adopted here, and distinct differential problems are deduced for longitudinal L(0,n), torsional T(0,n) and flexural F(m,n) modes. Eigenvalue problems are deduced for the three mode classes, offering explicit forms of each coefficient for the matrices used in an available general-purpose finite element code. Comparisons with existing solutions for pipes filled with non-linear viscoelastic fluid or with visco-elastic coatings, as well as for a fully orthotropic hollow cylinder, all prove the reliability and ease of use of this method. Copyright © 2014 Elsevier B.V. All rights reserved.
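
    In semi-analytical formulations of this kind, each wavenumber typically yields a generalized eigenvalue problem whose eigenvalues give the frequencies of the propagating modes; sweeping the wavenumber traces out the dispersion curves. A schematic sketch with random placeholder matrices (not the cross-section matrices of the paper, and omitting the skew-symmetric coupling term for brevity):

        import numpy as np
        from scipy.linalg import eigh

        # For each axial wavenumber k, solve (K1 + k**2 * K3) u = w**2 * M u
        # for the modal frequencies w.  K1, K3 and M are placeholders.
        n = 20
        rng = np.random.default_rng(1)
        A = rng.standard_normal((n, n))
        K1 = A @ A.T + n * np.eye(n)     # stiffness-like, symmetric positive definite
        K3 = np.eye(n)                   # k**2 contribution (placeholder)
        M = np.eye(n)                    # mass matrix (placeholder)

        wavenumbers = np.linspace(0.1, 50.0, 100)        # rad/m
        branches = []
        for k in wavenumbers:
            w2 = eigh(K1 + k**2 * K3, M, eigvals_only=True)
            branches.append(np.sqrt(np.abs(w2[:5])))     # first few modes, rad/s

        dispersion = np.array(branches)   # omega(k) for the lowest five branches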

  19. Porting the 3D Gyrokinetic Particle-in-cell Code GTC to the CRAY/NEC SX-6 Vector Architecture: Perspectives and Challenges

    International Nuclear Information System (INIS)

    Ethier, S.; Lin, Z.

    2003-01-01

    Several years of optimization for super-scalar architectures have made it more difficult to port the current version of the 3D particle-in-cell code GTC to the CRAY/NEC SX-6 vector architecture. This paper explains the initial work that has been done to port this code to the SX-6 computer and to optimize the most time-consuming parts. Early performance results are shown and compared to the same test done on the IBM SP Power 3 and Power 4 machines.

  20. Development of Galerkin Finite Element Method Three-dimensional Computational Code for the Multigroup Neutron Diffusion Equation with Unstructured Tetrahedron Elements

    Directory of Open Access Journals (Sweden)

    Seyed Abolfazl Hosseini

    2016-02-01

    In the present paper, development of a three-dimensional (3D) computational code based on the Galerkin finite element method (GFEM) for solving the multigroup forward/adjoint diffusion equation in both rectangular and hexagonal geometries is reported. Linear approximation of shape functions in the GFEM with unstructured tetrahedron elements is used in the calculation. Both criticality and fixed source calculations may be performed using the developed GFEM-3D computational code. An acceptable level of accuracy at a low computational cost is the main advantage of applying the unstructured tetrahedron elements. The unstructured tetrahedron elements generated with Gambit software are used in the GFEM-3D computational code through a developed interface. The forward/adjoint multiplication factor, forward/adjoint flux distribution, and power distribution in the reactor core are calculated using the power iteration method. Criticality calculations are benchmarked against the valid solution of the neutron diffusion equation for the International Atomic Energy Agency (IAEA-3D) and Water-Water Energetic Reactor (VVER-1000) reactor cores. In addition, validation of the calculations against the P1 approximation of the transport theory is investigated in relation to the liquid metal fast breeder reactor benchmark problem. The neutron fixed source calculations are benchmarked through a comparison with the results obtained from similar computational codes. Finally, an analysis of the sensitivity of calculations to the number of elements is performed.
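
    The power iteration mentioned in the abstract updates the fission source and the multiplication factor until both converge. A compact sketch on an abstract matrix form of the discretized one-group diffusion equation (the two operators below are made-up placeholders, not GFEM-3D matrices):

        import numpy as np

        def power_iteration(M, F, tol=1e-8, max_it=500):
            """Solve M*phi = (1/k) * F*phi for the fundamental mode by power iteration.

            M : migration/loss operator (dense here for simplicity)
            F : fission production operator
            """
            n = M.shape[0]
            phi = np.ones(n)
            k = 1.0
            for _ in range(max_it):
                source = F @ phi / k
                phi_new = np.linalg.solve(M, source)          # flux update
                k_new = k * np.sum(F @ phi_new) / np.sum(F @ phi)
                if abs(k_new - k) < tol:
                    return k_new, phi_new / np.linalg.norm(phi_new)
                k, phi = k_new, phi_new
            return k, phi / np.linalg.norm(phi)

        # Tiny two-node illustration with made-up one-group constants.
        M = np.array([[0.30, -0.05], [-0.05, 0.25]])
        F = np.array([[0.20, 0.0], [0.0, 0.18]])
        k_eff, flux = power_iteration(M, F)
        print(k_eff, flux)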

  1. Cellulose as an Architectural Element in Spatially Structured Escherichia coli Biofilms

    Science.gov (United States)

    Serra, Diego O.; Richter, Anja M.

    2013-01-01

    Morphological form in multicellular aggregates emerges from the interplay of genetic constitution and environmental signals. Bacterial macrocolony biofilms, which form intricate three-dimensional structures, such as large and often radially oriented ridges, concentric rings, and elaborate wrinkles, provide a unique opportunity to understand this interplay of “nature and nurture” in morphogenesis at the molecular level. Macrocolony morphology depends on self-produced extracellular matrix components. In Escherichia coli, these are stationary phase-induced amyloid curli fibers and cellulose. While the widely used “domesticated” E. coli K-12 laboratory strains are unable to generate cellulose, we could restore cellulose production and macrocolony morphology of E. coli K-12 strain W3110 by “repairing” a single chromosomal SNP in the bcs operon. Using scanning electron and fluorescence microscopy, cellulose filaments, sheets and nanocomposites with curli fibers were localized in situ at cellular resolution within the physiologically two-layered macrocolony biofilms of this “de-domesticated” strain. As an architectural element, cellulose confers cohesion and elasticity, i.e., tissue-like properties that—together with the cell-encasing curli fiber network and geometrical constraints in a growing colony—explain the formation of long and high ridges and elaborate wrinkles of wild-type macrocolonies. In contrast, a biofilm matrix consisting of the curli fiber network only is brittle and breaks into a pattern of concentric dome-shaped rings separated by deep crevices. These studies now set the stage for clarifying how regulatory networks and in particular c-di-GMP signaling operate in the three-dimensional space of highly structured and “tissue-like” bacterial biofilms. PMID:24097954

  2. Modification of the finite element heat and mass transfer code (FEHMN) to model multicomponent reactive transport

    International Nuclear Information System (INIS)

    Viswanathan, H.S.

    1995-01-01

    The finite element code FEHMN is a three-dimensional finite element heat and mass transport simulator that can handle complex stratigraphy and nonlinear processes such as vadose zone flow, heat flow and solute transport. Scientists at LANL have developed hydrologic flow and transport models of the Yucca Mountain site using FEHMN. Previous FEHMN simulations have used an equivalent Kd model to model solute transport. In this thesis, FEHMN is modified, making it possible to simulate the transport of a species with a rigorous chemical model. Including the rigorous chemical equations in FEHMN simulations should provide more representative transport models for highly reactive chemical species. A fully kinetic formulation is chosen for the FEHMN reactive transport model. Several methods are available to computationally implement a fully kinetic formulation. Different numerical algorithms are investigated in order to optimize the computational efficiency and memory requirements of the reactive transport model. The best algorithm of those investigated is then incorporated into FEHMN. The algorithm chosen requires the user to place strongly coupled species into groups, which are then solved for simultaneously using FEHMN. The complete reactive transport model is verified over a wide variety of problems and is shown to be working properly. The simulations demonstrate that gas flow and carbonate chemistry can significantly affect 14C transport at Yucca Mountain. The simulations also show that the new capabilities of FEHMN can be used to refine and buttress already existing Yucca Mountain radionuclide transport studies

  3. Exposing Hierarchical Parallelism in the FLASH Code for Supernova Simulation on Summit and Other Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Papatheodore, Thomas L. [ORNL; Messer, Bronson [ORNL

    2017-11-01

    Since roughly 100 million years after the big bang, the primordial elements hydrogen (H), helium (He), and lithium (Li) have been synthesized into heavier elements by thermonuclear reactions inside stars. The change in stellar composition resulting from these reactions causes stars to evolve over the course of their lives. Most stars burn through their nuclear fuel and end their lives quietly as inert, compact objects, whereas others end in explosive deaths. These stellar explosions are called supernovae and are among the most energetic events known to occur in our universe. Supernovae themselves further process the matter of their progenitor stars and distribute this material into the interstellar medium of their host galaxies. In the process, they generate ~10^51 ergs of kinetic energy by sending shock waves into their surroundings, thereby contributing to galactic dynamics as well.

  4. Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows

    International Nuclear Information System (INIS)

    Xia, Yidong; Wang, Chuanjin; Luo, Hong; Christon, Mark; Bakosi, Jozsef

    2016-01-01

    Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent for this test problem suite is to provide baseline comparison data that demonstrates the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models were used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, some form of solution verification has been attempted to identify sensitivities in the solution methods, and suggest best practices when using the Hydra-TH code. -- Highlights: •We performed a comprehensive study to verify and validate the turbulence models in Hydra-TH. •Hydra-TH delivers 2nd-order grid convergence for the incompressible Navier–Stokes equations. •Hydra-TH can accurately simulate the laminar boundary layers. •Hydra-TH can accurately simulate the turbulent boundary layers with RANS turbulence models. •Hydra-TH delivers high-fidelity LES capability for simulating turbulent flows in confined space.
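
    The grid-convergence statement in the highlights (second-order convergence for the incompressible Navier–Stokes equations) is the kind of claim that solution verification quantifies: with a solution functional from three systematically refined grids, the observed order of accuracy follows from a Richardson-type estimate. A small sketch with made-up values:

        import math

        # Solution functional (e.g. a drag coefficient) on three grids refined by a
        # constant factor r: f1 = finest, f3 = coarsest.  Values are illustrative.
        f1, f2, f3 = 1.0012, 1.0050, 1.0205
        r = 2.0   # grid refinement ratio

        # Observed order of accuracy p and Richardson-extrapolated value.
        p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)
        f_exact_est = f1 + (f1 - f2) / (r**p - 1.0)

        print(f"observed order p = {p:.2f}")          # close to 2 for a second-order scheme
        print(f"extrapolated value = {f_exact_est:.5f}")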

  5. SAFE: A computer code for the steady-state and transient thermal analysis of LMR fuel elements

    International Nuclear Information System (INIS)

    Hayes, S.L.

    1993-12-01

    SAFE is a computer code developed for both the steady-state and transient thermal analysis of single LMR fuel elements. The code employs a two-dimensional control-volume based finite difference methodology with fully implicit time marching to calculate the temperatures throughout a fuel element and its associated coolant channel for both the steady-state and transient events. The code makes no structural calculations or predictions whatsoever. It does, however, accept as input structural parameters within the fuel such as the distributions of porosity and fuel composition, as well as heat generation, to allow a thermal analysis to be performed on a user-specified fuel structure. The code was developed with ease of use in mind. An interactive input file generator and material property correlations internal to the code are available to expedite analyses using SAFE. This report serves as a complete design description of the code as well as a user's manual. A sample calculation made with SAFE is included to highlight some of the code's features. Complete input and output files for the sample problem are provided
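
    A minimal sketch of the fully implicit control-volume time marching that the abstract describes, reduced here to one-dimensional conduction with constant properties and fixed coolant-side temperatures (all values illustrative, not SAFE itself):

        import numpy as np

        # Fully implicit (backward Euler) solution of 1-D transient conduction with
        # volumetric heat generation; both ends held at the coolant temperature.
        n, L = 50, 0.005                       # control volumes, slab width (m)
        dx = L / n
        k, rho, cp = 20.0, 10_000.0, 300.0     # W/m-K, kg/m^3, J/kg-K (assumed)
        q = 3.0e8                              # volumetric heat generation, W/m^3
        dt, t_end = 0.01, 1.0                  # s
        alpha = k / (rho * cp)
        r = alpha * dt / dx**2

        # Tridiagonal system: -r*T[i-1] + (1+2r)*T[i] - r*T[i+1] = T_old[i] + dt*q/(rho*cp)
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = 1.0 + 2.0 * r
            if i > 0:
                A[i, i - 1] = -r
            if i < n - 1:
                A[i, i + 1] = -r

        T = np.full(n, 600.0)                  # initial temperature, K
        T_coolant = 600.0
        for _ in range(int(t_end / dt)):
            b = T + dt * q / (rho * cp)
            b[0] += r * T_coolant              # Dirichlet boundaries folded into RHS
            b[-1] += r * T_coolant
            T = np.linalg.solve(A, b)

        print(T.max())                         # peak temperature after t_end seconds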

  6. Modification of the finite element heat and mass transfer code (FEHM) to model multicomponent reactive transport

    International Nuclear Information System (INIS)

    Viswanathan, H.S.

    1996-08-01

    The finite element code FEHMN, developed by scientists at Los Alamos National Laboratory (LANL), is a three-dimensional finite element heat and mass transport simulator that can handle complex stratigraphy and nonlinear processes such as vadose zone flow, heat flow and solute transport. Scientists at LANL have been developing hydrologic flow and transport models of the Yucca Mountain site using FEHMN. Previous FEHMN simulations have used an equivalent Kd model to model solute transport. In this thesis, FEHMN is modified, making it possible to simulate the transport of a species with a rigorous chemical model. Including the rigorous chemical equations in FEHMN simulations should provide more representative transport models for highly reactive chemical species. A fully kinetic formulation is chosen for the FEHMN reactive transport model. Several methods are available to computationally implement a fully kinetic formulation. Different numerical algorithms are investigated in order to optimize the computational efficiency and memory requirements of the reactive transport model. The best algorithm of those investigated is then incorporated into FEHMN. The algorithm chosen requires the user to place strongly coupled species into groups, which are then solved for simultaneously using FEHMN. The complete reactive transport model is verified over a wide variety of problems and is shown to be working properly. The new chemical capabilities of FEHMN are illustrated by using Los Alamos National Laboratory's site scale model of Yucca Mountain to model two-dimensional, vadose zone 14C transport. The simulations demonstrate that gas flow and carbonate chemistry can significantly affect 14C transport at Yucca Mountain. The simulations also prove that the new capabilities of FEHMN can be used to refine and buttress already existing Yucca Mountain radionuclide transport studies
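
    A toy illustration of the fully kinetic idea described above: within one grid cell, a group of strongly coupled species (here a single reversible pair A <-> B with made-up rate constants) is advanced over each transport step with backward Euler and a Newton solve of the coupled equations. This is a sketch of the general technique, not code from the thesis:

        import numpy as np

        kf, kr = 2.0, 0.5        # forward/backward rate constants, 1/s (assumed)
        dt = 0.1                 # time step, s

        def residual(c_new, c_old):
            a, b = c_new
            a0, b0 = c_old
            rate = kf * a - kr * b
            return np.array([a - a0 + dt * rate,
                             b - b0 - dt * rate])

        def jacobian():
            return np.array([[1.0 + dt * kf, -dt * kr],
                             [-dt * kf, 1.0 + dt * kr]])

        c = np.array([1.0, 0.0])             # initial concentrations of A and B
        for _ in range(50):                  # 50 transport steps
            c_new = c.copy()
            for _ in range(20):              # Newton iterations (converges quickly)
                dc = np.linalg.solve(jacobian(), -residual(c_new, c))
                c_new += dc
                if np.linalg.norm(dc) < 1e-12:
                    break
            c = c_new

        print(c)    # approaches the equilibrium ratio kr/(kf+kr), kf/(kf+kr)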

  7. THE MEDIEVAL AND OTTOMAN HAMMAMS OF ALGERIA; ELEMENTS FOR A HISTORICAL STUDY OF BATHS ARCHITECTURE IN NORTH AFRICA

    Directory of Open Access Journals (Sweden)

    Nabila Cherif-Seffadj

    2009-03-01

    Algerian medinas (Islamic cities) have several traditional public baths (hammams). However, these hammams are the least known in the Maghreb countries. The first French archaeological surveys carried out on Islamic monuments and sites in Algeria found few historic baths in medieval towns. All along the highlands route, from Algiers (the capital city of Algeria, located in the north) to Tlemcen (a city in the western part of Algeria), these structures are found in all the cities founded after the Islamic religion expanded into western North Africa. These buildings are often associated with large mosques. In architectural history, these baths illustrate original spatial and organizational compositions in their form proportions, methods of construction, ornamental elements and the technical skills of their builders. The ancient traditions of bathing interpreted in this building type are an undeniable legacy. They are present through architectural typology and technical implementation reflecting the important architectural heritage of the great Roman cities in Algeria. Furthermore, these traditions and buildings evolved through different eras. Master builders who left Andalusia to seek refuge in the Maghreb countries added the construction and ornamentation skills and techniques brought from Muslim Spain, while the Ottoman contribution to the history of many cities is also important. Hence, the dual appellation of the hammam as “Moorish bath” and “Turkish bath” in Algeria is the perfect illustration of the evolution of bath architecture in Algeria.

  8. A nonlinear, implicit, three-dimensional finite element code for solid and structural mechanics - User's Manual

    Energy Technology Data Exchange (ETDEWEB)

    Maker, B.N.

    1995-04-14

    This report provides a user's manual for NIKE3D, a fully implicit three-dimensional finite element code for analyzing the finite strain static and dynamic response of inelastic solids, shells, and beams. Spatial discretization is achieved by the use of 8-node solid elements, 2-node truss and beam elements, and 4-node membrane and shell elements. Over twenty constitutive models are available for representing a wide range of elastic, plastic, viscous, and thermally dependent material behavior. Contact-impact algorithms permit gaps, frictional sliding, and mesh discontinuities along material interfaces. Several nonlinear solution strategies are available, including Full-, Modified-, and Quasi-Newton methods. The resulting system of simultaneous linear equations is either solved iteratively by an element-by-element method, or directly by a factorization method, in which case bandwidth minimization is optional. Data may be stored either in or out of core memory to allow for large analyses.

  9. Finite elements numerical codes as primary tool to improve beam optics in NIO1

    Science.gov (United States)

    Baltador, C.; Cavenago, M.; Veltri, P.; Serianni, G.

    2017-08-01

    The RF negative ion source NIO1, built at Consorzio RFX in Padua (Italy), is aimed at investigating general issues of ion source physics in view of the full-size ITER injector MITICA as well as DEMO-relevant solutions, like energy recovery and alternative neutralization systems, crucial for neutral beam injectors in future fusion experiments. NIO1 has been designed to produce 9 H-beamlets (in a 3x3 pattern) of 15 mA each at 60 keV, using a three-electrode system downstream of the plasma source. At the moment the source is at an early operational stage and only operation at low power and low beam energy is possible. In particular, the current set of SmCo co-extraction electron suppression magnets (CESM) in the extraction grid (EG) of NIO1 is too strong and will be replaced by a weaker set of ferrite magnets. A completely new set of magnets will also be designed and mounted on the new EG that will be installed next year, replacing the present one. In this paper, the finite element code OPERA 3D is used to investigate the effects of the three sets of magnets on beamlet optics. A comparison of numerical results with measurements will be provided where possible.

  10. High-Fidelity Buckling Analysis of Composite Cylinders Using the STAGS Finite Element Code

    Science.gov (United States)

    Hilburger, Mark W.

    2014-01-01

    Results from previous shell buckling studies are presented that illustrate some of the unique and powerful capabilities in the STAGS finite element analysis code that have made it an indispensable tool in structures research at NASA over the past few decades. In particular, prototypical results from the development and validation of high-fidelity buckling simulations are presented for several unstiffened thin-walled compression-loaded graphite-epoxy cylindrical shells along with a discussion on the specific methods and user-defined subroutines in STAGS that are used to carry out the high-fidelity simulations. These simulations accurately account for the effects of geometric shell-wall imperfections, shell-wall thickness variations, local shell-wall ply-gaps associated with the fabrication process, shell-end geometric imperfections, nonuniform applied end loads, and elastic boundary conditions. The analysis procedure uses a combination of nonlinear quasi-static and transient dynamic solution algorithms to predict the prebuckling and unstable collapse response characteristics of the cylinders. Finally, the use of high-fidelity models in the development of analysis-based shell-buckling knockdown (design) factors is demonstrated.

  11. Ethical codes. Fig leaf argument, ballast or cultural element for radiation protection?

    International Nuclear Information System (INIS)

    Gellermann, Rainer

    2014-01-01

    The International Radiation Protection Association (IRPA) adopted a Code of Ethics in May 2004 so that its members can maintain an adequate professional standard of ethical conduct. Based on this code of ethics, the professional body for radiation protection (Fachverband fuer Strahlenschutz) developed its own code of ethics and adopted it in 2005.

  12. A dual origin of the Xist gene from a protein-coding gene and a set of transposable elements.

    Directory of Open Access Journals (Sweden)

    Eugeny A Elisaphenko

    2008-06-01

    X-chromosome inactivation, which occurs in female eutherian mammals, is controlled by a complex X-linked locus termed the X-inactivation center (XIC). Previously it was proposed that genes of the XIC evolved, at least in part, as a result of pseudogenization of protein-coding genes. In this study we show that the key XIC gene Xist, which displays fragmentary homology to the protein-coding gene Lnx3, emerged de novo in early eutherians by integration of mobile elements which gave rise to simple tandem repeats. The Xist gene promoter region and four out of ten exons found in eutherians retain homology to exons of the Lnx3 gene. The remaining six Xist exons, including those with simple tandem repeats detectable in their structure, have similarity to different transposable elements. Integration of mobile elements into Xist accompanies the overall evolution of the gene and presumably continues in contemporary eutherian species. Additionally, we show that the combination of remnants of protein-coding sequences and mobile elements is not unique to the Xist gene and is found in other XIC genes producing non-coding nuclear RNA.

  13. Transduplication resulted in the incorporation of two protein-coding sequences into the Turmoil-1 transposable element of C. elegans

    Directory of Open Access Journals (Sweden)

    Pupko Tal

    2008-10-01

    Transposable elements may acquire unrelated gene fragments into their sequences in a process called transduplication. Transduplication of protein-coding genes is common in plants, but is unknown in animals. Here, we report that the Turmoil-1 transposable element in C. elegans has incorporated two protein-coding sequences into its inverted terminal repeat (ITR) sequences. The ITRs of Turmoil-1 contain a conserved RNA recognition motif (RRM) that originated from the rsp-2 gene and a fragment from the protein-coding region of the cpg-3 gene. We further report that an open reading frame specific to C. elegans may have been created as a result of a Turmoil-1 insertion. Mutations at the 5' splice site of this open reading frame may have reactivated the transduplicated RRM motif. Reviewers: This article was reviewed by Dan Graur and William Martin. For the full reviews, please go to the Reviewers' Reports section.

  14. Evaluation of finite element codes for demonstrating the performance of radioactive material packages in hypothetical accident drop scenarios

    International Nuclear Information System (INIS)

    Tso, C.F.; Hueggenberg, R.

    2004-01-01

    Drop testing and analysis are the two methods for demonstrating the performance of packages in hypothetical drop accident scenarios. The exact purpose of the tests and the analyses, and the relative prominence of the two in the license application, may depend on the Competent Authority and will vary between countries. The Finite Element Method (FEM) is a powerful analysis tool. A reliable finite element (FE) code, when used correctly and appropriately, will allow a package's behaviour to be simulated reliably. With improvements in computing power, and in the sophistication and reliability of FE codes, it is likely that FEM calculations will increasingly be used as evidence of drop test performance when seeking Competent Authority approval. What is lacking at the moment, however, is a standardised method of assessing an FE code in order to determine whether it is sufficiently reliable or pessimistic. To this end, the project Evaluation of Codes for Analysing the Drop Test Performance of Radioactive Material Transport Containers, funded by the European Commission Directorate-General XVII (now Directorate-General for Energy and Transport) and jointly performed by Arup and Gesellschaft fuer Nuklear-Behaelter mbH, was carried out in 1998. The work consisted of three components: a survey of existing finite element software, with a view to finding codes that may be capable of analysing the drop test performance of radioactive material packages, and producing an inventory of them; the development of a set of benchmark problems to evaluate software used for analysing the drop test performance of packages; and the evaluation of the finite element codes by testing them against the benchmarks. This paper presents a summary of this work

  15. RegSEM: a versatile code based on the spectral element method to compute seismic wave propagation at the regional scale

    Science.gov (United States)

    Cupillard, Paul; Delavaud, Elise; Burgos, Gaël.; Festa, Geatano; Vilotte, Jean-Pierre; Capdeville, Yann; Montagner, Jean-Paul

    2012-03-01

    The spectral element method, which provides an accurate solution of the elastodynamic problem in heterogeneous media, is implemented in a code, called RegSEM, to compute seismic wave propagation at the regional scale. By regional scale we here mean distances ranging from about 1 km (local scale) to 90° (continental scale). The advantage of RegSEM resides in its ability to accurately take into account 3-D discontinuities such as the sediment-rock interface and the Moho. For this purpose, one version of the code handles local unstructured meshes and another version manages continental structured meshes. The wave equation can be solved in any velocity model, including anisotropy and intrinsic attenuation in the continental version. To validate the code, results from RegSEM are compared to analytical and semi-analytical solutions available in simple cases (e.g. explosion in PREM, plane wave in a hemispherical basin). In addition, realistic simulations of an earthquake in different tomographic models of Europe are performed. All these simulations show the great flexibility of the code and point out the large influence of the shallow layers on the propagation of seismic waves at the regional scale. RegSEM is written in Fortran 90 but it also contains a couple of C routines. It is an open-source software which runs on distributed memory architectures. It can give rise to interesting applications, such as testing regional tomographic models, developing tomography using either passive (i.e. noise correlations) or active (i.e. earthquakes) data, or improving our knowledge on effects linked with sedimentary basins.

  16. Development of a three-dimensional neutron transport code DFEM based on the double finite element method

    International Nuclear Information System (INIS)

    Fujimura, Toichiro

    1996-01-01

    A three-dimensional neutron transport code, DFEM, has been developed based on the double finite element method to analyze reactor cores with complex geometry, such as large fast reactors. The solution algorithm is based on the double finite element method, in which both space and angle finite elements are employed. A reactor core system can be divided into triangular and/or quadrangular prism elements, and the spatial distribution of the neutron flux in each element is approximated with linear basis functions. As for the angular variables, various basis functions are applied, and their characteristics were clarified by comparison. In order to enhance the accuracy, a general method is derived to remedy the truncation errors at reflective boundaries, which are inherent in the conventional FEM. An adaptive acceleration method and the source extrapolation method were applied to accelerate the convergence of the iterations. The code structure is outlined and explanations are given on how to prepare input data. A sample input list is shown for reference. The eigenvalue and flux distribution for real-scale fast reactors and the NEA benchmark problems are presented and discussed in comparison with the results of other transport codes. (author)

  17. Development of CAPP code based on the finite element method for the analysis of VHTR cores - HTR2008-58169

    International Nuclear Information System (INIS)

    Lee, H. C.; Jo, C. K.; Noh, J. M.

    2008-01-01

    In this study, we developed a neutron diffusion equation solver based on the finite element method for the CAPP code. Three types of triangular finite elements and five types of rectangular finite elements, depending on the order of the shape functions, were implemented for 2-D applications. Ten types of triangular prismatic finite elements and seventeen types of rectangular prismatic finite elements were also implemented for 3-D applications. Two types of polynomial mapping from the master finite element to a real finite element were adopted for flexibility in dealing with complex geometry. They are linear mapping and iso-parametric mapping. In linear mapping, only the vertex nodes are used as the mapping points. In iso-parametric mapping, all the nodal points in the finite element are used as the mapping points, which enables the real finite elements to have curved surfaces. For the treatment of the spatial dependency of cross-sections in the finite elements, three types of polynomial expansion of the cross-sections were implemented. They are constant, linear, and iso-parametric cross-section expansions. The power method with the Wielandt acceleration technique was adopted as the outer iteration algorithm. The BiCGSTAB algorithm with an ILU (incomplete LU) decomposition pre-conditioner was used as the linear equation solver in the inner iteration. The neutron diffusion equation solver developed in this study was verified against two well-known benchmark problems, the IAEA PWR benchmark problem and the OECD/NEA PBMR400 benchmark problem. Results of numerical tests showed that the solution converges to the reference solution as the finite elements are refined and as the order of the finite elements increases. Numerical tests also showed that the higher-order finite element method is much more efficient than the lower-order finite element method or the finite difference method. (authors)
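
    The inner linear solve described above can be sketched in a few lines with an incomplete-LU-preconditioned BiCGSTAB iteration; the matrix below is a generic sparse diffusion-like operator, not the CAPP system:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

        # Generic sparse 1-D diffusion-like operator as a stand-in test matrix.
        n = 200
        main = 2.0 * np.ones(n)
        off = -1.0 * np.ones(n - 1)
        A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
        b = np.ones(n)

        ilu = spilu(A, drop_tol=1e-4)                     # incomplete LU factorization
        M = LinearOperator(A.shape, ilu.solve)            # preconditioner M ~ A^-1

        x, info = bicgstab(A, b, M=M)
        print(info, np.linalg.norm(A @ x - b))            # info == 0 means converged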

  18. Elements of regional architecture in the works of architect Ivan Antić

    Directory of Open Access Journals (Sweden)

    Milašinović-Marić Dijana

    2017-01-01

    The body of work of Ivan Antić (Belgrade, 1923-2005), one of the most important Serbian architects, who created his works in the period from 1955 to 1990, represents almost a reification of the ideals of the times he lived in, both in terms of form and in structural and substantive terms. His work is placed within a rationalistic concept which is essentially experienced as an undisturbed harmony between his personality and contemporary architectural expression. However, besides such an interpretation, his architecture also includes examples that indicate a reflection on folk tradition, architectural heritage, the primordial and the archetypal, typical of a region. In the context of the body of work of architect Ivan Antić, this paper places particular emphasis on and traces such threads of thinking, which are expressed, more or less transparently, in a series of realized solutions and designs such as the Guard's Home in Dedinje (Belgrade, 1957-1958), the Children's Home (Jermenovci, 1956-1957), the Museum of the Genocide in Šumarice, which he designed together with I. Raspopović (Kragujevac, 1968-1975), the 'Politika' Cultural Centre (Krupanj, 1976-1981), the '25th May' Sports Center (Belgrade, 1971-1973), and his own house in Lisović near Belgrade. All the above-mentioned buildings, as well as numerous others that belong at the top of Serbian architecture, reflect the spirit of the time in which he created them. They clearly indicate the unbreakable bond which exists in architecture between the inherited, the vernacular, the contemporary and the architect's personal attitude.

  19. Towards the optimization of a gyrokinetic Particle-In-Cell (PIC) code on large-scale hybrid architectures

    International Nuclear Information System (INIS)

    Ohana, N; Lanti, E; Tran, T M; Brunner, S; Hariri, F; Villard, L; Jocksch, A; Gheller, C

    2016-01-01

    With the aim of enabling state-of-the-art gyrokinetic PIC codes to benefit from the performance of recent multithreaded devices, we developed an application from a platform called the “PIC-engine” [1, 2, 3] embedding simplified basic features of the PIC method. The application solves the gyrokinetic equations in a sheared plasma slab using B-spline finite elements up to fourth order to represent the self-consistent electrostatic field. Preliminary studies of the so-called Particle-In-Fourier (PIF) approach, which uses Fourier modes as basis functions in the periodic dimensions of the system instead of the real-space grid, show that this method can be faster than PIC for simulations with a small number of Fourier modes. Similarly to the PIC-engine, multiple levels of parallelism have been implemented using MPI+OpenMP [2] and MPI+OpenACC [1], the latter exploiting the computational power of GPUs without requiring complete code rewriting. It is shown that sorting particles [3] can lead to performance improvement by increasing data locality and vectorizing grid memory access. Weak scalability tests have been successfully run on the GPU-equipped Cray XC30 Piz Daint (at CSCS) up to 4,096 nodes. The reduced time-to-solution will enable more realistic and thus more computationally intensive simulations of turbulent transport in magnetic fusion devices. (paper)
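
    The Particle-In-Fourier idea mentioned above replaces deposition onto a real-space grid by direct accumulation of a small number of Fourier modes of the charge density, with the field evaluated back at the particle positions from those modes. A one-dimensional, electrostatic sketch in normalized units (made-up particle data, not the PIC-engine itself):

        import numpy as np

        rng = np.random.default_rng(2)

        # 1-D periodic domain of length L with N particles; accumulate the charge
        # density directly in a small number of Fourier modes instead of on a grid.
        L, N, n_modes = 2.0 * np.pi, 100_000, 8
        x = rng.uniform(0.0, L, N)           # particle positions
        w = np.ones(N) / N                   # particle weights (uniform here)

        modes = np.arange(1, n_modes + 1)
        k = 2.0 * np.pi * modes / L
        rho_hat = np.array([np.sum(w * np.exp(-1j * kk * x)) for kk in k])

        # Electrostatic potential mode-by-mode from Poisson's equation (normalized).
        phi_hat = rho_hat / k**2

        # Field evaluated back at the particle positions (no grid interpolation needed).
        E = np.zeros(N)
        for kk, ph in zip(k, phi_hat):
            E += np.real(-1j * kk * ph * np.exp(1j * kk * x)) * 2.0   # + complex conjugate
        print(E[:5])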

  20. Implementation of the full viscoresistive magnetohydrodynamic equations in a nonlinear finite element code

    Energy Technology Data Exchange (ETDEWEB)

    Haverkort, J.W. [Centrum Wiskunde & Informatica, P.O. Box 94079, 1090 GB Amsterdam (Netherlands); Dutch Institute for Fundamental Energy Research, P.O. Box 6336, 5600 HH Eindhoven (Netherlands); Blank, H.J. de [Dutch Institute for Fundamental Energy Research, P.O. Box 6336, 5600 HH Eindhoven (Netherlands); Huysmans, G.T.A. [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Pratt, J. [Dutch Institute for Fundamental Energy Research, P.O. Box 6336, 5600 HH Eindhoven (Netherlands); Koren, B., E-mail: b.koren@tue.nl [Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands)

    2016-07-01

    Numerical simulations form an indispensable tool to understand the behavior of a hot plasma that is created inside a tokamak for providing nuclear fusion energy. Various aspects of tokamak plasmas have been successfully studied through the reduced magnetohydrodynamic (MHD) model. The need for more complete modeling through the full MHD equations is addressed here. Our computational method is presented along with measures against possible problems regarding pollution, stability, and regularity. The problem of ensuring continuity of solutions in the center of a polar grid is addressed in the context of a finite element discretization of the full MHD equations. A rigorous and generally applicable solution is proposed here. Useful analytical test cases are devised to verify the correct implementation of the momentum and induction equation, the hyperdiffusive terms, and the accuracy with which highly anisotropic diffusion can be simulated. A striking observation is that highly anisotropic diffusion can be treated with the same order of accuracy as isotropic diffusion, even on non-aligned grids, as long as these grids are generated with sufficient care. This property is shown to be associated with our use of a magnetic vector potential to describe the magnetic field. Several well-known instabilities are simulated to demonstrate the capabilities of the new method. The linear growth rate of an internal kink mode and a tearing mode are benchmarked against the results of a linear MHD code. The evolution of a tearing mode and the resulting magnetic islands are simulated well into the nonlinear regime. The results are compared with predictions from the reduced MHD model. Finally, a simulation of a ballooning mode illustrates the possibility to use our method as an ideal MHD method without the need to add any physical dissipation.

  1. Relational Architecture

    DEFF Research Database (Denmark)

    Reeh, Henrik

    2018-01-01

    The present study of PhD education and its impact on architectural research singles out three layers of relational architecture. A first layer of relationality appears in a graphic model in which an intimate link between PhD education and architectural research is outlined. The model reflects...... in a scholarly institution (element #3), as well as the certified PhD scholar (element #4) and the architectural profession, notably its labour market (element #5). This first layer outlines the contemporary context which allows architectural research to take place in a dynamic relationship to doctoral education....... A second layer of relational architecture is revealed when one examines the conception of architecture generated in selected PhD dissertations. Focusing on six dissertations with which the author of the present article was involved as a supervisor, the analysis lays bare a series of dynamic...

  2. Rn3D: A finite element code for simulating gas flow and radon transport in variably saturated, nonisothermal porous media

    International Nuclear Information System (INIS)

    Holford, D.J.

    1994-01-01

    This document is a user's manual for the Rn3D finite element code. Rn3D was developed to simulate gas flow and radon transport in variably saturated, nonisothermal porous media. The Rn3D model is applicable to a wide range of problems involving radon transport in soil because it can simulate either steady-state or transient flow and transport in one-, two- or three-dimensions (including radially symmetric two-dimensional problems). The porous materials may be heterogeneous and anisotropic. This manual describes all pertinent mathematics related to the governing, boundary, and constitutive equations of the model, as well as the development of the finite element equations used in the code. Instructions are given for constructing Rn3D input files and executing the code, as well as a description of all output files generated by the code. Five verification problems are given that test various aspects of code operation, complete with example input files, FORTRAN programs for the respective analytical solutions, and plots of model results. An example simulation is presented to illustrate the type of problem Rn3D is designed to solve. Finally, instructions are given on how to convert Rn3D to simulate systems other than radon, air, and water

  3. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    International Nuclear Information System (INIS)

    Kirk, B.L.; Sartori, E.; Viedma, L.G. de

    1997-01-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management

  4. Slides for AMI CHEP presentation: Looking back on 10 years of the ATLAS Metadata Interface. Reflections on architecture, code design and development methods.

    CERN Document Server

    Fulachier, J; The ATLAS collaboration; Albrand, S; Lambert, F

    2013-01-01

    The “ATLAS Metadata Interface” framework (AMI) has been developed in the context of ATLAS, one of the largest scientific collaborations. AMI can be considered a mature application, since its basic architecture has been maintained for over 10 years. In this paper we briefly describe the architecture and the main uses of the framework within the experiment (TagCollector for release management and Dataset Discovery). These two applications, which share almost 2000 registered users, are superficially quite different; however, much of the code is shared, and they have been developed and maintained over a decade almost entirely by the same team of three people. We discuss how the architectural principles established at the beginning of the project have allowed us both to integrate new technologies and to respond to the new metadata use cases which inevitably appear over such a time period.

  5. Technology of Oak Architectural and Decorative Elements Manufacturing for Iconostasis Recreating in Krestovozdvizhensky Temple in Village of Syrostan, Chelyabinsk region

    Science.gov (United States)

    Yudin, V.

    2017-11-01

    Owing to the historical peculiarities of Russia, by the end of the 20th century many temples had been destroyed or had lost their iconostases, which were most often made of wood. When it became necessary to revive the traditional craft, it turned out that it had been lost almost completely, which negatively affects the quality of both the restoration of wooden iconostases and their new construction. The article aims to fill this loss of knowledge and skills, which make up the content of one of the most interesting types of architectural, monumental and decorative art, through a study of the forms of preserved fragments of a once very rich historical and cultural heritage. Similar studies of wooden iconostases aimed at recreating decorative oak elements and at restoration practice have not been performed so far, which makes the work particularly relevant for architectural science. New and relevant technological improvements are not rejected but are skillfully introduced into the arsenal of techniques and means of modern restorers and carvers, to facilitate the recovery of iconostasis construction from its crisis state and the subsequent continuation of the tradition. Deep knowledge of the research subject made it possible to manufacture decorative oak elements for recreating the iconostasis of the Krestovozdvizhensky temple in the village of Syrostan, the Chelyabinsk region. This material is of scientific and reference value, as well as of economic benefit, for all those who wish to join the noble traditional art of iconostasis making.

  6. Contribution of transposable elements and distal enhancers to evolution of human-specific features of interphase chromatin architecture in embryonic stem cells.

    Science.gov (United States)

    Glinsky, Gennadi V

    2018-03-01

    Transposable elements have made major evolutionary impacts on creation of primate-specific and human-specific genomic regulatory loci and species-specific genomic regulatory networks (GRNs). Molecular and genetic definitions of human-specific changes to GRNs contributing to development of unique to human phenotypes remain a highly significant challenge. Genome-wide proximity placement analysis of diverse families of human-specific genomic regulatory loci (HSGRL) identified topologically associating domains (TADs) that are significantly enriched for HSGRL and designated rapidly evolving in human TADs. Here, the analysis of HSGRL, hESC-enriched enhancers, super-enhancers (SEs), and specific sub-TAD structures termed super-enhancer domains (SEDs) has been performed. In the hESC genome, 331 of 504 (66%) of SED-harboring TADs contain HSGRL and 68% of SEDs co-localize with HSGRL, suggesting that emergence of HSGRL may have rewired SED-associated GRNs within specific TADs by inserting novel and/or erasing existing non-coding regulatory sequences. Consequently, markedly distinct features of the principal regulatory structures of interphase chromatin evolved in the hESC genome compared to mouse: the SED quantity is 3-fold higher and the median SED size is significantly larger. Concomitantly, the overall TAD quantity is increased by 42% while the median TAD size is significantly decreased (p = 9.11E-37) in the hESC genome. Present analyses illustrate a putative global role for transposable elements and HSGRL in shaping the human-specific features of the interphase chromatin organization and functions, which are facilitated by accelerated creation of novel transcription factor binding sites and new enhancers driven by targeted placement of HSGRL at defined genomic coordinates. A trend toward the convergence of TAD and SED architectures of interphase chromatin in the hESC genome may reflect changes of 3D-folding patterns of linear chromatin fibers designed to enhance both

  7. User's Manual for the FEHM Application-A Finite-Element Heat- and Mass-Transfer Code

    Energy Technology Data Exchange (ETDEWEB)

    George A. Zyvoloski; Bruce A. Robinson; Zora V. Dash; Lynn L. Trease

    1997-07-07

    This document is a manual for the use of the FEHM application, a finite-element heat- and mass-transfer computer code that can simulate nonisothermal multiphase multicomponent flow in porous media. The use of this code is applicable to natural-state studies of geothermal systems and groundwater flow. A primary use of the FEHM application will be to assist in the understanding of flow fields and mass transport in the saturated and unsaturated zones below the proposed Yucca Mountain nuclear waste repository in Nevada. The equations of heat and mass transfer for multiphase flow in porous and permeable media are solved in the FEHM application by using the finite-element method. The permeability and porosity of the medium are allowed to depend on pressure and temperature. The code also has provisions for movable air and water phases and noncoupled tracers; that is, tracer solutions that do not affect the heat- and mass-transfer solutions. The tracers can be passive or reactive. The code can simulate two-dimensional, two-dimensional radial, or three-dimensional geometries. In fact, FEHM is capable of describing flow that is dominated in many areas by fracture and fault flow, including the inherently three-dimensional flow that results from permeation to and from faults and fractures. The code can handle coupled heat and mass-transfer effects, such as boiling, dryout, and condensation that can occur in the near-field region surrounding the potential repository and the natural convection that occurs through Yucca Mountain due to seasonal temperature changes. The code is also capable of incorporating the various adsorption mechanisms, ranging from simple linear relations to nonlinear isotherms, needed to describe the very complex transport processes at Yucca Mountain. This report outlines the uses and capabilities of the FEHM application, initialization of code variables, restart procedures, and error processing. The report describes all the data files, the input data

  8. Implementation of second moment closure turbulence model for incompressible flows in the industrial finite element code N3S

    International Nuclear Information System (INIS)

    Pot, G.; Laurence, D.; Rharif, N.E.; Leal de Sousa, L.; Compe, C.

    1995-12-01

    This paper deals with the introduction of a second moment closure turbulence model (Reynolds Stress Model) into an industrial finite element code, N3S, developed at Electricite de France. The numerical implementation of the model in N3S is detailed in 2D and 3D. Some details are given concerning the finite element computations and solvers. Then, some results are given, including a comparison between the standard k-ε model, the R.S.M. model and experimental data for some test cases. (authors). 22 refs., 3 figs

  9. Collision detection of convex polyhedra on the NVIDIA GPU architecture for the discrete element method

    CSIR Research Space (South Africa)

    Govender, Nicolin

    2015-09-01

    Convex polyhedra represent granular media well. This geometric representation may be critical in obtaining realistic simulations of many industrial processes using the discrete element method (DEM). However, detecting collisions between the polyhedra...
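
    For context, one standard way to test whether two convex polyhedra collide is the separating-axis theorem: project both vertex sets onto candidate axes (face normals and edge-pair cross products) and look for a gap. The brute-force sketch below illustrates that idea only; it is not the GPU algorithm of the paper, which the truncated abstract does not detail:

        import numpy as np
        from itertools import product

        def project(vertices, axis):
            d = vertices @ axis
            return d.min(), d.max()

        def separated_on(axis, va, vb, eps=1e-12):
            n = np.linalg.norm(axis)
            if n < eps:                       # degenerate axis (parallel edges)
                return False
            axis = axis / n
            amin, amax = project(va, axis)
            bmin, bmax = project(vb, axis)
            return amax < bmin or bmax < amin

        def convex_overlap(va, fa_normals, ea_edges, vb, fb_normals, eb_edges):
            """Separating-axis test: True if the two convex polyhedra overlap."""
            axes = list(fa_normals) + list(fb_normals) + \
                   [np.cross(e1, e2) for e1, e2 in product(ea_edges, eb_edges)]
            return not any(separated_on(ax, va, vb) for ax in axes)

        # Two unit cubes, the second shifted along x.
        cube = np.array(list(product([0.0, 1.0], repeat=3)))
        normals = np.eye(3)                   # face normals of an axis-aligned cube
        edges = np.eye(3)                     # edge directions of a cube
        print(convex_overlap(cube, normals, edges, cube + [0.5, 0.0, 0.0],
                             normals, edges))     # True (overlapping)
        print(convex_overlap(cube, normals, edges, cube + [2.5, 0.0, 0.0],
                             normals, edges))     # False (separated)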

  10. Connecting Architecture and Implementation

    Science.gov (United States)

    Buchgeher, Georg; Weinreich, Rainer

    Software architectures are still typically defined and described independently from implementation. To avoid architectural erosion and drift, architectural representation needs to be continuously updated and synchronized with system implementation. Existing approaches for architecture representation like informal architecture documentation, UML diagrams, and Architecture Description Languages (ADLs) provide only limited support for connecting architecture descriptions and implementations. Architecture management tools like Lattix, SonarJ, and Sotoarc and UML-tools tackle this problem by extracting architecture information directly from code. This approach works for low-level architectural abstractions like classes and interfaces in object-oriented systems but fails to support architectural abstractions not found in programming languages. In this paper we present an approach for linking and continuously synchronizing a formalized architecture representation to an implementation. The approach is a synthesis of functionality provided by code-centric architecture management and UML tools and higher-level architecture analysis approaches like ADLs.

  11. A non-linear, finite element, heat conduction code to calculate temperatures in solids of arbitrary geometry

    International Nuclear Information System (INIS)

    Tayal, M.

    1987-01-01

    Structures often operate at elevated temperatures. Temperature calculations are needed so that the design can accommodate thermally induced stresses and material changes. A finite element computer code called FEAT has been developed to calculate temperatures in solids of arbitrary shape. FEAT solves the classical equation for steady-state conduction of heat. The solution is obtained for two-dimensional (plane or axisymmetric) or for three-dimensional problems. Gap elements are used to simulate interfaces between neighbouring surfaces. The code can model: conduction; internal generation of heat; prescribed convection to a heat sink; prescribed temperatures at boundaries; prescribed heat fluxes on some surfaces; and temperature dependence of material properties like thermal conductivity. The user has the option of specifying the detailed variation of thermal conductivity with temperature. For the convenience of the nuclear fuel industry, the user can also opt for pre-coded values of thermal conductivity, which are obtained from the MATPRO data base (sponsored by the U.S. Nuclear Regulatory Commission). The finite element method makes FEAT versatile and enables it to accurately accommodate complex geometries. The optional link to MATPRO makes it convenient for the nuclear fuel industry to use FEAT, without loss of generality. Special numerical techniques make the code inexpensive to run for the types of material non-linearity often encountered in the analysis of nuclear fuel. The code, however, is general, and can be used for other components of the reactor, or even for non-nuclear systems. The predictions of FEAT have been compared against several analytical solutions. The agreement is usually better than 5%. Thermocouple measurements show that the FEAT predictions are consistent with measured changes in temperatures in simulated pressure tubes. FEAT was also found to predict well the axial variations in temperatures in the end-pellets (UO2) of two fuel elements irradiated
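
    A small sketch of the kind of non-linearity FEAT handles, temperature-dependent conductivity, resolved here by Picard (fixed-point) iteration on a one-dimensional finite-difference analogue. The conductivity law, heat generation and boundary values are all made up for illustration:

        import numpy as np

        # Steady 1-D conduction with temperature-dependent conductivity k(T) and
        # internal heat generation: freeze k(T) from the previous iterate, solve
        # the linear problem, repeat until the temperature field stops changing.
        def k_of_T(T):
            return 5.0 + 0.01 * T            # W/m-K (assumed linear dependence)

        n, L, q = 41, 0.01, 5.0e7            # nodes, slab width (m), W/m^3
        dx = L / (n - 1)
        T_left, T_right = 500.0, 600.0       # K, prescribed boundary temperatures

        T = np.linspace(T_left, T_right, n)  # initial guess
        for it in range(100):
            k_face = 0.5 * (k_of_T(T[:-1]) + k_of_T(T[1:]))   # conductivity at faces
            A = np.zeros((n, n))
            b = np.full(n, -q * dx**2)
            A[0, 0] = A[-1, -1] = 1.0
            b[0], b[-1] = T_left, T_right
            for i in range(1, n - 1):
                A[i, i - 1] = k_face[i - 1]
                A[i, i + 1] = k_face[i]
                A[i, i] = -(k_face[i - 1] + k_face[i])
            T_new = np.linalg.solve(A, b)
            if np.max(np.abs(T_new - T)) < 1e-6:
                break
            T = T_new

        print(it, T.max())                   # iterations taken and peak temperature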

  12. Analysis of experiments of the University of Hannover with the Cathare code on fluid dynamic effects in the fuel element top nozzle area during refilling and reflooding

    International Nuclear Information System (INIS)

    Bestion, D.

    1989-11-01

    The CATHARE code is used to calculate the experiment of the University of Hannover concerning the flooding limit at the fuel element top nozzle area. Some qualitative and quantitative observations are given, on both the actual fluid dynamics observed in the experiments and the corresponding code behaviour. Shortcomings of the present models are clearly identified. New developments are proposed which should extend the code capabilities.

  13. Coding for parallel execution of hardware-in-the-loop millimeter-wave scene generation models on multicore SIMD processor architectures

    Science.gov (United States)

    Olson, Richard F.

    2013-05-01

    Rendering of point scatterer based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g. OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.

  14. Full scale seismic simulation of a nuclear reactor with parallel finite element analysis code for assembled structure

    International Nuclear Information System (INIS)

    Yamada, Tomonori

    2010-01-01

    The safety requirements of nuclear power plants attract much attention nowadays. With growing computing power, numerical simulation is one of the key technologies to meet these safety requirements. The Center for Computational Science and e-Systems of the Japan Atomic Energy Agency has been developing a finite element analysis code for assembled structures to accurately evaluate the structural integrity of a nuclear power plant in its entirety under seismic events. Because a nuclear power plant is a very large assembled structure with tens of millions of mechanical components, the finite element model of each component is assembled into one structure and non-conforming meshes of mechanical components are bonded together inside the code. The main technique used to bond these mechanical components is a triple sparse matrix multiplication involving the multi-point constraints and the global stiffness matrix. In our code, this procedure is conducted component by component, so that the working memory size and computing time for this multiplication remain manageable on current computing environments. As an illustrative example, a seismic simulation of a real nuclear reactor, the High Temperature Engineering Test Reactor located at the O-arai Research and Development Center of JAEA, with 80 major mechanical components was conducted. Consequently, our code successfully simulated the detailed elasto-plastic deformation of the nuclear reactor and its computational performance was investigated. (author)
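
    The bonding step described above reduces the global stiffness matrix through a triple sparse matrix product with the multi-point constraint transformation. A minimal sketch with a toy 4-DoF system (assumed values, not the JAEA code) is:

    ```python
    import numpy as np
    from scipy import sparse

    # Toy global stiffness matrix for 4 DoFs (values are placeholders)
    K = sparse.csr_matrix(np.array([[ 2.0, -1.0,  0.0,  0.0],
                                    [-1.0,  2.0, -1.0,  0.0],
                                    [ 0.0, -1.0,  2.0, -1.0],
                                    [ 0.0,  0.0, -1.0,  1.0]]))

    # Multi-point constraint: DoF 2 is tied to DoF 1 (u2 = u1), expressed as a
    # transformation u = T @ u_master mapping 3 master DoFs to the 4 original DoFs.
    T = sparse.csr_matrix(np.array([[1.0, 0.0, 0.0],
                                    [0.0, 1.0, 0.0],
                                    [0.0, 1.0, 0.0],   # slave DoF follows master DoF 1
                                    [0.0, 0.0, 1.0]]))

    # Triple sparse matrix product producing the reduced (bonded) stiffness matrix
    K_reduced = (T.T @ K @ T).tocsr()
    ```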

  15. Quantifying patterns of dynamics in eye movement to measure goodness in organization of design elements in interior architecture

    Science.gov (United States)

    Mirkia, Hasti; Sangari, Arash; Nelson, Mark; Assadi, Amir H.

    2013-03-01

    Architecture brings together diverse elements to enhance the observer's measure of esthetics and the convenience of functionality. Architects often conceptualize synthesis of design elements to invoke the observer's sense of harmony and positive affect. How does an observer's brain respond to harmony of design in interior spaces? One implicit consideration by architects is the role of guided visual attention by observers while navigating indoors. Prior visual experience of natural scenes provides the perceptual basis for Gestalt of design elements. In contrast, Gestalt of organization in design varies according to the architect's decision. We outline a quantitative theory to measure the success in utilizing the observer's psychological factors to achieve the desired positive affect. We outline a unified framework for perception of geometry and motion in interior spaces, which integrates affective and cognitive aspects of human vision in the context of anthropocentric interior design. The affective criteria are derived from contemporary theories of interior design. Our contribution is to demonstrate that the neural computations in an observer's eye movement could be used to elucidate harmony in perception of form, space and motion, thus a measure of goodness of interior design. Through mathematical modeling, we argue the plausibility of the relevant hypotheses.

  16. From structure from motion to historical building information modeling: populating a semantic-aware library of architectural elements

    Science.gov (United States)

    Santagati, Cettina; Lo Turco, Massimiliano

    2017-01-01

    In recent years, we have witnessed a huge diffusion of building information modeling (BIM) approaches in the field of architectural design, although very little research has been undertaken to explore the value, criticalities, and advantages attributable to the application of these methodologies in the cultural heritage domain. Furthermore, the latest developments in digital photogrammetry have led to the easy generation of reliable low-cost three-dimensional textured models that could be used in BIM platforms to create semantic-aware objects that could compose a specific library of historical architectural elements. In this case, the transfer between the point cloud and its corresponding parametric model is not so trivial, and the level of geometrical abstraction may not be suitable for the scope of the BIM. The aim of this paper is to explore and retrace the milestone works on this crucial topic in order to identify the unsolved issues and to propose and test a single, simple, practitioner-centred workflow based on the latest available solutions for point cloud management in commercial BIM platforms.

  17. The Wims-Traca code for the calculation of fuel elements. User's manual and input data

    International Nuclear Information System (INIS)

    Anhert, C.

    1980-01-01

    The set of modifications and new options developed for the Wims-D code is explained. The input data of the new version Wims-Traca are described. The printed output of results is also explained. The contents and the source of the nuclear data in the basic library are described. (author)

  18. Finite Element Prediction of Multi-Size Particulate Flow through Three-Dimensional Channel: Code Validation

    OpenAIRE

    K. V. Pagalthivarthi; R. J. Visintainer

    2013-01-01

    Multi-size particulate dense slurry flow through three-dimensional rectangular channel is modeled using penalty finite elements with 8-noded hexahedral elements. The methodology previously developed for two-dimensional channel is extended. The computed eddy viscosity of the pure carrier flow is modified to account for the presence of solid particles. Predictions from Spalart-Allmaras and k-ε turbulence models are compared to show consistency of trends in results. Results are also found to comp...

  19. The fuel performance code Celaeno, conception and simulation of fuel elements for gas-cooled fast reactor

    Energy Technology Data Exchange (ETDEWEB)

    Helfer, Thomas; Brunon, E.; Castelier, E.; Ravenet, A.; Chauvin, N. [CEA, Saint Paul Lez Durance, 04 100 (France)

    2009-06-15

    Gas-cooled fast reactors are extensively studied at the Atomic Energy Commission (CEA) for the fourth generation of reactors. An innovative plate-type fuel element, made of two plates enclosing a honeycomb structure containing cylindrical fuel pellets, has been proposed to meet the specifications of these reactors. To sustain high coolant temperatures, refractory materials have to be used for the plates and honeycomb structure. The reference material is a silicon carbide ceramic matrix composite (SiCf/SiC), but studies on refractory metals are also underway. The fuel material receiving most attention is a mixed uranium-plutonium carbide, UPuC. To analyse and evaluate the performance of such fuel elements and materials, reactor concept design studies and experimental irradiations are being performed, both requiring advanced modelling tools. Based on the PLEIADES software platform, which uses the Cast3M finite-element code as its thermomechanical component, the fuel-performance code Celaeno has been designed for studying the thermal, mechanical and physical evolution of the fuel-element concepts of interest under the following constraints: - provide a unified approach for all case studies, including experimental irradiations and basic material characterisation; - account for all relevant phenomena, such as non-linear mechanical behaviour, irradiation-induced swelling, fission gas release, or the evolution of material properties under irradiation, under normal and off-normal conditions; - provide robust and efficient numerical algorithms; - allow fast development of new case studies; - guarantee the flexibility of the code for almost all aspects of the fuel element, from geometrical changes to material changes; - assess the quality of studies by enabling designers to focus on physics, which is by far the most important and difficult task. This paper provides an overview of Celaeno's capabilities. We demonstrate how complex simulations can easily be set up, the most time-consuming part being the meshing. On the

  20. Modelling the attenuation in the ATHENA finite elements code for the ultrasonic testing of austenitic stainless steel welds.

    Science.gov (United States)

    Chassignole, B; Duwig, V; Ploix, M-A; Guy, P; El Guerjouma, R

    2009-12-01

    Multipass welds made in austenitic stainless steel, in the primary circuit of nuclear power plants with pressurized water reactors, are characterized by an anisotropic and heterogeneous structure that disturbs the ultrasonic propagation and makes ultrasonic non-destructive testing difficult. The ATHENA 2D finite element simulation code was developed to help understand the various physical phenomena at play. In this paper, we describe the attenuation model implemented in this code to account for the wave scattering phenomenon in polycrystalline materials. This model is in particular based on the optimization of two tensors that characterize this material, on the basis of experimental values of ultrasonic velocities and attenuation coefficients. Three experimental configurations, two of which are representative of the industrial weld assessment case, are studied with a view to validating the model through comparison with the simulation results. We thus provide quantitative proof that taking the attenuation into account in the ATHENA code dramatically improves the results in terms of the amplitude of the echoes. The association of the code and a detailed characterization of a weld's structure constitutes a remarkable breakthrough in the interpretation of the ultrasonic testing of this type of component.

  1. Finite element code FENIA verification and application for 3D modelling of thermal state of radioactive waste deep geological repository

    Science.gov (United States)

    Butov, R. A.; Drobyshevsky, N. I.; Moiseenko, E. V.; Tokarev, U. N.

    2017-11-01

    The verification of the FENIA finite element code on some problems and an example of its application are presented in the paper. The code is being developed for 3D modelling of thermal, mechanical and hydrodynamical (THM) problems related to the functioning of deep geological repositories. Verification of the code for two analytical problems has been performed. The first is a point heat source with exponentially decreasing heat output; the second is a linear heat source with similar behavior. Analytical solutions have been obtained by the authors. The problems have been chosen because they reflect the processes influencing the thermal state of a deep geological repository of radioactive waste. Verification was performed for several meshes with different resolution. Good convergence between analytical and numerical solutions was achieved. The application of the FENIA code is illustrated by 3D modelling of the thermal state of a prototypic deep geological repository of radioactive waste. The repository is designed for disposal of radioactive waste in rock at a depth of several hundred meters with no intention of later retrieval. Vitrified radioactive waste is placed in containers, which are placed in vertical boreholes. The residual decay heat of the radioactive waste leads to heating of the containers, engineered safety barriers and host rock. Maximum temperatures and the corresponding times of their establishment have been determined.
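
    The analytical solutions themselves are not reproduced in the record. A standard benchmark of this kind is the temperature field of a point source with exponentially decaying power in an infinite homogeneous medium, obtained by superposing the instantaneous point-source Green's function; the sketch below uses placeholder material and source values and is not the authors' own test.

    ```python
    import numpy as np

    def point_source_temperature(r, t, q0, decay, rho, cp, k, n_steps=20000):
        """Temperature rise at radius r and time t due to a point heat source with
        power q(t') = q0*exp(-decay*t') in an infinite homogeneous medium, obtained
        by superposing the instantaneous point-source Green's function.
        A standard benchmark sketch with placeholder values, not FENIA's own test."""
        alpha = k / (rho * cp)                        # thermal diffusivity [m^2/s]
        tp = np.linspace(0.0, t, n_steps + 1)[:-1]    # emission times t' < t
        dtp = t / n_steps
        tau = t - tp                                  # elapsed time since emission
        kernel = (np.exp(-r**2 / (4.0 * alpha * tau))
                  / (rho * cp * (4.0 * np.pi * alpha * tau) ** 1.5))
        return float(np.sum(q0 * np.exp(-decay * tp) * kernel) * dtp)

    # Example: a 1 kW source with a 30-year half-life in granite-like rock, r = 5 m, t = 10 years
    year = 365.25 * 24.0 * 3600.0
    dT = point_source_temperature(r=5.0, t=10.0 * year, q0=1000.0,
                                  decay=np.log(2.0) / (30.0 * year),
                                  rho=2700.0, cp=800.0, k=3.0)
    ```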

  2. Using elements of game engine architecture to simulate sensor networks for eldercare.

    Science.gov (United States)

    Godsey, Chad; Skubic, Marjorie

    2009-01-01

    When dealing with a real time sensor network, building test data with a known ground truth is a tedious and cumbersome task. In order to quickly build test data for such a network, a simulation solution is a viable option. Simulation environments have a close relationship with computer game environments, and therefore there is much to be learned from game engine design. In this paper, we present our vision for a simulated in-home sensor network and describe ongoing work on using elements of game engines for building the simulator. Validation results are included to show agreement on motion sensor simulation with the physical environment.

  3. Parallel Finite Element Particle-In-Cell Code for Simulations of Space-charge Dominated Beam-Cavity Interactions

    International Nuclear Information System (INIS)

    Candel, A.; Kabel, A.; Ko, K.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.

    2007-01-01

    Over the past years, SLAC's Advanced Computations Department (ACD) has developed the parallel finite element (FE) particle-in-cell code Pic3P (Pic2P) for simulations of beam-cavity interactions dominated by space-charge effects. As opposed to standard space-charge dominated beam transport codes, which are based on the electrostatic approximation, Pic3P (Pic2P) includes space-charge, retardation and boundary effects as it self-consistently solves the complete set of Maxwell-Lorentz equations using higher-order FE methods on conformal meshes. Use of efficient, large-scale parallel processing allows for the modeling of photoinjectors with unprecedented accuracy, aiding the design and operation of the next-generation of accelerator facilities. Applications to the Linac Coherent Light Source (LCLS) RF gun are presented
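
    As an illustration of the Lorentz-force part of a particle-in-cell update (not the actual Pic3P implementation, which is relativistic and finite-element based), a non-relativistic Boris push can be written as:

    ```python
    import numpy as np

    def boris_push(x, v, E, B, q, m, dt):
        """One Boris step advancing the velocity and position of a charged particle
        in given E and B fields. Illustrative of the Lorentz-force update used in
        particle-in-cell methods; not the actual Pic3P implementation."""
        qmdt2 = q * dt / (2.0 * m)
        v_minus = v + qmdt2 * E                    # first half electric acceleration
        t = qmdt2 * B                              # rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)   # magnetic rotation
        v_plus = v_minus + np.cross(v_prime, s)
        v_new = v_plus + qmdt2 * E                 # second half electric acceleration
        x_new = x + v_new * dt
        return x_new, v_new

    # Example: one step for an electron in crossed fields (placeholder values)
    x, v = boris_push(x=np.zeros(3), v=np.array([1.0e6, 0.0, 0.0]),
                      E=np.array([0.0, 1.0e5, 0.0]), B=np.array([0.0, 0.0, 0.1]),
                      q=-1.602e-19, m=9.109e-31, dt=1.0e-12)
    ```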

  4. Algorithms and data structures for massively parallel generic adaptive finite element codes

    KAUST Repository

    Bangerth, Wolfgang

    2011-12-01

    Today's largest supercomputers have 100,000s of processor cores and offer the potential to solve partial differential equations discretized by billions of unknowns. However, the complexity of scaling to such large machines and problem sizes has so far prevented the emergence of generic software libraries that support such computations, although these would lower the threshold of entry and enable many more applications to benefit from large-scale computing. We are concerned with providing this functionality for mesh-adaptive finite element computations. We assume the existence of an "oracle" that implements the generation and modification of an adaptive mesh distributed across many processors, and that responds to queries about its structure. Based on querying the oracle, we develop scalable algorithms and data structures for generic finite element methods. Specifically, we consider the parallel distribution of mesh data, global enumeration of degrees of freedom, constraints, and postprocessing. Our algorithms remove the bottlenecks that typically limit large-scale adaptive finite element analyses. We demonstrate scalability of complete finite element workflows on up to 16,384 processors. An implementation of the proposed algorithms, based on the open source software p4est as mesh oracle, is provided under an open source license through the widely used deal.II finite element software library. © 2011 ACM 0098-3500/2011/12-ART10 $10.00.

  5. The Genomic Architecture of Novel Simulium damnosum Wolbachia Prophage Sequence Elements and Implications for Onchocerciasis Epidemiology

    Directory of Open Access Journals (Sweden)

    James L. Crainey

    2017-05-01

    Full Text Available Research interest in Wolbachia is growing as new discoveries and technical advancements reveal the public health importance of both naturally occurring and artificial infections. Improved understanding of the Wolbachia bacteriophages (WOs) WOcauB2 and WOcauB3 [belonging to a sub-group of four WOs encoding serine recombinases group 1 (sr1WOs)] has enhanced the prospect of novel tools for the genetic manipulation of Wolbachia. The basic biology of sr1WOs, including host range and mode of genomic integration is, however, still poorly understood. Very few sr1WOs have been described, with two such elements putatively resulting from integrations at the same Wolbachia genome loci, about 2 kb downstream from the FtsZ cell-division gene. Here, we characterize the DNA sequence flanking the FtsZ gene of wDam, a genetically distinct line of Wolbachia isolated from the West African onchocerciasis vector Simulium squamosum E. Using Roche 454 shot-gun and Sanger sequencing, we have resolved >32 kb of WO prophage sequence into three contigs representing three distinct prophage elements. Spanning ≥36 distinct WO open reading frame gene sequences, these prophage elements correspond roughly to three different WO modules: a serine recombinase and replication module (sr1RRM), a head and base-plate module and a tail module. The sr1RRM module contains replication genes and a Holliday junction recombinase and is unique to the sr1 group WOs. In the extreme terminal of the tail module there is a SpvB protein homolog—believed to have insecticidal properties and proposed to have a role in how Wolbachia parasitize their insect hosts. We propose that these wDam prophage modules all derive from a single WO genome, which we have named here sr1WOdamA1. The best-match database sequence for all of our sr1WOdamA1-predicted gene sequences was annotated as Wolbachia or Wolbachia phage sourced from an arthropod. Clear evidence of exchange between sr1WOdamA1 and other Wolbachia

  6. Finite Element Prediction of Multi-Size Particulate Flow through Three-Dimensional Channel: Code Validation

    Directory of Open Access Journals (Sweden)

    K. V. Pagalthivarthi

    2013-03-01

    Full Text Available Multi-size particulate dense slurry flow through three-dimensional rectangular channel is modeled using penalty finite elements with 8-noded hexahedral elements. The methodology previously developed for two-dimensional channel is extended. The computed eddy viscosity of the pure carrier flow is modified to account for the presence of solid particles. Predictions from Spalart-Allmaras and k-ε turbulence models are compared to show consistency of trends in results. Results are also found to compare well with experimental results from the literature.

  7. Critical state and magnetization loss in multifilamentary superconducting wire solved through the commercial finite element code ANSYS

    Science.gov (United States)

    Farinon, S.; Fabbricatore, P.; Gömöry, F.

    2010-11-01

    The commercially available finite element code ANSYS has been adapted to solve the critical state of single strips and multifilamentary tapes. We studied a special algorithm which approaches the critical state by an iterative adjustment of the material resistivity. We then proved its validity by comparing the results obtained for a thin strip with the Brandt theory for the transport current and magnetization cases. Also, the challenging calculation of the magnetization loss of a real multifilamentary BSCCO tape showed the usefulness of our method. Finally, we developed several methods to enhance the speed of convergence, making the proposed process quite competitive among existing approaches to AC loss simulation.

  8. Wakefield Computations for the CLIC PETS using the Parallel Finite Element Time-Domain Code T3P

    Energy Technology Data Exchange (ETDEWEB)

    Candel, A; Kabel, A.; Lee, L.; Li, Z.; Ng, C.; Schussman, G.; Ko, K.; /SLAC; Syratchev, I.; /CERN

    2009-06-19

    In recent years, SLAC's Advanced Computations Department (ACD) has developed the high-performance parallel 3D electromagnetic time-domain code, T3P, for simulations of wakefields and transients in complex accelerator structures. T3P is based on advanced higher-order Finite Element methods on unstructured grids with quadratic surface approximation. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with unprecedented accuracy, aiding the design of the next generation of accelerator facilities. Applications to the Compact Linear Collider (CLIC) Power Extraction and Transfer Structure (PETS) are presented.

  9. ABCXYZ: vector potential (A) and magnetic field (B) code (C) for Cartesian (XYZ) geometry using general current elements

    International Nuclear Information System (INIS)

    Anderson, D.V.; Breazeal, J.; Finan, C.H.; Johnston, B.M.

    1976-01-01

    ABCXYZ is a computer code for obtaining the Cartesian components of the vector potential and the magnetic field on an observation grid from an arrangement of current-carrying wires. Arbitrary combinations of straight line segments, arcs, and loops are allowed in the specification of the currents. Arbitrary positions and orientations of the current-carrying elements are also allowed. Specification of the wire diameter permits the computation of well-defined fields, even in the interiors of the conductors. An optional feature generates magnetic field lines. Extensive graphical and printed output is available to the user including contour, grid-line, and field-line plots. 12 figures, 1 table
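
    The field formulae used by ABCXYZ are not reproduced in the record. A minimal numerical Biot-Savart sum over short sub-elements of a straight segment, the simplest of the current elements mentioned above, might look as follows (ABCXYZ additionally handles arcs, loops and finite wire diameter):

    ```python
    import numpy as np

    MU0 = 4.0e-7 * np.pi  # vacuum permeability [T*m/A]

    def b_field_segment(obs, start, end, current, n_sub=200):
        """Magnetic field at point `obs` from a straight, thin current segment,
        obtained by summing Biot-Savart contributions of short sub-elements.
        Illustrative sketch only, not the ABCXYZ implementation."""
        ts = (np.arange(n_sub) + 0.5) / n_sub              # sub-element midpoints
        dl = (end - start) / n_sub                         # sub-element vector
        mid = start + np.outer(ts, end - start)            # (n_sub, 3) midpoints
        r = obs - mid                                      # separation vectors
        r_norm = np.linalg.norm(r, axis=1, keepdims=True)
        dB = MU0 * current / (4.0 * np.pi) * np.cross(dl, r) / r_norm**3
        return dB.sum(axis=0)

    # Field 0.1 m from the midpoint of a 2 m straight wire carrying 100 A
    B = b_field_segment(obs=np.array([0.0, 0.1, 0.0]),
                        start=np.array([-1.0, 0.0, 0.0]),
                        end=np.array([1.0, 0.0, 0.0]),
                        current=100.0)
    ```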

  10. NCEL: two dimensional finite element code for steady-state temperature distribution in seven rod-bundle

    International Nuclear Information System (INIS)

    Hrehor, M.

    1979-01-01

    The paper deals with an application of the finite element method to the heat transfer study in seven-pin models of an LMFBR fuel subassembly. The developed code NCEL solves the two-dimensional steady state heat conduction equation over the whole subassembly model cross-section and enables analysis of the thermal behaviour under both normal and accidental operational conditions, such as eccentricity of the central rod or full or partial (porous) blockage of some part of the cross-flow area. The heat removal is simulated by heat sinks in the coolant under a subchannel slug-flow approximation.

  11. Common architecture of nuclear receptor heterodimers on DNA direct repeat elements with different spacings.

    Science.gov (United States)

    Rochel, Natacha; Ciesielski, Fabrice; Godet, Julien; Moman, Edelmiro; Roessle, Manfred; Peluso-Iltis, Carole; Moulin, Martine; Haertlein, Michael; Callow, Phil; Mély, Yves; Svergun, Dmitri I; Moras, Dino

    2011-05-01

    Nuclear hormone receptors (NHRs) control numerous physiological processes through the regulation of gene expression. The present study provides a structural basis for understanding the role of DNA in the spatial organization of NHR heterodimers in complexes with coactivators such as Med1 and SRC-1. We have used SAXS, SANS and FRET to determine the solution structures of three heterodimer NHR complexes (RXR-RAR, PPAR-RXR and RXR-VDR) coupled with the NHR interacting domains of coactivators bound to their cognate direct repeat elements. The structures show an extended asymmetric shape and point to the important role played by the hinge domains in establishing and maintaining the integrity of the structures. The results reveal two additional features: the conserved position of the ligand-binding domains at the 5' ends of the target DNAs and the binding of only one coactivator molecule per heterodimer, to RXR's partner.

  12. A novel reactive transport code for coupling of combined finite element - finite volume transport with Gibbs energy minimization

    Science.gov (United States)

    Fowler, S. J.; Driesner, T.; Kulik, D.; Wagner, T.

    2010-12-01

    We present a novel computational tool for modelling temporally and spatially varying chemical interactions between hydrothermal fluids and rocks that may affect the long-term performance of geothermal reservoirs. The code is written in C++. It incorporates fluid-rock interaction and scale formation self-consistently, via a modular coupling approach that combines the Complex System Modelling Platform (CSMP++) code for fluid flow in porous and fractured media (Matthai et al., 2007) with the numerical kernel (GEMIPM2K) of the GEM-Selektor Gibbs free energy minimization package (Kulik, Wagner et al., 2007). CSMP++ uses finite element-finite volume spatial discretization, implicit or explicit time discretization, and an operator splitting approach to solve equations. The GEM-Selektor package supports a wide range of equation of state and activity models, facilitating calculation of complex fluid-mineral equilibria. Coupled code input includes temperature, pressure, a charge balance, and total amounts of system chemical elements, as well as domain and boundary condition specifications. Speciation, thermodynamic, and physical properties of the system are output. Critical advantages of the coupled code compared to existing hydrothermal reactive transport models are: (1) simultaneous consideration of complex solid solutions (e.g., clay minerals) and non-ideal aqueous solutions (GEMIPM2K), and (2) a discretization scheme that can be applied to mass and heat transport in irregular, geologically realistic geometries (CSMP++). Each coupled simulation results in a thermodynamically-based description of the geochemical and physical state of a hydrothermal system evolving along a complex P-T-X path. The code design allows for efficient and flexible incorporation of numerical and thermodynamic database improvements. We apply the coupled code to a number of geologic applications to test its accuracy and performance. Kulik, D., Wagner, T. et al. (2007). GEM-Selektor (GEMS-PSI) home
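
    The coupling strategy described above alternates a transport step with a chemical equilibration step. The toy sketch below illustrates that operator-splitting loop only; the equilibrate() function is a hypothetical stand-in for the Gibbs energy minimization kernel, and the explicit 1D advection is far simpler than the CSMP++ finite element-finite volume scheme.

    ```python
    import numpy as np

    def advect_1d(c, velocity, dx, dt):
        """Explicit upwind advection of cell-centred totals (velocity > 0)."""
        c_new = c.copy()
        c_new[1:] -= velocity * dt / dx * (c[1:] - c[:-1])
        return c_new

    def equilibrate(totals):
        """Placeholder for the chemistry step (e.g. a Gibbs energy minimization
        call); here it simply caps the aqueous total at a fixed 'solubility' and
        books the excess as precipitated mineral. Entirely hypothetical."""
        solubility = 1.0
        mineral = np.maximum(totals - solubility, 0.0)
        aqueous = totals - mineral
        return aqueous, mineral

    # Operator-splitting loop: a transport step followed by a chemistry step
    nx, dx, dt, velocity = 100, 1.0, 0.4, 1.0   # CFL = 0.4, stable
    aqueous = np.zeros(nx)
    aqueous[:5] = 3.0                            # injected element totals at the inlet
    mineral_total = np.zeros(nx)
    for _ in range(100):
        aqueous = advect_1d(aqueous, velocity, dx, dt)
        aqueous, mineral = equilibrate(aqueous)
        mineral_total += mineral
    ```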

  13. User`s manual for the FEHM application -- A finite-element heat- and mass-transfer code

    Energy Technology Data Exchange (ETDEWEB)

    Zyvoloski, G.A.; Robinson, B.A.; Dash, Z.V.; Trease, L.L.

    1997-07-01

    The use of this code is applicable to natural-state studies of geothermal systems and groundwater flow. A primary use of the FEHM application will be to assist in the understanding of flow fields and mass transport in the saturated and unsaturated zones below the proposed Yucca Mountain nuclear waste repository in Nevada. The equations of heat and mass transfer for multiphase flow in porous and permeable media are solved in the FEHM application by using the finite-element method. The permeability and porosity of the medium are allowed to depend on pressure and temperature. The code also has provisions for movable air and water phases and noncoupled tracers; that is, tracer solutions that do not affect the heat- and mass-transfer solutions. The tracers can be passive or reactive. The code can simulate two-dimensional, two-dimensional radial, or three-dimensional geometries. In fact, FEHM is capable of describing flow that is dominated in many areas by fracture and fault flow, including the inherently three-dimensional flow that results from permeation to and from faults and fractures. The code can handle coupled heat and mass-transfer effects, such as boiling, dryout, and condensation that can occur in the near-field region surrounding the potential repository and the natural convection that occurs through Yucca Mountain due to seasonal temperature changes. This report outlines the uses and capabilities of the FEHM application, initialization of code variables, restart procedures, and error processing. The report describes all the data files, the input data, including individual input records or parameters, and the various output files. The system interface is described, including the software environment and installation instructions.

  14. Cracking the Code of Soil Genesis. The Early Role of Rare Earth Elements

    Science.gov (United States)

    Zaharescu, D. G.; Dontsova, K.; Burghelea, C. I.; Maier, R. M.; Huxman, T. E.; Chorover, J.

    2014-12-01

    Soil is the terrestrial life support system. Its genesis involves tight interactions between biota and mineral surfaces that mobilize structural elements into biogeochemical cycles. Of all chemical elements, rare earth elements (REE) are a group of 16 non-nutrient elements of unusual geochemical similarity, present in all components of the surface environment. While much is known about the role of major nutrients in soil development, we lack vital understanding of how early biotic colonization affects more conservative elements such as REE. A highly controlled experiment was set up at the University of Arizona's Biosphere-2 that tested the effect of 4 biological treatments, incorporating combinations of microbes, grass, mycorrhiza and an uninoculated control, on REE leaching and uptake in 4 bedrock substrates: basalt, rhyolite, granite and schist. Generally, the response of REE to the presence of biota was synergistic. Variation in total bedrock chemistry could explain the major trends in pore-water REE. There was a fast transition from a chemistry-dominated to a biota-dominated environment in the first 3-4 months after inoculation/seeding, which translated into an increase in the REE signal over time. Relative REE abundances in water were generally reflected in plant concentrations, particularly in roots, implying that below-ground biomass is the main sink of REE in the ecosystem. The mycorrhiza effect on REE uptake in plant organs was significant and increased with infection rates. The presence of different biota translated into subtle differences in REE release, revealing potential biosignatures of biota-rock colonization. The results thus bring fundamental insight into early-stage non-nutrient cycling and soil genesis.

  15. Interfacing VPSC with finite element codes. Demonstration of irradiation growth simulation in a cladding tube

    Energy Technology Data Exchange (ETDEWEB)

    Patra, Anirban [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tome, Carlos [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-03-23

    This Milestone report shows good progress in interfacing VPSC with the FE codes ABAQUS and MOOSE, to perform component-level simulations of irradiation-induced deformation in Zirconium alloys. In this preliminary application, we have performed an irradiation growth simulation in the quarter geometry of a cladding tube. We have benchmarked VPSC-ABAQUS and VPSC-MOOSE predictions with VPSC-SA predictions to verify the accuracy of the VPSC-FE interface. Predictions from the FE simulations are in general agreement with VPSC-SA simulations and also with experimental trends.

  16. Interfacing VPSC with finite element codes. Demonstration of irradiation growth simulation in a cladding tube

    International Nuclear Information System (INIS)

    Patra, Anirban; Tome, Carlos

    2016-01-01

    This Milestone report shows good progress in interfacing VPSC with the FE codes ABAQUS and MOOSE, to perform component-level simulations of irradiation-induced deformation in Zirconium alloys. In this preliminary application, we have performed an irradiation growth simulation in the quarter geometry of a cladding tube. We have benchmarked VPSC-ABAQUS and VPSC-MOOSE predictions with VPSC-SA predictions to verify the accuracy of the VPSC-FE interface. Predictions from the FE simulations are in general agreement with VPSC-SA simulations and also with experimental trends.

  17. ABAQUS-EPGEN: a general-purpose finite-element code. Volume 1. User's manual

    International Nuclear Information System (INIS)

    Hibbitt, H.D.; Karlsson, B.I.; Sorensen, E.P.

    1982-10-01

    This document is the User's Manual for ABAQUS/EPGEN, a general purpose finite element computer program, designed specifically to serve advanced structural analysis needs. The program contains very general libraries of elements, materials and analysis procedures, and is highly modular, so that complex combinations of features can be put together to model physical problems. The program is aimed at production analysis needs, and for this purpose aspects such as ease-of-use, reliability, flexibility and efficiency have received maximum attention. The input language is designed to make it straightforward to describe complicated models; the analysis procedures are highly automated with the program choosing time or load increments based on user supplied tolerances and controls; and the program offers a wide range of post-processing options for display of the analysis results

  18. Modeling Architectural Patterns Using Architectural Primitives

    NARCIS (Netherlands)

    Zdun, Uwe; Avgeriou, Paris

    2005-01-01

    Architectural patterns are a key point in architectural documentation. Regrettably, there is poor support for modeling architectural patterns, because the pattern elements are not directly matched by elements in modeling languages, and, at the same time, patterns support an inherent variability that

  19. Development of finite element code for the analysis of coupled thermo-hydro-mechanical behaviors of saturated-unsaturated medium

    International Nuclear Information System (INIS)

    Ohnishi, Y.; Shibata, H.; Kobayashi, A.

    1985-01-01

    A model is presented which describes fully coupled thermo-hydro-mechanical behavior of a porous geologic medium. The mathematical formulation for the model utilizes the Biot theory of consolidation and the energy balance equation. Since the medium is under saturated-unsaturated flow conditions, free surfaces are taken into consideration in the model. The model, incorporated in a finite element numerical procedure, was implemented in a two-dimensional computer code. The code was developed under the assumptions that the medium is poro-elastic and in plane strain condition; water in the ground does not change its phase; and heat is transferred by conductive and convective flow. Analytical solutions pertaining to consolidation theory for soils and rocks, thermoelasticity for solids, and hydrothermal convection theory provided verification of the stress and fluid flow couplings in the coupled model. Several types of problems are analyzed. One is a study of some of the effects of completely coupled thermo-hydro-mechanical behavior on the response of a saturated-unsaturated porous rock containing a buried heat source. Excavation of an underground opening which holds radioactive wastes at elevated temperatures is modeled and analyzed. The results show that the coupling phenomena can be estimated to some degree by the numerical procedure. The computer code has a powerful ability to analyze the complex nature of the repository.

  20. Dynamic analysis of aircraft impact using the linear elastic finite element codes FINEL, SAP and STARDYNE

    International Nuclear Information System (INIS)

    Lundsager, P.; Krenk, S.

    1975-08-01

    The static and dynamic response of a cylindrical/spherical containment to a Boeing 720 impact is computed using 3 different linear elastic computer codes: FINEL, SAP and STARDYNE. Stress and displacement fields are shown together with time histories for a point in the impact zone. The main conclusions from this study are: - In this case the maximum dynamic load factors for stress and displacements were close to 1, but a static analysis alone is not fully sufficient. - More realistic load time histories should be considered. - The main effects seem to be local. The present study does not indicate general collapse from elastic stresses alone. - Further study of material properties at high rates is needed. (author)

  1. STAT, GAPS, STRAIN, DRWDIM: a system of computer codes for analyzing HTGR fuel test element metrology data. User's manual

    Energy Technology Data Exchange (ETDEWEB)

    Saurwein, J.J.

    1977-08-01

    A system of computer codes has been developed to statistically reduce Peach Bottom fuel test element metrology data and to compare the material strains and fuel rod-fuel hole gaps computed from these data with HTGR design code predictions. The codes included in this system are STAT, STRAIN, GAPS, and DRWDIM. STAT statistically evaluates test element metrology data yielding fuel rod, fuel body, and sleeve irradiation-induced strains; fuel rod anisotropy; and additional data characterizing each analyzed fuel element. STRAIN compares test element fuel rod and fuel body irradiation-induced strains computed from metrology data with the corresponding design code predictions. GAPS compares test element fuel rod, fuel hole heat transfer gaps computed from metrology data with the corresponding design code predictions. DRWDIM plots the measured and predicted gaps and strains. Although specifically developed to expedite the analysis of Peach Bottom fuel test elements, this system can be applied, without extensive modification, to the analysis of Fort St. Vrain or other HTGR-type fuel test elements.

  2. Changes in cis-regulatory elements of a key floral regulator are associated with divergence of inflorescence architectures.

    Science.gov (United States)

    Kusters, Elske; Della Pina, Serena; Castel, Rob; Souer, Erik; Koes, Ronald

    2015-08-15

    Higher plant species diverged extensively with regard to the moment (flowering time) and position (inflorescence architecture) at which flowers are formed. This seems largely caused by variation in the expression patterns of conserved genes that specify floral meristem identity (FMI), rather than changes in the encoded proteins. Here, we report a functional comparison of the promoters of homologous FMI genes from Arabidopsis, petunia, tomato and Antirrhinum. Analysis of promoter-reporter constructs in petunia and Arabidopsis, as well as complementation experiments, showed that the divergent expression of leafy (LFY) and the petunia homolog aberrant leaf and flower (ALF) results from alterations in the upstream regulatory network rather than cis-regulatory changes. The divergent expression of unusual floral organs (UFO) from Arabidopsis, and the petunia homolog double top (DOT), however, is caused by the loss or gain of cis-regulatory promoter elements, which respond to trans-acting factors that are expressed in similar patterns in both species. Introduction of pUFO:UFO causes no obvious defects in Arabidopsis, but in petunia it causes the precocious and ectopic formation of flowers. This provides an example of how a change in a cis-regulatory region can account for a change in the plant body plan. © 2015. Published by The Company of Biologists Ltd.

  3. Mutations of conserved non-coding elements of PITX2 in patients with ocular dysgenesis and developmental glaucoma.

    Science.gov (United States)

    Protas, Meredith E; Weh, Eric; Footz, Tim; Kasberger, Jay; Baraban, Scott C; Levin, Alex V; Katz, L Jay; Ritch, Robert; Walter, Michael A; Semina, Elena V; Gould, Douglas B

    2017-09-15

    Mutations in FOXC1 and PITX2 constitute the most common causes of ocular anterior segment dysgenesis (ASD), and confer a high risk for secondary glaucoma. The genetic causes underlying ASD in approximately half of patients remain unknown, despite many of them being screened by whole exome sequencing. Here, we performed whole genome sequencing on DNA from two affected individuals from a family with dominantly inherited ASD and glaucoma to identify a 748-kb deletion in a gene desert that contains conserved putative PITX2 regulatory elements. We used CRISPR/Cas9 to delete the orthologous region in zebrafish in order to test the pathogenicity of this structural variant. Deletion in zebrafish reduced pitx2 expression during development and resulted in shallow anterior chambers. We screened additional patients for copy number variation of the putative regulatory elements and found an overlapping deletion in a second family and in a potentially-ancestrally-related index patient with ASD and glaucoma. These data suggest that mutations affecting conserved non-coding elements of PITX2 may constitute an important class of mutations in patients with ASD for whom the molecular cause of their disease has not yet been identified. Improved functional annotation of the human genome and transition to sequencing of patient genomes instead of exomes will be required before the magnitude of this class of mutations is fully understood. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. The relationship between the traditional and contemporary elements in the church architecture of the Western Christian countries in the 20th century

    Directory of Open Access Journals (Sweden)

    Manić Božidar

    2015-01-01

    Full Text Available Church architecture has been developing continually within Western Christianity since the 4th century. It gradually became less important from the end of the Middle Ages, especially with the advent of the ideas of the Reformation and the Enlightenment, falling almost out of the focus of contemporary architecture with the advent of modernism in the 20th century. The most important factors for the development of this building type in the 20th century were the emergence of modernism in architecture and the strengthening of the movements of liturgical renewal. It was a time in which diametrically opposed concepts - radically modernizing and conservatively traditional - were expressed to the extreme, with many transitional forms. Striving for the active participation of believers can lead to completely different results - strengthening the liturgical assembly, on one hand, and radical desacralisation of worship, on the other. There is a large number of architectural solutions, some of which share common characteristics concerning spatial organization and the distribution of laity and clergy, but with a great diversity of other architectural characteristics and different relations of traditional and contemporary elements. The experiences of Western Christian countries can be of use, to some extent, in research on contemporary Orthodox church architecture.

  5. The Use of the STAGS Finite Element Code in Stitched Structures Development

    Science.gov (United States)

    Jegley, Dawn C.; Lovejoy, Andrew E.

    2014-01-01

    In the last 30 years NASA has worked in collaboration with industry to develop enabling technologies needed to make aircraft more fuel-efficient and more affordable. The focus on the airframe has been to reduce weight, improve damage tolerance and better understand structural behavior under realistic flight and ground loading conditions. Stitched structure is a technology that can address the weight savings, cost reduction, and damage tolerance goals, but only if it is supported by accurate analytical techniques. Development of stitched technology began in the 1990's as a partnership between NASA and Boeing (McDonnell Douglas at the time) under the Advanced Composites Technology Program and has continued under various titles and programs and into the Environmentally Responsible Aviation Project today. These programs contained development efforts involving manufacturing development, design, detailed analysis, and testing. Each phase of development, from coupons to large aircraft components was supported by detailed analysis to prove that the behavior of these structures was well-understood and predictable. The Structural Analysis of General Shells (STAGS) computer code was a critical tool used in the development of many stitched structures. As a key developer of STAGS, Charles Rankin's contribution to the programs was quite significant. Key features of STAGS used in these analyses and discussed in this paper include its accurate nonlinear and post-buckling capabilities, its ability to predict damage growth, and the use of Lagrange constraints and follower forces.

  6. Ethical codes. Fig leaf argument, ballast or cultural element for radiation protection?; Ethik-Codes. Feigenblatt, Ballast oder Kulturelement fuer den Strahlenschutz?

    Energy Technology Data Exchange (ETDEWEB)

    Gellermann, Rainer [Nuclear Control and Consulting GmbH, Braunschweig (Germany)

    2014-07-01

    The International Radiation Protection Association (IRPA) adopted a Code of Ethics in May 2004 to enable its members to maintain an adequate professional standard of ethical conduct. Based on this code of ethics, the professional body for radiation protection (Fachverband fuer Strahlenschutz) developed its own ethical code and adopted it in 2005.

  7. Structure-aided prediction of mammalian transcription factor complexes in conserved non-coding elements

    KAUST Repository

    Guturu, H.

    2013-11-11

    Mapping the DNA-binding preferences of transcription factor (TF) complexes is critical for deciphering the functions of cis-regulatory elements. Here, we developed a computational method that compares co-occurring motif spacings in conserved versus unconserved regions of the human genome to detect evolutionarily constrained binding sites of rigid TF complexes. Structural data were used to estimate TF complex physical plausibility, explore overlapping motif arrangements seldom tackled by non-structure-aware methods, and generate and analyse three-dimensional models of the predicted complexes bound to DNA. Using this approach, we predicted 422 physically realistic TF complex motifs at 18% false discovery rate, the majority of which (326, 77%) contain some sequence overlap between binding sites. The set of mostly novel complexes is enriched in known composite motifs, predictive of binding site configurations in TF-TF-DNA crystal structures, and supported by ChIP-seq datasets. Structural modelling revealed three cooperativity mechanisms: direct protein-protein interactions, potentially indirect interactions and 'through-DNA' interactions. Indeed, 38% of the predicted complexes were found to contain four or more bases in which TF pairs appear to synergize through overlapping binding to the same DNA base pairs in opposite grooves or strands. Our TF complex and associated binding site predictions are available as a web resource at http://bejerano.stanford.edu/complex.

  8. A Reference Architecture for Provisioning of Tools as a Service: Meta-Model, Ontologies and Design Elements

    DEFF Research Database (Denmark)

    Chauhan, Muhammad Aufeef; Babar, Muhammad Ali; Sheng, Quan Z.

    2016-01-01

    Software Architecture (SA) plays a critical role in designing, developing and evolving cloud-based platforms that can be used to provision different types of services to consumers on demand. In this paper, we present a Reference Architecture (RA) for designing cloud-based Tools as a service SPACE...... (TSPACE) for provisioning a bundled suite of tools by following the Software as a Service (SaaS) model. The reference architecture has been designed by leveraging information structuring approaches and by using well-known architecture design principles and patterns. The RA has been documented using view...

  9. Finite element code-based modeling of a multi-feature isolation system and passive alleviation of possible inner pounding

    Science.gov (United States)

    Ismail, Mohammed; López-Almansa, Francesc; Benavent-Climent, Amadeo; Pujades-Beneit, Luis G.

    2014-09-01

    The existing seismic isolation systems are based on well-known and accepted physical principles, but they still have some functional drawbacks. As an attempt at improvement, the Roll-N-Cage (RNC) isolator has been recently proposed. It is designed to achieve a balance in controlling isolator displacement demands and structural accelerations. It provides in a single unit all the necessary functions of vertical rigid support, horizontal flexibility with enhanced stability, resistance to low service loads and minor vibration, and hysteretic energy dissipation characteristics. It is characterized by two unique features, namely a self-braking (buffer) and a self-recentering mechanism. This paper presents an advanced representation of the main and unique features of the RNC isolator using an available finite element code called SAP2000. The validity of the obtained SAP2000 model is then checked using experimental, numerical and analytical results. Then, the paper investigates the merits and demerits of activating the built-in buffer mechanism on both structural pounding mitigation and isolation efficiency. The paper addresses the problem of passive alleviation of possible inner pounding within the RNC isolator, which may arise due to the activation of its self-braking mechanism under severe excitations such as near-fault earthquakes. The results show that the obtained finite element code-based model can closely match and accurately predict the overall behavior of the RNC isolator with effectively small errors. Moreover, the inherent buffer mechanism of the RNC isolator could mitigate or even eliminate direct structure-to-structure pounding under severe excitation considering limited separation gaps between adjacent structures. In addition, the increase of inherent hysteretic damping of the RNC isolator can efficiently limit its peak displacement together with the severity of the possibly developed inner pounding and, therefore, alleviate or even eliminate the

  10. Summary of the models and methods for the FEHM application - a finite-element heat- and mass-transfer code

    International Nuclear Information System (INIS)

    Zyvoloski, G.A.; Robinson, B.A.; Dash, Z.V.; Trease, L.L.

    1997-07-01

    The mathematical models and numerical methods employed by the FEHM application, a finite-element heat- and mass-transfer computer code that can simulate nonisothermal multiphase multi-component flow in porous media, are described. The use of this code is applicable to natural-state studies of geothermal systems and groundwater flow. A primary use of the FEHM application will be to assist in the understanding of flow fields and mass transport in the saturated and unsaturated zones below the proposed Yucca Mountain nuclear waste repository in Nevada. The component models of FEHM are discussed. The first major component, Flow- and Energy-Transport Equations, deals with heat conduction; heat and mass transfer with pressure- and temperature-dependent properties, relative permeabilities and capillary pressures; isothermal air-water transport; and heat and mass transfer with noncondensible gas. The second component, Dual-Porosity and Double-Porosity/Double-Permeability Formulation, is designed for problems dominated by fracture flow. Another component, The Solute-Transport Models, includes both a reactive-transport model that simulates transport of multiple solutes with chemical reaction and a particle-tracking model. Finally, the component, Constitutive Relationships, deals with pressure- and temperature-dependent fluid/air/gas properties, relative permeabilities and capillary pressures, stress dependencies, and reactive and sorbing solutes. Each of these components is discussed in detail, including purpose, assumptions and limitations, derivation, applications, numerical method type, derivation of numerical model, location in the FEHM code flow, numerical stability and accuracy, and alternative approaches to modeling the component

  11. Physical analysis and modelling of aerosols transport. implementation in a finite elements code. Experimental validation in laminar and turbulent flows

    International Nuclear Information System (INIS)

    Armand, Patrick

    1995-01-01

    The aim of this work is the coupling of fluid mechanics and aerosol physics. In the first part, an order-of-magnitude analysis of the dynamics of a particle embedded in a non-uniform unsteady flow is carried out. Flow approximations around the inclusion are described and the corresponding aerodynamic drag formulae are expressed. Possible situations related to the problem data are extensively listed. In the second part, the turbulent transport of particles is studied. The Eulerian approach, which is particularly well adapted to industrial codes, is preferred over Lagrangian methods. We choose the two-fluid formalism, in which the carrier gas-particle slip is taken into account. Turbulence is modelled with a k-epsilon model modulated by the action of the inclusions on the flow. The model is implemented in a finite element code. Finally, in the third part, the modelling is validated in laminar and turbulent cases. We compare simulations to various experiments (settling battery, inertial impaction in a bend, jets loaded with glass bead particles) which are taken from the literature or performed by ourselves at the laboratory. The results agree closely, which is encouraging for the future use of the particle transport model and the associated software. (author) [fr]
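
    The drag formulae of the thesis are not reproduced in the record. As an example of the order-of-magnitude analysis mentioned in the first part, the sketch below evaluates the Stokes relaxation time and terminal settling velocity of a small particle and checks that the particle Reynolds number stays in the Stokes regime (standard textbook formulas, placeholder values):

    ```python
    import numpy as np

    def stokes_particle(d_p, rho_p, rho_f, mu_f, g=9.81):
        """Stokes-regime relaxation time and terminal settling velocity of a
        spherical particle. Standard order-of-magnitude formulas, not the
        two-fluid model of the thesis; the values below are placeholders."""
        tau_p = rho_p * d_p**2 / (18.0 * mu_f)                  # relaxation time [s]
        v_t = (rho_p - rho_f) * g * d_p**2 / (18.0 * mu_f)      # terminal velocity [m/s]
        re_p = rho_f * v_t * d_p / mu_f                         # particle Reynolds number
        return tau_p, v_t, re_p

    # 10-micron glass bead settling in air at ambient conditions
    tau_p, v_t, re_p = stokes_particle(d_p=10e-6, rho_p=2500.0, rho_f=1.2, mu_f=1.8e-5)
    assert re_p < 1.0, "outside the Stokes regime; a drag correction would be needed"
    ```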

  12. A Stiffness Reduction Method for efficient absorption of waves at boundaries for use in commercial Finite Element codes.

    Science.gov (United States)

    Pettit, J R; Walker, A; Cawley, P; Lowe, M J S

    2014-09-01

    Commercially available Finite Element packages are being used increasingly for modelling elastic wave propagation problems. Demand for improved capability has resulted in a drive to maximise the efficiency of the solver whilst maintaining a reliable solution. Modelling waves in unbound elastic media to high levels of accuracy presents a challenge for commercial packages, requiring the removal of unwanted reflections from model boundaries. For time domain explicit solvers, Absorbing Layers by Increasing Damping (ALID) have proven successful because they offer flexible application to modellers and, unlike the Perfectly Matched Layers (PMLs) approach, they are readily implemented in most commercial Finite Element software without requiring access to the source code. However, despite good overall performance, this technique requires the spatial model to extend significantly outside the domain of interest. Here, a Stiffness Reduction Method (SRM) has been developed that operates within a significantly reduced spatial domain. The technique is applied by altering the damping and stiffness matrices of the system, inducing decay of any incident wave. Absorbing region variables are expressed as a function of known model constants, helping to apply the technique to generic elastodynamic problems. The SRM has been shown to perform significantly better than ALID, with results confirmed by both numerical and analytical means. Copyright © 2013 Elsevier B.V. All rights reserved.
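
    The SRM formulation itself is not given in the record. The sketch below only illustrates the general idea of an absorbing region in which damping is ramped up gradually (closer in spirit to ALID than to the SRM), using a one-dimensional finite-difference wave model with invented parameters:

    ```python
    import numpy as np

    # 1D wave equation u_tt = c^2 u_xx - damping(x) * u_t, explicit finite differences.
    # The damping ramp mimics an ALID-style absorbing region; this is a conceptual
    # sketch, not the Stiffness Reduction Method of the paper.
    nx, nt = 600, 1200
    c, dx = 1.0, 1.0
    dt = 0.5 * dx / c                        # CFL-stable time step
    x = np.arange(nx) * dx

    damping = np.zeros(nx)
    layer = x > 0.75 * x[-1]                 # absorbing layer in the last quarter
    xi = (x[layer] - 0.75 * x[-1]) / (0.25 * x[-1])
    damping[layer] = 2.0 * xi**3             # cubic ramp avoids an abrupt impedance jump

    u_prev = np.exp(-((x - 100.0) / 10.0) ** 2)   # initial Gaussian pulse
    u = u_prev.copy()
    for _ in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u_next = (2.0 * u - u_prev + dt**2 * c**2 * lap
                  - dt * damping * (u - u_prev))
        u_prev, u = u, u_next
    # After the right-going pulse enters the layer, little energy is reflected
    # back into the region of interest.
    ```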

  13. Summary Report for ASC L2 Milestone #4782: Assess Newly Emerging Programming and Memory Models for Advanced Architectures on Integrated Codes

    Energy Technology Data Exchange (ETDEWEB)

    Neely, J. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hornung, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Black, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Robinson, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-09-29

    This document serves as a detailed companion to the powerpoint slides presented as part of the ASC L2 milestone review for Integrated Codes milestone #4782 titled “Assess Newly Emerging Programming and Memory Models for Advanced Architectures on Integrated Codes”, due on 9/30/2014, and presented for formal program review on 9/12/2014. The program review committee is represented by Mike Zika (A Program Project Lead for Kull), Brian Pudliner (B Program Project Lead for Ares), Scott Futral (DEG Group Lead in LC), and Mike Glass (Sierra Project Lead at Sandia). This document, along with the presentation materials, and a letter of completion signed by the review committee will act as proof of completion for this milestone.

  14. PyLith: A Finite-Element Code for Modeling Quasi-Static and Dynamic Crustal Deformation

    Science.gov (United States)

    Aagaard, B.; Williams, C. A.; Knepley, M. G.

    2011-12-01

    We have developed open-source finite-element software for 2-D and 3-D dynamic and quasi-static modeling of crustal deformation. This software, PyLith (current release is version 1.6) can be used for quasi-static viscoelastic modeling, dynamic spontaneous rupture and/or ground-motion modeling. Unstructured and structured finite-element discretizations allow for spatial scales ranging from tens of meters to hundreds of kilometers with temporal scales in dynamic problems ranging from milliseconds to minutes and temporal scales in quasi-static problems ranging from minutes to thousands of years. PyLith development is part of the NSF funded Computational Infrastructure for Geodynamics (CIG) and the software runs on a wide variety of platforms (laptops, workstations, and Beowulf clusters). Binaries (Linux, Darwin, and Windows systems) and source code are available from geodynamics.org. PyLith uses a suite of general, parallel, graph data structures called Sieve for storing and manipulating finite-element meshes. This permits use of a variety of 2-D and 3-D cell types including triangles, quadrilaterals, hexahedra, and tetrahedra. Current PyLith features include prescribed fault ruptures with multiple earthquakes and aseismic creep, spontaneous fault ruptures with a variety of fault constitutive models, time-dependent Dirichlet and Neumann boundary conditions, absorbing boundary conditions, time-dependent point forces, and gravitational body forces. PyLith supports infinitesimal and small strain formulations for linear elastic rheologies, linear and generalized Maxwell viscoelastic rheologies, power-law viscoelastic rheologies, and Drucker-Prager elastoplastic rheologies. Current software development focuses on coupling quasi-static and dynamic simulations to resolve multi-scale deformation across the entire seismic cycle and the coupling of elasticity to heat and/or fluid flow.
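
    As a small illustration of one of the rheologies listed above (and not of PyLith's input format), the stress relaxation of a linear Maxwell viscoelastic material held at constant strain can be evaluated directly:

    ```python
    import numpy as np

    def maxwell_relaxation(t, strain, shear_modulus, viscosity):
        """Shear stress history of a linear Maxwell viscoelastic material held at
        constant strain: sigma(t) = 2*mu*strain*exp(-t/tau), with tau = eta/mu.
        A textbook illustration of one rheology listed above, not PyLith input."""
        tau = viscosity / shear_modulus          # Maxwell relaxation time [s]
        return 2.0 * shear_modulus * strain * np.exp(-t / tau)

    # Crust-like placeholder values: mu = 30 GPa, eta = 1e19 Pa*s -> tau of roughly a decade
    year = 365.25 * 24.0 * 3600.0
    t = np.linspace(0.0, 100.0, 201) * year
    sigma = maxwell_relaxation(t, strain=1e-6, shear_modulus=3.0e10, viscosity=1.0e19)
    ```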

  15. Software architecture 1

    CERN Document Server

    Oussalah , Mourad Chabane

    2014-01-01

    Over the past 20 years, software architectures have significantly contributed to the development of complex and distributed systems. Nowadays, it is recognized that one of the critical problems in the design and development of any complex software system is its architecture, i.e. the organization of its architectural elements. Software Architecture presents the software architecture paradigms based on objects, components, services and models, as well as the various architectural techniques and methods, the analysis of architectural qualities, models of representation of architectural template

  16. Software architecture 2

    CERN Document Server

    Oussalah, Mourad Chabanne

    2014-01-01

    Over the past 20 years, software architectures have significantly contributed to the development of complex and distributed systems. Nowadays, it is recognized that one of the critical problems in the design and development of any complex software system is its architecture, i.e. the organization of its architectural elements. Software Architecture presents the software architecture paradigms based on objects, components, services and models, as well as the various architectural techniques and methods, the analysis of architectural qualities, models of representation of architectural templa

  17. Software requirements, design, and verification and validation for the FEHM application - a finite-element heat- and mass-transfer code

    International Nuclear Information System (INIS)

    Dash, Z.V.; Robinson, B.A.; Zyvoloski, G.A.

    1997-07-01

    The requirements, design, and verification and validation of the software used in the FEHM application, a finite-element heat- and mass-transfer computer code that can simulate nonisothermal multiphase multicomponent flow in porous media, are described. The test of the DOE Code Comparison Project, Problem Five, Case A, which verifies that FEHM has correctly implemented heat and mass transfer and phase partitioning, is also covered

  18. Validation of finite element code DELFIN by means of the zero power experiences at the nuclear power plant of Atucha I

    International Nuclear Information System (INIS)

    Grant, C.R.

    1996-01-01

    Code DELFIN, developed in CNEA, treats the spatial discretization using heterogeneous finite elements, allowing a correct treatment of the continuity of fluxes and currents among elements and a more realistic representation of the hexagonal lattice of the reactor. It can be used for fuel management calculations, xenon oscillations and spatial kinetics. Using the HUEMUL code for cell calculation (which uses a generalized two dimensional collision probability theory and has the WIMS library incorporated in a data base), the zero power experiments performed in 1974 were recalculated. (author). 8 refs., 9 figs., 3 tabs

  19. Integrated application of in situ non destructive techniques for the evaluation of the architectural elements of monumental structures.

    Science.gov (United States)

    Fais, Silvana; Casula, Giuseppe; Cuccuru, Francesco; Ligas, Paola; Bianchi, Maria Giovanna; Marraccini, Alessandro

    2017-04-01

    The need to integrate different non-invasive geophysical datasets for an effective diagnostic process of the stone materials of cultural heritage buildings arises from the complexity of the intrinsic characteristics of the different types of stones and of their degradation processes. Consequently, integration between different geophysical techniques is required for the characterization of stone building materials. In order to perform the diagnostic process with different non-invasive techniques and interpret the different geophysical parameters realistically, it is necessary to link the petrophysical characteristics of the stones with the geophysical ones. In this study the complementary application of three different non-invasive techniques (terrestrial laser scanner (TLS), infrared thermography, and ultrasonic surface and tomography measurements) was carried out to analyse the conservation state and quality of the carbonate building materials of three inner columns of the precious old church of San Lorenzo in the historical city center of Cagliari (Sardinia). In previous works (Casula et al., 2009; Fais et al., 2015), the integrated application of TLS and ultrasonic techniques in particular has been demonstrated to be a powerful tool for evaluating the quality of stone building materials by resolving or limiting the uncertainties typical of all indirect methods. Thanks to the terrestrial laser scanner (TLS) technique it was possible to build 3D models of the investigated columns and of their surface geometrical anomalies. The TLS measurements were complemented by several ultrasonic in situ and laboratory tests in the 24 kHz - 54 kHz range. The ultrasonic parameters, especially the longitudinal and transversal velocities, allow information related to the mechanical properties of the materials to be recovered. A good correlation between the TLS surface geometrical anomalies and the ultrasonic velocity anomalies is evident at the surface and in the shallow parts of the investigated architectural elements

  20. Computer modelling of the WWER fuel elements under high burnup conditions by the computer codes PIN-W and RODQ2D

    International Nuclear Information System (INIS)

    Valach, M.; Zymak, J.; Svoboda, R.

    1997-01-01

    This paper presents the development status of the computer codes for modelling the thermomechanical behavior of WWER fuel elements under high burnup conditions at the Nuclear Research Institute Rez. The emphasis is on the analysis of results from the parametric calculations performed with the programmes PIN-W and RODQ2D, rather than on their detailed theoretical description. Several new optional correlations for the UO2 thermal conductivity, with the degradation effect caused by burnup, were implemented in both codes. Examples of the calculations performed document the differences between the previous and new versions of both programmes. Some recommendations for further development of the codes are given in conclusion. (author). 6 refs, 9 figs

  1. Evaluation of a Nonlinear Finite Element Program - ABAQUS.

    Science.gov (United States)

    1983-03-15

    review of detailed program architecture. Instead, discussion is focused on the overall coding structure and its database design. This is in contrast to...conditions, etc.; material model information; and plotting of undeformed geometry. History Input - These include data related to analysis procedure...efficiency of a finite element software is affected by several factors. These include: (i) Program architecture and coding style (ii) Numerical

  2. Architectural freedom and industrialised architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    to the building physics problems a new industrialized period has started based on lightweight elements basically made of wooden structures, faced with different suitable materials meant for individual expression for the specific housing area. It is the purpose of this article to widen up the different design...... to this systematic thinking of the building technique we get a diverse and functional architecture. Creating a new and clearer storytelling about new and smart system-based thinking behind the architectural expression....

  3. Distribution Pattern of Fe, Sr, Zr and Ca Elements as Particle Size Function in the Code River Sediments from Upstream to Downstream

    International Nuclear Information System (INIS)

    Sri Murniasih; Muzakky

    2007-01-01

    The analysis of the Fe, Sr, Zr and Ca concentrations in granular sediment from the upstream to the downstream reaches of the Code river has been carried out. The aim of this research is to determine the influence of particle size on the concentrations of Fe, Sr, Zr and Ca in the Code river sediments from upstream to downstream, and their distribution pattern. The instrument used was X-ray fluorescence with a Si(Li) detector. The analysis results show that Fe and Sr are found predominantly in the 150 - 90 μm particle size fraction, while Zr and Ca are found predominantly in the < 90 μm fraction. The distribution pattern of Fe, Sr, Zr and Ca in the Code river sediments tends to increase from upstream to downstream, following the conductivity. The concentrations of Fe, Sr, Zr and Ca are 1.49 ± 0.03 % - 5.93 ± 0.02 %; 118.20 ± 10.73 ppm - 468.21 ± 20.36 ppm; 19.81 ppm ± 0.86 ppm - 76.36 ± 3.02 ppm and 3.22 ± 0.25 % - 11.40 ± 0.31 %, respectively. (author)

  4. MORSMATEL: a rapid and efficient code to calculate vibration-rotational matrix elements for r-dependent operators of two Morse oscillators

    Science.gov (United States)

    Lopez-Piñeiro, A.; Sanchez, M. L.; Moreno, B.

    1992-06-01

    The computer program MORSMATEL has been developed to calculate vibrational-rotational matrix elements of several r-dependent operators of two Morse oscillators. This code is based on a set of recurrence relations which are valid for any value of the power and of the quantum numbers v and J of each oscillator.

  5. Architectural freedom and industrialized architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    the retrofitting of the existing concrete element blocks from the period. Related to the actual demands to the building physics problems a new industrialized period has started based on lightweight elements basically made of wooden structures and faced with different suitable materials meant for individual...... for retrofit design. If we add the question of the installations, e.g. ventilation, to this systematic thinking of building technique we get a diverse and functional architecture, thereby creating a new and clearer storytelling about new and smart system-based thinking behind architectural expression....

  6. Empirically-grounded Reference Architectures : A Proposal

    NARCIS (Netherlands)

    Galster, Matthias; Avgeriou, Paris

    2011-01-01

    A reference architecture describes core elements of the software architecture for systems that stem from the same domain. A reference architecture ensures interoperability of systems through standardization. It also facilitates the instantiation of new concrete architectures. However, we currently

  7. Changes in cis-regulatory elements of a key floral regulator are associated with divergence of inflorescence architectures

    NARCIS (Netherlands)

    Kusters, E.; Della Pina, S.; Castel, R.; Souer, E.; Koes, R.

    2015-01-01

    Higher plant species diverged extensively with regard to the moment (flowering time) and position (inflorescence architecture) at which flowers are formed. This seems largely caused by variation in the expression patterns of conserved genes that specify floral meristem identity (FMI), rather than

  8. Changes in cis-regulatory elements of a key floral regulator are associated with divergence of inflorescence architectures.

    NARCIS (Netherlands)

    Kusters, E.; Della Pina, S.; Castel, R.; Souer, E.J.; Koes, R.E.

    2015-01-01

    Higher plant species diverged extensively with regard to the moment (flowering time) and position (inflorescence architecture) at which flowers are formed. This seems largely caused by variation in the expression patterns of conserved genes that specify floral meristem identity (FMI), rather than

  9. Architectural freedom and industrialised architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    Architectural freedom and industrialized architecture. Inge Vestergaard, Associate Professor, Cand. Arch. Aarhus School of Architecture, Denmark Noerreport 20, 8000 Aarhus C Telephone +45 89 36 0000 E-mail inge.vestergaard@aarch.dk Based on the repetitive architecture from the "building boom" 1960...... customization, telling exactly the revitalized story about the change to a contemporary sustainable and better performing expression in direct relation to the given context. Through the last couple of years we have in Denmark been focusing on a more sustainable and low-energy building technique, which also include...... to the building physics problems a new industrialized period has started based on lightweight elements basically made of wooden structures, faced with different suitable materials meant for individual expression for the specific housing area. It is the purpose of this article to widen up the different design...

  10. Architectural freedom and industrialized architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    Based on the repetitive architecture from the “building boom” from 1960 to 1973, it is discussed how architects can handle these Danish element and montage buildings through the transformation to upgraded aesthetical, functional and energy-efficient architecture. The method used is analysis...... of cases, parallels to literature studies and client and producer interviews. The analysis compares best practice in Denmark and best practice in Austria. Modern architects accepted the fact that industrialized architecture told the story of repetition and monotony as basic condition. This article aims...... to explain that architecture can be thought of as a complex and diverse design through customization, telling exactly the revitalized story about the change to a contemporary sustainable and better performing expression in direct relation to the given context. Through the last couple of years we have...

  11. Toward Measures for Software Architectures

    National Research Council Canada - National Science Library

    Chastek, Gary; Ferguson, Robert

    2006-01-01

    .... Defining these architectural measures is very difficult. The software architecture deeply affects subsequent development and project management decisions, such as the breakdown of the coding tasks and the definition of the development increments...

  12. Production of Curved Precast Concrete Elements for Shell Structures and Free-form Architecture using the Flexible Mould Method

    OpenAIRE

    Schipper, H.R.; Grünewald, S.; Eigenraam, P.; Raghunath, P.; Kok, M.A.D.

    2014-01-01

    Free-form buildings tend to be expensive. By optimizing the production process, economical and well-performing precast concrete structures can be manufactured. In this paper, a method is presented that allows producing highly accurate double-curved elements without the need for milling two expensive mould surfaces per single element. The flexible mould is fully reusable and the benefits of applying self-compacting concrete are utilised. The flexible mould process works as follows: Thin concret...

  13. Development of a Non-Linear Element Code for the Improvement of Piezoelectric Actuator Design and Reliability

    National Research Council Canada - National Science Library

    Lynch, Christopher S; Landis, Chad

    2006-01-01

    .... The code has been used to conduct simulations of geometries in which the field distribution is inhomogeneous and results in local concentrations such as for interdigitated electrodes, for cofired...

  14. The frequency-dependent elements in the code SASSI: A bridge between civil engineers and the soil-structure interaction specialists

    International Nuclear Information System (INIS)

    Tyapin, Alexander

    2007-01-01

    After four decades of intensive studies of soil-structure interaction (SSI) effects in the field of NPP seismic analysis, there is a certain gap between SSI specialists and civil engineers. The results obtained using advanced SSI codes like SASSI are often rather far from the results obtained using general codes (though they match the experimental and field data). The reasons for the discrepancies are not clear because neither party can reproduce the results of the other party and investigate, step by step, the influence of the various factors causing the difference. As a result, civil engineers neither feel the SSI effects nor control them. The author believes that the SSI specialists should take the first step forward, (a) recalling 'viscous' damping in the structures versus the 'material' one and (b) convoluting all the SSI wave effects into the format of 'soil springs and dashpots', more or less clear to civil engineers. The tool for both tasks could be a special finite element with frequency-dependent stiffness developed by the author for the code SASSI. This element can represent both soil and structure in the SSI model and help to separate the various factors influencing the seismic response. In the paper the theory and some practical issues concerning the new element are presented
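
    To make the viscous-versus-material damping gap concrete: a frequency-independent "material" (hysteretic) damping term and a "viscous" dashpot give the same imaginary impedance only at one matching frequency. The fragment below is an illustrative sketch; the stiffness, damping ratio and matching frequency are assumed values, not data from the paper.

    ```python
    import numpy as np

    # Hysteretic ("material") damping: impedance K*(1 + 2i*xi), frequency-independent.
    # Viscous dashpot: impedance K + i*omega*C, matched to the hysteretic value at a
    # single reference frequency. All numbers below are purely illustrative.
    K, xi = 2.0e8, 0.05            # static stiffness [N/m], damping ratio
    omega_ref = 2 * np.pi * 5.0    # matching frequency: 5 Hz
    C = 2 * xi * K / omega_ref     # dashpot that reproduces hysteretic damping at omega_ref

    for f in (1.0, 5.0, 20.0):     # Hz
        w = 2 * np.pi * f
        z_hyst = K * (1 + 2j * xi)
        z_visc = K + 1j * w * C
        print(f"{f:5.1f} Hz  hysteretic Im = {z_hyst.imag:.3e}   viscous Im = {z_visc.imag:.3e}")
    ```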

  15. ARDISC (Argonne Dispersion Code): computer programs to calculate the distribution of trace element migration in partially equilibrating media

    International Nuclear Information System (INIS)

    Strickert, R.; Friedman, A.M.; Fried, S.

    1979-04-01

    A computer program (ARDISC, the Argonne Dispersion Code) is described which simulates the migration of nuclides in porous media and includes first order kinetic effects on the retention constants. The code allows for different absorption and desorption rates and solves the coupled migration equations by arithmetic reiterations. Input data needed are the absorption and desorption rates, equilibrium surface absorption coefficients, flow rates and volumes, and media porosities
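
    As a rough illustration of the kind of problem ARDISC addresses (this is not the ARDISC algorithm itself; the upwind scheme, grid spacing and rate constants are assumed for demonstration), a one-dimensional advection step coupled to first-order absorption/desorption kinetics can be written as follows.

    ```python
    import numpy as np

    def migrate(c0, velocity, dx, dt, k_abs, k_des, steps):
        """Explicit 1D advection of a dissolved tracer with first-order
        absorption onto and desorption from the solid phase.

        c -- concentration in solution per cell, s -- sorbed concentration.
        Illustrative upwind scheme only, not the ARDISC implementation.
        """
        c = np.array(c0, dtype=float)
        s = np.zeros_like(c)
        cfl = velocity * dt / dx
        assert cfl <= 1.0, "explicit upwind scheme requires CFL <= 1"
        for _ in range(steps):
            c[1:] -= cfl * (c[1:] - c[:-1])          # upwind advection
            exchange = (k_abs * c - k_des * s) * dt  # kinetic solution/solid exchange
            c -= exchange
            s += exchange
        return c, s

    c0 = np.zeros(100); c0[0] = 1.0                  # constant-concentration inlet cell
    c, s = migrate(c0, velocity=1e-5, dx=1e-3, dt=50.0, k_abs=1e-4, k_des=1e-5, steps=200)
    print("peak dissolved:", c.max(), "peak sorbed:", s.max())
    ```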

  16. Software architecture evolution

    DEFF Research Database (Denmark)

    Barais, Olivier; Le Meur, Anne-Francoise; Duchien, Laurence

    2008-01-01

    Software architectures must frequently evolve to cope with changing requirements, and this evolution often implies integrating new concerns. Unfortunately, when the new concerns are crosscutting, existing architecture description languages provide little or no support for this kind of evolution...... one particular framework named TranSAT, which addresses the above problems of software architecture evolution. TranSAT provides a new element in the software architecture description language, called an architectural aspect, for describing new concerns and their integration into an existing...... architecture. Following the early aspect paradigm, TranSAT allows the software architect to design a software architecture stepwise in terms of aspects at the design stage. It realises the evolution as the weaving of new architectural aspects into an existing software architecture....

  17. Characterization of the low-copy HERV-Fc family: evidence for recent integrations in primates of elements with coding envelope genes

    International Nuclear Information System (INIS)

    Benit, Laurence; Calteau, Alexandra; Heidmann, Thierry

    2003-01-01

    In a previous search based on the envelope gene, we had identified two related proviral elements that could not be included in identified ERV families. An in silico database screening associated with an in vivo polymerase chain reaction search using primers in the reverse transcriptase domain now allowed the identification of a series of related elements, found in limited numbers in simians. A phylogenetic analysis led to their inclusion in a new family of endogenous retroviruses with limited expansion, which we named ERV-Fc, and which is part of the enlarged ERV-F/H family. The human genome comprises only six HERV-Fc elements, among which two possess full-length coding envelope genes. A complete provirus was identified in the baboon, also disclosing a fully open envelope gene. Cloning of the sites orthologous to the envelope-coding human proviruses demonstrated the presence of the integrated proviruses in chimpanzee and gorilla, but not in orangutan. For the baboon element, the orthologous locus was found empty even in the phylogenetically most closely related macaque, again suggesting, together with the complete identity of its LTRs, 'recent' integration. The data presented are compatible with an evolutionary scheme in which the ERV-Fc proviruses would be the endogenous traces of an active retroviral element, possibly acting as an infectious retrovirus with low endogeneization potency, with evidence for integrations at two distinct periods of primate evolution

  18. Methodology for bus layout for topological quantum error correcting codes

    Energy Technology Data Exchange (ETDEWEB)

    Wosnitzka, Martin; Pedrocchi, Fabio L.; DiVincenzo, David P. [RWTH Aachen University, JARA Institute for Quantum Information, Aachen (Germany)

    2016-12-15

    Most quantum computing architectures can be realized as two-dimensional lattices of qubits that interact with each other. We take transmon qubits and transmission line resonators as promising candidates for qubits and couplers; we use them as basic building elements of a quantum code. We then propose a simple framework to determine the optimal experimental layout to realize quantum codes. We show that this engineering optimization problem can be reduced to the solution of standard binary linear programs. While solving such programs is an NP-hard problem, we propose a way to find scalable optimal architectures that require solving the linear program for a restricted number of qubits and couplers. We apply our methods to two celebrated quantum codes, namely the surface code and the Fibonacci code. (orig.)
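
    The reduction to binary linear programming can be illustrated with a toy set-cover-style layout problem solved through SciPy's MILP interface; the candidate couplers, costs and coverage matrix below are invented for demonstration and are not the formulation used in the paper.

    ```python
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Toy layout problem: choose the cheapest subset of candidate couplers so that
    # every required qubit pair is connected by at least one selected coupler.
    cost = np.array([3.0, 2.0, 2.0, 4.0, 1.0])      # cost of each candidate coupler (assumed)
    coverage = np.array([[1, 0, 0, 1, 0],           # coverage[i, j] = 1 if coupler j
                         [0, 1, 0, 1, 0],           # serves required pair i (assumed)
                         [0, 0, 1, 0, 1],
                         [1, 0, 0, 0, 1]])

    # Each required pair must be covered at least once: coverage @ x >= 1.
    constraints = LinearConstraint(coverage, lb=1, ub=np.inf)
    res = milp(c=cost, constraints=constraints,
               integrality=np.ones_like(cost),      # all decision variables integer (0/1)
               bounds=Bounds(0, 1))
    print("selected couplers:", np.flatnonzero(res.x > 0.5), "total cost:", res.fun)
    ```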

  19. MxaJ structure reveals a periplasmic binding protein-like architecture with unique secondary structural elements.

    Science.gov (United States)

    Myung Choi, Jin; Cao, Thinh-Phat; Wouk Kim, Si; Ho Lee, Kun; Haeng Lee, Sung

    2017-07-01

    MxaJ is a component of type II methanol dehydrogenase (MDH) that mediates electron transfer during methanol oxidation in methanotrophic bacteria. However, little is known about how MxaJ structurally cooperates with MDH and cytochrome cL. Here, we report for the first time the crystal structure of MxaJ. MxaJ consists of eight α-helices and six β-strands, and resembles the "bi-lobate" folding architecture found in periplasmic binding proteins. Distinctive features of MxaJ include prominent loops and a β-strand around the hinge region supporting the ligand-binding cavity, which might provide a more favorable framework for interacting with proteins rather than small molecules. Proteins 2017; 85:1379-1386. © 2017 Wiley Periodicals, Inc.

  20. Production of Curved Precast Concrete Elements for Shell Structures and Free-form Architecture using the Flexible Mould Method

    NARCIS (Netherlands)

    Schipper, H.R.; Grünewald, S.; Eigenraam, P.; Raghunath, P.; Kok, M.A.D.

    2014-01-01

    Free-form buildings tend to be expensive. By optimizing the production process, economical and well-performing precast concrete structures can be manufactured. In this paper, a method is presented that allows producing highly accurate double-curved elements without the need for milling two expensive

  1. RAP-3A Computer code for thermal and hydraulic calculations in steady state conditions for fuel element clusters

    International Nuclear Information System (INIS)

    Popescu, C.; Biro, L.; Iftode, I.; Turcu, I.

    1975-10-01

    The RAP-3A computer code is designed for calculating the main steady-state thermo-hydraulic parameters of multirod fuel clusters with liquid metal cooling. The programme provides a double-precision computation of the temperature and axial enthalpy distributions, the pressure losses, and the axial heat flux distributions in fuel clusters before boiling conditions occur. Physical and mathematical models as well as a sample problem are presented. The code is written in the FORTRAN-4 language and runs on an IBM-370/135 computer
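
    For orientation, the core of such a single-channel steady-state calculation is an energy balance that integrates the axial heat input into an enthalpy (and temperature) distribution. The sketch below is a minimal illustration with an assumed heat-rate shape, flow rate and property values; it is not the RAP-3A model.

    ```python
    import numpy as np

    # Minimal steady-state single-channel energy balance of the kind a
    # thermo-hydraulic cluster code evaluates; all numbers are illustrative.
    L, n = 1.0, 50                          # heated length [m], axial nodes
    z = np.linspace(0.0, L, n)
    q_lin = 2.0e4 * np.sin(np.pi * z / L)   # assumed axial linear heat rate [W/m]
    m_dot, cp = 0.5, 1300.0                 # coolant mass flow [kg/s], cp [J/(kg K)]
    T_in = 300.0                            # assumed inlet temperature [deg C]

    # Enthalpy rise from the cumulative heat input (trapezoidal integration).
    dh = np.concatenate(([0.0],
                         np.cumsum(0.5 * (q_lin[1:] + q_lin[:-1]) * np.diff(z)))) / m_dot
    T = T_in + dh / cp                      # axial coolant temperature distribution
    print(f"outlet temperature rise: {T[-1] - T[0]:.1f} K")
    ```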

  2. Software architecture evolution

    DEFF Research Database (Denmark)

    Barais, Olivier; Le Meur, Anne-Francoise; Duchien, Laurence

    2008-01-01

    Software architectures must frequently evolve to cope with changing requirements, and this evolution often implies integrating new concerns. Unfortunately, when the new concerns are crosscutting, existing architecture description languages provide little or no support for this kind of evolution....... The software architect must modify multiple elements of the architecture manually, which risks introducing inconsistencies. This chapter provides an overview, comparison and detailed treatment of the various state-of-the-art approaches to describing and evolving software architectures. Furthermore, we discuss...... one particular framework named TranSAT, which addresses the above problems of software architecture evolution. TranSAT provides a new element in the software architecture description language, called an architectural aspect, for describing new concerns and their integration into an existing...

  3. Structural evaluation method for class 1 vessels by using elastic-plastic finite element analysis in code case of JSME rules on design and construction

    International Nuclear Information System (INIS)

    Asada, Seiji; Hirano, Takashi; Nagata, Tetsuya; Kasahara, Naoto

    2008-01-01

    A structural evaluation method using elastic-plastic finite element analysis has been developed and published as a code case of the Rules on Design and Construction for Nuclear Power Plants (The First Part: Light Water Reactor Structural Design Standard) in the JSME Codes for Nuclear Power Generation Facilities. Its title is 'Alternative Structural Evaluation Criteria for Class 1 Vessels Based on Elastic-Plastic Finite Element Analysis' (NC-CC-005). This code case applies elastic-plastic analysis to the evaluation of such failure modes as plastic collapse, thermal ratchet, fatigue and so on. The advantages of this evaluation method are freedom from stress classification, consistent use of Mises stress, and applicability to complex 3-dimensional structures which are hard to treat by the conventional stress classification method. The evaluation method for plastic collapse has such variations as the Lower Bound Approach Method, the Twice-Elastic-Slope Method and the Elastic Compensation Method. Cyclic Yield Area (CYA) based on elastic analysis is applied to the screening evaluation of thermal ratchet instead of secondary stress evaluation, and elastic-plastic analysis is performed when the CYA screening criterion is not satisfied. Strain concentration factors can be directly calculated based on elastic-plastic analysis. (author)
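
    Of the plastic-collapse criteria named above, the Twice-Elastic-Slope construction is the easiest to illustrate: the collapse load is read off where the load-deflection curve meets a line through the origin whose slope is half the initial elastic slope (so that its angle from the load axis is twice that of the elastic line). The sketch below uses an assumed load-deflection curve and is not taken from the code case.

    ```python
    import numpy as np

    def twice_elastic_slope_load(delta, load):
        """Estimate the collapse load with the Twice-Elastic-Slope construction.

        A line through the origin with half the initial elastic slope makes an
        angle with the load axis twice that of the elastic line; the collapse
        load is taken where the load-deflection curve first falls onto or below
        that line. Purely illustrative; not text from the code case.
        """
        k = load[1] / delta[1]                    # initial elastic slope
        line = 0.5 * k * delta                    # twice-elastic-slope line
        below = (load <= line) & (delta > 0.0)    # exclude the trivial origin point
        if not below.any():
            raise ValueError("curve never reaches the collapse line")
        return load[np.argmax(below)]

    # Assumed elastic-plastic load-deflection response for demonstration.
    delta = np.linspace(0.0, 10.0, 200)
    load = 100.0 * (1.0 - np.exp(-delta / 2.0))
    print("collapse load estimate:", twice_elastic_slope_load(delta, load))
    ```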

  4. Core Structure Elements Architectures to Facilitate Construction and Secure Interconnection of Mobile Services Frameworks and Advanced IAM Systems

    Science.gov (United States)

    Karantjias, Athanasios; Polemi, Nineta

    The impressive penetration rates of electronic and mobile networks provide a unique opportunity for organizations to provide advanced e/m-services, accelerating their entrance into the digital society and strengthening their fundamental structure. Service Oriented Architectures (SOAs) are an acknowledged and promising technology for overcoming the complexity inherent in communication among multiple e-business actors across organizational domains. Nevertheless, the need for more privacy-aware transactions raises specific challenges that SOAs need to address, including the problems of managing identities and ensuring privacy in the e/m-environment. This article presents a targeted, user-centric, scalable and federated Identity Management System (IAM), called SecIdAM, and a mobile framework for building privacy-aware, interoperable, and secure mobile applications with respect to the way that the trust relationship among the involved entities, users and SOAs, is established. Finally, it analyzes a user-transparent m-process for obtaining an authentication and authorization token, issued from the SecIdAM as integrated in the IST European programme SWEB for the public sector.

  5. Development of finite element code for the analysis of coupled thermo-hydro-mechanical behaviors of a saturated-unsaturated medium

    International Nuclear Information System (INIS)

    Ohnishi, Y.; Shibata, H.; Kobsayashi, A.

    1987-01-01

    A model is presented which describes fully coupled thermo-hydro-mechanical behavior of a porous geologic medium. The mathematical formulation for the model utilizes the Biot theory for the consolidation and the energy balance equation. If the medium is in the condition of saturated-unsaturated flow, then the free surfaces are taken into consideration in the model. The model, incorporated in a finite element numerical procedure, was implemented in a two-dimensional computer code. The code was developed under the assumptions that the medium is poro-elastic and in the plane strain condition; that water in the ground does not change its phase; and that heat is transferred by conductive and convective flow. Analytical solutions pertaining to consolidation theory for soils and rocks, thermoelasticity for solids and hydrothermal convection theory provided verification of stress and fluid flow couplings, respectively, in the coupled model. Several types of problems are analyzed

  6. Verification of finite element analysis code CalculiX CrunchiX (ccx) in accordance with ISO 10211:2007

    OpenAIRE

    Nammi, Sathish K.; Shirvani, Hassan; Shirvani, Ayoub; Mauricette, Jean-Luc

    2014-01-01

    The design standard ISO 10211 provides four thermal problems: a square column, a composite structure, a multi-environment building envelope and an iron bar penetrating an insulation layer. Each test case is described in a standard summary, which includes benchmark target solutions. A numerical code is considered compliant with the aforementioned standard, provided the solutions for the test cases are within the tolerances for set physical point temperatures and total heat flow. Analyses w...

  7. ABCXYZ: vector potential (A) and magnetic field (B) code (C) for Cartesian (XYZ) geometry using general current elements. [In LRL TRAN for CDC > 600 computer

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, D.V.; Breazeal, J.; Finan, C.H.; Johnston, B.M.

    1976-09-14

    ABCXYZ is a computer code for obtaining the Cartesian components of the vector potential and the magnetic field on an observation grid from an arrangement of current-carrying wires. Arbitrary combinations of straight line segments, arcs, and loops are allowed in the specification of the currents. Arbitrary positions and orientations of the current-carrying elements are also allowed. Specification of the wire diameter permits the computation of well-defined fields, even in the interiors of the conductors. An optional feature generates magnetic field lines. Extensive graphical and printed output is available to the user including contour, grid-line, and field-line plots. 12 figures, 1 table.
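
    The field of each straight segment can be evaluated with the standard Biot-Savart result for a finite thin wire and summed over all segments. The fragment below is an illustrative sketch with assumed geometry and current; it is not the ABCXYZ implementation, which also handles arcs, loops and finite wire diameter.

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi

    def b_segment(p, a, b, current):
        """Magnetic field at point p from a finite straight segment a->b carrying
        `current`, via the standard thin-wire Biot-Savart result. Illustrative only."""
        ap, bp, ab = p - a, p - b, b - a
        cross = np.cross(ab, ap)
        d2 = np.dot(cross, cross) / np.dot(ab, ab)   # squared perpendicular distance
        if d2 < 1e-18:
            return np.zeros(3)                        # field point on the wire axis: skip
        cos1 = np.dot(ab, ap) / (np.linalg.norm(ab) * np.linalg.norm(ap))
        cos2 = np.dot(ab, bp) / (np.linalg.norm(ab) * np.linalg.norm(bp))
        mag = MU0 * current / (4 * np.pi * np.sqrt(d2)) * (cos1 - cos2)
        return mag * cross / np.linalg.norm(cross)

    # Long straight wire approximated by one segment (assumed geometry, I = 100 A).
    B = b_segment(np.array([0.0, 0.1, 0.0]),
                  np.array([0.0, 0.0, -50.0]), np.array([0.0, 0.0, 50.0]), current=100.0)
    print("B =", B, "T  vs analytic mu0*I/(2*pi*r) =", MU0 * 100.0 / (2 * np.pi * 0.1))
    ```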

  8. Utilizing elements of the CSAU phenomena identification and ranking table (PIRT) to qualify a PWR non-LOCA transients system code

    Energy Technology Data Exchange (ETDEWEB)

    Greene, K.R.; Fletcher, C.D.; Gottula, R.C.; Lindquist, T.R.; Stitt, B.D. [Framatome ANP, Richland, WA (United States)

    2001-07-01

    Licensing analyses of Nuclear Regulatory Commission (NRC) Standard Review Plan (SRP) Chapter 15 non-LOCA transients are an important part of establishing operational safety limits and design limits for nuclear power plants. The applied codes and methods are generally qualified using traditional methods of benchmarking and assessment, sample problems, and demonstration of conservatism. Rigorous formal methods for developing code and methodology have been created and applied to qualify realistic methods for Large Break Loss-of-Coolant Accidents (LBLOCAs). This methodology, Code Scaling, Applicability, and Uncertainty (CSAU), is a very demanding, resource-intensive process to apply. It would be challenging to apply a comprehensive and complete CSAU level of analysis, individually, to each of the more than 30 non-LOCA transients that comprise Chapter 15 events. However, certain elements of the process can be easily adapted to improve the quality of the codes and methods used to analyze non-LOCA transients. One of these elements is the Phenomena Identification and Ranking Table (PIRT). This paper presents the results of an informally constructed PIRT that applies to non-LOCA transients for Pressurized Water Reactors (PWRs) of the Westinghouse and Combustion Engineering design. A group of experts in thermal-hydraulics and safety analysis identified and ranked the phenomena. To begin the process, the PIRT was initially performed individually by each expert. Then through group interaction and discussion, a consensus was reached on both the significant phenomena and the appropriate ranking. The paper also discusses using the PIRT as an aid to qualify a 'conservative' system code and methodology. Once agreement was obtained on the phenomena and ranking, the table was divided into six functional groups, by nature of the transients, along the same lines as Chapter 15. Then, assessment and disposition of the significant phenomena was performed. The PIRT and

  9. An architectural decision modeling framework for service oriented architecture design

    OpenAIRE

    Zimmermann, Olaf

    2009-01-01

    In this thesis, we investigate whether reusable architectural decision models can support Service-Oriented Architecture (SOA) design. In the current state of the art, architectural decisions are captured ad hoc and retrospectively on projects; this is a labor-intensive undertaking without immediate benefits. On the contrary, we investigate the role reusable architectural decision models can play during SOA design: We treat recurring architectural decisions as first-class method elements and p...

  10. Time-history simulation of civil architecture earthquake disaster relief- based on the three-dimensional dynamic finite element method

    Directory of Open Access Journals (Sweden)

    Liu Bing

    2014-10-01

    Earthquake action is the main external factor influencing the long-term safe operation of civil construction, especially of high-rise buildings. Applying the time-history method to simulate the earthquake response of the rock surrounding a civil construction foundation is an effective approach for the anti-seismic study of civil buildings. Therefore, this paper develops a three-dimensional dynamic finite element numerical simulation system for civil building earthquake disasters. The system adopts the explicit central difference method. Strengthening characteristics of materials under high strain rate and damage characteristics of the surrounding rock under cyclic loading are considered. Then, a dynamic constitutive model of rock mass suitable for civil building aseismic analysis is put forward. At the same time, through the time-history simulation of the earthquake disaster at the Shenzhen Children’s Palace, the reliability and practicability of the system program are verified in the analysis of practical engineering problems.
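
    For reference, the explicit central difference recurrence mentioned above has the form sketched below for a single degree of freedom; the mass, damping and stiffness values are assumed, and this is only an illustration of the scheme, not the paper's simulation system.

    ```python
    import numpy as np

    def central_difference(m, c, k, f, u0, v0, dt, steps):
        """Explicit central-difference integration of m*u'' + c*u' + k*u = f(t).

        Single degree of freedom for clarity; a 3D FE code applies the same
        recurrence to the assembled matrices. Stability requires dt < 2/omega_max.
        Illustrative sketch with assumed parameters.
        """
        # Fictitious displacement one step before t = 0 (standard start-up formula).
        u_prev = u0 - dt * v0 + 0.5 * dt**2 * (f(0.0) - c * v0 - k * u0) / m
        u = u0
        history = [u0]
        for n in range(1, steps + 1):
            t = (n - 1) * dt
            a_eff = m / dt**2 + c / (2 * dt)
            rhs = f(t) - (k - 2 * m / dt**2) * u - (m / dt**2 - c / (2 * dt)) * u_prev
            u_prev, u = u, rhs / a_eff
            history.append(u)
        return np.array(history)

    # Free vibration of a damped oscillator (assumed parameters).
    resp = central_difference(m=1.0, c=0.1, k=400.0, f=lambda t: 0.0,
                              u0=0.01, v0=0.0, dt=0.001, steps=2000)
    print("displacement after 2 s:", resp[-1])
    ```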

  11. Sensitivity Analyses on Aging Elements for Wolsong Unit 1 Using RELAP/CANDU-SCAN Coupled Code System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Man Woong; Lee, Sang Kyu; Kim, Hyun Koon; Lee, Jong In [Korea Institute of Nuclear Safety, Taejon (Korea, Republic of); Hwang, Su Hyun [FNC Technology Co. Ltd, Taejon (Korea, Republic of)

    2006-07-01

    Wolsong Unit 1 has been in operation for 23 years, since 1983. As a result of long-term operation, structures, systems and components (SSCs) have in general suffered from aging. Up to now, however, research on aging has focused only on individual systems, components or structures, and a comprehensive safety analysis considering integrated aging effects for a nuclear power plant (NPP) has not yet been performed. Therefore, in this study, the aging effects on the safety analysis for Wolsong Unit 1 were identified and assessed using the RELAP/CANDU-SCAN coupled code system to provide the technical basis for continued operation.

  12. Architecture-Centric Evolution

    NARCIS (Netherlands)

    Zdun, Uwe; Avgeriou, Paris

    2006-01-01

    Despite the general acceptance of software architecture as a pivotal player in software engineering, software evolution techniques have been traditionally concentrated on the code level. The state-of-the-practice is comprised of refactoring and re-engineering techniques that focus on code artefacts.

  13. Color coding of televised task elements in remote work: a literature review with practical recommendations for a fuel reprocessing facility

    International Nuclear Information System (INIS)

    Clarke, M.M.; Preston-Anderson, A.

    1981-11-01

    The experimental literature on the effects of color visual displays was reviewed with particular reference to the performance of remote work in a Hot Experimental Facility (HEF) using real-scene closed-circuit television systems. It was also reviewed with more general reference to the broader range of work-related issues of operator learning and preference, and display specifications. Color has been shown to enhance the performance of tasks requiring search and location and may also enhance tracking/transportation tasks. However, both HEF large-volume searching and tracking can be computer-augmented, alleviating some of the necessity for a color code to assist an operator. Although color enhances long-term memory and is preferred to black and white displays, it has not been shown to have a specific advantage in the performance of unique tasks (where computer augmentation is more problematic and visual input to the operator is critical). Practical display specifications are discussed with reference to hue and size of color code, target size, ambient illumination, multiple displays, and coatings. The authors conclude that the disadvantages to color television in the HEF far outweigh any possible advantages and recommend the use of high-resolution black and white systems, unless future experiments unequivocally indicate that (1) color is superior to black and white for in-situ task performance or (2) it is imperative in terms of long-range psychological well-being

  14. Architecture Governance: The Importance of Architecture Governance for Achieving Operationally Responsive Ground Systems

    Science.gov (United States)

    Kolar, Mike; Estefan, Jeff; Giovannoni, Brian; Barkley, Erik

    2011-01-01

    Topics covered: (1) Why Governance and Why Now? (2) Characteristics of Architecture Governance (3) Strategic Elements (3a) Architectural Principles (3b) Architecture Board (3c) Architecture Compliance (4) Architecture Governance Infusion Process. Governance is concerned with decision making (i.e., setting directions, establishing standards and principles, and prioritizing investments). Architecture governance is the practice and orientation by which enterprise architectures and other architectures are managed and controlled at an enterprise-wide level

  15. FTS2000 network architecture

    Science.gov (United States)

    Klenart, John

    1991-01-01

    The network architecture of FTS2000 is graphically depicted. A map of network A topology is provided, with interservice nodes. Next, the four basic elements of the architecture are laid out. Then, the FTS2000 timeline is reproduced. A list of equipment supporting FTS2000 dedicated transmissions is given. Finally, access alternatives are shown.

  16. Architecture on Architecture

    DEFF Research Database (Denmark)

    Olesen, Karen

    2016-01-01

    This paper will discuss the challenges faced by architectural education today. It takes as its starting point the double commitment of any school of architecture: on the one hand the task of preserving the particular knowledge that belongs to the discipline of architecture, and on the other hand...... the obligation to prepare students to perform in a profession that is largely defined by forces outside that discipline. It will be proposed that the autonomy of architecture can be understood as a unique kind of information: as architecture’s self-reliance or knowledge-about itself. A knowledge...... that is not scientific or academic but is more like a latent body of data that we find embedded in existing works of architecture. This information, it is argued, is not limited by the historical context of the work. It can be thought of as a virtual capacity – a reservoir of spatial configurations that can...

  17. Numerical Simulation of Fragment Separation during Rock Cutting Using a 3D Dynamic Finite Element Analysis Code

    Directory of Open Access Journals (Sweden)

    Zhenguo Lu

    2017-01-01

    To predict fragment separation during rock cutting, previous studies on rock cutting interactions using simulation approaches, experimental tests, and theoretical methods were considered in detail. This study used the numerical code LS-DYNA (3D) to numerically simulate fragment separation. In the simulations, a damage material model and erosion criteria were used for the base rock, and the conical pick was designated a rigid material. The conical pick moved at varying linear speeds to cut the fixed base rock. For a given linear speed of the conical pick, numerical studies were performed for various cutting depths and mechanical properties of rock. The numerical simulation results demonstrated that the cutting forces and sizes of the separated fragments increased significantly with increasing cutting depth, compressive strength, and elastic modulus of the base rock. A strong linear relationship was observed between the mean peak cutting forces obtained from the numerical, theoretical, and experimental studies, with correlation coefficients of 0.698, 0.8111, 0.868, and 0.768. The simulation results also showed an exponential relationship between the specific energy and cutting depth and a linear relationship between the specific energy and compressive strength. Overall, LS-DYNA (3D) is effective and reliable for predicting the cutting performance of a conical pick.

  18. HEFF: A user's manual and guide for the HEFF code for thermal-mechanical analysis using the boundary-element method; Version 4.1: Yucca Mountain Site Characterization Project

    Energy Technology Data Exchange (ETDEWEB)

    St. John, C.M.; Sanjeevan, K. [Agapito (J.F.T.) and Associates, Inc., Grand Junction, CO (United States)

    1991-12-01

    The HEFF Code combines a simple boundary-element method of stress analysis with the closed-form solutions for constant or exponentially decaying heat sources in an infinite elastic body to obtain an approximate method for analysis of underground excavations in a rock mass with heat generation. This manual describes the theoretical basis for the code, the code structure, model preparation, and steps taken to assure that the code correctly performs its intended functions. The material contained within the report addresses the Software Quality Assurance Requirements for the Yucca Mountain Site Characterization Project. 13 refs., 26 figs., 14 tabs.

  19. The Transcriptional Specificity of NF-κB Dimers Is Coded within the κB DNA Response Elements

    Directory of Open Access Journals (Sweden)

    Vivien Ya-Fan Wang

    2012-10-01

    Nuclear factor κB (NF-κB) regulates gene expression by binding to specific DNA elements, known collectively as κB sites, that are contained within the promoters/enhancers of target genes. We found that the identity of the central base pair (bp) of κB sites profoundly affects the transcriptional activity of NF-κB dimers. RelA dimers prefer an A/T bp at this position for optimal transcriptional activation (A/T-centric) and discriminate against G/C-centric κB sites. The p52 homodimer, in contrast, activates transcription from G/C-centric κB sites in complex with Bcl3 but represses transcription from the A/T-centric sites. The p52:Bcl3 complex binds to these two classes of κB sites in distinct modes, permitting the recruitment of coactivator, corepressor, or both coactivator and corepressor complexes in promoters that contain G/C-, A/T-, or both G/C- and A/T-centric sites. Therefore, through sensing of bp differences within κB sites, NF-κB dimers modulate biological programs by activating, repressing, and altering the expression of effector genes.

  20. High-Fidelity RF Gun Simulations with the Parallel 3D Finite Element Particle-In-Cell Code Pic3P

    Energy Technology Data Exchange (ETDEWEB)

    Candel, A; Kabel, A.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Schussman, G.; Ko, K.; /SLAC

    2009-06-19

    SLAC's Advanced Computations Department (ACD) has developed the first parallel Finite Element 3D Particle-In-Cell (PIC) code, Pic3P, for simulations of RF guns and other space-charge dominated beam-cavity interactions. Pic3P solves the complete set of Maxwell-Lorentz equations and thus includes space charge, retardation and wakefield effects from first principles. Pic3P uses higher-order Finite Element methods on unstructured conformal meshes. A novel scheme for causal adaptive refinement and dynamic load balancing enables unprecedented simulation accuracy, aiding the design and operation of the next generation of accelerator facilities. Application to the Linac Coherent Light Source (LCLS) RF gun is presented.
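
    The particle side of a PIC cycle is commonly advanced with the Boris scheme; the sketch below pushes one particle in prescribed uniform fields with assumed values and only illustrates that step, whereas Pic3P solves the full Maxwell-Lorentz system self-consistently on finite-element meshes.

    ```python
    import numpy as np

    def boris_push(x, v, q, m, E, B, dt):
        """One Boris step for a non-relativistic particle in given E and B fields.

        Illustrative sketch for prescribed, uniform fields; all values assumed.
        """
        qmdt2 = q * dt / (2.0 * m)
        v_minus = v + qmdt2 * E                 # half electric kick
        t = qmdt2 * B
        v_prime = v_minus + np.cross(v_minus, t)
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_plus = v_minus + np.cross(v_prime, s) # magnetic rotation
        v_new = v_plus + qmdt2 * E              # second half electric kick
        return x + v_new * dt, v_new

    # Electron gyrating in a uniform magnetic field (assumed values).
    q, m = -1.602e-19, 9.109e-31
    x, v = np.zeros(3), np.array([1.0e6, 0.0, 0.0])
    E, B = np.zeros(3), np.array([0.0, 0.0, 0.01])
    for _ in range(1000):
        x, v = boris_push(x, v, q, m, E, B, dt=1.0e-12)
    print("speed drift after 1000 steps:", abs(np.linalg.norm(v) - 1.0e6), "m/s")
    ```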

  1. VISCOT: a two-dimensional and axisymmetric nonlinear transient thermoviscoelastic and thermoviscoplastic finite-element code for modeling time-dependent viscous mechanical behavior of a rock mass

    International Nuclear Information System (INIS)

    1983-04-01

    VISCOT is a non-linear, transient, thermal-stress finite-element code designed to determine the viscoelastic, viscoplastic, or elastoplastic deformation of a rock mass due to mechanical and thermal loading. The numerical solution of the nonlinear incremental equilibrium equations within VISCOT is performed by using an explicit Euler time-stepping scheme. The rock mass may be modeled as a viscoplastic or viscoelastic material. The viscoplastic material model can be described by a Tresca, von Mises, Drucker-Prager or Mohr-Coulomb yield criterion (with or without strain hardening) with an associated flow rule which can be a power or an exponential law. The viscoelastic material model within VISCOT is a temperature- and stress-dependent law which has been developed specifically for salt rock masses by Pfeifle, Mellegard and Senseny in ONWI-314 topical report (1981). Site-specific parameters for this creep law at the Richton, Permian, Paradox and Vacherie salt sites have been calculated and are given in ONWI-314 topical report (1981). A major application of VISCOT (in conjunction with a SCEPTER heat transfer code such as DOT) is the thermomechanical analysis of a rock mass such as salt in which significant time-dependent nonlinear deformations are expected to occur. Such problems include room- and canister-scale studies during the excavation, operation, and long-term post-closure stages in a salt repository. In Section 1.5 of this document the code custodianship and control are described along with the status of verification, validation and peer review of this report
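
    The explicit Euler time-stepping mentioned above can be illustrated with a scalar creep update; the Norton-type power law and all constants below are generic placeholders, not the salt creep law or the site-specific parameters from ONWI-314.

    ```python
    import numpy as np

    def creep_strain_history(stress, A, n, Q, R, T, dt, steps):
        """Explicit Euler integration of a power-law (Norton-type) creep rate,
        eps_dot = A * exp(-Q/(R*T)) * sigma**n, at constant stress and temperature.

        Generic placeholder parameters; illustrative only.
        """
        rate = A * np.exp(-Q / (R * T)) * stress**n
        eps = np.zeros(steps + 1)
        for i in range(steps):
            eps[i + 1] = eps[i] + rate * dt   # explicit Euler step
        return eps

    # One year of creep at constant stress, hourly time steps (assumed values).
    eps = creep_strain_history(stress=10.0e6, A=1.0e-36, n=4.5, Q=5.0e4,
                               R=8.314, T=320.0, dt=3600.0, steps=24 * 365)
    print("creep strain after one year:", eps[-1])
    ```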

  2. A Novel Analytical Strategy to Identify Fusion Transcripts between Repetitive Elements and Protein Coding-Exons Using RNA-Seq.

    Directory of Open Access Journals (Sweden)

    Tianyuan Wang

    Repetitive elements (REs) comprise 40-60% of the mammalian genome and have been shown to epigenetically influence the expression of genes through the formation of fusion transcripts (FTs). We previously showed that an intracisternal A particle forms an FT with the agouti gene in mice, causing obesity/type 2 diabetes. To determine the frequency of FTs genome-wide, we developed a TopHat-Fusion-based analytical pipeline to identify FTs with high specificity. We applied it to an RNA-seq dataset from the nucleus accumbens (NAc) of mice repeatedly exposed to cocaine. Cocaine was previously shown to increase the expression of certain REs in this brain region. Using this pipeline that can be applied to single- or paired-end reads, we identified 438 genes expressing 813 different FTs in the NAc. Although all types of studied repeats were present in FTs, simple sequence repeats were underrepresented. Most importantly, reverse-transcription and quantitative PCR validated the expression of selected FTs in an independent cohort of animals, which also revealed that some FTs are the prominent isoforms expressed in the NAc by some genes. In other RNA-seq datasets, developmental expression as well as tissue specificity of some FTs differed from their corresponding non-fusion counterparts. Finally, in silico analysis predicted changes in the structure of proteins encoded by some FTs, potentially resulting in gain or loss of function. Collectively, these results indicate the robustness of our pipeline in detecting these new isoforms of genes, which we believe provides a valuable tool to aid in better understanding the broad role of REs in mammalian cellular biology.

  3. Architectural slicing

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2013-01-01

    Architectural prototyping is a widely used practice, concerned with taking architectural decisions through experiments with lightweight implementations. However, many architectural decisions are only taken when systems are already (partially) implemented. This is problematic in the context

  4. Software Architecture Reconstruction Method, a Survey

    OpenAIRE

    Zainab Nayyar; Nazish Rafique

    2014-01-01

    Architecture reconstruction belongs to a reverse engineering process, in which we move from code to architecture level for reconstructing architecture. Software architectures are the blue prints of projects which depict the external overview of the software system. Mostly maintenance and testing cause the software to deviate from its original architecture, because sometimes for enhancing the functionality of a system the software deviates from its documented specifications, some new modules a...

  5. Synthesis and characterization of f-element iodate architectures with variable dimensionality, alpha- and beta-Am(IO3)3.

    Science.gov (United States)

    Runde, Wolfgang; Bean, Amanda C; Brodnax, Lia F; Scott, Brian L

    2006-03-20

    Two americium(III) iodates, beta-Am(IO3)3 (I) and alpha-Am(IO3)3 (II), have been prepared from the aqueous reactions of Am(III) with KIO4 at 180 degrees C and have been characterized by single-crystal X-ray diffraction, diffuse reflectance, and Raman spectroscopy. The alpha-form is consistent with the known structure type I of anhydrous lanthanide iodates. It consists of a three-dimensional network of pyramidal iodate groups bridging [AmO8] polyhedra where each of the americium ions is coordinated to eight iodate ligands. The beta-form reveals a novel architecture that is unknown within the f-element iodate series. beta-Am(IO3)3 exhibits a two-dimensional layered structure with nine-coordinate Am(III) atoms. Three crystallographically unique pyramidal iodate anions link the Am atoms into corrugated sheets that interact with one another through intermolecular IO3-...IO3- interactions forming dimeric I2O10 units. One of these anions utilizes all three O atoms to simultaneously bridge three Am atoms. The other two iodate ligands bridge only two Am atoms and have one terminal O atom. In contrast to alpha-Am(IO3)3, where the [IO3] ligands are solely corner-sharing with [AmO8] polyhedra, a complex arrangement of corner- and edge-sharing mu2- and mu3-[IO3] pyramids can be found in beta-Am(IO3)3. Crystallographic data: I, monoclinic, space group P2(1)/n, a = 8.871(3) A, b = 5.933(2) A, c = 15.315(4) A, beta = 96.948(4) degrees, V = 800.1(4) A(3), Z = 4; II, monoclinic, space group P2(1)/c, a = 7.243(2) A, b = 8.538(3) A, c = 13.513(5) A, beta = 100.123(6) degrees, V = 822.7(5) A(3), Z = 4.

  6. Architectural prototyping

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2004-01-01

    A major part of software architecture design is learning how specific architectural designs balance the concerns of stakeholders. We explore the notion of "architectural prototypes", correspondingly architectural prototyping, as a means of using executable prototypes to investigate stakeholders......' concerns with respect to a system under development. An architectural prototype is primarily a learning and communication vehicle used to explore and experiment with alternative architectural styles, features, and patterns in order to balance different architectural qualities. The use of architectural...... prototypes in the development process is discussed, and we argue that such prototypes can play a role throughout the entire process. The use of architectural prototypes is illustrated by three distinct cases of creating software systems. We argue that architectural prototyping can provide key insights...

  7. Architectural Prototyping

    DEFF Research Database (Denmark)

    Bardram, Jakob; Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2004-01-01

    A major part of software architecture design is learning how specific architectural designs balance the concerns of stakeholders. We explore the notion of "architectural prototypes", correspondingly architectural prototyping, as a means of using executable prototypes to investigate stakeholders......' concerns with respect to a system under development. An architectural prototype is primarily a learning and communication vehicle used to explore and experiment with alternative architectural styles, features, and patterns in order to balance different architectural qualities. The use of architectural...... prototypes in the development process is discussed, and we argue that such prototypes can play a role throughout the entire process. The use of architectural prototypes is illustrated by three distinct cases of creating software systems. We argue that architectural prototyping can provide key insights...

  8. Dynamic Weather Routes Architecture Overview

    Science.gov (United States)

    Eslami, Hassan; Eshow, Michelle

    2014-01-01

    Dynamic Weather Routes Architecture Overview presents the high-level software architecture of DWR, based on the CTAS software framework and the Direct-To automation tool. The document also covers external and internal data flows, required datasets, changes to the Direct-To software for DWR, collection of software statistics, and the code structure.

  9. Product Architecture Modularity Strategies

    DEFF Research Database (Denmark)

    Mikkola, Juliana Hsuan

    2003-01-01

    and how components and interfaces influence the degree of modularization are considered. In order to gain a better understanding of product architecture modularity as a strategy, a theoretical framework and propositions are drawn from various academic literature sources. Based on the literature review......The focus of this paper is to integrate various perspectives on product architecture modularity into a general framework, and also to propose a way to measure the degree of modularization embedded in product architectures. Various trade-offs between modular and integral product architectures......, the following key elements of product architecture are identified: components (standard and new-to-the-firm), interfaces (standardization and specification), degree of coupling, and substitutability. A mathematical function, termed modularization function, is introduced to measure the degree of modularization...

  10. Essential software architecture

    CERN Document Server

    Gorton, Ian

    2011-01-01

    Job titles like "Technical Architect" and "Chief Architect" nowadays abound in the software industry, yet many people suspect that "architecture" is one of the most overused and least understood terms in professional software development. Gorton's book tries to resolve this dilemma. It concisely describes the essential elements of knowledge and key skills required to be a software architect. The explanations encompass the essentials of architecture thinking, practices, and supporting technologies. They range from a general understanding of structure and quality attributes through technical i

  11. OS Friendly Microprocessor Architecture

    Science.gov (United States)

    2017-04-01

    have developed a computer architecture that reduces the high cost of a context switch and provides hardware-based computer security. A context switch...code to jump to a computer virus or other malware application. Caller ID does not have any authentication. A prank caller can easily spoof Caller ID...

  12. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  13. Transparency and Movement in Architecture

    OpenAIRE

    Estremadoyro, Veronica

    2003-01-01

    This project investigates transparency and movement as the main measured elements that define space. These elements seek to articulate distinct and memorable places throughout the building, acknowledging its unique setting along the Potomac River in Old Town Alexandria, Virginia. Architecture and nature as opposite elements combine here to define a building in which water, light and views become the main architectural agents set in dialog with the natural surroundings. An existi...

  14. Evolution of the Petasis-Ferrier union/rearrangement tactic: construction of architecturally complex natural products possessing the ubiquitous cis-2,6-substituted tetrahydropyran structural element.

    Science.gov (United States)

    Smith, Amos B; Fox, Richard J; Razler, Thomas M

    2008-05-01

    The frequent low abundance of architecturally complex natural products possessing significant bioregulatory properties mandates the development of rapid, efficient, and stereocontrolled synthetic tactics, not only to provide access to the biologically rare target but also to enable elaboration of analogues for the development of new therapeutic agents with improved activities and/or pharmacokinetic properties. In this Account, the genesis and evolution of the Petasis-Ferrier union/rearrangement tactic, in the context of natural product total syntheses, is described. The reaction sequence comprises a powerful tactic for the construction of the 2,6-cis-substituted tetrahydropyran ring system, a ubiquitous structural element often found in complex natural products possessing significant bioactivities. The three-step sequence, developed in our laboratory, extends two independent methods introduced by Ferrier and Petasis and now comprises: condensation between a chiral, nonracemic beta-hydroxy acid and an aldehyde to furnish a dioxanone; carbonyl olefination; and Lewis-acid-induced rearrangement of the resultant enol acetal to generate the 2,6-cis-substituted tetrahydropyranone system in a highly stereocontrolled fashion. To demonstrate the envisioned versatility and robustness of the Petasis-Ferrier union/rearrangement tactic in complex molecule synthesis, we exploited the method as the cornerstone in our now successful total syntheses of (+)-phorboxazole A, (+)-zampanolide, (+)-dactylolide, (+)-spongistatins 1 and 2, (-)-kendomycin, (-)-clavosolide A, and most recently, (-)-okilactomycin. Although each target comprises a number of synthetic challenges, this Account focuses on the motivation, excitement, and frustrations associated with the evolution and implementation of the Petasis-Ferrier union/rearrangement tactic. For example, during our (+)-phorboxazole A endeavor, we recognized and exploited the inherent pseudo symmetry of the 2,6-cis

  15. Robotic architectures

    CSIR Research Space (South Africa)

    Mtshali, M

    2010-01-01

    Full Text Available In the development of mobile robotic systems, a robotic architecture plays a crucial role in interconnecting all the sub-systems and controlling the system. The design of robotic architectures for mobile autonomous robots is a challenging...

  16. A surface code quantum computer in silicon.

    Science.gov (United States)

    Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L

    2015-10-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel, posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited.

  17. VLSI architecture

    Energy Technology Data Exchange (ETDEWEB)

    Randell, B.; Treleaven, P.C.

    1983-01-01

    This book is a collection of course papers which discusses the latest (1982) milestone of electronic building blocks and its effect on computer architecture. Contributions range from selecting a VLSI process technology to Japan's Fifth Generation Computer Architecture. Contents, abridged: VLSI and machine architecture. Graphic design aids: HED and FATFREDDY. On the LUCIFER system. Clocking of VLSI circuits. Decentralised computer architectures for VLSI. Index.

  18. Architecture & Environment

    Science.gov (United States)

    Erickson, Mary; Delahunt, Michael

    2010-01-01

    Most art teachers would agree that architecture is an important form of visual art, but they do not always include it in their curriculums. In this article, the authors share core ideas from "Architecture and Environment," a teaching resource that they developed out of a long-term interest in teaching architecture and their fascination with the…

  19. Minimalism in architecture: Architecture as a language of its identity

    Directory of Open Access Journals (Sweden)

    Vasilski Dragana

    2012-01-01

    Full Text Available Every architectural work is created on the principle that includes the meaning, and then this work is read like an artifact of the particular meaning. The resources by which the meaning is built, primarily susceptible to transformation, as well as the routing of understanding (decoding of the messages carried by a work of architecture), are the subject of semiotics and communication theories, which have played a significant role for architecture and the architect. Minimalism in architecture, as a paradigm of XXI century architecture, means searching for the essence located in the irreducible minimum. The inspired use of architectural units (archetypical elements), through the phantasm of simplicity, assumes the primary responsibility for providing the object identity, because it participates in language formation and therefore in its reading. Volume is formed by a clean language that builds the expression of fluid areas liberated of recharge needs. The reduced architectural language is appropriate to an age marked by electronic communications.

  20. Verification and benchmarking of MAGNUM-2D: a finite element computer code for flow and heat transfer in fractured porous media

    Energy Technology Data Exchange (ETDEWEB)

    Eyler, L.L.; Budden, M.J.

    1985-03-01

    The objective of this work is to assess prediction capabilities and features of the MAGNUM-2D computer code in relation to its intended use in the Basalt Waste Isolation Project (BWIP). This objective is accomplished through a code verification and benchmarking task. Results are documented which support correctness of prediction capabilities in areas of intended model application. 10 references, 43 figures, 11 tables.

  1. National Positioning, Navigation, and Timing Architecture

    National Research Council Canada - National Science Library

    Huested, Patrick; Popejoy, Paul D

    2008-01-01

    .... The strategy is supported by vectors, or enterprise architecture elements, for using multiple PNT-related phenomenologies and interchangeable PNT solutions, PNT and Communications synergy, and co...

  2. Architectural Prototyping in Industrial Practice

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2008-01-01

    Architectural prototyping is the process of using executable code to investigate stakeholders’ software architecture concerns with respect to a system under development. Previous work has established this as a useful and cost-effective way of exploration and learning of the design space of a system......, in addressing issues regarding quality attributes, in addressing architectural risks, and in addressing the problem of knowledge transfer and conformance. Little work has been reported so far on the actual industrial use of architectural prototyping. In this paper, we report from an ethnographical study...... prototypes include end-user or business related functionality rather than purely architectural functionality. Based on these observations we provide recommendations for effective industrial architectural prototyping....

  3. Enterprise Architecture Evaluation

    DEFF Research Database (Denmark)

    Andersen, Peter; Carugati, Andrea

    2014-01-01

    By being holistically preoccupied with coherency among organizational elements such as organizational strategy, business needs and the IT functions role in supporting the business, enterprise architecture (EA) has grown to become a core competitive advantage. Though EA is a maturing research area...

  4. The DACC system. Code burnup of cell for projection of the fuel elements in the power network PWR and BWR

    International Nuclear Information System (INIS)

    Cepraga, D.; Boeriu, St.; Gheorghiu, E.; Cristian, I.; Patrulescu, I.; Cimporescu, D.; Ciuvica, P.; Velciu, E.

    1975-01-01

    The calculation system DACC-5 is a zero-dimensional reactor physics code used to calculate the criticality and burn-up of light-water reactors. The code requires as input essentially extensive reactor parameters (fuel rod radius, water density, etc.). The nuclear constants (intensive parameters) are calculated with a five-group model (2 thermal and 3 fast groups). A fitting procedure is systematically employed to reduce the computation time of the code. Zero-dimensional burn-up calculations are made in an automatic way. Part one of the paper contains the code physical model and computer structure. Part two of the paper will contain tests of DACC-5 credibility for different light-water power lattices

  5. Catalyst Architecture

    DEFF Research Database (Denmark)

    ’Catalyst Architecture’ takes its point of departure in a broadened understanding of the role of architecture in relation to developmental problems in large cities. Architectural projects frame particular functions and via their form language, they can provide the user with an aesthetic experience....... The broadened understanding of architecture consists in that an architectural project, by virtue of its placement in the context and of its composition of programs, can have a mediating role in a positive or cultural development of the district in question. In this sense, we talk about architecture as catalyst...... cities on the planet have growing pains and social cohesiveness is under pressure from an increased difference between rich and poor, social segregation, ghettoes, immigration of guest workers and refugees, commercial mass tourism etc. In this context, it is important to ask which role architecture...

  6. Catalyst Architecture

    DEFF Research Database (Denmark)

    Kiib, Hans; Marling, Gitte; Hansen, Peter Mandal

    2014-01-01

    How can architecture promote the enriching experiences of the tolerant, the democratic, and the learning city - a city worth living in, worth supporting and worth investing in? Catalyst Architecture comprises architectural projects, which, by virtue of their location, context and their combination...... of programs, have a role in mediating positive social and/or cultural development. In this sense, we talk about architecture as a catalyst for: sustainable adaptation of the city’s infrastructure appropriate renovation of dilapidated urban districts strengthening of social cohesiveness in the city development...

  7. 77 FR 3070 - Electric Engineering, Architectural Services, Design Policies and Construction Standards

    Science.gov (United States)

    2012-01-23

    ... Engineering, Architectural Services, Design Policies and Construction Standards AGENCY: Rural Utilities..., engineering services and architectural services for transactions above the established threshold dollar levels... Code of Federal Regulations as follows: PART 1724--ELECTRIC ENGINEERING, ARCHITECTURAL SERVICES AND...

  8. French RSE-M and RCC-MR code appendices for flaw analysis: Presentation of the fracture parameters calculation-Part V: Elements of validation

    Energy Technology Data Exchange (ETDEWEB)

    Marie, S. [CEA Saclay, DEN/DM2S/SEMT/LISN, 91191 Gif sur Yvette Cedex (France)], E-mail: stephane.marie@cea.fr; Chapuliot, S.; Kayser, Y. [CEA Saclay, DEN/DM2S/SEMT/LISN, 91191 Gif sur Yvette Cedex (France); Lacire, M.H. [CEA Saclay, DEN/DDIN, 91191 Gif sur Yvette Cedex (France); Drubay, B. [CEA Saclay, DEN/DM2S/SEMT/LISN, 91191 Gif sur Yvette Cedex (France); Barthelet, B. [EDF/EPN, Site Cap Ampere, 1 place Pleyel 93207, Saint Denis Cedex 1 (France); Le Delliou, P. [EDF Pole Industrie-Division R and D, Site des Renardieres, Route de Sens, Ecuelles, 77250 Moret sur Loing Cedex (France); Rougier, V. [EDF/UTO, SIS/GAM, 6, avenue Montaigne, 93192 Noisy le Grand (France); Naudin, C. [EDF/SEPTEN, 12-14, avenue Dutrievoz, 69628 Villeurbanne Cedex (France); Gilles, P.; Triay, M. [AREVA ANP, Tour AREVA, 92084 Paris La Defense Cedex 16 (France)

    2007-10-15

    French nuclear codes include flaw assessment procedures: the RSE-M Code 'Rules for In-service Inspection of Nuclear Power Plant Components' and the RCC-MR code 'Design and Construction Rules for Mechanical Components of FBR Nuclear Islands and High Temperature Applications'. Analytical methods have been developed over the last 10 years in the framework of a collaboration between CEA, EDF and AREVA-NP, and by R and D actions involving CEA and IRSN. These activities have led to a unification of the common methods of the two codes. The calculation of fracture mechanics parameters, in particular the stress intensity factor K{sub I} and the J integral, has been widely developed for industrial configurations. All the developments have been integrated in the 2005 edition of RSE-M and in the 2007 edition of RCC-MR. This series of articles consists of 5 parts: the first part presents an overview of the methods proposed in the RCC-MR and RSE-M codes. Parts II-IV provide the compendia for specific components. The geometries are plates (part II), pipes (part III) and elbows (part IV). This part presents the validation of the methods, with details on the process followed for their development and on the evaluation of the accuracy of the proposed analytical methods.
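
    For orientation only, the simplest textbook form of the stress intensity factor that such compendia evaluate, for a flaw of depth a under an applied stress sigma, is

      K_I = Y \, \sigma \, \sqrt{\pi a}

    where Y is a generic geometry correction factor. The RSE-M and RCC-MR compendia themselves use tabulated influence coefficients fitted per component geometry; Y here is only an illustrative stand-in, not a code coefficient.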

  9. French RSE-M and RCC-MR code appendices for flaw analysis: Presentation of the fracture parameters calculation-Part V: Elements of validation

    International Nuclear Information System (INIS)

    Marie, S.; Chapuliot, S.; Kayser, Y.; Lacire, M.H.; Drubay, B.; Barthelet, B.; Le Delliou, P.; Rougier, V.; Naudin, C.; Gilles, P.; Triay, M.

    2007-01-01

    French nuclear codes include flaw assessment procedures: the RSE-M Code 'Rules for In-service Inspection of Nuclear Power Plant Components' and the RCC-MR code 'Design and Construction Rules for Mechanical Components of FBR Nuclear Islands and High Temperature Applications'. Analytical methods have been developed over the last 10 years in the framework of a collaboration between CEA, EDF and AREVA-NP, and by R and D actions involving CEA and IRSN. These activities have led to a unification of the common methods of the two codes. The calculation of fracture mechanics parameters, in particular the stress intensity factor KI and the J integral, has been widely developed for industrial configurations. All the developments have been integrated in the 2005 edition of RSE-M and in the 2007 edition of RCC-MR. This series of articles consists of 5 parts: the first part presents an overview of the methods proposed in the RCC-MR and RSE-M codes. Parts II-IV provide the compendia for specific components. The geometries are plates (part II), pipes (part III) and elbows (part IV). This part presents the validation of the methods, with details on the process followed for their development and on the evaluation of the accuracy of the proposed analytical methods

  10. The NASA Space Communications Data Networking Architecture

    Science.gov (United States)

    Israel, David J.; Hooke, Adrian J.; Freeman, Kenneth; Rush, John J.

    2006-01-01

    The NASA Space Communications Architecture Working Group (SCAWG) has recently been developing an integrated agency-wide space communications architecture in order to provide the necessary communication and navigation capabilities to support NASA's new Exploration and Science Programs. A critical element of the space communications architecture is the end-to-end Data Networking Architecture, which must provide a wide range of services required for missions ranging from planetary rovers to human spaceflight, and from sub-orbital space to deep space. Requirements for a higher degree of user autonomy and interoperability between a variety of elements must be accommodated within an architecture that necessarily features minimum operational complexity. The architecture must also be scalable and evolvable to meet mission needs for the next 25 years. This paper will describe the recommended NASA Data Networking Architecture, present some of the rationale for the recommendations, and will illustrate an application of the architecture to example NASA missions.

  11. Architectural Contestation

    NARCIS (Netherlands)

    Merle, J.

    2012-01-01

    This dissertation addresses the reductive reading of Georges Bataille's work done within the field of architectural criticism and theory which tends to set aside the fundamental ‘broken’ totality of Bataille's oeuvre and also to narrowly interpret it as a mere critique of architectural form,

  12. Minimalism in architecture: Abstract conceptualization of architecture

    Directory of Open Access Journals (Sweden)

    Vasilski Dragana

    2015-01-01

    Full Text Available Minimalism in architecture contains the idea of the minimum as a leading creative tendency to be considered and interpreted in working through the phenomena of empathy and abstraction. In Western culture, the root of this idea is found in the empathy of Wilhelm Worringer and the abstraction of Kasimir Malevich. In his dissertation, 'Abstraction and Empathy', Worringer presented his thesis on the psychology of style through which he explained the two opposing basic forms: abstraction and empathy. His conclusion on empathy as a psychological basis of observation expression is significant due to the verbal congruence with contemporary minimalist expression. His intuition was further enhanced by the figure of Malevich. Abstraction, as an expression of inner unfettered inspiration, has played a crucial role in the development of modern art and architecture of the twentieth century. Abstraction, which is one of the basic methods of learning in psychology (separating relevant from irrelevant features, Carl Jung), is used to discover ideas. Minimalism in architecture emphasizes the level of abstraction to which the individual functions are reduced. Different types of abstraction are present: in the form as well as the function of the basic elements: walls and windows. The case study is an example of Sou Fujimoto, who is unequivocal in his commitment to the autonomy of abstract conceptualization of architecture.

  13. Systemic Architecture

    DEFF Research Database (Denmark)

    Poletto, Marco; Pasquero, Claudia

    This is a manual investigating the subject of urban ecology and systemic development from the perspective of architectural design. It sets out to explore two main goals: to discuss the contemporary relevance of a systemic practice to architectural design, and to share a toolbox of informational...... design protocols developed to describe the city as a territory of self-organization. Collecting together nearly a decade of design experiments by the authors and their practice, ecoLogicStudio, the book discusses key disciplinary definitions such as ecologic urbanism, algorithmic architecture, bottom......-up or tactical design, behavioural space and the boundary of the natural and the artificial realms within the city and architecture. A new kind of "real-time world-city" is illustrated in the form of an operational design manual for the assemblage of proto-architectures, the incubation of proto...

  14. Data Element Registry Services

    Data.gov (United States)

    U.S. Environmental Protection Agency — Data Element Registry Services (DERS) is a resource for information about value lists (aka code sets / pick lists), data dictionaries, data elements, and EPA data...

  15. A Systematic Way to Develop the Software Architecture based on Architecture Vision

    OpenAIRE

    Faried, Muhammad Aamir; Ilyas, Mustafa

    2010-01-01

    In the software development life cycle, changes are inevitable. Designing the architecture of the software and writing the source code does not end the software life cycle. The software system evolves as changes in the environment and requirements are incorporated in the system. If these changes are not managed properly, the architecture of the software deteriorates and leads to architecture erosion. This study is an effort to address the problem of architecture erosion and to keep the softwa...

  16. Pan-cancer screen for mutations in non-coding elements with conservation and cancer specificity reveals correlations with expression and survival

    DEFF Research Database (Denmark)

    Hornshøj, Henrik; Nielsen, Morten Muhlig; Sinnott-Armstrong, Nicholas A

    2018-01-01

    Cancer develops by accumulation of somatic driver mutations, which impact cellular function. Mutations in non-coding regulatory regions can now be studied genome-wide and further characterized by correlation with gene expression and clinical outcome to identify driver candidates. Using a new two-...

  17. High Speed Viterbi Decoder Architecture

    DEFF Research Database (Denmark)

    Paaske, Erik; Andersen, Jakob Dahl

    1998-01-01

    The fastest commercially available Viterbi decoders for the (171,133) standard rate 1/2 code operate with a decoding speed of 40-50 Mbit/s (net data rate). In this paper we present a suitable architecture for decoders operating with decoding speeds of 150-300 Mbit/s.
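
    As a point of reference for the record above, the following is a minimal hard-decision Viterbi decoder sketch in Python for the standard (171,133) octal, constraint-length-7, rate-1/2 convolutional code. It is a plain software reference model under simple assumptions (hard decisions, list-based survivor memory, a zero-flushed tail); it does not reproduce the high-speed decoder architecture the paper presents, and all function names are illustrative.

      # Minimal hard-decision Viterbi decoder for the (171,133) octal,
      # constraint-length-7, rate-1/2 convolutional code. Reference model only.
      G1, G2 = 0o171, 0o133        # generator polynomials
      K = 7                        # constraint length
      N_STATES = 1 << (K - 1)      # 64 trellis states

      def parity(x):
          return bin(x).count("1") & 1

      def encode(bits):
          """Encode a bit list; returns the interleaved rate-1/2 output stream."""
          state, out = 0, []
          for b in bits:
              reg = (b << (K - 1)) | state          # newest bit enters at the top
              out += [parity(reg & G1), parity(reg & G2)]
              state = reg >> 1                      # shift register advances
          return out

      def viterbi_decode(received):
          """Hard-decision Viterbi decoding using a Hamming branch metric."""
          INF = float("inf")
          metric = [0.0] + [INF] * (N_STATES - 1)   # encoder starts in state 0
          paths = [[] for _ in range(N_STATES)]
          for i in range(0, len(received), 2):
              r1, r2 = received[i], received[i + 1]
              new_metric = [INF] * N_STATES
              new_paths = [None] * N_STATES
              for state in range(N_STATES):
                  if metric[state] == INF:
                      continue
                  for b in (0, 1):                  # hypothesised input bit
                      reg = (b << (K - 1)) | state
                      c1, c2 = parity(reg & G1), parity(reg & G2)
                      nxt = reg >> 1
                      m = metric[state] + (c1 != r1) + (c2 != r2)
                      if m < new_metric[nxt]:       # keep the survivor path
                          new_metric[nxt] = m
                          new_paths[nxt] = paths[state] + [b]
              metric, paths = new_metric, new_paths
          best = min(range(N_STATES), key=lambda s: metric[s])
          return paths[best]

      if __name__ == "__main__":
          msg = [1, 0, 1, 1, 0, 0, 1, 0] + [0] * (K - 1)  # tail bits flush the encoder
          coded = encode(msg)
          coded[3] ^= 1                                   # inject one channel error
          assert viterbi_decode(coded)[:8] == msg[:8]     # message is recovered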

  18. Islamic Architecture and Arch

    Directory of Open Access Journals (Sweden)

    Mohammed Mahbubur Rahman

    2015-01-01

    Full Text Available The arch, an essential architectural element since the early civilizations, permitted the construction of lighter walls and vaults, often covering a large span. Visually it was an important decorative feature that was transmitted from architectural decoration to other forms of art worldwide. In the early Islamic period, Muslims received influences from many civilizations, which they improved and re-introduced, helping to bring about the Renaissance. Arches appeared in the Mesopotamian, Indus, Egyptian, Babylonian, Greek and Assyrian civilizations, but the Romans applied the technique to a wide range of structures. The Muslims mastered the use and design of the arch, employed for structural and functional purposes, progressively meeting decorative and symbolic purposes as well. Islamic architecture is characterized by arches employed in all types of buildings, the most common use being in arcades. This paper discusses the process of assimilation and charts how the Muslims contributed to other civilizations.

  19. An Architecture For Boundary-Based Segmentation

    Science.gov (United States)

    Adffel, J. M.; Sanz, J. L. C.; Jain, A. K.; Current, K. W.

    1988-02-01

    A novel hardware architecture for extracting region boundaries in two raster scan passes through a binary image is presented. The first pass gathers statistics regarding the size of each object contour. This information is used to dynamically allocate available memory for storage of boundary codes. In the second raster pass, the same architecture constructs lists of Grid-Joint Codes to represent the perimeter pixels of each object. These codes, referred to variously as "crack" codes or "raster-chain" codes in the literature, are later decoded by the hardware to reproduce the ordered sequence of coordinates surrounding each object. This list of coordinates is useful for the variety of shape recognition and manipulation algorithms which utilize boundary information. We present results of software simulations of the VLSI architecture, along with measurements of the coding efficiency of the basic algorithm, and estimates of the overall chip complexity.
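
    The record above describes a hardware architecture that gathers boundary ("crack" or Grid-Joint) codes in two raster passes. As a software-only illustration of the underlying idea, here is a minimal Python sketch that emits directed boundary cracks for a binary image and chains them into ordered corner sequences. It assumes objects whose contours do not touch diagonally, it is not a model of the two-pass VLSI design, and the names are mine.

      # Minimal crack-code boundary extraction for a binary image: emit one
      # directed crack per exposed pixel side, then chain the cracks into closed
      # contours of (row, col) corner points.

      def crack_edges(img):
          """Directed boundary cracks, keyed by their starting corner."""
          rows, cols = len(img), len(img[0])

          def bg(r, c):
              return r < 0 or c < 0 or r >= rows or c >= cols or img[r][c] == 0

          edges = {}
          for r in range(rows):
              for c in range(cols):
                  if img[r][c] == 0:
                      continue
                  if bg(r - 1, c): edges[(r, c)] = (r, c + 1)          # top side
                  if bg(r, c + 1): edges[(r, c + 1)] = (r + 1, c + 1)  # right side
                  if bg(r + 1, c): edges[(r + 1, c + 1)] = (r + 1, c)  # bottom side
                  if bg(r, c - 1): edges[(r + 1, c)] = (r, c)          # left side
          return edges

      def boundaries(img):
          """Chain the cracks into ordered lists of boundary corner coordinates."""
          edges = crack_edges(img)
          contours = []
          while edges:
              start = next(iter(edges))
              contour, v = [start], edges.pop(start)
              while v != start:
                  contour.append(v)
                  v = edges.pop(v)
              contours.append(contour)
          return contours

      if __name__ == "__main__":
          image = [[0, 0, 0, 0],
                   [0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0]]
          for contour in boundaries(image):
              print(contour)               # ordered corners around the object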

  20. Architectural Theatricality

    DEFF Research Database (Denmark)

    Tvedebrink, Tenna Doktor Olsen

    This PhD thesis is motived by a personal interest in the theoretical, practical and creative qualities of architecture. But also a wonder and curiosity about the cultural and social relations architecture represents through its occupation with both the sciences and the arts. Inspired by present i...... with the material appearance of objects, but also the imaginary world of dreams and memories which are concealed with the communicative significance of intentions when designing the future super hospitals....... initiatives in Aalborg Hospital to overcome patient undernutrition by refurbishing eating environments, this thesis engages in an investigation of the interior architectural qualities of patient eating environments. The relevance for this holistic perspective, synthesizing health, food and architecture......, is the current building of a series of Danish ‘super hospitals’ and an increased focus among architectural practices on research-based knowledge produced with the architectural sub-disciplines Healing Architecture and Evidence-Based Design. The problem is that this research does not focus on patient eating...

  1. Humanizing Architecture

    DEFF Research Database (Denmark)

    Toft, Tanya Søndergaard

    2015-01-01

    The article proposes the urban digital gallery as an opportunity to explore the relationship between ‘human’ and ‘technology,’ through the programming of media architecture. It takes a curatorial perspective when proposing an ontological shift from considering media facades as visual spectacles...... agency and a sense of being by way of dematerializing architecture. This is achieved by way of programming the symbolic to provide new emotional realizations and situations of enlightenment in the public audience. This reflects a greater potential to humanize the digital in media architecture....

  2. Healing Architecture

    DEFF Research Database (Denmark)

    Folmer, Mette Blicher; Mullins, Michael; Frandsen, Anne Kathrine

    2012-01-01

    The project examines how architecture and design of space in the intensive unit promotes or hinders interaction between relatives and patients. The primary starting point is the relatives. Relatives’ support and interaction with their loved ones is important in order to promote the patients healing...... process. Therefore knowledge on how space can support interaction is fundamental for the architect, in order to make the best design solutions. Several scientific studies document that the hospital's architecture and design are important for human healing processes, including how the physical environment...... architectural and design solutions in order to improve quality of interaction between relative and patient in the hospital's intensive unit....

  3. Architectural technology

    DEFF Research Database (Denmark)

    2005-01-01

    The booklet offers an overall introduction to the Institute of Architectural Technology and its projects and activities, and an invitation to the reader to contact the institute or the individual researcher for further information. The research, which takes place at the Institute of Architectural...... Technology at the Royal Danish Academy of Fine Arts, School of Architecture, reflects a spread between strategic, goal-oriented pilot projects, commissioned by a ministry, a fund or a private company, and on the other hand projects which originate from strong personal interests and enthusiasm of individual...

  4. Enterprise architecture evaluation using architecture framework and UML stereotypes

    Directory of Open Access Journals (Sweden)

    Narges Shahi

    2014-08-01

    Full Text Available There is an increasing need for enterprise architecture in numerous organizations with complicated systems and various processes, as support for information technology and for organizational units whose elements maintain complex relationships increases. Enterprise architecture is so effective that its non-use in organizations is regarded as an institutional inability to manage information technology efficiently. The enterprise architecture process generally consists of three phases: strategic programming of information technology, enterprise architecture programming and enterprise architecture implementation. Each phase must be implemented sequentially, and a single flaw in any phase may result in a flaw in the whole architecture and, consequently, in extra costs and time. If a model is mapped for the issue and evaluated before enterprise architecture implementation in the second phase, possible flaws in the implementation process are prevented. In this study, the processes of enterprise architecture are illustrated through UML diagrams, and the architecture is evaluated in the programming phase by transforming the UML diagrams into Petri nets. The results indicate that the high costs of the implementation phase will be reduced.

  5. CALIPSO - a computer code for the calculation of fluiddynamics, thermohydraulics and changes of geometry in failing fuel elements of a fast breeder reactor

    International Nuclear Information System (INIS)

    Kedziur, F.

    1982-07-01

    The computer code CALIPSO was developed for the calculation of a hypothetical accident in an LMFBR (Liquid Metal Fast Breeder Reactor), where the failure of fuel pins is assumed. It calculates two-dimensionally the thermodynamics, fluiddynamics and changes in geometry of a single fuel pin and its coolant channel in a time period between failure of the pin and a state, at which the geometry is nearly destroyed. The determination of temperature profiles in the fuel pin cladding and the channel wall make it possible to take melting and freezing processes into account. Further features of CALIPSO are the variable channel cross section in order to model disturbances of the channel geometry as well as the calculation of two velocity fields including the consideration of virtual mass effects. The documented version of CALIPSO is especially suited for the calculation of the SIMBATH experiments carried out at the Kernforschungszentrum Karlsruhe, which simulate the above-mentioned accident. The report contains the complete documentation of the CALIPSO code: the modeling of the geometry, the equations used, the structure of the code and the solution procedure as well as the instructions for use with an application example. (orig.) [de

  6. The Simulation Intranet Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Holmes, V.P.; Linebarger, J.M.; Miller, D.J.; Vandewart, R.L.

    1998-12-02

    The Simulation Intranet (SI) is a term which is being used to describe one element of a multidisciplinary distributed and distance computing initiative known as DisCom2 at Sandia National Laboratory (http et al. 1998). The Simulation Intranet is an architecture for satisfying Sandia's long-term goal of providing an end-to-end set of services for high fidelity full physics simulations in a high performance, distributed, and distance computing environment. The Intranet Architecture group was formed to apply current distributed object technologies to this problem. For the hardware architectures and software models involved with the current simulation process, a CORBA-based architecture is best suited to meet Sandia's needs. This paper presents the initial design and implementation of this Intranet based on a three-tier Network Computing Architecture (NCA). The major parts of the architecture include: the Web Client, the Business Objects, and Data Persistence.

  7. Architectured Nanomembranes

    Energy Technology Data Exchange (ETDEWEB)

    Sturgeon, Matthew R. [Former ORNL postdoc; Hu, Michael Z. [ORNL

    2017-07-01

    This paper has reviewed the frontier field of “architectured membranes” that contains anisotropic oriented porous nanostructures of inorganic materials. Three example types of architectured membranes were discussed with some relevant results from our own research: (1) anodized thin-layer titania membranes on porous anodized aluminum oxide (AAO) substrates of different pore sizes, (2) porous glass membranes on alumina substrate, and (3) guest-host membranes based on infiltration of yttrium-stabilized zirconia inside the pore channels of AAO matrices.

  8. The Political Economy of Architectural Research : Dutch Architecture, Architects and the City, 2000-2012

    NARCIS (Netherlands)

    Djalali, A.

    2016-01-01

    The status of architectural research has not yet been clearly defined. Nevertheless, architectural research has surely become a core element in the profession of architecture. In fact, the tendency seems to be for architects to be less and less involved with building design and construction services, which

  9. An Empirical Investigation of Architectural Prototyping

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2010-01-01

    Architectural prototyping is the process of using executable code to investigate stakeholders’ software architecture concerns with respect to a system under development. Previous work has established this as a useful and cost-effective way of exploration and learning of the design space of a system...

  10. Architectural freedom and industrialized architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    to explain that architecture can be thought as a complex and diverse design through customization, telling exactly the revitalized storey about the change to a contemporary sustainable and better performing expression in direct relation to the given context. Through the last couple of years we have...... expression in the specific housing area. It is the aim of this article to expand the different design strategies which architects can use – to give the individual project attitudes and designs with architectural quality. Through the customized component production it is possible to choose different...... for retrofit design. If we add the question of the installations e.g. ventilation to this systematic thinking of building technique we get a diverse and functional architecture, thereby creating a new and clearer story telling about new and smart system based thinking behind architectural expression....

  11. PICNIC Architecture.

    Science.gov (United States)

    Saranummi, Niilo

    2005-01-01

    The PICNIC architecture aims at supporting inter-enterprise integration and the facilitation of collaboration between healthcare organisations. The concept of a Regional Health Economy (RHE) is introduced to illustrate the varying nature of inter-enterprise collaboration between healthcare organisations collaborating in providing health services to citizens and patients in a regional setting. The PICNIC architecture comprises a number of PICNIC IT Services, the interfaces between them and presents a way to assemble these into a functioning Regional Health Care Network meeting the needs and concerns of its stakeholders. The PICNIC architecture is presented through a number of views relevant to different stakeholder groups. The stakeholders of the first view are national and regional health authorities and policy makers. The view describes how the architecture enables the implementation of national and regional health policies, strategies and organisational structures. The stakeholders of the second view, the service viewpoint, are the care providers, health professionals, patients and citizens. The view describes how the architecture supports and enables regional care delivery and process management including continuity of care (shared care) and citizen-centred health services. The stakeholders of the third view, the engineering view, are those that design, build and implement the RHCN. The view comprises four sub views: software engineering, IT services engineering, security and data. The proposed architecture is founded into the main stream of how distributed computing environments are evolving. The architecture is realised using the web services approach. A number of well established technology platforms and generic standards exist that can be used to implement the software components. The software components that are specified in PICNIC are implemented in Open Source.

  12. SURF: a subroutine code to draw the axonometric projection of a surface generated by a scalar function over a discretized plane domain using finite element computations

    International Nuclear Information System (INIS)

    Giuliani, Giovanni; Giuliani, Silvano.

    1980-01-01

    The FORTRAN IV subroutine SURF has been designed to help visualise the results of Finite Element computations. It draws the axonometric projection of a surface generated in 3-dimensional space by a scalar function over a discretized plane domain. The most important characteristic of the routine is that it removes the hidden lines, and in this way it enables a clear view of the details of the generated surface
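
    SURF itself is a FORTRAN IV subroutine; purely as a modern analogue of the same visualisation task (a surface generated by a scalar function over a discretized plane domain, with hidden parts suppressed), a short Python/matplotlib sketch might look as follows. The field z is an assumed example, not a finite element result.

      # Draw the surface of a scalar field over a discretized plane domain;
      # hidden-surface handling is left to the renderer. Illustrative only.
      import numpy as np
      import matplotlib.pyplot as plt

      x, y = np.meshgrid(np.linspace(-2.0, 2.0, 60), np.linspace(-2.0, 2.0, 60))
      z = np.exp(-(x**2 + y**2)) * np.cos(2.0 * x)   # assumed example field

      fig = plt.figure()
      ax = fig.add_subplot(projection="3d")
      ax.plot_surface(x, y, z, cmap="viridis")       # renderer removes hidden faces
      ax.view_init(elev=30, azim=-60)                # axonometric-style viewpoint
      ax.set_xlabel("x")
      ax.set_ylabel("y")
      ax.set_zlabel("u(x, y)")
      plt.show()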

  13. Participation in the NEA/OECD MATIS-H benchmark: use of CFD codes applied to nuclear safety. Study of the spacer grids in the fuel elements

    International Nuclear Information System (INIS)

    Pena-Monferrer, C.; Chiva, S.; Munoz-cobo, J. L.; Vela, E.

    2012-01-01

    This paper describes participation in the MATIS-H benchmark, promoted by the NEA/OECD and KAERI, involving the study of turbulent flow in a rod bundle with spacers in an experimental facility. Its aim is the analysis of the hydraulic behavior of turbulent flow in the subchannels of the fuel elements, which is essential for the improvement of safety margins in normal and transient operations and for maximizing the use of nuclear energy through an optimal design of the grids.

  14. Architectural geometry

    KAUST Repository

    Pottmann, Helmut

    2014-11-26

    Around 2005 it became apparent in the geometry processing community that freeform architecture contains many problems of a geometric nature to be solved, and many opportunities for optimization which however require geometric understanding. This area of research, which has been called architectural geometry, meanwhile contains a great wealth of individual contributions which are relevant in various fields. For mathematicians, the relation to discrete differential geometry is significant, in particular the integrable system viewpoint. Besides, new application contexts have become available for quite some old-established concepts. Regarding graphics and geometry processing, architectural geometry yields interesting new questions but also new objects, e.g. replacing meshes by other combinatorial arrangements. Numerical optimization plays a major role but in itself would be powerless without geometric understanding. Summing up, architectural geometry has become a rewarding field of study. We here survey the main directions which have been pursued, we show real projects where geometric considerations have played a role, and we outline open problems which we think are significant for the future development of both theory and practice of architectural geometry.

  15. The DANTE Boltzmann transport solver: An unstructured mesh, 3-D, spherical harmonics algorithm compatible with parallel computer architectures

    International Nuclear Information System (INIS)

    McGhee, J.M.; Roberts, R.M.; Morel, J.E.

    1997-01-01

    A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic, transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion based preconditioner for scattering dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated
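
    The core solver described above is a preconditioned conjugate gradient method. As a generic illustration of that building block (not of DANTE's diffusion-based preconditioner), a Jacobi-preconditioned CG sketch in Python is shown below; the test matrix is an assumed 1-D model problem.

      # Generic Jacobi-preconditioned conjugate gradient sketch. The diagonal
      # preconditioner and the 1-D test matrix are illustrative stand-ins only.
      import numpy as np

      def pcg(A, b, tol=1e-10, max_iter=1000):
          """Solve A x = b for a symmetric positive definite matrix A."""
          x = np.zeros_like(b, dtype=float)
          r = b - A @ x
          M_inv = 1.0 / np.diag(A)              # Jacobi (diagonal) preconditioner
          z = M_inv * r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = M_inv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p         # update the search direction
              rz = rz_new
          return x

      if __name__ == "__main__":
          n = 50                                 # 1-D diffusion-like SPD test matrix
          A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
          b = np.ones(n)
          x = pcg(A, b)
          print(np.linalg.norm(A @ x - b))       # residual should be near zero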

  16. Multiprocessor architecture: Synthesis and evaluation

    Science.gov (United States)

    Standley, Hilda M.

    1990-01-01

    Multiprocessor computer architecture evaluation for structural computations is the focus of the research effort described. Results obtained are expected to lead to more efficient use of existing architectures and to suggest designs for new, application specific, architectures. The brief descriptions given outline a number of related efforts directed toward this purpose. The difficulty in analyzing an existing architecture or in designing a new computer architecture lies in the fact that the performance of a particular architecture, within the context of a given application, is determined by a number of factors. These include, but are not limited to, the efficiency of the computation algorithm, the programming language and support environment, the quality of the program written in the programming language, the multiplicity of the processing elements, the characteristics of the individual processing elements, the interconnection network connecting processors and non-local memories, and the shared memory organization covering the spectrum from no shared memory (all local memory) to one global access memory. These performance determiners may be loosely classified as being software or hardware related. This distinction is not clear or even appropriate in many cases. The effect of the choice of algorithm is ignored by assuming that the algorithm is specified as given. Effort directed toward the removal of the effect of the programming language and program resulted in the design of a high-level parallel programming language. Two characteristics of the fundamental structure of the architecture (memory organization and interconnection network) are examined.

  17. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Peiyuan [Univ. of Colorado, Boulder, CO (United States); Brown, Timothy [Univ. of Colorado, Boulder, CO (United States); Fullmer, William D. [Univ. of Colorado, Boulder, CO (United States); Hauser, Thomas [Univ. of Colorado, Boulder, CO (United States); Hrenya, Christine [Univ. of Colorado, Boulder, CO (United States); Grout, Ray [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sitaraman, Hariswaran [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-29

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approx. 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.

  18. Application architectures of enterprise information systems versus service oriented architecture

    Directory of Open Access Journals (Sweden)

    Milan Mišovič

    2007-01-01

    Full Text Available There are two different enterprise IS architectures: the older application architecture and the younger service oriented architecture. The application architecture, whose structural element is a classical web-based application, can accept a partial or complex solution of an enterprise IS. The first has problems with disturbances of data-process-communication integrity among IS applications. The second is convenient for large enterprises, not for small and intermediate ones. Classical web-based applications are too inflexible to accept the necessary changes arising from progress in the enterprise market-production environment. The service oriented architecture of an IS can be based on enterprise web services. Computerization of such small and flexible units can be provided by classical web services. A new web-based application is then constructed that plays the role of a structural unit for the service oriented architecture. This application consists of a sequence of calls to enterprise web services. Enterprise web services can easily accept the necessary changes arising from progress in the enterprise market-production environment. That is why the contemporary, younger service oriented architecture seems to be more acceptable for any enterprise than the older application architecture.

  19. Architectural Engineers

    DEFF Research Database (Denmark)

    Petersen, Rikke Premer

    engineering is addressed from two perspectives – as an educational response and an occupational constellation. Architecture and engineering are two of the traditional design professions and they frequently meet in the occupational setting, but at educational institutions they remain largely estranged....... The paper builds on a multi-sited study of an architectural engineering program at the Technical University of Denmark and an architectural engineering team within an international engineering consultancy based in Denmark. They are both responding to new tendencies within the building industry where...... the role of engineers and architects increasingly overlap during the design process, but their approaches reflect different perceptions of the consequences. The paper discusses some of the challenges that design education, not only within engineering, is facing today: young designers must be equipped...

  20. Architectural Anthropology

    DEFF Research Database (Denmark)

    Stender, Marie

    collaboration: How can qualitative anthropological approaches contribute to contemporary architecture? And just as importantly: What can anthropologists learn from architects’ understanding of spatial and material surroundings? Recent theoretical developments in anthropology stress the role of materials......Architecture and anthropology have always had a common focus on dwelling, housing, urban life and spatial organisation. Current developments in both disciplines make it even more relevant to explore their boundaries and overlaps. Architects are inspired by anthropological insights and methods......, while recent material and spatial turns in anthropology have also brought an increasing interest in design, architecture and the built environment. Understanding the relationship between the social and the physical is at the heart of both disciplines, and they can obviously benefit from further...

  1. Architectural Anthropology

    DEFF Research Database (Denmark)

    Stender, Marie

    Architecture and anthropology have always had a common focus on dwelling, housing, urban life and spatial organisation. Current developments in both disciplines make it even more relevant to explore their boundaries and overlaps. Architects are inspired by anthropological insights and methods......, while recent material and spatial turns in anthropology have also brought an increasing interest in design, architecture and the built environment. Understanding the relationship between the social and the physical is at the heart of both disciplines, and they can obviously benefit from further...... collaboration: How can qualitative anthropological approaches contribute to contemporary architecture? And just as importantly: What can anthropologists learn from architects’ understanding of spatial and material surroundings? Recent theoretical developments in anthropology stress the role of materials...

  2. Architectural Narratives

    DEFF Research Database (Denmark)

    Kiib, Hans

    2010-01-01

    a functional framework for these concepts, but tries increasingly to endow the main idea of the cultural project with a spatially aesthetic expression - a shift towards “experience architecture.” A great number of these projects typically recycle and reinterpret narratives related to historical buildings......In this essay, I focus on the combination of programs and the architecture of cultural projects that have emerged within the last few years. These projects are characterized as “hybrid cultural projects,” because they intend to combine experience with entertainment, play, and learning. This essay...... identifies new rationales related to this development, and it argues that “cultural planning” has increasingly shifted its focus from a cultural institutional approach to a more market-oriented strategy that integrates art and business. The role of architecture has changed, too. It not only provides...

  3. Architectural Anthropology

    DEFF Research Database (Denmark)

    Stender, Marie

    anthropology. On the one hand, there are obviously good reasons for developing architecture based on anthropological insights in local contexts and anthropologically inspired techniques for ‘collaborative formation of issues’. Houses and built environments are huge investments, their life expectancy...... and other spaces that architects are preoccupied with. On the other hand, the distinction between architecture and design is not merely one of scale. Design and architecture represent – at least in Denmark – also quite different disciplinary traditions and methods. Where designers develop prototypes......, architects tend to work with models and plans that are not easily understood by lay people. Further, many architects are themselves sceptical towards notions of user-involvement and collaborative design. They fear that the imagination of citizens and users is restricted to what they are already familiar with...

  4. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover the "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover we conjecture that the canonical partition satisfies such a hypothesis. Finally we consider also some relationships between coding partitions and varieties of codes.
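
    The record above builds on the notion of unique decipherability (UD). As a concrete reference point, here is a short Python sketch of the classical Sardinas-Patterson test, which decides whether a finite code is UD; it does not compute the coding partitions introduced in the paper, and the helper names are illustrative.

      # Classical Sardinas-Patterson test for unique decipherability of a
      # finite code given as a set of strings.

      def dangling_suffixes(codewords, suffixes):
          """One Sardinas-Patterson step: new dangling suffixes w.r.t. the code."""
          out = set()
          for s in suffixes:
              for c in codewords:
                  if s.startswith(c) and s != c:
                      out.add(s[len(c):])
                  if c.startswith(s) and s != c:
                      out.add(c[len(s):])
          return out

      def is_uniquely_decipherable(codewords):
          codewords = set(codewords)
          seen = set()
          current = dangling_suffixes(codewords, codewords)
          while current:
              if current & codewords:     # a codeword is a dangling suffix: not UD
                  return False
              if current <= seen:         # nothing new can ever appear: UD
                  return True
              seen |= current
              current = dangling_suffixes(codewords, current)
          return True

      if __name__ == "__main__":
          print(is_uniquely_decipherable({"0", "01", "11"}))   # True (a suffix code)
          print(is_uniquely_decipherable({"0", "01", "10"}))   # False: "010" = 0|10 = 01|0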

  5. Gregarious Data Re-structuring in a Many Core Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Shrestha, Sunil; Manzano Franco, Joseph B.; Marquez, Andres; Zuckerman, Stephane; Song, Shuaiwen; Gao, Guang R.

    2015-08-24

    In this paper, we have developed a new methodology that takes into consideration the access patterns from a single parallel actor (e.g. a thread), as well as the access patterns of “grouped” parallel actors that share a resource (e.g. a distributed Level 3 cache). We start with a hierarchical tile code for our target machine and apply a series of transformations at the tile level to improve data residence in a given memory hierarchy level. The contribution of this paper includes (a) collaborative data restructuring for group reuse and (b) a low-overhead transformation technique to improve access patterns and bring closely connected data elements together. Preliminary results on a many-core architecture, Tilera TileGX, show promising improvements over optimized OpenMP code (up to a 31% increase in GFLOPS) and over our own previous work on fine-grained runtimes (up to 16%) for selected kernels.
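
    As a generic illustration of the tile-level restructuring for data locality discussed above (and only that; it is not the hierarchical tiling or group-reuse methodology developed for the Tilera TileGX in the paper), a plain Python/NumPy loop-tiling sketch:

      # Compute C = A @ B tile by tile so each (tile x tile) block is reused
      # while it is still cache resident. Reference of the locality idea only.
      import numpy as np

      def matmul_tiled(A, B, tile=32):
          n, k = A.shape
          k2, m = B.shape
          assert k == k2
          C = np.zeros((n, m))
          for i0 in range(0, n, tile):
              for j0 in range(0, m, tile):
                  for k0 in range(0, k, tile):
                      # one block-level update; slicing trims the edge tiles
                      C[i0:i0 + tile, j0:j0 + tile] += (
                          A[i0:i0 + tile, k0:k0 + tile] @ B[k0:k0 + tile, j0:j0 + tile]
                      )
          return C

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          A, B = rng.random((128, 96)), rng.random((96, 64))
          assert np.allclose(matmul_tiled(A, B), A @ B)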

  6. Reframing Architecture

    DEFF Research Database (Denmark)

    Riis, Søren

    2013-01-01

    I would like to thank Prof. Stephen Read (2011) and Prof. Andrew Benjamin (2011) for both giving inspiring and elaborate comments on my article “Dwelling in-between walls: the architectural surround”. As I will try to demonstrate below, their two different responses not only supplement my article...... focuses on how the absence of an initial distinction might threaten the endeavour of my paper. In my reply to Read and Benjamin, I will discuss their suggestions and arguments, while at the same time hopefully clarifying the postphenomenological approach to architecture....

  7. Implementation of constitutive equations for creep damage mechanics into the ABAQUS finite element code - some practical cases in high temperature component design and life assessment

    International Nuclear Information System (INIS)

    Segle, P.; Samuelson, L.Aa.; Andersson, Peder; Moberg, F.

    1996-01-01

    Constitutive equations for creep damage mechanics are implemented into the finite element program ABAQUS using a user-supplied subroutine, UMAT. A modified Kachanov-Rabotnov constitutive equation which accounts for inhomogeneity in creep damage is used. With a user-defined material, a number of benchmark tests are analyzed for verification. In the cases where analytical solutions exist, the numerical results agree very well. In other cases, the creep damage evolution response appears to be realistic in comparison with laboratory creep tests. The appropriateness of using the creep damage mechanics concept in design and life assessment of high temperature components is demonstrated. 18 refs
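
    For orientation, the classical (unmodified) Kachanov-Rabotnov coupling of creep rate and damage rate takes the form below; the paper uses a modified version that accounts for damage inhomogeneity, so the constants and exponents here are placeholders only:

    $$\dot{\varepsilon}_{cr} = \frac{A\,\sigma^{n}}{(1-\omega)^{m}}, \qquad \dot{\omega} = \frac{B\,\sigma^{\chi}}{(1-\omega)^{\phi}},$$

    where $\omega \in [0,1)$ is the scalar damage variable, rupture being approached as $\omega \to 1$. In a UMAT-style implementation both rates are integrated at each material point alongside the stress update.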

  8. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    Summary of the most important points from the main report: Documentation and evaluation of Coding Class.

  9. Solar energy - design element

    International Nuclear Information System (INIS)

    Sudimac, Budimir S.; Dubljevic, Andjela N.

    2015-01-01

    The main focus of this study is the theoretical examination of the possibilities of applying technological, functional, aesthetic and energy resources in elements of urban design. Designed solutions are treated as part of the overall optimization of architectural elements and urban space, in which technological development enables the use of certain energy potentials of elements of urban design. The paper presents student hypothetical design models of urban architectural elements with integrated photovoltaic modules. The analytical procedure was applied in the analysis of student work in a seminar of the first year of master studies at the Faculty of Architecture. The aim is to improve students' awareness of the need for proper handling of energy and the possibility of integration with other architectural elements. The research and the results have enabled further work on the sustainable development of architectural elements with a focus on the use of solar energy by promoting the modern design approach. Key words: PV module, teaching, solar energy, urban design

  10. From green architecture to architectural green

    DEFF Research Database (Denmark)

    Earon, Ofri

    2011-01-01

    of green architecture. The paper argues that this greenification of facades is insufficient. The green is only a skin cladding the exterior envelope without having a spatial significance. Through the paper it is proposed to flip the order of words from green architecture to architectural green...... that describes the architectural exclusivity of this particular architecture genre. The adjective green expresses architectural qualities differentiating green architecture from none-green architecture. Currently, adding trees and vegetation to the building’s facade is the main architectural characteristics...

  11. PetriCode: A Tool for Template-Based Code Generation from CPN Models

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge

    2014-01-01

    levels of abstraction. The elements of the models are annotated with code generation pragmatics enabling PetriCode to use a template-based approach to generate code while keeping the models uncluttered from implementation artefacts. PetriCode is the realization of our code generation approach which has...

  12. 38 CFR 39.22 - Architectural design standards.

    Science.gov (United States)

    2010-07-01

    38 CFR — Pensions, Bonuses, and Veterans' Relief (revised as of 2010-07-01), Standards and Requirements for Project, § 39.22 Architectural design standards. The..., Ontario, CA 91761-2816. (a) Architectural and structural requirements—(1) Life Safety Code. Standards must...

  13. Accuracy Test of Software Architecture Compliance Checking Tools : Test Instruction

    NARCIS (Netherlands)

    Prof.dr. S. Brinkkemper; Dr. Leo Pruijt; C. Köppe; J.M.E.M. van der Werf

    2015-01-01

    Author supplied: "Abstract Software Architecture Compliance Checking (SACC) is an approach to verify conformance of implemented program code to high-level models of architectural design. Static SACC focuses on the modular software architecture and on the existence of rule violating dependencies

  14. Architecture Analysis

    NARCIS (Netherlands)

    Iacob, Maria-Eugenia; Jonkers, Henk; van der Torre, Leon; de Boer, Frank S.; Bonsangue, Marcello; Stam, Andries W.; Lankhorst, Marc M.; Quartel, Dick A.C.; Aldea, Adina; Lankhorst, Marc

    2017-01-01

    This chapter also explains what the added value of enterprise architecture analysis techniques is in addition to existing, more detailed, and domain-specific ones for business processes or software, for example. Analogous to the idea of using the ArchiMate enterprise modelling language to integrate

  15. Metabolistic Architecture

    DEFF Research Database (Denmark)

    2013-01-01

    Textile Spaces presents different approaches to using textile as a spatial definer and artistic medium. The publication collages images and text, art and architecture, science, philosophy and literature, process and product, past, present and future. It forms an insight into soft materials...

  16. Textile Architecture

    DEFF Research Database (Denmark)

    Heimdal, Elisabeth Jacobsen

    2010-01-01

    Textiles can be used as building skins, adding new aesthetic and functional qualities to architecture. Just like we as humans can put on a coat, buildings can also get dressed. Depending on our mood, or on the weather, we can change coat, and so can the building. But the idea of using textiles...

  17. The genes coding for the conversion of carbazole to catechol are flanked by IS6100 elements in Sphingomonas sp. strain XLDN2-5.

    Directory of Open Access Journals (Sweden)

    Zhonghui Gai

    Full Text Available BACKGROUND: Carbazole is a recalcitrant compound with a dioxin-like structure and possesses mutagenic and toxic activities. Bacteria respond to a xenobiotic by recruiting exogenous genes to establish a pathway to degrade the xenobiotic, which is necessary for their adaptation and survival. Usually, this process is mediated by mobile genetic elements such as plasmids, transposons, and insertion sequences. FINDINGS: The genes encoding the enzymes responsible for the degradation of carbazole to catechol via anthranilate were cloned, sequenced, and characterized from a carbazole-degrading Sphingomonas sp. strain XLDN2-5. The car gene cluster (carRAaBaBbCAc) and fdr gene were accompanied on both sides by two copies of IS6100 elements, and organized as IS6100::ISSsp1-ORF1-carRAaBaBbCAc-ORF8-IS6100-fdr-IS6100. Carbazole was converted by carbazole 1,9a-dioxygenase (CARDO, CarAaAcFdr), meta-cleavage enzyme (CarBaBb), and hydrolase (CarC) to anthranilate and 2-hydroxypenta-2,4-dienoate. The fdr gene encoded a novel ferredoxin reductase whose absence resulted in lower transformation activity of carbazole by CarAa and CarAc. The ant gene cluster (antRAcAdAbAa), which was involved in the conversion of anthranilate to catechol, was also sandwiched between two IS6100 elements as IS6100-antRAcAdAbAa-IS6100. Anthranilate 1,2-dioxygenase (ANTDO) was composed of a reductase (AntAa), a ferredoxin (AntAb), and a two-subunit terminal oxygenase (AntAcAd). Reverse transcription-PCR results suggested that the carAaBaBbCAc gene cluster, fdr, and the antRAcAdAbAa gene cluster were induced when strain XLDN2-5 was exposed to carbazole. Expression of both CARDO and ANTDO in Escherichia coli required the presence of the natural reductases for full enzymatic activity. CONCLUSIONS/SIGNIFICANCE: We predict that IS6100 might play an important role in the establishment of the carbazole-degrading pathway, which endows the host with the ability to adapt to novel compounds in the environment. The organization of the car

  18. Hijazi Architectural Object Library (haol)

    Science.gov (United States)

    Baik, A.; Boehm, J.

    2017-02-01

    As with many historical buildings around the world, building façades are of special interest; moreover, the details of their windows, stonework, and ornaments give each historic building its individual character. Each object of these buildings must be classified in an architectural object library. Recently, a number of research efforts have focused on this topic in Europe and Canada. From this standpoint, the Hijazi Architectural Objects Library (HAOL) has reproduced Hijazi elements as 3D computer models, which are modelled using a Revit Family (RFA). The HAOL is based on image surveys and point cloud data. Hijazi objects such as the Roshan and Mashrabiyah have become part of the vocabulary of many Islamic cities in the Hijazi region, such as Jeddah in Saudi Arabia, and even of a number of Islamic historic cities such as Istanbul and Cairo. These architectural vocabularies are the main source of the beauty of this heritage. However, there is a big gap in both the Islamic architectural library and the Hijazi architectural library in providing these unique elements. Besides, both Islamic and Hijazi architecture contain a huge amount of information which has not yet been digitally classified according to period and style. Due to this issue, this paper focuses on the development of Heritage BIM (HBIM) standards and the HAOL library to reduce the cost and delivery time for heritage and new projects that involve Hijazi architectural styles. Through this paper, the fundamentals of Hijazi architecture informatics will be provided by developing a framework for HBIM models and standards. This framework will provide schema and critical information, for example, classifying the different shapes, models, and forms of structure, construction, and ornamentation of Hijazi architecture in order to digitalize parametric building identity.

  19. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs advanced error correcting techniques.

  20. Towards advanced code simulators

    International Nuclear Information System (INIS)

    Scriven, A.H.

    1990-01-01

    The Central Electricity Generating Board (CEGB) uses advanced thermohydraulic codes extensively to support PWR safety analyses. A system has been developed to allow fully interactive execution of any code with graphical simulation of the operator desk and mimic display. The system operates in a virtual machine environment, with the thermohydraulic code executing in one virtual machine, communicating via interrupts with any number of other virtual machines each running other programs and graphics drivers. The driver code itself does not have to be modified from its normal batch form. Shortly following the release of RELAP5 MOD1 in IBM compatible form in 1983, this code was used as the driver for this system. When RELAP5 MOD2 became available, it was adopted with no changes needed in the basic system. Overall the system has been used for some 5 years for the analysis of LOBI tests, full scale plant studies and for simple what-if studies. For gaining rapid understanding of system dependencies it has proved invaluable. The graphical mimic system, being independent of the driver code, has also been used with other codes to study core rewetting, to replay results obtained from batch jobs on a CRAY2 computer system and to display suitably processed experimental results from the LOBI facility to aid interpretation. For the above work real-time execution was not necessary. Current work now centers on implementing the RELAP 5 code on a true parallel architecture machine. Marconi Simulation have been contracted to investigate the feasibility of using upwards of 100 processors, each capable of a peak of 30 MIPS to run a highly detailed RELAP5 model in real time, complete with specially written 3D core neutronics and balance of plant models. This paper describes the experience of using RELAP5 as an analyzer/simulator, and outlines the proposed methods and problems associated with parallel execution of RELAP5

  1. Coding as literacy metalithikum IV

    CERN Document Server

    Bühlmann, Vera; Moosavi, Vahid

    2015-01-01

    Recent developments in computer science, particularly "data-driven procedures" have opened a new level of design and engineering. This has also affected architecture. The publication collects contributions on Coding as Literacy by computer scientists, mathematicians, philosophers, cultural theorists, and architects. "Self-Organizing Maps" (SOM) will serve as the concrete reference point for all further discussions.

  2. MUF architecture /art London

    DEFF Research Database (Denmark)

    Svenningsen Kajita, Heidi

    2009-01-01

    On MUF architecture, including an interview with Liza Fior and Katherine Clarke, partners in muf architecture/art.

  3. God Save Architecture

    NARCIS (Netherlands)

    Pnina Avidar; Beatriz Ramo; dr. Marc Glaudemans

    2011-01-01

    First-year students of architecture studied contemporary architectural discourse and developed critical standpoints against the macho-style heroic interpretation of architecture's power to transform the world. The disproportionate focus on iconographic architecture is criticized. The book is a

  4. Code Modernization of VPIC

    Science.gov (United States)

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite the breakthroughs in the areas of mini-app development, portable performance, and cache-oblivious algorithms the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsics-based vectorisation with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learnt. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.
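
    As a generic illustration of the shift from hand-written intrinsics to compiler auto-vectorization mentioned above (this is not VPIC's actual particle push), a loop written so that the compiler can vectorize it for whatever SIMD width the target offers might look like the sketch below; `__restrict` is a common compiler extension asserting that the pointers do not alias.

    ```cpp
    #include <cstddef>

    // Written for auto-vectorization: no aliasing, unit stride, no loop-carried dependence.
    // '#pragma omp simd' asserts the loop is safe to vectorize; the compiler picks the SIMD width.
    void axpy(std::size_t n, double a,
              const double* __restrict x, double* __restrict y) {
        #pragma omp simd
        for (std::size_t i = 0; i < n; ++i)
            y[i] += a * x[i];
    }
    ```

    Because no architecture-specific intrinsics appear, the same source can target successive hardware generations, which is the portability argument made in the abstract.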

  5. Simulations of dynamic crack propagation in brittle materials using nodal cohesive forces and continuum damage mechanics in the distinct element code LDEC

    Energy Technology Data Exchange (ETDEWEB)

    Block, G I; Rubin, M B; Morris, J P; Berryman, J G

    2006-08-23

    Experimental data indicates that the limiting crack speed in brittle materials is less than the Rayleigh wave speed. One reason for this is that dynamic instabilities produce surface roughness and microcracks that branch from the main crack. These processes increase dissipation near the crack tip over a range of crack speeds. When the scale of observation (or mesh resolution) becomes much larger than the typical sizes of these features, effective-medium theories are required to predict the coarse-grained fracture dynamics. Two approaches to modeling these phenomena are described and used in numerical simulations. The first approach is based on cohesive elements that utilize a rate-dependent weakening law for the nodal cohesive forces. The second approach uses a continuum damage model which has a weakening effect that lowers the effective Rayleigh wave speed in the material surrounding the crack tip. Simulations in this paper show that while both models are capable of increasing the energy dissipated during fracture when the mesh size is larger than the process zone size, only the continuum damage model is able to limit the crack speed over a range of applied loads. Numerical simulations of straight-running cracks demonstrate good agreement between the theoretical predictions of the combined models and experimental data on dynamic crack propagation in brittle materials. Simulations that model crack branching are also presented.
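
    For context, a rate-independent linear-softening cohesive law is the usual baseline for the first approach described above (the paper itself uses a rate-dependent weakening law, so this is illustrative only):

    $$T(\delta) = \sigma_{c}\left(1 - \frac{\delta}{\delta_{c}}\right), \quad 0 \le \delta \le \delta_{c}, \qquad G_{c} = \int_{0}^{\delta_{c}} T(\delta)\,d\delta = \tfrac{1}{2}\,\sigma_{c}\,\delta_{c},$$

    where $T$ is the traction transmitted across the crack faces, $\delta$ the opening, $\sigma_{c}$ the cohesive strength, $\delta_{c}$ the critical opening, and $G_{c}$ the fracture energy dissipated per unit area of new crack surface.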

  6. Architectural Drawing

    DEFF Research Database (Denmark)

    Steinø, Nicolai

    2018-01-01

    without being able to visualize it in drawing. Architectural design, in other words, to a large extent happens through drawing. Hence, to neglect drawing skills is to neglect an important capacity to create architectural design. While the current-day argument for the depreciation of drawing skills...... is that computers can represent graphic ideas both faster and better than most medium-skilled draftsmen, drawing in design is not only about representing final designs. In fact, several steps involving the capacity to draw lie before the representation of a final design. Not only is drawing skills an important...... prerequisite for learning about the nature of existing objects and spaces, and thus to build a vocabulary of design. It is also a prerequisite for both reflecting and communicating about design ideas. In this paper, a taxonomy of notation, reflection, communication and presentation drawing is presented...

  7. Kosmos = architecture

    Directory of Open Access Journals (Sweden)

    Tine Kurent

    1985-12-01

    Full Text Available The old Greek word "kosmos" means not only "cosmos", but also "the beautiful order", "the way of building", "building", "scenography", "mankind", and, in the time of the New Testament, also "pagans". The word "arhitekton", meaning first the "master of theatrical scenography", acquired the meaning of "builder" when the words "kosmos" and "kosmetes" became pejorative. The fear that architecture was not considered one of the arts before the Renaissance, since none of the Muses supervised the art of building, results from the misunderstanding of the word "kosmos". Urania was the Goddess of the activity implied in the verb "kosmein", meaning "to put in the beautiful order" - everything, from the universe to the man-made space, i.e. architecture.

  8. Detecting non-coding selective pressure in coding regions

    Directory of Open Access Journals (Sweden)

    Blanchette Mathieu

    2007-02-01

    Full Text Available Abstract Background Comparative genomics approaches, where orthologous DNA regions are compared and inter-species conserved regions are identified, have proven extremely powerful for identifying non-coding regulatory regions located in intergenic or intronic regions. However, non-coding functional elements can also be located within coding regions, as is common for exonic splicing enhancers, some transcription factor binding sites, and RNA secondary structure elements affecting mRNA stability, localization, or translation. Since these functional elements are located in regions that are themselves highly conserved because they are coding for a protein, they generally escape detection by comparative genomics approaches. Results We introduce a comparative genomics approach for detecting non-coding functional elements located within coding regions. Codon evolution is modeled as a mixture of codon substitution models, where each component of the mixture describes the evolution of codons under a specific type of coding selective pressure. We show how to compute the posterior distribution of the entropy and parsimony scores under this null model of codon evolution. The method is applied to a set of growth hormone 1 orthologous mRNA sequences and a known exonic splicing element is detected. The analysis of a set of CORTBP2 orthologous genes reveals a region of several hundred base pairs under strong non-coding selective pressure whose function remains unknown. Conclusion Non-coding functional elements, in particular those involved in post-transcriptional regulation, are likely to be much more prevalent than is currently known. With the numerous genome sequencing projects underway, comparative genomics approaches like that proposed here are likely to become increasingly powerful at detecting such elements.

  9. Transverse pumped laser amplifier architecture

    Science.gov (United States)

    Bayramian, Andrew James; Manes, Kenneth; Deri, Robert; Erlandson, Al; Caird, John; Spaeth, Mary

    2013-07-09

    An optical gain architecture includes a pump source and a pump aperture. The architecture also includes a gain region including a gain element operable to amplify light at a laser wavelength. The gain region is characterized by a first side intersecting an optical path, a second side opposing the first side, a third side adjacent the first and second sides, and a fourth side opposing the third side. The architecture further includes a dichroic section disposed between the pump aperture and the first side of the gain region. The dichroic section is characterized by low reflectance at a pump wavelength and high reflectance at the laser wavelength. The architecture additionally includes a first cladding section proximate to the third side of the gain region and a second cladding section proximate to the fourth side of the gain region.

  10. Numerical simulation of cracks and interfaces with cohesive zone models in the extended finite element method, with EDF R and D software Code Aster

    International Nuclear Information System (INIS)

    Ferte, Guilhem

    2014-01-01

    In order to assess the harmfulness of detected defects in some nuclear power plants, EDF Group is led to develop advanced simulation tools. Among the targeted mechanisms are 3D non-planar quasi-static crack propagation, but also dynamic transients during unstable phases. In the present thesis, quasi-brittle crack growth is simulated based on the combination of the XFEM and cohesive zone models. These are inserted over large potential crack surfaces, so that the cohesive law will naturally separate adherent and de-bonding zones, resulting in an implicit update of the crack front, which makes the originality of the approach. This requires a robust insertion of non-smooth interface laws in the XFEM, which is achieved in quasi-statics with the use of XFEM-suited multiplier spaces in a consistent formulation, block-wise diagonal interface operators and an augmented Lagrangian formalism to write the cohesive law. Based on this concept and a novel directional criterion appealing to cohesive integrals, a propagation procedure over non-planar crack paths is proposed and compared with literature benchmarks. As for dynamics, an initially perfectly adherent cohesive law is implicitly treated within an explicit time-stepping scheme, resulting in an analytical determination of interface tractions if appropriate discrete spaces are used. Implementation is validated on a tapered DCB test. Extension to quadratic elements is then investigated. For stress-free cracks, it was found that a subdivision into quadratic sub-cells is needed for optimality. Theory expects enriched quadrature to be necessary for distorted sub-cells, but this could not be observed in practice. For adherent interfaces, a novel discrete multiplier space was proposed which has both numerical stability and produces quadratic convergence if used along with quadratic sub-cells. (author)

  11. Strategic Management of Architectural Technical Debt

    Science.gov (United States)

    2012-05-22

    SEI Agile Research Forum, © 2012 Carnegie Mellon University: Strategic Management of Architectural Technical Debt, presented by Ipek Ozkaya.

  12. Modular Power Architectures for Microgrid Clusters

    DEFF Research Database (Denmark)

    Lin, Hengwei; Liu, Leo; Guerrero, Josep M.

    2014-01-01

    One of the most important elements in microgrids is the configuration architecture which includes installed capacity, devices location, converter topologies, as well as system control and management strategies. Reliability, security and stability in microgrids require global communication systems...... approach is proposed and evaluated to effectively optimize and manage modular microgrid architectures....

  13. Edge equilibrium code for tokamaks

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xujing [Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, P.O. Box 2719, Beijing 100190 (China); Zakharov, Leonid E. [Princeton Plasma Physics Laboratory Princeton, MS-27 P.O. Box 451, New Jersey (United States); Drozdov, Vladimir V. [Euratom/CCFE Fusion Association, Culham Science Centre, Abingdon OX14 3DB (United Kingdom)

    2014-01-15

    The edge equilibrium code (EEC) described in this paper is developed for simulations of the near-edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as that implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.
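
    The Grad-Shafranov equation solved by EEC has the standard axisymmetric form (sign and normalization conventions vary between codes):

    $$\Delta^{*}\psi \;\equiv\; R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\,\frac{\partial\psi}{\partial R}\right) + \frac{\partial^{2}\psi}{\partial Z^{2}} \;=\; -\mu_{0} R^{2}\, p'(\psi) \;-\; F(\psi)\,F'(\psi),$$

    where $\psi(R,Z)$ is the poloidal flux, $p(\psi)$ the pressure profile and $F(\psi) = R B_{\phi}$ the poloidal current function.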

  14. Compiler Driven Code Comments and Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Karlsson, Sven

    2011-01-01

    Helping programmers write parallel software is an urgent problem given the popularity of multi-core architectures. Engineering compilers which automatically parallelize and vectorize code has turned out to be very challenging. Consequently, compilers are very selective with respect to the coding...

  15. Compiler Driven Code Comments and Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Karlsson, Sven

    2010-01-01

    Helping programmers write parallel software is an urgent problem given the popularity of multi-core architectures. Engineering compilers which automatically parallelize and vectorize code has turned out to be very challenging and consequently compilers are very selective with respect to the coding...

  16. Network Coding

    Indian Academy of Sciences (India)

    message symbols downstream, network coding achieves vast performance gains by permitting intermediate nodes to carry out algebraic operations on the incoming data. In this article we present a tutorial introduction to network coding as well as an application to the efficient operation of distributed data-storage networks.
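
    The canonical example of such algebraic operations is the two-source, two-sink butterfly network, where the bottleneck node forwards the bitwise XOR of the two incoming packets and each sink recovers both messages from one plain packet and the coded one. A minimal sketch of the coding and decoding step (not tied to any particular protocol):

    ```cpp
    #include <cstdint>
    #include <iostream>
    #include <vector>

    using Packet = std::vector<std::uint8_t>;

    // The bottleneck node sends a XOR b instead of picking one of them.
    Packet xorPackets(const Packet& a, const Packet& b) {
        Packet out(a.size());
        for (std::size_t i = 0; i < a.size(); ++i) out[i] = a[i] ^ b[i];
        return out;
    }

    int main() {
        Packet a = {'H', 'I'};                   // delivered to sink 1 directly
        Packet b = {'O', 'K'};                   // delivered to sink 2 directly
        Packet coded = xorPackets(a, b);         // sent once over the shared bottleneck link
        Packet bAtSink1 = xorPackets(coded, a);  // sink 1 recovers b: (a ^ b) ^ a
        Packet aAtSink2 = xorPackets(coded, b);  // sink 2 recovers a: (a ^ b) ^ b
        std::cout << bAtSink1[0] << bAtSink1[1] << " "
                  << aAtSink2[0] << aAtSink2[1] << "\n";  // prints "OK HI"
    }
    ```

    With plain routing the bottleneck link could carry only one of the two packets per use, so one sink would have to wait; the single XOR packet serves both sinks at once, which is the multicast gain network coding provides.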

  17. WIND PROTECTION OF LANDSCAPE ARCHITECTURE

    Directory of Open Access Journals (Sweden)

    Trubitsyna Natalja Anatolevna

    2017-07-01

    Full Text Available The article discusses the interaction between the wind regime and the landscape. Examples of landscape architecture objects in high-tech and science-intensive spheres, such as the launch pad of a spacecraft, are given. Wind protection is presented both as a result of work in wind power engineering and as a means of increasing bioclimatic comfort. The terms of landscape architecture are defined, and the mutual influence on the climate and the impact on woody and shrub vegetation and field crops are analyzed. The phenomenon of air permeability, required for the optimal operation of windproof structures, and the orientation of geoplastics and dendroplastics are described. The paper also gives a classification of terrain types with a description of their elemental composition, as well as various categories of landscape. A proposal is introduced to consider the landscape as a territorial complex, and landscape buildings and landscape-architectural structures as objects of landscape architecture possessing properties of wind protection and air permeability. Thus, the concept of a landscape-architectural complex is formulated as a single group of landscape-architectural objects located on a territory and connected by a common system of communications, functions, technical elements and a visual image. Further research is based on the rationale for the use of the term ensemble in relation to the objects of the landscape-architectural complex and the identification of their design and planning features that can affect the parameters of wind protection and air permeability. The paper concludes that wind regimes favorable for fauna frequently coincide with the mimicry of landscape architecture objects. The combination in the landscape of wind-protection and aesthetic functions is analyzed, including such elements of landscape architecture as hedges and the windproof properties of green plantations. In the work, examples of wind-engineering small architectural forms are

  18. Architectural fragments

    DEFF Research Database (Denmark)

    Bang, Jacob Sebastian

    2018-01-01

    the photographs as a starting point for a series of paintings. This way of creating representations of something that already exists is for me to see a way forward in the "decoding" of my own models into other depictions. The models are analyzed through a series of representations in different types of drawings....... I try to invent the ways of drawing the models - that decode and unfold them into architectural fragments- into future buildings or constructions in the landscape. [1] Luigi Moretti: Italian architect, 1907 - 1973 [2] Man Ray: American artist, 1890 - 1976. in 2015, I saw the wonderful exhibition...

  19. Medical Data Architecture Project Status

    Science.gov (United States)

    Krihak, M.; Middour, C.; Lindsey, A.; Marker, N.; Wolfe, S.; Winther, S.; Ronzano, K.; Bolles, D.; Toscano, W.; Shaw, T.

    2017-01-01

    The Medical Data Architecture (MDA) project supports the Exploration Medical Capability (ExMC) effort to minimize or reduce the risk of adverse health outcomes and decrements in performance due to limitations of in-flight medical capabilities on human exploration missions. To mitigate this risk, the ExMC MDA project addresses the technical limitations identified in ExMC Gap Med 07: We do not have the capability to comprehensively process medically-relevant information to support medical operations during exploration missions. This gap identifies that the current International Space Station (ISS) medical data management includes a combination of data collection and distribution methods that are minimally integrated with on-board medical devices and systems. Furthermore, there is a variety of data sources and methods of data collection. For an exploration mission, the seamless management of such data will enable a more autonomous crew than under the current ISS paradigm. The MDA will develop capabilities that support automated data collection, and it addresses the functionality needed, and the challenges involved, in executing a self-contained medical system that approaches crew health care delivery without assistance from ground support. To attain this goal, the first year of the MDA project focused on reducing technical risk, developing documentation and instituting iterative development processes that established the basis for the first version of the MDA software (or Test Bed 1). Test Bed 1 is based on a nominal operations scenario authored by the ExMC Element Scientist. This narrative was decomposed into a Concept of Operations that formed the basis for Test Bed 1 requirements. These requirements were successfully vetted through the MDA Test Bed 1 System Requirements Review, which permitted the MDA project to begin software code development and component integration. This paper highlights the MDA objectives, development processes, and accomplishments, and identifies the fiscal year 2017 milestones and

  20. Structural elements design manual

    CERN Document Server

    Draycott, Trevor

    2012-01-01

    Gives clear explanations of the logical design sequence for structural elements. The Structural Engineer says: 'The book explains, in simple terms, and with many examples, Code of Practice methods for sizing structural sections in timber, concrete, masonry and steel. It is the combination into one book of section sizing methods in each of these materials that makes this text so useful. ...Students will find this an essential support text to the Codes of Practice in their study of element sizing'.

  1. VLSI Architectures for Computing DFT's

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Reed, I. S.; Pei, D. Y.

    1986-01-01

    Simplifications result from use of residue Fermat number systems. System of finite arithmetic over residue Fermat number systems enables calculation of discrete Fourier transform (DFT) of series of complex numbers with reduced number of multiplications. Computer architectures based on approach suitable for design of very-large-scale integrated (VLSI) circuits for computing DFT's. General approach not limited to DFT's; Applicable to decoding of error-correcting codes and other transform calculations. System readily implemented in VLSI.

  2. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed

    2018-04-08

    Convolutional Sparse Coding (CSC) is a well-established image representation model especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative. Experimental results show that using supervised convolutional learning results in two key advantages. First, we learn more semantically relevant filters in the dictionary and second, we achieve improved image reconstruction on unseen data.
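
    Written schematically (the exact regularizer in the paper may differ), the usual unsupervised CSC objective with an added supervised term is:

    $$\min_{\{d_{k}\},\{z_{k}\}} \;\; \tfrac{1}{2}\Big\lVert x - \sum_{k} d_{k} * z_{k} \Big\rVert_{2}^{2} \;+\; \lambda \sum_{k} \lVert z_{k} \rVert_{1} \;+\; \gamma\, \mathcal{L}_{\mathrm{sup}}\!\left(\{d_{k}\}, \{z_{k}\}; y\right),$$

    where $x$ is the image, $d_{k}$ the convolutional filters, $z_{k}$ the sparse feature maps, $*$ denotes convolution, and $\mathcal{L}_{\mathrm{sup}}$ is a label-dependent loss weighted by $\gamma$; the first two terms alone are the purely reconstructive model the abstract contrasts against.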

  3. Responsive Architecture and the Problem of Obsolescence

    Directory of Open Access Journals (Sweden)

    Mark Meagher

    2014-11-01

    Full Text Available Responsive architecture, a design field that has arisen in recent decades at the intersection of architecture and computer science, invokes a material response to digital information and implies the capacity of the building to respond dynamically to changing stimuli. The question I will address in the paper is whether it is possible for the responsive components of architecture to become a poetically expressive part of the building, and if so why this result has so rarely been achieved in contemporary and recent built work. The history of attitudes toward obsolescence in buildings is investigated as one explanation for the rarity of examples like the one considered here that successfully overcomes the rapid obsolescence of responsive components and makes these elements an integral part of the work of architecture. In conclusion I identify strategies for the design of responsive components as poetically expressive elements of architecture.

  4. Architectural freedom and industrialised architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    strategies which architects can use - to give the individual project attitudes and designs with architectural quality. Through the customized component production it is possible to choose many different proportions, to organize the process at site choosing either one room components or several rooms...... customization, telling exactly the revitalized storey about the change to a contemporary sustainable and better performed expression in direct relation to the given context. Through the last couple of years we have in Denmark been focusing a more sustainable and low energy building technique, which also include...

  5. From green architecture to architectural green

    DEFF Research Database (Denmark)

    Earon, Ofri

    2011-01-01

    of green architecture. The paper argues that this greenification of facades is insufficient. The green is only a skin cladding the exterior envelope without having a spatial significance. Through the paper it is proposed to flip the order of words from green architecture to architectural green....... Architectural green could signify green architecture with inclusive interrelations between green and space, built and unbuilt, inside and outside. The aim of the term is to reflect a new focus in green architecture – its architectural performance. Ecological issues are not underestimated or ignored, but so far...... they have overshadowed the architectural potential of green architecture. The paper questions how a green space should perform, look like and function. Two examples are chosen to demonstrate thorough integrations between green and space. The examples are public buildings categorized as pavilions. One...

  6. Geochemical computer codes. A review

    International Nuclear Information System (INIS)

    Andersson, K.

    1987-01-01

    In this report a review of available codes is performed and some code intercomparisons are also discussed. The number of codes treating natural waters (groundwater, lake water, sea water) is large. Most geochemical computer codes treat equilibrium conditions, although some codes with kinetic capability are available. A geochemical equilibrium model consists of a computer code, solving a set of equations by some numerical method, and a data base, consisting of thermodynamic data required for the calculations. There are some codes which treat coupled geochemical and transport modeling. Some of these codes solve the equilibrium and transport equations simultaneously while others solve the equations separately from each other. The coupled codes require a large computer capacity and have thus as yet had limited use. Three code intercomparisons have been found in the literature. It may be concluded that there are many codes available for geochemical calculations but most of them require a user who is quite familiar with the code. The user also has to know the geochemical system in order to judge the reliability of the results. A high quality data base is necessary to obtain a reliable result. The best results may be expected for the major species of natural waters. For more complicated problems, including trace elements, precipitation/dissolution, adsorption, etc., the results seem to be less reliable. (With 44 refs.) (author)

  7. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    This report contains the evaluation and documentation of the Coding Class project1. The Coding Class project was initiated in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the voluntary association......, design thinking and design pedagogy, Stine Ejsing-Duun from Forskningslab: It og Læringsdesign (ILD-LAB) at the Department of Communication and Psychology, Aalborg University in Copenhagen. We have followed and carried out the evaluation and documentation of the Coding Class project in the period November 2016 to May 2017.... The Coding Class project is a pilot project in which a number of schools in the municipalities of Copenhagen and Vejle have initiated teaching activities with a focus on coding and programming in school. The evaluation and documentation of the project comprise qualitative snapshots of selected teaching interventions in the autumn...

  8. Coding Labour

    Directory of Open Access Journals (Sweden)

    Anthony McCosker

    2014-03-01

    Full Text Available As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.

  9. Architectural Theatricality

    DEFF Research Database (Denmark)

    Tvedebrink, Tenna Doktor Olsen; Fisker, Anna Marie; Kirkegaard, Poul Henning

    2013-01-01

    In the attempt to improve patient treatment and recovery, researchers focus on applying concepts of hospitality to hospitals. Often these concepts are dominated by hotel-metaphors focusing on host–guest relationships or concierge services. Motivated by a project trying to improve patient treatment...... is known for his writings on theatricality, understood as a holistic design approach emphasizing the contextual, cultural, ritual and social meanings rooted in architecture. Relative hereto, the International Food Design Society recently argued, in a similar holistic manner, that the methodology used...... to provide an aesthetic eating experience includes knowledge on both food and design. Based on a hermeneutic reading of Semper’s theory, our thesis is that this holistic design approach is important when debating concepts of hospitality in hospitals. We use this approach to argue for how ‘food design...

  10. Architecture for Teraflop Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Breckenridge, A.R.; Haynes, R.A.

    1999-04-09

    Sandia Laboratories' computational scientists are addressing a very important question: How do we get insight from the human combined with the computer-generated information? The answer inevitably leads to using scientific visualization. Going one technology leap further is teraflop visualization, where the computing model and interactive graphics are an integral whole to provide computing for insight. In order to implement our teraflop visualization architecture, all hardware installed or software coded will be based on open modules and dynamic extensibility principles. We will illustrate these concepts with examples in our three main research areas: (1) authoring content (the computer), (2) enhancing precision and resolution (the human), and (3) adding behaviors (the physics).

  11. Simulation of the Intake and Compression Strokes of a Motored 4-Valve Si Engine with a Finite Element Code Simulation de l'admission et de la compression dans un moteur 4-soupapes AC entraîné à l'aide d'un code de calcul à éléments finis

    Directory of Open Access Journals (Sweden)

    Bailly O.

    2006-12-01

    Full Text Available A CFD code, using a mixed finite volume - finite element method on tetrahedra, is now available for engine simulations. The code takes into account the displacement of moving walls such as the piston and valves in a fully automatic way: a single mesh is used for a full computation and no intervention of the user is necessary. A fourth-order implicit spatial scheme and a first-order implicit temporal scheme are used. The work presented in this paper is part of a larger program for the validation of this new numerical tool for engine applications. Here, comparisons between computation and experiments of the intake and compression strokes of a four-valve engine were carried out. The experimental investigations are conducted on a single-cylinder four-valve optical research engine. The turbulence intensity, mean velocity components, tumble and swirl ratios in the combustion chamber are deduced from the LDV measurements. The comparisons between computations and experiments are made on the mean velocity flow field at different locations inside the chamber and for different crank angles. We also present some global comparisons (swirl and tumble ratios). The simulation shows excellent agreement between computations and experiments.

  12. Image Compression of MRI Image using Planar Coding

    OpenAIRE

    Lalitha Y. S; Mrityunjaya V. Latte

    2011-01-01

    In this paper a hierarchical coding technique for variable bit rate services is developed using an embedded zero block coding approach. The suggested approach enhances variable-rate coding through a zero-tree based block-coding architecture with context modeling, for low complexity and high performance. The proposed algorithm utilizes the significance state table that forms the context modeling to control the coding passes, with low memory requirement and low implementation complexity, with the nearly sam...

  13. The architecture of a modern military health information system.

    Science.gov (United States)

    Mukherji, Raj J; Egyhazy, Csaba J

    2004-06-01

    This article describes a melding of a government-sponsored architecture for complex systems with an open systems engineering architecture developed by the Institute of Electrical and Electronics Engineers (IEEE). Our experience in using these two architectures in building a complex healthcare system is described in this paper. The work described shows that it is possible to combine these two architectural frameworks in describing the systems, operational, and technical views of a complex automation system. The advantage of combining the two architectural frameworks lies in the simplicity of implementation and the ease of understanding of automation system architectural elements by medical professionals.

  14. Relating business intelligence and enterprise architecture - A method for combining operational data with architectural metadata

    NARCIS (Netherlands)

    Veneberg, R.K.M.; Iacob, Maria Eugenia; van Sinderen, Marten J.; Bodenstaff, L.

    Combining enterprise architecture and operational data is complex (especially when considering the actual ‘matching’ of data with enterprise architecture elements), and little has been written on how to do this. In this paper we aim to fill this gap, and propose a method to combine operational data

  15. Change Impact Analysis of Crosscutting in Software Architectural Design

    NARCIS (Netherlands)

    van den Berg, Klaas

    2006-01-01

    Software architectures should be amenable to changes in user requirements and implementation technology. The analysis of the impact of these changes can be based on traceability of architectural design elements. Design elements have dependencies with other software artifacts but also evolve in time.

  16. Lean Architecture for Agile Software Development

    CERN Document Server

    Coplien, James O

    2010-01-01

    More and more Agile projects are seeking architectural roots as they struggle with complexity and scale - and they're seeking lightweight ways to do it: Still seeking? In this book the authors help you to find your own path; Taking cues from Lean development, they can help steer your project toward practices with longstanding track records; Up-front architecture? Sure. You can deliver an architecture as code that compiles and that concretely guides development without bogging it down in a mass of documents and guesses about the implementation; Documentation? Even a whiteboard diagram, or a CRC

  17. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings and, since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long haul transmissions which use repeaters to compensate for the loss in signal strength on transmission links also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. Hence, from a transmission point of view, digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques and that is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the

  18. Vectorization and parallelization of a production reactor assembly code

    International Nuclear Information System (INIS)

    Vujic, J.L.; Martin, W.R.

    1991-01-01

    In order to efficiently use new features of supercomputers, production codes, usually written 10 - 20 years ago, must be tailored for modern computer architectures. We have chosen to optimize the CPM-2 code, a production reactor assembly code based on the collision probability transport method. Substantial speedups in the execution times were obtained with the parallel/vector version of the CPM-2 code. In addition, we have developed a new transfer probability method, which removes some of the modelling limitations of the collision probability method encoded in the CPM-2 code, and can fully utilize parallel/vector architecture of a multiprocessor IBM 3090. (author)

  19. SUSTAINABLE ARCHITECTURE : WHAT ARCHITECTURE STUDENTS THINK

    OpenAIRE

    SATWIKO, PRASASTO

    2013-01-01

    Sustainable architecture has become a hot issue lately as the impacts of climate change become more intense. Architecture education has responded by integrating knowledge of sustainable design into its curriculum. However, in real life, new buildings keep coming with designs that completely ignore sustainable principles. This paper discusses the results of two national competitions on sustainable architecture targeted at architecture students (conducted in 2012 and 2013). The results a...

  20. Information architecture for building digital library | Obuh ...

    African Journals Online (AJOL)

    The paper provided an overview of constituent elements of a digital library and explained the underlying information architecture and building blocks for a digital library. It specifically proffered meaning to the various elements or constituents of a digital library system. The paper took a look at the structure of information as a ...

  1. Network architecture for multimedia services

    Science.gov (United States)

    Hayes, John R.; Kerner, Martin

    1995-02-01

    Video on Demand is expected to be the first of many Video Dial Tone services that will bring broadband connections to residential customers. While significant research has been undertaken to identify cost effective access architectures, much less effort has been expended on video servers and backbone architectures, and it may be here that competitive advantage can be obtained. This paper focuses on the placement of key architectural elements within the network to show that a network topology which balances centralized and distributed storage of content minimizes backbone costs. Intuitively, storing copies of popular movies close to consumers reduces demand on the network, while centralizing titles reduces the amount of server resources required due to sharing and hence the cost of storage.

  2. Lightweight enterprise architectures

    CERN Document Server

    Theuerkorn, Fenix

    2004-01-01

    STATE OF ARCHITECTURE: Architectural Chaos; Relation of Technology and Architecture; The Many Faces of Architecture; The Scope of Enterprise Architecture; The Need for Enterprise Architecture; The History of Architecture; The Current Environment; Standardization Barriers; The Need for Lightweight Architecture in the Enterprise; The Cost of Technology; The Benefits of Enterprise Architecture; The Domains of Architecture; The Gap between Business and IT; Where Does LEA Fit?; LEA's Framework; Frameworks, Methodologies, and Approaches; The Framework of LEA; Types of Methodologies; Types of Approaches; Actual System Environmen...

  3. Vector calculation of particle code

    International Nuclear Information System (INIS)

    Nishiguchi, A.; Yabe, T.; Orii, S.

    1985-01-01

    The development of vector computers requires the modification of algorithms into a suitable form for vector calculation. Among many algorithms, the particle code is a typical example which has suffered in calculations on supercomputers owing to the possibility of recurrent data access when collecting cell-wise quantities from particle quantities. In this article, we report a new method to liberate the particle code from recurrent calculations. It should be noticed, however, that the method may depend on the architecture of the supercomputer; it works well on the FACOM VP-100 and VP-200: the indirect data accessing must be vectorized and its speed should be fast. (Mori, K.)
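
    The recurrence referred to here is the scatter-add that deposits particle quantities onto grid cells: several particles may index the same cell, so the indirect write cannot be vectorized naively. A generic sketch of the pattern (not the method proposed in the paper):

    ```cpp
    #include <cstddef>
    #include <vector>

    // Charge deposition in a particle code. The indirect update rho[cell[p]] is a
    // potential recurrence: two particles in the same cell write the same location,
    // so blindly vectorizing this loop gives wrong sums unless the hardware or the
    // algorithm (e.g. sorting particles by cell, or keeping private partial sums)
    // resolves the conflicts.
    void deposit(const std::vector<int>& cell, const std::vector<double>& charge,
                 std::vector<double>& rho) {
        for (std::size_t p = 0; p < cell.size(); ++p)
            rho[cell[p]] += charge[p];   // gather-modify-scatter with possible collisions
    }
    ```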

  4. Verifying Architectural Design Rules of the Flight Software Product Line

    Science.gov (United States)

    Ganesan, Dharmalingam; Lindvall, Mikael; Ackermann, Chris; McComas, David; Bartholomew, Maureen

    2009-01-01

    This paper presents experiences of verifying architectural design rules of the NASA Core Flight Software (CFS) product line implementation. The goal of the verification is to check whether the implementation is consistent with the CFS architectural rules derived from the developer's guide. The results indicate that consistency checking helps (a) identify architecturally significant deviations that eluded code reviews, (b) clarify the design rules for the team, and (c) assess the overall implementation quality. Furthermore, it helps connect business goals to architectural principles, and to the implementation. This paper is the first step in the definition of a method for analyzing and evaluating product line implementations from an architecture-centric perspective.
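
    The kind of consistency check described above can be reduced, in its simplest form, to comparing extracted dependencies against a list of allowed ones. The sketch below is a hypothetical miniature, not the CFS tooling; the module names and rules are invented.

        # Toy consistency check: compare dependencies found in the code against the
        # set of dependencies permitted by the architectural rules (all hypothetical).
        allowed = {
            ("app", "services"), ("services", "core"), ("app", "core"),
        }

        # Dependencies extracted from the implementation (e.g., by parsing imports).
        extracted = [
            ("app", "services"), ("services", "core"),
            ("core", "app"),          # an architecturally significant deviation
        ]

        violations = [dep for dep in extracted if dep not in allowed]
        for src, dst in violations:
            print(f"rule violation: {src} -> {dst} is not permitted by the architecture")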

  5. A Framework for Reverse Engineering Large C++ Code Bases

    NARCIS (Netherlands)

    Telea, Alexandru; Byelas, Heorhiy; Voinea, Lucian

    2009-01-01

    When assessing the quality and maintainability of large C++ code bases, tools are needed for extracting several facts from the source code, such as: architecture, structure, code smells, and quality metrics. Moreover, these facts should be presented in such a way that one can correlate them and

  6. A Framework for Reverse Engineering Large C++ Code Bases

    NARCIS (Netherlands)

    Telea, Alexandru; Byelas, Heorhiy; Voinea, Lucian

    2008-01-01

    When assessing the quality and maintainability of large C++ code bases, tools are needed for extracting several facts from the source code, such as: architecture, structure, code smells, and quality metrics. Moreover, these facts should be presented in such a way that one can correlate them and

  7. The Pulley Element

    Directory of Open Access Journals (Sweden)

    Štekbauer Hynek

    2016-12-01

    The pulley is used in a number of structures for the mechanical advantage it gives. This paper presents an approach for the calculation of a pulley-cable system using a special pulley element in the finite element method. The Lagrange Multiplier method and Penalty method are used to define the pulley element, as described in this paper. Both approaches are easy to implement in general FEM codes.
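
    As a rough illustration of the penalty approach mentioned in the abstract (not the element formulation from the paper), the following sketch penalises any change in the total cable length on the two sides of a frictionless pulley node; the geometry and penalty stiffness are invented.

        # Penalty sketch for a frictionless pulley: the total cable length on both
        # sides of the pulley node is held at its initial value by a stiff quadratic
        # penalty E = 0.5*k*g^2, where g is the constraint violation.
        import numpy as np

        pulley = np.array([0.0, 2.0])
        x1 = np.array([-1.0, 0.0])       # cable end node 1
        x2 = np.array([1.5, 0.0])        # cable end node 2
        L0 = np.linalg.norm(x1 - pulley) + np.linalg.norm(x2 - pulley)  # rest length
        k_pen = 1.0e6                    # penalty stiffness (illustrative)

        def pulley_penalty_forces(x1, x2):
            """Return the penalty forces acting on the two cable end nodes."""
            d1, d2 = x1 - pulley, x2 - pulley
            g = np.linalg.norm(d1) + np.linalg.norm(d2) - L0    # constraint violation
            f1 = -k_pen * g * d1 / np.linalg.norm(d1)           # -dE/dx1
            f2 = -k_pen * g * d2 / np.linalg.norm(d2)           # -dE/dx2
            return f1, f2

        # Pull node 2 outward: both sides now feel a restoring cable tension.
        f1, f2 = pulley_penalty_forces(x1, x2 + np.array([0.5, 0.0]))
        print(f1, f2)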

  8. The Pulley Element

    OpenAIRE

    Štekbauer Hynek

    2016-01-01

    The pulley is used in a number of structures for the mechanical advantage it gives. This paper presents an approach for the calculation of a pulley-cable system using a special pulley element in the finite element method. The Lagrange Multiplier method and Penalty method are used to define the pulley element, as described in this paper. Both approaches are easy to implement in general FEM codes.

  9. NSURE code

    International Nuclear Information System (INIS)

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault, their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a User's manual, describing simulation procedures, input data preparation, output and example test cases

  10. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and softwa...... expression in the public realm. The book’s line of argument defends language against its invasion by economics, arguing that speech continues to underscore the human condition, however paradoxical this may seem in an era of pervasive computing....

  11. Latest improvements on TRACPWR six-equations thermohydraulic code

    International Nuclear Information System (INIS)

    Rivero, N.; Batuecas, T.; Martinez, R.; Munoz, J.; Lenhardt, G.; Serrano, P.

    1999-01-01

    The paper presents the latest improvements on TRACPWR aimed at adapting the code to present trends in computer platforms, architectures and training requirements, as well as extending the scope of the code itself and its applicability to technologies other than the Westinghouse PWR. Firstly, the major features of TRACPWR as a best-estimate and real-time simulation code are summarized; then the areas where TRACPWR is being improved are presented. These areas comprise: (1) Architecture: integrating the TRACPWR and RELAP5 codes, (2) Code scope enhancement: modelling Mid-Loop operation, (3) Code speed-up: applying parallelization techniques, (4) Code platform downswing: porting to the Windows NT platform, (5) On-line performance: allowing simulation initialisation from a Plant Process Computer, and (6) Code scope extension: using the code for modelling VVER and PHWR technology. (author)

  12. Project Integration Architecture

    Science.gov (United States)

    Jones, William Henry

    2008-01-01

    The Project Integration Architecture (PIA) is a distributed, object-oriented, conceptual, software framework for the generation, organization, publication, integration, and consumption of all information involved in any complex technological process in a manner that is intelligible to both computers and humans. In the development of PIA, it was recognized that in order to provide a single computational environment in which all information associated with any given complex technological process could be viewed, reviewed, manipulated, and shared, it is necessary to formulate all the elements of such a process on the most fundamental level. In this formulation, any such element is regarded as being composed of any or all of three parts: input information, some transformation of that input information, and some useful output information. Another fundamental principle of PIA is the assumption that no consumer of information, whether human or computer, can be assumed to have any useful foreknowledge of an element presented to it. Consequently, a PIA-compliant computing system is required to be ready to respond to any questions, posed by the consumer, concerning the nature of the proffered element. In colloquial terms, a PIA-compliant system must be prepared to provide all the information needed to place the element in context. To satisfy this requirement, PIA extends the previously established object-oriented- programming concept of self-revelation and applies it on a grand scale. To enable pervasive use of self-revelation, PIA exploits another previously established object-oriented-programming concept - that of semantic infusion through class derivation. By means of self-revelation and semantic infusion through class derivation, a consumer of information can inquire about the contents of all information entities (e.g., databases and software) and can interact appropriately with those entities. Other key features of PIA are listed.
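
    The idea of self-revelation combined with semantic infusion through class derivation can be illustrated with a small, hypothetical sketch (this is not PIA's actual API): a base element answers questions about itself, and a derived class extends that answer with the meaning it adds.

        # Hypothetical illustration of self-revelation through class derivation:
        # every element can describe itself, and derived classes add semantics by
        # extending that description. Names and fields are invented for the example.
        class Element:
            """Base element: input information -> transformation -> output information."""
            def __init__(self, name, inputs=(), outputs=()):
                self.name, self.inputs, self.outputs = name, list(inputs), list(outputs)

            def reveal(self):
                """Return what a consumer needs to place this element in context."""
                return {
                    "kind": type(self).__name__,
                    "name": self.name,
                    "inputs": self.inputs,
                    "outputs": self.outputs,
                }

        class AirfoilAnalysis(Element):            # semantic infusion by derivation
            def __init__(self, name, mach):
                super().__init__(name, inputs=["geometry", "mach"], outputs=["lift", "drag"])
                self.mach = mach

            def reveal(self):
                info = super().reveal()
                info["mach"] = self.mach           # the derived class reveals its added meaning
                return info

        print(AirfoilAnalysis("wing-case-1", mach=0.8).reveal())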

  13. The application of diagrams in architectural design

    Directory of Open Access Journals (Sweden)

    Dulić Olivera

    2014-01-01

    Diagrams in architecture represent the visualization of the thinking process, or selective abstraction of concepts or ideas translated into the form of drawings. In addition, they provide insight into the way of thinking about and in architecture, thus creating a balance between the visual and the conceptual. The subject of the research presented in this paper is diagrams as a specific kind of architectural representation, and the possibilities and importance of their application in the design process. Diagrams are almost as old as architecture itself, and they are an element of some of the most important studies of architecture during all periods of history - which results in a large number of different definitions of diagrams, but also very different conceptualizations of their features, functions and applications. Diagrams became part of contemporary architectural discourse during the eighties and nineties of the twentieth century, especially through the work of architects like Bernard Tschumi, Peter Eisenman, Rem Koolhaas, SANAA and others. The use of diagrams in the design process allows unification of some of the essential aspects of the profession: architectural representation and design process, as well as the question of the concept of architectural and urban design at a time of rapid changes at all levels of contemporary society. The aim of the research is the analysis of the diagram as a specific medium for processing large amounts of information that the architect should consider and incorporate into the architectural work. On that basis, it is assumed that an architectural diagram allows the creator to identify and analyse specific elements or ideas of physical form, thereby constantly maintaining the concept of the integrity of the architectural work.

  14. A Model for the Development of Architectural Psychology Formation in Architectural Education

    Directory of Open Access Journals (Sweden)

    Semra Sema UZUNOĞLU

    2014-05-01

    This article examines the purpose, method, and outcomes of an “Architectural Psychology” program, developed to introduce new architecture students to the subject. A combination of the architectural concepts of Social Psychology, Environmental Psychology and Psychology of Perception was used in the program. Psychology of Architecture will be taught simultaneously with the Introduction to Architecture program in a student-based educational system to help students understand psychology with its definitions, and implement it throughout their architecture education. This method was implemented in one class of 140 students of interior design, and another of 80 students of architecture. In this article, the latter is explained. The duration of the program was fourteen weeks and, in total, nine activities and a final study were assigned. The outcomes of the assigned activities indicated that the “Architectural Psychology” program enhanced the architecture curriculum by adding the elements lacking in the “architecture – psychology relationship” and its usage in architectural design. Therefore, it is concluded that the addition of the program to the curriculum was apt and beneficial.

  15. ANIMAL code

    Energy Technology Data Exchange (ETDEWEB)

    Lindemuth, I.R.

    1979-02-28

    This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code. ANIMAL's physical model also appears. Formulated are temporal and spatial finite-difference equations in a manner that facilitates implementation of the algorithm. Outlined are the functions of the algorithm's FORTRAN subroutines and variables.

  16. Network Coding

    Indian Academy of Sciences (India)

    Network Coding. K V Rashmi, Nihar B Shah and P Vijay Kumar. General Article, Resonance – Journal of Science Education, Volume 15, Issue 7, July 2010, pp. 604-621. Permanent link: https://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621

  17. ANIMAL code

    International Nuclear Information System (INIS)

    Lindemuth, I.R.

    1979-01-01

    This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code. ANIMAL's physical model also appears. Formulated are temporal and spatial finite-difference equations in a manner that facilitates implementation of the algorithm. Outlined are the functions of the algorithm's FORTRAN subroutines and variables

  18. Expander Codes

    Indian Academy of Sciences (India)

    Codes and Channels. A noisy communication channel is illustrated in Fig. ... nication channel. Suppose we want to transmit a message over the unreliable communication channel so that even if the channel corrupts some of the bits we are able to recover ..... is d-regular, meaning thereby that every vertex has degree d.

  19. Expander Codes

    Indian Academy of Sciences (India)

    Expander Codes - The Sipser–Spielman Construction. Priti Shankar. General Article, Resonance – Journal of Science Education, Volume 10, Issue 1. Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India.

  20. Network Coding

    Indian Academy of Sciences (India)

    Network coding is a technique to increase the amount of information flow in a network by making the key observation that information flow is fundamentally different from commodity flow. Whereas, under traditional methods of operation of data networks, intermediate nodes are restricted to simply forwarding their incoming.
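
    The classic butterfly-network example makes the observation above concrete: if the bottleneck node XORs its two incoming bits instead of merely forwarding one of them, both receivers can still recover both source bits. A minimal sketch:

        # Butterfly-network illustration: the middle (bottleneck) node sends a XOR b
        # instead of forwarding only one bit, and both receivers recover both bits.
        a, b = 1, 0                  # two source bits

        coded = a ^ b                # the bottleneck link carries a XOR b

        # Receiver 1 gets (a, coded) directly; Receiver 2 gets (b, coded).
        recovered_b_at_r1 = a ^ coded
        recovered_a_at_r2 = b ^ coded

        assert recovered_b_at_r1 == b and recovered_a_at_r2 == a
        print("both receivers obtain (a, b) =", (recovered_a_at_r2, recovered_b_at_r1))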

  1. The plasma automata network (PAN) architecture

    International Nuclear Information System (INIS)

    Cameron-Carey, C.M.

    1991-01-01

    Conventional neural networks consist of processing elements which are interconnected according to a specified topology. Typically, the number of processing elements and the interconnection topology are fixed. A neural network's information processing capability lies mainly in the variability of interconnection strengths, which directly influence activation patterns; these patterns represent entities and their interrelationships. Contrast this architecture, with its fixed topology and variable interconnection strengths, against one having dynamic topology and fixed connection strength. This paper reports on this proposed architecture in which there are no connections between processing elements. Instead, the processing elements form a plasma, exchanging information upon collision. A plasma can be populated with several different types of processing elements, each with its own activation function and self-modification mechanism. The activation patterns that are the plasma's response to stimulation drive natural selection among processing elements which evolve to optimize performance

  2. A Code Phase Division Multiple Access (CPDMA) technique for VSAT satellite communications

    Science.gov (United States)

    Bruno, R.; Mcomber, R.; Weinberg, A.

    1991-01-01

    A reference concept and implementation relevant to the application of Code Phase Division Multiple Access (CPDMA) to a high capacity satellite communication system providing 16 Kbps single hop channels between Very Small Aperture Terminals (VSAT's) is described. The description includes a potential implementation of an onboard CPDMA bulk demodulator/converter utilizing programmable charge coupled device (CCD) technology projected to be available in the early 1990's. A high level description of the system architecture and operations, identification of key functional and performance requirements of the system elements, and analysis results of end-to-end system performance relative to key figures of merit such as spectral efficiency are also provided.

  3. Architecture as Design Study.

    Science.gov (United States)

    Kauppinen, Heta

    1989-01-01

    Explores the use of analogies in architectural design, the importance of Gestalt theory and aesthetic canons in understanding and being sensitive to architecture. Emphasizes the variation between public and professional appreciation of architecture. Notes that an understanding of architectural process enables students to improve the aesthetic…

  4. Architectural design decisions

    NARCIS (Netherlands)

    Jansen, Antonius Gradus Johannes

    2008-01-01

    A software architecture can be considered as the collection of key decisions concerning the design of the software of a system. Knowledge about this design, i.e. architectural knowledge, is key for understanding a software architecture and thus the software itself. Architectural knowledge is mostly

  5. Fragments of Architecture

    DEFF Research Database (Denmark)

    Bang, Jacob Sebastian

    2016-01-01

    Topic 3: “Case studies dealing with the artistic and architectural work of architects worldwide, and the ties between specific artistic and architectural projects, methodologies and products”

  6. Can You Hear Architecture

    DEFF Research Database (Denmark)

    Ryhl, Camilla

    2016-01-01

    Taking an offset in the understanding of architectural quality as being based on multisensory architecture, the paper aims to discuss the current acoustic discourse in inclusive design and its implications for the integration of inclusive design in architectural discourse and practice as well...... design and architectural quality for people with a hearing disability and a newly conducted qualitative evaluation research in Denmark as well as architectural theories on multisensory aspects of architectural experiences, the paper uses examples of existing Nordic building cases to discuss the role...... of acoustics in both inclusive design and multisensory architecture....

  7. Temporal Architecture: Poetic Dwelling in Japanese buildings

    Directory of Open Access Journals (Sweden)

    Michael Lazarin

    2014-07-01

    Heidegger’s thinking about poetic dwelling and Derrida’s impressions of Freudian estrangement are employed to provide a constitutional analysis of the experience of Japanese architecture, in particular, the Japanese vestibule (genkan). This analysis is supplemented by writings by Japanese architects and poets. The principal elements of Japanese architecture are: (1) ma, and (2) en. Ma is usually translated as ‘interval’ because, like the English word, it applies to both space and time. However, in Japanese thinking, it is not so much an either/or, but rather a both/and. In other words, Japanese architecture emphasises the temporal aspect of dwelling in a way that Western architectural thinking usually does not. En means ‘joint, edge, the in-between’ as an ambiguous, often asymmetrical spanning of interior and exterior, rather than a demarcation of these regions. Both elements are aimed at producing an experience of temporality and transiency.

  8. The Walk-Man Robot Software Architecture

    Directory of Open Access Journals (Sweden)

    Mirko Ferrati

    2016-05-01

    A software and control architecture for a humanoid robot is a complex and large project, which involves a team of developers/researchers to be coordinated and requires many hard design choices. If such a project has to be done in a very limited time, i.e., less than 1 year, more constraints are added and concepts, such as modular design, code reusability, and API definition, need to be used as much as possible. In this work, we describe the software architecture developed for Walk-Man, a robot that participated in the DARPA Robotics Challenge. The challenge required the robot to execute many different tasks, such as walking, driving a car, and manipulating objects. These tasks need to be solved by robotics specialists in their corresponding research field, such as humanoid walking, motion planning, or object manipulation. The proposed architecture was developed in 10 months, provided boilerplate code for most of the functionalities required to control a humanoid robot and allowed robotics researchers to produce their control modules for DRC tasks in a short time. Additional capabilities of the architecture include firmware and hardware management, mixing of different middlewares, unreliable network management, and operator control station GUI. All the source code related to the architecture and some control modules have been released as open source projects.

  9. Space and Architecture's Current Line of Research? A Lunar Architecture Workshop With An Architectural Agenda.

    Science.gov (United States)

    Solomon, D.; van Dijk, A.

    space context that will be useful on Earth on a conceptual and practical level? * In what ways could architecture's field of reference offer building on the Moon (and other celestial bodies) a paradigm shift? In addition to their models and designs, workshop participants will begin authoring a design recommendation for the building of (infra-) structures and habitats on celestial bodies, in particular the Moon and Mars. The design recommendation, a substantiated aesthetic code of conduct (not legally binding), will address long term planning and incorporate issues of sustainability, durability, bio-diversity, infrastructure, CHANGE, and techniques that lend themselves to Earth-bound applications. It will also address the cultural implications that architectural design might have within the context of space exploration. The design recommendation will ultimately be presented for peer review to both the space and architecture communities. What would the endorsement from the architectural community of such a document mean to the space community? The Lunar Architecture Workshop is conceptualised, produced and organised by (in alphabetical order): Alexander van Dijk, Art Race in Space; Barbara Imhof, ESCAPE*spHERE, Vienna University of Technology, Institute for Design and Building Construction, Vienna; Bernard Foing, ESA SMART1 Project Scientist; Susmita Mohanty, MoonFront, LLC; Hans Schartner, Vienna University of Technology, Institute for Design and Building Construction; Debra Solomon, Art Race in Space, Dutch Art Institute; Paul van Susante, Lunar Explorers Society. Workshop locations: ESTEC, Noordwijk, NL and V2_Lab, Rotterdam, NL. Workshop dates: June 3-16, 2002 (a Call for Participation will be made in March-April 2002.)

  10. Panda code

    International Nuclear Information System (INIS)

    Altomare, S.; Minton, G.

    1975-02-01

    PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)

  11. Future city architecture for optimal living

    CERN Document Server

    Pardalos, Panos

    2015-01-01

    This book offers a wealth of interdisciplinary approaches to urbanization strategies in architecture centered on growing concerns about the future of cities and their impacts on essential elements of architectural optimization, livability, energy consumption and sustainability. It portrays the urban condition in architectural terms, as well as the living condition in human terms, both of which can be optimized by mathematical modeling as well as mathematical calculation and assessment. Special features include: new research on the construction of future cities and smart cities; and discussions of sustainability and new technologies designed to advance ideas to future city developments. Graduate students and researchers in architecture, engineering, mathematical modeling, and building physics will be engaged by the contributions written by eminent international experts from a variety of disciplines including architecture, engineering, modeling, optimization, and relat...

  12. ESA: Enterprise Service Architecture

    OpenAIRE

    Liu, Yi

    2010-01-01

    The Service oriented perspective is emerging as an important view both for business architecture and IT architecture in the overall context of enterprise architectures. Many existing enterprise architecture frameworks like DODAF, MODAF and NAF have lately been extended with service-oriented views. The UPDM UML Profile and Metamodel for DODAF and MODAF has thus included various service-oriented views. This thesis proposes a new enterprise architecture framework ESA Enterprise Service Arch...

  13. The Tera Multithreaded Architecture and Unstructured Meshes

    Science.gov (United States)

    Bokhari, Shahid H.; Mavriplis, Dimitri J.

    1998-01-01

    The Tera Multithreaded Architecture (MTA) is a new parallel supercomputer currently being installed at San Diego Supercomputing Center (SDSC). This machine has an architecture quite different from contemporary parallel machines. The computational processor is a custom design and the machine uses hardware to support very fine grained multithreading. The main memory is shared, hardware randomized and flat. These features make the machine highly suited to the execution of unstructured mesh problems, which are difficult to parallelize on other architectures. We report the results of a study carried out during July-August 1998 to evaluate the execution of EUL3D, a code that solves the Euler equations on an unstructured mesh, on the 2 processor Tera MTA at SDSC. Our investigation shows that parallelization of an unstructured code is extremely easy on the Tera. We were able to get an existing parallel code (designed for a shared memory machine), running on the Tera by changing only the compiler directives. Furthermore, a serial version of this code was compiled to run in parallel on the Tera by judicious use of directives to invoke the "full/empty" tag bits of the machine to obtain synchronization. This version achieves 212 and 406 Mflop/s on one and two processors respectively, and requires no attention to partitioning or placement of data issues that would be of paramount importance in other parallel architectures.

  14. Advanced Architectures for Astrophysical Supercomputing

    Science.gov (United States)

    Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.

    2010-12-01

    Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding-up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.

  15. Spatially parallel architectures for industrial robot vision

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, D.H.; Veronis, A.M.; Salland, J.C.

    1983-01-01

    Spatially parallel computing systems that contain thousands of processing elements have an architecture well suited to vision systems. Described is a simple spatially parallel architecture that employs direct visual input and will be capable of being integrated onto a single chip of silicon within the decade. The architecture can determine basic shapes, such as square or triangular, and can determine the number of holes in an object or the number of cavities or inlets in the object. It is able to segment an input image that contains many objects. Algorithms that involve brightness can also be executed. 8 references.
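
    One of the measurements mentioned above, counting the holes in an object, can be expressed entirely as whole-array operations, which is the style of computation a processing-element array favours. A small illustrative sketch (using SciPy on a toy binary image, not the chip described in the paper):

        # Count holes in a binary object: label the background regions and treat any
        # background component that does not touch the image border as a hole.
        import numpy as np
        from scipy import ndimage

        img = np.array([[0, 0, 0, 0, 0, 0],
                        [0, 1, 1, 1, 1, 0],
                        [0, 1, 0, 0, 1, 0],
                        [0, 1, 1, 1, 1, 0],
                        [0, 0, 0, 0, 0, 0]])

        labels, n = ndimage.label(img == 0)          # connected background regions
        border = set(labels[0]) | set(labels[-1]) | set(labels[:, 0]) | set(labels[:, -1])
        holes = n - len(border - {0})                # regions not touching the border
        print("holes in object:", holes)             # -> 1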

  16. A data acquisition architecture for the SSC

    International Nuclear Information System (INIS)

    Partridge, R.

    1990-01-01

    An SSC data acquisition architecture applicable to high-p{sub T} detectors is described. The architecture is based upon a small set of design principles that were chosen to simplify communication between data acquisition elements while providing the required level of flexibility and performance. The architecture features an integrated system for data collection, event building, and communication with a large processing farm. The interface to the front end electronics system is also discussed. A set of design parameters is given for a data acquisition system that should meet the needs of high-p{sub T} detectors at the SSC

  17. Performance Analysis of Multiradio Transmitter with Polar or Cartesian Architectures Associated with High Efficiency Switched-Mode Power Amplifiers (invited paper)

    Directory of Open Access Journals (Sweden)

    F. Robert

    2010-12-01

    This paper deals with wireless multi-radio transmitter architectures operating in the frequency band of 800 MHz – 6 GHz. As a consequence of the constant evolution in communication systems, mobile transmitters must be able to operate at different frequency bands and modes according to existing standards specifications. The concept of a unique multiradio architecture is an evolution of the multistandard transceiver characterized by a parallelization of circuits for each standard. The multi-radio concept optimizes silicon area and power consumption. Transmitter architectures using sampling techniques and baseband ΣΔ or PWM coding of signals before their amplification appear as good candidates for multiradio transmitters for several reasons. They allow using high efficiency power amplifiers such as switched-mode PAs. They are highly flexible and easy to integrate because of their digital nature. But when the transmitter efficiency is considered, many elements have to be taken into account: signal coding efficiency, PA efficiency, RF filter. This paper investigates the suitability of these architectures for a multiradio transmitter able to support existing wireless communications standards between 800 MHz and 6 GHz. It evaluates and compares the different possible architectures for WiMAX and LTE standards in terms of signal quality and transmitter power efficiency.

  18. Design requirements of communication architecture of SMART safety system

    International Nuclear Information System (INIS)

    Park, H. Y.; Kim, D. H.; Sin, Y. C.; Lee, J. Y.

    2001-01-01

    To develop the communication network architecture of the SMART safety system, evaluation elements for reliability and performance factors were extracted from commercial networks and classified into required levels by importance. Predictable determinacy, a static and fixed architecture, separation and isolation from other systems, high reliability, and verification and validation are introduced as the essential requirements of a safety system communication network. Based on the suggested requirements, optical cable, star topology, synchronous transmission, point-to-point physical links, connection-oriented logical links, and MAC (medium access control) with fixed allocation are selected as the design elements. The proposed architecture will be applied as the basic communication network architecture of the SMART safety system

  19. Fast underdetermined BSS architecture design methodology for real time applications.

    Science.gov (United States)

    Mopuri, Suresh; Reddy, P Sreenivasa; Acharyya, Amit; Naik, Ganesh R

    2015-01-01

    In this paper, we propose a high speed architecture design methodology for the Under-determined Blind Source Separation (UBSS) algorithm using our recently proposed high speed Discrete Hilbert Transform (DHT) targeting real time applications. In the UBSS algorithm, unlike typical BSS, the number of sensors is less than the number of sources, which is of more interest in real time applications. The DHT architecture has been implemented based on a sub-matrix multiplication method to compute an M-point DHT, which uses the N-point architecture recursively, where M is an integer multiple of N. The DHT architecture and the state-of-the-art architecture are coded in VHDL for a 16-bit word length and ASIC implementation is carried out using UMC 90-nm technology at V{sub DD} = 1 V and a 1 MHz clock frequency. The proposed architecture implementation and experimental comparison results show that the DHT design is two times faster than the state-of-the-art architecture.

  20. Coding isotropic images

    Science.gov (United States)

    Oneal, J. B., Jr.; Natarajan, T. R.

    1976-01-01

    Rate distortion functions for two-dimensional homogeneous isotropic images are compared with the performance of 5 source encoders designed for such images. Both unweighted and frequency weighted mean square error distortion measures are considered. The coders considered are differential PCM (DPCM) using six previous samples in the prediction, herein called 6 pel (picture element) DPCM; simple DPCM using single sample prediction; 6 pel DPCM followed by entropy coding; an 8 x 8 discrete cosine transform coder, and a 4 x 4 Hadamard transform coder. Other transform coders were studied and found to have about the same performance as the two transform coders above. With the mean square error distortion measure, DPCM with entropy coding performed best. The relative performance of the coders changes slightly when the distortion measure is frequency weighted mean square error. The performance of all the coders was separated by only about 4 dB.
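
    A toy one-dimensional version of the simple DPCM scheme compared above shows the essential loop: predict from the previous reconstructed sample, quantize the prediction error, and let the decoder accumulate the quantized errors. The quantizer step size is an arbitrary choice for illustration.

        # Minimal single-sample-prediction DPCM: the encoder tracks the decoder's
        # reconstruction so the quantization error does not accumulate.
        import numpy as np

        def dpcm_encode(x, step=4):
            pred, codes = 0.0, []
            for s in x:
                e = s - pred                     # prediction error
                q = int(round(e / step))         # quantized error (the transmitted code)
                codes.append(q)
                pred = pred + q * step           # mirror the decoder's reconstruction
            return codes

        def dpcm_decode(codes, step=4):
            pred, out = 0.0, []
            for q in codes:
                pred = pred + q * step
                out.append(pred)
            return np.array(out)

        x = np.array([10, 12, 15, 20, 26, 25, 24], dtype=float)
        codes = dpcm_encode(x)
        print(codes, dpcm_decode(codes))         # reconstruction stays within +/- step/2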

  1. Efficient convolutional sparse coding

    Energy Technology Data Exchange (ETDEWEB)

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M{sup 3}N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
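
    The frequency-domain trick can be illustrated in a stripped-down, single-filter setting (this is only the linear-system step, not the full ADMM dictionary-learning method): because circular convolution diagonalises under the FFT, the quadratic subproblem reduces to an elementwise division.

        # Single-filter sketch: solve (D^T D + rho I) x = D^T s + rho z in O(N log N)
        # by moving to the Fourier domain, where the system becomes elementwise.
        import numpy as np

        rng = np.random.default_rng(1)
        N = 256
        d = np.zeros(N); d[:8] = rng.standard_normal(8)     # one dictionary filter
        s = rng.standard_normal(N)                          # signal to represent
        z = np.zeros(N)                                     # splitting variable (here zero)
        rho = 1.0

        Dh = np.fft.fft(d)
        rhs_hat = np.conj(Dh) * np.fft.fft(s) + rho * np.fft.fft(z)
        x = np.real(np.fft.ifft(rhs_hat / (np.conj(Dh) * Dh + rho)))   # elementwise solve

        # Check the residual of the (circulant) normal equations in the time domain.
        Dx = np.real(np.fft.ifft(Dh * np.fft.fft(x)))
        lhs = np.real(np.fft.ifft(np.conj(Dh) * np.fft.fft(Dx))) + rho * x
        rhs = np.real(np.fft.ifft(np.conj(Dh) * np.fft.fft(s))) + rho * z
        print(np.max(np.abs(lhs - rhs)))                    # ~1e-12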

  2. Space Station Human Factors Research Review. Volume 3: Space Station Habitability and Function: Architectural Research

    Science.gov (United States)

    Cohen, Marc M. (Editor); Eichold, Alice (Editor); Heers, Susan (Editor)

    1987-01-01

    Articles are presented on a space station architectural elements model study, space station group activities habitability module study, full-scale architectural simulation techniques for space stations, and social factors in space station interiors.

  3. Monte Carlo simulations on SIMD computer architectures

    International Nuclear Information System (INIS)

    Burmester, C.P.; Gronsky, R.; Wille, L.T.

    1992-01-01

    In this paper algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures
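
    The geometric (lattice-partitioning) style of parallelism mentioned above is commonly realised with a checkerboard decomposition, in which all sites of one colour can be updated simultaneously because none of them are neighbours. The sketch below expresses one such Metropolis sweep as whole-array operations; it is a generic illustration, not the MasPar implementation.

        # Checkerboard Metropolis sweep for the nearest-neighbour Ising model,
        # written as whole-array (SIMD-style) operations with periodic boundaries.
        import numpy as np

        rng = np.random.default_rng(2)
        L, beta = 64, 0.4
        spin = rng.choice([-1, 1], size=(L, L))
        ii, jj = np.indices((L, L))

        for parity in (0, 1):                         # update one sublattice at a time
            mask = (ii + jj) % 2 == parity            # sites of this colour are independent
            nbr = (np.roll(spin, 1, 0) + np.roll(spin, -1, 0) +
                   np.roll(spin, 1, 1) + np.roll(spin, -1, 1))
            dE = 2.0 * spin * nbr                     # energy change if the spin flips
            accept = rng.random((L, L)) < np.exp(-beta * dE)
            spin = np.where(mask & accept, -spin, spin)

        print("magnetisation per site:", spin.mean())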

  4. Fault-tolerant architectures for superconducting qubits

    Energy Technology Data Exchange (ETDEWEB)

    DiVincenzo, David P [IBM Research Division, Thomas J Watson Research Center, Yorktown Heights, NY 10598 (United States)], E-mail: divince@watson.ibm.com

    2009-12-15

    In this short review, I draw attention to new developments in the theory of fault tolerance in quantum computation that may give concrete direction to future work in the development of superconducting qubit systems. The basics of quantum error-correction codes, which I will briefly review, have not significantly changed since their introduction 15 years ago. But an interesting picture has emerged of an efficient use of these codes that may put fault-tolerant operation within reach. It is now understood that two-dimensional surface codes, close relatives of the original toric code of Kitaev, can be adapted as shown by Raussendorf and Harrington to effectively perform logical gate operations in a very simple planar architecture, with error thresholds for fault-tolerant operation simulated to be 0.75%. This architecture uses topological ideas in its functioning, but it is not 'topological quantum computation'-there are no non-abelian anyons in sight. I offer some speculations on the crucial pieces of superconducting hardware that could be demonstrated in the next couple of years that would be clear stepping stones towards this surface-code architecture.

  5. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  6. FEBio: finite elements for biomechanics.

    Science.gov (United States)

    Maas, Steve A; Ellis, Benjamin J; Ateshian, Gerard A; Weiss, Jeffrey A

    2012-01-01

    In the field of computational biomechanics, investigators have primarily used commercial software that is neither geared toward biological applications nor sufficiently flexible to follow the latest developments in the field. This lack of a tailored software environment has hampered research progress, as well as dissemination of models and results. To address these issues, we developed the FEBio software suite (http://mrl.sci.utah.edu/software/febio), a nonlinear implicit finite element (FE) framework, designed specifically for analysis in computational solid biomechanics. This paper provides an overview of the theoretical basis of FEBio and its main features. FEBio offers modeling scenarios, constitutive models, and boundary conditions, which are relevant to numerous applications in biomechanics. The open-source FEBio software is written in C++, with particular attention to scalar and parallel performance on modern computer architectures. Software verification is a large part of the development and maintenance of FEBio, and to demonstrate the general approach, the description and results of several problems from the FEBio Verification Suite are presented and compared to analytical solutions or results from other established and verified FE codes. An additional simulation is described that illustrates the application of FEBio to a research problem in biomechanics. Together with the pre- and postprocessing software PREVIEW and POSTVIEW, FEBio provides a tailored solution for research and development in computational biomechanics.

  7. Accuracy Test of Software Architecture Compliance Checking Tools – Test Instruction

    NARCIS (Netherlands)

    Pruijt, Leo; van der Werf, J.M.E.M.|info:eu-repo/dai/nl/36950674X; Brinkkemper., Sjaak|info:eu-repo/dai/nl/07500707X

    2015-01-01

    Software Architecture Compliance Checking (SACC) is an approach to verify conformance of implemented program code to high-level models of architectural design. Static SACC focuses on the modular software architecture and on the existence of rule violating dependencies between modules. Accurate tool

  8. Modeling Architectural Patterns’ Behavior Using Architectural Primitives

    NARCIS (Netherlands)

    Waqas Kamal, Ahmad; Avgeriou, Paris

    2008-01-01

    Architectural patterns have an impact on both the structure and the behavior of a system at the architecture design level. However, it is challenging to model patterns’ behavior in a systematic way because modeling languages do not provide the appropriate abstractions and because each pattern

  9. Modeling Architectural Patterns' Behavior Using Architectural Primitives

    NARCIS (Netherlands)

    Kamal, Ahmad Waqas; Avgeriou, Paris; Morrison, R; Balasubramaniam, D; Falkner, K

    2008-01-01

    Architectural patterns have an impact on both the structure and the behavior of a system at the architecture design level. However, it is challenging to model patterns' behavior in a systematic way because modeling languages do not provide the appropriate abstractions and because each pattern

  10. Simply architecture or bioclimatic architecture?; Arquitectura bioclimatica o simplemente Arquitectura?

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Torres, Juan Manuel [Universidad de Guanajuato (Mexico)

    2006-10-15

    Bioclimatic architecture is architecture that takes advantage of its placement in the environment and of its architectonic elements to benefit from the climate, with the aim of reaching internal thermal comfort without using mechanical systems. This article recounts the history of this singular kind of architecture over the centuries, and also emphasizes the use of sunlight to achieve the desired thermal well-being in buildings. [Spanish] The type of architecture that takes advantage of its arrangement in the surroundings and of its architectonic elements to make use of the climate, with the aim of achieving interior thermal comfort without using mechanical systems, is called bioclimatic. This article discusses the history of this very singular type of architecture over the centuries, and also emphasizes sunlight as a very efficient means through which buildings can be designed to achieve the desired thermal well-being.

  11. RATS: Reactive Architectures

    National Research Council Canada - National Science Library

    Christensen, Marc

    2004-01-01

    This project had two goals: To build an emulation prototype board for a tiled architecture and to demonstrate the utility of a global inter-chip free-space photonic interconnection fabric for polymorphous computer architectures (PCA...

  12. Avionics Architecture for Exploration

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of the AES Avionics Architectures for Exploration (AAE) project is to develop a reference architecture that is based on standards and that can be scaled and...

  13. Religious architecture: anthropological perspectives

    NARCIS (Netherlands)

    Verkaaik, O.

    2013-01-01

    Religious Architecture: Anthropological Perspectives develops an anthropological perspective on modern religious architecture, including mosques, churches and synagogues. Borrowing from a range of theoretical perspectives on space-making and material religion, this volume looks at how religious

  14. Rhein-Ruhr architecture

    DEFF Research Database (Denmark)

    2002-01-01

    Catalogue for the exhibition 'Rhein - Ruhr architecture', Meldahls smedie, 15 March - 28 April 2002. 99 pages.

  15. Controller Architectures for Switching

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2009-01-01

    This paper investigates different controller architectures in connection with controller switching. The controller switching is derived by using the Youla-Jabr-Bongiorno-Kucera (YJBK) parameterization. A number of different architectures for the implementation of the YJBK parameterization...... are described and applied in connection with controller switching. An architecture that does not include inversion of the coprime factors is introduced. This architecture makes controller switching particularly simple.

  16. Space Internet Architectures and Technologies for NASA Enterprises

    Science.gov (United States)

    Bhasin, Kul; Hayden, Jeffrey L.

    2001-01-01

    NASA's future communications services will be supplied through a space communications network that mirrors the terrestrial Internet in its capabilities and flexibility. The notional requirements for future data gathering and distribution by this Space Internet have been gathered from NASA's Earth Science Enterprise (ESE), the Human Exploration and Development in Space (HEDS), and the Space Science Enterprise (SSE). This paper describes a communications infrastructure for the Space Internet, the architectures within the infrastructure, and the elements that make up the architectures. The architectures meet the requirements of the enterprises beyond 2010 with Internet-compatible technologies and functionality. The elements of an architecture include the backbone, access, inter-spacecraft and proximity communication parts. From the architectures, technologies have been identified which have the most impact and are critical for the implementation of the architectures.

  17. Chemistry of superheavy elements

    International Nuclear Information System (INIS)

    Schaedel, M.

    2012-01-01

    The chemistry of superheavy elements - or transactinides from their position in the Periodic Table - is summarized. After giving an overview over historical developments, nuclear aspects about synthesis of neutron-rich isotopes of these elements, produced in hot-fusion reactions, and their nuclear decay properties are briefly mentioned. Specific requirements to cope with the one-atom-at-a-time situation in automated chemical separations and recent developments in aqueous-phase and gas-phase chemistry are presented. Exciting, current developments, first applications, and future prospects of chemical separations behind physical recoil separators ('pre-separator') are discussed in detail. The status of our current knowledge about the chemistry of rutherfordium (Rf, element 104), dubnium (Db, element 105), seaborgium (Sg, element 106), bohrium (Bh, element 107), hassium (Hs, element 108), copernicium (Cn, element 112), and element 114 is discussed from an experimental point of view. Recent results are emphasized and compared with empirical extrapolations and with fully-relativistic theoretical calculations, especially also under the aspect of the architecture of the Periodic Table. (orig.)

  18. Knowledge and Architectural Practice

    DEFF Research Database (Denmark)

    Verbeke, Johan

    2017-01-01

    This paper focuses on the specific knowledge residing in architectural practice. It is based on the research of 35 PhD fellows in the ADAPT-r (Architecture, Design and Art Practice Training-research) project. The ADAPT-r project innovates architectural research in combining expertise from academi...

  19. Architecture faculty, Prague

    Czech Academy of Sciences Publication Activity Database

    Hnídková, Vendula

    No. 40 (2011), pp. 30-31. ISSN 1573-3815. Institutional research plan: CEZ:AV0Z80330511. Keywords: Czech contemporary architecture; Alena Šrámková; Architecture faculty, Prague. Subject RIV: AL - Art, Architecture, Cultural Heritage

  20. An Embodied Architecture

    Directory of Open Access Journals (Sweden)

    Frances Downing

    2012-10-01

    it is our body boundary. The “flesh” or the lived body (Merleau-Ponty, 1968) is, moreover, an in-between concept that articulates the subjective mind to the objective world. It bridges the boundaries separating inside from outside. Thus, it could act as a metaphor for introducing the notion of edge in architectural place. The edge itself, then, embodies the embodied being. Buildings have boundaries of foundation, wall, or roof, parts of which could be thought of as the “skin.” In today’s practice, the various skins of a building have become more complicated and porous as the field of architecture extends itself into “systemic” conditions, within and without. It follows then that the body survives the interaction and communication between mind and the external world if it inhabits the edge of place embodying localized boundary metaphors. Architecture is beginning the process of aligning itself with a new moral code - one that is inclusive of our biological reality, the embodiment of ideas, systemic evolution, and ecological necessities. This paper is situated within this new moral code of systemic ecological and biological interactions.

  1. The analysis of cultural architectural trends in Crisan locality

    Directory of Open Access Journals (Sweden)

    SELA Florentina

    2010-09-01

    The paper presents data on the identification and analysis of the traditional architectural elements in Crisan locality, given that tourism activity is in continuous development. The field research (during November 2007) enabled us to develop a qualitative and quantitative analysis in terms of the identification of traditional architecture elements, their conservation status, and the frequency of use of traditional building materials, decorative elements and specific colors in construction architecture. Further, based on the collected data, a chart was produced - Distribution of the Traditional Architecture Index (TAI) against the distance from the center of Crisan locality - showing that in Crisan locality the houses were and are built without taking any rule into account, thus destroying the traditional architecture.

  2. Monte Carlo simulation code modernization

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    The continual development of sophisticated transport simulation algorithms allows increasingly accurate description of the effect of the passage of particles through matter. This modelling capability finds applications in a large spectrum of fields from medicine to astrophysics, and of course HEP. These new capabilities however come at the cost of a greater computational intensity of the new models, which has the effect of increasing the demands on computing resources. This is particularly true for HEP, where the demand for more simulation is driven by the need for both more accuracy and more precision, i.e. better models and more events. Usually HEP has relied on the "Moore's law" evolution, but for almost ten years the increase in clock speed has withered and computing capacity comes in the form of hardware architectures of many-core or accelerated processors. To harness these opportunities we need to adapt our code to concurrent programming models taking advantage of both SIMD and SIMT architectures. Th...

  3. Tourists' Transformation Experience: From Destination Architecture to Identity Formation

    DEFF Research Database (Denmark)

    Ye, Helen Yi; Tussyadiah, Iis

    2010-01-01

    Today’s tourists seek unique destinations that could associate with their self identity in a profound way. It is meaningful for destinations to design unique physical elements that offer transformational travel experiences. This study aims at identifying how tourists encounter architecture...... in a destination and if architecture facilitates tourists’ self transformation. Based on narrative structure analysis by deconstruction of travel blog posts, the results suggest that tourists perceive architectural landscape as an important feature that reflects destinations’ identity. Four different interaction...

  4. Reference Avionics Architecture for Lunar Surface Systems

    Science.gov (United States)

    Somervill, Kevin M.; Lapin, Jonathan C.; Schmidt, Oron L.

    2010-01-01

    Developing and delivering infrastructure capable of supporting long-term manned operations to the lunar surface has been a primary objective of the Constellation Program in the Exploration Systems Mission Directorate. Several concepts have been developed related to the development and deployment of lunar exploration vehicles and assets that provide critical functionality such as transportation, habitation, and communication, to name a few. Together, these systems perform complex safety-critical functions, largely dependent on avionics for control and behavior of system functions. These functions are implemented using interchangeable, modular avionics designed for lunar transit and lunar surface deployment. Systems are optimized towards reuse and commonality of form and interface and can be configured via software or component integration for special purpose applications. There are two core concepts in the reference avionics architecture described in this report. The first concept uses distributed, smart systems to manage complexity, simplify integration, and facilitate commonality. The second core concept is to employ extensive commonality between elements and subsystems. These two concepts are used in the context of developing reference designs for many lunar surface exploration vehicles and elements. These concepts are repeated constantly as architectural patterns in a conceptual architectural framework. This report describes the use of these architectural patterns in a reference avionics architecture for Lunar surface systems elements.

  5. Elemental ABAREX -- a user's manual

    International Nuclear Information System (INIS)

    Smith, A.B.

    1999-01-01

    ELEMENTAL ABAREX is an extended version of the spherical optical-statistical model code ABAREX, designed for the interpretation of neutron interactions with elemental targets consisting of up to ten isotopes. The contributions from each of the isotopes of the element are explicitly dealt with, and combined for comparison with the elemental observables. Calculations and statistical fitting of experimental data are considered. The code is written in FORTRAN-77 and arranged for use on the IBM-compatible personal computer (PC), but it should operate effectively on a number of other systems, particularly VAX/VMS and IBM work stations. Effort is taken to make the code user friendly. With this document a reasonably skilled individual should become fluent with the use of the code in a brief period of time

  6. Architectures of prototypes and architectural prototyping

    DEFF Research Database (Denmark)

    Hansen, Klaus Marius; Christensen, Michael; Sandvad, Elmer

    1998-01-01

    This paper reports on experience obtained through development of a prototype of a global customer service system in a project involving a large shipping company and a university research group. The research group had no previous knowledge of the complex business of shipping and had never worked... sessions with users, - evolve over a long period of time to contain more functionality - allow for 6-7 developers working intensively in parallel. Explicit focus on the software architecture and letting the architecture evolve with the prototype played a major role in resolving these conflicting... constraints. Specifically, allowing explicit restructuring phases when the architecture became problematic proved to be crucial.  ...

  7. Allele coding in genomic evaluation

    Directory of Open Access Journals (Sweden)

    Christensen Ole F

    2011-06-01

    Background: Genomic data are used in animal breeding to assist genetic evaluation. Several models to estimate genomic breeding values have been studied. In general, two approaches have been used. One approach estimates the marker effects first and then genomic breeding values are obtained by summing marker effects. In the second approach, genomic breeding values are estimated directly using an equivalent model with a genomic relationship matrix. Allele coding is the method chosen to assign values to the regression coefficients in the statistical model. A common allele coding is zero for the homozygous genotype of the first allele, one for the heterozygote, and two for the homozygous genotype for the other allele. Another common allele coding changes these regression coefficients by subtracting a value from each marker such that the mean of regression coefficients is zero within each marker. We call this centered allele coding. This study considered effects of different allele coding methods on inference. Both marker-based and equivalent models were considered, and restricted maximum likelihood and Bayesian methods were used in inference. Results: Theoretical derivations showed that parameter estimates and estimated marker effects in marker-based models are the same irrespective of the allele coding, provided that the model has a fixed general mean. For the equivalent models, the same results hold, even though different allele coding methods lead to different genomic relationship matrices. Calculated genomic breeding values are independent of allele coding when the estimate of the general mean is included in the values. Reliabilities of estimated genomic breeding values calculated using elements of the inverse of the coefficient matrix depend on the allele coding because different allele coding methods imply different models. Finally, allele coding affects the mixing of Markov chain Monte Carlo algorithms, with the centered coding being
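
    The two codings discussed above are easy to show on a tiny, made-up genotype matrix: the 0/1/2 coding, the centred coding obtained by subtracting the column means (twice the allele frequencies), and a VanRaden-type genomic relationship matrix built from the centred matrix.

        # Illustration of 0/1/2 versus centred allele coding (genotypes are invented).
        import numpy as np

        M = np.array([[0, 1, 2, 1],      # rows = animals, columns = markers
                      [2, 1, 0, 0],
                      [1, 2, 1, 1]], dtype=float)

        p = M.mean(axis=0) / 2.0         # allele frequency estimate per marker
        Z = M - 2.0 * p                  # centred allele coding (column means are zero)

        denom = 2.0 * np.sum(p * (1.0 - p))
        G = Z @ Z.T / denom              # VanRaden-type genomic relationship matrix
        print(np.round(Z, 2))
        print(np.round(G, 3))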

  8. Architectures for wrist-worn energy harvesting

    Science.gov (United States)

    Rantz, R.; Halim, M. A.; Xue, T.; Zhang, Q.; Gu, L.; Yang, K.; Roundy, S.

    2018-04-01

    This paper reports the simulation-based analysis of six dynamical structures with respect to their wrist-worn vibration energy harvesting capability. This work approaches the problem of maximizing energy harvesting potential at the wrist by considering multiple mechanical substructures; rotational and linear motion-based architectures are examined. Mathematical models are developed and experimentally corroborated. An optimization routine is applied to the proposed architectures to maximize average power output and allow for comparison. The addition of a linear spring element to the structures has the potential to improve power output; for example, in the case of rotational structures, a 211% improvement in power output was estimated under real walking excitation. The analysis concludes that a sprung rotational harvester architecture outperforms a sprung linear architecture by 66% when real walking data is used as input to the simulations.
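
    To make the role of the added linear spring element concrete, the sketch below integrates a generic single-degree-of-freedom sprung proof-mass model under a sinusoidal excitation and reports the average power dissipated in an electrical damper. It is a didactic illustration with invented parameter values, not one of the six architectures or the optimization routine studied in the paper.

```python
import numpy as np

# Single-degree-of-freedom sprung harvester: m*x'' + (c_m + c_e)*x' + k*x = -m*a(t)
# All parameter values are illustrative, not those of the paper.
m, k = 5e-3, 2.0          # proof mass [kg], spring stiffness [N/m]
c_m, c_e = 0.02, 0.05     # mechanical and electrical damping [N*s/m]
f_exc, amp = 1.0, 5.0     # excitation frequency [Hz], acceleration amplitude [m/s^2]

dt, t_end = 1e-4, 20.0
t = np.arange(0.0, t_end, dt)
a = amp * np.sin(2 * np.pi * f_exc * t)

x, v = 0.0, 0.0
power = np.empty_like(t)
for i, ai in enumerate(a):
    acc = (-m * ai - (c_m + c_e) * v - k * x) / m
    v += acc * dt            # semi-implicit Euler integration
    x += v * dt
    power[i] = c_e * v * v   # instantaneous power in the electrical damper

# Average over the second half of the record to skip the start-up transient.
print(f"average harvested power: {power[len(t)//2:].mean()*1e3:.3f} mW")
```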

  9. Architecture of absurd (forms, positions, apposition)

    Directory of Open Access Journals (Sweden)

    Fedorov Viktor Vladimirovich

    2014-04-01

    Full Text Available In everyday life we constantly face absurd things, which seem to lack common sense. The notion of the absurd acts as: (a) an aesthetic category; (b) an element of logic; (c) a metaphysical phenomenon. The possibility of overcoming the absurd lies in understanding the situation, in faith in the existence of sense and in the hope of grasping it. The architecture of the absurd should be understood as a loss of sense in a part of the architectural landscape (urban environment). The architecture of the absurd is organized through exaggerated forms and proportions and the unnatural position and apposition of various objects. These are usually small-scale facilities of local spatial and temporal significance. There are no large absurd architectural spaces, as the natural architectural environment dampens perturbations of the sense-sphere. The architecture of the absurd is considered a «pathology» of the environment. «Nonsense» objects and the hope (or even faith) of detecting sense generate a fruitful paradox: the presence of the architecture of the absurd in the world.

  10. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist Hospital, since May 1992. The ACR dictionary consisted of 11 files, one for the organ codes and the others for the pathology codes. The organ code was obtained by typing the organ name, or the code number itself, from among the upper- and lower-level codes of the selected entry that were simultaneously displayed on the screen. According to the first digit of the selected organ code, the corresponding pathology code file was chosen automatically. The proper pathology code was then obtained in a similar fashion to the organ code selection. An example of a resulting ACR code is '131.3661'. This procedure was reproducible regardless of the number of data fields. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code is handled by the same program, and the program can be incorporated into other data-processing applications. Its merits are simple operation, accurate and detailed coding, and easy adaptation to other programs. Therefore, this program can be used to automate routine work in the department of radiology.
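
    The two-stage lookup described in the abstract (select an organ code, then pick the pathology code from the file keyed by the organ code's first digit) can be sketched as follows; the codes and labels are invented placeholders, and the actual program was written in FoxBASE rather than Python.

```python
# Toy two-stage ACR-style lookup: organ code first, then a pathology code
# chosen from the dictionary keyed by the organ code's leading digit.
# All codes and labels are invented placeholders.
organ_codes = {"131": "illustrative organ entry"}
pathology_files = {"1": {"3661": "illustrative pathology entry"}}

def build_acr_code(organ: str, pathology: str) -> str:
    if organ not in organ_codes:
        raise KeyError(f"unknown organ code {organ!r}")
    file_key = organ[0]                # first digit of the organ code picks the file
    if pathology not in pathology_files.get(file_key, {}):
        raise KeyError(f"unknown pathology code {pathology!r}")
    return f"{organ}.{pathology}"

print(build_acr_code("131", "3661"))   # -> 131.3661
```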

  11. Use of Astronomical Principles in Indian Temple Architecture

    Science.gov (United States)

    Shylaja, B. S.

    Temples, identified as places of worship, served an important role in building religious tradition and culture in Indian society. Many of them are known to have astronomical elements incorporated in their architecture to facilitate their role in timekeeping and calendar making. In this chapter, we present examples of the use of astronomy in temple architecture.

  12. The chromatin regulatory code: Beyond a histone code

    Science.gov (United States)

    Lesne, A.

    2006-03-01

    In this commentary on the contribution by Arndt Benecke in this issue, I discuss why the notion of “chromatin code” introduced and elaborated in this paper is to be preferred to that of “histone code”. Speaking of a code as regards nucleosome conformation and histone tail post-translational modifications only makes sense within the chromatin fiber, where their physico-chemical features can be translated into regulatory programs at the genome level, by means of a complex, multi-level interplay with the fiber architecture and dynamics settled in the course of Evolution. In particular, this chromatin code presumably exploits allosteric transitions of the chromatin fiber. The chromatin structure dependence of its translation suggests two alternative modes of transcription initiation regulation, also proposed in the paper by A. Benecke in this issue for interpreting strikingly bimodal micro-array data.

  13. CRISPR-based strategies for studying regulatory elements and chromatin structure in mammalian gene control.

    Science.gov (United States)

    Lau, Cia-Hin; Suh, Yousin

    2017-12-01

    The development of high-throughput methods has enabled the genome-wide identification of putative regulatory elements in a wide variety of mammalian cells at an unprecedented resolution. Extensive genomic studies have revealed the important role of regulatory elements and genetic variation therein in disease formation and risk. In most cases, there is only correlative evidence for the roles of these elements and non-coding changes within these elements in pathogenesis. With the advent of genome- and epigenome-editing tools based on the CRISPR technology, it is now possible to test the functional relevance of the regulatory elements and alterations on a genomic scale. Here, we review the various CRISPR-based strategies that have been developed to functionally validate the candidate regulatory elements in mammals as well as the non-coding genetic variants found to be associated with human disease. We also discuss how these synthetic biology tools have helped to elucidate the role of three-dimensional nuclear architecture and higher-order chromatin organization in shaping functional genome and controlling gene expression.

  14. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field. * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual
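
    As a small worked example of the subject matter (not an excerpt from the book), the sketch below encodes a bit stream with the textbook rate-1/2, memory-2 convolutional code with generator polynomials (7, 5) in octal, including zero-tail termination.

```python
def conv_encode(bits, flush=True):
    """Rate-1/2 convolutional encoder, generators (7, 5) in octal, memory 2."""
    s1 = s2 = 0                      # shift-register contents (most recent first)
    if flush:
        bits = list(bits) + [0, 0]   # zero-tail termination back to the all-zero state
    out = []
    for u in bits:
        out.append(u ^ s1 ^ s2)      # generator 7 (binary 111): input and both memory cells
        out.append(u ^ s2)           # generator 5 (binary 101): input and the older memory cell
        s1, s2 = u, s1               # shift the register
    return out

# Input 1 0 1 1 yields the familiar textbook output 11 10 00 01, plus the tail bits.
print(conv_encode([1, 0, 1, 1]))
```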

  15. Edge Equilibrium Code (EEC) For Tokamaks

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xujling

    2014-02-24

    The edge equilibrium code (EEC) described in this paper is developed for simulations of the near-edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as that implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.
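
    For context, the equation EEC solves is the Grad-Shafranov equilibrium equation; the standard textbook form in cylindrical coordinates (R, Z) is quoted below, with poloidal flux psi, pressure p(psi) and poloidal current function F(psi). This is general background rather than a statement taken from the EEC paper.

```latex
% Standard form of the Grad-Shafranov equation in cylindrical coordinates (R, Z):
R \frac{\partial}{\partial R}\!\left(\frac{1}{R}\frac{\partial \psi}{\partial R}\right)
  + \frac{\partial^{2} \psi}{\partial Z^{2}}
  = -\mu_0 R^{2} \frac{dp}{d\psi} - F \frac{dF}{d\psi}
```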

  16. Improvements, verifications and validations of the BOW code

    International Nuclear Information System (INIS)

    Yu, S.D.; Tayal, M.; Singh, P.N.

    1995-01-01

    The BOW code calculates the lateral deflections of a fuel element, consisting of sheath and pellets, due to temperature gradients, hydraulic drag and gravity. The fuel element is subjected to restraint from the endplates, neighboring fuel elements and the pressure tube. Many new features have been added to the BOW code since its original release in 1985. This paper outlines the major improvements made to the code and the verification/validation results. (author)
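
    A heavily simplified analogue of the kind of calculation BOW performs is the classical mid-span deflection of a simply supported elastic tube under a uniform lateral load; the sketch below evaluates that formula with made-up numbers and is not the BOW model, which also treats temperature gradients, endplate restraint and pellet-sheath interaction.

```python
import math

# Mid-span deflection of a simply supported beam under a uniform load:
#   delta = 5 * w * L**4 / (384 * E * I)
# A drastic simplification of what BOW models; numbers are illustrative only.
L = 0.5          # element length [m]
w = 20.0         # uniform lateral load [N/m] (e.g. gravity plus drag)
E = 80e9         # Young's modulus of the sheath material [Pa]
d_o, d_i = 13e-3, 12e-3                      # outer/inner sheath diameters [m]
I = math.pi * (d_o**4 - d_i**4) / 64.0       # second moment of area of the tube

delta = 5.0 * w * L**4 / (384.0 * E * I)
print(f"mid-span deflection: {delta*1e6:.1f} micrometres")
```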

  17. Information Architecture: The Data Warehouse Foundation.

    Science.gov (United States)

    Thomas, Charles R.

    1997-01-01

    Colleges and universities are initiating data warehouse projects to provide integrated information for planning and reporting purposes. A survey of 40 institutions with active data warehouse projects reveals the kinds of tools, contents, data cycles, and access currently used. Essential elements of an integrated information architecture are…

  18. Orthogonal antenna architecture for MIMO handsets

    DEFF Research Database (Denmark)

    Tatomirescu, Alexandru; Alrabadi, Osama; Pedersen, Gert Frølund

    2012-01-01

    The paper presents a method for decorrelating the antenna elements of a MIMO system in a compact handheld terminal at low bands. The architecture of the antenna system induces orthogonal currents over the closely spaced antennas resulting in a correlation free system. Nevertheless, due to the small...

  19. Hadoop Oriented Smart Cities Architecture.

    Science.gov (United States)

    Diaconita, Vlad; Bologa, Ana-Ramona; Bologa, Razvan

    2018-04-12

    A smart city implies a consistent use of technology for the benefit of the community. As the city develops over time, components and subsystems such as smart grids, smart water management, smart traffic and transportation systems, smart waste management systems, smart security systems, or e-governance are added. These components ingest and generate a multitude of structured, semi-structured or unstructured data that may be processed using a variety of algorithms in batches, micro batches or in real-time. The ICT architecture must be able to handle the increased storage and processing needs. When vertical scaling is no longer a viable solution, Hadoop can offer efficient linear horizontal scaling, solving storage, processing, and data analyses problems in many ways. This enables architects and developers to choose a stack according to their needs and skill-levels. In this paper, we propose a Hadoop-based architectural stack that can provide the ICT backbone for efficiently managing a smart city. On the one hand, Hadoop, together with Spark and the plethora of NoSQL databases and accompanying Apache projects, is a mature ecosystem. This is one of the reasons why it is an attractive option for a Smart City architecture. On the other hand, it is also very dynamic; things can change very quickly, and many new frameworks, products and options continue to emerge as others decline. To construct an optimized, modern architecture, we discuss and compare various products and engines based on a process that takes into consideration how the products perform and scale, as well as the reusability of the code, innovations, features, and support and interest in online communities.

  20. Hadoop Oriented Smart Cities Architecture

    Directory of Open Access Journals (Sweden)

    Vlad Diaconita

    2018-04-01

    Full Text Available A smart city implies a consistent use of technology for the benefit of the community. As the city develops over time, components and subsystems such as smart grids, smart water management, smart traffic and transportation systems, smart waste management systems, smart security systems, or e-governance are added. These components ingest and generate a multitude of structured, semi-structured or unstructured data that may be processed using a variety of algorithms in batches, micro batches or in real-time. The ICT architecture must be able to handle the increased storage and processing needs. When vertical scaling is no longer a viable solution, Hadoop can offer efficient linear horizontal scaling, solving storage, processing, and data analyses problems in many ways. This enables architects and developers to choose a stack according to their needs and skill-levels. In this paper, we propose a Hadoop-based architectural stack that can provide the ICT backbone for efficiently managing a smart city. On the one hand, Hadoop, together with Spark and the plethora of NoSQL databases and accompanying Apache projects, is a mature ecosystem. This is one of the reasons why it is an attractive option for a Smart City architecture. On the other hand, it is also very dynamic; things can change very quickly, and many new frameworks, products and options continue to emerge as others decline. To construct an optimized, modern architecture, we discuss and compare various products and engines based on a process that takes into consideration how the products perform and scale, as well as the reusability of the code, innovations, features, and support and interest in online communities.

  1. A Framework for Retargetable Code Generation using Simulated Annealing

    NARCIS (Netherlands)

    Visser, B.S.

    2000-01-01

    embedded systems. Retargetable code generation is a co-designing method to map a high-level software description onto a variety of hardware architectures without the need to rewrite a compiler. Highly efficient code generation is required to meet, for example, timing, area and low-power constraints.

  2. A Fast MHD Code for Gravitationally Stratified Media using ...

    Indian Academy of Sciences (India)

    2016-01-27

    The objective of this paper is to present the numerical methods and techniques used for porting the code to this novel and highly parallel compute architecture. The methods employed are justified by the performance benchmarks and validation results demonstrating that the code successfully simulates the ...

  3. Innovations in dynamic architecture

    Directory of Open Access Journals (Sweden)

    Abdulmajid Karanouh

    2015-11-01

    Full Text Available High performance adaptive solutions are capable of responding to the dynamic nature of users and context. These innovative and dynamic systems are steadily gaining ground over ubiquitous ‘best fit’ static models. These architectural elements often exist beyond the scope of mainstream building standards and traditional methods for data representation or communication. This presents major challenges to a highly standardized and compartmentalized industry in which ‘innovation’ is limited to a few signature practices that design iconic yet expensive structures, which often prioritize aesthetics over performance. This paper offers an overview of the benefits that integrated dynamic systems bring to buildings. Through an examination of an applied practice, this paper offers guidelines for communicating complex geometry in a clear design language across interdisciplinary collaborations. The use of diagrammatic grammar to translate underlying algorithmic rules into instructions for design allows complex, innovative solutions to be realized more effectively. The ideas presented here are based on the design principles of the competition-winning scheme of the Al-Bahr Towers. As lead consultant in Innovation Design & Research at AHR (formerly Aedas-UK), Abdulmajid Karanouh designed and spearheaded this project in close collaboration with Arup. The buildings won the Best Innovation Award 2012 from the Council on Tall Buildings and Urban Habitat (CTBUH). The pair of towers won recognition for their performance-driven form and dynamic facade, which operates following the movement of the sun.

  4. Preserving urban objects of historical and architectural heritage

    Directory of Open Access Journals (Sweden)

    Bal'zannikova Ekaterina Mikhailovna

    2014-01-01

    structural elements, delivering building materials, preparing the construction site and the basic period when condemned structures are demolished, new design elements are formed and assembled, interior finishing work is performed and the object facade is restored. In contrast to it, our method includes additional periods and a performance list. In particular, it is proposed to carry out a research period prior to the preparatory period, and after the basic period there should be the ending period.Thus, during the research period it is necessary to study urban development features in architectural and town-planning environment, to identify the historical and architectural value of the object, to estimate its ramshackle state and whether it is habitable, to determine the relationship of the object with the architectural and aesthetic image of surrounding objects and to develop a conservation program; and during the ending period it is proposed to assess the historical and architectural significance of the reconstructed object in relation to the aesthetic and architectural image of the surrounding area. The proposed complex method will increase the attractiveness of a historical and architectural heritage object and its surrounding area for tourists and, consequently, raise the cultural level of the visitors. Furthermore, the method will ensure the construction of recreation zones, their more frequent usage and visiting surrounding objects of social infrastructure, because more opportunities for cultural and aesthetic pastime will be offered. The method will also provide a more reasonable and effective use of available funding due to the careful analysis and proper choice of the methods to preserve objects of historical and architectural heritage.

  5. Evolving the Web-Based Distributed SI/PDO Architecture for High-Performance Visualization

    Energy Technology Data Exchange (ETDEWEB)

    HOLMES,VICTOR P.; LINEBARGER,JOHN M.; MILLER,DAVID J.; VANDEWART,RUTHE LYNN; CROWLEY,CHARLES P.

    2000-08-16

    The Simulation Intranet/Product Database Operator (SI/PDO) project has developed a Web-based distributed object architecture for high performance scientific simulation. A Web-based Java interface guides designers through the design and analysis cycle via solid and analytical modeling, meshing, finite element simulation, and various forms of visualization. The SI/PDO architecture has evolved in steps towards satisfying Sandia's long-term goal of providing an end-to-end set of services for high fidelity full physics simulations in a high-performance, distributed, and distance computing environment. This paper describes the continuing evolution of the architecture to provide high-performance visualization services. Extensions to the SI/PDO architecture allow web access to visualization tools that run on MP systems. This architecture makes these tools more easily accessible by providing web-based interfaces and by shielding the user from the details of these computing environments. The design is a multi-tier architecture, where the Java-based GUI tier runs on a web browser and provides image display and control functions. The computation tier runs on MP machines. The middle tiers provide custom communication with MP machines, remote file selection, remote launching of services, load balancing, and machine selection. The architecture allows middleware of various types (CORBA, COM, RMI, sockets, etc.) to connect the tiers depending upon the situation. Testing of constantly developing visualization tools can be done in an environment where there are only two tiers which both run on desktop machines. This allows fast testing turnaround and does not use compute cycles on high-performance machines. Once the code and interfaces are tested, they are moved to high-performance machines, and new tiers are added to handle the problems of using these machines. Uniform interfaces are used throughout the tiers to allow this flexibility. Experiments test the appropriate level of

  6. Development of TUF-ELOCA - a software tool for integrated single-channel thermal-hydraulic and fuel element analyses

    International Nuclear Information System (INIS)

    Popescu, A.I.; Wu, E.; Yousef, W.W.; Pascoe, J.; Parlatan, Y.; Kwee, M.

    2006-01-01

    The TUF-ELOCA tool couples the TUF and ELOCA codes to enable an integrated thermal-hydraulic and fuel element analysis for a single channel during transient conditions. The coupled architecture is based on TUF as the parent process controlling multiple ELOCA executions that simulate the fuel elements' behaviour, and is scalable to different fuel channel designs. The coupling ensures a proper feedback between the coolant conditions and the fuel elements' response, eliminates model duplication, and constitutes an improvement in prediction accuracy. The communication interfaces are based on PVM and allow parallelization of the fuel element simulations. Developmental testing results are presented showing realistic predictions for the fuel channel behaviour during a transient. (author)

  7. Enterprise architecture management

    DEFF Research Database (Denmark)

    Rahimi, Fatemeh; Gøtze, John; Møller, Charles

    2017-01-01

    Despite the growing interest in enterprise architecture management, researchers and practitioners lack a shared understanding of its applications in organizations. Building on findings from a literature review and eight case studies, we develop a taxonomy that categorizes applications of enterprise...... architecture management based on three classes of enterprise architecture scope. Organizations may adopt enterprise architecture management to help form, plan, and implement IT strategies; help plan and implement business strategies; or to further complement the business strategy-formation process....... The findings challenge the traditional IT-centric view of enterprise architecture management application and suggest enterprise architecture management as an approach that could support the consistent design and evolution of an organization as a whole....

  8. Enterprise architecture management

    DEFF Research Database (Denmark)

    Rahimi, Fatemeh; Gøtze, John; Møller, Charles

    2017-01-01

    architecture management based on three classes of enterprise architecture scope. Organizations may adopt enterprise architecture management to help form, plan, and implement IT strategies; help plan and implement business strategies; or to further complement the business strategy-formation process....... The findings challenge the traditional IT-centric view of enterprise architecture management application and suggest enterprise architecture management as an approach that could support the consistent design and evolution of an organization as a whole.......Despite the growing interest in enterprise architecture management, researchers and practitioners lack a shared understanding of its applications in organizations. Building on findings from a literature review and eight case studies, we develop a taxonomy that categorizes applications of enterprise...

  9. Architectural Masterpieces Of Humayun

    Directory of Open Access Journals (Sweden)

    Rahimov Laziz Abduazizovich

    2015-08-01

    Full Text Available This report discusses the architecture of Humayun. Baburid-style architecture began in India in 1526, when Babur put great effort into establishing his own style, adapted from the Timurid tradition. Babur's sudden death, however, did not allow him to develop it as he had planned, and so his architectural style did not mature in India. Humayun was inspired by the Baburid architecture produced during those four years, although his outlook was very different from Babur's, and he brought an extraordinary philosophy to the architectural sphere. During his years of rule over India he developed many new styles and put great effort into changing the earlier architectural manner. Unfortunately, the Afghan ruler Sher Shah attacked India, and Humayun had to escape to Persia. The purpose of this report is to identify to which of these rulers the buildings in India belong and to address misconceptions about them.

  10. Knowledge and Architectural Practice

    DEFF Research Database (Denmark)

    Verbeke, Johan

    2017-01-01

    This paper focuses on the specific knowledge residing in architectural practice. It is based on the research of 35 PhD fellows in the ADAPT-r (Architecture, Design and Art Practice Training-research) project. The ADAPT-r project innovates architectural research in combining expertise from academia...... and from practice in order to highlight and extract the specific kind of knowledge which resides and is developed in architectural practice (creative practice research). The paper will discuss three ongoing and completed PhD projects and focuses on the outcomes and their contribution to the field....... Specific to these research projects is that the researcher is within academia but stays immersed in architectural practice. The projects contribute to a better understanding of architectural practice, how it develops and what kind of knowledge is crucial. Furthermore, the paper will develop a reflection...

  11. Architecture and Stages

    DEFF Research Database (Denmark)

    Kiib, Hans

    2009-01-01

    Architecture and Art as Fuel New development zones for shopping and entertainment and space for festivals inside the city CAN be coupled with art and architecture and become ‘open minded' public domains based on cultural exchange and mutual learning. This type of space could be labelled...... as "experiencescape" - a space between tourism, culture, learning and economy. Strategies related to these challenges involve new architectural concepts and art as ‘engines' for a change. New expressive architecture and old industrial buildings are often combined into hybrid narratives, linking the past...... with the future. But this is not enough. The agenda is to develop architectural spaces, where social interaction and learning are enhanced by art and fun. How can we develop new architectural designs in our inner cities and waterfronts where eventscapes, learning labs and temporal use are merged with everyday...

  12. Secure Storage Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Aderholdt, Ferrol [Tennessee Technological University; Caldwell, Blake A [ORNL; Hicks, Susan Elaine [ORNL; Koch, Scott M [ORNL; Naughton, III, Thomas J [ORNL; Pogge, James R [Tennessee Technological University; Scott, Stephen L [Tennessee Technological University; Shipman, Galen M [ORNL; Sorrillo, Lawrence [ORNL

    2015-01-01

    The purpose of this report is to clarify the challenges associated with storage for secure enclaves. The major focus areas for the report are: - review of relevant parallel filesystem technologies to identify assets and gaps; - review of filesystem isolation/protection mechanisms, to include native filesystem capabilities and auxiliary/layered techniques; - definition of storage architectures that can be used for customizable compute enclaves (i.e., clarification of use-cases that must be supported for shared storage scenarios); - investigation of vendor products related to secure storage. This study provides technical details on the storage and filesystems used for HPC with particular attention to elements that contribute to creating secure storage. We outline the pieces for a shared storage architecture that balances protection and performance by leveraging the isolation capabilities available in filesystems and virtualization technologies to maintain the integrity of the data. Key Points: There are a few existing and in-progress protection features in Lustre related to secure storage, which are discussed in (Chapter 3.1). These include authentication capabilities like GSSAPI/Kerberos and the in-progress work for GSSAPI/Host-keys. The GPFS filesystem provides native support for encryption, which is not directly available in Lustre. Additionally, GPFS includes authentication/authorization mechanisms for inter-cluster sharing of filesystems (Chapter 3.2). The limitations of key importance for secure storage/filesystems are: (i) restricting sub-tree mounts for parallel filesystems (which is not directly supported in Lustre or GPFS), and (ii) segregation of hosts on the storage network and practical complications with dynamic additions to the storage network, e.g., LNET. A challenge for VM based use cases will be to provide efficient IO forwarding of the parallel filesystem from the host to the guest (VM). There are promising options like para-virtualized filesystems to

  13. The NASA Auralization Framework and Plugin Architecture

    Science.gov (United States)

    Aumann, Aric R.; Tuttle, Brian C.; Chapin, William L.; Rizzi, Stephen A.

    2015-01-01

    NASA has a long history of investigating human response to aircraft flyover noise and in recent years has developed a capability to fully auralize the noise of aircraft during their design. This capability is particularly useful for unconventional designs with noise signatures significantly different from the current fleet. To that end, a flexible software architecture has been developed to facilitate rapid integration of new simulation techniques for noise source synthesis and propagation, and to foster collaboration amongst researchers through a common releasable code base. The NASA Auralization Framework (NAF) is a skeletal framework written in C++ with basic functionalities and a plugin architecture that allows users to mix and match NAF capabilities with their own methods through the development and use of dynamically linked libraries. This paper presents the NAF software architecture and discusses several advanced auralization techniques that have been implemented as plugins to the framework.
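
    The plugin pattern the NAF uses (a skeletal framework that discovers capabilities supplied as dynamically linked libraries) can be illustrated in miniature. The Python sketch below loads a named plugin module with importlib and calls an agreed-upon entry point; it is only an analogy to the NAF's C++ mechanism, and the module and function names are invented.

```python
import importlib

# Toy plugin loader: each plugin is a module exposing a `process(samples)`
# function. The plugin name and interface here are invented for illustration;
# the NAF itself loads C++ plugins from dynamically linked libraries.
def load_plugin(module_name: str):
    module = importlib.import_module(module_name)
    if not hasattr(module, "process"):
        raise AttributeError(f"{module_name} does not provide a process() entry point")
    return module.process

if __name__ == "__main__":
    try:
        synthesize = load_plugin("my_noise_synthesis_plugin")  # hypothetical plugin
        print(synthesize([0.0, 0.1, 0.2]))
    except ModuleNotFoundError:
        print("plugin not installed; this sketch only shows the loading pattern")
```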

  14. Performance Analysis of FEM Algorithms on GPU and Many-Core Architectures

    KAUST Repository

    Khurram, Rooh

    2015-04-27

    The roadmaps of the leading supercomputer manufacturers are based on hybrid systems, which consist of a mix of conventional processors and accelerators. This trend is mainly due to the fact that the power consumption cost of future CPU-only Exascale systems will be unsustainable, thus accelerators such as graphics processing units (GPUs) and many-integrated-core (MIC) processors will likely be an integral part of the TOP500 (http://www.top500.org/) supercomputers beyond 2020. The emerging supercomputer architecture will bring new challenges for code developers. Continuum mechanics codes will be particularly affected, because the traditional synchronous implicit solvers will probably not scale on hybrid Exascale machines. In a previous study [1], we reported on the performance of a conjugate gradient based mesh motion algorithm [2] on Sandy Bridge, Xeon Phi, and K20c. In the present study we report on a comparative study of finite element codes, using PETSc and AmgX solvers on CPUs and GPUs, respectively [3,4]. We believe this study will be a good starting point for FEM code developers who are contemplating a CPU-to-accelerator transition.
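
    Since the benchmarked mesh-motion algorithm is conjugate-gradient based, a minimal dense NumPy version of the conjugate gradient iteration is sketched below for orientation; the production runs in the paper use PETSc on CPUs and AmgX on GPUs rather than anything like this toy loop.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A (dense toy version)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # expect approximately [0.0909, 0.6364]
```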

  15. ARCHITECTURE AND ITS WINDAGE

    Directory of Open Access Journals (Sweden)

    Limonad Mikhail Yurievich

    2017-07-01

    Full Text Available The article deals with the composition of the landscape and building on the basis of the laws of aerodynamic resistance of objects to the wind flow and the resulting physical effect of sail. The application of landscape-visual assessment based on windage properties as a criterion for the development of the architectural and town-planning appearance of buildings is presented. Windage is studied as a physical phenomenon arising in landscape forms, buildings, loose materials, surface and vegetation of the relief. Similarities are found between the silhouettes of sailing ships and urban buildings. It is revealed that in the architectural qualification the center of the sail and the center of the lateral resistance of the object can help assess the relative position of the elements of the landscape and the appearance of the building in order to achieve compositional integrity. Thus, a technique for assessing visual appearance based on a system of visual moments of sailness with respect to the object’s observation center has been developed. The influence of high-rise buildings on the conditions of the active surface for human stay is analyzed using domestic and foreign examples. Among the described objects are the high-rise buildings on Novy Arbat in Moscow, the sculpture “Motherland Calls!” in Volgograd, the Spinnaker Tower in Portsmouth (United Kingdom) and the Burj Al Arab Hotel in Dubai (United Arab Emirates). It is noted that to assess the compositional integrity of the observed landscapes by visual windage, photographs from the ground level and significant heights of window openings are used. It is proposed to use this to assess the existing types and panoramas, for which they need to capture a photo or video of planar or volumetric images, while performing editing to establish the adequacy of visual perception of a person in real conditions. In conclusion, the result of the study reveals the application of the method of assessing visual windage to

  16. Architecture humanitarian emergencies

    DEFF Research Database (Denmark)

    Gomez-Guillamon, Maria; Eskemose Andersen, Jørgen; Contreras, Jorge Lobos

    2013-01-01

    Introduced by scientific articles concerning architecture and human rights in light of cultures, emergencies, social equality and sustainability, democracy, economy, artistic development and science into architecture. Concluding in definition of needs for new roles, processes and education of arc......, Architettura di Alghero in Italy, Architecture and Design of Kocaeli University in Turkey, University of Aguascalientes in Mexico, Architectura y Urbanismo of University of Chile and Escuela de Architectura of Universidad Austral in Chile.

  17. The ATLAS Analysis Architecture

    International Nuclear Information System (INIS)

    Cranmer, K.S.

    2008-01-01

    We present an overview of the ATLAS analysis architecture including the relevant aspects of the computing model and the major architectural aspects of the Athena framework. Emphasis will be given to the interplay between the analysis use cases and the technical aspects of the architecture including the design of the event data model, transient-persistent separation, data reduction strategies, analysis tools, and ROOT interoperability

  18. Fabricating architectural volume

    DEFF Research Database (Denmark)

    Feringa, Jelle; Søndergaard, Asbjørn

    2015-01-01

    The 2011 edition of Fabricate inspired a number of collaborations, this article seeks to highlight three of these. There is a common thread amongst the projects presented: sharing the ambition to close the rift between design and fabrication while incorporating structural design aspects early on....... The development of fabrication techniques in the work presented is considered an inherent part of architectural design and shares the aspiration of developing approaches to manufacturing architecture that are scalable to architectural proportions1 and of practical relevance....

  19. Prison, Architecture and Humans

    OpenAIRE

    2018-01-01

    "What is prison architecture and how can it be studied? How are concepts such as humanism, dignity and solidarity translated into prison architecture? What kind of ideologies and ideas are expressed in various prison buildings from different eras and locations? What is the outside and the inside of a prison, and what is the significance of movement within the prison space? What does a lunch table have to do with prison architecture? How do prisoners experience materiality in serving a prison ...

  20. Architecture for Data Management

    OpenAIRE

    Vukolic, Marko

    2015-01-01

    In this document we present the preliminary architecture of the SUPERCLOUD data management and storage. We start by defining the design requirements of the architecture, motivated by use cases and then review the state-of-the-art. We survey security and dependability technologies and discuss designs for the overall unifying architecture for data management that serves as an umbrella for different security and dependability data management features. Specifically the document lays out the archi...

  1. Collaborative production indicators in information architecture

    Directory of Open Access Journals (Sweden)

    Zayr Claudio Gomes da Silva

    2017-04-01

    Full Text Available Information architecture is considered a strategic domain of collaborative production in Information Science. We describe the conditions of collaborative production in information architecture, considering it a sub-area of the study of Information Science. In order to do so, we specifically address indicators of scientific production that include topics of study, typology and authorship, postgraduate programs and areas to which it is linked, among others. This is exploratory and descriptive research. The scientific production of the National Meeting of Information Science Research (ENANCIB), from 2003 to 2013, is mapped in the "Network Matters" repository. Bibliometry is used to identify paratextual and textual elements that form evidence of collaborative production in information architecture. We verified the plurality in the academic formation of the researchers that approach information architecture, the sharing of languages, some indications of disciplinary convergences from collaboration in co-authorship, as well as a plexus of relations through indirect citations that represent the sharing of theoretical-methodological elements in interdisciplinary production. In addition, the academic training of the researchers with the highest productivity index is mainly related to Librarianship and Computer Science. Collaborative production in information architecture is presented as a multidisciplinary production process, constituting a convergent domain that allows the effectiveness of interdisciplinary practices in Information Science.

  2. Grid Architecture 2

    Energy Technology Data Exchange (ETDEWEB)

    Taft, Jeffrey D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-01-01

    The report describes work done on Grid Architecture under the auspices of the Department of Energy Office of Electricity Delivery and Energy Reliability in 2015. As described in the first Grid Architecture report, the primary purpose of this work is to provide stakeholder insight about grid issues so as to enable superior decision making on their part. Doing this requires the creation of various work products, including oft-times complex diagrams, analyses, and explanations. This report provides architectural insights into several important grid topics and also describes work done to advance the science of Grid Architecture as well.

  3. Product Architecture Modularity Strategies

    DEFF Research Database (Denmark)

    Mikkola, Juliana Hsuan

    2003-01-01

    The focus of this paper is to integrate various perspectives on product architecture modularity into a general framework, and also to propose a way to measure the degree of modularization embedded in product architectures. Various trade-offs between modular and integral product architectures...... and how components and interfaces influence the degree of modularization are considered. In order to gain a better understanding of product architecture modularity as a strategy, a theoretical framework and propositions are drawn from various academic literature sources. Based on the literature review...

  4. Exporting Humanist Architecture

    DEFF Research Database (Denmark)

    Nielsen, Tom

    2016-01-01

    values and ethical stands involved in the export of Danish Architecture. Abstract: Danish architecture has, in a sense, been driven by an unwritten contract between the architects and the democratic state and its institutions. This contract may be viewed as an ethos – an architectural tradition......The article is a chapter in the catalogue for the Danish exhibition at the 2016 Architecture Biennale in Venice. The catalogue is conceived as an independent book exploring the theme Art of Many - The Right to Space. The chapter is an essay in this anthology tracing and discussing the different...

  5. Decentralized Software Architecture

    National Research Council Canada - National Science Library

    Khare, Rohit

    2002-01-01

    .... While the term "decentralization" is familiar from political and economic contexts, it has been applied extensively, if indiscriminately, to describe recent trends in software architecture towards...

  6. MC 68020 μp architecture

    International Nuclear Information System (INIS)

    Casals, O.; Dejuan, E.; Labarta, J.

    1988-01-01

    The MC68020 is a 32-bit microprocessor, object-code compatible with the earlier MC68000 and MC68010. In this paper we describe its architecture and two coprocessors: the MC68851 paged memory management unit and the MC68882 floating point coprocessor. Among its most important characteristics we can point out: addressing mode extensions for enhanced support of high level languages, an on-chip instruction cache and full support of virtual memory. (Author)

  7. Security Tagged Architecture Co-Design (STACD)

    Science.gov (United States)

    2015-09-01

    the FPGA system. • CAD Infrastructure: Set up a standard-cell Application-Specific Integrated Circuit (ASIC) flow for the IBM 65nm process. Develop a ... funded a research project entitled “Trust-management, Intrusion-tolerance, Accountability, and Reconstitution Architecture” [15]. This project

  8. COYOTE : a finite element computer program for nonlinear heat conduction problems. Part I, theoretical background.

    Energy Technology Data Exchange (ETDEWEB)

    Glass, Micheal W.; Hogan, Roy E., Jr.; Gartling, David K.

    2010-03-01

    The need for the engineering analysis of systems in which the transport of thermal energy occurs primarily through a conduction process is a common situation. For all but the simplest geometries and boundary conditions, analytic solutions to heat conduction problems are unavailable, thus forcing the analyst to call upon some type of approximate numerical procedure. A wide variety of numerical packages currently exist for such applications, ranging in sophistication from the large, general purpose, commercial codes, such as COMSOL, COSMOSWorks, ABAQUS and TSS to codes written by individuals for specific problem applications. The original purpose for developing the finite element code described here, COYOTE, was to bridge the gap between the complex commercial codes and the more simplistic, individual application programs. COYOTE was designed to treat most of the standard conduction problems of interest with a user-oriented input structure and format that was easily learned and remembered. Because of its architecture, the code has also proved useful for research in numerical algorithms and development of thermal analysis capabilities. This general philosophy has been retained in the current version of the program, COYOTE, Version 5.0, though the capabilities of the code have been significantly expanded. A major change in the code is its availability on parallel computer architectures and the increase in problem complexity and size that this implies. The present document describes the theoretical and numerical background for the COYOTE program. This volume is intended as a background document for the user's manual. Potential users of COYOTE are encouraged to become familiar with the present report and the simple example analyses reported in before using the program. The theoretical and numerical background for the finite element computer program, COYOTE, is presented in detail. COYOTE is designed for the multi-dimensional analysis of nonlinear heat conduction
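
    To give a feel for the class of problems COYOTE targets, the sketch below solves a one-dimensional steady nonlinear heat-conduction problem with a temperature-dependent conductivity by Picard (successive substitution) iteration on a finite-difference grid. It is a didactic analogue with invented material properties, not COYOTE's finite element formulation.

```python
import numpy as np

# 1-D steady nonlinear conduction: d/dx( k(T) dT/dx ) = 0 on [0, 1],
# with T(0)=300 K, T(1)=400 K and k(T) = k0 * (1 + beta*(T - 300)).
# A didactic finite-difference/Picard analogue of what a nonlinear
# conduction code such as COYOTE solves with finite elements.
n = 51
x = np.linspace(0.0, 1.0, n)
T = np.linspace(300.0, 400.0, n)          # initial guess
k0, beta = 10.0, 2e-3                     # invented material properties

for _ in range(50):                        # Picard (successive substitution) loop
    k = k0 * (1.0 + beta * (T - 300.0))
    k_face = 0.5 * (k[:-1] + k[1:])        # conductivity at cell faces
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0              # Dirichlet boundary conditions
    b[0], b[-1] = 300.0, 400.0
    for i in range(1, n - 1):
        A[i, i - 1] = k_face[i - 1]
        A[i, i + 1] = k_face[i]
        A[i, i] = -(k_face[i - 1] + k_face[i])
    T_new = np.linalg.solve(A, b)
    if np.max(np.abs(T_new - T)) < 1e-8:
        T = T_new
        break
    T = T_new

print(f"temperature at x = 0.5: {T[n // 2]:.2f} K")
```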

  9. COYOTE: a finite element computer program for nonlinear heat conduction problems. Part I, theoretical background

    International Nuclear Information System (INIS)

    Glass, Micheal W.; Hogan, Roy E. Jr.; Gartling, David K.

    2010-01-01

    The need for the engineering analysis of systems in which the transport of thermal energy occurs primarily through a conduction process is a common situation. For all but the simplest geometries and boundary conditions, analytic solutions to heat conduction problems are unavailable, thus forcing the analyst to call upon some type of approximate numerical procedure. A wide variety of numerical packages currently exist for such applications, ranging in sophistication from the large, general purpose, commercial codes, such as COMSOL, COSMOSWorks, ABAQUS and TSS to codes written by individuals for specific problem applications. The original purpose for developing the finite element code described here, COYOTE, was to bridge the gap between the complex commercial codes and the more simplistic, individual application programs. COYOTE was designed to treat most of the standard conduction problems of interest with a user-oriented input structure and format that was easily learned and remembered. Because of its architecture, the code has also proved useful for research in numerical algorithms and development of thermal analysis capabilities. This general philosophy has been retained in the current version of the program, COYOTE, Version 5.0, though the capabilities of the code have been significantly expanded. A major change in the code is its availability on parallel computer architectures and the increase in problem complexity and size that this implies. The present document describes the theoretical and numerical background for the COYOTE program. This volume is intended as a background document for the user's manual. Potential users of COYOTE are encouraged to become familiar with the present report and the simple example analyses reported in before using the program. The theoretical and numerical background for the finite element computer program, COYOTE, is presented in detail. COYOTE is designed for the multi-dimensional analysis of nonlinear heat conduction problems

  10. ANSYS duplicate finite-element checker routine

    Science.gov (United States)

    Ortega, R.

    1995-01-01

    An ANSYS finite-element code routine to check for duplicated elements within the volume of a three-dimensional (3D) finite-element mesh was developed. The routine developed is used for checking floating elements within a mesh, identically duplicated elements, and intersecting elements with a common face. A space shuttle main engine alternate turbopump development high pressure oxidizer turbopump finite-element model check using the developed subroutine is discussed. Finally, recommendations are provided for duplicate element checking of 3D finite-element models.
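
    The essential check the routine performs, flagging elements that reference the same set of nodes, can be expressed compactly. The hedged sketch below groups elements by their sorted node connectivity, which catches identically duplicated elements, though not the intersecting-element and common-face cases the ANSYS routine also handles; the connectivities are invented.

```python
from collections import defaultdict

# Toy duplicate-element check: two elements are duplicates if they reference
# the same node set. Connectivities below are invented for illustration.
elements = {
    1: (10, 11, 12, 13),
    2: (14, 15, 16, 17),
    3: (13, 12, 11, 10),   # same nodes as element 1, different ordering
}

by_nodes = defaultdict(list)
for elem_id, nodes in elements.items():
    by_nodes[tuple(sorted(nodes))].append(elem_id)

duplicates = [ids for ids in by_nodes.values() if len(ids) > 1]
print(duplicates)          # -> [[1, 3]]
```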

  11. Fuel performance analysis code 'FAIR'

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1994-01-01

    For modelling the fuel rod behaviour of water cooled nuclear reactors under severe power maneuvering and high burnups, a mechanistic fuel performance analysis code, FAIR, has been developed. The code incorporates a finite element based thermomechanical module, a physically based fission gas release module and relevant models for fuel related phenomena, such as pellet cracking, densification and swelling, radial flux redistribution across the pellet due to the build up of plutonium near the pellet surface, and pellet clad mechanical interaction/stress corrosion cracking (PCMI/SCC) failure of the sheath. The code follows the established principles of fuel rod analysis programmes, such as coupling of the thermal and mechanical solutions along with the fission gas release calculations, analysing different axial segments of the fuel rod simultaneously, and providing means for performing local analyses such as clad ridging analysis. The modular nature of the code offers flexibility in easily effecting modifications to the code for modelling MOX fuels and thorium based fuels. For performing analysis of fuel rods subjected to very long power histories within a reasonable amount of time, the code has been parallelised and commissioned on the ANUPAM parallel processing system developed at Bhabha Atomic Research Centre (BARC). (author). 37 refs

  12. HPMV: human protein mutation viewer - relating sequence mutations to protein sequence architecture and function changes.

    Science.gov (United States)

    Sherman, Westley Arthur; Kuchibhatla, Durga Bhavani; Limviphuvadh, Vachiranee; Maurer-Stroh, Sebastian; Eisenhaber, Birgit; Eisenhaber, Frank

    2015-10-01

    Next-generation sequencing advances are rapidly expanding the number of human mutations to be analyzed for causative roles in genetic disorders. Our Human Protein Mutation Viewer (HPMV) is intended to explore the biomolecular mechanistic significance of non-synonymous human mutations in protein-coding genomic regions. The tool helps to assess whether protein mutations affect the occurrence of sequence-architectural features (globular domains, targeting signals, post-translational modification sites, etc.). As input, HPMV accepts protein mutations - as UniProt accessions with mutations (e.g. HGVS nomenclature), genome coordinates, or FASTA sequences. As output, HPMV provides an interactive cartoon showing the mutations in relation to elements of the sequence architecture. A large variety of protein sequence architectural features were selected for their particular relevance to mutation interpretation. Clicking a sequence feature in the cartoon expands a tree view of additional information including multiple sequence alignments of conserved domains and a simple 3D viewer mapping the mutation to known PDB structures, if available. The cartoon is also correlated with a multiple sequence alignment of similar sequences from other organisms. In cases where a mutation is likely to have a straightforward interpretation (e.g. a point mutation disrupting a well-understood targeting signal), this interpretation is suggested. The interactive cartoon can be downloaded as standalone viewer in Java jar format to be saved and viewed later with only a standard Java runtime environment. The HPMV website is: http://hpmv.bii.a-star.edu.sg/ .
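
    The core question HPMV answers for each variant, namely whether the mutated residue falls inside an annotated sequence feature, reduces to an interval check. The sketch below shows that idea with invented feature coordinates and mutation positions; it is not HPMV's pipeline or data.

```python
# Toy feature-overlap check in the spirit of HPMV: report which annotated
# sequence features (domains, signals, PTM sites) contain a mutated residue.
# Feature names and coordinates are invented for illustration.
features = [
    ("signal peptide", 1, 22),
    ("kinase domain", 60, 320),
    ("phosphosite", 215, 215),
]

def features_hit(position: int):
    return [name for name, start, end in features if start <= position <= end]

for mutation, pos in [("p.Ser215Ala", 215), ("p.Gly400Asp", 400)]:
    hits = features_hit(pos)
    print(mutation, "->", hits if hits else "no annotated feature at this position")
```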

  13. Model Children's Code.

    Science.gov (United States)

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  14. Affine Grassmann codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Beelen, Peter; Ghorpade, Sudhir Ramakant

    2010-01-01

    We consider a new class of linear codes, called affine Grassmann codes. These can be viewed as a variant of generalized Reed-Muller codes and are closely related to Grassmann codes.We determine the length, dimension, and the minimum distance of any affine Grassmann code. Moreover, we show that af...

  15. Enterprise Architecture in the Company Management Framework

    Directory of Open Access Journals (Sweden)

    Bojinov Bojidar Violinov

    2016-11-01

    Full Text Available The study aims to explore the role and importance of the concept of enterprise architecture in modern company management. For this purpose it clarifies the nature, scope and components of enterprise architecture and the relationships within it using the Zachman model. Based on a critical analysis of works by leading scientists, a definition of enterprise architecture is presented as a general description of all elements of strategic management of the company combined with a description of its organizational, functional and operational structure, including the relationship between all tangible and intangible resources essential for its normal functioning and development. This in turn enables IT enterprise architecture to be defined as a set of corporate IT resources (hardware, software and technology), their interconnection and integration within the overall architecture of the company, as well as their formal description, methods and tools for their modeling and management in order to achieve the strategic business goals of the organization. In conclusion the article summarizes the significance and role of enterprise architecture for strategic management of the company in today’s digital economy. The study underlines the importance of an integrated multidisciplinary approach to the work of a contemporary company, and the need for adequate matching and alignment of IT with the business priorities and objectives of the company.

  16. Characterization of the MCNPX computer code in micro processed architectures

    International Nuclear Information System (INIS)

    Almeida, Helder C.; Dominguez, Dany S.; Orellana, Esbel T.V.; Milian, Felix M.

    2009-01-01

    The MCNPX (Monte Carlo N-Particle eXtended) code can be used to simulate the transport of several types of nuclear particles, using probabilistic methods. The technique used by MCNPX is to follow the history of each particle from its origin to its extinction, which can occur by absorption, escape or other processes. To obtain accurate results in simulations performed with MCNPX it is necessary to process a large number of histories, which demands a high computational cost. Currently MCNPX can be installed on virtually all available computing platforms; however, there is virtually no information on the performance of the application on each. This paper studies the performance of MCNPX, working with electrons and photons in the phantom Faux, on the two platforms used by most researchers, Windows and Linux. Both platforms were tested on the same computer to ensure the reliability of the hardware in the performance measurements. The performance of MCNPX was measured by the time taken to run a simulation, making time the main measure of comparison. During the tests the difference in MCNPX performance between the two platforms was evident. In some cases we were able to gain more than 10% in speed merely by changing platforms, without any specific optimization. This shows the relevance of the study to optimize this tool on the platform most appropriate for its use. (author)

  17. Architectural Theory and Graphical Criteria for Modelling Certain Late Gothic Projects by Hernan Ruiz "the Elder"

    Directory of Open Access Journals (Sweden)

    Antonio Luis Ampliato Briones

    2014-10-01

    Full Text Available This paper primarily reflects on the need to create graphical codes for producing images intended to communicate architecture. Each step of the drawing needs to be a deliberate process in which the proposed code highlights the relationship between architectural theory and graphic action. Our aim is not to draw the result of the architectural process but the design structure of the actual process; to draw as we design; to draw as we build. This analysis of the work of the Late Gothic architect Hernan Ruiz the Elder, from Cordoba, addresses two aspects: the historical and architectural investigation, and the graphical project for communication purposes.

  18. JAVA Implementation of the Batched iLab Shared Architecture

    Directory of Open Access Journals (Sweden)

    Lenard Payne

    2013-04-01

    Full Text Available The MIT iLab Shared Architecture is limited currently to running on the Microsoft Windows platform. A JAVA implementation of the Batched iLab Shared Architecture has been developed that can be used on other operating systems and still interoperate with the existing Microsoft .NET web services of MIT’s iLab ServiceBroker. The Batched iLab Shared Architecture has been revised and separates the Labserver into a LabServer that handles experiment management and a LabEquipment that handles experiment execution. The JAVA implementation provides a 3-tier code development model that allows code to be reused and to develop only the code that is specific to each experiment.

  19. A New HLA-Based Distributed Control Architecture for Agricultural Teams of Robots in Hybrid Applications with Real and Simulated Devices or Environments

    Directory of Open Access Journals (Sweden)

    Rafael J. Martínez

    2011-04-01

    Full Text Available The control architecture is one of the most important parts of agricultural robotics and other robotic systems. Furthermore, its importance increases when the system involves a group of heterogeneous robots that should cooperate to achieve a global goal. A new control architecture is introduced in this paper for groups of robots in charge of doing maintenance tasks in agricultural environments. Some important features such as scalability, code reuse, hardware abstraction and data distribution have been considered in the design of the new architecture. Furthermore, coordination and cooperation among the different elements in the system is allowed in the proposed control system. By integrating the network-oriented device server Player, the Java Agent Development Framework (JADE) and the High Level Architecture (HLA), the previous concepts have been considered in the new architecture presented in this paper. HLA can be considered the most important part because it not only allows data distribution and implicit communication among the parts of the system but also allows simultaneous operation with simulated and real entities, thus allowing the use of hybrid systems in the development of applications.

  20. A new HLA-based distributed control architecture for agricultural teams of robots in hybrid applications with real and simulated devices or environments.

    Science.gov (United States)

    Nebot, Patricio; Torres-Sospedra, Joaquín; Martínez, Rafael J

    2011-01-01

    The control architecture is one of the most important parts of agricultural robotics and other robotic systems. Furthermore, its importance increases when the system involves a group of heterogeneous robots that should cooperate to achieve a global goal. A new control architecture is introduced in this paper for groups of robots in charge of doing maintenance tasks in agricultural environments. Some important features such as scalability, code reuse, hardware abstraction and data distribution have been considered in the design of the new architecture. Furthermore, coordination and cooperation among the different elements in the system is allowed in the proposed control system. By integrating the network-oriented device server Player, the Java Agent Development Framework (JADE) and the High Level Architecture (HLA), the previous concepts have been considered in the new architecture presented in this paper. HLA can be considered the most important part because it not only allows data distribution and implicit communication among the parts of the system but also allows simultaneous operation with simulated and real entities, thus allowing the use of hybrid systems in the development of applications.