WorldWideScience

Sample records for element code architecture

  1. Elements of Architecture

    DEFF Research Database (Denmark)

    Elements of Architecture explores new ways of engaging architecture in archaeology. It conceives of architecture both as the physical evidence of past societies and as existing beyond the physical environment, considering how people in the past have not just dwelled in buildings but have existed...

  2. Elements of algebraic coding systems

    CERN Document Server

    Cardoso da Rocha, Jr, Valdemar

    2014-01-01

    Elements of Algebraic Coding Systems is an introductory text to algebraic coding theory. In the first chapter, you'll gain inside knowledge of coding fundamentals, which is essential for a deeper understanding of state-of-the-art coding systems. This book is a quick reference for those who are unfamiliar with this topic, as well as for use with specific applications such as cryptography and communication. Linear error-correcting block codes through elementary principles span eleven chapters of the text. Cyclic codes, some finite field algebra, Goppa codes, algebraic decoding algorithms, and applications in public-key cryptography and secret-key cryptography are discussed, including problems and solutions at the end of each chapter. Three appendices cover the Gilbert bound and some related derivations, a derivation of the MacWilliams identities based on the probability of undetected error, and two important tools for algebraic decoding, namely the finite field Fourier transform and the Euclidean algorithm f...
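    The linear block codes treated in the early chapters can be illustrated with the classic Hamming(7,4) code. The sketch below is a generic textbook construction (not code from the book): it encodes four data bits into seven and corrects any single-bit error by syndrome decoding.

```python
# Minimal Hamming(7,4) linear block code over GF(2): encodes 4 data bits
# into 7 bits and corrects any single-bit error via syndrome decoding.

# Generator matrix G (4x7) and parity-check matrix H (3x7), systematic form.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def encode(data):
    """Codeword c = data * G (mod 2)."""
    return [sum(d * g for d, g in zip(data, col)) % 2 for col in zip(*G)]

def decode(received):
    """Compute syndrome s = H * r^T; a nonzero syndrome equals the column
    of H at the erroneous position, so that bit is flipped."""
    s = [sum(h * r for h, r in zip(row, received)) % 2 for row in H]
    if any(s):
        for i in range(7):
            if [row[i] for row in H] == s:
                received = received[:]
                received[i] ^= 1
                break
    return received[:4]  # systematic code: first 4 bits are the data
```

    Flipping any one of the seven bits produces a syndrome matching the corresponding column of H, which is the elementary principle behind the algebraic decoders the book develops.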

  3. Neural Elements for Predictive Coding

    Directory of Open Access Journals (Sweden)

    Stewart SHIPP

    2016-11-01

    Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backwards in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many ‘illusory’ instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forwards and backwards pathways should be completely separate, given their functional distinction; this aspect of circuitry – that neurons with extrinsically bifurcating axons do not project in both directions – has only recently been confirmed. Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic ‘canonical microcircuit’ and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made

  4. Neural Elements for Predictive Coding.

    Science.gov (United States)

    Shipp, Stewart

    2016-01-01

    Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backward in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many 'illusory' instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forward and backward pathways should be completely separate, given their functional distinction; this aspect of circuitry - that neurons with extrinsically bifurcating axons do not project in both directions - has only recently been confirmed. 
Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic 'canonical microcircuit' and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made possible by transgenic neural
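    The prediction/prediction-error exchange described in the abstract can be caricatured in a few lines. This single-level toy (an illustration of the general idea, not the paper's free-energy model) iteratively updates a latent estimate until its top-down prediction explains the sensory input:

```python
# Toy one-level predictive coding loop: a latent estimate mu generates a
# top-down prediction weight * mu; the bottom-up prediction error
# (input - prediction) is fed back to update mu, shrinking the error
# over a series of iterations.

def predictive_coding(sensory_input, weight=2.0, lr=0.1, steps=100):
    mu = 0.0  # initial belief about the hidden cause
    error = sensory_input
    for _ in range(steps):
        prediction = weight * mu             # backward (top-down) prediction
        error = sensory_input - prediction   # forward prediction error
        mu += lr * weight * error            # gradient step on squared error
    return mu, error
```

    Each iteration sends a prediction "down" and an error "up"; the hierarchical version stacks such loops, with each level predicting the representation at the level below.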

  5. A unified architecture of transcriptional regulatory elements

    DEFF Research Database (Denmark)

    Andersson, Robin; Sandelin, Albin Gustav; Danko, Charles G.

    2015-01-01

    Gene expression is precisely controlled in time and space through the integration of signals that act at gene promoters and gene-distal enhancers. Classically, promoters and enhancers are considered separate classes of regulatory elements, often distinguished by histone modifications. However...... and enhancers are considered a single class of functional element, with a unified architecture for transcription initiation. The context of interacting regulatory elements and the surrounding sequences determine local transcriptional output as well as the enhancer and promoter activities of individual elements....

  6. Research and Design in Unified Coding Architecture for Smart Grids

    Directory of Open Access Journals (Sweden)

    Gang Han

    2013-09-01

    A standardized and shared information platform is the foundation of the Smart Grid. In order to improve information integration at the power grid dispatching center and achieve efficient data exchange, sharing, and interoperability, a unified coding architecture is proposed. The architecture comprises a coding management layer, a coding generation layer, an information model layer, and an application system layer. The hierarchical design allows the coding architecture to adapt to different application environments, different interfaces, and loose-coupling requirements, and realizes the integrated model management function of the power grid. A life cycle and survivability evaluation method for the unified coding architecture is also proposed, ensuring the stability and availability of the coding architecture. Finally, future directions for coding technology in the Smart Grid are discussed.

  7. Neural codes of seeing architectural styles.

    Science.gov (United States)

    Choo, Heeyoung; Nasar, Jack L; Nikrahei, Bardia; Walther, Dirk B

    2017-01-10

    Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people's visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding suggests that the neural correlates of the visual perception of architectural styles stem from style-specific complex visual structure beyond the simple features computed in V1. Surprisingly, the network of brain regions representing architectural styles included the fusiform face area (FFA) in addition to several scene-selective regions. Hierarchical clustering of error patterns further revealed that the FFA participated to a much larger extent in the neural encoding of architectural styles than entry-level scene categories. We conclude that the FFA is involved in fine-grained neural encoding of scenes at a subordinate-level, in our case, architectural styles of buildings. This study for the first time shows how the human visual system encodes visual aspects of architecture, one of the predominant and longest-lasting artefacts of human culture.

  8. High Efficiency EBCOT with Parallel Coding Architecture for JPEG2000

    Directory of Open Access Journals (Sweden)

    Chiang Jen-Shiun

    2006-01-01

    This work presents a parallel context-modeling coding architecture and a matching arithmetic coder (MQ-coder) for the embedded block coding (EBCOT) unit of the JPEG2000 encoder. Tier-1 of the EBCOT consumes most of the computation time in a JPEG2000 encoding system. The proposed parallel architecture can increase the throughput rate of the context modeling. To match the high throughput rate of the parallel context-modeling architecture, an efficient pipelined architecture for the context-based adaptive arithmetic encoder is proposed. This encoder of JPEG2000 can work at 180 MHz to encode one symbol each cycle. Compared with previous context-modeling architectures, our parallel architectures can improve the throughput rate by up to 25%.

  9. Neural codes of seeing architectural styles

    OpenAIRE

    Choo, Heeyoung; Nasar, Jack L.; Nikrahei, Bardia; Walther, Dirk B.

    2017-01-01

    Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people's visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding sugges...

  10. Novel power saving architecture for FBG based OCDMA code generation

    Science.gov (United States)

    Osadola, Tolulope B.; Idris, Siti K.; Glesk, Ivan

    2013-10-01

    A novel architecture for generating incoherent, two-dimensional wavelength hopping-time spreading optical CDMA codes is presented. The architecture is designed to facilitate the reuse of the optical source signal that is left unused after an OCDMA code has been generated using fiber Bragg grating based encoders. Effective utilization of the available optical power is therefore achieved by cascading several OCDMA encoders, thereby enabling 3 dB savings in optical power.

  11. Reversible machine code and its abstract processor architecture

    DEFF Research Database (Denmark)

    Axelsen, Holger Bock; Glück, Robert; Yokoyama, Tetsuo

    2007-01-01

    A reversible abstract machine architecture and its reversible machine code are presented and formalized. For machine code to be reversible, both the underlying control logic and each instruction must be reversible. A general class of machine instruction sets was proven to be reversible, building...
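    The requirement that both the control logic and every instruction be invertible can be sketched with a toy instruction set in which each operation has an exact inverse (illustrative only; not the formalized architecture of the paper):

```python
# Sketch of a reversible instruction set: every instruction has an exact
# inverse, so running a program backward recovers the initial state.
# ADD/SUB are mutual inverses; SWAP is its own inverse. Operands of
# ADD/SUB must be distinct registers: ADD r0, r0 would double r0 and
# destroy information, breaking reversibility.

def step(regs, instr, reverse=False):
    op, a, b = instr
    if reverse:  # replace each operation by its inverse
        op = {"ADD": "SUB", "SUB": "ADD", "SWAP": "SWAP"}[op]
    regs = dict(regs)
    if op == "ADD":
        regs[a] += regs[b]
    elif op == "SUB":
        regs[a] -= regs[b]
    elif op == "SWAP":
        regs[a], regs[b] = regs[b], regs[a]
    return regs

def run(regs, program, reverse=False):
    # Reverse execution also inverts the control flow: the instruction
    # sequence is traversed backward.
    seq = reversed(program) if reverse else program
    for instr in seq:
        regs = step(regs, instr, reverse)
    return regs
```

    Running a program forward and then backward returns the machine to its starting state, which is the essential property a reversible machine code must preserve at every instruction.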

  12. Particle In Cell Codes on Highly Parallel Architectures

    Science.gov (United States)

    Tableman, Adam

    2014-10-01

    We describe strategies and examples of Particle-In-Cell Codes running on Nvidia GPU and Intel Phi architectures. This includes basic implementations in skeletons codes and full-scale development versions (encompassing 1D, 2D, and 3D codes) in Osiris. Both the similarities and differences between Intel's and Nvidia's hardware will be examined. Work supported by grants NSF ACI 1339893, DOE DE SC 000849, DOE DE SC 0008316, DOE DE NA 0001833, and DOE DE FC02 04ER 54780.

  13. Requirements for a multifunctional code architecture

    Energy Technology Data Exchange (ETDEWEB)

    Tiihonen, O. [VTT Energy (Finland); Juslin, K. [VTT Automation (Finland)

    1997-07-01

    The present paper studies a set of requirements for a multifunctional simulation software architecture in the light of experiences gained in developing and using the APROS simulation environment. The huge steps taken in the development of computer hardware and software during the last ten years are changing the status of traditional nuclear safety analysis software. The affordable computing power on the safety analyst's desktop by far exceeds the possibilities offered to him/her ten years ago. At the same time, the features of everyday office software tend to set standards for the way input data and calculational results are managed.

  14. Requirements for a multifunctional code architecture

    International Nuclear Information System (INIS)

    Tiihonen, O.; Juslin, K.

    1997-01-01

    The present paper studies a set of requirements for a multifunctional simulation software architecture in the light of experiences gained in developing and using the APROS simulation environment. The huge steps taken in the development of computer hardware and software during the last ten years are changing the status of traditional nuclear safety analysis software. The affordable computing power on the safety analyst's desktop by far exceeds the possibilities offered to him/her ten years ago. At the same time, the features of everyday office software tend to set standards for the way input data and calculational results are managed.

  15. TACO: a finite element heat transfer code

    International Nuclear Information System (INIS)

    Mason, W.E. Jr.

    1980-02-01

    TACO is a two-dimensional implicit finite element code for heat transfer analysis. It can perform both linear and nonlinear analyses and can be used to solve either transient or steady state problems. Either plane or axisymmetric geometries can be analyzed. TACO has the capability to handle time- or temperature-dependent material properties, and materials may be either isotropic or orthotropic. A variety of time- and temperature-dependent loadings and boundary conditions are available, including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additionally, TACO has some specialized features such as internal surface conditions (e.g., contact resistance), bulk nodes, enclosure radiation with view factor calculations, and chemical reaction kinetics. A user subprogram feature allows for any type of functional representation of any independent variable. A bandwidth and profile minimization option is also available in the code. Graphical representation of data generated by TACO is provided by a companion post-processor named POSTACO. The theory on which TACO is based is outlined, the capabilities of the code are explained, and the input data required to perform an analysis with TACO are described. Some simple examples are provided to illustrate the use of the code.
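    The kind of problem such a code solves can be miniaturized to one dimension: steady conduction -k u'' = q with fixed-temperature ends, discretized with linear elements into the usual tridiagonal system. The sketch below is a generic textbook construction, far simpler than TACO itself:

```python
# Minimal 1D steady-state heat conduction solver with linear finite
# elements: -k u'' = q on (0, L), u(0) = u(L) = 0. Assembles the standard
# tridiagonal stiffness system and solves it with the Thomas algorithm.

def fe_heat_1d(k, q, L, n):
    h = L / n                      # uniform element size
    m = n - 1                      # number of interior (unknown) nodes
    a = [-k / h] * m               # sub-diagonal of the stiffness matrix
    b = [2 * k / h] * m            # main diagonal
    c = [-k / h] * m               # super-diagonal
    d = [q * h] * m                # consistent load vector (constant source)
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, m):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * m
    u[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return [0.0] + u + [0.0]       # prepend/append the fixed boundary nodes
```

    For constant conductivity and source, the nodal values of the linear-element solution coincide with the exact parabolic profile u(x) = q x (L - x) / (2k), a handy check for any heat transfer solver.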

  16. Elements of neurogeometry functional architectures of vision

    CERN Document Server

    Petitot, Jean

    2017-01-01

    This book describes several mathematical models of the primary visual cortex, referring them to a vast ensemble of experimental data and putting forward an original geometrical model for its functional architecture, that is, the highly specific organization of its neural connections. The book spells out the geometrical algorithms implemented by this functional architecture, or put another way, the “neurogeometry” immanent in visual perception. Focusing on the neural origins of our spatial representations, it demonstrates three things: firstly, the way the visual neurons filter the optical signal is closely related to a wavelet analysis; secondly, the contact structure of the 1-jets of the curves in the plane (the retinal plane here) is implemented by the cortical functional architecture; and lastly, the visual algorithms for integrating contours from what may be rather incomplete sensory data can be modelled by the sub-Riemannian geometry associated with this contact structure. As such, it provides rea...

  17. NASA Lewis Steady-State Heat Pipe Code Architecture

    Science.gov (United States)

    Mi, Ye; Tower, Leonard K.

    2013-01-01

    NASA Glenn Research Center (GRC) has developed the LERCHP code. The PC-based LERCHP code can be used to predict the steady-state performance of heat pipes, including the determination of the operating temperature and the operating limits which might be encountered under specified conditions. The code contains a vapor flow algorithm which incorporates vapor compressibility and axially varying heat input. For the liquid flow in the wick, Darcy's formula is employed. Thermal boundary conditions and geometric structures can be defined through an interactive input interface. A variety of fluid and material options as well as user-defined options can be chosen for the working fluid, wick, and pipe materials. This report documents the current effort at GRC to update the LERCHP code for operating in a Microsoft Windows (Microsoft Corporation) environment. A detailed analysis of the model is presented. The programming architecture for the numerical calculations is explained and flowcharts of the key subroutines are given.

  18. Error Resilience in Current Distributed Video Coding Architectures

    Directory of Open Access Journals (Sweden)

    Tonoli Claudia

    2009-01-01

    In distributed video coding, the signal prediction is shifted to the decoder side, therefore placing most of the computational burden on the receiver. Moreover, since no prediction loop exists before transmission, an intrinsic robustness to transmission errors has been claimed. This work evaluates and compares the error resilience performance of two distributed video coding architectures. In particular, we have considered a video codec based on the Stanford architecture (DISCOVER codec) and a video codec based on the PRISM architecture. Specifically, an accurate temporal and rate/distortion based evaluation of the effects of transmission errors for both the considered DVC architectures has been performed and discussed. These approaches have also been compared with H.264/AVC, in both cases of no error protection and simple FEC error protection. Our evaluations have highlighted in all cases a strong dependence of the behavior of the various codecs on the content of the considered video sequence. In particular, PRISM seems to be particularly well suited for low-motion sequences, whereas DISCOVER provides better performance in the other cases.

  19. Building code challenging the ethics behind adobe architecture in North Cyprus.

    Science.gov (United States)

    Hurol, Yonca; Yüceer, Hülya; Şahali, Öznem

    2015-04-01

    Adobe masonry is part of the vernacular architecture of Cyprus. Thus, it is possible to use this technology in a meaningful way on the island. On the other hand, although adobe architecture is more sustainable in comparison to other building technologies, the use of it is diminishing in North Cyprus. The application of Turkish building code in the north of the island has created complications in respect of the use of adobe masonry, because this building code demands that reinforced concrete vertical tie-beams are used together with adobe masonry. The use of reinforced concrete elements together with adobe masonry causes problems in relation to the climatic response of the building as well as causing other technical and aesthetic problems. This situation makes the design of adobe masonry complicated and various types of ethical problems also emerge. The objective of this article is to analyse the ethical problems which arise as a consequence of the restrictive character of the building code, by analysing two case studies and conducting an interview with an architect who was involved with the use of adobe masonry in North Cyprus. According to the results of this article there are ethical problems at various levels in the design of both case studies. These problems are connected to the responsibilities of architects in respect of the social benefit, material production, aesthetics and affordability of the architecture as well as presenting distrustful behaviour where the obligations of architects to their clients is concerned.

  20. Implementation of collisions on GPU architecture in the Vorpal code

    Science.gov (United States)

    Leddy, Jarrod; Averkin, Sergey; Cowan, Ben; Sides, Scott; Werner, Greg; Cary, John

    2017-10-01

    The Vorpal code contains a variety of collision operators allowing for the simulation of plasmas containing multiple charge species interacting with neutrals, background gas, and EM fields. These existing algorithms have been improved and reimplemented to take advantage of the massive parallelization allowed by GPU architecture. The use of GPUs is most effective when algorithms are single-instruction multiple-data, so particle collisions are an ideal candidate for this parallelization technique due to their nature as a series of independent processes with the same underlying operation. This refactoring required data memory reorganization and careful consideration of device/host data allocation to minimize memory access and data communication per operation. Successful implementation has resulted in an order of magnitude increase in simulation speed for a test-case involving multiple binary collisions using the null collision method. Work supported by DARPA under contract W31P4Q-16-C-0009.
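    The null collision method mentioned above pads the actual collision frequency up to a constant majorant rate, so every particle performs the same fixed-work sampling step; candidates are then accepted as real collisions with probability nu/nu_max. A minimal single-particle sketch (illustrative only, not the Vorpal implementation):

```python
import math
import random

# Null-collision method sketch: candidate events are drawn at a constant
# majorant rate nu_max >= nu, then accepted as real collisions with
# probability nu / nu_max; rejections are "null" collisions that leave
# the particle unchanged. Because every particle does identical work,
# the scheme maps naturally onto single-instruction-multiple-data hardware.

def next_event(nu, nu_max, rng):
    # Exponential inter-event time at the majorant rate; 1 - random() is
    # in (0, 1], avoiding log(0).
    dt = -math.log(1.0 - rng.random()) / nu_max
    is_real = rng.random() < nu / nu_max   # thin back to the true rate nu
    return dt, is_real
```

    Over many samples, the fraction of real collisions approaches nu/nu_max and the mean waiting time approaches 1/nu_max, which is how the constant-rate padding preserves the correct collision statistics.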

  1. FINELM: a multigroup finite element diffusion code

    International Nuclear Information System (INIS)

    Higgs, C.E.; Davierwalla, D.M.

    1981-06-01

    FINELM is a FORTRAN IV program to solve the Neutron Diffusion Equation in X-Y, R-Z, R-theta, X-Y-Z and R-theta-Z geometries using the method of Finite Elements. Lagrangian elements of linear or higher degree to approximate the spatial flux distribution have been provided. The method of dissections, coarse mesh rebalancing and Chebyshev acceleration techniques are available. Simple user-defined input is achieved through extensive input subroutines. The input preparation is described, followed by a description of the program structure. Sample test cases are provided. (Auth.)

  2. A finite element code for electric motor design

    Science.gov (United States)

    Campbell, C. Warren

    1994-01-01

    FEMOT is a finite element program for solving the nonlinear magnetostatic problem. This version uses nonlinear, Newton first order elements. The code can be used for electric motor design and analysis. FEMOT can be embedded within an optimization code that will vary nodal coordinates to optimize the motor design. The output from FEMOT can be used to determine motor back EMF, torque, cogging, and magnet saturation. It will run on a PC and will be available to anyone who wants to use it.

  3. FINELM: a multigroup finite element diffusion code. Part II

    International Nuclear Information System (INIS)

    Davierwalla, D.M.

    1981-05-01

    The author presents the axisymmetric case in cylindrical coordinates for the finite element multigroup neutron diffusion code, FINELM. The numerical acceleration schemes incorporated viz. the Lebedev extrapolations and the coarse mesh rebalancing, space collapsing, are discussed. A few benchmark computations are presented as validation of the code. (Auth.)

  4. Periodic Boundary Conditions in the ALEGRA Finite Element Code

    International Nuclear Information System (INIS)

    Aidun, John B.; Robinson, Allen C.; Weatherby, Joe R.

    1999-01-01

    This document describes the implementation of periodic boundary conditions in the ALEGRA finite element code. ALEGRA is an arbitrary Lagrangian-Eulerian multi-physics code with both explicit and implicit numerical algorithms. The periodic boundary implementation requires a consistent set of boundary input sets which are used to describe virtual periodic regions. The implementation is noninvasive to the majority of the ALEGRA coding and is based on the distributed memory parallel framework in ALEGRA. The technique involves extending the ghost element concept for interprocessor boundary communications in ALEGRA to additionally support on- and off-processor periodic boundary communications. The user interface, algorithmic details and sample computations are given

  5. VLSI Architectures for Sliding-Window-Based Space-Time Turbo Trellis Code Decoders

    Directory of Open Access Journals (Sweden)

    Georgios Passas

    2012-01-01

    The VLSI implementation of SISO-MAP decoders used for traditional iterative turbo coding has been investigated in the literature. In this paper, a complete architectural model of a space-time turbo code receiver that includes elementary decoders is presented. These architectures are based on newly proposed building blocks such as a recursive add-compare-select-offset (ACSO) unit and A-, B-, Γ-, and LLR output calculation modules. Measurements of the complexity and decoding delay of several sliding-window-based MAP decoder architectures, together with a proposed parameter set, lead to defining equations and a comparison between those architectures.
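    The "offset" in an add-compare-select-offset unit is the correction term of the log-MAP max* operation. A scalar sketch of that arithmetic (generic log-MAP math, not the paper's hardware design):

```python
import math

# max* (Jacobian logarithm), the operation at the heart of an
# add-compare-select-offset (ACSO) unit:
#   max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a - b|).
# The compare-select picks max(a, b); the offset is the log1p correction,
# which hardware typically approximates with a small lookup table.

def max_star(a, b):
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))
```

    The A (alpha) and B (beta) state-metric recursions of a MAP decoder apply this operation to sums of branch metrics (Γ); dropping the offset term yields the simpler max-log-MAP approximation.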

  6. A code for obtaining temperature distribution by finite element method

    International Nuclear Information System (INIS)

    Bloch, M.

    1984-01-01

    The ELEFIB Fortran language computer code using finite element method for calculating temperature distribution of linear and two dimensional problems, in permanent region or in the transient phase of heat transfer, is presented. The formulation of equations uses the Galerkin method. Some examples are shown and the results are compared with other papers. The comparative evaluation shows that the elaborated code gives good values. (M.C.K.) [pt

  7. CONDOR: neutronic code for fuel elements calculation with rods

    International Nuclear Information System (INIS)

    Villarino, E.A.

    1990-01-01

    The CONDOR neutronic code is used for the calculation of fuel elements formed by fuel rods. The neutron flux is obtained by the collision probability method in a multigroup scheme on two-dimensional geometry. The code employs new algorithms for the calculation and normalization of such collision probabilities. Burn-up calculations can be made, with the alternative of applying variational methods for response flux calculations or those corresponding to collision normalization. (Author) [es

  8. Modeling of PHWR fuel elements using FUDA code

    International Nuclear Information System (INIS)

    Tripathi, Rahul Mani; Soni, Rakesh; Prasad, P.N.; Pandarinathan, P.R.

    2008-01-01

    The computer code FUDA (Fuel Design Analysis) is used for modeling the PHWR fuel bundle operation history and carrying out fuel element thermo-mechanical analysis. The radial temperature profile across fuel and sheath, fission gas release, internal gas pressure, and sheath stresses and strains during the life of the fuel bundle are estimated.

  9. Computing element evolution towards Exascale and its impact on legacy simulation codes

    International Nuclear Information System (INIS)

    Colin de Verdiere, Guillaume J.L.

    2015-01-01

    In the light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of next generations of supercomputers. The market analysis, underlying this work, shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impacts of those evolutions on legacy codes or programming methods. The problems of dissipated power and memory access are discussed and will lead to a vision of what should be an exascale system. To survive, programming languages had to respond to the hardware evolutions either by evolving or with the creation of new ones. From the previous elements, we elaborate why vectorization, multithreading, data locality awareness and hybrid programming will be the key to reach the exascale, implying that it is time to start rewriting codes. (orig.)

  10. Computing element evolution towards Exascale and its impact on legacy simulation codes

    Science.gov (United States)

    Colin de Verdière, Guillaume J. L.

    2015-12-01

    In the light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of next generations of supercomputers. The market analysis, underlying this work, shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impacts of those evolutions on legacy codes or programming methods. The problems of dissipated power and memory access are discussed and will lead to a vision of what should be an exascale system. To survive, programming languages had to respond to the hardware evolutions either by evolving or with the creation of new ones. From the previous elements, we elaborate why vectorization, multithreading, data locality awareness and hybrid programming will be the key to reach the exascale, implying that it is time to start rewriting codes.

  11. Architectural elements of hybrid navigation systems for future space transportation

    Science.gov (United States)

    Trigo, Guilherme F.; Theil, Stephan

    2017-12-01

    The fundamental limitations of inertial navigation, currently employed by most launchers, have raised interest in GNSS-aided solutions. Combining inertial measurements and GNSS outputs allows inertial calibration online, solving the issue of inertial drift. However, many challenges and design options unfold. In this work we analyse several architectural elements and design aspects of a hybrid GNSS/INS navigation system conceived for space transportation. The most fundamental architectural features, such as coupling depth, modularity between filter and inertial propagation, and the open-/closed-loop nature of the configuration, are discussed in the light of the envisaged application. The importance of the inertial propagation algorithm and sensor class in the overall system is investigated, with the handling of sensor errors and uncertainties that arise with lower-grade sensors also considered. In terms of GNSS outputs we consider receiver solutions (position and velocity) and raw measurements (pseudorange, pseudorange-rate and time-difference carrier phase). Receiver clock error handling options and atmospheric error correction schemes for these measurements are analysed under flight conditions. System performance with different GNSS measurements is estimated through covariance analysis, with the differences between loose and tight coupling emphasized through partial outage simulation. Finally, we discuss options for filter algorithm robustness against non-linearities and system/measurement errors. A possible scheme for fault detection, isolation and recovery is also proposed.

  12. Architectural elements of hybrid navigation systems for future space transportation

    Science.gov (United States)

    Trigo, Guilherme F.; Theil, Stephan

    2018-06-01

    The fundamental limitations of inertial navigation, currently employed by most launchers, have raised interest in GNSS-aided solutions. Combining inertial measurements with GNSS outputs allows online calibration of the inertial sensors, solving the issue of inertial drift. However, many challenges and design options unfold. In this work we analyse several architectural elements and design aspects of a hybrid GNSS/INS navigation system conceived for space transportation. The most fundamental architectural features, such as coupling depth, modularity between filter and inertial propagation, and the open- or closed-loop nature of the configuration, are discussed in light of the envisaged application. The importance of the inertial propagation algorithm and of the sensor class in the overall system is investigated; the handling of sensor errors and of the uncertainties that arise with lower-grade sensors is also considered. In terms of GNSS outputs we consider receiver solutions (position and velocity) and raw measurements (pseudorange, pseudorange rate and time-differenced carrier phase). Receiver clock error handling options and atmospheric error correction schemes for these measurements are analysed under flight conditions. System performance with different GNSS measurements is estimated through covariance analysis, with the differences between loose and tight coupling emphasized through partial-outage simulation. Finally, we discuss options for making the filter algorithm robust against non-linearities and system/measurement errors. A possible scheme for fault detection, isolation and recovery is also proposed.
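The covariance analysis this record mentions amounts to propagating a filter's error covariance without real data. A scalar toy sketch of that cycle (not from the paper; F, Q, H, R are illustrative values):

```python
# Scalar Kalman covariance cycle: inertial propagation grows the error
# variance P, a GNSS measurement update shrinks it.
def propagate(P, F, Q):
    return F * P * F + Q            # P <- F P F' + Q (scalar form)

def update(P, H, R):
    K = P * H / (H * P * H + R)     # Kalman gain
    return (1.0 - K * H) * P        # P <- (I - K H) P

P = 1.0                              # initial error variance
for _ in range(20):
    P = propagate(P, F=1.0, Q=0.01)  # inertial drift adds uncertainty
    P = update(P, H=1.0, R=0.1)      # GNSS aiding bounds the drift
```

In a real loose or tight coupling, P is a matrix and H differs (position/velocity solutions vs. raw pseudoranges), but the analysis cycle is the same.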

  13. Network Coding Parallelization Based on Matrix Operations for Multicore Architectures

    DEFF Research Database (Denmark)

    Wunderlich, Simon; Cabrera, Juan; Fitzek, Frank

    2015-01-01

    such as the Raspberry Pi 2 with four cores, by up to one full order of magnitude. The speedup is even higher than the number of cores of the Raspberry Pi 2, since the newly introduced approach exploits the cache architecture far better than by-the-book matrix operations. Copyright © 2015 by the Institute...
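The matrix view of network coding that this record builds on can be sketched minimally over GF(2), where addition is XOR (illustrative only, not the paper's implementation):

```python
# Network coding as matrix algebra over GF(2): packets are bit vectors
# (stored as ints), and each coded packet is the XOR of the source packets
# selected by one row of a coefficient matrix.
def encode(coeffs, packets):
    coded = []
    for row in coeffs:
        acc = 0
        for bit, pkt in zip(row, packets):
            if bit:
                acc ^= pkt  # GF(2) addition is XOR
        coded.append(acc)
    return coded

packets = [0b1010, 0b0110, 0b0011]
coeffs = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
print(encode(coeffs, packets))  # [12, 5, 9]
```

Cache-friendly implementations like the one described would operate on whole machine words of many packets at once rather than bit by bit.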

  14. VLSI architectures for modern error-correcting codes

    CERN Document Server

    Zhang, Xinmiao

    2015-01-01

    Error-correcting codes are ubiquitous. They are adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probing. New-generation and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of those high-gain error-correcting codes pose many challenges. They usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity. VLSI

  15. FLASH: A finite element computer code for variably saturated flow

    International Nuclear Information System (INIS)

    Baca, R.G.; Magnuson, S.O.

    1992-05-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLASH computer code, is designed to simulate two-dimensional fluid flow in fractured-porous media. The code is specifically designed to model variably saturated flow in an arid site vadose zone and saturated flow in an unconfined aquifer. In addition, the code also has the capability to simulate heat conduction in the vadose zone. This report presents the following: a description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for the general use of the code. The FLASH computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A

  16. Implementing the Freight Transportation Data Architecture : Data Element Dictionary

    Science.gov (United States)

    2015-01-01

    NCFRP Report 9: Guidance for Developing a Freight Data Architecture articulates the value of establishing architecture for linking data across modes, subjects, and levels of geography to obtain essential information for decision making. Central to th...

  17. Efficiency of High Order Spectral Element Methods on Petascale Architectures

    KAUST Repository

    Hutchinson, Maxwell; Heinecke, Alexander; Pabst, Hans; Henry, Greg; Parsani, Matteo; Keyes, David E.

    2016-01-01

    High order methods for the solution of PDEs expose a tradeoff between computational cost and accuracy on a per degree of freedom basis. In many cases, the cost increases due to higher arithmetic intensity while affecting data movement minimally. As architectures tend towards wider vector instructions and expect higher arithmetic intensities, the best order for a particular simulation may change. This study highlights preferred orders by identifying the high order efficiency frontier of the spectral element method implemented in Nek5000 and NekBox: the set of orders and meshes that minimize computational cost at fixed accuracy. First, we extract Nek’s order-dependent computational kernels and demonstrate exceptional hardware utilization by hardware-aware implementations. Then, we perform production-scale calculations of the nonlinear single mode Rayleigh-Taylor instability on BlueGene/Q and Cray XC40-based supercomputers to highlight the influence of the architecture. Accuracy is defined with respect to physical observables, and computational costs are measured by the core-hour charge of the entire application. The total number of grid points needed to achieve a given accuracy is reduced by increasing the polynomial order. On the XC40 and BlueGene/Q, polynomial orders as high as 31 and 15 come at no marginal cost per timestep, respectively. Taken together, these observations lead to a strong preference for high order discretizations that use fewer degrees of freedom. From a performance point of view, we demonstrate up to 60% full application bandwidth utilization at scale and achieve ≈1PFlop/s of compute performance in Nek’s most flop-intense methods.

  18. Efficiency of High Order Spectral Element Methods on Petascale Architectures

    KAUST Repository

    Hutchinson, Maxwell

    2016-06-14

    High order methods for the solution of PDEs expose a tradeoff between computational cost and accuracy on a per degree of freedom basis. In many cases, the cost increases due to higher arithmetic intensity while affecting data movement minimally. As architectures tend towards wider vector instructions and expect higher arithmetic intensities, the best order for a particular simulation may change. This study highlights preferred orders by identifying the high order efficiency frontier of the spectral element method implemented in Nek5000 and NekBox: the set of orders and meshes that minimize computational cost at fixed accuracy. First, we extract Nek’s order-dependent computational kernels and demonstrate exceptional hardware utilization by hardware-aware implementations. Then, we perform production-scale calculations of the nonlinear single mode Rayleigh-Taylor instability on BlueGene/Q and Cray XC40-based supercomputers to highlight the influence of the architecture. Accuracy is defined with respect to physical observables, and computational costs are measured by the core-hour charge of the entire application. The total number of grid points needed to achieve a given accuracy is reduced by increasing the polynomial order. On the XC40 and BlueGene/Q, polynomial orders as high as 31 and 15 come at no marginal cost per timestep, respectively. Taken together, these observations lead to a strong preference for high order discretizations that use fewer degrees of freedom. From a performance point of view, we demonstrate up to 60% full application bandwidth utilization at scale and achieve ≈1PFlop/s of compute performance in Nek’s most flop-intense methods.
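The order-dependent kernels these records refer to exploit the tensor-product structure of spectral elements: a small 1-D operator is applied along each axis instead of assembling a full multi-dimensional matrix. A toy 2-D sketch of that sum-factorization idea (not Nek5000's actual code; all names are mine):

```python
# Apply a 1-D operator matrix D along each axis of an (n x n) element grid u.
# This is O(n^3) per element instead of O(n^4) for a dense 2-D operator.
def apply_along_x(D, u):
    n = len(u)
    return [[sum(D[i][k] * u[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apply_along_y(D, u):
    n = len(u)
    return [[sum(D[j][k] * u[i][k] for k in range(n)) for j in range(n)]
            for i in range(n)]
```

These small dense matrix products are exactly the flop-intense, vectorizable kernels whose efficiency the study measures across polynomial orders.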

  19. Three-dimensional modeling with finite element codes

    Energy Technology Data Exchange (ETDEWEB)

    Druce, R.L.

    1986-01-17

    This paper describes work done to model magnetostatic field problems in three dimensions. Finite element codes, available at LLNL, and pre- and post-processors were used in the solution of the mathematical model, the output from which agreed well with the experimentally obtained data. The geometry used in this work was a cylinder with ports in the periphery and no current sources in the space modeled. 6 refs., 8 figs.

  20. FINELM: a multigroup finite element diffusion code. Part I

    International Nuclear Information System (INIS)

    Davierwalla, D.M.

    1980-12-01

    The author presents a two dimensional code for multigroup diffusion using the finite element method. It was realized that the extensive connectivity, which contributes significantly to the accuracy, results in a matrix which, although symmetric and positive definite, is wide band and possesses an irregular profile. Hence, it was decided to introduce sparsity techniques into the code. The introduction of the R-Z geometry led to a great deal of changes in the code, since the rotational invariance of the removal matrices in X-Y geometry did not carry over to R-Z geometry. Rectangular elements were introduced to remedy the inability of the triangles to model essentially one dimensional problems such as slab geometry. The matter is discussed briefly in the text in the section on benchmark problems. This report is restricted to the general theory of the triangular elements and to the sparsity techniques, viz. incomplete dissections. The latter makes the size of the problem that can be handled independent of core memory and dependent only on disc storage capacity, which is virtually unlimited. (Auth.)

  1. Optimization and Openmp Parallelization of a Discrete Element Code for Convex Polyhedra on Multi-Core Machines

    Science.gov (United States)

    Chen, Jian; Matuttis, Hans-Georg

    2013-02-01

    We report our experiences with the optimization and parallelization of a discrete element code for convex polyhedra on multi-core machines and introduce a novel variant of the sort-and-sweep neighborhood algorithm. While in theory the code parallelizes ideally, in practice the results on different architectures, with different compilers and performance measurement tools, depend very much on the particle number and the optimization of the code. After difficulties with the interpretation of the speedup and efficiency data were overcome, respectable parallel speedups were obtained.
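The sort-and-sweep family of neighborhood algorithms mentioned here works by sorting particle bounding intervals along one axis and sweeping for overlaps. A minimal 1-D sketch of the classic version (the paper's novel variant differs; names are mine):

```python
# Broad-phase neighbor search: sort interval start points, then sweep;
# a pair can only overlap while the next start point lies inside the
# current interval, so the inner loop breaks early.
def sweep_and_prune(intervals):
    # intervals: list of (lo, hi) per particle along one axis.
    events = sorted((lo, i) for i, (lo, hi) in enumerate(intervals))
    pairs = []
    for k, (lo, i) in enumerate(events):
        hi_i = intervals[i][1]
        for _, j in events[k + 1:]:
            if intervals[j][0] > hi_i:
                break               # sorted order: no later start can overlap
            pairs.append(tuple(sorted((i, j))))
    return pairs

print(sweep_and_prune([(0, 2), (1, 3), (4, 5)]))  # [(0, 1)]
```

For polyhedra, the intervals are axis-aligned bounding boxes and candidate pairs go on to an exact contact check.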

  2. Researching on knowledge architecture of design by analysis based on ASME code

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan

    2003-01-01

    The quality of a knowledge-based system's knowledge architecture is one of the decisive factors in the system's validity and rationality. For the design of an ASME Code knowledge-based system, this paper presents a knowledge acquisition method that extracts knowledge through document analysis, in consultation with domain experts. The paper then describes a knowledge architecture for design by analysis based on the related rules in the ASME Code. The knowledge in this architecture is divided into two categories: empirical knowledge and ASME Code knowledge. As the basis of the knowledge architecture, a general procedure for design by analysis that meets engineering design requirements and designers' conventional practice is generalized and explained in detail. To improve the inference efficiency and concurrent computation of the KBS, a knowledge Petri net (KPN) model is proposed and adopted to express the knowledge architecture. Furthermore, for the validation and verification of the empirical rules, five knowledge validation and verification theorems are given in the paper. The results of this research are also applicable to designing knowledge architectures for the ASME Code or other engineering standards. (author)

  3. Motion estimation for video coding efficient algorithms and architectures

    CERN Document Server

    Chakrabarti, Indrajit; Chatterjee, Sumit Kumar

    2015-01-01

    The need for video compression in the modern age of visual communication cannot be over-emphasized. This monograph provides useful information to postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records research involving fast three-step search, successive elimination, one-bit transformation and its effective combination with diamond search and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this respect, the book can be considered the first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances, including video camcorders and smartphones.

  4. Do Performance-Based Codes Support Universal Design in Architecture?

    DEFF Research Database (Denmark)

    Grangaard, Sidse; Frandsen, Anne Kathrine

    2016-01-01

    – Universal Design (UD). The empirical material consists of input from six workshops to which all 700 Danish Architectural firms were invited, as well as eight group interviews. The analysis shows that the current prescriptive requirements are criticized for being too homogenous and possibilities...... for differentiation and zoning are required. Therefore, a majority of professionals are interested in a performance-based model because they think that such a model will support ‘accessibility zoning’, achieving flexibility because of different levels of accessibility in a building due to its performance. The common...... of educational objectives is suggested as a tool for such a boost. The research project has been financed by the Danish Transport and Construction Agency....

  5. Industrial applications of N3S finite element code

    International Nuclear Information System (INIS)

    Chabard, J.P.; Pot, G.; Martin, A.

    1993-12-01

    The Research and Development Division of EDF (the French utility) has been working since 1982 on N3S, a 3D finite element code for simulating turbulent incompressible flows (Chabard et al., 1992), which now has many applications dealing with internal flows, thermal hydraulics (Delenne and Pot, 1993), and turbomachinery (Combes and Rieutord, 1992). The size of these applications keeps growing: calculations with up to 350,000 nodes (around 2,000,000 unknowns) are in progress. To carry out such large applications, considerable work has been done on the choice of efficient algorithms and on their implementation in order to reduce CPU time and memory allocation. The paper presents the central algorithm of the code, focusing on time and memory optimization. As an illustration, validation test cases and a recent industrial application are discussed. (authors). 11 figs., 2 tabs., 11 refs

  6. Concepts and diagram elements for architectural knowledge management

    NARCIS (Netherlands)

    Orlic, B.; Mak, R.H.; David, I.; Lukkien, J.J.

    2011-01-01

    Capturing architectural knowledge is very important for the evolution of software products. There is increasing awareness that an essential part of this knowledge is in fact the very process of architectural reasoning and decision making, and not just its end results. Therefore, a conceptual

  7. High efficiency video coding (HEVC) algorithms and architectures

    CERN Document Server

    Budagavi, Madhukar; Sullivan, Gary

    2014-01-01

    This book provides developers, engineers, researchers and students with detailed knowledge about the High Efficiency Video Coding (HEVC) standard. HEVC is the successor to the widely successful H.264/AVC video compression standard, and it provides around twice as much compression as H.264/AVC for the same level of quality. The applications for HEVC will not only cover the space of the well-known current uses and capabilities of digital video – they will also include the deployment of new services and the delivery of enhanced video quality, such as ultra-high-definition television (UHDTV) and video with higher dynamic range, wider range of representable color, and greater representation precision than what is typically found today. HEVC is the next major generation of video coding design – a flexible, reliable and robust solution that will support the next decade of video applications and ease the burden of video on world-wide network traffic. This book provides a detailed explanation of the various parts ...

  8. Advances in Architectural Elements For Future Missions to Titan

    Science.gov (United States)

    Reh, Kim; Coustenis, Athena; Lunine, Jonathan; Matson, Dennis; Lebreton, Jean-Pierre; Vargas, Andre; Beauchamp, Pat; Spilker, Tom; Strange, Nathan; Elliott, John

    2010-05-01

    The future exploration of Titan is of high priority for the solar system exploration community as recommended by the 2003 National Research Council (NRC) Decadal Survey [1] and ESA's Cosmic Vision Program themes. Recent Cassini-Huygens discoveries continue to emphasize that Titan is a complex world with very many Earth-like features. Titan has a dense, nitrogen atmosphere, an active climate and meteorological cycles where conditions are such that the working fluid, methane, plays the role that water does on Earth. Titan's surface, with lakes and seas, broad river valleys, sand dunes and mountains was formed by processes like those that have shaped the Earth. Supporting this panoply of Earth-like processes is an ice crust that floats atop what might be a liquid water ocean. Furthermore, Titan is rich in very many different organic compounds—more so than any place in the solar system, except Earth. The Titan Saturn System Mission (TSSM) concept that followed the 2007 TandEM ESA CV proposal [2] and the 2007 Titan Explorer NASA Flagship study [3], was examined [4,5] and prioritized by NASA and ESA in February 2009 as a mission to follow the Europa Jupiter System Mission. The TSSM study, like others before it, again concluded that an orbiter, a montgolfiere hot-air balloon and a surface package (e.g. lake lander, Geosaucer (instrumented heat shield), …) are very high priority elements for any future mission to Titan. Such missions could be conceived as Flagship/Cosmic Vision L-Class or as individual smaller missions that could possibly fit into NASA New Frontiers or ESA Cosmic Vision M-Class budgets. As a result of a multitude of Titan mission studies, a clear blueprint has been laid out for the work needed to reduce the risks inherent in such missions and the areas where advances would be beneficial for elements critical to future Titan missions have been identified. The purpose of this paper is to provide a brief overview of the flagship mission architecture and

  9. High-throughput sample adaptive offset hardware architecture for high-efficiency video coding

    Science.gov (United States)

    Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin

    2018-03-01

    A high-throughput hardware architecture for the sample adaptive offset (SAO) filter in the High Efficiency Video Coding standard is presented. First, an implementation-friendly, simplified bitrate estimation method for the rate-distortion cost calculation is proposed to reduce the computational complexity of the SAO mode decision. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for the in-loop filters, integrating both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve system throughput and filtering speed. Experimental results show that the proposed in-loop filter architecture achieves up to 48% higher throughput in comparison with prior work. The proposed architecture reaches a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meets the real-time requirement of the in-loop filters for the 8K × 4K video format at 132 fps.
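The SAO mode decision this record accelerates builds on edge-offset classification: each sample is compared with two neighbors along a direction and binned into one of five categories, each receiving its own offset. A software sketch of that classification step (the record's hardware contribution is elsewhere):

```python
# HEVC SAO edge-offset categories for one direction (e.g. horizontal):
# compare the current sample with its two neighbors.
def edge_category(left, cur, right):
    if cur < left and cur < right:
        return 1  # local minimum
    if (cur < left and cur == right) or (cur == left and cur < right):
        return 2  # concave corner
    if (cur > left and cur == right) or (cur == left and cur > right):
        return 3  # convex corner
    if cur > left and cur > right:
        return 4  # local maximum
    return 0      # none: no offset applied
```

The encoder's rate-distortion search, the part simplified in this work, chooses which category offsets and which direction minimize cost per coding tree block.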

  10. Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique

    Science.gov (United States)

    Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi

    Reducing power dissipation in LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low-power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) the intermediate message compression reduces the required memory capacity and the write power dissipation; (ii) a clock-gated, shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques enables the decoder to reduce power dissipation while maintaining decoding throughput. Simulation results show that the proposed architecture improves power efficiency by up to 52% and 18% compared with decoders based on the overlapped schedule and the rapid-convergence schedule, respectively, without the proposed techniques.

  11. High-speed architecture for the decoding of trellis-coded modulation

    Science.gov (United States)

    Osborne, William P.

    1992-01-01

    Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher-level modulation (a non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture or by simplifying the algorithm itself. Designs employing new architectural techniques now exist; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic-gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.
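The add-compare-select recursion at the heart of the decoders discussed here can be shown with a toy hard-decision Viterbi sketch (real TCM decoders use soft metrics over larger state spaces; all names here are illustrative):

```python
# Minimal Viterbi: for each step and state, keep the cheapest predecessor
# (add-compare-select), then trace the survivor path backwards.
def viterbi(observations, states, trans_cost, emit_cost):
    path_cost = {s: 0.0 for s in states}   # best metric ending in each state
    back = []                               # survivor pointers per step
    for obs in observations:
        step, choices = {}, {}
        for s in states:
            best_prev = min(states, key=lambda p: path_cost[p] + trans_cost[(p, s)])
            step[s] = path_cost[best_prev] + trans_cost[(best_prev, s)] + emit_cost[(s, obs)]
            choices[s] = best_prev
        path_cost = step
        back.append(choices)
    state = min(states, key=path_cost.get)  # best final state
    path = [state]
    for choices in reversed(back):          # trace back the survivor path
        state = choices[state]
        path.append(state)
    return list(reversed(path))[1:]

# Toy 2-state trellis: free transitions, unit cost for mismatched symbols.
states = [0, 1]
trans_cost = {(p, s): 0.0 for p in states for s in states}
emit_cost = {(s, o): 0.0 if s == o else 1.0 for s in states for o in states}
print(viterbi([0, 1, 1, 0], states, trans_cost, emit_cost))  # [0, 1, 1, 0]
```

The architectural work in this report targets exactly the add-compare-select and traceback stages, which dominate the hardware's critical path.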

  12. Architectural and Algorithmic Requirements for a Next-Generation System Analysis Code

    Energy Technology Data Exchange (ETDEWEB)

    V.A. Mousseau

    2010-05-01

    This document presents high-level architectural and system requirements for a next-generation system analysis code (NGSAC) to support reactor safety decision-making by plant operators and others, especially in the context of light water reactor plant life extension. The capabilities of NGSAC will be different from those of current-generation codes, not only because computers have evolved significantly in the generations since the current paradigm was first implemented, but because the decision-making processes that need the support of next-generation codes are very different from the decision-making processes that drove the licensing and design of the current fleet of commercial nuclear power reactors. The implications of these newer decision-making processes for NGSAC requirements are discussed, and resulting top-level goals for the NGSAC are formulated. From these goals, the general architectural and system requirements for the NGSAC are derived.

  13. FEHM, Finite Element Heat and Mass Transfer Code

    International Nuclear Information System (INIS)

    Zyvoloski, G.A.

    2002-01-01

    1 - Description of program or function: FEHM is a numerical simulation code for subsurface transport processes. It models 3-D, time-dependent, multiphase, multicomponent, non-isothermal, reactive flow through porous and fractured media. It can accurately represent complex 3-D geologic media and structures and their effects on subsurface flow and transport. Its capabilities include flow of gas, water, and heat; flow of air, water, and heat; multiple chemically reactive and sorbing tracers; finite element/finite volume formulation; coupled stress module; saturated and unsaturated media; and double porosity and double porosity/double permeability capabilities. 2 - Methods: FEHM uses a preconditioned conjugate gradient solution of coupled linear equations and a fully implicit, fully coupled Newton-Raphson solution of nonlinear equations. It has the capability of simulating transport using either an advection/diffusion solution or a particle tracking method. 3 - Restrictions on the complexity of the problem: Disk space and machine memory are the only limitations
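The fully implicit Newton-Raphson solution named in the Methods section linearizes the nonlinear residual at each step and solves for a correction. A scalar sketch of that iteration (FEHM's actual residuals are large coupled systems; the toy residual below is mine):

```python
# Newton-Raphson on a scalar residual: x <- x - f(x)/f'(x) until converged.
def newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        r = residual(x)
        if abs(r) < tol:
            break
        x -= r / jacobian(x)   # in FEHM this is a preconditioned CG solve
    return x

# Toy residual f(x) = x^2 - 2; the iteration converges to sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

In the code itself, the Jacobian solve on each iteration is where the preconditioned conjugate gradient method comes in.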

  14. ELLIPT2D: A Flexible Finite Element Code Written in Python

    International Nuclear Information System (INIS)

    Pletzer, A.; Mollis, J.C.

    2001-01-01

    The use of the Python scripting language for scientific applications, and in particular to solve partial differential equations, is explored. It is shown that Python's rich data structures and object-oriented features can be exploited to write programs that are not only significantly more concise than their counterparts written in Fortran, C or C++, but are also numerically efficient. To illustrate this, a two-dimensional finite element code (ELLIPT2D) has been written. ELLIPT2D provides a flexible and easy-to-use framework for solving a large class of second-order elliptic problems. The program allows for structured or unstructured meshes. All functions defining the elliptic operator are user supplied, and so are the boundary conditions, which can be of Dirichlet, Neumann or Robbins type. ELLIPT2D makes extensive use of dictionaries (hash tables) as a way to represent sparse matrices. Other key features of the Python language that have been widely used include operator overloading, error handling, array slicing, and the Tkinter module for building graphical user interfaces. As an example of the utility of ELLIPT2D, a nonlinear solution of the Grad-Shafranov equation is computed using a Newton iterative scheme. A second application focuses on a solution of the toroidal Laplace equation coupled to a magnetohydrodynamic stability code, a problem arising in the context of magnetic fusion research.
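The dictionary-as-sparse-matrix idea credited to ELLIPT2D is easy to sketch: keys are (row, column) pairs, so only nonzero entries occupy memory. This is a generic illustration, not ELLIPT2D's actual API:

```python
# Sparse matrix backed by a dict keyed on (row, col); assembly by repeated
# add() mirrors finite element stiffness accumulation.
class DictMatrix:
    def __init__(self):
        self.data = {}

    def add(self, i, j, value):
        # Accumulate, as element contributions overlap at shared nodes.
        self.data[(i, j)] = self.data.get((i, j), 0.0) + value

    def matvec(self, x):
        # y = A x, touching only stored nonzeros.
        y = [0.0] * len(x)
        for (i, j), v in self.data.items():
            y[i] += v * x[j]
        return y

A = DictMatrix()
A.add(0, 0, 2.0); A.add(1, 1, 3.0); A.add(0, 1, 1.0)
print(A.matvec([1.0, 1.0]))  # [3.0, 3.0]
```

Conciseness is the point being made in the abstract: the whole sparse structure is one built-in hash table, with no index arrays to manage.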

  15. Analysis of central enterprise architecture elements in models of six eHealth projects.

    Science.gov (United States)

    Virkanen, Hannu; Mykkänen, Juha

    2014-01-01

    Large-scale initiatives for eHealth services have been established in many countries on regional or national level. The use of Enterprise Architecture has been suggested as a methodology to govern and support the initiation, specification and implementation of large-scale initiatives including the governance of business changes as well as information technology. This study reports an analysis of six health IT projects in relation to Enterprise Architecture elements, focusing on central EA elements and viewpoints in different projects.

  16. Porting plasma physics simulation codes to modern computing architectures using the libmrc framework

    Science.gov (United States)

    Germaschewski, Kai; Abbott, Stephen

    2015-11-01

    Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both by using more cores and by having more parallelism within each core, e.g. in GPUs and the Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular because there is no single programming model that covers current and future architectures. We will introduce the open-source libmrc framework, which has been used to modularize and port three plasma physics codes: the extended MHD code MRCv3, with implicit time integration and curvilinear grids; the OpenGGCM global magnetosphere model; and the particle-in-cell code PSC. libmrc consolidates the basic functionality needed for simulations based on structured grids (I/O, load balancing, time integrators) and also introduces a parallel object model that makes it possible to maintain multiple implementations of computational kernels, e.g. for conventional processors and GPUs. It handles data layout conversions and enables us to port performance-critical parts of a code to a new architecture step by step, while the rest of the code remains unchanged. We will show examples of the performance gains and some physics applications.
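The "multiple implementations of computational kernels" pattern this record describes can be sketched as a registry that dispatches one logical operation to a backend-specific implementation (a hypothetical illustration, not libmrc's actual API):

```python
# One logical operation, several registered kernels, selected at runtime
# by the field's backend tag -- the rest of the code calls axpy() unchanged.
class Field:
    def __init__(self, data, backend="c"):
        self.data = data
        self.backend = backend

KERNELS = {}

def register(op, backend):
    def deco(fn):
        KERNELS[(op, backend)] = fn
        return fn
    return deco

@register("axpy", "c")
def axpy_c(a, x, y):
    return [a * xi + yi for xi, yi in zip(x.data, y.data)]

@register("axpy", "gpu")
def axpy_gpu(a, x, y):
    # Stand-in for a GPU kernel launch; same math, different implementation.
    return [a * xi + yi for xi, yi in zip(x.data, y.data)]

def axpy(a, x, y):
    # Dispatch on backend, falling back to the reference kernel.
    fn = KERNELS.get(("axpy", x.backend), KERNELS[("axpy", "c")])
    return fn(a, x, y)
```

This is what lets performance-critical kernels be ported one at a time while untouched parts of the code keep running on the reference implementation.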

  17. Propel: A Discontinuous-Galerkin Finite Element Code for Solving the Reacting Navier-Stokes Equations

    Science.gov (United States)

    Johnson, Ryan; Kercher, Andrew; Schwer, Douglas; Corrigan, Andrew; Kailasanath, Kazhikathra

    2017-11-01

    This presentation focuses on the development of a discontinuous Galerkin (DG) method for application to chemically reacting flows. The in-house code, called Propel, was developed by the Laboratory of Computational Physics and Fluid Dynamics at the Naval Research Laboratory. It was designed specifically for developing advanced multi-dimensional algorithms that run efficiently on new and innovative architectures such as GPUs. For the results presented here, Propel solves convection and diffusion simultaneously with detailed transport and thermodynamics. Chemistry is currently solved in a time-split approach, using Strang splitting with finite element DG time integration of the chemical source terms. The results show canonical unsteady reacting-flow cases, such as co-flow and splitter plate, and we report performance of higher-order DG on CPUs and GPUs.
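Strang splitting, the time-split approach named in this record, advances the transport operator a half step, the chemistry a full step, then transport another half step. A scalar toy sketch on du/dt = -a*u + r (illustrative only; Propel's operators are PDE systems):

```python
import math

# One Strang-split step: half transport, full chemistry, half transport.
def strang_step(u, dt, a, r):
    u = u * math.exp(-a * dt / 2.0)  # transport half-step (exact for decay)
    u = u + r * dt                   # chemistry source, forward Euler
    u = u * math.exp(-a * dt / 2.0)  # transport half-step
    return u

u = strang_step(1.0, 0.1, a=1.0, r=0.5)
```

The symmetric half-full-half arrangement is what makes the splitting second-order accurate in dt, which is why it pairs well with the high-order DG spatial discretization.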

  18. CONDOR: a database resource of developmentally associated conserved non-coding elements

    Directory of Open Access Journals (Sweden)

    Smith Sarah

    2007-08-01

    Background: Comparative genomics is currently one of the most popular approaches to studying the regulatory architecture of vertebrate genomes. Fish-mammal genomic comparisons have proved powerful in identifying conserved non-coding elements likely to be distal cis-regulatory modules, such as enhancers, silencers or insulators, that control the expression of genes involved in the regulation of early development. The scientific community is showing increasing interest in characterizing the function, evolution and language of these sequences. Despite this, there remains little in the way of user-friendly access to a large dataset of such elements in conjunction with the analysis and visualization tools needed to study them. Description: Here we present CONDOR (COnserved Non-coDing Orthologous Regions), available at http://condor.fugu.biology.qmul.ac.uk. In an interactive and intuitive way, the website displays data on > 6800 non-coding elements associated with over 120 early developmental genes and conserved across vertebrates. The database regularly incorporates the results of ongoing in vivo zebrafish enhancer assays of the CNEs carried out in-house, which currently number ~100. Included and highlighted within this set are elements derived from duplication events, both at the origin of vertebrates and more recently in the teleost lineage, thus providing valuable data for studying the divergence of regulatory roles between paralogs. CONDOR therefore provides a number of tools and facilities to allow scientists to progress in their own studies of the function and evolution of developmental cis-regulation. Conclusion: By providing access to data through an approachable graphical interface, the CONDOR database presents a rich resource for further studies into the regulation and evolution of genes involved in early development.

  19. Architecture proposal for the use of QR code in supply chain management

    Directory of Open Access Journals (Sweden)

    Dalton Matsuo Tavares

    2012-01-01

Full Text Available Supply chain traceability and visibility are key concerns for many companies. Radio-frequency identification (RFID) is an enabling technology that allows identification of objects in a fully automated manner via radio waves. Nevertheless, this technology has limited acceptance and high costs. This paper presents a research effort undertaken to design a track-and-trace solution for supply chains, using the quick response code (QR code for short) as a less complex and cost-effective alternative to RFID in supply chain management (SCM). A first architecture proposal using open source software is presented as a proof of concept. The system architecture is presented in order to achieve tag generation, image acquisition and pre-processing, product inventory and tracking. A prototype system for tag identification is developed and discussed at the end of the paper to demonstrate its feasibility.
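The tag-generation and tracking steps named in the abstract can be illustrated with a short sketch. This is only a conceptual model under assumed conventions: the JSON payload fields (`pid`, `batch`, `station`, `ts`) are hypothetical, since the paper's actual tag format is not given in the abstract, and a real system would render the payload as a QR image with a dedicated library.

```python
import json
from datetime import datetime, timezone

def make_tag_payload(product_id: str, batch: str, station: str) -> str:
    """Serialize a traceability record into a string suitable for QR encoding.

    The field names here are hypothetical illustrations; the paper's
    actual tag format is not specified in the abstract.
    """
    record = {
        "pid": product_id,
        "batch": batch,
        "station": station,
        "ts": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, sort_keys=True)

def read_tag_payload(payload: str) -> dict:
    """Decode a scanned tag string back into a tracking record."""
    return json.loads(payload)

# A product is tagged once and its record recovered at each scan point.
payload = make_tag_payload("SKU-1234", "B-07", "warehouse-entry")
record = read_tag_payload(payload)
```

Because the payload round-trips losslessly, every scan point in the chain can append its own event to a central inventory without re-encoding the product identity.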

  20. FPGA-Based Channel Coding Architectures for 5G Wireless Using High-Level Synthesis

    Directory of Open Access Journals (Sweden)

    Swapnil Mhaske

    2017-01-01

    Full Text Available We propose strategies to achieve a high-throughput FPGA architecture for quasi-cyclic low-density parity-check codes based on circulant-1 identity matrix construction. By splitting the node processing operation in the min-sum approximation algorithm, we achieve pipelining in the layered decoding schedule without utilizing additional hardware resources. High-level synthesis compilation is used to design and develop the architecture on the FPGA hardware platform. To validate this architecture, an IEEE 802.11n compliant 608 Mb/s decoder is implemented on the Xilinx Kintex-7 FPGA using the LabVIEW FPGA Compiler in the LabVIEW Communication System Design Suite. Architecture scalability was leveraged to accomplish a 2.48 Gb/s decoder on a single Xilinx Kintex-7 FPGA. Further, we present rapidly prototyped experimentation of an IEEE 802.16 compliant hybrid automatic repeat request system based on the efficient decoder architecture developed. In spite of the mixed nature of data processing—digital signal processing and finite-state machines—LabVIEW FPGA Compiler significantly reduced time to explore the system parameter space and to optimize in terms of error performance and resource utilization. A 4x improvement in the system throughput, relative to a CPU-based implementation, was achieved to measure the error-rate performance of the system over large, realistic data sets using accelerated, in-hardware simulation.
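The circulant-1 construction mentioned above can be sketched briefly: each entry of a small base matrix is expanded into a Z x Z identity matrix cyclically shifted by the entry's value (or into a zero block). The toy base matrix and lifting size Z below are illustrative assumptions, not the IEEE 802.11n matrices used in the paper.

```python
def circulant(Z: int, shift: int):
    """Z x Z identity matrix with columns cyclically shifted by `shift`."""
    return [[1 if c == (r + shift) % Z else 0 for c in range(Z)] for r in range(Z)]

def expand_base_matrix(base, Z):
    """Expand a QC-LDPC base matrix into the full parity-check matrix H.

    Entry -1 denotes the Z x Z all-zero block; entry s >= 0 denotes the
    identity matrix shifted by s (the 'circulant-1' construction).
    """
    rows = []
    for brow in base:
        blocks = [([[0] * Z for _ in range(Z)] if s < 0 else circulant(Z, s))
                  for s in brow]
        for r in range(Z):
            rows.append([b[r][c] for b in blocks for c in range(Z)])
    return rows

# Toy 2 x 4 base matrix with lifting size Z = 4 -- illustrative only.
base = [[0, 1, -1, 2],
        [3, -1, 0, 1]]
H = expand_base_matrix(base, Z=4)  # 8 x 16 parity-check matrix
```

The block structure is what makes layered decoding natural: each base-matrix row becomes a layer of Z check equations that touch disjoint variable nodes within the layer.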

  1. A self-organized internal models architecture for coding sensory-motor schemes

    Directory of Open Access Journals (Sweden)

    Esaú eEscobar Juárez

    2016-04-01

Full Text Available Cognitive robotics research draws inspiration from theories and models on cognition, as conceived by neuroscience or cognitive psychology, to investigate biologically plausible computational models in artificial agents. In this field, the theoretical framework of Grounded Cognition provides epistemological and methodological grounds for the computational modeling of cognition. It has been stressed in the literature that simulation, prediction, and multi-modal integration are key aspects of cognition, and that computational architectures capable of putting them into play in a biologically plausible way are a necessity. Research in this direction has brought extensive empirical evidence suggesting that Internal Models are suitable mechanisms for sensory-motor integration. However, current Internal Model architectures show several drawbacks, mainly due to the lack of a unified substrate allowing for a true sensory-motor integration space, enabling flexible and scalable ways to model cognition under the constraints of the embodiment hypothesis. We propose the Self-Organized Internal Models Architecture (SOIMA), a computational cognitive architecture coded by means of a network of self-organized maps, implementing coupled internal models that allow modeling multi-modal sensory-motor schemes. Our approach integrally addresses the issues of current implementations of Internal Models. We discuss the design and features of the architecture, and provide empirical results on a humanoid robot that demonstrate the benefits and potentialities of the SOIMA concept for studying cognition in artificial agents.

  2. Reply to "Comments on Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes"

    Directory of Open Access Journals (Sweden)

    Rovini Massimo

    2009-01-01

    Full Text Available This is a reply to the comments by Gunnam et al. "Comments on 'Techniques and architectures for hazard-free semi-parallel decoding of LDPC codes'", EURASIP Journal on Embedded Systems, vol. 2009, Article ID 704174 on our recent work "Techniques and architectures for hazard-free semi-parallel decoding of LDPC codes", EURASIP Journal on Embedded Systems, vol. 2009, Article ID 723465.

  3. Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes

    Directory of Open Access Journals (Sweden)

    Rovini Massimo

    2009-01-01

Full Text Available The layered decoding algorithm has recently been proposed as an efficient means for the decoding of low-density parity-check (LDPC) codes, thanks to the remarkable improvement (2x) in the convergence speed of the decoding process. However, pipelined semi-parallel decoders suffer from violations or "hazards" between consecutive updates, which not only violate the layered principle but also enforce the loops in the code, thus spoiling the error correction performance. This paper describes three different techniques to properly reschedule the decoding updates, based on the careful insertion of "idle" cycles, to prevent the hazards of the pipeline mechanism. Also, different semi-parallel architectures of a layered LDPC decoder suitable for use with such techniques are analyzed. Then, taking the LDPC codes for the wireless local area network (IEEE 802.11n) as a case study, a detailed analysis of the performance attained with the proposed techniques and architectures is reported, and results of the logic synthesis on a 65 nm low-power CMOS technology are shown.
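The hazard problem the paper addresses can be modelled at a high level: a layer that reads a variable node updated by one of the immediately preceding layers, while that update is still in flight in the pipeline, cannot be issued back-to-back. The sketch below is a simplified counting model (one idle cycle per conflicting layer pair) assumed purely for illustration; it is not the rescheduling techniques of the paper.

```python
def idle_cycles(layers, depth):
    """Count idle cycles a pipelined layered decoder must insert.

    `layers` is a list of sets of variable-node indices updated by each
    layer, in schedule order; `depth` is the pipeline latency expressed
    in layers. A hazard occurs when a layer shares a variable node with
    one of the previous `depth` layers; here each hazard is charged one
    idle cycle -- a deliberate simplification for illustration.
    """
    idles = 0
    for i, layer in enumerate(layers):
        for back in range(1, depth + 1):
            if i - back >= 0 and layer & layers[i - back]:
                idles += 1  # stall until the conflicting update retires
    return idles

# Four layers of a toy code: layers 0/1 and 2/3 share variable nodes.
layers = [{0, 1, 2}, {2, 3, 4}, {5, 6, 7}, {0, 5, 8}]
```

Reordering the layers so that conflicting pairs are separated by more than the pipeline depth is exactly the kind of rescheduling that reduces this count to zero.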

  4. Deployment of the OSIRIS EM-PIC code on the Intel Knights Landing architecture

    Science.gov (United States)

    Fonseca, Ricardo

    2017-10-01

Electromagnetic particle-in-cell (EM-PIC) codes such as OSIRIS have found widespread use in modelling the highly nonlinear and kinetic processes that occur in several relevant plasma physics scenarios, ranging from astrophysical settings to high-intensity laser-plasma interaction. Being computationally intensive, these codes require large-scale HPC systems and a continuous effort in adapting the algorithm to new hardware and computing paradigms. In this work, we report on our efforts in deploying the OSIRIS code on the new Intel Knights Landing (KNL) architecture. Unlike the previous generation (Knights Corner), these boards are standalone systems and introduce several new features, including the new AVX-512 instructions and on-package MCDRAM. We will focus on the parallelization and vectorization strategies followed, as well as memory management, and present a detailed performance evaluation in comparison with the CPU code. This work was partially supported by Fundação para a Ciência e a Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014.

  5. Improvement of implicit finite element code performance in deep drawing simulations by dynamics contributions

    NARCIS (Netherlands)

    Meinders, Vincent T.; van den Boogaard, Antonius H.; Huetink, Han

    2003-01-01

    To intensify the use of implicit finite element codes for solving large scale problems, the computation time of these codes has to be decreased drastically. A method is developed which decreases the computational time of implicit codes by factors. The method is based on introducing inertia effects

  6. Performance Analysis of an Astrophysical Simulation Code on the Intel Xeon Phi Architecture

    OpenAIRE

    Noormofidi, Vahid; Atlas, Susan R.; Duan, Huaiyu

    2015-01-01

    We have developed the astrophysical simulation code XFLAT to study neutrino oscillations in supernovae. XFLAT is designed to utilize multiple levels of parallelism through MPI, OpenMP, and SIMD instructions (vectorization). It can run on both CPU and Xeon Phi co-processors based on the Intel Many Integrated Core Architecture (MIC). We analyze the performance of XFLAT on configurations with CPU only, Xeon Phi only and both CPU and Xeon Phi. We also investigate the impact of I/O and the multi-n...

  7. Kine-Mould : Manufacturing technology for curved architectural elements in concrete

    NARCIS (Netherlands)

    Schipper, H.R.; Eigenraam, P.; Grünewald, S.; Soru, M.; Nap, P.; Van Overveld, B.; Vermeulen, J.

    2015-01-01

    The production of architectural elements with complex geometry is challenging for concrete manufacturers. Computer-numerically-controlled (CNC) milled foam moulds have been applied frequently in the last decades, resulting in good aesthetical performance. However, still the costs are high and a

  8. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    Science.gov (United States)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assert their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
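The Roofline correlation mentioned above rests on a one-line model: attainable performance is the minimum of the machine's peak compute rate and the product of memory bandwidth and the kernel's arithmetic intensity. A sketch, with hypothetical machine numbers (the paper's actual SandyBridge, Haswell, and Xeon Phi figures are not reproduced here):

```python
def roofline(peak_gflops: float, bw_gbs: float, intensity: float) -> float:
    """Attainable performance (GFLOP/s) under the Roofline model.

    intensity is the kernel's arithmetic intensity in FLOP/byte; below
    the ridge point the kernel is memory-bound, above it compute-bound.
    """
    return min(peak_gflops, bw_gbs * intensity)

# Hypothetical machine: 500 GFLOP/s peak, 100 GB/s memory bandwidth.
# Ridge point at 500 / 100 = 5 FLOP/byte.
flux_kernel = roofline(500.0, 100.0, 2.0)   # memory-bound: 200 GFLOP/s
dense_kernel = roofline(500.0, 100.0, 8.0)  # compute-bound: 500 GFLOP/s
```

Plotting a kernel's measured GFLOP/s against this bound is how the authors assert whether their SIMD and NUMA optimisations have reached the hardware's practical limit.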

  9. Spectral-Element Seismic Wave Propagation Codes for both Forward Modeling in Complex Media and Adjoint Tomography

    Science.gov (United States)

    Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.

    2015-12-01

We present the SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale applications. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version, which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS, and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.

  10. INGEN: a general-purpose mesh generator for finite element codes

    International Nuclear Information System (INIS)

    Cook, W.A.

    1979-05-01

INGEN is a general-purpose mesh generator for two- and three-dimensional finite element codes. The basic parts of the code are surface and three-dimensional region generators that use linear-blending interpolation formulas. These generators are based on an i, j, k index scheme that is used to number nodal points, construct elements, and develop displacement and traction boundary conditions. The code can generate truss elements (2 nodal points); plane stress, plane strain, and axisymmetric two-dimensional continuum elements (4 to 8 nodal points); plate elements (4 to 8 nodal points); and three-dimensional continuum elements (8 to 21 nodal points). The traction loads generated are consistent with the elements generated. The expansion-contraction option is of special interest: it makes it possible to change an existing mesh such that some regions are refined and others are made coarser than the original mesh. 9 figures
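The linear-blending idea behind such generators can be sketched in its simplest form: interior nodal points of a quadrilateral patch are obtained by bilinearly blending the four corner points, with each node addressed by (i, j) indices in the spirit of the i, j, k numbering scheme described above. This is a minimal special case assumed for illustration; INGEN's actual formulas also blend boundary curves and surfaces.

```python
def bilinear_grid(c00, c10, c01, c11, ni, nj):
    """Generate an ni x nj structured grid of nodal points by linear
    blending of four corner points (x, y).

    Returns a dict keyed by (i, j) node indices, which a finite element
    code could then use to number nodes and assemble elements.
    """
    grid = {}
    for i in range(ni):
        u = i / (ni - 1)          # parametric coordinate along i
        for j in range(nj):
            v = j / (nj - 1)      # parametric coordinate along j
            x = ((1 - u) * (1 - v) * c00[0] + u * (1 - v) * c10[0]
                 + (1 - u) * v * c01[0] + u * v * c11[0])
            y = ((1 - u) * (1 - v) * c00[1] + u * (1 - v) * c10[1]
                 + (1 - u) * v * c01[1] + u * v * c11[1])
            grid[(i, j)] = (x, y)
    return grid

# A 5 x 3 grid on a 2 x 1 rectangle defined by its corners.
g = bilinear_grid((0, 0), (2, 0), (0, 1), (2, 1), ni=5, nj=3)
```

Refining one index direction while coarsening another is, in miniature, what the expansion-contraction option does to an existing mesh.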

  11. Implementation of the DPM Monte Carlo code on a parallel architecture for treatment planning applications.

    Science.gov (United States)

    Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J

    2004-09-01

We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan-Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster, consisting of 800 MHz Intel Pentium III processors, shows an almost linear speedup up to 32 processors for simulating 1×10^8 or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors based on availability), which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors, on increasing the problem size up to 8×10^8 histories. For a smaller number of histories (1×10^8) the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1×10^8 histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture, Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster than the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy.

  12. Implementation of the DPM Monte Carlo code on a parallel architecture for treatment planning applications

    International Nuclear Information System (INIS)

    Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J.

    2004-01-01

We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan-Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster, consisting of 800 MHz Intel Pentium III processors, shows an almost linear speedup up to 32 processors for simulating 1×10^8 or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors based on availability), which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors, on increasing the problem size up to 8×10^8 histories. For a smaller number of histories (1×10^8) the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1×10^8 histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture, Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster than the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy.
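The efficiency figures quoted in these two records follow the standard definitions: speedup S(p) = T_1 / T_p and parallel efficiency E(p) = S(p) / p. A sketch with hypothetical timings (the paper's measured run times are not reproduced in the abstract):

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """Speedup S(p) = serial run time / parallel run time."""
    return t_serial / t_parallel

def efficiency(t_serial: float, t_parallel: float, n_procs: int) -> float:
    """Parallel efficiency E(p) = S(p) / p; 1.0 means ideal linear scaling."""
    return speedup(t_serial, t_parallel) / n_procs

# Hypothetical: a job that takes 240 s serially and 12 s on 24 processors
# gives S = 20 and E ~ 0.833, i.e. 83.3% efficiency.
e = efficiency(240.0, 12.0, 24)
```

Efficiency drops when, as described above, per-processor compute time shrinks below the fixed interprocessor communication cost, which is why the small 1×10^8-history runs scale worse than the large ones.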

  13. Architecture for the Elderly and Frail People, Well-Being Elements Realizations and Outcomes

    DEFF Research Database (Denmark)

    Knudstrup, Mary-Ann

    2011-01-01

The relationship between architecture, housing and the well-being of elderly and frail people is a topic of growing interest to consultants and political decision makers working on welfare solutions for elderly citizens. The objective of the research presented here is to highlight which well-being elements in nursing home environments contribute to enhancing the well-being of the elderly, and how these elements are given attention during the decision-making process related to the design and establishment of nursing homes. Based on four representative Danish case studies, various case data from the decision-making process are collected, covering the planning, the design and the realization of four newly built nursing homes in Denmark. The case studies clearly show that the architectural well-being elements appear weak in the decision-making process when they are conflicting……

  14. The architecture of cartilage: Elemental maps and scanning transmission ion microscopy/tomography

    International Nuclear Information System (INIS)

    Reinert, Tilo; Reibetanz, Uta; Schwertner, Michael; Vogt, Juergen; Butz, Tilman; Sakellariou, Arthur

    2002-01-01

    Articular cartilage is not just a jelly-like cover of the bone within the joints but a highly sophisticated architecture of hydrated macromolecules, collagen fibrils and cartilage cells. Influences on the physiological balance due to age-related or pathological changes can lead to malfunction and subsequently to degradation of the cartilage. Many activities in cartilage research are dealing with the architecture of joint cartilage but have limited access to elemental distributions. Nuclear microscopy is able to yield spatially resolved elemental concentrations, provides density information and can visualise the arrangement of the collagen fibres. The distribution of the cartilage matrix can be deduced from the elemental and density maps. The findings showed a varying content of collagen and proteoglycan between zones of different cell maturation. Zones of higher collagen content are characterised by aligned collagen fibres that can form tubular structures. Recently we focused on STIM tomography to investigate the three dimensional arrangement of the collagen structures

  15. A Study on Architecture of Malicious Code Blocking Scheme with White List in Smartphone Environment

    Science.gov (United States)

    Lee, Kijeong; Tolentino, Randy S.; Park, Gil-Cheol; Kim, Yong-Tae

Recently, interest in and demand for mobile communications have grown quickly because of the increasing prevalence of smartphones around the world. Feature phones have widely been replaced by smartphones, and with the explosive growth of Internet use on smartphones, e-commerce and Internet banking transactions have expanded, raising the importance of protecting personal information. Smartphone antivirus products have therefore been developed and launched to prevent infection by malicious code or viruses. In this paper, we propose a new scheme to protect smartphones from the malicious codes and malicious applications that are elements of security threats in the mobile environment, and to prevent information leakage resulting from malicious code infection. The proposed scheme is based on a white list of smartphone applications: only authorized applications may be installed, preventing the installation of malicious and untrusted mobile applications that could infect the applications and programs of the smartphone.
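The white-list check at the core of the proposed scheme can be sketched as follows. The use of SHA-256 digests of application packages as the identity of an "authorized application" is an assumption made for illustration; the abstract does not specify how the white list identifies applications.

```python
import hashlib

# Hypothetical white list of approved application package digests.
WHITELIST = set()

def register_app(package_bytes: bytes) -> None:
    """Authorize an application by recording its SHA-256 digest."""
    WHITELIST.add(hashlib.sha256(package_bytes).hexdigest())

def may_install(package_bytes: bytes) -> bool:
    """Permit installation only for white-listed packages.

    Anything unknown -- e.g. a repackaged, malware-carrying package --
    is rejected by default, the key property of white listing over
    signature-based (black list) antivirus scanning.
    """
    return hashlib.sha256(package_bytes).hexdigest() in WHITELIST

register_app(b"trusted-banking-app-v1")
```

Note the default-deny posture: an attacker's modified package produces a different digest and is refused, without the device needing a signature for that specific piece of malware.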

  16. Understanding Epistatic Interactions between Genes Targeted by Non-coding Regulatory Elements in Complex Diseases

    Directory of Open Access Journals (Sweden)

    Min Kyung Sung

    2014-12-01

Full Text Available Genome-wide association studies have proven the highly polygenic architecture of complex diseases or traits; therefore, single-locus-based methods are usually unable to detect all involved loci, especially when individual loci exert small effects. Moreover, the majority of associated single-nucleotide polymorphisms reside in non-coding regions, making it difficult to understand their phenotypic contribution. In this work, we studied epistatic interactions associated with three common diseases using Korea Association Resource (KARE) data: type 2 diabetes mellitus (DM), hypertension (HT), and coronary artery disease (CAD). We showed that epistatic single-nucleotide polymorphisms (SNPs) were enriched in enhancers, as well as in DNase I footprints (The Encyclopedia of DNA Elements [ENCODE] Project Consortium 2012), which suggested that the disruption of the regulatory regions where transcription factors bind may be involved in the disease mechanism. Accordingly, to identify the genes affected by the SNPs, we employed whole-genome multiple-cell-type enhancer data which were discovered using DNase I profiles and Cap Analysis Gene Expression (CAGE). The assigned genes were significantly enriched in known disease-associated gene sets, which were explored based on the literature, suggesting that this approach is useful for detecting relevant affected genes. In our knowledge-based epistatic network, the three diseases share many associated genes and are also closely related with each other through many epistatic interactions. These findings elucidate the genetic basis of the close relationship between DM, HT, and CAD.

  17. Implementation of Layered Decoding Architecture for LDPC Code using Layered Min-Sum Algorithm

    Directory of Open Access Journals (Sweden)

    Sandeep Kakde

    2017-12-01

Full Text Available For the binary field and long code lengths, low-density parity-check (LDPC) codes approach Shannon-limit performance. LDPC codes provide remarkable error correction performance and therefore enlarge the design space for communication systems. In this paper, we compare different digital modulation techniques and find that BPSK outperforms the other techniques in terms of BER. We also give the error performance of the LDPC decoder over the AWGN channel using the min-sum algorithm. A VLSI architecture is proposed which uses the value-reuse property of the min-sum algorithm and achieves high throughput. The proposed design has been implemented and tested on a Xilinx Virtex 5 FPGA. The MATLAB simulation of the LDPC decoder gives a bit error rate (BER) in the range of 10^-1 to 10^-3.5 at SNR = 1 to 2 for 20 iterations, i.e., good BER performance. The latency of the parallel design of the LDPC decoder has also been reduced. The design achieves a maximum frequency of 141.22 MHz and a throughput of 2.02 Gbps while consuming less area.
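The min-sum check-node update and its value-reuse property (only the smallest and second-smallest input magnitudes plus the overall sign are needed to produce every outgoing message) can be sketched as follows. This is the generic textbook formulation, not the paper's VLSI data path.

```python
def min_sum_check_update(llrs, scale=1.0):
    """Check-node update of the (optionally normalized) min-sum algorithm.

    For each incoming variable-to-check LLR, the outgoing message has
    the minimum magnitude over the *other* edges and the product of
    their signs. Value re-use: after finding the two smallest
    magnitudes and the total sign, every outgoing edge is covered
    without re-scanning the inputs -- the property hardware exploits.
    """
    signs = [1 if x >= 0 else -1 for x in llrs]
    mags = [abs(x) for x in llrs]
    total_sign = 1
    for s in signs:
        total_sign *= s
    order = sorted(range(len(mags)), key=lambda i: mags[i])
    m1, m2 = order[0], order[1]  # indices of two smallest magnitudes
    out = []
    for i in range(len(llrs)):
        mag = mags[m2] if i == m1 else mags[m1]
        # sign over the other edges = total_sign * signs[i]
        out.append(scale * total_sign * signs[i] * mag)
    return out
```

In a layered decoder this update runs once per layer per check node, with the normalization factor `scale` (e.g. 0.75) recovering most of the gap to sum-product decoding.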

  18. Software Abstractions and Methodologies for HPC Simulation Codes on Future Architectures

    Directory of Open Access Journals (Sweden)

    Anshu Dubey

    2014-07-01

Full Text Available Simulations with multi-physics modeling have become crucial to many science and engineering fields, and multi-physics capable scientific software is as important to these fields as instruments and facilities are to the experimental sciences. The current generation of mature multi-physics codes would have sustainably served their target communities with a modest amount of ongoing investment in enhancing capabilities. However, the revolution occurring in hardware architecture has made it necessary to tackle parallelism and performance management in these codes at multiple levels. The requirements of the various levels are often at cross-purposes with one another, and therefore hugely complicate the software design. All of these considerations make it essential to approach this challenge cooperatively as a community. We conducted a series of workshops under an NSF-SI2 conceptualization grant to get input from various stakeholders and to identify broad approaches that might lead to a solution. In this position paper we detail the major concerns articulated by the application code developers, as well as the emerging trends in the utilization of programming abstractions that we found through these workshops.

  19. Recent progress of an integrated implosion code and modeling of element physics

    International Nuclear Information System (INIS)

    Nagatomo, H.; Takabe, H.; Mima, K.; Ohnishi, N.; Sunahara, A.; Takeda, T.; Nishihara, K.; Nishiguchu, A.; Sawada, K.

    2001-01-01

The physics of inertial fusion is based on a variety of elements such as compressible hydrodynamics, radiation transport, non-ideal equations of state, non-LTE atomic processes, and relativistic laser-plasma interaction. In addition, the implosion process is not in a stationary state, and fluid dynamics, energy transport and instabilities should be solved simultaneously. In order to study such complex physics, an integrated implosion code including all physics important in the implosion process should be developed. The details of the physics elements should be studied, and the resulting numerical models installed in the integrated code, so that the implosion can be simulated on available computers within realistic CPU time. This task can therefore be separated into two parts. One is to integrate all physics elements into a code, which is strongly related to the development of the hydrodynamic equation solver. We have developed a 2-D integrated implosion code which solves mass, momentum, electron energy, ion energy, equations of state, laser ray-tracing, laser absorption, radiation, surface tracking and so on. Reasonable results in simulating Rayleigh-Taylor instability and cylindrical implosions have been obtained using this code. The other part is code development for each element of physics and the verification of these codes. We have made progress in developing a nonlocal electron transport code and 2- and 3-dimensional radiation hydrodynamic codes. (author)

  20. Convenience of Statistical Approach in Studies of Architectural Ornament and Other Decorative Elements Specific Application

    Science.gov (United States)

    Priemetz, O.; Samoilov, K.; Mukasheva, M.

    2017-11-01

Ornament is a living phenomenon in modern architectural theory and a common element in the practice of design and construction; it has been an important aspect of shaping for millennia. Descriptions of the methods of its application occupy a large place in studies on the theory and practice of architecture. However, the saturation of compositions with ornamentation and the specificity of its themes and forms have not yet been sufficiently studied, and this aspect requires the accumulation of additional knowledge. Applying quantitative methods to the types of plastic solutions and the thematic diversity of the facade compositions of buildings constructed in different periods creates another tool for an objective analysis of the development of ornament. The paper demonstrates the application of this approach to studying the features of architectural development in Kazakhstan from the end of the XIX century to the XXI century.

  1. The effect of traditional architecture elements on architectural and planning formation to develop and raise the efficiency of using traditional energy (case study: Crater/Aden, Yemen)

    International Nuclear Information System (INIS)

    Ghanem, Wadee Ahmed

    2006-01-01

This paper discusses the role of architecture in the centre city of Aden (Crater), Republic of Yemen, whose historical traditional architecture is a unique example with many elements that make the buildings of this city effective in conserving traditional energy sources. This architecture is characterized by courtyards and high ceilings for suitable air circulation, and the main building materials used are local and environmental, such as stone, wood and limestone (pumice). The research aims at studying and analyzing the planning and architectural characteristics of this city through a study of some examples of its buildings, in order to recognize the role of traditional building in saving traditional energy by studying the building materials, ventilation systems, orientation and openings, and to use these elements to raise the efficiency of using traditional energy resources. The research arrives at several results, such as: 1. Urban planning side: a. Elements of urban planning, represented in the masses and openings, and their environmental role. b. Methods of forming the urban plan. c. Sequence in the arrangement of urban planning elements. 2. Architectural side: a. Ratio between solid and void. b. Opening shapes. c. Internal courtyards. d. Unique architectural elements (mashrabiyas (oriels), sky lines, opening coverings, etc.). e. Building materials used. f. Building construction methods. g. Kinds of walls. (Author)

  2. Impacts of traditional architecture on the use of wood as an element of facade covering in Serbian contemporary architecture

    Directory of Open Access Journals (Sweden)

    Ivanović-Šekularac Jelena

    2011-01-01

Full Text Available The worldwide trend towards the re-use of wood and wood products as materials for the construction and covering of architectural structures is present not only because of the need to meet aesthetic, artistic and formal requirements, or to seek inspiration in a return to tradition and nature, but also because of their ecological, economic and energetic feasibility. Furthermore, the use of wood fits into contemporary trends of sustainable development and the application of modern technical and technological solutions in the production of materials, in order to maintain a connection with nature, the environment and tradition. In this study the author focuses on wood and wood products as elements of facade covering on buildings in our country, in order to extend knowledge about the possibilities and limitations of their use and to create a basis for their greater and correct application. The subject of this research is the application of wood and wood products as exterior covering elements, in combination with other materials, in our traditional and contemporary houses, with an emphasis on the functional and representational qualities and the various possibilities of wood. All the factors that affect the application of wood and wood products are analyzed, and conclusions are drawn about the manner of their implementation and the types of protection for wood and wood products. The development of modern technological solutions in wood processing has led to the production of wood-based composite materials that are highly resistant, stable and much longer lasting than solid wood, while maintaining, in an aesthetic sense, all the characteristics that make wood unique and inimitable. This is why modern wood-based facade coverings should be applied in the exteriors of modern architectural buildings in Serbia, with the use of untreated solid wood reduced to a minimum.

  3. High performance 3D neutron transport on petascale and hybrid architectures within APOLLO3 code

    International Nuclear Information System (INIS)

    Jamelot, E.; Dubois, J.; Lautard, J-J.; Calvin, C.; Baudron, A-M.

    2011-01-01

    APOLLO3 code is a common project of CEA, AREVA and EDF for the development of a new-generation system for core physics analysis. We present here the parallelization of two deterministic transport solvers of APOLLO3: MINOS, a simplified 3D transport solver on structured Cartesian and hexagonal grids, and MINARET, a transport solver based on triangular meshes in 2D and prismatic ones in 3D. We used two different techniques to accelerate MINOS: a domain decomposition method and a GPU-accelerated algorithm. The domain decomposition is based on the Schwarz iterative algorithm, with Robin boundary conditions to exchange information. The Robin parameters influence the convergence, and we detail how we optimized the choice of these parameters. MINARET parallelization is based on the calculation of angular directions using explicit message passing. Fine-grain parallelization is also available for each angular direction, using shared-memory multithreaded acceleration. Many performance results are presented on massively parallel architectures using more than 10³ cores and on hybrid architectures using some tens of GPUs. This work contributes to the HPC development in reactor physics at the CEA Nuclear Energy Division. (author)
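    The Schwarz iteration with Robin interface conditions used to accelerate MINOS can be illustrated on a toy problem. The sketch below is a hypothetical 1D finite-difference analogue, not APOLLO3 code: two subdomains solve -u'' = 1 on (0, 1) with u(0) = u(1) = 0 (exact solution x(1 - x)/2) and exchange Robin data at the interface x = 0.5. The Robin parameter p is the quantity whose choice the authors optimize; p = 2 happens to be optimal for this continuous two-subdomain model problem.

    ```python
    import numpy as np

    m, h, p = 100, 0.005, 2.0           # unknowns and grid step per half-domain

    def solve_half(g):
        """Solve -u'' = 1 on one half-domain, nodes ordered from the
        (eliminated) homogeneous Dirichlet end to the interface node,
        with the Robin condition du/dn + p*u = g at the interface."""
        A = np.zeros((m, m))
        b = np.full(m, h * h)           # interior rows scaled by h^2
        for i in range(m):
            A[i, i] = 2.0
            if i > 0:
                A[i, i - 1] = -1.0
            if i < m - 1:
                A[i, i + 1] = -1.0
        A[-1, :] = 0.0                  # first-order Robin closure
        A[-1, -2] = -1.0 / h
        A[-1, -1] = 1.0 / h + p
        b[-1] = g
        return np.linalg.solve(A, b)

    g1 = g2 = 0.0
    for _ in range(20):                 # Jacobi-style (parallel) exchange
        u1, u2 = solve_half(g1), solve_half(g2)
        # Each subdomain receives the neighbour's inward flux plus p
        # times the neighbour's interface trace.
        g1, g2 = (-(u2[-1] - u2[-2]) / h + p * u2[-1],
                  -(u1[-1] - u1[-2]) / h + p * u1[-1])

    x = np.arange(1, m + 1) * h         # nodes of the left half-domain
    err = np.max(np.abs(u1 - x * (1.0 - x) / 2.0))
    ```

    Varying p in this sketch reproduces the qualitative behaviour described in the abstract: the subdomain solves are unchanged, but the number of exchanges needed for convergence depends strongly on the Robin parameter.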

  4. Architecture

    OpenAIRE

    Clear, Nic

    2014-01-01

    When discussing science fiction’s relationship with architecture, the usual practice is to look at the architecture “in” science fiction—in particular, the architecture in SF films (see Kuhn 75-143) since the spaces of literary SF present obvious difficulties as they have to be imagined. In this essay, that relationship will be reversed: I will instead discuss science fiction “in” architecture, mapping out a number of architectural movements and projects that can be viewed explicitly as scien...

  5. Architecture and program structures for a special purpose finite element computer

    Energy Technology Data Exchange (ETDEWEB)

    Norrie, D.H.; Norrie, C.W.

    1983-01-01

    The development of very large scale integration (VLSI) has made special-purpose computers economically possible. With such a machine, the loss of flexibility compared with a general-purpose computer can be offset by the increased speed which can be obtained by tailoring the architecture to the particular problem or class of problem. The first kind of special-purpose machine has its architecture modelled on the physical structure of the problem and the second kind has its design tailored to the computational algorithm used. The parallel finite element machine (PARFEM) being designed at the University of Calgary for the solution of finite element problems is of the second kind. Its conceptual design is described and progress to date outlined. 14 references.

  6. Archaeometric characterization and provenance determination of sculptures and architectural elements from Gerasa, Jordan

    Science.gov (United States)

    Al-Bashaireh, Khaled

    2018-02-01

    This study aims at the identification of the provenance of white marble sculptures and architectural elements uncovered from the archaeological site of Gerasa and neighboring areas, north Jordan. Most of the marbles are probably of the Roman or Byzantine periods. Optical microscopy, X-ray diffraction, and mass spectrometry were used to investigate petrographic, mineralogical and isotopic characteristics of the samples, respectively. Analytical results were compared with the main reference databases of known Mediterranean marble quarries exploited in antiquity. The collected data show that the most likely main sources of the sculptures were the Greek marble quarries of Paros-2 (Lakkoi), Penteli (Mount Pentelikon, Attica), and Thasos-3 (Thasos Island, Cape Vathy, Aliki); the Asia Minor marble quarries of Proconessus-1 (Marmara) and Aphrodisias (Aphrodisias); and the Italian quarry of Carrara (Apuan Alps). Similarly, the Asia Minor quarries of the fine-grained Docimium (Afyon) and the coarse-grained Proconessus-1 (Marmara) and Thasos-3 are the most likely sources of the architectural elements. The results agree with published data on the wide use of these marbles for sculpture and architectural elements.
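    The comparison of analytical results against quarry reference databases can be sketched as a nearest-centroid match on the stable-isotope signature. The centroid values below are made up for illustration; real assignments rely on published quarry databases, additional variables (grain size, mineralogy) and overlap analysis.

    ```python
    import numpy as np

    # hypothetical quarry reference centroids: (delta13C, delta18O)
    quarries = {
        "Carrara":     (2.0, -1.8),
        "Proconnesus": (2.5, -2.5),
        "Penteli":     (2.8, -6.0),
    }

    def nearest_quarry(d13c, d18o):
        """Return the reference quarry whose isotopic centroid lies
        closest to the sample's measured signature."""
        return min(quarries,
                   key=lambda q: np.hypot(quarries[q][0] - d13c,
                                          quarries[q][1] - d18o))
    ```

    In practice a sample is only assigned when it falls inside a quarry's published isotopic field, not merely closest to its centroid.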

  7. Photo-Modeling and Cloud Computing. Applications in the Survey of Late Gothic Architectural Elements

    Science.gov (United States)

    Casu, P.; Pisu, C.

    2013-02-01

    This work proposes the application of the latest methods of photo-modeling to the study of Gothic architecture in Sardinia. The aim is to consider the versatility and ease of use of such documentation tools in order to study architecture and its ornamental details. The paper illustrates a procedure of integrated survey and restitution, with the purpose to obtain an accurate 3D model of some gothic portals. We combined the contact survey and the photographic survey oriented to the photo-modelling. The software used is 123D Catch by Autodesk an Image Based Modelling (IBM) system available free. It is a web-based application that requires a few simple steps to produce a mesh from a set of not oriented photos. We tested the application on four portals, working at different scale of detail: at first the whole portal and then the different architectural elements that composed it. We were able to model all the elements and to quickly extrapolate simple sections, in order to make a comparison between the moldings, highlighting similarities and differences. Working in different sites at different scale of detail, have allowed us to test the procedure under different conditions of exposure, sunshine, accessibility, degradation of surface, type of material, and with different equipment and operators, showing if the final result could be affected by these factors. We tested a procedure, articulated in a few repeatable steps, that can be applied, with the right corrections and adaptations, to similar cases and/or larger or smaller elements.

  8. Apolux: an innovative computer code for daylight design and analysis in architecture and urbanism

    Energy Technology Data Exchange (ETDEWEB)

    Claro, A.; Pereira, F.O.R.; Ledo, R.Z. [Santa Catarina Federal Univ., Florianopolis, SC (Brazil)

    2005-07-01

    The main capabilities of a new computer program for calculating and analyzing daylighting in architectural spaces are discussed. Apolux 1.0 was designed to use three-dimensional files generated by graphic editors in the data exchange file (DXF) format, and was developed to integrate the characteristics of an architect's design process. An example of its use in a design context is presented. The program offers fast and flexible manipulation of models under different visualization conditions. The light-physics algorithm is based on the radiosity method, representing the surfaces through finite elements divided into small triangular units of area, each of which is confronted with all the others. The form factors of each triangle with respect to all the others are determined in the primary calculation. Visible directions of the sky are also included, according to the modular units of a subdivided globe. Following these primary calculations, different successive daylighting solutions can be determined under different sky conditions. The program can also change the properties of the materials and quickly recalculate the solutions. The program has been applied to an office building in Florianopolis, Brazil. The four stages of design include: initial discussion with the architects about the conceptual possibilities; development of a comparative study based on two architectural designs with different conceptual elements regarding daylighting exploitation, in order to compare the internal daylighting levels and distribution of the two options exposed to the same external conditions; study of solar shading devices for specific facades; and simulations to test the performance of different designs. The program has proven to be very flexible, with reliable results, and can incorporate real-sky situations through the input of spherical-model luminance values of the real sky. 3 refs., 14 figs.
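    The radiosity core described above, in which patches exchange light according to precomputed form factors, reduces to a linear system. A minimal sketch with illustrative numbers (not Apolux's triangulated model): F[i, j] is the form factor, i.e. the fraction of the energy leaving patch i that reaches patch j, and the radiosity vector B solves B = E + diag(rho) F B.

    ```python
    import numpy as np

    F = np.array([[0.0, 0.4, 0.3],      # hypothetical form factors
                  [0.4, 0.0, 0.3],
                  [0.3, 0.3, 0.0]])
    rho = np.array([0.6, 0.5, 0.2])     # diffuse reflectances per patch
    E = np.array([10.0, 0.0, 0.0])      # only patch 0 emits (the "sky")

    # solve (I - diag(rho) F) B = E for the patch radiosities
    B = np.linalg.solve(np.eye(3) - rho[:, None] * F, E)
    ```

    In Apolux the expensive step is assembling F from the triangle subdivision and the subdivided sky globe; once that primary calculation is done, changing material properties only requires re-solving the system, which is why recalculation is fast.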

  9. Semantic enrichment of medical forms - semi-automated coding of ODM-elements via web services.

    Science.gov (United States)

    Breil, Bernhard; Watermann, Andreas; Haas, Peter; Dziuballe, Philipp; Dugas, Martin

    2012-01-01

    Semantic interoperability is an unsolved problem that occurs when working with medical forms from different information systems or institutions. Standards like ODM or CDA assure structural homogenization, but in order to compare elements from different data models it is necessary to use semantic concepts and codes at the item level of those structures. We developed and implemented a web-based tool that enables a domain expert to perform semi-automated coding of ODM files. For each item it is possible to query web services that return unique concept codes, without leaving the context of the document. Although a totally automated coding was not feasible, we have implemented a dialog-based method to perform an efficient coding of all data elements in the context of the whole document. The proportion of codable items was comparable to results from previous studies.
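    The semi-automated coding step can be sketched as follows. The lookup table is a stub standing in for the remote terminology web services, and the UMLS-style concept codes are illustrative only.

    ```python
    # stub standing in for a remote terminology web service
    CANDIDATES = {
        "body weight": [("C0005910", "Body Weight")],
        "heart rate": [("C0018810", "Heart Rate")],
    }

    def code_items(item_labels):
        """Attach the first candidate concept code to each ODM item label;
        a real tool would let the domain expert choose among candidates
        in a dialog, in the context of the whole document."""
        coded, uncoded = {}, []
        for label in item_labels:
            hits = CANDIDATES.get(label.lower(), [])
            if hits:
                coded[label] = hits[0][0]
            else:
                uncoded.append(label)   # left for manual review
        return coded, uncoded
    ```

    The ratio of coded to uncoded labels returned here corresponds to the "proportion of codable items" the study reports.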

  10. Infill architecture: Design approaches for in-between buildings and 'bond' as integrative element

    Directory of Open Access Journals (Sweden)

    Alfirević Đorđe

    2015-01-01

    Full Text Available The aim of the paper is to draw attention to the view that the two key elements in achieving good quality of architectural infill in the immediate, current surroundings are the selection of an optimal creative method of infill architecture and the adequate application of 'the bond' as an integrative element. The success and quality of architectural infill mainly depend on the assessment of various circumstances, but also on the professionalism, creativity, sensibility and, finally, innovativeness of the architect. In order for the infill procedure to be carried out adequately, it is necessary to assess the quality of the current surroundings into which the object will be integrated, and then to choose the creative approach that will allow the object to establish an optimal dialogue with its surroundings. On a wider scale, both theory and practice differentiate three main creative approaches to infill objects: (a) the mimetic approach (mimesis), (b) the associative approach and (c) the contrasting approach. Which of the stated approaches will be chosen depends primarily on whether the existing physical structure into which the object is being infilled is 'distinct', 'specific' or 'indistinct', but it also depends on the inclination of the designer. 'The bond' is a term which in architecture denotes an element or zone of one object, but in some instances it can refer to the whole object articulated in a specific way, with the aim of resolving the visual conflict that often arises when there is a clash between the existing objects and the newly designed or reconstructed object. This paper provides an in-depth analysis of different types of bonds, such as 'direction as bond', 'cornice as bond', 'structure as bond', 'texture as bond' and 'material as bond', which indicate the complexity and multiple layers of the design process of object interpolation.

  11. Mechanical modelling of PCI with FRAGEMA and CEA finite element codes

    International Nuclear Information System (INIS)

    Joseph, J.; Bernard, Ph.; Atabek, R.; Chantant, M.

    1983-01-01

    In the framework of their common program, CEA and FRAGEMA have undertaken the mechanical modelling of PCI. In the first step, two different codes, TITUS and VERDON, have been tested by FRAGEMA and CEA respectively. Whereas both codes use a finite element method to describe the thermomechanical behaviour of a fuel element, their input models differ: to take into account the presence of cracks in UO2, an axisymmetric two-dimensional mesh pattern and the Drucker-Prager criterion are used in VERDON, and a 3D equivalent method in TITUS. Two rods have been studied with these two methods: PRISCA 04bis and PRISCA 104, which were ramped in SILOE. The results show that the stresses and strains are the same with the two codes. These methods are further applied to the complete series of rods in the common ramp test program of FRAGEMA and CEA. (author)

  12. A framework for developing finite element codes for multi-disciplinary applications.

    OpenAIRE

    Dadvand, Pooyan

    2007-01-01

    The world of computing simulation has experienced great progress in recent years and faces ever more exigent multi-disciplinary challenges to satisfy new demands. The increasing importance of solving multi-disciplinary problems makes developers pay more attention to these problems and deal with the difficulties involved in developing software in this area. Conventional finite element codes have several difficulties in dealing with multi-disciplinary problems. Many of these codes are d...

  13. Development of three-dimensional transport code by the double finite element method

    International Nuclear Information System (INIS)

    Fujimura, Toichiro

    1985-01-01

    Development of a three-dimensional neutron transport code by the double finite element method is described. Both the Galerkin and variational methods are adopted to solve the problem, and their characteristics are compared. Computational results of the collocation method, developed as a technique for the variational one, are illustrated in comparison with those of an Sn code. (author)

  14. Modeling in architectural-planning solutions of agrarian technoparks as elements of the infrastructure

    Science.gov (United States)

    Abdrassilova, Gulnara S.

    2017-09-01

    In the context of the development of agriculture as the driver of the economy of Kazakhstan, it is imperative to study new types of agrarian constructions (agroparks, agrotourist complexes, "vertical" farms, conservatories, greenhouses) that can be combined into complexes - agrarian technoparks. The creation of agrarian technoparks as elements of the infrastructure of the agglomeration shall ensure a breakthrough in the production, storage and recycling of agrarian goods. Modeling of architectural-planning solutions of agrarian technoparks supports the development of the theory and practice of designing such objects based on innovative approaches.

  15. Modeling approach for annular-fuel elements using the ASSERT-PV subchannel code

    International Nuclear Information System (INIS)

    Dominguez, A.N.; Rao, Y.

    2012-01-01

    The internally and externally cooled annular fuel (hereafter called annular fuel) is under consideration at Atomic Energy of Canada Limited (AECL) as a new high burn-up fuel bundle design for its current and its Generation IV reactors. An assessment of different options to model a bundle fuelled with annular fuel elements is presented. Two options are discussed: 1) modify the subchannel code ASSERT-PV to handle multiple types of elements in the same bundle, and 2) couple ASSERT-PV with an external application. Based on this assessment, the selected option is to couple ASSERT-PV with the thermalhydraulic system code CATHENA. (author)

  16. Tri-Lab Co-Design Milestone: In-Depth Performance Portability Analysis of Improved Integrated Codes on Advanced Architecture.

    Energy Technology Data Exchange (ETDEWEB)

    Hoekstra, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Simon David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Richards, David [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bergen, Ben [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-01

    This milestone is a tri-lab deliverable supporting ongoing Co-Design efforts impacting applications in the Integrated Codes (IC) and Advanced Technology Development and Mitigation (ATDM) program elements. In FY14, the tri-labs looked at porting proxy applications to technologies of interest for ATS procurements. In FY15, a milestone was completed evaluating proxy applications in multiple programming models, and in FY16, a milestone was completed focusing on the migration of lessons learned back into production code development. This year, the co-design milestone focuses on extracting the knowledge gained and/or code revisions back into production applications.

  17. Adaptive Code Division Multiple Access Protocol for Wireless Network-on-Chip Architectures

    Science.gov (United States)

    Vijayakumaran, Vineeth

    Massive levels of integration following Moore's Law ushered in a paradigm shift in the way on-chip interconnections are designed. With a higher and higher number of cores on the same die, traditional bus-based interconnections are no longer a scalable communication infrastructure. On-chip networks were proposed to enable a scalable plug-and-play mechanism for interconnecting hundreds of cores on the same chip. Wired interconnects between the cores in a traditional Network-on-Chip (NoC) system become a bottleneck as the number of cores increases, raising the latency and energy needed to transmit signals over them. Hence, many alternative emerging interconnect technologies have been proposed, namely 3D, photonic and multi-band RF interconnects. Although they provide better connectivity, higher speed and higher bandwidth compared to wired interconnects, they also face challenges with heat dissipation and manufacturing difficulties. On-chip wireless interconnects are another proposed alternative, which needs no physical interconnection layout, as data travels over the wireless medium. They are integrated into a hybrid NoC architecture consisting of both wired and wireless links, which provides higher bandwidth, lower latency, smaller area overhead and reduced energy dissipation in communication. However, as the bandwidth of the wireless channels is limited, an efficient media access control (MAC) scheme is required to enhance the utilization of the available bandwidth. This thesis proposes using a multiple-access mechanism such as Code Division Multiple Access (CDMA) to enable multiple transmitter-receiver pairs to send data over the wireless channel simultaneously. It will be shown that such a hybrid wireless NoC with an efficient CDMA-based MAC protocol can significantly increase the performance of the system while lowering the energy dissipation in data transfer. In this work it is shown that the wireless NoC with the proposed CDMA based MAC protocol
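    The CDMA mechanism the thesis builds its MAC protocol around can be sketched in a few lines (an illustrative behavioural model, not the proposed NoC hardware): orthogonal Walsh spreading codes let concurrent transmitter-receiver pairs superpose their chips on the shared wireless channel, and each receiver separates its stream by correlation.

    ```python
    import numpy as np

    def walsh(n):
        """Build an n x n Walsh/Hadamard matrix; its rows are mutually
        orthogonal spreading codes (n a power of two)."""
        H = np.array([[1]])
        while H.shape[0] < n:
            H = np.block([[H, H], [H, -H]])
        return H

    codes = walsh(4)                    # supports 4 concurrent pairs
    bits_a = np.array([1, -1, 1])       # antipodal bits of sender A
    bits_b = np.array([-1, -1, 1])      # antipodal bits of sender B

    # chips from both senders superpose on the shared channel
    channel = np.kron(bits_a, codes[1]) + np.kron(bits_b, codes[2])

    def despread(signal, code):
        chips = signal.reshape(-1, code.size)
        return np.sign(chips @ code)    # correlate and slice

    rx_a = despread(channel, codes[1])
    rx_b = despread(channel, codes[2])
    ```

    Because the codes are orthogonal, each correlation cancels the other sender's contribution exactly, which is what allows simultaneous transmissions without arbitration on the wireless channel.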

  18. Finite Element Analysis of Film Stack Architecture for Complementary Metal-Oxide-Semiconductor Image Sensors.

    Science.gov (United States)

    Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang

    2017-05-02

    Image sensors are the core components of computer, communication, and consumer electronic products. Complementary metal oxide semiconductor (CMOS) image sensors have become the mainstay of image-sensing developments, but are prone to leakage current. In this study, we simulate the CMOS image sensor (CIS) film stacking process by finite element analysis. To elucidate the relationship between the leakage current and stack architecture, we compare the simulated and measured leakage currents in the elements. Based on the analysis results, we further improve the performance by optimizing the architecture of the film stacks or changing the thin-film material. The material parameters are then corrected to improve the accuracy of the simulation results. The simulated and experimental results confirm a positive correlation between measured leakage current and stress. This trend is attributed to the structural defects induced by high stress, which generate leakage. Using this relationship, we can change the structure of the thin-film stack to reduce the leakage current and thereby improve the component life and reliability of the CIS components.

  19. Finite Macro-Element Mesh Deformation in a Structured Multi-Block Navier-Stokes Code

    Science.gov (United States)

    Bartels, Robert E.

    2005-01-01

    A mesh deformation scheme consisting of two steps is developed for a structured multi-block Navier-Stokes code. The first step is a finite element solution of either user-defined or automatically generated macro-elements. Macro-elements are hexahedral finite elements created from a subset of points from the full mesh. When assembled, the finite element system spans the complete flow domain. Macro-element moduli vary according to the distance to the nearest surface, resulting in extremely stiff elements near a moving surface and very pliable elements away from boundaries. Solution of the finite element system for the imposed boundary deflections generally produces smoothly varying nodal deflections. The manner in which the distance to the nearest surface is computed has been found to critically influence the quality of the element deformation. The second step is a transfinite interpolation which distributes the macro-element nodal deflections to the remaining fluid mesh points. The scheme is demonstrated for several two-dimensional applications.
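    The second step, transfinite interpolation of the macro-element nodal deflections onto the interior mesh points, can be sketched for a single 2D block. This is a hypothetical scalar-deflection version for a unit block, not the code's own routine.

    ```python
    import numpy as np

    def tfi(edge_s, edge_n, edge_w, edge_e):
        """Blend known edge deflections into the block interior.
        edge_s/edge_n run along eta = 0/1 (length ni), edge_w/edge_e
        along xi = 0/1 (length nj); corners must agree where edges meet."""
        ni, nj = edge_s.size, edge_w.size
        xi = np.linspace(0.0, 1.0, ni)[:, None]
        eta = np.linspace(0.0, 1.0, nj)[None, :]
        return ((1 - eta) * edge_s[:, None] + eta * edge_n[:, None]
                + (1 - xi) * edge_w[None, :] + xi * edge_e[None, :]
                - (1 - xi) * (1 - eta) * edge_s[0] - (1 - xi) * eta * edge_n[0]
                - xi * (1 - eta) * edge_s[-1] - xi * eta * edge_n[-1])
    ```

    With matching corners the blend reproduces any deflection field of the form f(xi) + g(eta) exactly, which is why smooth macro-element nodal deflections propagate smoothly to the fluid mesh.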

  20. Looking back on 10 years of the ATLAS Metadata Interface. Reflections on architecture, code design and development methods

    International Nuclear Information System (INIS)

    Fulachier, J; Albrand, S; Lambert, F; Aidel, O

    2014-01-01

    The 'ATLAS Metadata Interface' framework (AMI) has been developed in the context of ATLAS, one of the largest scientific collaborations. AMI can be considered to be a mature application, since its basic architecture has been maintained for over 10 years. In this paper we describe briefly the architecture and the main uses of the framework within the experiment (TagCollector for release management and Dataset Discovery). These two applications, which share almost 2000 registered users, are superficially quite different, however much of the code is shared and they have been developed and maintained over a decade almost completely by the same team of 3 people. We discuss how the architectural principles established at the beginning of the project have allowed us to continue both to integrate the new technologies and to respond to the new metadata use cases which inevitably appear over such a time period.

  1. Thermomechanical DART code improvements for LEU VHD dispersion and monolithic fuel element analysis

    International Nuclear Information System (INIS)

    Taboada, H.; Saliba, R.; Moscarda, M.V.; Rest, J.

    2005-01-01

    A collaboration agreement between ANL/US DOE and CNEA Argentina in the area of Low Enriched Uranium Advanced Fuels has been in place since October 16, 1997, under the Implementation Arrangement for Technical Exchange and Cooperation in the Area of Peaceful Uses of Nuclear Energy. An annex concerning DART code optimization has been operative since February 8, 1999. Previously, as part of this annex, a visual FASTDART version and a DART THERMAL version were presented during the RERTR 2000, 2002 and RERTR 2003 meetings. During this past year the following activities were completed: optimization of the DART TM code Al diffusion parameters by testing predictions against reliable data from RERTR experiments; improvements to the 3-D thermo-mechanical version of the code for modeling the irradiation behavior of LEU U-Mo monolithic fuel. Concerning the first point, by optimizing the parameters of the theoretical expression for Al diffusion through the interaction product, reasonable agreement between DART temperature calculations and reliable RERTR PIE data was reached. The 3-D thermomechanical code complex is based upon a finite element thermal-elastic code named TERMELAS, with irradiation behavior provided by the DART code. An adequate and progressive process of coupling the calculations of both codes at each time step is currently being developed. Compatible thermal calculations between both codes were achieved. This is the first stage in benchmarking and validating the coupling process against RERTR PIE data. (author)

  2. Axisym finite element code: modifications for pellet-cladding mechanical interaction analysis

    International Nuclear Information System (INIS)

    Engelman, G.P.

    1978-10-01

    Local strain concentrations in nuclear fuel rods are known to be potential sites for failure initiation. Assessment of such strain concentrations requires a two-dimensional analysis of stress and strain in both the fuel and the cladding during pellet-cladding mechanical interaction. To provide such a capability in the FRAP (Fuel Rod Analysis Program) codes, the AXISYM code (a small finite element program developed at the Idaho National Engineering Laboratory) was modified to perform a detailed fuel rod deformation analysis. This report describes the modifications which were made to the AXISYM code to adapt it for fuel rod analysis and presents comparisons made between the two-dimensional AXISYM code and the FRACAS-II code. FRACAS-II is the one-dimensional (generalized plane strain) fuel rod mechanical deformation subcode used in the FRAP codes. Predictions of these two codes should be comparable away from the fuel pellet free ends if the state of deformation at the pellet midplane is near that of generalized plane strain. The excellent agreement obtained in these comparisons checks both the correctness of the AXISYM code modifications as well as the validity of the assumption of generalized plane strain upon which the FRACAS-II subcode is based

  3. PRIAM: A self consistent finite element code for particle simulation in electromagnetic fields

    International Nuclear Information System (INIS)

    Le Meur, G.; Touze, F.

    1990-06-01

    A 2 1/2-dimensional, relativistic particle simulation code is described. A short review of the mixed finite element method used is given. The treatment of the driving terms (charge and current densities) and of the initial and boundary conditions is described. Graphical results are shown

  4. SQA of finite element method (FEM) codes used for analyses of pit storage/transport packages

    Energy Technology Data Exchange (ETDEWEB)

    Russel, E. [Lawrence Livermore National Lab., CA (United States)

    1997-11-01

    This report contains viewgraphs on the software quality assurance of finite element method codes used for analyses of pit storage and transport projects. This methodology utilizes the ISO 9000-3: Guideline for application of 9001 to the development, supply, and maintenance of software, for establishing well-defined software engineering processes to consistently maintain high quality management approaches.

  5. Eigensolution of finite element problems in a completely connected parallel architecture

    Science.gov (United States)

    Akl, Fred A.; Morel, Michael R.

    1989-01-01

    A parallel algorithm for the solution of the generalized eigenproblem in linear elastic finite element analysis, [K][Φ] = [M][Φ][Ω], where [K] and [M] are of order N and [Ω] is of order q, is presented. The parallel algorithm is based on a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm has been successfully implemented on a tightly coupled multiple-instruction-multiple-data (MIMD) parallel processing computer, Cray X-MP. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor, or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macro-tasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts and the dimension of the subspace on the performance of the algorithm are investigated. For a 64-element rectangular plate, speed-ups of 1.86, 3.13, 3.18 and 3.61 are achieved on two, four, six and eight processors, respectively.
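    The speed-up and efficiency metrics used to evaluate the algorithm are simple ratios, S = T1/Tp and E = S/p. Applying them to the plate figures quoted above (a quick illustration only):

    ```python
    def speedup(t_serial, t_parallel):
        """S = T1 / Tp."""
        return t_serial / t_parallel

    def efficiency(s, p):
        """E = S / p for p processors."""
        return s / p

    # speed-ups reported for the 64-element rectangular plate
    reported = {2: 1.86, 4: 3.13, 6: 3.18, 8: 3.61}
    eff = {p: efficiency(s, p) for p, s in reported.items()}
    ```

    The falling efficiency (0.93 on two processors down to about 0.45 on eight) reflects the growing share of degrees of freedom on the global fronts, one of the effects the study investigates.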

  6. Parallel eigenanalysis of finite element models in a completely connected architecture

    Science.gov (United States)

    Akl, F. A.; Morel, M. R.

    1989-01-01

    A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis, [K][Φ] = [M][Φ][Ω], where [K] and [M] are of order N and [Ω] is of order q. The concurrent solution of the eigenproblem is based on the multifrontal/modified subspace method and is achieved in a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm was successfully implemented on a tightly coupled multiple-instruction multiple-data parallel processing machine, Cray X-MP. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macrotasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts and the dimension of the subspace on the performance of the algorithm are investigated. A parallel finite element dynamic analysis program, p-feda, is documented and the performance of its subroutines in a parallel environment is analyzed.

  7. Free material stiffness design of laminated composite structures using commercial finite element analysis codes

    DEFF Research Database (Denmark)

    Henrichsen, Søren Randrup; Lindgaard, Esben; Lund, Erik

    2015-01-01

    In this work optimum stiffness design of laminated composite structures is performed using the commercially available programs ANSYS and MATLAB. Within these programs a Free Material Optimization algorithm is implemented based on an optimality condition and a heuristic update scheme. The heuristic update scheme is needed because commercially available finite element analysis software is used. When using a commercial finite element analysis code it is not straightforward to implement a computationally efficient gradient-based optimization algorithm. Examples considered in this work are a clamped......, where full access to the finite element analysis core is granted. This comparison displays the possibility of using commercially available programs for stiffness design of laminated composite structures.

  8. A sliding point contact model for the finite element structures code EURDYN

    International Nuclear Information System (INIS)

    Smith, B.L.

    1986-01-01

    A method is developed by which sliding point contact between two moving deformable structures may be incorporated within a lumped-mass finite element formulation based on displacements. The method relies on a simple mechanical interpretation of the contact constraint in terms of equivalent nodal forces, and avoids the use of nodal connectivity via a master-slave arrangement or a pseudo contact element. The methodology has been implemented into the (2D axisymmetric) version of the EURDYN finite element program coupled to the hydro code SEURBNUK. Sample calculations are presented illustrating the use of the model in various contact situations. Effects due to separation and impact of structures are also included. (author)

  9. FEAST: a two-dimensional non-linear finite element code for calculating stresses

    International Nuclear Information System (INIS)

    Tayal, M.

    1986-06-01

    The computer code FEAST calculates stresses, strains, and displacements. The code is two-dimensional: either plane or axisymmetric calculations can be done. The code models elastic, plastic, creep, and thermal strains and stresses. Cracking can also be simulated. The finite element method is used to solve equations describing the following fundamental laws of mechanics: equilibrium; compatibility; constitutive relations; yield criterion; and flow rule. FEAST combines several unique features that permit large time-steps even in severely non-linear situations. The features include a special formulation permitting many finite elements to cross the boundary from elastic to plastic behaviour simultaneously; accommodation of large drops in yield strength due to changes in local temperature; and a three-step predictor-corrector method for plastic analyses. These features reduce computing costs. Comparisons against twenty analytical solutions and against experimental measurements show that predictions of FEAST are generally accurate to ± 5%.

  10. Establishing Base Elements of Perspective in Order to Reconstruct Architectural Buildings from Photographs

    Science.gov (United States)

    Dzwierzynska, Jolanta

    2017-12-01

    The use of perspective images, especially historical photographs, for retrieving information about the architectural environment they depict is a rapidly developing field. A photograph is a perspective image with a reliable geometric connection to reality, so it is possible to reverse the projection process. The aim of the present study is to establish the requirements that a photographic perspective representation should meet for reconstruction purposes, as well as to determine base elements of perspective such as the horizon line and the circle of depth, which is a key issue in any reconstruction. The starting point in the reconstruction process is a geometric analysis of the photograph, especially determination of the kind of perspective projection applied, which is defined by the building's location relative to the projection plane. Next, the proper constructions can be used. The paper addresses the problem of establishing base elements of perspective from the photographic image when camera calibration is impossible. It presents different geometric construction methods, selected depending on the starting assumptions; the methods described therefore appear to be universal, and they can be used even for poor-quality photographs with poor perspective geometry. Such constructions can be realized with computer aid when the photographs are in digital form, as presented in the paper. The accuracy of the applied methods depends on the accuracy of the photographic image as well as on drawing accuracy; however, it is sufficient for further reconstruction. Establishing the base elements of perspective as presented in the paper is especially useful in difficult reconstruction cases, when information about the reconstructed architectural form is lacking and it is necessary to rely on solid geometry.
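
One base element named above, the horizon line, can be recovered without camera calibration from two vanishing points, each obtained by intersecting a pair of image lines that are parallel in the scene. A minimal homogeneous-coordinates sketch (the image coordinates below are made-up examples of building edges in a two-point perspective):

```python
import numpy as np

def line(p, q):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    """Intersection of two homogeneous lines, returned as (x, y)."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

# Two pencils of image lines, parallel in the scene (hypothetical coordinates):
v1 = intersect(line((0, 0), (4, 1)), line((0, 2), (4, 2.25)))    # right VP
v2 = intersect(line((0, 0), (-4, 1)), line((0, 2), (-4, 2.25)))  # left VP
horizon = line(v1, v2)   # the horizon passes through both vanishing points
```

For a photograph taken with a level camera the two vanishing points share a height, so the recovered horizon comes out horizontal, which is a quick sanity check on the chosen edges.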

  11. MOVE-Pro: a low power and high code density TTA architecture

    NARCIS (Netherlands)

    He, Y.; She, D.; Mesman, B.; Corporaal, H.

    2011-01-01

    Transport Triggered Architectures (TTAs) possess many advantages, such as modularity, flexibility, and scalability. As an exposed datapath architecture, TTAs can effectively reduce the register file (RF) pressure in both the number of accesses and the number of RF ports. However, the conventional TTAs

  12. OpenCL code generation for low energy wide SIMD architectures with explicit datapath.

    NARCIS (Netherlands)

    She, D.; He, Y.; Waeijen, L.J.W.; Corporaal, H.; Jeschke, H.; Silvén, O.

    2013-01-01

    Energy efficiency is one of the most important aspects in designing embedded processors. The use of a wide SIMD processor architecture is a promising approach to build energy-efficient high performance embedded processors. In this paper, we propose a configurable wide SIMD architecture that utilizes

  13. A parallel 3-D discrete wavelet transform architecture using pipelined lifting scheme approach for video coding

    Science.gov (United States)

    Hegde, Ganapathi; Vaya, Pukhraj

    2013-10-01

    This article presents a parallel architecture for 3-D discrete wavelet transform (3-DDWT). The proposed design is based on the 1-D pipelined lifting scheme. The architecture is fully scalable beyond the present coherent Daubechies filter bank (9, 7). This 3-DDWT architecture has advantages such as no group of pictures restriction and reduced memory referencing. It offers low power consumption, low latency and high throughput. The computing technique is based on the concept that lifting scheme minimises the storage requirement. The application specific integrated circuit implementation of the proposed architecture is done by synthesising it using 65 nm Taiwan Semiconductor Manufacturing Company standard cell library. It offers a speed of 486 MHz with a power consumption of 2.56 mW. This architecture is suitable for real-time video compression even with large frame dimensions.
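
The storage-minimizing property of the lifting scheme mentioned above comes from computing the transform in place with a predict step and an update step. A minimal illustration using the Haar wavelet (the article's hardware pipelines the lifting steps of the Daubechies (9, 7) bank, which follow the same pattern with more coefficients):

```python
import numpy as np

def haar_lift_forward(x):
    """One level of the Haar DWT via lifting (predict, then update)."""
    even = x[0::2].astype(float)
    odd = x[1::2].astype(float)
    d = odd - even          # predict: detail = odd sample minus its prediction
    s = even + d / 2.0      # update: approximation preserves the local mean
    return s, d

def haar_lift_inverse(s, d):
    """Invert the lifting steps in reverse order (perfect reconstruction)."""
    even = s - d / 2.0
    odd = d + even
    x = np.empty(even.size + odd.size)
    x[0::2] = even
    x[1::2] = odd
    return x
```

Each lifting step overwrites one polyphase channel using only the other, so the transform needs no auxiliary buffer and is trivially invertible by running the steps backwards with the signs flipped.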

  14. Finite element study of scaffold architecture design and culture conditions for tissue engineering.

    Science.gov (United States)

    Olivares, Andy L; Marsal, Elia; Planell, Josep A; Lacroix, Damien

    2009-10-01

    Tissue engineering scaffolds provide temporary mechanical support for tissue regeneration and transfer the global mechanical load into mechanical stimuli for cells through their architecture. In this study the interactions between scaffold pore morphology, the mechanical stimuli developed at the cell microscopic level, and the culture conditions applied at the macroscopic scale are studied for two regular scaffold structures. Gyroid and hexagonal scaffolds of 55% and 70% porosity were modeled in a finite element analysis and were subjected to an inlet fluid flow or a compressive strain. A mechanoregulation theory based on scaffold shear strain and fluid shear stress was applied to determine the influence of each structure on the mechanical stimuli under initial conditions. Results indicate that the distribution of shear stress induced by fluid perfusion is very dependent on pore distribution within the scaffold. Gyroid architectures provide better accessibility of the fluid than hexagonal structures. Based on the mechanoregulation theory, the differentiation process in these structures was more sensitive to inlet fluid flow than to axial strain of the scaffold. This study provides a computational approach to determine the mechanical stimuli at the cellular level when cells are cultured in a bioreactor and to relate mechanical stimuli with cell differentiation.
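
Mechanoregulation theories of this kind typically combine a strain term and a fluid term into a scalar stimulus that is thresholded into a predicted tissue phenotype. The sketch below uses a Prendergast-type stimulus; the constants and thresholds are the commonly cited ones and serve only as an illustration, not as the values used in this particular study.

```python
def tissue_phenotype(shear_strain, fluid_velocity, a=0.0375, b=3.0):
    """Prendergast-type stimulus S = strain/a + velocity/b (velocity in um/s).

    Illustrative constants and thresholds; the study above applies its own
    mechanoregulation formulation at each point of the FE mesh.
    """
    S = shear_strain / a + fluid_velocity / b
    if S < 1.0:
        return "bone"
    if S < 3.0:
        return "cartilage"
    return "fibrous tissue"
```

Evaluated over every element of a scaffold model, such a rule turns the FE strain and perfusion fields into a map of predicted differentiation outcomes, which is how pore morphology ends up influencing the predicted tissue type.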

  15. FLAME: A finite element computer code for contaminant transport in variably-saturated media

    International Nuclear Information System (INIS)

    Baca, R.G.; Magnuson, S.O.

    1992-06-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLAME computer code, is designed to simulate subsurface contaminant transport in variably saturated media. The code can be applied to model two-dimensional contaminant transport in an arid site vadose zone or in an unconfined aquifer. In addition, the code has the capability to describe transport processes in a porous medium with discrete fractures. This report presents the following: a description of the conceptual framework and mathematical theory, derivations of the finite element techniques and algorithms, computational examples that illustrate the capability of the code, and input instructions for the general use of the code. The development of the FLAME computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A
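
The class of equation FLAME solves is an advection-dispersion transport equation. A deliberately simple 1-D finite difference sketch of that equation class (FLAME itself uses a 2-D finite element discretization of variably saturated media; the scheme, grid, and boundary treatment below are illustrative assumptions):

```python
import numpy as np

def transport_1d(c0, v, D, dx, dt, steps):
    """March dc/dt = -v dc/dx + D d2c/dx2 on a periodic 1-D grid.

    Upwind advection (assumes v > 0) plus central dispersion, explicit in time.
    Stable for v*dt/dx <= 1 and 2*D*dt/dx**2 <= 1.
    """
    c = np.asarray(c0, dtype=float).copy()
    for _ in range(steps):
        adv = -v * (c - np.roll(c, 1)) / dx
        disp = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
        c = c + dt * (adv + disp)
    return c
```

Because both flux terms are written in conservative differences on a periodic grid, total contaminant mass is preserved exactly while the plume advects downstream and spreads.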

  16. Validation of the 3D finite element transport theory code EVENT for shielding applications

    International Nuclear Information System (INIS)

    Warner, Paul; Oliveira, R.E. de

    2000-01-01

    This paper is concerned with the validation of the 3D deterministic neutral-particle transport theory code EVENT for shielding applications. The code is based on the finite element-spherical harmonics (FE-PN) method, which has been extensively developed over the last decade. A general multi-group, anisotropic scattering formalism enables the code to address realistic steady state and time dependent, multi-dimensional coupled neutron/gamma radiation transport problems involving high scattering and deep penetration alike. The powerful geometrical flexibility and competitive computational effort make the code an attractive tool for shielding applications. In recognition of this, EVENT is currently in the process of being adopted by the UK nuclear industry. The theory behind EVENT is described and its numerical implementation is outlined. Numerical results obtained by the code are compared with predictions of the Monte Carlo code MCBEND and also with the results from benchmark shielding experiments. In particular, results are presented for the ASPIS experimental configuration for both neutron and gamma ray calculations using the BUGLE 96 nuclear data library. (author)

  17. FLAME: A finite element computer code for contaminant transport in variably-saturated media

    Energy Technology Data Exchange (ETDEWEB)

    Baca, R.G.; Magnuson, S.O.

    1992-06-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLAME computer code, is designed to simulate subsurface contaminant transport in variably saturated media. The code can be applied to model two-dimensional contaminant transport in an arid site vadose zone or in an unconfined aquifer. In addition, the code has the capability to describe transport processes in a porous medium with discrete fractures. This report presents the following: a description of the conceptual framework and mathematical theory, derivations of the finite element techniques and algorithms, computational examples that illustrate the capability of the code, and input instructions for the general use of the code. The development of the FLAME computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A.

  18. Determination of Major and Minor Elements in the Code River Sediments

    International Nuclear Information System (INIS)

    Sri Murniasih; Sukirno; Bambang Irianto

    2007-01-01

    Analysis of major and minor elements in the Code River sediments has been carried out. The aim of this research is to determine the concentrations of major and minor elements in the Code River sediments from upstream to downstream. The instrument used was an X-ray fluorescence spectrometer with a Si(Li) detector. The results show that the major elements were Fe (1.66 ± 0.1% - 4.20 ± 0.7%) and Ca (4.43 ± 0.6% - 9.08 ± 1.3%), while the minor elements were Ba (178.791 ± 21.1 ppm - 616.56 ± 59.4 ppm), Sr (148.22 ± 21.9 ppm - 410.25 ± 30.5 ppm), and Zr (9.71 ± 1.1 ppm - 22.11 ± 3.4 ppm). The ANAVA (analysis of variance) method, at a confidence level of α = 0.05, was used for the statistical test. It showed that the sampling location had a significant influence on the concentrations of major and minor elements in the sediment samples. (author)
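
The statistical test mentioned above is a one-way analysis of variance across sampling locations. Its F statistic can be written out directly from the sums of squares; the concentration values in the example are made-up illustrations, not the study's data:

```python
import numpy as np

def one_way_anova_F(groups):
    """F statistic of a one-way ANOVA over a list of 1-D sample arrays."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_data = np.concatenate(groups)
    grand = all_data.mean()
    k = len(groups)                       # number of groups (locations)
    n = all_data.size                     # total number of measurements
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / (k - 1)     # df between = k - 1
    ms_within = ss_within / (n - k)       # df within  = n - k
    return ms_between / ms_within

# Hypothetical Fe concentrations (%) at three sampling locations:
F = one_way_anova_F([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
```

The computed F is then compared with the critical value for (k-1, n-k) degrees of freedom at α = 0.05 to decide whether location significantly influences concentration.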

  19. Large Eddy Simulation of turbulent flows in compound channels with a finite element code

    International Nuclear Information System (INIS)

    Xavier, C.M.; Petry, A.P.; Moeller, S.V.

    2011-01-01

    This paper presents a numerical investigation of the developing flow in a compound channel formed by a rectangular main channel and a gap in one of the sidewalls. A three-dimensional Large Eddy Simulation code with the classic Smagorinsky model is introduced, where the transient flow is modeled through the conservation equations of mass and momentum of a quasi-incompressible, isothermal continuous medium. The Finite Element Method, a Taylor-Galerkin scheme and linear hexahedral elements are applied. Numerical results for the velocity profile show the development of a shear layer, in agreement with experimental results obtained with a Pitot tube and hot wires. (author)

  20. ABAQUS/EPGEN - a general purpose finite element code with emphasis on nonlinear applications

    International Nuclear Information System (INIS)

    Hibbitt, H.D.

    1984-01-01

    The article contains a summary description of ABAQUS, a finite element program designed for general use in nonlinear as well as linear structural problems, in the context of its application to nuclear structural integrity analysis. The article begins with a discussion of the design criteria and methods upon which the code development has been based. The engineering modelling capabilities, currently implemented in the program - elements, constitutive models and analysis procedures - are then described. Finally, a few demonstration examples are presented, to illustrate some of the program's features that are of interest in structural integrity analysis associated with nuclear power plants. (orig.)

  1. Modelling 3-D mechanical phenomena in a 1-D industrial finite element code: results and perspectives

    International Nuclear Information System (INIS)

    Guicheret-Retel, V.; Trivaudey, F.; Boubakar, M.L.; Masson, R.; Thevenin, Ph.

    2005-01-01

    Assessing fuel rod integrity in PWR reactors must reconcile two opposing goals: a one-dimensional finite element code (axial revolution symmetry) is needed to provide industrial results at the scale of the reactor core, while the main risk of cladding failure [e.g. pellet-cladding interaction (PCI)] arises from fully three-dimensional phenomena. First, parametric three-dimensional elastic calculations were performed to identify the parameters relevant to PCI (fragment number, pellet-cladding contact conditions, etc.). The axial fragment number as well as the friction coefficient are shown to play a major role in PCI, as opposed to the other parameters. Next, the main limitations of the one-dimensional hypothesis of the finite element code CYRANO3 are identified. To overcome these limitations, both two- and three-dimensional emulations of CYRANO3 were developed. These developments are shown to significantly improve the results provided by CYRANO3. (authors)

  2. Analysis of piping systems by finite element method using code SAP-IV

    International Nuclear Information System (INIS)

    Cizelj, L.; Ogrizek, D.

    1987-01-01

    Due to the extensive and repeated use of the computer code SAP-IV, we decided to install it on a VAX 11/750 machine. Installation required a large amount of programming due to the great discrepancies between the CDC (the original program version) and the VAX. Testing was performed primarily in the field of pipe elements, based on a comparison between results obtained with the codes PSAFE2, DOCIJEV, PIPESD and SAP-V. In addition, a model of a reactor pressure vessel with 3-D thick shell elements was built. The results show good agreement with those of the other programs mentioned above. Along with the package installation, graphical postprocessors are being developed for mesh plotting. (author)

  3. Implementation of Layered Decoding Architecture for LDPC Code using Layered Min-Sum Algorithm

    OpenAIRE

    Sandeep Kakde; Atish Khobragade; Shrikant Ambatkar; Pranay Nandanwar

    2017-01-01

    For the binary field and long code lengths, Low Density Parity Check (LDPC) codes approach Shannon-limit performance. LDPC codes provide remarkable error correction performance and therefore enlarge the design space for communication systems. In this paper, we compare different digital modulation techniques and find that the BPSK modulation technique is better than the other modulation techniques in terms of BER. It also gives the error performance of the LDPC decoder over an AWGN channel using the Min-Sum algori...
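
The kernel of a min-sum decoder, layered or flooding, is the check-node update: each outgoing edge carries the product of the signs and the minimum magnitude of all the *other* incoming LLRs. A minimal sketch of that standard update (not the authors' implementation; it assumes no incoming LLR is exactly zero):

```python
import numpy as np

def min_sum_check_update(llr):
    """Min-sum check-node update for the LLRs entering one check node.

    out[i] = (product of signs of llr[j], j != i) * min(|llr[j]|, j != i),
    computed in O(n) by tracking the two smallest magnitudes.
    """
    llr = np.asarray(llr, dtype=float)
    sign_prod = np.prod(np.sign(llr))          # product over all edges
    mags = np.abs(llr)
    m1_idx = int(np.argmin(mags))
    m1 = mags[m1_idx]                          # smallest magnitude
    m2 = float(np.min(np.delete(mags, m1_idx)))  # second-smallest magnitude
    # dividing out the edge's own sign: sign_prod * sign(llr[i])
    out = sign_prod * np.sign(llr) * m1
    out[m1_idx] = sign_prod * np.sign(llr[m1_idx]) * m2
    return out
```

Tracking only the two minima instead of sorting is what makes min-sum hardware-friendly; a layered schedule then applies this update one block-row of the parity-check matrix at a time.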

  4. The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding.

    Science.gov (United States)

    Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco

    2017-01-01

    The recent "deep learning revolution" in artificial neural networks had strong impact and widespread deployment for engineering applications, but the use of deep learning for neurocomputational modeling has been so far limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems.

  5. Collision detection of convex polyhedra on the NVIDIA GPU architecture for the discrete element method

    CSIR Research Space (South Africa)

    Govender, Nicolin

    2015-09-01

    Full Text Available consideration due to the architectural differences between CPU and GPU platforms. This paper describes the DEM algorithms and heuristics that are optimized for the parallel NVIDIA Kepler GPU architecture in detail. This includes a GPU optimized collision...

  6. Architecture of vagal motor units controlling striated muscle of esophagus: peripheral elements patterning peristalsis?

    Science.gov (United States)

    Powley, Terry L; Mittal, Ravinder K; Baronowsky, Elizabeth A; Hudson, Cherie N; Martin, Felecia N; McAdams, Jennifer L; Mason, Jacqueline K; Phillips, Robert J

    2013-12-01

    Little is known about the architecture of the vagal motor units that control esophageal striated muscle, in spite of the fact that these units are necessary, and responsible, for peristalsis. The present experiment was designed to characterize the motor neuron projection fields and terminal arbors forming esophageal motor units. Nucleus ambiguus compact formation neurons of the rat were labeled by bilateral intracranial injections of the anterograde tracer dextran biotin. After tracer transport, thoracic and abdominal esophagi were removed and prepared as whole mounts of muscle wall without mucosa or submucosa. Labeled terminal arbors of individual vagal motor neurons (n=78) in the esophageal wall were inventoried, digitized and analyzed morphometrically. The size of individual vagal motor units innervating striated muscle, throughout thoracic and abdominal esophagus, averaged 52 endplates per motor neuron, a value indicative of fine motor control. A majority (77%) of the motor terminal arbors also issued one or more collateral branches that contacted neurons, including nitric oxide synthase-positive neurons, of local myenteric ganglia. Individual motor neuron terminal arbors co-innervated, or supplied endplates in tandem to, both longitudinal and circular muscle fibers in roughly similar proportions (i.e., two endplates to longitudinal for every three endplates to circular fibers). Both the observation that vagal motor unit collaterals project to myenteric ganglia and the fact that individual motor units co-innervate longitudinal and circular muscle layers are consistent with the hypothesis that elements contributing to peristaltic programming inhere, or are "hardwired," in the peripheral architecture of esophageal motor units. © 2013.

  7. Governance of extended lifecycle in large-scale eHealth initiatives: analyzing variability of enterprise architecture elements.

    Science.gov (United States)

    Mykkänen, Juha; Virkanen, Hannu; Tuomainen, Mika

    2013-01-01

    The governance of large eHealth initiatives requires traceability of many requirements and design decisions. We provide a model which we use to conceptually analyze variability of several enterprise architecture (EA) elements throughout the extended lifecycle of development goals using interrelated projects related to the national ePrescription in Finland.

  8. Analysis of Defenses Against Code Reuse Attacks on Modern and New Architectures

    Science.gov (United States)

    2015-09-01

    Only a table-of-contents excerpt of this report is indexed, covering: Hardware Architecture (background; RISC-V extensions; RISC-V tag policies, including a basic return pointer policy; RISC-V policy test framework) and Compiler Support (LLVM).

  9. Finite element methods in a simulation code for offshore wind turbines

    Science.gov (United States)

    Kurz, Wolfgang

    1994-06-01

    Offshore installation of wind turbines will become important for electricity supply in the future. Wind conditions above the sea are more favorable than on land, and appropriate locations on land are limited and restricted. The dynamic behavior of advanced wind turbines is investigated with digital simulations to reduce time and cost in the development and design phase. A wind turbine can be described and simulated as a multi-body system containing rigid and flexible bodies. Simulation of the non-linear motion of such a mechanical system using a multi-body system code is much faster than using a finite element code. However, a modal representation of the deformation field has to be incorporated in the multi-body system approach. The equations of motion of flexible bodies due to deformation are generated by finite element calculations. At Delft University of Technology the simulation code DUWECS has been developed, which simulates the non-linear behavior of wind turbines in the time domain. The wind turbine is divided into subcomponents which are represented by modules (e.g. rotor, tower, etc.).

  10. An Investigation of the Methods of Logicalizing the Code-Checking System for Architectural Design Review in New Taipei City

    Directory of Open Access Journals (Sweden)

    Wei-I Lee

    2016-12-01

    Full Text Available The New Taipei City Government developed a Code-checking System (CCS using Building Information Modeling (BIM technology to facilitate an architectural design review in 2014. This system was intended to solve problems caused by cognitive gaps between designer and reviewer in the design review process. Along with considering information technology, the most important issue for the system’s development has been the logicalization of literal building codes. Therefore, to enhance the reliability and performance of the CCS, this study uses the Fuzzy Delphi Method (FDM on the basis of design thinking and communication theory to investigate the semantic difference and cognitive gaps among participants in the design review process and to propose the direction of system development. Our empirical results lead us to recommend grouping multi-stage screening and weighted assisted logicalization of non-quantitative building codes to improve the operability of CCS. Furthermore, CCS should integrate the Expert Evaluation System (EES to evaluate the design value under qualitative building codes.

  11. Evaluating the performance of the particle finite element method in parallel architectures

    Science.gov (United States)

    Gimenez, Juan M.; Nigro, Norberto M.; Idelsohn, Sergio R.

    2014-05-01

    This paper presents a high performance implementation of the particle-mesh based method called the particle finite element method two (PFEM-2). It consists of a material derivative based formulation of the equations with a hybrid spatial discretization which uses an Eulerian mesh and Lagrangian particles. The main aim of PFEM-2 is to solve transport equations as fast as possible while keeping some level of accuracy. The method was found to be competitive with classical Eulerian alternatives for these targets, even in their range of optimal application. To evaluate the performance of the method on large simulations, it is imperative to use parallel environments. Parallel strategies for the Finite Element Method have been widely studied and many libraries can be used to solve the Eulerian stages of PFEM-2. However, Lagrangian stages, such as streamline integration, must be developed considering the parallel strategy selected. The main drawback of PFEM-2 is the large amount of memory needed, which limits its application to large problems when only one computer is used. Therefore, a distributed-memory implementation is urgently needed. Unlike a shared-memory approach, using domain decomposition the memory is automatically isolated, thus avoiding race conditions; however, new issues appear due to data distribution over the processes. Thus, a domain decomposition strategy for both particles and mesh is adopted, which minimizes the communication between processes. Finally, performance analyses run on multicore and multinode architectures are presented. The Courant-Friedrichs-Lewy number used influences the efficiency of the parallelization and, in some cases, a weighted partitioning can be used to improve the speed-up. However, the total CPU time for the cases presented is lower than that obtained when using classical Eulerian strategies.

  12. Neptune: An astrophysical smooth particle hydrodynamics code for massively parallel computer architectures

    Science.gov (United States)

    Sandalski, Stou

    Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP parallelized C++ and OpenCL and includes octree based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.
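
At the heart of any SPH code is a compactly supported smoothing kernel that turns particle sums into field estimates. A sketch of the standard cubic spline kernel in 3-D (a common default in SPH codes generally; whether neptune uses this particular kernel is not stated in the record):

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Cubic spline SPH kernel in 3-D, normalization 1/(pi h^3).

    W(q) with q = r/h: piecewise cubic for q < 1 and 1 <= q < 2,
    identically zero beyond 2h (compact support).
    """
    q = np.asarray(r, dtype=float) / h
    sigma = 1.0 / (np.pi * h**3)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w
```

The compact support at 2h is what makes octree (or similar) neighbor search pay off on GPUs: each particle only ever sums contributions from a bounded neighborhood, and the kernel integrates to one so density estimates are unbiased.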

  13. KIN SP: A boundary element method based code for single pile kinematic bending in layered soil

    Directory of Open Access Journals (Sweden)

    Stefano Stacul

    2018-02-01

    Full Text Available In high seismicity areas, it is important to consider kinematic effects to properly design pile foundations. Kinematic effects are due to the interaction between pile and soil deformations induced by seismic waves. One such effect is the development of significant strains in weak soils, which induce bending moments in piles. These moments can be significant in the presence of a high stiffness contrast in a soil deposit. The single pile kinematic interaction problem is generally solved with beam on dynamic Winkler foundation (BDWF) approaches or using continuous models. In this work, a new boundary element method (BEM) based computer code (KIN SP) is presented, in which the kinematic analysis is preceded by a free-field response analysis. The results of this method, in terms of bending moments at the pile-head and at the interface of a two-layered soil, are influenced by many factors including the soil-pile interface discretization. A parametric study is presented with the aim of suggesting the minimum number of boundary elements to guarantee the accuracy of a BEM solution, for typical pile-soil relative stiffness values, as a function of the pile diameter, the location of the interface of a two-layered soil and the stiffness contrast. KIN SP results have been compared with simplified solutions in the literature and with those obtained using a quasi-three-dimensional (3D) finite element code.

  14. Current status of the transient integral fuel element performance code URANUS

    International Nuclear Information System (INIS)

    Preusser, T.; Lassmann, K.

    1983-01-01

    To investigate the behavior of fuel pins during normal and off-normal operation, the integral fuel rod code URANUS has been extended to include a transient version. The paper describes the current status of the program system, including a presentation of newly developed models for hypothetical accident investigation. The main objective of current development work is to improve the modelling of fuel and clad material behavior during fast transients. URANUS allows detailed analysis of experiments until the onset of strong material transport phenomena. Transient fission gas analysis is carried out through coupling with a special version of the LANGZEIT-KURZZEIT code (KfK). Fuel restructuring and grain growth kinetics models have been improved recently to better characterize pre-experimental steady-state operation; transient models are under development. Extensive verification of the new version has been carried out by comparison with analytical solutions, experimental evidence, and code-to-code evaluation studies. URANUS, with all these improvements, has been successfully applied to difficult fast breeder fuel rod analyses including TOP, LOF, TUCOP, local coolant blockage and specific carbide fuel experiments. The objective of further studies is the description of transient PCMI. It is expected that the results of these developments will contribute significantly to the understanding of fuel element structural behavior during severe transients. (orig.)

  15. Development of Multi-Scale Finite Element Analysis Codes for High Formability Sheet Metal Generation

    International Nuclear Information System (INIS)

    Nakamachi, Eiji; Kuramae, Hiroyuki; Ngoc Tam, Nguyen; Nakamura, Yasunori; Sakamoto, Hidetoshi; Morimoto, Hideo

    2007-01-01

    In this study, dynamic- and static-explicit multi-scale finite element (F.E.) codes are developed by employing the homogenization method, the crystal plasticity constitutive equation and an SEM-EBSD measurement based polycrystal model. These can predict the crystal morphological change and the hardening evolution at the micro level, as well as the macroscopic plastic anisotropy evolution. The codes are applied to analyze the asymmetric rolling process, which is introduced to control the crystal texture of sheet metal in order to generate a high formability sheet metal. The codes can predict the yield surface and the sheet formability by analyzing the strain path dependent yield and simple sheet forming processes, such as the limit dome height test and cylindrical deep drawing problems. It is shown that a shear dominant rolling process, such as asymmetric rolling, generates "high formability" textures and eventually a high formability sheet. The texture evolution and the high formability of the newly generated sheet metal were confirmed experimentally by SEM-EBSD measurement and the LDH test. It is concluded that these explicit-type crystallographic homogenized multi-scale F.E. codes could be a comprehensive tool to predict the plastic-induced texture evolution, anisotropy and formability in rolling process and limit dome height test analyses

  16. A comparison of two three-dimensional shell-element transient electromagnetics codes

    International Nuclear Information System (INIS)

    Yugo, J.J.; Williamson, D.E.

    1992-01-01

    Electromagnetic forces due to eddy currents strongly influence the design of components for the next generation of fusion devices. An effort has been made to benchmark two computer programs used to generate transient electromagnetic loads: SPARK and EddyCuFF. Two simple transient field problems were analyzed, both of which had been previously analyzed by the SPARK code with results recorded in the literature. A third problem that uses an ITER inboard blanket benchmark model was analyzed as well. This problem was driven with a self-consistent, distributed multifilament plasma model generated by an axisymmetric physics code. The benchmark problems showed good agreement between the two shell-element codes. Variations in calculated eddy currents of 1-3% have been found for similar, finely meshed models. A difference of 8% was found in induced current and 20% in force for a coarse mesh and complex, multifilament field driver. Because comparisons were made to results obtained from the literature, model preparation and code execution times were not evaluated.

  17. Verification of Advective Bar Elements Implemented in the Aria Thermal Response Code.

    Energy Technology Data Exchange (ETDEWEB)

    Mills, Brantley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-01-01

    A verification effort was undertaken to evaluate the implementation of the new advective bar capability in the Aria thermal response code. Several approaches to the verification process were taken: a mesh refinement study to demonstrate solution convergence in the fluid and the solid, visual examination of the mapping of the advective bar element nodes to the surrounding surfaces, and a comparison of solutions produced using the advective bars for simple geometries with solutions from commercial CFD software. The mesh refinement study has shown solution convergence for simple pipe flow in both temperature and velocity. Guidelines were provided to achieve appropriate meshes between the advective bar elements and the surrounding volume. Simulations of pipe flow using advective bar elements in Aria have been compared to simulations using the commercial CFD software ANSYS Fluent® and provided comparable solutions in temperature and velocity, supporting proper implementation of the new capability. A special thanks goes to Dean Dobranich for his guidance and expertise through all stages of this effort. His advice and feedback were instrumental to its completion. Thanks also goes to Sam Subia and Tolu Okusanya for helping to plan many of the verification activities performed in this document. Thank you to Sam, Justin Lamb and Victor Brunini for their assistance in resolving issues encountered with running the advective bar element model. Finally, thanks goes to Dean, Sam, and Adam Hetzler for reviewing the document and providing very valuable comments.
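
    A mesh refinement study of this kind typically reports an observed order of convergence estimated from solutions on successively refined meshes. The sketch below is illustrative only — the probe-point temperatures are made up, not Aria results — and assumes a uniform refinement ratio of 2:

```python
import math

def observed_order(coarse, medium, fine, r=2.0):
    """Richardson-style estimate of the observed order of convergence
    from a scalar solution value on three uniformly refined meshes
    (refinement ratio r between successive meshes)."""
    return math.log(abs(coarse - medium) / abs(medium - fine)) / math.log(r)

# Hypothetical probe-point temperatures on coarse/medium/fine meshes
p = observed_order(301.2, 300.3, 300.075)
print(round(p, 3))  # close to 2, i.e. second-order convergence
```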

  18. An innovative methodology for the non-destructive diagnosis of architectural elements of ancient historical buildings.

    Science.gov (United States)

    Fais, Silvana; Casula, Giuseppe; Cuccuru, Francesco; Ligas, Paola; Bianchi, Maria Giovanna

    2018-03-12

    In the following we present a new non-invasive methodology for the diagnosis of the stone building materials used in historical buildings and architectural elements. The methodology consists of the integrated, sequential application of in situ proximal sensing techniques, such as 3D terrestrial laser scanning for 3D modelling of the investigated objects, together with laboratory and in situ non-invasive acoustic techniques, preceded by an accurate petrographical study of the investigated stone materials by optical and scanning electron microscopy. The increasing need to integrate different types of techniques in safeguarding Cultural Heritage is the result of two interdependent factors: 1) the diagnostic process on the building stone materials of monuments is increasingly focused on difficult targets in critical situations, where diagnosis using only one type of non-invasive technique may not be sufficient to investigate the conservation status of the stone materials in the superficial and inner parts of the studied structures; 2) recent technological and scientific developments in the field of non-invasive diagnostic techniques for different types of materials favor and support the acquisition, processing and interpretation of huge multidisciplinary datasets.

  19. Microcomputed tomography and microfinite element modeling for evaluating polymer scaffolds architecture and their mechanical properties.

    Science.gov (United States)

    Alberich-Bayarri, Angel; Moratal, David; Ivirico, Jorge L Escobar; Rodríguez Hernández, José C; Vallés-Lluch, Ana; Martí-Bonmatí, Luis; Estellés, Jorge Más; Mano, Joao F; Pradas, Manuel Monleón; Ribelles, José L Gómez; Salmerón-Sánchez, Manuel

    2009-10-01

    Detailed knowledge of the porous architecture of synthetic scaffolds for tissue engineering, their mechanical properties, and their interrelationship was obtained in a nondestructive manner. Image analysis of microcomputed tomography (microCT) sections of different scaffolds was done. The three-dimensional (3D) reconstruction of the scaffold allows one to quantify scaffold porosity, including pore size, pore distribution, and struts' thickness. The porous morphology and porosity as calculated from microCT by image analysis agrees with that obtained experimentally by scanning electron microscopy and physically measured porosity, respectively. Furthermore, the mechanical properties of the scaffold were evaluated by making use of finite element modeling (FEM) in which the compression stress-strain test is simulated on the 3D structure reconstructed from the microCT sections. Elastic modulus as calculated from FEM is in agreement with those obtained from the stress-strain experimental test. The method was applied on qualitatively different porous structures (interconnected channels and spheres) with different chemical compositions (that lead to different elastic modulus of the base material) suitable for tissue regeneration. The elastic properties of the constructs are explained on the basis of the FEM model that supports the main mechanical conclusion of the experimental results: the elastic modulus does not depend on the geometric characteristics of the pore (pore size, interconnection throat size) but only on the total porosity of the scaffold. (c) 2009 Wiley Periodicals, Inc.
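
    The porosity quantification step described above reduces, in its simplest form, to counting void voxels in the binarized microCT volume. A minimal sketch — not the authors' image analysis pipeline, and using a tiny synthetic voxel set:

```python
def total_porosity(voxels):
    """Total porosity of a binarized voxel volume, given as a flat list
    where 0 marks pore space and 1 marks scaffold material."""
    return 1.0 - sum(voxels) / len(voxels)

# Synthetic 4x4x4 volume flattened to 64 voxels, 16 of them solid
voxels = [1] * 16 + [0] * 48
print(total_porosity(voxels))  # 0.75
```

    Pore size and strut thickness require morphological analysis of the 3D structure; only the total porosity is this direct a computation.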

  20. Objective Oriented Design of Architecture for TH System Safety Analysis Code and Verification

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Bub Dong

    2008-03-15

    In this work, an object-oriented design of a generic system analysis code has been attempted, based on previous work at KAERI on a two-phase, three-field pilot code. Input and output design, the TH solver, component models, special TH models, a heat structure solver, general tables, trips and controls, and on-line graphics have been implemented. All essential features for system analysis have been designed and implemented in the final product, the SYSTF code. The C language was used for implementation in the Visual Studio 2008 IDE (Integrated Development Environment), since it is simpler and lighter than C++. The code has simple and essential models and correlations, special components, a special TH model and a heat structure model. The input features nevertheless allow the simulation of various scenarios, such as steady state, non-LOCA transients and LOCA accidents. The structural validity has been tested through various verification tests, and it has been shown that the developed code can treat non-LOCA and LOCA simulations. However, more detailed design and implementation of models are required to achieve physical validity of SYSTF code simulations.

  1. Objective Oriented Design of Architecture for TH System Safety Analysis Code and Verification

    International Nuclear Information System (INIS)

    Chung, Bub Dong

    2008-03-01

    In this work, an object-oriented design of a generic system analysis code has been attempted, based on previous work at KAERI on a two-phase, three-field pilot code. Input and output design, the TH solver, component models, special TH models, a heat structure solver, general tables, trips and controls, and on-line graphics have been implemented. All essential features for system analysis have been designed and implemented in the final product, the SYSTF code. The C language was used for implementation in the Visual Studio 2008 IDE (Integrated Development Environment), since it is simpler and lighter than C++. The code has simple and essential models and correlations, special components, a special TH model and a heat structure model. The input features nevertheless allow the simulation of various scenarios, such as steady state, non-LOCA transients and LOCA accidents. The structural validity has been tested through various verification tests, and it has been shown that the developed code can treat non-LOCA and LOCA simulations. However, more detailed design and implementation of models are required to achieve physical validity of SYSTF code simulations.

  2. From Requirements to code: an Architecture-centric Approach for producing Quality Systems

    OpenAIRE

    Bucchiarone, Antonio; Di Ruscio, Davide; Muccini, Henry; Pelliccione, Patrizio

    2009-01-01

    When engineering complex and distributed software and hardware systems (increasingly used in many sectors, such as manufacturing, aerospace, transportation, communication, energy, and health-care), quality has become a big issue, since failures can have economic consequences and can also endanger human life. Model-based specifications of a component-based system make it possible to explicitly model the structure and behaviour of components and their integration. In particular Software Architectures (S...

  3. Implementation of thermo-viscoplastic constitutive equations into the finite element code ABAQUS

    International Nuclear Information System (INIS)

    Youn, Sam Son; Lee, Soon Bok; Kim, Jong Bum; Lee, Hyeong Yeon; Yoo, Bong

    1998-01-01

    Sophisticated viscoplastic constitutive laws describing material behavior at high temperature have been implemented in the general-purpose finite element code ABAQUS to predict the viscoplastic response of structures to cyclic loading. Because of the complexity of viscoplastic constitutive equations, general implementation methods are developed. The solution of the non-linear system of algebraic equations arising from time discretization is determined using line search and backtracking in combination with the Newton method. The time integration of the constitutive equations is based on a semi-implicit method with efficient time step control. As numerical examples, the viscoplastic model proposed by Chaboche is implemented and several applications are illustrated.
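
    The solution strategy described — the Newton method safeguarded by backtracking line search — can be sketched for a scalar residual. The function below is a toy stand-in on a cubic residual, not the ABAQUS implementation:

```python
def solve_newton_backtracking(r, dr, x0, tol=1e-10, max_iter=50):
    """Newton's method with backtracking: halve the step until the
    residual magnitude actually decreases, then accept it."""
    x = x0
    for _ in range(max_iter):
        rx = r(x)
        if abs(rx) < tol:
            return x
        dx = -rx / dr(x)          # full Newton step
        t = 1.0
        while abs(r(x + t * dx)) >= abs(rx) and t > 1e-8:
            t *= 0.5              # backtrack
        x += t * dx
    return x

# Toy residual r(x) = x^3 - 2x - 5 and its derivative
root = solve_newton_backtracking(lambda x: x**3 - 2*x - 5,
                                 lambda x: 3*x**2 - 2, x0=2.0)
```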

  4. Introduction of polycrystal constitutive laws in a finite element code with applications to zirconium forming

    International Nuclear Information System (INIS)

    Maudlin, P.J.; Tome, C.N.; Kaschner, G.C.; Gray, G.T. III

    1998-01-01

    In this work the authors simulate the compressive deformation of heavily textured zirconium sheet using a finite element code with the constitutive response given by a polycrystal self-consistent model. They show that the strong anisotropy of the response can be explained in terms of the texture and the relative activity of prismatic (easy) and pyramidal (hard) slip modes. The simulations capture the yield anisotropy observed for so-called through-thickness and in-plane compression tests in terms of the loading curves and final specimen geometries.

  5. Modeling turbine-missile impacts using the HONDO finite-element code

    International Nuclear Information System (INIS)

    Schuler, K.W.

    1981-11-01

    Calculations have been performed using the dynamic finite element code HONDO to simulate a full-scale rocket sled test. In the test, a rocket sled was used to launch a 1527 kg (3366 lb) fragment of a steam turbine rotor disk at a velocity of 150 m/s (490 ft/s) into a structure that was a simplified model of a steam turbine casing. In the calculations, the material behavior of and boundary conditions on the target structure were varied to assess its energy-absorbing characteristics. Comparisons are made between the calculations and observations of missile velocity and strain histories at various points of the target structure.

  6. Mechanical modelization of PCI with Fragema and CEA finite element codes

    International Nuclear Information System (INIS)

    Bernard, P.; Joseph, J.; Atabek, R.; Chantant, M.

    1982-03-01

    In order to model the PCI phenomenon during a power ramp test, two finite element codes have been used by FRAGEMA and CEA: TITUS and VERDON. The results given by the 3D equivalent method developed with TITUS and by VERDON are equivalent; in particular, the strains and the equivalent Von Mises stresses at the pellet-to-pellet interface are quite similar. An evaluation was made to explain experimental ramp test results. These results come from the FRISCA 04bis and FRISCA 104 rods, which were ramp tested in SILOE. The choice of the equivalent Von Mises stress seems to be quite a good criterion to explain the failure threshold.
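
    The failure criterion invoked above is the equivalent Von Mises stress. For reference, a minimal computation from principal stresses — the values are illustrative, not the FRISCA test data:

```python
import math

def von_mises(s1, s2, s3):
    """Equivalent Von Mises stress from the three principal stresses."""
    return math.sqrt(((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 2)

# Hypothetical principal stresses in MPa
sigma_eq = von_mises(250.0, 100.0, -50.0)
print(round(sigma_eq, 1))  # about 259.8 MPa
```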

  7. SP_Ace: a new code to derive stellar parameters and elemental abundances

    Science.gov (United States)

    Boeche, C.; Grebel, E. K.

    2016-03-01

    Context. Ongoing and future massive spectroscopic surveys will collect large numbers (10^6-10^7) of stellar spectra that need to be analyzed. Highly automated software is needed to derive stellar parameters and chemical abundances from these spectra. Aims: We developed a new method of estimating the stellar parameters Teff, log g, [M/H], and elemental abundances. This method was implemented in a new code, SP_Ace (Stellar Parameters And Chemical abundances Estimator). This is a highly automated code suitable for analyzing the spectra of large spectroscopic surveys with low or medium spectral resolution (R = 2000-20 000). Methods: After the astrophysical calibration of the oscillator strengths of 4643 absorption lines covering the wavelength ranges 5212-6860 Å and 8400-8924 Å, we constructed a library that contains the equivalent widths (EW) of these lines for a grid of stellar parameters. The EWs of each line are fit by a polynomial function that describes the EW of the line as a function of the stellar parameters. The coefficients of these polynomial functions are stored in a library called the "GCOG library". SP_Ace, a code written in FORTRAN95, uses the GCOG library to compute the EWs of the lines, constructs models of spectra as a function of the stellar parameters and abundances, and searches for the model that minimizes the χ² deviation when compared to the observed spectrum. The code has been tested on synthetic and real spectra for a wide range of signal-to-noise and spectral resolutions. Results: SP_Ace derives stellar parameters such as Teff, log g, [M/H], and chemical abundances of up to ten elements for low to medium resolution spectra of FGK-type stars with precision comparable to the one usually obtained with spectra of higher resolution. Systematic errors in stellar parameters and chemical abundances are presented and identified with tests on synthetic and real spectra. Stochastic errors are automatically estimated by the code for all the parameters.
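
    The fitting strategy described — model each line's EW as a polynomial in the stellar parameters, then search for the parameters that minimize the χ² deviation — can be caricatured in one dimension. Everything below (line coefficients, the single parameter t, the grid) is a made-up toy, not the GCOG library:

```python
def chi2(observed, model, sigma=1.0):
    """Chi-square deviation between observed and modeled values."""
    return sum((o - m) ** 2 / sigma ** 2 for o, m in zip(observed, model))

# Polynomial EW models for three hypothetical lines: EW(t) = c0 + c1*t + c2*t^2
coeffs = [(10.0, 2.0, 0.5), (5.0, -1.0, 0.2), (8.0, 0.5, -0.1)]

def model_ews(t):
    return [c0 + c1 * t + c2 * t * t for c0, c1, c2 in coeffs]

observed = model_ews(1.3)  # pretend these EWs were measured at t = 1.3

# Scan a parameter grid and keep the best-fitting value
grid = [i / 100 for i in range(0, 301)]
best = min(grid, key=lambda t: chi2(observed, model_ews(t)))
print(best)  # recovers 1.3
```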

  8. A free surface algorithm in the N3S finite element code for turbulent flows

    International Nuclear Information System (INIS)

    Nitrosso, B.; Pot, G.; Abbes, B.; Bidot, T.

    1995-08-01

    In this paper, we present a free surface algorithm which was implemented in the N3S code. Free surfaces are represented by marker particles which move through a mesh. It is assumed that the free surface is located inside each element that contains markers and surrounded by at least one element with no marker inside. The mesh is then locally adjusted in order to coincide with the free surface which is well defined by the forefront marker particles. After describing the governing equations and the N3S solving methods, we present the free surface algorithm. Results obtained for two-dimensional and three-dimensional industrial problems of mould filling are presented. (authors). 5 refs., 2 figs
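
    The marker-based surface test stated above — the free surface lies in elements that contain markers and border at least one element with no markers — can be sketched as follows. This is assumed logic on a simplified 1D row of elements, not the N3S implementation:

```python
def surface_elements(marker_counts):
    """Flag surface elements in a 1D row: an element is on the free
    surface if it holds markers and a neighbouring element holds none."""
    flags = []
    n = len(marker_counts)
    for i, c in enumerate(marker_counts):
        has_markers = c > 0
        empty_neighbour = (i > 0 and marker_counts[i - 1] == 0) or \
                          (i + 1 < n and marker_counts[i + 1] == 0)
        flags.append(has_markers and empty_neighbour)
    return flags

# Fluid fills the first three elements; the front sits in element index 2
print(surface_elements([5, 4, 2, 0, 0]))  # [False, False, True, False, False]
```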

  9. Tests of a 3D Self Magnetic Field Solver in the Finite Element Gun Code MICHELLE

    CERN Document Server

    Nelson, Eric M

    2005-01-01

    We have recently implemented a prototype 3d self magnetic field solver in the finite-element gun code MICHELLE. The new solver computes the magnetic vector potential on unstructured grids. The solver employs edge basis functions in the curl-curl formulation of the finite-element method. A novel current accumulation algorithm takes advantage of the unstructured grid particle tracker to produce a compatible source vector, for which the singular matrix equation is easily solved by the conjugate gradient method. We will present some test cases demonstrating the capabilities of the prototype 3d self magnetic field solver. One test case is self magnetic field in a square drift tube. Another is a relativistic axisymmetric beam freely expanding in a round pipe.

  10. User's manual for the FEHM application - A finite-element heat- and mass-transfer code

    International Nuclear Information System (INIS)

    Zyvoloski, G.A.; Robinson, B.A.; Dash, Z.V.; Trease, L.L.

    1997-07-01

    The use of this code is applicable to natural-state studies of geothermal systems and groundwater flow. A primary use of the FEHM application will be to assist in the understanding of flow fields and mass transport in the saturated and unsaturated zones below the proposed Yucca Mountain nuclear waste repository in Nevada. The equations of heat and mass transfer for multiphase flow in porous and permeable media are solved in the FEHM application by using the finite-element method. The permeability and porosity of the medium are allowed to depend on pressure and temperature. The code also has provisions for movable air and water phases and noncoupled tracers; that is, tracer solutions that do not affect the heat- and mass-transfer solutions. The tracers can be passive or reactive. The code can simulate two-dimensional, two-dimensional radial, or three-dimensional geometries. In fact, FEHM is capable of describing flow that is dominated in many areas by fracture and fault flow, including the inherently three-dimensional flow that results from permeation to and from faults and fractures. The code can handle coupled heat and mass-transfer effects, such as boiling, dryout, and condensation that can occur in the near-field region surrounding the potential repository and the natural convection that occurs through Yucca Mountain due to seasonal temperature changes. This report outlines the uses and capabilities of the FEHM application, initialization of code variables, restart procedures, and error processing. The report describes all the data files, the input data, including individual input records or parameters, and the various output files. The system interface is described, including the software environment and installation instructions

  11. Fuel element thermo-mechanical analysis during transient events using the FMS and FETMA codes

    International Nuclear Information System (INIS)

    Hernandez Lopez Hector; Hernandez Martinez Jose Luis; Ortiz Villafuerte Javier

    2005-01-01

    In the Instituto Nacional de Investigaciones Nucleares of Mexico, the Fuel Management System (FMS) software package has long been used to simulate the operation of a BWR nuclear power plant in steady state as well as during transient events. To evaluate fuel element thermo-mechanical performance during transient events, an interface between the FMS codes and our own Fuel Element Thermo Mechanical Analysis (FETMA) code is currently being developed and implemented. In this work, the results of the thermo-mechanical behavior of fuel rods in the hot channel during the simulation of transient events of a BWR nuclear power plant are shown. The transient events considered in this work are a load rejection and a feedwater control failure, which are among the most important events that can occur in a BWR. The results showed that conditions leading to fuel rod failure did not appear at any time for either event. It is also shown that a load rejection transient is more demanding in terms of safety than a feedwater controller failure. (authors)

  12. FEHMN 1.0: Finite element heat and mass transfer code

    International Nuclear Information System (INIS)

    Zyvoloski, G.; Dash, Z.; Kelkar, S.

    1991-04-01

    A computer code is described which can simulate non-isothermal multiphase multicomponent flow in porous media. It is applicable to natural-state studies of geothermal systems and ground-water flow. The equations of heat and mass transfer for multiphase flow in porous and permeable media are solved using the finite element method. The permeability and porosity of the medium are allowed to depend on pressure and temperature. The code also has provisions for movable air and water phases and noncoupled tracers; that is, tracer solutions that do not affect the heat and mass transfer solutions. The tracers can be passive or reactive. The code can simulate two-dimensional, two-dimensional radial, or three-dimensional geometries. A summary of the equations in the model and the numerical solution procedure are provided in this report. A user's guide and sample problems are also included. The main use of FEHMN will be to assist in the understanding of flow fields in the saturated zone below the proposed Yucca Mountain Repository. 33 refs., 27 figs., 12 tabs

  13. Cellulose as an Architectural Element in Spatially Structured Escherichia coli Biofilms

    Science.gov (United States)

    Serra, Diego O.; Richter, Anja M.

    2013-01-01

    Morphological form in multicellular aggregates emerges from the interplay of genetic constitution and environmental signals. Bacterial macrocolony biofilms, which form intricate three-dimensional structures, such as large and often radially oriented ridges, concentric rings, and elaborate wrinkles, provide a unique opportunity to understand this interplay of “nature and nurture” in morphogenesis at the molecular level. Macrocolony morphology depends on self-produced extracellular matrix components. In Escherichia coli, these are stationary phase-induced amyloid curli fibers and cellulose. While the widely used “domesticated” E. coli K-12 laboratory strains are unable to generate cellulose, we could restore cellulose production and macrocolony morphology of E. coli K-12 strain W3110 by “repairing” a single chromosomal SNP in the bcs operon. Using scanning electron and fluorescence microscopy, cellulose filaments, sheets and nanocomposites with curli fibers were localized in situ at cellular resolution within the physiologically two-layered macrocolony biofilms of this “de-domesticated” strain. As an architectural element, cellulose confers cohesion and elasticity, i.e., tissue-like properties that—together with the cell-encasing curli fiber network and geometrical constraints in a growing colony—explain the formation of long and high ridges and elaborate wrinkles of wild-type macrocolonies. In contrast, a biofilm matrix consisting of the curli fiber network only is brittle and breaks into a pattern of concentric dome-shaped rings separated by deep crevices. These studies now set the stage for clarifying how regulatory networks and in particular c-di-GMP signaling operate in the three-dimensional space of highly structured and “tissue-like” bacterial biofilms. PMID:24097954

  14. Development of dynamic explicit crystallographic homogenization finite element analysis code to assess sheet metal formability

    International Nuclear Information System (INIS)

    Nakamura, Yasunori; Tam, Nguyen Ngoc; Ohata, Tomiso; Morita, Kiminori; Nakamachi, Eiji

    2004-01-01

    The crystallographic texture evolution induced by plastic deformation in the sheet metal forming process has a great influence on its formability. In the present study, a dynamic explicit finite element (FE) analysis code is newly developed by introducing a crystallographic homogenization method to estimate polycrystalline sheet metal formability, such as extreme thinning and 'earing'. This code can predict the plastic deformation induced texture evolution at the micro scale and the plastic anisotropy at the macro scale simultaneously. This multi-scale analysis can couple the microscopic crystal plasticity inhomogeneous deformation with the macroscopic continuum deformation. In this homogenization process, the stress at the macro scale is defined by the volume average of the stresses of the corresponding microscopic crystal aggregations, satisfying the equation of motion and the compatibility condition in the micro scale 'unit cell', where periodicity of deformation is satisfied. This homogenization algorithm is implemented in the conventional dynamic explicit finite element code by employing the updated Lagrangian formulation and the rate type elastic/viscoplastic constitutive equation. First, it has been confirmed through texture evolution analyses of typical deformation modes that Taylor's 'constant strain homogenization algorithm' yields extreme concentration toward the preferred crystal orientations compared with our homogenization method. Second, we study the effects of plastic anisotropy on 'earing' in the hemispherical cup deep drawing process of pure ferrite phase sheet metal. From the comparison of analytical results with those under Taylor's assumption, it is concluded that the newly developed dynamic explicit crystallographic homogenization FEM gives a more reasonable prediction of the plastic deformation induced texture evolution and the plastic anisotropy at the macro scale.
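
    The macro-scale stress definition above — the volume average of the microscopic crystal stresses over the unit cell — is simple to state concretely. The grain stress tensors and volumes below are made up for illustration:

```python
def volume_average_stress(stresses, volumes):
    """Macroscopic stress as the volume average of per-grain stress
    tensors (given as nested lists) over the unit cell."""
    vtot = sum(volumes)
    n = len(stresses[0])
    return [[sum(v * s[i][j] for s, v in zip(stresses, volumes)) / vtot
             for j in range(n)] for i in range(n)]

# Two hypothetical grains with 2x2 stress tensors (MPa) and volumes
grain_stresses = [
    [[100.0, 10.0], [10.0, 50.0]],
    [[140.0, -10.0], [-10.0, 70.0]],
]
grain_volumes = [1.0, 3.0]
macro = volume_average_stress(grain_stresses, grain_volumes)
print(macro)  # [[130.0, -5.0], [-5.0, 65.0]]
```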

  15. The use of the MCNP code for the quantitative analysis of elements in geological formations

    Energy Technology Data Exchange (ETDEWEB)

    Cywicka-Jakiel, T.; Woynicka, U. [The Henryk Niewodniczanski Institute of Nuclear Physics, Krakow (Poland); Zorski, T. [University of Mining and Metallurgy, Faculty of Geology, Geophysics and Environmental Protection, Krakow (Poland)

    2003-07-01

    The Monte Carlo modelling calculations using the MCNP code have been performed, which support the spectrometric neutron-gamma (SNGL) borehole logging. The SNGL enables the lithology identification through the quantitative analysis of the elements in geological formations and thus can be very useful for the oil and gas industry as well as for prospecting of the potential host rocks for radioactive waste disposal. In the SNGL experiment, gamma-rays induced by the neutron interactions with the nuclei of the rock elements are detected using the gamma-ray probe of complex mechanical and electronic construction. The probe has to be calibrated for a wide range of the elemental concentrations, to assure the proper quantitative analysis. The Polish Calibration Station in Zielona Gora is equipped with a limited number of calibration standards. An extension of the experimental calibration and the evaluation of the effect of the so-called side effects (for example the borehole and formation salinity variation) on the accuracy of the SNGL method can be done by the use of the MCNP code. The preliminary MCNP results showing the effect of the borehole and formation fluids salinity variations on the accuracy of silicon (Si), calcium (Ca) and iron (Fe) content determination are presented in the paper. The main effort has been focused on a modelling of the complex SNGL probe situated in a fluid filled borehole, surrounded by a geological formation. Track length estimate of the photon flux from the (n,gamma) interactions as a function of gamma-rays energy was used. Calculations were run on the PC computer with AMD Athlon 1.33 GHz processor. Neutron and photon cross-sections libraries were taken from the MCNP4c package and based mainly on the ENDF/B-6, ENDF/B-5 and MCPLIB02 data. The results of simulated experiment are in conformity with results of the real experiment performed with the use of the main lithology models (sandstones, limestones and dolomite). (authors)
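
    The tally mentioned above, a track-length estimate of the photon flux, averages particle path lengths over the cell volume. A minimal sketch with made-up track data, not an MCNP input:

```python
def track_length_flux(track_lengths_cm, cell_volume_cm3, n_histories):
    """Track-length estimate of the average scalar flux in a cell:
    summed track length per unit cell volume, per source history."""
    return sum(track_lengths_cm) / (cell_volume_cm3 * n_histories)

# Three hypothetical photon tracks through a 10 cm^3 cell, 4 histories run
phi = track_length_flux([1.0, 2.5, 0.5], cell_volume_cm3=10.0, n_histories=4)
print(phi)  # flux per source particle, in 1/cm^2
```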

  16. The use of the MCNP code for the quantitative analysis of elements in geological formations

    International Nuclear Information System (INIS)

    Cywicka-Jakiel, T.; Woynicka, U.; Zorski, T.

    2003-01-01

    The Monte Carlo modelling calculations using the MCNP code have been performed, which support the spectrometric neutron-gamma (SNGL) borehole logging. The SNGL enables the lithology identification through the quantitative analysis of the elements in geological formations and thus can be very useful for the oil and gas industry as well as for prospecting of the potential host rocks for radioactive waste disposal. In the SNGL experiment, gamma-rays induced by the neutron interactions with the nuclei of the rock elements are detected using the gamma-ray probe of complex mechanical and electronic construction. The probe has to be calibrated for a wide range of the elemental concentrations, to assure the proper quantitative analysis. The Polish Calibration Station in Zielona Gora is equipped with a limited number of calibration standards. An extension of the experimental calibration and the evaluation of the effect of the so-called side effects (for example the borehole and formation salinity variation) on the accuracy of the SNGL method can be done by the use of the MCNP code. The preliminary MCNP results showing the effect of the borehole and formation fluids salinity variations on the accuracy of silicon (Si), calcium (Ca) and iron (Fe) content determination are presented in the paper. The main effort has been focused on a modelling of the complex SNGL probe situated in a fluid filled borehole, surrounded by a geological formation. Track length estimate of the photon flux from the (n,gamma) interactions as a function of gamma-rays energy was used. Calculations were run on the PC computer with AMD Athlon 1.33 GHz processor. Neutron and photon cross-sections libraries were taken from the MCNP4c package and based mainly on the ENDF/B-6, ENDF/B-5 and MCPLIB02 data. The results of simulated experiment are in conformity with results of the real experiment performed with the use of the main lithology models (sandstones, limestones and dolomite). (authors)

  17. Code conforming determination of cumulative usage factors for general elastic-plastic finite element analyses

    International Nuclear Information System (INIS)

    Rudolph, Juergen; Goetz, Andreas; Hilpert, Roland

    2012-01-01

    The fatigue analysis procedures of several relevant nuclear and conventional design codes (ASME, KTA, EN, AD) for power plant components differentiate between an elastic, a simplified elastic-plastic and an elastic-plastic fatigue check. As a rule, operational load levels will exclude the purely elastic fatigue check. The application of the code procedure of the simplified elastic-plastic fatigue check is common practice. Nevertheless, resulting cumulative usage factors may be overly conservative, mainly due to high code-based plastification penalty factors Ke. As a consequence, the more complex but still code conforming general elastic-plastic fatigue analysis methodology based on non-linear finite element analysis (FEA) is applied for fatigue design as an alternative. The requirements of the FEA and of the material law to be applied have to be clarified in a first step. Current design codes only give rough guidelines on these relevant items. While the procedure for the simplified elastic-plastic fatigue analysis and the associated code passages are based on stress related cycle counting and the determination of pseudo elastic equivalent stress ranges, an adaptation to elastic-plastic strains and strain ranges is required for the elastic-plastic fatigue check. The associated requirements are explained in detail in the paper. If the established and implemented evaluation mechanism (cycle counting according to the peak-and-valley or the rainflow method, calculation of stress ranges from arbitrary load-time histories and determination of cumulative usage factors based on all load events) is to be retained, a conversion of elastic-plastic strains and strain ranges into pseudo elastic stress ranges is required. The algorithm to be applied is described in the paper. It has to be implemented as an extended post-processing operation of the FEA, e.g. by APDL scripts in ANSYS®. Variations of principal stress (strain) directions during the loading
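
    The cumulative usage factor these procedures share is a Miner's-rule sum: after cycle counting, each stress range contributes n_i/N_i, with the allowable cycles N_i read from a design fatigue curve. The curve below is a made-up power law for illustration, not a curve from any of the cited codes:

```python
def allowable_cycles(stress_range_mpa):
    """Hypothetical S-N curve (power law); a real analysis would use the
    applicable design code's fatigue curve instead."""
    return 1e12 / stress_range_mpa ** 3

def cumulative_usage_factor(counted_cycles):
    """Miner's rule: counted_cycles is a list of (stress_range_MPa,
    n_cycles) pairs produced by the cycle counting step."""
    return sum(n / allowable_cycles(s) for s, n in counted_cycles)

# Two counted stress-range bins from a hypothetical load-time history
cuf = cumulative_usage_factor([(400.0, 1000), (200.0, 10000)])
print(round(cuf, 3))  # must stay below 1.0 for the fatigue check to pass
```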

  18. ABAQUS-EPGEN: a general-purpose finite element code. Volume 3. Example problems manual

    International Nuclear Information System (INIS)

    Hibbitt, H.D.; Karlsson, B.I.; Sorensen, E.P.

    1983-03-01

    This volume is the Example and Verification Problems Manual for ABAQUS/EPGEN. Companion volumes are the User's, Theory and Systems Manuals. This volume contains two major parts. The bulk of the manual (Sections 1-8) contains worked examples that are discussed in detail, while Appendix A documents a large set of basic verification cases that provide the fundamental check of the elements in the code. The examples in Sections 1-8 illustrate and verify significant aspects of the program's capability. Most of these problems provide verification, but they have also been chosen to allow discussion of modeling and analysis techniques. Appendix A contains basic verification cases. Each of these cases verifies one element in the program's library. The verification consists of applying all possible load or flux types (including thermal loading of stress elements), and all possible foundation or film/radiation conditions, and checking the resulting force and stress solutions or flux and temperature results. This manual provides program verification. All of the problems described in the manual are run and the results checked, for each release of the program, and these verification results are made available

  19. Guided waves dispersion equations for orthotropic multilayered pipes solved using standard finite elements code.

    Science.gov (United States)

    Predoi, Mihai Valentin

    2014-09-01

    The dispersion curves for hollow multilayered cylinders are prerequisites in any practical guided-wave application on such structures. The equations for homogeneous isotropic materials were established more than 120 years ago. The difficulties in finding numerical solutions to the analytic expressions remain considerable, especially if the materials are orthotropic and visco-elastic, as in the composites used for pipes in recent decades. Among other numerical techniques, the semi-analytical finite element method has proven its capability of solving this problem. Two possibilities exist for modelling the finite element eigenvalue problem: a two-dimensional cross-section model of the pipe, or a radial segment model intersecting the layers between the inner and the outer radius of the pipe. The latter possibility is adopted here, and distinct differential problems are deduced for the longitudinal L(0,n), torsional T(0,n) and flexural F(m,n) modes. Eigenvalue problems are deduced for the three mode classes, offering explicit forms of each coefficient of the matrices used in an available general-purpose finite element code. Comparisons with existing solutions for pipes filled with non-linear viscoelastic fluid or with visco-elastic coatings, as well as for a fully orthotropic hollow cylinder, all prove the reliability and ease of use of this method. Copyright © 2014 Elsevier B.V. All rights reserved.
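At the heart of a semi-analytical finite element (SAFE) dispersion calculation is, at each fixed frequency, a quadratic eigenvalue problem in the wavenumber k of the form (A0 + k·A1 + k²·A2)u = 0, which is linearized into a generalized eigenproblem of twice the size. The sketch below shows only that eigenvalue step; the small random matrices stand in for the stiffness and mass matrices that a real SAFE model would assemble from the radial segment discretization.

```python
import numpy as np
from scipy.linalg import eig

def quadratic_eig(A0, A1, A2):
    """Solve (A0 + k*A1 + k^2*A2) u = 0 via companion linearization."""
    n = A0.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    # With z = [u; k u]:  L z = k R z  reproduces the quadratic problem.
    L = np.block([[Z, I], [-A0, -A1]])
    R = np.block([[I, Z], [Z, A2]])
    k, z = eig(L, R)
    return k, z[:n, :]  # eigenvalues and the u-part of each eigenvector

rng = np.random.default_rng(0)
n = 4
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))
k, U = quadratic_eig(A0, A1, A2)
```

In a dispersion calculation this solve is repeated over a frequency grid, and the real parts of k trace out the dispersion curves while the imaginary parts give attenuation.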

  20. Calculation of normal tissue complication probability and dose-volume histogram reduction schemes for tissues with a critical element architecture

    International Nuclear Information System (INIS)

    Niemierko, Andrzej; Goitein, Michael

    1991-01-01

    The authors investigate a model of normal tissue complication probability for tissues that may be represented by a critical element architecture. They derive formulas for complication probability that apply both to partial volume irradiation and to an arbitrary inhomogeneous dose distribution. The dose-volume isoeffect relationship which is a consequence of a critical element architecture is discussed and compared to the empirical power-law relationship. A dose-volume histogram reduction scheme for a 'pure' critical element model is derived. In addition, a point-based algorithm which does not require precomputation of a dose-volume histogram is derived. The existing published dose-volume histogram reduction algorithms are analyzed. The authors show that the existing algorithms, developed empirically without an explicit biophysical model, have a close relationship to the critical element model at low levels of complication probability. However, it is also shown that they have aspects which are not compatible with a critical element model, and the authors propose a modification to one of them to circumvent its restriction to low complication probabilities. (author). 26 refs.; 7 figs
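The defining property of a critical element (serial) architecture is that the organ is complication-free only if every independent sub-volume survives, which for an inhomogeneous dose distribution gives NTCP = 1 − Π(1 − p1(Dᵢ))^vᵢ over the DVH bins. The sketch below evaluates that product form; the logistic whole-organ dose-response p1 and its parameters are illustrative assumptions, not the paper's fitted model.

```python
import math

def p1(dose, d50=50.0, k=0.25):
    """Hypothetical whole-organ dose-response curve (logistic); assumed form."""
    return 1.0 / (1.0 + math.exp(-k * (dose - d50)))

def ntcp_critical_element(dvh):
    """dvh: list of (dose, fractional_volume) bins summing to <= 1.
    Critical element model: a complication occurs if any element fails,
    so the organ 'survives' with probability prod (1 - p1(d))**v."""
    log_survival = sum(v * math.log(1.0 - p1(d)) for d, v in dvh)
    return 1.0 - math.exp(log_survival)

# Partial-volume example: one third of the organ at 60 Gy, rest unirradiated.
dvh = [(60.0, 1.0 / 3.0), (0.0, 2.0 / 3.0)]
P = ntcp_critical_element(dvh)
```

For a uniform partial-volume irradiation this reduces to the dose-volume isoeffect form NTCP = 1 − (1 − p1(D))^v discussed in the abstract.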

  1. Porting the 3D Gyrokinetic Particle-in-cell Code GTC to the CRAY/NEC SX-6 Vector Architecture: Perspectives and Challenges

    International Nuclear Information System (INIS)

    Ethier, S.; Lin, Z.

    2003-01-01

    Several years of optimization for super-scalar architectures have made it more difficult to port the current version of the 3D particle-in-cell code GTC to the CRAY/NEC SX-6 vector architecture. This paper explains the initial work that has been done to port this code to the SX-6 computer and to optimize the most time-consuming parts. Early performance results are shown and compared to the same tests run on the IBM SP Power 3 and Power 4 machines

  2. Architectural slicing

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2013-01-01

    Architectural prototyping is a widely used practice, concerned with taking architectural decisions through experiments with lightweight implementations. However, many architectural decisions are only taken when systems are already (partially) implemented. This is problematic in the context of architectural prototyping since experiments with full systems are complex and expensive, and thus architectural learning is hindered. In this paper, we propose a novel technique for harvesting architectural prototypes from existing systems, "architectural slicing", based on dynamic program slicing. Given a system and a slicing criterion, architectural slicing produces an architectural prototype that contains the elements in the architecture that are dependent on the elements in the slicing criterion. Furthermore, we present an initial design and implementation of an architectural slicer for Java.

  3. Exposing Hierarchical Parallelism in the FLASH Code for Supernova Simulation on Summit and Other Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Papatheodore, Thomas L. [ORNL; Messer, Bronson [ORNL

    2017-11-01

    Since roughly 100 million years after the Big Bang, the primordial elements hydrogen (H), helium (He), and lithium (Li) have been synthesized into heavier elements by thermonuclear reactions inside stars. The change in stellar composition resulting from these reactions causes stars to evolve over the course of their lives. Most stars burn through their nuclear fuel and end their lives quietly as inert, compact objects, while others end in explosive deaths. These stellar explosions are called supernovae and are among the most energetic events known to occur in our universe. Supernovae themselves further process the matter of their progenitor stars and distribute this material into the interstellar medium of their host galaxies. In the process, they generate ~10^51 ergs of kinetic energy by sending shock waves into their surroundings, thereby contributing to galactic dynamics as well.

  4. Non-linear heat transfer computer code by finite element method

    International Nuclear Information System (INIS)

    Nagato, Kotaro; Takikawa, Noboru

    1977-01-01

    The computer code THETA-2D for the calculation of temperature distribution by the two-dimensional finite element method was made for the analysis of heat transfer in a high temperature structure. Numerical experiments were performed on the numerical integration of the differential equation of heat conduction. The Runge-Kutta method produced an unstable solution; a stable solution was obtained by the β method with a β value of 0.35. In high temperature structures, radiative heat transfer cannot be neglected. To introduce a radiative heat transfer term, a functional neglecting radiative heat transfer was derived first; the radiative term was then added after discretization by the variational method. Five model calculations were carried out with the computer code. A calculation of steady heat conduction was performed: with an estimated initial temperature of 1,000 degrees C, a reasonable heat balance was obtained. In the case of a steady-unsteady temperature calculation, the time integral by THETA-2D turned out to underestimate the enthalpy change. With a one-dimensional model, the temperature distribution was calculated in a structure in which heat conductivity is temperature dependent. A calculation with a model which has a void inside was performed. Finally, a model calculation for a complex system was carried out. (Kato, T.)
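The β method mentioned above is, in the usual reading, the one-parameter (theta) family that blends explicit (β = 0) and fully implicit (β = 1) Euler time stepping; whether that matches THETA-2D's exact definition of β is an assumption. A minimal finite-difference sketch for 1D heat conduction, run with the reported β = 0.35:

```python
import numpy as np

# Sketch of the beta (theta) time-integration scheme for dT/dt = a*d2T/dx2.
# Grid, diffusivity and boundary values are illustrative, not THETA-2D data.

def step_beta(T, alpha, dx, dt, beta):
    n = len(T)
    # Second-difference operator; zero rows at the ends keep the Dirichlet
    # boundary temperatures fixed.
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    A *= alpha / dx ** 2
    lhs = np.eye(n) - dt * beta * A
    rhs = (np.eye(n) + dt * (1.0 - beta) * A) @ T
    return np.linalg.solve(lhs, rhs)

# Bar initially at 0 with both ends held at 100: relaxes toward 100.
T = np.zeros(21)
T[0] = T[-1] = 100.0
for _ in range(2000):
    T = step_beta(T, alpha=1.0, dx=0.05, dt=1e-4, beta=0.35)
```

For β < 0.5 the scheme is only conditionally stable (here the time step satisfies the stability bound), which is consistent with the abstract's observation that the choice of integrator matters.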

  5. FEMAXI-III. An axisymmetric finite element computer code for the analysis of fuel rod performance

    International Nuclear Information System (INIS)

    Ichikawa, M.; Nakajima, T.; Okubo, T.; Iwano, Y.; Ito, K.; Kashima, K.; Saito, H.

    1980-01-01

    For the analysis of local deformation of fuel rods, which is closely related to PCI failure in LWRs, FEMAXI-III has been developed as an improved version based on the essential models of the FEMAXI-II, MIPAC, and FEAST codes. The major features of FEMAXI-III are as follows: elasto-plasticity, creep, pellet cracking, relocation, densification, hot pressing, swelling, fission gas release, and their interrelated effects are considered. Contact conditions between pellet and cladding are treated exactly, with sliding or sticking determined by iteration. Special emphasis is placed on creep and pellet cracking. For the former, an implicit algorithm is applied to improve numerical stability; for the latter, the pellet is assumed to be a non-tension material. The recovery of pellet stiffness under compression is related to initial relocation. Quadratic isoparametric elements are used. The skyline method is applied to solve the linear stiffness equation in order to reduce the required core memory. The basic performance of the code has proven to be satisfactory. (author)

  6. A code for quantitative analysis of light elements in thick samples by PIGE

    International Nuclear Information System (INIS)

    Mateus, R.; Jesus, A.P.; Ribeiro, J.P.

    2005-01-01

    This work presents a code developed for the quantitative analysis of light elements in thick samples by PIGE. The new method avoids the use of standards in the analysis, using a formalism similar to the one used for PIXE analysis, where the excitation function of the nuclear reaction related to the gamma-ray emission is integrated along the depth of the sample. In order to check the validity of the code, we present results for the analysis of lithium, boron, fluorine and sodium in thick samples. For this purpose, the experimental values of the excitation functions of the reactions ⁷Li(p,p'γ)⁷Li, ¹⁰B(p,αγ)⁷Be, ¹⁹F(p,p'γ)¹⁹F and ²³Na(p,p'γ)²³Na were used as input. For stopping power cross-section calculations, the semi-empirical equations of Ziegler et al. and Bragg's rule were used. Agreement between the experimental and the calculated gamma-ray yields was always better than 7.5%
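The standard-free formalism described above amounts to integrating the excitation function over the stopping power, Y(E0) ∝ ∫ σ(E)/S(E) dE from (near) zero up to the beam energy, with the compound stopping power obtained from Bragg's rule as a mass-fraction-weighted sum of elemental values. The sketch below shows that integral with an invented Lorentzian resonance and toy 1/√E elemental stopping powers; none of the numbers are evaluated nuclear or stopping data.

```python
import numpy as np

def sigma(E):
    """Hypothetical excitation function: a single Lorentzian resonance (keV)."""
    E_r, width = 441.0, 12.0
    return 1.0 / (1.0 + ((E - E_r) / (width / 2.0)) ** 2)

def stopping_power(E, compound):
    """Bragg's rule: mass-fraction-weighted sum of elemental stopping powers."""
    return sum(w * S(E) for w, S in compound.values())

# Toy elemental stopping powers with a 1/sqrt(E) trend (illustrative only).
compound = {"Li": (0.27, lambda E: 80.0 / np.sqrt(E)),
            "F":  (0.73, lambda E: 150.0 / np.sqrt(E))}

def thick_target_yield(E0, n=4000):
    """Gamma yield ~ integral of sigma(E)/S(E) from ~0 up to the beam energy."""
    E = np.linspace(1.0, E0, n)
    f = sigma(E) / stopping_power(E, compound)
    return float(np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(E)))

Y = thick_target_yield(1000.0)
```

Because the integrand is positive, the thick-target yield grows monotonically with beam energy and jumps sharply once the beam energy crosses a resonance, which is what makes the integral sensitive to the measured excitation function.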

  7. Wide-Range Motion Estimation Architecture with Dual Search Windows for High Resolution Video Coding

    Science.gov (United States)

    Dung, Lan-Rong; Lin, Meng-Chun

    This paper presents a memory-efficient motion estimation (ME) technique for high-resolution video compression. The main objective is to reduce external memory access, especially where local memory resources are limited; reducing memory accesses in turn saves the notoriously high power consumption. The key to reducing memory accesses is a center-biased algorithm that performs the motion vector (MV) search with the minimum of search data. Considering data reusability, the proposed dual-search-windowing (DSW) approach uses the secondary search window only as an option, per searching necessity. By doing so, the loading of search windows can be alleviated, reducing the required external memory bandwidth. The proposed techniques can save up to 81% of external memory bandwidth and require only 135 MBytes/sec, while the quality degradation is less than 0.2 dB for 720p HDTV clips coded at 8 Mbits/sec.
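A center-biased search of the kind the abstract builds on starts at the zero motion vector and greedily follows the best of the four small-diamond neighbours until the centre wins, so that well-predicted blocks touch very little search data. The block size, SAD cost and termination rule below are standard textbook choices, not taken from the paper; the test frame is a synthetic ramp image chosen so the search surface is well behaved.

```python
import numpy as np

def sad(ref, cur, bx, by, mvx, mvy, B=8):
    """Sum of absolute differences between the current block at (bx, by)
    and the reference block displaced by the candidate MV (mvx, mvy)."""
    r = ref[by + mvy: by + mvy + B, bx + mvx: bx + mvx + B]
    c = cur[by: by + B, bx: bx + B]
    return int(np.abs(r.astype(np.int64) - c.astype(np.int64)).sum())

def center_biased_search(ref, cur, bx, by, max_range=7, B=8):
    """Greedy small-diamond descent starting from the zero motion vector."""
    mv, best = (0, 0), sad(ref, cur, bx, by, 0, 0, B)
    while True:
        improved = False
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cx, cy = mv[0] + dx, mv[1] + dy
            if max(abs(cx), abs(cy)) > max_range:
                continue
            cost = sad(ref, cur, bx, by, cx, cy, B)
            if cost < best:
                best, mv, improved = cost, (cx, cy), True
        if not improved:
            return mv, best

# Synthetic ramp frame; the current block is the reference shifted by (3, 2).
ys, xs = np.mgrid[0:64, 0:64]
ref = xs + 8 * ys
cur = np.zeros_like(ref)
cur[16:24, 16:24] = ref[18:26, 19:27]
mv, cost = center_biased_search(ref, cur, 16, 16)
```

Each descent step reads only four candidate blocks near the current best, which is exactly why such algorithms pair well with a small primary search window and an optional secondary one.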

  8. Architecture for time or transform domain decoding of reed-solomon codes

    Science.gov (United States)

    Shao, Howard M. (Inventor); Truong, Trieu-Kie (Inventor); Hsu, In-Shek (Inventor); Deutsch, Leslie J. (Inventor)

    1989-01-01

    Two pipeline (255,233) RS decoders, one a time domain decoder and the other a transform domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both the time domain decoder and the transform domain decoder have a modified GCD that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time domain decoder uses a Chien search and polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm, in sequence, for the final decoding steps prior to adding the received RS coded message to produce a decoded output message.
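The Chien search mentioned above finds the roots of the errata locator τ(x) by evaluating it at successive powers of the field generator; hardware does this incrementally, but a brute-force root search over GF(2⁸) shows the same idea. The field tables use the common primitive polynomial x⁸+x⁴+x³+x²+1 (0x11d), and the degree-2 locator is built from two arbitrarily chosen roots purely for illustration; this is not the patent's circuit.

```python
# Build exp/log tables for GF(2^8) with primitive polynomial 0x11d.
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):        # doubled table avoids a modulo in gf_mul
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def poly_eval(coeffs, x):
    """Evaluate a polynomial (lowest-order coefficient first) at x."""
    result, power = 0, 1
    for c in coeffs:
        result ^= gf_mul(c, power)
        power = gf_mul(power, x)
    return result

def find_roots(coeffs):
    """Chien-style exhaustive search over all non-zero field elements."""
    return [x for x in range(1, 256) if poly_eval(coeffs, x) == 0]

# Degree-2 locator (x + r1)(x + r2) with two distinct illustrative roots;
# over GF(2), addition is XOR, so the coefficients are [r1*r2, r1^r2, 1].
r1, r2 = EXP[5], EXP[100]
tau = [gf_mul(r1, r2), r1 ^ r2, 1]
roots = find_roots(tau)
```

In a decoder the recovered root positions index the errata locations, and the evaluator polynomial then supplies the corresponding error magnitudes.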

  9. THE MEDIEVAL AND OTTOMAN HAMMAMS OF ALGERIA; ELEMENTS FOR A HISTORICAL STUDY OF BATHS ARCHITECTURE IN NORTH AFRICA

    Directory of Open Access Journals (Sweden)

    Nabila Cherif-Seffadj

    2009-03-01

    Full Text Available Algerian medinas (Islamic cities) have several traditional public baths (hammams). However, these hammams are the least known in the Maghreb countries. The first French archaeological surveys carried out on Islamic monuments and sites in Algeria found few historic baths in medieval towns. All along the highlands route, from Algiers (the capital city of Algeria, located in the north) to Tlemcen (a city in the western part of Algeria), these structures are found in all the cities founded after the Islamic religion expanded into western North Africa. These buildings are often associated with large mosques. In architectural history, these baths illustrate original spatial and organizational compositions in their form and proportions, methods of construction, ornamental elements and the technical skills of their builders. The ancient traditions of bathing interpreted in this building type are an undeniable legacy. They are present through architectural typology and technical implementation reflecting the important architectural heritage of the great Roman cities in Algeria. Furthermore, these traditions and buildings evolved through different eras. Master builders who left Andalusia to seek refuge in the Maghreb countries added construction and ornamentation skills and techniques brought from Muslim Spain, while the Ottomans' contribution to the history of many urban cities is important. Hence, the dual appellation of the hammam as "Moorish bath" and "Turkish bath" in Algeria is the perfect illustration of the evolution of bath architecture in Algeria.

  10. Cracking the omega code: hydraulic architecture of the cycad leaf axis.

    Science.gov (United States)

    Tomlinson, P Barry; Ricciardi, Alison; Huggett, Brett A

    2018-03-05

    The leaf axis of members of the order Cycadales ('cycads') has long been recognized by its configuration of independent vascular bundles that, in transverse section, resemble the Greek letter omega (hence the 'omega pattern'). This provides a useful diagnostic character for the order, especially when applied to paleobotany. The function of this pattern has never been elucidated. Here we provide a three-dimensional analysis and explain the pattern in terms of the hydraulic architecture of the pinnately compound cycad leaf. The genus Cycas was used as a simple model, because each leaflet is supplied by a single vascular bundle. Sequential sectioning was conducted throughout the leaf axis and photographed with a digital camera. Photographs were registered and converted to a cinematic format, which provided an objective method of analysis. The omega pattern in the petiole can be sub-divided into three vascular components, an abaxial 'circle', a central 'column' and two adaxial 'wings', the last being the only direct source of vascular supply to the leaflets. Each leaflet is supplied by a vascular bundle that has divided or migrated directly from the closest wing bundle. There is neither multiplication nor anastomoses of vascular bundles in the other two components. Thus, as one proceeds from base to apex along the leaf axis, the number of vascular bundles in circle and column components is reduced distally by their uniform migration throughout all components. Consequently, the distal leaflets are irrigated by the more abaxial bundles, guaranteeing uniform water supply along the length of the axis. The omega pattern exemplifies one of the many solutions plants have achieved in supplying distal appendages of an axis with a uniform water supply. Our method presents a model that can be applied to other genera of cycads with more complex vascular organization. © The Author(s) 2017. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights

  11. Analysis of lower head failure with simplified models and a finite element code

    Energy Technology Data Exchange (ETDEWEB)

    Koundy, V. [CEA-IPSN-DPEA-SEAC, Service d' Etudes des Accidents, Fontenay-aux-Roses (France); Nicolas, L. [CEA-DEN-DM2S-SEMT, Service d' Etudes Mecaniques et Thermiques, Gif-sur-Yvette (France); Combescure, A. [INSA-Lyon, Lab. Mecanique des Solides, Villeurbanne (France)

    2001-07-01

    The objective of the OLHF (OECD lower head failure) experiments is to characterize the timing, mode and size of lower head failure under high temperature loading and reactor coolant system pressure due to a postulated core melt scenario. Four tests have been performed at Sandia National Laboratories (USA), within the framework of an OECD project. The experimental results have been used to develop and validate predictive analysis models. Within the framework of this project, several finite element calculations were performed. In parallel, two simplified semi-analytical methods were developed in order to get a better understanding of the role of various parameters in the creep phenomenon, e.g. the influence of the behaviour of the lower head material and of its geometrical characteristics on the timing, mode and location of failure. Three-dimensional modelling of crack opening and crack propagation has also been carried out using the finite element code Castem 2000. The aim of this paper is to present the two simplified semi-analytical approaches and to report the status of the 3D crack propagation calculations. (authors)

  12. Modification of the finite element heat and mass transfer code (FEHMN) to model multicomponent reactive transport

    International Nuclear Information System (INIS)

    Viswanathan, H.S.

    1995-01-01

    The finite element code FEHMN is a three-dimensional finite element heat and mass transport simulator that can handle complex stratigraphy and nonlinear processes such as vadose zone flow, heat flow and solute transport. Scientists at LANL have developed hydrologic flow and transport models of the Yucca Mountain site using FEHMN. Previous FEHMN simulations have used an equivalent Kd model for solute transport. In this thesis, FEHMN is modified to make it possible to simulate the transport of a species with a rigorous chemical model. Including the rigorous chemical equations in FEHMN simulations should provide more representative transport models for highly reactive chemical species. A fully kinetic formulation is chosen for the FEHMN reactive transport model. Several methods are available to computationally implement a fully kinetic formulation. Different numerical algorithms are investigated in order to optimize the computational efficiency and memory requirements of the reactive transport model. The best algorithm of those investigated is then incorporated into FEHMN. The algorithm chosen requires the user to place strongly coupled species into groups, which are then solved for simultaneously using FEHMN. The complete reactive transport model is verified over a wide variety of problems and is shown to be working properly. The simulations demonstrate that gas flow and carbonate chemistry can significantly affect ¹⁴C transport at Yucca Mountain. The simulations also show that the new capabilities of FEHMN can be used to refine and buttress already existing Yucca Mountain radionuclide transport studies
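A common way to implement a fully kinetic reactive-transport formulation is operator splitting: each time step first transports the species, then integrates the kinetic reactions cell by cell. The sketch below shows that sequential split for 1D advection with a single first-order reaction A → B; the upwind scheme, rate constant and grid are illustrative stand-ins and not FEHMN internals (which solve coupled species groups simultaneously).

```python
import numpy as np

def transport_step(c, v, dx, dt):
    """First-order upwind advection with a zero-inflow left boundary."""
    c_new = c.copy()
    c_new[1:] -= v * dt / dx * (c[1:] - c[:-1])
    c_new[0] -= v * dt / dx * c[0]
    return c_new

def reaction_step(a, b, k, dt):
    """Exact integration of the linear kinetics dA/dt = -kA, dB/dt = +kA."""
    decayed = a * (1.0 - np.exp(-k * dt))
    return a - decayed, b + decayed

n, dx, dt, v, k = 100, 1.0, 0.4, 1.0, 0.05
A = np.zeros(n)
A[:10] = 1.0                      # initial slug of reactive species A
B = np.zeros(n)
for _ in range(100):              # split step: transport, then react
    A = transport_step(A, v, dx, dt)
    B = transport_step(B, v, dx, dt)
    A, B = reaction_step(A, B, k, dt)
```

The split is attractive because the transport operator is linear and shared between species while the (possibly stiff) chemistry is handled locally; its drawback, the splitting error, is what motivates the grouped simultaneous solves described in the abstract.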

  13. Architectural elements and bounding surfaces in fluvial deposits: anatomy of the Kayenta formation (lower jurassic), Southwest Colorado

    Science.gov (United States)

    Miall, Andrew D.

    1988-03-01

    Three well-exposed outcrops in the Kayenta Formation (Lower Jurassic), near Dove Creek in southwestern Colorado, were studied using lateral profiles, in order to test recent ideas regarding architectural-element analysis and the classification and interpretation of internal bounding surfaces. Examination of bounding surfaces within and between elements in the Kayenta outcrops raises problems in applying the three-fold classification of Allen (1983). Enlarging this classification to a six-fold hierarchy permits the discrimination of surfaces intermediate between Allen's second- and third-order types, corresponding to the upper bounding surfaces of macroforms, and internal erosional "reactivation" surfaces within the macroforms. Examples of the first five types of surface occur in the Kayenta outcrops at Dove Creek. The new classification is offered as a general solution to the problem of describing complex, three-dimensional fluvial sandstone bodies. The Kayenta Formation at Dove Creek consists of a multistorey sandstone body, including the deposits of lateral- and downstream-accreted macroforms. The storeys show no internal cyclicity, neither within individual elements nor through the overall vertical thickness of the formation. Low paleocurrent variance indicates low-sinuosity flow, whereas macroform geometry and orientation suggest low to moderate sinuosity. The many internal minor erosion surfaces draped with mud and followed by intraclast breccias imply frequent rapid stage fluctuation, consistent with variable (seasonal? monsoonal? ephemeral?) flow. The results suggest a fluvial architecture similar to that of the South Saskatchewan River, though with a three-dimensional geometry unlike that interpreted from surface studies of that river.

  14. Elements of regional architecture in the works of architect Ivan Antić

    Directory of Open Access Journals (Sweden)

    Milašinović-Marić Dijana

    2017-01-01

    Full Text Available The body of work of Ivan Antić (Belgrade, 1923-2005), one of the most important Serbian architects, who created his works in the period from 1955 to 1990, represents almost a reification of the ideals of the times he lived in, in formal, structural and substantive terms alike. His work is placed within a rationalistic concept which is essentially experienced as an undisturbed harmony between his personality and contemporary architectural expression. Besides this line of interpretation, however, his architecture also includes examples indicating a reflection on folk tradition, architectural heritage, the primordial and the archetypal, typical for a region. In the context of the body of work of architect Ivan Antić, this paper places particular emphasis on tracking those threads of thinking which are expressed, whether overtly or more subtly, in a series of realized solutions and designs such as the Guard's Home in Dedinje (Belgrade, 1957-1958), the Children's Home (Jermenovci, 1956-1957), the Museum of the Genocide in Šumarice, designed together with I. Raspopović (Kragujevac, 1968-1975), the 'Politika' Cultural Centre (Krupanj, 1976-1981), the '25th May' Sports Center (Belgrade, 1971-1973), and his own house in Lisović near Belgrade. All the abovementioned buildings, as well as numerous others that rank at the top of Serbian architecture, reflect the spirit of the time in which he created them. They clearly indicate the unbreakable bond which exists in architecture between the inherited, the vernacular, the contemporary and the architect's personal attitude.

  15. Numerical experiments in finite element analysis of thermoelastoplastic behaviour of materials. Further developments of the PLASTEF code

    International Nuclear Information System (INIS)

    Basombrio, F.G.; Sarmiento, G.S.

    1980-01-01

    In a previous paper the finite element code PLASTEF for the numerical simulation of thermoelastoplastic behaviour of materials was presented in its general outline. This code employs an initial stress incremental procedure for given histories of loads and temperature. It has been formulated for medium sized computers. The present work is an extension of the previous paper to consider additional aspects of the variable temperature case. Non-trivial tests of this type of situation are described. Finally, details are given of some concrete applications to the prediction of thermoelastoplastic collapse of nuclear fuel element cladding. (author)

  16. SAFE: A computer code for the steady-state and transient thermal analysis of LMR fuel elements

    International Nuclear Information System (INIS)

    Hayes, S.L.

    1993-12-01

    SAFE is a computer code developed for both the steady-state and transient thermal analysis of single LMR fuel elements. The code employs a two-dimensional control-volume based finite difference methodology with fully implicit time marching to calculate the temperatures throughout a fuel element and its associated coolant channel for both the steady-state and transient events. The code makes no structural calculations or predictions whatsoever. It does, however, accept as input structural parameters within the fuel such as the distributions of porosity and fuel composition, as well as heat generation, to allow a thermal analysis to be performed on a user-specified fuel structure. The code was developed with ease of use in mind. An interactive input file generator and material property correlations internal to the code are available to expedite analyses using SAFE. This report serves as a complete design description of the code as well as a user's manual. A sample calculation made with SAFE is included to highlight some of the code's features. Complete input and output files for the sample problem are provided

  17. Criticality analysis of the storage tubes for irradiated fuel elements from the IEA-R1 with the MCNP code

    International Nuclear Information System (INIS)

    Maragni, M.G.; Moreira, J.M.L.

    1992-01-01

    A criticality safety analysis has been carried out for the storage tubes for irradiated fuel elements from the IEA-R1 research reactor. The analysis utilized the MCNP computer code, which allows exact simulations of complex geometries. Aiming at reducing the amount of input data, the fuel element cross-sections have been spatially smeared out. The earth material in the interstices between fuel elements has been approximated conservatively as concrete because its composition was unknown. The storage tubes have been found subcritical for the most adverse conditions (water flooding and un-irradiated fuel elements). A similar analysis with the KENO-IV computer code overestimated the k-eff result but still confirmed the criticality safety of the storage tubes. (author)

  18. Modification of the finite element heat and mass transfer code (FEHM) to model multicomponent reactive transport

    International Nuclear Information System (INIS)

    Viswanathan, H.S.

    1996-08-01

    The finite element code FEHMN, developed by scientists at Los Alamos National Laboratory (LANL), is a three-dimensional finite element heat and mass transport simulator that can handle complex stratigraphy and nonlinear processes such as vadose zone flow, heat flow and solute transport. Scientists at LANL have been developing hydrologic flow and transport models of the Yucca Mountain site using FEHMN. Previous FEHMN simulations have used an equivalent Kd model for solute transport. In this thesis, FEHMN is modified to make it possible to simulate the transport of a species with a rigorous chemical model. Including the rigorous chemical equations in FEHMN simulations should provide more representative transport models for highly reactive chemical species. A fully kinetic formulation is chosen for the FEHMN reactive transport model. Several methods are available to computationally implement a fully kinetic formulation. Different numerical algorithms are investigated in order to optimize the computational efficiency and memory requirements of the reactive transport model. The best algorithm of those investigated is then incorporated into FEHMN. The algorithm chosen requires the user to place strongly coupled species into groups, which are then solved for simultaneously using FEHMN. The complete reactive transport model is verified over a wide variety of problems and is shown to be working properly. The new chemical capabilities of FEHMN are illustrated by using Los Alamos National Laboratory's site-scale model of Yucca Mountain to model two-dimensional, vadose zone ¹⁴C transport. The simulations demonstrate that gas flow and carbonate chemistry can significantly affect ¹⁴C transport at Yucca Mountain. The simulations also prove that the new capabilities of FEHMN can be used to refine and buttress already existing Yucca Mountain radionuclide transport studies

  19. A new three-tier architecture design for multi-sphere neutron spectrometer with the FLUKA code

    Science.gov (United States)

    Huang, Hong; Yang, Jian-Bo; Tuo, Xian-Guo; Liu, Zhi; Wang, Qi-Biao; Wang, Xu

    2016-07-01

    The current commercially available Bonner sphere neutron spectrometer (BSS) has high sensitivity to neutrons below 20 MeV, which leaves it poorly placed to measure neutrons ranging from a few MeV up to 100 MeV. In this paper, moderator layers and an auxiliary material layer were added around ³He proportional counters with the FLUKA code, with a view to improving this response. The results showed that the response peaks for neutrons below 20 MeV gradually shift to a higher energy region and decrease slightly with increasing moderator thickness. In contrast, the response to neutrons above 20 MeV remained very low until auxiliary materials such as copper (Cu), lead (Pb) or tungsten (W) were embedded into the moderator layers. Pb was chosen as the most suitable auxiliary material to design a three-tier architecture multi-sphere neutron spectrometer (NBSS). Through calculation and comparison, the NBSS proved advantageous in terms of response for 5-100 MeV, and its highest response was 35.2 times the response of a polyethylene (PE) ball with the same PE thickness.

  20. Near-fault earthquake ground motion prediction by a high-performance spectral element numerical code

    International Nuclear Information System (INIS)

    Paolucci, Roberto; Stupazzini, Marco

    2008-01-01

    Near-fault effects have been widely recognised to produce specific features of earthquake ground motion, that cannot be reliably predicted by 1D seismic wave propagation modelling, used as a standard in engineering applications. These features may have a relevant impact on the structural response, especially in the nonlinear range, that is hard to predict and to be put in a design format, due to the scarcity of significant earthquake records and of reliable numerical simulations. In this contribution a pilot study is presented for the evaluation of seismic ground-motions in the near-fault region, based on a high-performance numerical code for 3D seismic wave propagation analyses, including the seismic fault, the wave propagation path and the near-surface geological or topographical irregularity. For this purpose, the software package GeoELSE is adopted, based on the spectral element method. The set-up of the numerical benchmark of 3D ground motion simulation in the valley of Grenoble (French Alps) is chosen to study the effect of the complex interaction between basin geometry and radiation mechanism on the variability of earthquake ground motion

  1. A finite-elements approach to the study of functional architecture in skeletal muscle

    NARCIS (Netherlands)

    OTTEN, E; HULLIGER, M

    1994-01-01

    A mathematical model that simulates the mechanical processes inside a skeletal muscle under various conditions of muscle recruitment was formulated. The model is based on the finite-elements approach and simulates both contractile and passive elastic elements. Apart from the classic strategy of

  2. A dual origin of the Xist gene from a protein-coding gene and a set of transposable elements.

    Directory of Open Access Journals (Sweden)

    Eugeny A Elisaphenko

    2008-06-01

    Full Text Available X-chromosome inactivation, which occurs in female eutherian mammals, is controlled by a complex X-linked locus termed the X-inactivation center (XIC. Previously it was proposed that genes of the XIC evolved, at least in part, as a result of pseudogenization of protein-coding genes. In this study we show that the key XIC gene Xist, which displays fragmentary homology to the protein-coding gene Lnx3, emerged de novo in early eutherians by integration of mobile elements which gave rise to simple tandem repeats. The Xist gene promoter region and four out of ten exons found in eutherians retain homology to exons of the Lnx3 gene. The remaining six Xist exons, including those with simple tandem repeats detectable in their structure, have similarity to different transposable elements. Integration of mobile elements into Xist accompanies the overall evolution of the gene and presumably continues in contemporary eutherian species. Additionally, we showed that the combination of remnants of protein-coding sequences and mobile elements is not unique to the Xist gene and is found in other XIC genes producing non-coding nuclear RNA.

  3. PLASTEF: a code for the numerical simulation of thermoelastoplastic behaviour of materials using the finite element method

    International Nuclear Information System (INIS)

    Basombrio, F.G.; Sanchez Sarmiento, G.

    1978-01-01

    A general code for solving two-dimensional thermo-elastoplastic problems in geometries of arbitrary shape using the finite element method is presented. The initial stress incremental procedure was adopted for given histories of load and temperature. Some classical applications are included. (Auth.)
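The incremental procedure the abstract refers to advances the solution in load steps, correcting the elastic trial stress wherever yield is violated. A minimal one-dimensional sketch of such an incremental stress update with linear isotropic hardening; the material constants are illustrative, not taken from PLASTEF:

```python
import math

def stress_update(dstrain, stress, eps_p, E=200e3, H=10e3, sigma_y=250.0):
    """One load increment of 1D rate-independent elastoplasticity with
    linear isotropic hardening (units: MPa). Radial-return correction."""
    trial = stress + E * dstrain                  # elastic predictor
    f = abs(trial) - (sigma_y + H * eps_p)        # yield function
    if f <= 0.0:
        return trial, eps_p                       # step stays elastic
    dgamma = f / (E + H)                          # plastic multiplier
    stress_new = trial - E * dgamma * math.copysign(1.0, trial)
    return stress_new, eps_p + dgamma

# A 0.002 strain increment from a virgin state yields plastically:
s, ep = stress_update(0.002, 0.0, 0.0)
```

After a plastic step the returned stress sits exactly on the hardened yield surface, which is the consistency condition an incremental code must enforce at every integration point.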

  4. Ethical codes. Fig leaf argument, ballast or cultural element for radiation protection?

    International Nuclear Information System (INIS)

    Gellermann, Rainer

    2014-01-01

    The International Radiation Protection Association (IRPA) adopted a Code of Ethics in May 2004 in order to help its members maintain an adequate professional and ethical standard of conduct. Based on this code of ethics, the professional body for radiation protection (Fachverband fuer Strahlenschutz) developed its own ethical code and adopted it in 2005.

  5. Technology of Oak Architectural and Decorative Elements Manufacturing for Iconostasis Recreating in Krestovozdvizhensky Temple in Village of Syrostan, Chelyabinsk region

    Science.gov (United States)

    Yudin, V.

    2017-11-01

    Due to the historical peculiarities of Russia, by the end of the 20th century many temples had been destroyed or had lost their iconostases, which most often were made of wood. When it became necessary to revive the traditional craft, it turned out that it had been lost almost completely, which negatively affects the quality of the restoration of wooden iconostases and their new construction. The article aims to fill the loss of knowledge and skills that make up the content of one of the most interesting types of architectural, monumental, and decorative art through a study of the forms of preserved fragments of a once very rich historical and cultural heritage. Similar studies of wooden iconostases aimed at the recreation of oak decorative elements and at restoration practice have not been performed so far, which makes this work particularly relevant for architectural science. New and relevant technological improvements are not rejected but are skillfully introduced into the arsenal of techniques and means of modern restorers and carvers, to facilitate the recovery of iconostasis construction from a crisis state and the subsequent continuation of the tradition's development. Deep knowledge of the research subject made it possible to use oak decorative elements in recreating the iconostasis of the Krestovozdvizhensky temple in the village of Syrostan, Chelyabinsk region. This material is of scientific and reference value, as well as of economic benefit, for all those who wish to join the noble traditional art of iconostasis making.

  6. Towards the optimization of a gyrokinetic Particle-In-Cell (PIC) code on large-scale hybrid architectures

    International Nuclear Information System (INIS)

    Ohana, N; Lanti, E; Tran, T M; Brunner, S; Hariri, F; Villard, L; Jocksch, A; Gheller, C

    2016-01-01

    With the aim of enabling state-of-the-art gyrokinetic PIC codes to benefit from the performance of recent multithreaded devices, we developed an application from a platform called the “PIC-engine” [1, 2, 3] embedding simplified basic features of the PIC method. The application solves the gyrokinetic equations in a sheared plasma slab using B-spline finite elements up to fourth order to represent the self-consistent electrostatic field. Preliminary studies of the so-called Particle-In-Fourier (PIF) approach, which uses Fourier modes as basis functions in the periodic dimensions of the system instead of the real-space grid, show that this method can be faster than PIC for simulations with a small number of Fourier modes. Similarly to the PIC-engine, multiple levels of parallelism have been implemented using MPI+OpenMP [2] and MPI+OpenACC [1], the latter exploiting the computational power of GPUs without requiring complete code rewriting. It is shown that sorting particles [3] can lead to performance improvement by increasing data locality and vectorizing grid memory access. Weak scalability tests have been successfully run on the GPU-equipped Cray XC30 Piz Daint (at CSCS) up to 4,096 nodes. The reduced time-to-solution will enable more realistic and thus more computationally intensive simulations of turbulent transport in magnetic fusion devices. (paper)
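The particle-sorting optimization mentioned above can be illustrated in a few lines: reordering the particle arrays by cell index makes charge-deposition memory accesses contiguous and vectorizable. A schematic NumPy sketch, not the PIC-engine's actual data layout:

```python
import numpy as np

def sort_by_cell(x, v, cell_size):
    """Reorder particle arrays so particles sharing a grid cell are
    contiguous in memory, improving cache locality of deposition."""
    cells = np.floor(x / cell_size).astype(np.int64)
    order = np.argsort(cells, kind="stable")   # stable sort keeps in-cell order
    return x[order], v[order], cells[order]

x = np.array([0.9, 0.1, 0.5, 1.4])      # particle positions
v = np.array([1.0, -1.0, 0.5, 2.0])     # particle velocities
xs, vs, cs = sort_by_cell(x, v, cell_size=0.5)
```

In a production code the sort is performed periodically rather than every step, since particles drift only slowly out of cell order between sorts.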

  7. Transduplication resulted in the incorporation of two protein-coding sequences into the Turmoil-1 transposable element of C. elegans

    Directory of Open Access Journals (Sweden)

    Pupko Tal

    2008-10-01

    Full Text Available Abstract Transposable elements may acquire unrelated gene fragments into their sequences in a process called transduplication. Transduplication of protein-coding genes is common in plants, but is unknown in animals. Here, we report that the Turmoil-1 transposable element in C. elegans has incorporated two protein-coding sequences into its inverted terminal repeat (ITR) sequences. The ITRs of Turmoil-1 contain a conserved RNA recognition motif (RRM) that originated from the rsp-2 gene and a fragment from the protein-coding region of the cpg-3 gene. We further report that an open reading frame specific to C. elegans may have been created as a result of a Turmoil-1 insertion. Mutations at the 5' splice site of this open reading frame may have reactivated the transduplicated RRM motif. Reviewers: This article was reviewed by Dan Graur and William Martin. For the full reviews, please go to the Reviewers' Reports section.

  8. Evaluation of finite element codes for demonstrating the performance of radioactive material packages in hypothetical accident drop scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Tso, C.F. [Arup (United Kingdom); Hueggenberg, R. [Gesellschaft fuer Nuklear-Behaelter mbH (Germany)

    2004-07-01

    Drop testing and analysis are the two methods for demonstrating the performance of packages in hypothetical drop accident scenarios. The exact purpose of the tests and the analyses, and the relative prominence of the two in the license application, may depend on the Competent Authority and will vary between countries. The Finite Element Method (FEM) is a powerful analysis tool. A reliable finite element (FE) code, when used correctly and appropriately, will allow a package's behaviour to be simulated reliably. With improvements in computing power, and in the sophistication and reliability of FE codes, it is likely that FEM calculations will increasingly be used as evidence of drop test performance when seeking Competent Authority approval. What is lacking at the moment, however, is a standardised method of assessing an FE code in order to determine whether it is sufficiently reliable or pessimistic. To this end, the project Evaluation of Codes for Analysing the Drop Test Performance of Radioactive Material Transport Containers, funded by the European Commission Directorate-General XVII (now Directorate-General for Energy and Transport) and jointly performed by Arup and Gesellschaft fuer Nuklear-Behaelter mbH, was carried out in 1998. The work consisted of three components: a survey of existing finite element software, with a view to finding codes that may be capable of analysing the drop test performance of radioactive material packages and to producing an inventory of them; the development of a set of benchmark problems for evaluating software used to analyse the drop test performance of packages; and the evaluation of the finite element codes by testing them against the benchmarks. This paper presents a summary of this work.

  9. Evaluation of finite element codes for demonstrating the performance of radioactive material packages in hypothetical accident drop scenarios

    International Nuclear Information System (INIS)

    Tso, C.F.; Hueggenberg, R.

    2004-01-01

    Drop testing and analysis are the two methods for demonstrating the performance of packages in hypothetical drop accident scenarios. The exact purpose of the tests and the analyses, and the relative prominence of the two in the license application, may depend on the Competent Authority and will vary between countries. The Finite Element Method (FEM) is a powerful analysis tool. A reliable finite element (FE) code, when used correctly and appropriately, will allow a package's behaviour to be simulated reliably. With improvements in computing power, and in the sophistication and reliability of FE codes, it is likely that FEM calculations will increasingly be used as evidence of drop test performance when seeking Competent Authority approval. What is lacking at the moment, however, is a standardised method of assessing an FE code in order to determine whether it is sufficiently reliable or pessimistic. To this end, the project Evaluation of Codes for Analysing the Drop Test Performance of Radioactive Material Transport Containers, funded by the European Commission Directorate-General XVII (now Directorate-General for Energy and Transport) and jointly performed by Arup and Gesellschaft fuer Nuklear-Behaelter mbH, was carried out in 1998. The work consisted of three components: a survey of existing finite element software, with a view to finding codes that may be capable of analysing the drop test performance of radioactive material packages and to producing an inventory of them; the development of a set of benchmark problems for evaluating software used to analyse the drop test performance of packages; and the evaluation of the finite element codes by testing them against the benchmarks. This paper presents a summary of this work.

  10. Development of a three-dimensional neutron transport code DFEM based on the double finite element method

    International Nuclear Information System (INIS)

    Fujimura, Toichiro

    1996-01-01

    A three-dimensional neutron transport code DFEM has been developed by the double finite element method to analyze reactor cores with complex geometry, such as large fast reactors. The solution algorithm is based on the double finite element method, in which both space and angle finite elements are employed. A reactor core system can be divided into triangular and/or quadrangular prism elements, and the spatial distribution of neutron flux in each element is approximated with linear basis functions. As for the angular variables, various basis functions are applied, and their characteristics were clarified by comparison. In order to enhance the accuracy, a general method is derived to remedy the truncation errors at reflective boundaries, which are inherent in the conventional FEM. An adaptive acceleration method and the source extrapolation method were applied to accelerate the convergence of the iterations. The code structure is outlined and explanations are given on how to prepare input data. A sample input list is shown for reference. The eigenvalues and flux distributions for real-scale fast reactors and the NEA benchmark problems are presented and discussed in comparison with the results of other transport codes. (author)
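The eigenvalue iteration such transport codes accelerate is, at its core, a power (source) iteration for the dominant eigenvalue of the discretized operator. A toy sketch on a small matrix system, where M and F stand in for assembled loss and fission operators and are invented for illustration:

```python
import numpy as np

def power_iteration(M, F, tol=1e-10, max_iter=1000):
    """Solve M phi = (1/k) F phi for the dominant k by source iteration,
    the outer iteration used by k-eigenvalue transport codes."""
    phi = np.ones(M.shape[0])
    k = 1.0
    for _ in range(max_iter):
        phi_new = np.linalg.solve(M, F @ phi / k)   # flux solve, fixed source
        k_new = k * np.sum(F @ phi_new) / np.sum(F @ phi)
        converged = abs(k_new - k) < tol
        k, phi = k_new, phi_new
        if converged:
            break
    return k, phi / np.linalg.norm(phi)

M = np.eye(2)
F = np.array([[2.0, 1.0], [1.0, 2.0]])
k, phi = power_iteration(M, F)   # dominant eigenvalue of M^{-1} F is 3
```

Acceleration schemes like source extrapolation modify this basic loop to damp the slowly converging error modes, but the fixed point they converge to is the same.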

  11. Contribution of transposable elements and distal enhancers to evolution of human-specific features of interphase chromatin architecture in embryonic stem cells.

    Science.gov (United States)

    Glinsky, Gennadi V

    2018-03-01

    Transposable elements have made major evolutionary impacts on creation of primate-specific and human-specific genomic regulatory loci and species-specific genomic regulatory networks (GRNs). Molecular and genetic definitions of human-specific changes to GRNs contributing to development of unique to human phenotypes remain a highly significant challenge. Genome-wide proximity placement analysis of diverse families of human-specific genomic regulatory loci (HSGRL) identified topologically associating domains (TADs) that are significantly enriched for HSGRL and designated rapidly evolving in human TADs. Here, the analysis of HSGRL, hESC-enriched enhancers, super-enhancers (SEs), and specific sub-TAD structures termed super-enhancer domains (SEDs) has been performed. In the hESC genome, 331 of 504 (66%) of SED-harboring TADs contain HSGRL and 68% of SEDs co-localize with HSGRL, suggesting that emergence of HSGRL may have rewired SED-associated GRNs within specific TADs by inserting novel and/or erasing existing non-coding regulatory sequences. Consequently, markedly distinct features of the principal regulatory structures of interphase chromatin evolved in the hESC genome compared to mouse: the SED quantity is 3-fold higher and the median SED size is significantly larger. Concomitantly, the overall TAD quantity is increased by 42% while the median TAD size is significantly decreased (p = 9.11E-37) in the hESC genome. Present analyses illustrate a putative global role for transposable elements and HSGRL in shaping the human-specific features of the interphase chromatin organization and functions, which are facilitated by accelerated creation of novel transcription factor binding sites and new enhancers driven by targeted placement of HSGRL at defined genomic coordinates. A trend toward the convergence of TAD and SED architectures of interphase chromatin in the hESC genome may reflect changes of 3D-folding patterns of linear chromatin fibers designed to enhance both

  12. Connecting Architecture and Implementation

    Science.gov (United States)

    Buchgeher, Georg; Weinreich, Rainer

    Software architectures are still typically defined and described independently of implementation. To avoid architectural erosion and drift, architectural representation needs to be continuously updated and synchronized with system implementation. Existing approaches for architecture representation like informal architecture documentation, UML diagrams, and Architecture Description Languages (ADLs) provide only limited support for connecting architecture descriptions and implementations. Architecture management tools like Lattix, SonarJ, and Sotoarc, as well as UML tools, tackle this problem by extracting architecture information directly from code. This approach works for low-level architectural abstractions like classes and interfaces in object-oriented systems but fails to support architectural abstractions not found in programming languages. In this paper we present an approach for linking and continuously synchronizing a formalized architecture representation to an implementation. The approach is a synthesis of functionality provided by code-centric architecture management and UML tools and higher-level architecture analysis approaches like ADLs.
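The code-centric extraction that such tools perform can be reduced to a simple conformance check: extract module-to-module dependencies, map modules to architectural layers, and test each edge against declared rules. A minimal sketch in which the layer names, modules, and rules are all invented for illustration:

```python
# Allowed dependencies between architectural layers (invented example rules:
# ui may call service, service may call data, data calls nothing).
ALLOWED = {
    "ui": {"service"},
    "service": {"data"},
    "data": set(),
}

def violations(dependencies, layer_of):
    """Return the dependency edges that break the declared layering.

    dependencies -- iterable of (from_module, to_module) pairs
    layer_of     -- mapping from module name to its layer
    """
    bad = []
    for src, dst in dependencies:
        src_layer, dst_layer = layer_of[src], layer_of[dst]
        if src_layer != dst_layer and dst_layer not in ALLOWED[src_layer]:
            bad.append((src, dst))
    return bad

deps = [("ui.view", "service.api"), ("data.repo", "ui.view")]
layers = {"ui.view": "ui", "service.api": "service", "data.repo": "data"}
bad = violations(deps, layers)   # the data -> ui edge inverts the layering
```

Keeping such rules in a machine-checkable form is what lets the architecture description stay synchronized with the implementation instead of eroding.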

  13. Implementation of the full viscoresistive magnetohydrodynamic equations in a nonlinear finite element code

    Energy Technology Data Exchange (ETDEWEB)

    Haverkort, J.W. [Centrum Wiskunde & Informatica, P.O. Box 94079, 1090 GB Amsterdam (Netherlands); Dutch Institute for Fundamental Energy Research, P.O. Box 6336, 5600 HH Eindhoven (Netherlands); Blank, H.J. de [Dutch Institute for Fundamental Energy Research, P.O. Box 6336, 5600 HH Eindhoven (Netherlands); Huysmans, G.T.A. [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Pratt, J. [Dutch Institute for Fundamental Energy Research, P.O. Box 6336, 5600 HH Eindhoven (Netherlands); Koren, B., E-mail: b.koren@tue.nl [Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands)

    2016-07-01

    Numerical simulations form an indispensable tool to understand the behavior of a hot plasma that is created inside a tokamak for providing nuclear fusion energy. Various aspects of tokamak plasmas have been successfully studied through the reduced magnetohydrodynamic (MHD) model. The need for more complete modeling through the full MHD equations is addressed here. Our computational method is presented along with measures against possible problems regarding pollution, stability, and regularity. The problem of ensuring continuity of solutions in the center of a polar grid is addressed in the context of a finite element discretization of the full MHD equations. A rigorous and generally applicable solution is proposed here. Useful analytical test cases are devised to verify the correct implementation of the momentum and induction equation, the hyperdiffusive terms, and the accuracy with which highly anisotropic diffusion can be simulated. A striking observation is that highly anisotropic diffusion can be treated with the same order of accuracy as isotropic diffusion, even on non-aligned grids, as long as these grids are generated with sufficient care. This property is shown to be associated with our use of a magnetic vector potential to describe the magnetic field. Several well-known instabilities are simulated to demonstrate the capabilities of the new method. The linear growth rate of an internal kink mode and a tearing mode are benchmarked against the results of a linear MHD code. The evolution of a tearing mode and the resulting magnetic islands are simulated well into the nonlinear regime. The results are compared with predictions from the reduced MHD model. Finally, a simulation of a ballooning mode illustrates the possibility to use our method as an ideal MHD method without the need to add any physical dissipation.

  14. Nucleoporins as components of the nuclear pore complex core structure and Tpr as the architectural element of the nuclear basket.

    Science.gov (United States)

    Krull, Sandra; Thyberg, Johan; Björkroth, Birgitta; Rackwitz, Hans-Richard; Cordes, Volker C

    2004-09-01

    The vertebrate nuclear pore complex (NPC) is a macromolecular assembly of protein subcomplexes forming a structure of eightfold radial symmetry. The NPC core consists of globular subunits sandwiched between two coaxial ring-like structures of which the ring facing the nuclear interior is capped by a fibrous structure called the nuclear basket. By postembedding immunoelectron microscopy, we have mapped the positions of several human NPC proteins relative to the NPC core and its associated basket, including Nup93, Nup96, Nup98, Nup107, Nup153, Nup205, and the coiled coil-dominated 267-kDa protein Tpr. To further assess their contributions to NPC and basket architecture, the genes encoding Nup93, Nup96, Nup107, and Nup205 were posttranscriptionally silenced by RNA interference (RNAi) in HeLa cells, complementing recent RNAi experiments on Nup153 and Tpr. We show that Nup96 and Nup107 are core elements of the NPC proper that are essential for NPC assembly and docking of Nup153 and Tpr to the NPC. Nup93 and Nup205 are other NPC core elements that are important for long-term maintenance of NPCs but initially dispensable for the anchoring of Nup153 and Tpr. Immunogold-labeling for Nup98 also results in preferential labeling of NPC core regions, whereas Nup153 is shown to bind via its amino-terminal domain to the nuclear coaxial ring linking the NPC core structures and Tpr. The position of Tpr in turn is shown to coincide with that of the nuclear basket, with different Tpr protein domains corresponding to distinct basket segments. We propose a model in which Tpr constitutes the central architectural element that forms the scaffold of the nuclear basket.

  15. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, B.L. [Oak Ridge National Lab., TN (United States); Sartori, E. [OCDE/OECD NEA Data Bank, Issy-les-Moulineaux (France); Viedma, L.G. de [Consejo de Seguridad Nuclear, Madrid (Spain)

    1997-06-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.

  16. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    International Nuclear Information System (INIS)

    Kirk, B.L.; Sartori, E.; Viedma, L.G. de

    1997-01-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management

  17. Development of severe accident analysis code - Development of a finite element code for lower head failure analysis

    Energy Technology Data Exchange (ETDEWEB)

    Huh, Hoon; Lee, Choong Ho; Choi, Tae Hoon; Kim, Hyun Sup; Kim, Se Ho; Kang, Woo Jong; Seo, Chong Kwan [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-08-01

    The study concerns the development of analysis models and computer codes for lower head failure analysis when a severe accident occurs in a nuclear reactor system. Although lower head failure comprises several failure modes, this year's study focused on global rupture, with the collapse pressure and mode obtained by limit analysis and elastic deformation. The behavior of the molten core causes elevation of the temperature in the reactor vessel wall and deterioration of the load-carrying capacity of the reactor vessel. The behavior of the molten core and the heat transfer modes were therefore postulated in several types, and the temperature distributions according to the assumed heat flux modes were calculated. The collapse pressure of a nuclear reactor lower head decreases rapidly with temperature elevation as time passes. The calculation shows that the safety of a nuclear reactor is enhanced, with a larger collapse pressure, when the hot spot is located far from the pole. 42 refs., 2 tabs., 31 figs. (author)

  18. Rn3D: A finite element code for simulating gas flow and radon transport in variably saturated, nonisothermal porous media

    International Nuclear Information System (INIS)

    Holford, D.J.

    1994-01-01

    This document is a user's manual for the Rn3D finite element code. Rn3D was developed to simulate gas flow and radon transport in variably saturated, nonisothermal porous media. The Rn3D model is applicable to a wide range of problems involving radon transport in soil because it can simulate either steady-state or transient flow and transport in one-, two- or three-dimensions (including radially symmetric two-dimensional problems). The porous materials may be heterogeneous and anisotropic. This manual describes all pertinent mathematics related to the governing, boundary, and constitutive equations of the model, as well as the development of the finite element equations used in the code. Instructions are given for constructing Rn3D input files and executing the code, as well as a description of all output files generated by the code. Five verification problems are given that test various aspects of code operation, complete with example input files, FORTRAN programs for the respective analytical solutions, and plots of model results. An example simulation is presented to illustrate the type of problem Rn3D is designed to solve. Finally, instructions are given on how to convert Rn3D to simulate systems other than radon, air, and water
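A verification problem of the kind described, comparing a numerical solution against an analytical one, can be sketched for steady one-dimensional radon diffusion with decay, D C'' − λC + S = 0 with C(0) = 0, whose semi-infinite solution is C(x) = (S/λ)(1 − e^(−x/ℓ)) with ℓ = √(D/λ). The finite-difference stand-in below is illustrative only (Rn3D itself is finite-element, and the coefficients are invented):

```python
import numpy as np

D, lam, S = 1.0, 1.0, 1.0        # diffusion coeff., decay constant, source
L, n = 10.0, 201                 # domain length and number of grid points
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

ell = np.sqrt(D / lam)
analytic = (S / lam) * (1.0 - np.exp(-x / ell))

# Assemble the tridiagonal system for D*C'' - lam*C = -S,
# with C(0) = 0 and C(L) pinned to the analytic far-field value.
A = np.zeros((n, n))
b = np.full(n, -S)
for i in range(1, n - 1):
    A[i, i - 1] = D / h**2
    A[i, i] = -2.0 * D / h**2 - lam
    A[i, i + 1] = D / h**2
A[0, 0] = 1.0
b[0] = 0.0
A[-1, -1] = 1.0
b[-1] = analytic[-1]
C = np.linalg.solve(A, b)        # numerical concentration profile
```

The second-order central difference should reproduce the analytic profile to within O(h²), which is the kind of agreement the manual's verification problems are meant to demonstrate.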

  19. Design and performance of coded aperture optical elements for the CESR-TA x-ray beam size monitor

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, J.P.; Chatterjee, A.; Conolly, C.; Edwards, E.; Ehrlichman, M.P. [Cornell University, Ithaca, NY 14853 (United States); Flanagan, J.W. [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); Department of Accelerator Science, Graduate University for Advanced Studies (SOKENDAI), Tsukuba (Japan); Fontes, E. [Cornell University, Ithaca, NY 14853 (United States); Heltsley, B.K., E-mail: bkh2@cornell.edu [Cornell University, Ithaca, NY 14853 (United States); Lyndaker, A.; Peterson, D.P.; Rider, N.T.; Rubin, D.L.; Seeley, R.; Shanks, J. [Cornell University, Ithaca, NY 14853 (United States)

    2014-12-11

    We describe the design and performance of optical elements for an x-ray beam size monitor (xBSM), a device measuring e{sup +} and e{sup −} beam sizes in the CESR-TA storage ring. The device can measure vertical beam sizes of 10–100μm on a turn-by-turn, bunch-by-bunch basis at e{sup ±} beam energies of ∼2–5GeV. x-rays produced by a hard-bend magnet pass through a single- or multiple-slit (coded aperture) optical element onto a detector. The coded aperture slit pattern and thickness of masking material forming that pattern can both be tuned for optimal resolving power. We describe several such optical elements and show how well predictions of simple models track measured performances. - Highlights: • We characterize optical element performance of an e{sup ±} x-ray beam size monitor. • We standardize beam size resolving power measurements to reference conditions. • Standardized resolving power measurements compare favorably to model predictions. • Key model features include simulation of photon-counting statistics and image fitting. • Results validate a coded aperture design optimized for the x-ray spectrum encountered.
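The resolving power of a coded aperture rests on mask patterns whose cyclic correlation with a decoding array is a delta function, so the source distribution can be recovered by correlation. A toy one-dimensional illustration using a quadratic-residue (Legendre) mask of prime length p ≡ 3 (mod 4) — a standard construction, not the actual xBSM mask:

```python
import numpy as np

p = 7                                              # prime with p % 4 == 3
qr = {(i * i) % p for i in range(1, p)}            # quadratic residues mod p
s = np.array([1] + [1 if i in qr else -1 for i in range(1, p)])  # +/-1 decoder
mask = (s + 1) // 2                                # open (1) / opaque (0) slits

def record(source):
    """Each source bin j casts a cyclically shifted mask shadow on the detector."""
    d = np.zeros(p)
    for j, w in enumerate(source):
        d += w * np.roll(mask, j)
    return d

def decode(detector):
    """Correlate with the +/-1 array; sidelobes cancel exactly for this mask."""
    c = np.array([np.dot(detector, np.roll(s, k)) for k in range(p)])
    return c / ((p + 1) // 2)                      # normalize the peak

src = np.zeros(p)
src[2], src[5] = 1.0, 3.0                          # two point sources
image = decode(record(src))                        # recovers src exactly
```

The practical trade-off discussed in the paper, slit pattern versus mask thickness, amounts to preserving this delta-like correlation at the photon energies actually transmitted by the masking material.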

  20. Investigation of coolant thermal mixing within 28-element CANDU fuel bundles using the ASSERT-PV thermal hydraulics code

    International Nuclear Information System (INIS)

    Lightston, M.F.; Rock, R.

    1996-01-01

    This paper presents the results of a study of the thermal mixing of single-phase coolant in 28-element CANDU fuel bundles under steady-state conditions. The study, which is based on simulations performed using the ASSERT-PV thermal hydraulic code, consists of two main parts. In the first part the various physical mechanisms that contribute to coolant mixing are identified and their impact is isolated via ASSERT-PV simulations. The second part is concerned with development of a preliminary model suitable for use in the fuel and fuel channel code FACTAR to predict the thermal mixing that occurs between flow annuli. (author)

  1. The NPOESS Preparatory Project (NPP) Science Data Segment (SDS) Data Depository and Distribution Element (SD3E) System Architecture

    Science.gov (United States)

    Ho, Evelyn L.; Schweiss, Robert J.

    2008-01-01

    The SD3E supports science mission data assessment by assuring the timely and validated acquisition and subsequent transfer of the NPP Science Mission data to the SDS Elements and the NPP Science Team. The six science elements that interface with the SD3E span the NASA Goddard Space Flight Center (GSFC), the NASA Jet Propulsion Laboratory (JPL), and the University of Wisconsin. As the primary communication vehicle for the science elements and science team, the SD3E has an interface to the operational data providers: the National Environmental Satellite, Data, and Information Service (NESDIS) Interface Data Processing System (IDPS) and the National Oceanic and Atmospheric Administration's (NOAA) Comprehensive Large Array-data Stewardship System (CLASS) Archive Data System (ADS), which are responsible for product generation, and for archive and distribution, respectively. The SD3E is designed to be a semi-customizable and semi-automated system. It provides flexibility and ease of use for the science users in accessing the latest data products by creating a rolling data cache that temporarily stores the products locally before transferring the data to the SDS measurement-based elements for land, ocean, atmosphere, sounder, and ozone. This paper describes the design and architecture of one of the nine SDS elements, the SD3E, and how this system has provided a mechanism for efficient data exchange, helped alleviate network traffic and usage, and contributed to reducing operational costs.

  2. The relationship between 3D bone architectural parameters and elastic moduli of three orthogonal directions predicted from finite elements analysis

    International Nuclear Information System (INIS)

    Park, Kwan Soo; Lee, Sam Sun; Huh, Kyung Hoe; Yi, Wan Jin; Heo, Min Suk; Choi, Soon Chul

    2008-01-01

    To investigate the relationship between 3D bone architectural parameters and direction-related elastic moduli of cancellous bone of the mandibular condyle. Two micro-pigs (Micro-pig®, PWG Genetics Korea) were used. Each pig was about 12 months old and weighed around 44 kg. Thirty-one cylindrical bone specimens were obtained from cancellous bone of the condyles for 3D analysis and measured by micro-computed tomography. The six parameters were trabecular thickness (Tb.Th), bone specific surface (BS/BV), percent bone volume (BV/TV), structure model index (SMI), degree of anisotropy (DA), and 3-dimensional fractal dimension (3DFD). Elastic moduli of three orthogonal directions (superior-inferior (SI), medial-lateral (ML), and anterior-posterior (AP)) were calculated through finite element analysis. The elastic modulus in the superior-inferior direction was higher than those in the other directions. The elastic moduli of the 3 orthogonal directions showed different correlations with the 3D architectural parameters. The elastic moduli of the SI and ML directions showed significant strong to moderate correlations with BV/TV, SMI, and 3DFD. The elastic modulus of cancellous bone of the pig mandibular condyle was highest in the SI direction, and it is supposed that the change of trabeculae into a plate-like structure was mainly driven by an increase of trabeculae in the SI and ML directions.

  3. From structure from motion to historical building information modeling: populating a semantic-aware library of architectural elements

    Science.gov (United States)

    Santagati, Cettina; Lo Turco, Massimiliano

    2017-01-01

    In recent years, we have witnessed a huge diffusion of building information modeling (BIM) approaches in the field of architectural design, although very little research has been undertaken to explore the value, criticalities, and advantages attributable to the application of these methodologies in the cultural heritage domain. Furthermore, the latest developments in digital photogrammetry allow the easy generation of reliable, low-cost, three-dimensional textured models that could be used in BIM platforms to create semantic-aware objects composing a specific library of historical architectural elements. In this case, the transfer from the point cloud to its corresponding parametric model is not trivial, and the level of geometrical abstraction may not be suitable for the scope of the BIM. The aim of this paper is to explore and retrace the milestone works on this crucial topic in order to identify the unsolved issues and to propose and test a simple, practitioner-centered workflow based on the latest available solutions for point cloud management within commercial BIM platforms.

  4. User's Manual for the FEHM Application-A Finite-Element Heat- and Mass-Transfer Code

    Energy Technology Data Exchange (ETDEWEB)

    George A. Zyvoloski; Bruce A. Robinson; Zora V. Dash; Lynn L. Trease

    1997-07-07

    This document is a manual for the use of the FEHM application, a finite-element heat- and mass-transfer computer code that can simulate nonisothermal multiphase multicomponent flow in porous media. The use of this code is applicable to natural-state studies of geothermal systems and groundwater flow. A primary use of the FEHM application will be to assist in the understanding of flow fields and mass transport in the saturated and unsaturated zones below the proposed Yucca Mountain nuclear waste repository in Nevada. The equations of heat and mass transfer for multiphase flow in porous and permeable media are solved in the FEHM application by using the finite-element method. The permeability and porosity of the medium are allowed to depend on pressure and temperature. The code also has provisions for movable air and water phases and noncoupled tracers; that is, tracer solutions that do not affect the heat- and mass-transfer solutions. The tracers can be passive or reactive. The code can simulate two-dimensional, two-dimensional radial, or three-dimensional geometries. In fact, FEHM is capable of describing flow that is dominated in many areas by fracture and fault flow, including the inherently three-dimensional flow that results from permeation to and from faults and fractures. The code can handle coupled heat and mass-transfer effects, such as boiling, dryout, and condensation that can occur in the near-field region surrounding the potential repository and the natural convection that occurs through Yucca Mountain due to seasonal temperature changes. The code is also capable of incorporating the various adsorption mechanisms, ranging from simple linear relations to nonlinear isotherms, needed to describe the very complex transport processes at Yucca Mountain. This report outlines the uses and capabilities of the FEHM application, initialization of code variables, restart procedures, and error processing. The report describes all the data files, the input data
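
    The noncoupled (passive) tracer capability described above can be illustrated in miniature: the flow field is solved first and then frozen, and the tracer is advected through it without feeding back on the heat- and mass-transfer solution. A sketch of that one-way coupling, using first-order upwind advection on a 1-D grid (all values are illustrative assumptions, not FEHM internals):

```python
import numpy as np

# Frozen flow solution: a constant Darcy velocity taken from a prior flow solve.
n, dx, dt = 100, 1.0, 0.5   # cells, grid spacing (m), time step (s)
v = 1.0                     # m/s; CFL = v*dt/dx = 0.5 (stable for explicit upwind)

c = np.zeros(n)             # passive tracer concentration
c[0] = 1.0                  # tracer injected at the inflow boundary

for _ in range(80):         # advect the tracer; the flow field never changes
    c[1:] -= v * dt / dx * (c[1:] - c[:-1])   # first-order upwind update
    c[0] = 1.0                                # hold inlet concentration
```

Because the scheme is monotone, the tracer stays bounded in [0, 1] and the front smears numerically as it travels downstream, without ever altering the (frozen) flow solution.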

  5. Implementation of second moment closure turbulence model for incompressible flows in the industrial finite element code N3S

    International Nuclear Information System (INIS)

    Pot, G.; Laurence, D.; Rharif, N.E.; Leal de Sousa, L.; Compe, C.

    1995-12-01

    This paper deals with the introduction of a second moment closure turbulence model (Reynolds Stress Model) in an industrial finite element code, N3S, developed at Electricite de France. The numerical implementation of the model in N3S will be detailed in 2D and 3D. Some details are given concerning the finite element computations and solvers. Then some results will be given, including a comparison between the standard k-ε model, the R.S.M. model and experimental data for some test cases. (authors). 22 refs., 3 figs

  6. A non-linear, finite element, heat conduction code to calculate temperatures in solids of arbitrary geometry

    International Nuclear Information System (INIS)

    Tayal, M.

    1987-01-01

    Structures often operate at elevated temperatures. Temperature calculations are needed so that the design can accommodate thermally induced stresses and material changes. A finite element computer code called FEAT has been developed to calculate temperatures in solids of arbitrary shapes. FEAT solves the classical equation for steady state conduction of heat. The solution is obtained for two-dimensional (plane or axisymmetric) or for three-dimensional problems. Gap elements are used to simulate interfaces between neighbouring surfaces. The code can model: conduction; internal generation of heat; prescribed convection to a heat sink; prescribed temperatures at boundaries; prescribed heat fluxes on some surfaces; and temperature-dependence of material properties like thermal conductivity. The user has the option of specifying the detailed variation of thermal conductivity with temperature. For convenience to the nuclear fuel industry, the user can also opt for pre-coded values of thermal conductivity, which are obtained from the MATPRO data base (sponsored by the U.S. Nuclear Regulatory Commission). The finite element method makes FEAT versatile and enables it to accurately accommodate complex geometries. The optional link to MATPRO makes it convenient for the nuclear fuel industry to use FEAT, without loss of generality. Special numerical techniques make the code inexpensive to run for the types of material non-linearity often encountered in the analysis of nuclear fuel. The code, however, is general, and can be used for other components of the reactor, or even for non-nuclear systems. The predictions of FEAT have been compared against several analytical solutions. The agreement is usually better than 5%. Thermocouple measurements show that the FEAT predictions are consistent with measured changes in temperatures in simulated pressure tubes. FEAT was also found to predict well the axial variations in temperature in the end-pellets (UO2) of two fuel elements irradiated
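
    The temperature-dependence of thermal conductivity noted above makes the conduction equation nonlinear; one standard treatment is Picard iteration, re-evaluating k(T) from the previous temperature iterate until the field stops changing. A 1-D sketch of that idea (not FEAT itself; the conductivity law, source strength and mesh are invented for illustration):

```python
import numpy as np

# Steady 1-D conduction -d/dx(k(T) dT/dx) = q on a rod with T = 0 at both ends,
# solved with linear finite elements and Picard iteration on k(T).
L, n, q = 1.0, 40, 1.0e4                  # length (m), elements, source (W/m^3)
k = lambda T: 2.0 * (1.0 + 1.0e-3 * T)    # assumed conductivity law, W/(m K)

x = np.linspace(0.0, L, n + 1)
T = np.zeros(n + 1)                       # initial guess
for _ in range(50):                       # Picard loop: lag k(T) one iteration
    K = np.zeros((n + 1, n + 1))
    f = np.zeros(n + 1)
    for e in range(n):
        h = x[e + 1] - x[e]
        ke = k(0.5 * (T[e] + T[e + 1]))   # conductivity at element mean T
        K[e:e+2, e:e+2] += ke / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        f[e:e+2] += q * h / 2.0           # consistent load for constant q
    # Dirichlet BCs T(0) = T(L) = 0 by row/column elimination
    K[0, :], K[:, 0], K[0, 0], f[0] = 0.0, 0.0, 1.0, 0.0
    K[-1, :], K[:, -1], K[-1, -1], f[-1] = 0.0, 0.0, 1.0, 0.0
    T_new = np.linalg.solve(K, f)
    if np.max(np.abs(T_new - T)) < 1.0e-8:
        T = T_new
        break
    T = T_new
```

With the assumed law k(T) = k0(1 + βT), the converged peak can be checked against the Kirchhoff-transform result: T_max satisfies T + (β/2)T² = qL²/(8 k0), i.e. T_max = 500 here, below the constant-k value of 625.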

  7. Analysis of experiments of the University of Hannover with the Cathare code on fluid dynamic effects in the fuel element top nozzle area during refilling and reflooding

    International Nuclear Information System (INIS)

    Bestion, D.

    1989-11-01

    The CATHARE code is used to calculate the experiments of the University of Hannover concerning the flooding limit at the fuel element top nozzle area. Some qualitative and quantitative conclusions are drawn on both the actual fluid dynamics observed in the experiments and on the corresponding code behaviour. Shortcomings of the present models are clearly identified. New developments are proposed which should extend the code capabilities.

  8. Full scale seismic simulation of a nuclear reactor with parallel finite element analysis code for assembled structure

    International Nuclear Information System (INIS)

    Yamada, Tomonori

    2010-01-01

    The safety requirements of nuclear power plants attract much attention nowadays. With growing computing power, numerical simulation is one of the key technologies for meeting these requirements. The Center for Computational Science and e-Systems of the Japan Atomic Energy Agency has been developing a finite element analysis code for assembled structures to accurately evaluate the structural integrity of a nuclear power plant in its entirety under seismic events. Because a nuclear power plant is a huge assembled structure with tens of millions of mechanical components, the finite element model of each component is assembled into one structure and the non-conforming meshes of the mechanical components are bonded together inside the code. The main technique for bonding these mechanical components is a triple sparse matrix multiplication involving the multipoint constraints and the global stiffness matrix. In our code, this procedure is conducted component by component, so that the working memory size and computing time for the multiplication remain within reach of current computing environments. As an illustrative example, a seismic simulation of a real nuclear reactor, the High Temperature Engineering Test Reactor located at the Oarai Research and Development Center of JAEA, with 80 major mechanical components was conducted. Our code successfully simulated detailed elasto-plastic deformation of the nuclear reactor, and its computational performance was investigated. (author)
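
    The bonding step described above reduces the assembled system through a triple sparse matrix product of the multipoint-constraint transformation with the global stiffness matrix. A generic sketch of that operation, T' K T, using a small invented system and a single averaging constraint (this is an illustration of the algebra, not JAEA's implementation):

```python
import numpy as np
import scipy.sparse as sp

# An invented symmetric "global" stiffness matrix with 8 DOFs.
K = sp.random(8, 8, density=0.4, random_state=0, format="csr")
K = K + K.T

# Hypothetical multipoint constraint: slave DOF 7 is the average of master
# DOFs 0 and 1, so u_full = T @ u_reduced maps 7 independent DOFs to 8.
rows, cols, vals = list(range(7)), list(range(7)), [1.0] * 7
rows += [7, 7]; cols += [0, 1]; vals += [0.5, 0.5]
T = sp.csr_matrix((vals, (rows, cols)), shape=(8, 7))

# Triple sparse product: the reduced stiffness stays sparse and symmetric.
K_red = (T.T @ K @ T).tocsr()
```

Doing this product component by component, as the abstract describes, bounds the peak memory because only one component's constraint block is expanded at a time.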

  9. Thermal hydraulic calculation of wire-wrapped bundles using a finite element method. Thesee code

    International Nuclear Information System (INIS)

    Rouzaud, P.; Gay, B.; Verviest, R.

    1981-07-01

    The physical and mathematical models used in the THESEE code now under development by the CEA/CEN Cadarache are presented. The objective of this code is to predict the fine three-dimensional temperature field in the sodium in a wire-wrapped rod bundle. Numerical results of THESEE are compared with measurements obtained by Belgonucleaire in 1976 in a sodium-cooled seven-rod bundle

  10. The Genomic Architecture of Novel Simulium damnosum Wolbachia Prophage Sequence Elements and Implications for Onchocerciasis Epidemiology

    Directory of Open Access Journals (Sweden)

    James L. Crainey

    2017-05-01

    Research interest in Wolbachia is growing as new discoveries and technical advancements reveal the public health importance of both naturally occurring and artificial infections. Improved understanding of the Wolbachia bacteriophages (WOs) WOcauB2 and WOcauB3, belonging to a sub-group of four WOs encoding serine recombinases group 1 (sr1WOs), has enhanced the prospect of novel tools for the genetic manipulation of Wolbachia. The basic biology of sr1WOs, including host range and mode of genomic integration, is, however, still poorly understood. Very few sr1WOs have been described, with two such elements putatively resulting from integrations at the same Wolbachia genome loci, about 2 kb downstream from the FtsZ cell-division gene. Here, we characterize the DNA sequence flanking the FtsZ gene of wDam, a genetically distinct line of Wolbachia isolated from the West African onchocerciasis vector Simulium squamosum E. Using Roche 454 shot-gun and Sanger sequencing, we have resolved >32 kb of WO prophage sequence into three contigs representing three distinct prophage elements. Spanning ≥36 distinct WO open reading frame gene sequences, these prophage elements correspond roughly to three different WO modules: a serine recombinase and replication module (sr1RRM), a head and base-plate module, and a tail module. The sr1RRM module contains replication genes and a Holliday junction recombinase and is unique to the sr1 group of WOs. In the extreme terminal of the tail module there is a SpvB protein homolog, believed to have insecticidal properties and proposed to have a role in how Wolbachia parasitize their insect hosts. We propose that these wDam prophage modules all derive from a single WO genome, which we have named here sr1WOdamA1. The best-match database sequence for all of our sr1WOdamA1-predicted gene sequences was annotated as Wolbachia or Wolbachia phage material sourced from an arthropod. Clear evidence of exchange between sr1WOdamA1 and other Wolbachia

  11. Trabecular architecture of the manual elements reflects locomotor patterns in primates.

    Science.gov (United States)

    Matarazzo, Stacey A

    2015-01-01

    The morphology of trabecular bone has proven sensitive to loading patterns in the long bones and metacarpal heads of primates. It is therefore expected that differences should also be seen in the manual digits of primates that practice different methods of locomotion. Primate proximal and middle phalanges are load-bearing elements that are held in different postures and experience different mechanical strains during suspension, quadrupedalism, and knuckle walking. Micro-CT scans of the middle phalanx, proximal phalanx and metacarpal head of the third ray were used to examine the pattern of trabecular orientation in Pan, Gorilla, Pongo, Hylobates and Macaca. Several zones, i.e., the proximal ends of both phalanges and the metacarpal heads, were capable of distinguishing between knuckle-walking, quadrupedal, and suspensory primates. Orientation and shape seem to be the primary distinguishing factors, but differences in bone volume, isotropy index, and degree of anisotropy were also seen across the included taxa. Suspensory primates show primarily proximodistal alignment in all zones, and quadrupeds a more palmar-dorsal orientation in several zones. Knuckle walkers are characterized by proximodistal alignment in the proximal ends of the phalanges and a palmar-dorsal alignment in the distal ends and metacarpal heads. These structural differences may be used to infer the locomotor propensities of extinct primate taxa.
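
    The degree of anisotropy and isotropy index mentioned above are commonly derived from the eigenvalues of a mean-intercept-length (MIL) fabric tensor of the trabecular volume. A sketch of those two measures with an invented tensor (not data from the study):

```python
import numpy as np

# Hypothetical symmetric MIL fabric tensor for one trabecular sample; the
# dominant eigenvector gives the primary trabecular orientation.
M = np.array([[1.30, 0.05, 0.00],
              [0.05, 0.90, 0.02],
              [0.00, 0.02, 0.70]])     # assumed values, for illustration only

eig = np.sort(np.linalg.eigvalsh(M))   # ascending eigenvalues l3 <= l2 <= l1
DA = eig[-1] / eig[0]                  # degree of anisotropy: 1 = isotropic
isotropy_index = eig[0] / eig[-1]      # reciprocal convention: 1 = isotropic
```

Taxa with strongly aligned trabeculae (e.g. a proximodistal fabric) would show a larger DA and a smaller isotropy index than taxa with more uniform architecture.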

  12. Modeling Architectural Patterns Using Architectural Primitives

    NARCIS (Netherlands)

    Zdun, Uwe; Avgeriou, Paris

    2005-01-01

    Architectural patterns are a key point in architectural documentation. Regrettably, there is poor support for modeling architectural patterns, because the pattern elements are not directly matched by elements in modeling languages, and, at the same time, patterns support an inherent variability that

  13. Implementation of advanced finite element technology in structural analysis computer codes

    International Nuclear Information System (INIS)

    Kohli, T.D.; Wiley, J.W.; Koss, P.W.

    1975-01-01

    Advances in finite element technology over the last several years have been rapid and have largely outstripped the ability of general purpose programs in the public domain to assimilate them. As a result, it has become the burden of the structural analyst to incorporate these advances himself. This paper discusses the implementation and extension of specific technological advances in Bechtel structural analysis programs. In general these advances belong in two categories: (1) the finite elements themselves and (2) equation solution algorithms. Improvements in the finite elements involve increased accuracy of the elements and extension of their applicability to various specialized modelling situations. Improvements in solution algorithms have been almost exclusively aimed at expanding problem solving capacity. (Auth.)

  14. The Wims-Traca code for the calculation of fuel elements. User's manual and input data

    International Nuclear Information System (INIS)

    Anhert, C.

    1980-01-01

    The set of modifications and new options developed for the Wims-D code is explained. The input data of the new version, Wims-Traca, are described. The printed output of results is also explained. The contents and the sources of the nuclear data in the basic library are described. (author)

  15. Development of a finite element code to solve thermo-hydro-mechanical coupling and simulate induced seismicity.

    Science.gov (United States)

    María Gómez Castro, Berta; De Simone, Silvia; Rossi, Riccardo; Larese De Tetto, Antonia; Carrera Ramírez, Jesús

    2015-04-01

    Coupled thermo-hydro-mechanical modeling is essential for CO2 storage because (1) large amounts of CO2 will be injected, which will cause large pressure buildups and might compromise the mechanical stability of the caprock seal, and (2) the most efficient technique for injecting CO2 is cold injection, which induces thermal stress changes in the reservoir and seal. These stress variations can cause mechanical failure in the caprock and can also trigger induced earthquakes. To properly assess these effects, numerical models that take into account the short- and long-term thermo-hydro-mechanical coupling are an important tool. For this purpose, there is a growing need for codes that couple these processes efficiently and accurately. This work involves the development of an open-source finite element code, written in C++, for correctly modeling the effects of thermo-hydro-mechanical coupling in the field of CO2 storage and in other fields related to these processes (geothermal energy systems, fracking, nuclear waste disposal, etc.), and capable of simulating induced seismicity. In order to be able to simulate earthquakes, a new lower-dimensional interface element will be implemented in the code to represent preexisting fractures, across which pressure continuity will be imposed.

  16. Finite element code FENIA verification and application for 3D modelling of thermal state of radioactive waste deep geological repository

    Science.gov (United States)

    Butov, R. A.; Drobyshevsky, N. I.; Moiseenko, E. V.; Tokarev, U. N.

    2017-11-01

    The verification of the FENIA finite element code on some problems and an example of its application are presented in the paper. The code is being developed for 3D modelling of the thermal, hydrodynamical and mechanical (THM) problems related to the functioning of deep geological repositories. Verification of the code on two analytical problems has been performed. The first is a point heat source with exponential heat decrease; the second, a linear heat source with similar behaviour. Analytical solutions have been obtained by the authors. These problems were chosen because they reflect the processes influencing the thermal state of a deep geological repository of radioactive waste. Verification was performed on several meshes with different resolutions, and good convergence between the analytical and numerical solutions was achieved. The application of the FENIA code is illustrated by 3D modelling of the thermal state of a prototypic deep geological repository of radioactive waste. The repository is designed for disposal of radioactive waste in rock at a depth of several hundred meters with no intention of later retrieval. Vitrified radioactive waste is placed in containers, which are placed in vertical boreholes. The residual decay heat of the radioactive waste leads to heating of the containers, engineered safety barriers and host rock. Maximum temperatures and the corresponding times of their establishment have been determined.
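
    The first verification problem above, a point heat source with exponentially decreasing power in an infinite medium, has a closed-form kernel: the temperature rise is the Duhamel convolution of the instantaneous point-source solution. A numerical sketch of that convolution (all parameter values are illustrative assumptions, not those used for FENIA):

```python
import numpy as np

# Illustrative material and source parameters (assumed, not FENIA's).
rho_c = 2.0e6     # volumetric heat capacity, J/(m^3 K)
alpha = 1.0e-6    # thermal diffusivity, m^2/s  (conductivity k = alpha * rho_c)
Q0    = 1.0e3     # initial source power, W
lam   = 1.0e-9    # power decay constant, 1/s: Q(t) = Q0 * exp(-lam * t)

def temperature_rise(r, t, n=4000):
    """Temperature rise (K) at radius r (m), time t (s) after switch-on."""
    # Integrate over the age s = t - tau of each released heat pulse, on a
    # log grid because the point-source kernel peaks near s ~ r^2 / (6 alpha).
    s = np.logspace(-3, np.log10(t), n)
    kernel = (Q0 * np.exp(-lam * (t - s))
              / (rho_c * (4.0 * np.pi * alpha * s) ** 1.5)
              * np.exp(-r * r / (4.0 * alpha * s)))
    return float(np.sum(0.5 * (kernel[1:] + kernel[:-1]) * np.diff(s)))

dT = temperature_rise(0.5, 3.0e7)    # roughly one year of heating at r = 0.5 m
```

For slow decay the result approaches the steady continuous-source value Q/(4πkr) scaled by the erfc penetration factor, which is a convenient sanity check for a mesh-convergence study.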

  17. Modelling the attenuation in the ATHENA finite elements code for the ultrasonic testing of austenitic stainless steel welds.

    Science.gov (United States)

    Chassignole, B; Duwig, V; Ploix, M-A; Guy, P; El Guerjouma, R

    2009-12-01

    Multipass welds made in austenitic stainless steel, in the primary circuit of nuclear power plants with pressurized water reactors, are characterized by an anisotropic and heterogeneous structure that disturbs ultrasonic propagation and makes ultrasonic non-destructive testing difficult. The ATHENA 2D finite element simulation code was developed to help understand the various physical phenomena at play. In this paper, we shall describe the attenuation model implemented in this code to account for the wave scattering phenomenon in polycrystalline materials. This model is based in particular on the optimization of two tensors that characterize the material on the basis of experimental values of ultrasonic velocities and attenuation coefficients. Three experimental configurations, two of which are representative of the industrial weld assessment case, are studied with a view to validating the model through comparison with the simulation results. We shall thus provide quantitative proof that taking attenuation into account in the ATHENA code dramatically improves the results in terms of echo amplitude. The combination of the code with a detailed characterization of the weld structure constitutes a remarkable breakthrough in the interpretation of ultrasonic testing on this type of component.

  18. Relational Architecture

    DEFF Research Database (Denmark)

    Reeh, Henrik

    2018-01-01

    in a scholarly institution (element #3), as well as the certified PhD scholar (element #4) and the architectural profession, notably its labour market (element #5). This first layer outlines the contemporary context which allows architectural research to take place in a dynamic relationship to doctoral education...... a human and institutional development going on since around 1990 when the present PhD institution was first implemented in Denmark. To be sure, the model is centred around the PhD dissertation (element #1). But it involves four more components: the PhD candidate (element #2), his or her supervisor...... and interrelated fields in which history, place, and sound come to emphasize architecture’s relational qualities rather than the apparent three-dimensional solidity of constructed space. A third layer of relational architecture is at stake in the professional experiences after the defence of the authors...

  19. Algorithms and data structures for massively parallel generic adaptive finite element codes

    KAUST Repository

    Bangerth, Wolfgang

    2011-12-01

    Today's largest supercomputers have 100,000s of processor cores and offer the potential to solve partial differential equations discretized by billions of unknowns. However, the complexity of scaling to such large machines and problem sizes has so far prevented the emergence of generic software libraries that support such computations, although these would lower the threshold of entry and enable many more applications to benefit from large-scale computing. We are concerned with providing this functionality for mesh-adaptive finite element computations. We assume the existence of an "oracle" that implements the generation and modification of an adaptive mesh distributed across many processors, and that responds to queries about its structure. Based on querying the oracle, we develop scalable algorithms and data structures for generic finite element methods. Specifically, we consider the parallel distribution of mesh data, global enumeration of degrees of freedom, constraints, and postprocessing. Our algorithms remove the bottlenecks that typically limit large-scale adaptive finite element analyses. We demonstrate scalability of complete finite element workflows on up to 16,384 processors. An implementation of the proposed algorithms, based on the open source software p4est as mesh oracle, is provided under an open source license through the widely used deal.II finite element software library. © 2011 ACM 0098-3500/2011/12-ART10 $10.00.
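
    One of the bottlenecks addressed above, the global enumeration of degrees of freedom across processors, reduces at its core to an exclusive prefix sum over the per-rank counts of locally owned DoFs (an MPI_Exscan in a real implementation). A serial sketch of the numbering scheme (the rank counts are invented; this is not deal.II code):

```python
# Hypothetical DoFs owned by MPI ranks 0..3 after mesh partitioning.
owned_counts = [5, 3, 7, 4]

# Exclusive prefix sum: each rank's offset is the total owned by lower ranks.
offsets, running = [], 0
for c in owned_counts:
    offsets.append(running)
    running += c

# Rank r numbers its DoFs offsets[r] .. offsets[r] + owned_counts[r] - 1,
# giving globally unique, contiguous indices without any central coordinator.
ranges = [(off, off + c) for off, c in zip(offsets, owned_counts)]
```

Ghost DoFs on inter-processor boundaries are then resolved by asking the owning rank for the indices it assigned, which is the communication step the paper's algorithms organize scalably.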

  20. Parallel Finite Element Particle-In-Cell Code for Simulations of Space-charge Dominated Beam-Cavity Interactions

    International Nuclear Information System (INIS)

    Candel, A.; Kabel, A.; Ko, K.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.

    2007-01-01

    Over the past years, SLAC's Advanced Computations Department (ACD) has developed the parallel finite element (FE) particle-in-cell code Pic3P (Pic2P) for simulations of beam-cavity interactions dominated by space-charge effects. As opposed to standard space-charge dominated beam transport codes, which are based on the electrostatic approximation, Pic3P (Pic2P) includes space-charge, retardation and boundary effects as it self-consistently solves the complete set of Maxwell-Lorentz equations using higher-order FE methods on conformal meshes. Use of efficient, large-scale parallel processing allows for the modeling of photoinjectors with unprecedented accuracy, aiding the design and operation of the next-generation of accelerator facilities. Applications to the Linac Coherent Light Source (LCLS) RF gun are presented

  1. Changes in cis-regulatory elements of a key floral regulator are associated with divergence of inflorescence architectures.

    Science.gov (United States)

    Kusters, Elske; Della Pina, Serena; Castel, Rob; Souer, Erik; Koes, Ronald

    2015-08-15

    Higher plant species diverged extensively with regard to the moment (flowering time) and position (inflorescence architecture) at which flowers are formed. This seems largely caused by variation in the expression patterns of conserved genes that specify floral meristem identity (FMI), rather than changes in the encoded proteins. Here, we report a functional comparison of the promoters of homologous FMI genes from Arabidopsis, petunia, tomato and Antirrhinum. Analysis of promoter-reporter constructs in petunia and Arabidopsis, as well as complementation experiments, showed that the divergent expression of leafy (LFY) and the petunia homolog aberrant leaf and flower (ALF) results from alterations in the upstream regulatory network rather than cis-regulatory changes. The divergent expression of unusual floral organs (UFO) from Arabidopsis, and the petunia homolog double top (DOT), however, is caused by the loss or gain of cis-regulatory promoter elements, which respond to trans-acting factors that are expressed in similar patterns in both species. Introduction of pUFO:UFO causes no obvious defects in Arabidopsis, but in petunia it causes the precocious and ectopic formation of flowers. This provides an example of how a change in a cis-regulatory region can account for a change in the plant body plan. © 2015. Published by The Company of Biologists Ltd.

  2. ABCXYZ: vector potential (A) and magnetic field (B) code (C) for Cartesian (XYZ) geometry using general current elements

    International Nuclear Information System (INIS)

    Anderson, D.V.; Breazeal, J.; Finan, C.H.; Johnston, B.M.

    1976-01-01

    ABCXYZ is a computer code for obtaining the Cartesian components of the vector potential and the magnetic field on an observation grid from an arrangement of current-carrying wires. Arbitrary combinations of straight line segments, arcs, and loops are allowed in the specification of the currents. Arbitrary positions and orientations of the current-carrying elements are also allowed. Specification of the wire diameter permits the computation of well-defined fields, even in the interiors of the conductors. An optional feature generates magnetic field lines. Extensive graphical and printed output is available to the user, including contour, grid-line, and field-line plots. 12 figures, 1 table
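
    The fields that such a code assembles from general current elements follow from the Biot-Savart law; for a straight thin-wire segment the integral has a closed form. A sketch of a single-segment evaluation (not ABCXYZ's routines; the geometry and current are illustrative, and the thin-wire form ignores the finite wire diameter the code supports):

```python
import numpy as np

MU0 = 4.0e-7 * np.pi   # vacuum permeability, T m/A

def segment_field(a, b, p, current):
    """Thin-wire Biot-Savart field at point p from a straight segment a->b."""
    a, b, p = map(np.asarray, (a, b, p))
    u = (b - a) / np.linalg.norm(b - a)      # unit vector along the current
    r1, r2 = p - a, p - b
    e = np.cross(u, r1)                      # azimuthal direction times distance
    d = np.linalg.norm(e)                    # perpendicular distance to the axis
    if d < 1e-12:
        return np.zeros(3)                   # on the wire axis the field vanishes
    cos1 = np.dot(u, r1) / np.linalg.norm(r1)
    cos2 = np.dot(u, r2) / np.linalg.norm(r2)
    return MU0 * current / (4.0 * np.pi * d) * (cos1 - cos2) * (e / d)

# A long segment approximates an infinite wire: |B| -> mu0 I / (2 pi d).
B = segment_field([0, 0, -1000.0], [0, 0, 1000.0], [1.0, 0, 0], 1.0)
```

Summing this routine over many segments (plus analogous arc and loop formulas) is the kind of superposition such a code performs on every grid point.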

  3. Critical state and magnetization loss in multifilamentary superconducting wire solved through the commercial finite element code ANSYS

    Science.gov (United States)

    Farinon, S.; Fabbricatore, P.; Gömöry, F.

    2010-11-01

    The commercially available finite element code ANSYS has been adapted to solve the critical state of single strips and multifilamentary tapes. We studied a special algorithm which approaches the critical state by an iterative adjustment of the material resistivity. We then proved its validity by comparing the results obtained for a thin strip with the Brandt theory for the transport current and magnetization cases. The challenging calculation of the magnetization loss of a real multifilamentary BSCCO tape also showed the usefulness of our method. Finally, we developed several methods to enhance the speed of convergence, making the proposed approach quite competitive among existing ac loss simulation methods.
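
    The thin-strip reference solution mentioned above (the Brandt-Indenbom critical state) gives the sheet-current profile of a strip carrying a transport current I < Ic in closed form, which is what the iterative-resistivity FE result is compared against. A sketch evaluating that profile (the geometry and current values are illustrative assumptions):

```python
import numpy as np

a  = 1.0                 # strip half-width (m)
Kc = 1.0                 # critical sheet current density Jc*d (A/m)
Ic = 2.0 * a * Kc        # critical current of the strip
I  = 1.0                 # transport current, I < Ic

b = a * np.sqrt(1.0 - (I / Ic) ** 2)      # half-width of the flux-free core
x = np.linspace(-a, a, 4001)

# Brandt-Indenbom profile: saturated at Kc in the penetrated edge regions,
# arctan-shaped in the flux-free core |x| < b.
K = np.where(
    np.abs(x) < b,
    (2.0 * Kc / np.pi) * np.arctan(
        np.sqrt((a**2 - b**2) / np.maximum(b**2 - x**2, 1e-30))),
    Kc,
)

# Integrating the sheet current across the strip must recover I.
I_check = float(np.sum(0.5 * (K[1:] + K[:-1]) * np.diff(x)))
```

An iterative-resistivity FE solution is judged converged when its current profile and the resulting ac loss match curves of this kind.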

  4. NCEL: two dimensional finite element code for steady-state temperature distribution in seven rod-bundle

    International Nuclear Information System (INIS)

    Hrehor, M.

    1979-01-01

    The paper deals with an application of the finite element method to the heat transfer study in seven-pin models of an LMFBR fuel subassembly. The developed code NCEL solves the two-dimensional steady state heat conduction equation over the whole subassembly model cross-section and enables analysis of the thermal behaviour under both normal and accidental operational conditions, such as eccentricity of the central rod or full or partial (porous) blockage of some part of the cross-flow area. The heat removal is simulated by heat sinks in the coolant under a slug-flow approximation of the subchannels.

  5. Finite element circuit theory of the numerical code EDDYMULT for solving eddy current problems in a multi-torus system

    International Nuclear Information System (INIS)

    Nakamura, Yukiharu; Ozeki, Takahisa

    1986-07-01

    The finite element circuit theory is extended to the general eddy current problem in a multi-torus system, which consists of various torus conductors and axisymmetric coil systems. Numerical procedures are devised to avoid practical restrictions on computer storage and computing time, namely a reduction over eddy current eigenmodes to save storage and the introduction of shape functions into the double area integral of the mode coupling to save time. The numerical code EDDYMULT, based on this theory, has been developed for use in tokamak design, from the viewpoints of evaluating the electromagnetic loading on device components and analysing the control of the tokamak equilibrium. (author)

  6. Solution of 2D and 3D hexagonal geometry benchmark problems by using the finite element diffusion code DIFGEN

    International Nuclear Information System (INIS)

    Gado, J.

    1986-02-01

    The four-group, 2D and 3D hexagonal geometry HTGR benchmark problems and a 2D hexagonal geometry PWR (WWER) benchmark problem have been solved by using the finite element diffusion code DIFGEN. The hexagons (or hexagonal prisms) were subdivided into first order or second order triangles or quadrilaterals (or triangular or quadrilateral prisms). In the 2D HTGR case the number of inserted absorber rods was also varied (7, 6, 0 or 37 rods). The calculational results are in good agreement with the results of other calculations. The large systematic series of DIFGEN calculations has also given a quantitative picture of the convergence properties of various finite element modellings of hexagonal grids in DIFGEN. (orig.)

  7. User's manual for the FEHM application -- A finite-element heat- and mass-transfer code

    Energy Technology Data Exchange (ETDEWEB)

    Zyvoloski, G.A.; Robinson, B.A.; Dash, Z.V.; Trease, L.L.

    1997-07-01

    The use of this code is applicable to natural-state studies of geothermal systems and groundwater flow. A primary use of the FEHM application will be to assist in the understanding of flow fields and mass transport in the saturated and unsaturated zones below the proposed Yucca Mountain nuclear waste repository in Nevada. The equations of heat and mass transfer for multiphase flow in porous and permeable media are solved in the FEHM application by using the finite-element method. The permeability and porosity of the medium are allowed to depend on pressure and temperature. The code also has provisions for movable air and water phases and noncoupled tracers; that is, tracer solutions that do not affect the heat- and mass-transfer solutions. The tracers can be passive or reactive. The code can simulate two-dimensional, two-dimensional radial, or three-dimensional geometries. In fact, FEHM is capable of describing flow that is dominated in many areas by fracture and fault flow, including the inherently three-dimensional flow that results from permeation to and from faults and fractures. The code can handle coupled heat and mass-transfer effects, such as boiling, dryout, and condensation that can occur in the near-field region surrounding the potential repository and the natural convection that occurs through Yucca Mountain due to seasonal temperature changes. This report outlines the uses and capabilities of the FEHM application, initialization of code variables, restart procedures, and error processing. The report describes all the data files, the input data, including individual input records or parameters, and the various output files. The system interface is described, including the software environment and installation instructions.

  8. Interfacing VPSC with finite element codes. Demonstration of irradiation growth simulation in a cladding tube

    Energy Technology Data Exchange (ETDEWEB)

    Patra, Anirban [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tome, Carlos [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-03-23

    This Milestone report shows good progress in interfacing VPSC with the FE codes ABAQUS and MOOSE, to perform component-level simulations of irradiation-induced deformation in Zirconium alloys. In this preliminary application, we have performed an irradiation growth simulation in the quarter geometry of a cladding tube. We have benchmarked VPSC-ABAQUS and VPSC-MOOSE predictions with VPSC-SA predictions to verify the accuracy of the VPSCFE interface. Predictions from the FE simulations are in general agreement with VPSC-SA simulations and also with experimental trends.

  9. Systemic Architecture

    DEFF Research Database (Denmark)

    Poletto, Marco; Pasquero, Claudia

    -up or tactical design, behavioural space and the boundary of the natural and the artificial realms within the city and architecture. A new kind of "real-time world-city" is illustrated in the form of an operational design manual for the assemblage of proto-architectures, the incubation of proto-gardens...... and the coding of proto-interfaces. These prototypes of machinic architecture materialize as synthetic hybrids embedded with biological life (proto-gardens), computational power, behavioural responsiveness (cyber-gardens), spatial articulation (coMachines and fibrous structures), remote sensing (FUNclouds...

  10. ABAQUS-EPGEN: a general-purpose finite-element code. Volume 1. User's manual

    International Nuclear Information System (INIS)

    Hibbitt, H.D.; Karlsson, B.I.; Sorensen, E.P.

    1982-10-01

    This document is the User's Manual for ABAQUS/EPGEN, a general purpose finite element computer program, designed specifically to serve advanced structural analysis needs. The program contains very general libraries of elements, materials and analysis procedures, and is highly modular, so that complex combinations of features can be put together to model physical problems. The program is aimed at production analysis needs, and for this purpose aspects such as ease-of-use, reliability, flexibility and efficiency have received maximum attention. The input language is designed to make it straightforward to describe complicated models; the analysis procedures are highly automated with the program choosing time or load increments based on user supplied tolerances and controls; and the program offers a wide range of post-processing options for display of the analysis results

  11. A Reference Architecture for Provisioning of Tools as a Service: Meta-Model, Ontologies and Design Elements

    DEFF Research Database (Denmark)

    Chauhan, Muhammad Aufeef; Babar, Muhammad Ali; Sheng, Quan Z.

    2016-01-01

    Software Architecture (SA) plays a critical role in designing, developing and evolving cloud-based platforms that can be used to provision different types of services to consumers on demand. In this paper, we present a Reference Architecture (RA) for designing cloud-based Tools as a service SPACE...... (TSPACE) for provisioning a bundled suite of tools by following the Software as a Service (SaaS) model. The reference architecture has been designed by leveraging information structuring approaches and by using well-known architecture design principles and patterns. The RA has been documented using view...

  12. The computer code EURDYN - 1 M (release 1) for transient dynamic fluid-structure interaction. Pt.1: governing equations and finite element modelling

    International Nuclear Information System (INIS)

    Donea, J.; Fasoli-Stella, P.; Giuliani, S.; Halleux, J.P.; Jones, A.V.

    1980-01-01

    This report describes the governing equations and the finite element modelling used in the computer code EURDYN - 1 M. The code is a non-linear transient dynamic program for the analysis of coupled fluid-structure systems; it is designed for safety studies on LMFBR components (primary containment and fuel subassemblies)

  13. Monte Carlo method implemented in a finite element code with application to dynamic vacuum in particle accelerators

    CERN Document Server

    Garion, C

    2009-01-01

    Modern particle accelerators require UHV conditions during their operation. In the accelerating cavities, breakdowns can occur, releasing a large amount of gas into the vacuum chamber. To determine the pressure profile along the cavity as a function of time, the time-dependent behaviour of the gas has to be simulated. To do that, it is useful to apply an accurate three-dimensional method, such as Test Particle Monte Carlo. In this paper, a time-dependent Test Particle Monte Carlo method is used. It has been implemented in a finite element code, CASTEM. The principle is to track a sample of molecules over time. The complex geometry of the cavities can be created either in the FE code or in CAD software (CATIA in our case). The interface between the two programs, used to export the geometry from CATIA to CASTEM, is given. The algorithm of particle tracking for collisionless flow in the FE code is shown. Thermal outgassing, pumping surfaces and electron- and/or ion-stimulated desorption can all be generated as well as differ...
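The test-particle idea can be illustrated with a self-contained sketch for the simplest free-molecular benchmark: the transmission probability of a cylindrical tube with diffuse (cosine-law) wall re-emission. This is not the paper's CASTEM implementation, whose geometry handling is far more general:

```python
import math
import random

def tube_transmission(length=2.0, n_molecules=20000, seed=1):
    """Test-particle Monte Carlo estimate of the transmission probability of a
    cylindrical tube (radius 1) in collisionless (free molecular) flow."""
    rng = random.Random(seed)

    def cosine_direction(normal, t1, t2):
        # sample a direction with cosine-law distribution about 'normal'
        ct = math.sqrt(rng.random())
        st = math.sqrt(1.0 - ct * ct)
        psi = 2.0 * math.pi * rng.random()
        return tuple(ct * n + st * (math.cos(psi) * a + math.sin(psi) * b)
                     for n, a, b in zip(normal, t1, t2))

    transmitted = 0
    for _ in range(n_molecules):
        r = math.sqrt(rng.random())           # uniform over the inlet disc
        phi = 2.0 * math.pi * rng.random()
        x, y, z = r * math.cos(phi), r * math.sin(phi), 0.0
        dx, dy, dz = cosine_direction((0.0, 0.0, 1.0),
                                      (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
        for _ in range(10000):                # bounce cap for safety
            # distance to the cylindrical wall x^2 + y^2 = 1
            a = dx * dx + dy * dy
            if a > 1e-15:
                bq = 2.0 * (x * dx + y * dy)
                c = x * x + y * y - 1.0
                disc = max(bq * bq - 4.0 * a * c, 0.0)
                t_wall = (-bq + math.sqrt(disc)) / (2.0 * a)
            else:
                t_wall = math.inf
            # distance to the exit (z = length) or inlet (z = 0) plane
            t_plane = math.inf
            if dz > 0.0:
                t_plane = (length - z) / dz
            elif dz < 0.0:
                t_plane = -z / dz
            if t_plane <= t_wall:             # left through either end
                if dz > 0.0:
                    transmitted += 1
                break
            # move to the wall and re-emit diffusely about the inward normal
            x, y, z = x + t_wall * dx, y + t_wall * dy, z + t_wall * dz
            rr = math.hypot(x, y)
            x, y = x / rr, y / rr             # re-project onto the wall
            dx, dy, dz = cosine_direction((-x, -y, 0.0),
                                          (-y, x, 0.0), (0.0, 0.0, 1.0))
    return transmitted / n_molecules
```

For a tube whose length is twice its radius the estimate should land near the classical Clausing value of about 0.51.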

  14. Development of finite element code for the analysis of coupled thermo-hydro-mechanical behaviors of saturated-unsaturated medium

    International Nuclear Information System (INIS)

    Ohnishi, Y.; Shibata, H.; Kobayashi, A.

    1985-01-01

    A model is presented which describes the fully coupled thermo-hydro-mechanical behavior of a porous geologic medium. The mathematical formulation for the model utilizes the Biot theory for consolidation and the energy balance equation. The medium is in a condition of saturated-unsaturated flow, so free surfaces are taken into consideration in the model. The model, incorporated in a finite element numerical procedure, was implemented in a two-dimensional computer code. The code was developed under the assumptions that the medium is poro-elastic and in plane strain condition; water in the ground does not change its phase; and heat is transferred by conductive and convective flow. Analytical solutions pertaining to consolidation theory for soils and rocks, thermoelasticity for solids, and hydrothermal convection theory provided verification of the stress and fluid flow couplings, respectively, in the coupled model. Several types of problems are analyzed. One is a study of some of the effects of completely coupled thermo-hydro-mechanical behavior on the response of a saturated-unsaturated porous rock containing a buried heat source. Excavation of an underground opening which holds radioactive wastes at elevated temperatures is modeled and analyzed. The results show that the coupling phenomena can be estimated to some degree by the numerical procedure. The computer code has a powerful ability to analyze the complex nature of the repository.

  15. Dynamic analysis of aircraft impact using the linear elastic finite element codes FINEL, SAP and STARDYNE

    International Nuclear Information System (INIS)

    Lundsager, P.; Krenk, S.

    1975-08-01

    The static and dynamic response of a cylindrical/ spherical containment to a Boeing 720 impact is computed using 3 different linear elastic computer codes: FINEL, SAP and STARDYNE. Stress and displacement fields are shown together with time histories for a point in the impact zone. The main conclusions from this study are: - In this case the maximum dynamic load factors for stress and displacements were close to 1, but a static analysis alone is not fully sufficient. - More realistic load time histories should be considered. - The main effects seem to be local. The present study does not indicate general collapse from elastic stresses alone. - Further study of material properties at high rates is needed. (author)
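The dynamic load factor discussed in this study can be illustrated with a single-degree-of-freedom sketch: an undamped oscillator driven by a half-sine force pulse, with the DLF defined as peak dynamic deflection over static deflection. The parameter values are arbitrary placeholders, not the containment's:

```python
import math

def dlf_half_sine(omega_n=10.0, t_d=0.3, dt=1e-4):
    """Dynamic load factor (peak dynamic / static deflection) of an undamped
    single-degree-of-freedom oscillator under a half-sine force pulse,
    integrated with the central-difference scheme (unit mass, unit peak force)."""
    k = omega_n ** 2                           # stiffness for unit mass
    period = 2.0 * math.pi / omega_n
    n_steps = int((t_d + 4.0 * period) / dt)   # pulse plus free vibration
    u_prev = u = u_max = 0.0
    for i in range(n_steps):
        t = i * dt
        f = math.sin(math.pi * t / t_d) if t < t_d else 0.0
        u_next = 2.0 * u - u_prev + (f - k * u) * dt * dt
        u_prev, u = u, u_next
        u_max = max(u_max, abs(u))
    return u_max * k                           # u_static = F0 / k = 1 / k
```

Depending on the ratio of pulse duration to natural period, the half-sine DLF ranges from well below 1 (short pulses) up to about 1.8, which is why a load factor near 1, as reported above, still has to be checked dynamically.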

  16. STAT, GAPS, STRAIN, DRWDIM: a system of computer codes for analyzing HTGR fuel test element metrology data. User's manual

    Energy Technology Data Exchange (ETDEWEB)

    Saurwein, J.J.

    1977-08-01

    A system of computer codes has been developed to statistically reduce Peach Bottom fuel test element metrology data and to compare the material strains and fuel rod-fuel hole gaps computed from these data with HTGR design code predictions. The codes included in this system are STAT, STRAIN, GAPS, and DRWDIM. STAT statistically evaluates test element metrology data yielding fuel rod, fuel body, and sleeve irradiation-induced strains; fuel rod anisotropy; and additional data characterizing each analyzed fuel element. STRAIN compares test element fuel rod and fuel body irradiation-induced strains computed from metrology data with the corresponding design code predictions. GAPS compares test element fuel rod, fuel hole heat transfer gaps computed from metrology data with the corresponding design code predictions. DRWDIM plots the measured and predicted gaps and strains. Although specifically developed to expedite the analysis of Peach Bottom fuel test elements, this system can be applied, without extensive modification, to the analysis of Fort St. Vrain or other HTGR-type fuel test elements.

  17. The finite element structural analysis code SAP IV conversion from CDC to IBM

    International Nuclear Information System (INIS)

    Harrop, L.P.

    1977-02-01

    SAP IV is a general three dimensional, linear, static and dynamic finite element structural analysis program. The program, which was obtained from the Earthquake Engineering Research Center, University of California, Berkeley, was written in FORTRAN for a CDC 6400. Its main use was anticipated to be the seismic analysis of reactor structures. SAP IV may also prove useful for fracture mechanics studies as well as the usual elastic stress analysis of structures. A brief description of SAP IV and a more detailed account of the FORTRAN conversion required to make SAP IV run successfully on the UKAEA Harwell IBM 370/168 are given. (author)

  18. Nonlinear dynamics of laser systems with elements of a chaos: Advanced computational code

    Science.gov (United States)

    Buyadzhi, V. V.; Glushkov, A. V.; Khetselius, O. Yu; Kuznetsova, A. A.; Buyadzhi, A. A.; Prepelitsa, G. P.; Ternovsky, V. B.

    2017-10-01

    A general, uniform chaos-geometric computational approach to the analysis, modelling and prediction of the non-linear dynamics of quantum and laser systems (laser and quantum generator systems, etc.) with elements of deterministic chaos is briefly presented. The approach is based on advanced generalized techniques such as wavelet analysis, multi-fractal formalism, the mutual information approach, correlation integral analysis, the false nearest neighbour algorithm, Lyapunov exponent analysis, the surrogate data method, prediction models, etc. Numerical data are presented for the first time on the topological and dynamical invariants (in particular, the correlation, embedding and Kaplan-Yorke dimensions, the Lyapunov exponents, Kolmogorov entropy and other parameters) for laser system (the semiconductor GaAs/GaAlAs laser with a retarded feedback) dynamics in chaotic and hyperchaotic regimes.
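As a minimal illustration of one of the listed invariants, the largest Lyapunov exponent of a one-dimensional chaotic map can be computed directly as the orbit average of the log-derivative. The logistic map here merely stands in for the laser dynamics, which require attractor-reconstruction methods:

```python
import math

def lyapunov_logistic(r=4.0, n=50000, x0=0.2, burn_in=1000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x):
    the orbit average of log|f'(x)| with f'(x) = r*(1 - 2x)."""
    x = x0
    for _ in range(burn_in):               # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        # clamp the derivative away from zero to guard log(0)
        total += math.log(max(abs(r * (1.0 - 2.0 * x)), 1e-300))
        x = r * x * (1.0 - x)
    return total / n
```

For r = 4 the exact value is ln 2 ≈ 0.693, a useful convergence check; a positive exponent is the signature of the chaotic regimes reported above.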

  19. The Use of the STAGS Finite Element Code in Stitched Structures Development

    Science.gov (United States)

    Jegley, Dawn C.; Lovejoy, Andrew E.

    2014-01-01

    In the last 30 years NASA has worked in collaboration with industry to develop enabling technologies needed to make aircraft more fuel-efficient and more affordable. The focus on the airframe has been to reduce weight, improve damage tolerance and better understand structural behavior under realistic flight and ground loading conditions. Stitched structure is a technology that can address the weight savings, cost reduction, and damage tolerance goals, but only if it is supported by accurate analytical techniques. Development of stitched technology began in the 1990's as a partnership between NASA and Boeing (McDonnell Douglas at the time) under the Advanced Composites Technology Program and has continued under various titles and programs and into the Environmentally Responsible Aviation Project today. These programs contained development efforts involving manufacturing development, design, detailed analysis, and testing. Each phase of development, from coupons to large aircraft components was supported by detailed analysis to prove that the behavior of these structures was well-understood and predictable. The Structural Analysis of General Shells (STAGS) computer code was a critical tool used in the development of many stitched structures. As a key developer of STAGS, Charles Rankin's contribution to the programs was quite significant. Key features of STAGS used in these analyses and discussed in this paper include its accurate nonlinear and post-buckling capabilities, its ability to predict damage growth, and the use of Lagrange constraints and follower forces.

  20. Software architecture 2

    CERN Document Server

    Oussalah, Mourad Chabanne

    2014-01-01

    Over the past 20 years, software architectures have significantly contributed to the development of complex and distributed systems. Nowadays, it is recognized that one of the critical problems in the design and development of any complex software system is its architecture, i.e. the organization of its architectural elements. Software Architecture presents the software architecture paradigms based on objects, components, services and models, as well as the various architectural techniques and methods, the analysis of architectural qualities, models of representation of architectural templa

  1. Software architecture 1

    CERN Document Server

    Oussalah, Mourad Chabane

    2014-01-01

    Over the past 20 years, software architectures have significantly contributed to the development of complex and distributed systems. Nowadays, it is recognized that one of the critical problems in the design and development of any complex software system is its architecture, i.e. the organization of its architectural elements. Software Architecture presents the software architecture paradigms based on objects, components, services and models, as well as the various architectural techniques and methods, the analysis of architectural qualities, models of representation of architectural template

  2. HEFF---A user's manual and guide for the HEFF code for thermal-mechanical analysis using the boundary-element method

    International Nuclear Information System (INIS)

    St John, C.M.; Sanjeevan, K.

    1991-12-01

    The HEFF Code combines a simple boundary-element method of stress analysis with closed form solutions for constant or exponentially decaying heat sources in an infinite elastic body to obtain an approximate method for the analysis of underground excavations in a rock mass with heat generation. This manual describes the theoretical basis for the code, the code structure, model preparation, and the steps taken to assure that the code correctly performs its intended functions. The material contained within the report addresses the Software Quality Assurance Requirements for the Yucca Mountain Site Characterization Project. 13 refs., 26 figs., 14 tabs
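The superposition idea behind HEFF's thermal part, summing infinite-medium kernels for an exponentially decaying heat source, can be sketched numerically. This is a hedged illustration with placeholder material properties; HEFF itself uses closed-form solutions and avoids this quadrature:

```python
import math

def decaying_point_source_temp(r, t, q0=1000.0, lam=7.8e-10, k=3.0,
                               alpha=1.1e-6, n=4000):
    """Temperature rise (K) at radius r (m) and time t (s) from a point heat
    source with exponentially decaying power Q(t) = q0*exp(-lam*t) W in an
    infinite medium, by numerically superposing instantaneous point-source
    kernels (midpoint rule over emission time)."""
    rho_c = k / alpha                     # volumetric heat capacity, J/(m^3 K)
    dt = t / n
    temp = 0.0
    for i in range(n):
        t_emit = (i + 0.5) * dt           # emission time
        tau = t - t_emit                  # elapsed time since emission
        q = q0 * math.exp(-lam * t_emit)
        kernel = (math.exp(-r * r / (4.0 * alpha * tau))
                  / (rho_c * (4.0 * math.pi * alpha * tau) ** 1.5))
        temp += q * dt * kernel
    return temp
```

The result stays below the steady constant-power bound q0/(4*pi*k*r) and falls off with radius, as the exact decaying-source solution must.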

  3. Ethical codes. Fig leaf argument, ballast or cultural element for radiation protection?; Ethik-Codes. Feigenblatt, Ballast oder Kulturelement fuer den Strahlenschutz?

    Energy Technology Data Exchange (ETDEWEB)

    Gellermann, Rainer [Nuclear Control and Consulting GmbH, Braunschweig (Germany)

    2014-07-01

    The international radiation protection association (IRPA) adopted a Code of Ethics in May 2004 to enable its members to maintain an adequate professional level of ethical conduct. Based on this code of ethics, the professional body for radiation protection (Fachverband fuer Strahlenschutz) developed its own ethical code and adopted it in 2005.

  4. Structure-aided prediction of mammalian transcription factor complexes in conserved non-coding elements

    KAUST Repository

    Guturu, H.

    2013-11-11

    Mapping the DNA-binding preferences of transcription factor (TF) complexes is critical for deciphering the functions of cis-regulatory elements. Here, we developed a computational method that compares co-occurring motif spacings in conserved versus unconserved regions of the human genome to detect evolutionarily constrained binding sites of rigid TF complexes. Structural data were used to estimate TF complex physical plausibility, explore overlapping motif arrangements seldom tackled by non-structure-aware methods, and generate and analyse three-dimensional models of the predicted complexes bound to DNA. Using this approach, we predicted 422 physically realistic TF complex motifs at 18% false discovery rate, the majority of which (326, 77%) contain some sequence overlap between binding sites. The set of mostly novel complexes is enriched in known composite motifs, predictive of binding site configurations in TF-TF-DNA crystal structures, and supported by ChIP-seq datasets. Structural modelling revealed three cooperativity mechanisms: direct protein-protein interactions, potentially indirect interactions and 'through-DNA' interactions. Indeed, 38% of the predicted complexes were found to contain four or more bases in which TF pairs appear to synergize through overlapping binding to the same DNA base pairs in opposite grooves or strands. Our TF complex and associated binding site predictions are available as a web resource at http://bejerano.stanford.edu/complex.
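The core comparison, counting spacings of co-occurring motifs in conserved versus background regions, can be caricatured in a few lines. This is a hypothetical toy scoring with an ad hoc enrichment threshold, not the paper's statistical method:

```python
from collections import Counter

def spacing_counts(pairs):
    """Count spacings (in bp) between paired motif hit positions."""
    return Counter(b - a for a, b in pairs)

def constrained_spacings(conserved_pairs, background_pairs, min_ratio=3.0):
    """Flag spacings over-represented among conserved-region motif pairs
    relative to background (pseudocount of 1 avoids division by zero)."""
    cons = spacing_counts(conserved_pairs)
    bg = spacing_counts(background_pairs)
    total_c = sum(cons.values())
    total_b = sum(bg.values())
    flagged = {}
    for spacing, count in cons.items():
        enrichment = ((count / total_c)
                      / ((bg.get(spacing, 0) + 1) / (total_b + 1)))
        if enrichment >= min_ratio:
            flagged[spacing] = enrichment
    return flagged
```

A spacing that recurs in conserved regions far more often than background suggests a rigid, evolutionarily constrained TF pair, which is the signal the paper then filters by structural plausibility.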

  5. Structure-aided prediction of mammalian transcription factor complexes in conserved non-coding elements

    KAUST Repository

    Guturu, H.; Doxey, A. C.; Wenger, A. M.; Bejerano, G.

    2013-01-01

    Mapping the DNA-binding preferences of transcription factor (TF) complexes is critical for deciphering the functions of cis-regulatory elements. Here, we developed a computational method that compares co-occurring motif spacings in conserved versus unconserved regions of the human genome to detect evolutionarily constrained binding sites of rigid TF complexes. Structural data were used to estimate TF complex physical plausibility, explore overlapping motif arrangements seldom tackled by non-structure-aware methods, and generate and analyse three-dimensional models of the predicted complexes bound to DNA. Using this approach, we predicted 422 physically realistic TF complex motifs at 18% false discovery rate, the majority of which (326, 77%) contain some sequence overlap between binding sites. The set of mostly novel complexes is enriched in known composite motifs, predictive of binding site configurations in TF-TF-DNA crystal structures, and supported by ChIP-seq datasets. Structural modelling revealed three cooperativity mechanisms: direct protein-protein interactions, potentially indirect interactions and 'through-DNA' interactions. Indeed, 38% of the predicted complexes were found to contain four or more bases in which TF pairs appear to synergize through overlapping binding to the same DNA base pairs in opposite grooves or strands. Our TF complex and associated binding site predictions are available as a web resource at http://bejerano.stanford.edu/complex.

  6. Architectural freedom and industrialised architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    to the building physics problems a new industrialized period has started based on lightweight elements basically made of wooden structures, faced with different suitable materials meant for individual expression for the specific housing area. It is the purpose of this article to widen up the different design...... to this systematic thinking of the building technique we get a diverse and functional architecture. Creating a new and clearer story telling about new and smart system based thinking behind the architectural expression....

  7. Summary Report for ASC L2 Milestone #4782: Assess Newly Emerging Programming and Memory Models for Advanced Architectures on Integrated Codes

    Energy Technology Data Exchange (ETDEWEB)

    Neely, J. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hornung, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Black, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Robinson, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-09-29

    This document serves as a detailed companion to the powerpoint slides presented as part of the ASC L2 milestone review for Integrated Codes milestone #4782 titled “Assess Newly Emerging Programming and Memory Models for Advanced Architectures on Integrated Codes”, due on 9/30/2014, and presented for formal program review on 9/12/2014. The program review committee is represented by Mike Zika (A Program Project Lead for Kull), Brian Pudliner (B Program Project Lead for Ares), Scott Futral (DEG Group Lead in LC), and Mike Glass (Sierra Project Lead at Sandia). This document, along with the presentation materials, and a letter of completion signed by the review committee will act as proof of completion for this milestone.

  8. Summary of the models and methods for the FEHM application - a finite-element heat- and mass-transfer code

    International Nuclear Information System (INIS)

    Zyvoloski, G.A.; Robinson, B.A.; Dash, Z.V.; Trease, L.L.

    1997-07-01

    The mathematical models and numerical methods employed by the FEHM application, a finite-element heat- and mass-transfer computer code that can simulate nonisothermal multiphase multi-component flow in porous media, are described. The use of this code is applicable to natural-state studies of geothermal systems and groundwater flow. A primary use of the FEHM application will be to assist in the understanding of flow fields and mass transport in the saturated and unsaturated zones below the proposed Yucca Mountain nuclear waste repository in Nevada. The component models of FEHM are discussed. The first major component, Flow- and Energy-Transport Equations, deals with heat conduction; heat and mass transfer with pressure- and temperature-dependent properties, relative permeabilities and capillary pressures; isothermal air-water transport; and heat and mass transfer with noncondensible gas. The second component, Dual-Porosity and Double-Porosity/Double-Permeability Formulation, is designed for problems dominated by fracture flow. Another component, The Solute-Transport Models, includes both a reactive-transport model that simulates transport of multiple solutes with chemical reaction and a particle-tracking model. Finally, the component, Constitutive Relationships, deals with pressure- and temperature-dependent fluid/air/gas properties, relative permeabilities and capillary pressures, stress dependencies, and reactive and sorbing solutes. Each of these components is discussed in detail, including purpose, assumptions and limitations, derivation, applications, numerical method type, derivation of numerical model, location in the FEHM code flow, numerical stability and accuracy, and alternative approaches to modeling the component

  9. Physical analysis and modelling of aerosols transport. implementation in a finite elements code. Experimental validation in laminar and turbulent flows

    International Nuclear Information System (INIS)

    Armand, Patrick

    1995-01-01

    The aim of this work is to couple fluid mechanics and aerosol physics. In the first part, an order of magnitude analysis of the dynamics of a particle embedded in a non-uniform, unsteady flow is carried out. Flow approximations around the inclusion are described, and the corresponding aerodynamic drag formulae are expressed. The possible situations arising from the problem data are extensively listed. In the second part, turbulent particle transport is studied. The Eulerian approach, which is particularly well suited to industrial codes, is preferred over Lagrangian methods. We choose the two-fluid formalism, in which carrier gas-particle slip is taken into account. Turbulence is modelled with a k-epsilon model modulated by the action of the inclusions on the flow. The model is implemented in a finite element code. Finally, in the third part, the modelling is validated in laminar and turbulent cases. We compare simulations with various experiments (a settling battery, inertial impaction in a bend, jets loaded with glass bead particles) taken from the literature or performed by ourselves at the laboratory. The results agree closely, which is encouraging for future use of the particle transport model and the associated software. (author) [fr
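Two quantities underpinning such aerosol transport models, the Stokes settling velocity and the particle relaxation time that governs carrier gas-particle slip, reduce to one-line formulas in the small-Reynolds-number limit. These are standard textbook relations, not the thesis's full two-fluid model:

```python
def stokes_settling_velocity(d_p, rho_p, rho_f=1.2, mu=1.8e-5, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in air under Stokes
    drag (particle Reynolds number << 1): v = (rho_p - rho_f)*g*d^2 / (18*mu)."""
    return (rho_p - rho_f) * g * d_p ** 2 / (18.0 * mu)

def relaxation_time(d_p, rho_p, mu=1.8e-5):
    """Particle momentum relaxation time tau = rho_p*d^2 / (18*mu): how quickly
    a particle adjusts to changes in the carrier-gas velocity, i.e. the slip
    scale that the two-fluid formalism accounts for."""
    return rho_p * d_p ** 2 / (18.0 * mu)
```

For a 10 µm unit-density particle these give a settling velocity of roughly 3 mm/s and a relaxation time of roughly 0.3 ms, the kind of numbers a settling-battery experiment probes.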

  10. Creating a Structurally Realistic Finite Element Geometric Model of a Cardiomyocyte to Study the Role of Cellular Architecture in Cardiomyocyte Systems Biology.

    Science.gov (United States)

    Rajagopal, Vijay; Bass, Gregory; Ghosh, Shouryadipta; Hunt, Hilary; Walker, Cameron; Hanssen, Eric; Crampin, Edmund; Soeller, Christian

    2018-04-18

    With the advent of three-dimensional (3D) imaging technologies such as electron tomography, serial-block-face scanning electron microscopy and confocal microscopy, the scientific community has unprecedented access to large datasets at sub-micrometer resolution that characterize the architectural remodeling that accompanies changes in cardiomyocyte function in health and disease. However, these datasets have been under-utilized for investigating the role of cellular architecture remodeling in cardiomyocyte function. The purpose of this protocol is to outline how to create an accurate finite element model of a cardiomyocyte using high resolution electron microscopy and confocal microscopy images. A detailed and accurate model of cellular architecture has significant potential to provide new insights into cardiomyocyte biology, more than experiments alone can garner. The power of this method lies in its ability to computationally fuse information from two disparate imaging modalities of cardiomyocyte ultrastructure to develop one unified and detailed model of the cardiomyocyte. This protocol outlines steps to integrate electron tomography and confocal microscopy images of adult male Wistar (name for a specific breed of albino rat) rat cardiomyocytes to develop a half-sarcomere finite element model of the cardiomyocyte. The procedure generates a 3D finite element model that contains an accurate, high-resolution depiction (on the order of ~35 nm) of the distribution of mitochondria, myofibrils and ryanodine receptor clusters that release the necessary calcium for cardiomyocyte contraction from the sarcoplasmic reticular network (SR) into the myofibril and cytosolic compartment. The model generated here as an illustration does not incorporate details of the transverse-tubule architecture or the sarcoplasmic reticular network and is therefore a minimal model of the cardiomyocyte. Nevertheless, the model can already be applied in simulation-based investigations into the

  11. Properties of non-coding DNA and identification of putative cis-regulatory elements in Theileria parva

    Directory of Open Access Journals (Sweden)

    Guo Xiang

    2008-12-01

    Full Text Available Abstract Background Parasites in the genus Theileria cause lymphoproliferative diseases in cattle, resulting in enormous socio-economic losses. The availability of the genome sequences and annotation for T. parva and T. annulata has facilitated the study of parasite biology and their relationship with host cell transformation and tropism. However, the mechanism of transcriptional regulation in this genus, which may be key to understanding fundamental aspects of its parasitology, remains poorly understood. In this study, we analyze the evolution of non-coding sequences in the Theileria genome and identify conserved sequence elements that may be involved in gene regulation of these parasitic species. Results Intergenic regions and introns in Theileria are short, and their length distributions are considerably right-skewed. Intergenic regions flanked by genes in 5'-5' orientation tend to be longer and slightly more AT-rich than those flanked by two stop codons; intergenic regions flanked by genes in 3'-5' orientation have intermediate values of length and AT composition. Intron position is negatively correlated with intron length, and positively correlated with GC content. Using stringent criteria, we identified a set of high-quality orthologous non-coding sequences between T. parva and T. annulata, and determined the distribution of selective constraints across regions, which are shown to be higher close to translation start sites. A positive correlation between constraint and length in both intergenic regions and introns suggests a tight control over length expansion of non-coding regions. Genome-wide searches for functional elements revealed several conserved motifs in intergenic regions of Theileria genomes. Two such motifs are preferentially located within the first 60 base pairs upstream of transcription start sites in T. parva, are preferentially associated with specific protein functional categories, and have significant similarity to know
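Two of the summary statistics used above, GC content and the right-skew of a length distribution, are simple to compute. A minimal sketch with illustrative helper functions, not the study's pipeline:

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence (case-insensitive)."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def skewness(xs):
    """Fisher moment skewness of a sample; positive values indicate a long
    right tail, as reported for Theileria intergenic-region lengths."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5
```

Applied to intergenic-region lengths, a clearly positive skewness quantifies the "considerably right-skewed" distributions the abstract describes.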

  12. Type VI secretion systems of human gut Bacteroidales segregate into three genetic architectures, two of which are contained on mobile genetic elements.

    Science.gov (United States)

    Coyne, Michael J; Roelofs, Kevin G; Comstock, Laurie E

    2016-01-15

    Type VI secretion systems (T6SSs) are contact-dependent antagonistic systems employed by Gram negative bacteria to intoxicate other bacteria or eukaryotic cells. T6SSs were recently discovered in a few Bacteroidetes strains, thereby extending the presence of these systems beyond Proteobacteria. The present study was designed to analyze in a global manner the diversity, abundance, and properties of T6SSs in the Bacteroidales, the most predominant Gram negative bacterial order of the human gut. By performing extensive bioinformatics analyses and creating hidden Markov models for Bacteroidales Tss proteins, we identified 130 T6SS loci in 205 human gut Bacteroidales genomes. Of the 13 core T6SS proteins of Proteobacteria, human gut Bacteroidales T6SS loci encode orthologs of nine, and an additional five other core proteins not present in Proteobacterial T6SSs. The Bacteroidales T6SS loci segregate into three distinct genetic architectures with extensive DNA identity between loci of a given genetic architecture. We found that divergent DNA regions of a genetic architecture encode numerous types of effector and immunity proteins and likely include new classes of these proteins. The T6SS loci of genetic architecture 1 are contained on highly similar integrative conjugative elements (ICEs), as are the T6SS loci of genetic architecture 2, whereas the T6SS loci of genetic architecture 3 are not and are confined to Bacteroides fragilis. Using collections of co-resident Bacteroidales strains from human subjects, we provide evidence for the transfer of genetic architecture 1 T6SS loci among co-resident Bacteroidales species in the human gut. However, we also found that established ecosystems can harbor strains with distinct T6SSs of all genetic architectures. This is the first study to comprehensively analyze the presence and diversity of T6SS loci within an order of bacteria and to analyze T6SSs of bacteria from a natural community. These studies demonstrate that more than

  13. Software requirements, design, and verification and validation for the FEHM application - a finite-element heat- and mass-transfer code

    International Nuclear Information System (INIS)

    Dash, Z.V.; Robinson, B.A.; Zyvoloski, G.A.

    1997-07-01

    The requirements, design, and verification and validation of the software used in the FEHM application, a finite-element heat- and mass-transfer computer code that can simulate nonisothermal multiphase multicomponent flow in porous media, are described. The test of the DOE Code Comparison Project, Problem Five, Case A, which verifies that FEHM has correctly implemented heat and mass transfer and phase partitioning, is also covered.

  14. Validation of finite element code DELFIN by means of the zero power experiences at the nuclear power plant of Atucha I

    International Nuclear Information System (INIS)

    Grant, C.R.

    1996-01-01

    Code DELFIN, developed at CNEA, treats the spatial discretization using heterogeneous finite elements, allowing a correct treatment of the continuity of fluxes and currents among elements and a more realistic representation of the hexagonal lattice of the reactor. It can be used for fuel management calculations, xenon oscillation and spatial kinetics. Using the HUEMUL code for cell calculation (which uses a generalized two-dimensional collision probability theory and has the WIMS library incorporated in a data base), the zero power experiments performed in 1974 were calculated. (author). 8 refs., 9 figs., 3 tabs

  15. Computer modelling of the WWER fuel elements under high burnup conditions by the computer codes PIN-W and RODQ2D

    International Nuclear Information System (INIS)

    Valach, M.; Zymak, J.; Svoboda, R.

    1997-01-01

    This paper presents the development status of the computer codes for modelling the thermomechanical behavior of WWER fuel elements under high burnup conditions at the Nuclear Research Institute Rez. The emphasis is on the analysis of the results of the parametric calculations performed with the programmes PIN-W and RODQ2D, rather than on their detailed theoretical description. Several new optional correlations for the UO2 thermal conductivity, with the degradation effect caused by burnup, were implemented into both codes. Examples of the calculations performed document the differences between the previous and new versions of both programmes. Some recommendations for further development of the codes are given in the conclusion. (author). 6 refs, 9 figs

  16. Computer modelling of the WWER fuel elements under high burnup conditions by the computer codes PIN-W and RODQ2D

    Energy Technology Data Exchange (ETDEWEB)

    Valach, M; Zymak, J; Svoboda, R [Nuclear Research Inst. Rez plc, Rez (Czech Republic)

    1997-08-01

    This paper presents the development status of the computer codes for modelling the thermomechanical behavior of WWER fuel elements under high burnup conditions at the Nuclear Research Institute Rez. The emphasis is on the analysis of the results of the parametric calculations performed with the programmes PIN-W and RODQ2D, rather than on their detailed theoretical description. Several new optional correlations for the UO2 thermal conductivity, with the degradation effect caused by burnup, were implemented into both codes. Examples of the calculations performed document the differences between the previous and new versions of both programmes. Some recommendations for further development of the codes are given in the conclusion. (author). 6 refs, 9 figs.

  17. Changes in cis-regulatory elements of a key floral regulator are associated with divergence of inflorescence architectures

    NARCIS (Netherlands)

    Kusters, E.; Della Pina, S.; Castel, R.; Souer, E.; Koes, R.

    2015-01-01

    Higher plant species diverged extensively with regard to the moment (flowering time) and position (inflorescence architecture) at which flowers are formed. This seems largely caused by variation in the expression patterns of conserved genes that specify floral meristem identity (FMI), rather than

  18. Changes in cis-regulatory elements of a key floral regulator are associated with divergence of inflorescence architectures.

    NARCIS (Netherlands)

    Kusters, E.; Della Pina, S.; Castel, R.; Souer, E.J.; Koes, R.E.

    2015-01-01

    Higher plant species diverged extensively with regard to the moment (flowering time) and position (inflorescence architecture) at which flowers are formed. This seems largely caused by variation in the expression patterns of conserved genes that specify floral meristem identity (FMI), rather than

  19. Architectural freedom and industrialised architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    Architectural freedom and industrialized architecture. Inge Vestergaard, Associate Professor, Cand. Arch. Aarhus School of Architecture, Denmark Noerreport 20, 8000 Aarhus C Telephone +45 89 36 0000 E-mail inge.vestergaard@aarch.dk Based on the repetitive architecture from the "building boom" 1960...... customization, telling exactly the revitalized story about the change to a contemporary sustainable and better performing expression in direct relation to the given context. Through the last couple of years we have in Denmark been focusing on a more sustainable and low energy building technique, which also include...... to the building physics problems a new industrialized period has started based on lightweight elements basically made of wooden structures, faced with different suitable materials meant for individual expression for the specific housing area. It is the purpose of this article to widen up the different design...

  20. Cognitive Architectures for Multimedia Learning

    Science.gov (United States)

    Reed, Stephen K.

    2006-01-01

    This article provides a tutorial overview of cognitive architectures that can form a theoretical foundation for designing multimedia instruction. Cognitive architectures include a description of memory stores, memory codes, and cognitive operations. Architectures that are relevant to multimedia learning include Paivio's dual coding theory,…

  1. Distribution Pattern of Fe, Sr, Zr and Ca Elements as Particle Size Function in the Code River Sediments from Upstream to Downstream

    International Nuclear Information System (INIS)

    Sri Murniasih; Muzakky

    2007-01-01

    The analysis of Fe, Sr, Zr and Ca element concentrations in granular sediment from the upstream to the downstream reaches of the Code river has been carried out. The aim of this research is to determine the influence of particle size on the concentrations of the Fe, Sr, Zr and Ca elements in the Code river sediments from upstream to downstream, and their distribution pattern. The instrument used was X-ray fluorescence with a Si(Li) detector. The analysis results show that Fe and Sr are found predominantly in the 150 - 90 μm particle size fraction, while Zr and Ca are found predominantly in the < 90 μm fraction. The distribution pattern of the Fe, Sr, Zr and Ca elements in the Code river sediments tends to increase relatively from upstream to downstream, following its conductivity. The concentrations of the Fe, Sr, Zr and Ca elements are 1.49 ± 0.03 % - 5.93 ± 0.02 %; 118.20 ± 10.73 ppm - 468.21 ± 20.36 ppm; 19.81 ± 0.86 ppm - 76.36 ± 3.02 ppm and 3.22 ± 0.25 % - 11.40 ± 0.31 %, respectively. (author)

  2. Distributed Video Coding for Multiview and Video-plus-depth Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo

    The interest in Distributed Video Coding (DVC) systems has grown considerably in the academic world in recent years. With DVC the correlation between frames is exploited at the decoder (joint decoding). The encoder codes the frame independently, performing relatively simple operations. Therefore......, with DVC the complexity is shifted from encoder to decoder, making the coding architecture a viable solution for encoders with limited resources. DVC may empower new applications which can benefit from this reversed coding architecture. Multiview Distributed Video Coding (M-DVC) is the application...... of the to-be-decoded frame. Another key element is the Residual estimation, indicating the reliability of the SI, which is used to calculate the parameters of the correlation noise model between SI and original frame. In this thesis new methods for Inter-camera SI generation are analyzed in the Stereo...

  3. Major Damage as the Element of Objective Part of Corpus Delicti Provided for by Article 180 of the Criminal Code of the Russian Federation

    Directory of Open Access Journals (Sweden)

    Zhaivoronok A. V.

    2015-01-01

    The article considers different approaches to the understanding of the objective element of the crime provided for by article 180 of the Criminal Code of the Russian Federation (illegal use of a trademark), as well as the issues of law enforcement of the norm under study in regard to major damage

  4. Beyond C, H, O, and N! Analysis of the elemental composition of U.S. FDA approved drug architectures.

    Science.gov (United States)

    Smith, Brandon R; Eastman, Candice M; Njardarson, Jon T

    2014-12-11

    The diversity of elements among U.S. Food and Drug Administration (FDA) approved pharmaceuticals is analyzed and reported, with a focus on atoms other than carbon, hydrogen, oxygen, and nitrogen. Our analysis reveals that sulfur, chlorine, fluorine, and phosphorus represent about 90% of elemental substitutions, with sulfur being the fifth most used element, followed closely by chlorine, then fluorine, and finally phosphorus in eighth place. The remaining 10% of substitutions are represented by 16 other elements, of which bromine, iodine, and iron occur most frequently. The most detailed parts of our analysis are focused on chlorinated drugs as a function of approval date, disease condition, chlorine attachment, and structure. To better aid our chlorine drug analyses, a new poster showcasing the structures of chlorinated pharmaceuticals was created specifically for this study. Phosphorus-, bromine-, and iodine-containing drugs are analyzed closely as well, followed by a discussion of other elements.

  5. Divergent evolutionary rates in vertebrate and mammalian specific conserved non-coding elements (CNEs) in echolocating mammals.

    Science.gov (United States)

    Davies, Kalina T J; Tsagkogeorga, Georgia; Rossiter, Stephen J

    2014-12-19

    The majority of DNA contained within vertebrate genomes is non-coding, with a certain proportion of this thought to play regulatory roles during development. Conserved Non-coding Elements (CNEs) are an abundant group of putative regulatory sequences that are highly conserved across divergent groups and thus assumed to be under strong selective constraint. Many CNEs may contain regulatory factor binding sites, and their frequent spatial association with key developmental genes - such as those regulating sensory system development - suggests crucial roles in regulating gene expression and cellular patterning. Yet surprisingly little is known about the molecular evolution of CNEs across diverse mammalian taxa or their role in specific phenotypic adaptations. We examined 3,110 vertebrate-specific and ~82,000 mammalian-specific CNEs across 19 and 9 mammalian orders respectively, and tested for changes in the rate of evolution of CNEs located in the proximity of genes underlying the development or functioning of auditory systems. As we focused on CNEs putatively associated with genes underlying the development/functioning of auditory systems, we incorporated echolocating taxa in our dataset because of their highly specialised and derived auditory systems. Phylogenetic reconstructions of concatenated CNEs broadly recovered accepted mammal relationships despite high levels of sequence conservation. We found that CNE substitution rates were highest in rodents and lowest in primates, consistent with previous findings. Comparisons of CNE substitution rates from several genomic regions containing genes linked to auditory system development and hearing revealed differences between echolocating and non-echolocating taxa. Wider taxonomic sampling of four CNEs associated with the homeobox genes Hmx2 and Hmx3 - which are required for inner ear development - revealed family-wise variation across diverse bat species. Specifically within one family of echolocating bats that utilise

  6. Toward Measures for Software Architectures

    National Research Council Canada - National Science Library

    Chastek, Gary; Ferguson, Robert

    2006-01-01

    .... Defining these architectural measures is very difficult. The software architecture deeply affects subsequent development and project management decisions, such as the breakdown of the coding tasks and the definition of the development increments...

  7. A benchmark comparison of the Canadian Supercritical Water-Cooled Reactor (SCWR) 64-element fuel lattice cell parameters using various computer codes

    International Nuclear Information System (INIS)

    Sharpe, J.; Salaun, F.; Hummel, D.; Moghrabi, A.; Nowak, M.; Pencer, J.; Novog, D.; Buijs, A.

    2015-01-01

    Discrepancies in key lattice physics parameters have been observed between various deterministic (e.g. DRAGON and WIMS-AECL) and stochastic (MCNP, KENO) neutron transport codes in modeling previous versions of the Canadian SCWR lattice cell. Inconsistencies in these parameters have also been observed when using different nuclear data libraries. In this work, the predictions of k∞, various reactivity coefficients, and relative ring-averaged pin powers have been re-evaluated using these codes and libraries with the most recent 64-element fuel assembly geometry. A benchmark problem has been defined to quantify the dissimilarities between code results for a number of responses along the fuel channel under prescribed hot full power (HFP), hot zero power (HZP) and cold zero power (CZP) conditions and at several fuel burnups (0, 25 and 50 MW·d·kg⁻¹ [HM]). Results from deterministic (TRITON, DRAGON) and stochastic codes (MCNP6, KENO V.a and KENO-VI) are presented. (author)

  8. Design of a Multi-Spectrum CANDU-based Reactor, MSCR, with 37-element fuel bundles using SERPENT code

    International Nuclear Information System (INIS)

    Hussein, M.S.; Bonin, H.W.; Lewis, B.J.; Chan, P.

    2015-01-01

    The burning of highly-enriched uranium and plutonium from dismantled nuclear warhead material in new-design nuclear power plants represents an important step towards nonproliferation. Blending this highly enriched uranium and plutonium with uranium dioxide from the spent fuel of CANDU reactors, or mixing it with depleted uranium, would take a very long time to dispose of this material. Consequently, considering that more efficient transmutation of actinides occurs in fast neutron reactors, a novel Multi-Spectrum CANDU Reactor has been designed on the basis of the CANDU6 reactor with two concentric regions. The simulations of the MSCR were carried out using the SERPENT code. The inner, fast neutron spectrum core is fuelled with different levels of enriched uranium oxides. Helium is used as the coolant in the fast neutron core. The outer, thermal neutron spectrum core is fuelled with natural uranium, with heavy water as both moderator and coolant. Both cores use 37-element fuel bundles. The size of the two cores and the enrichment level of the fresh fuel in the fast core were optimized according to the criticality safety of the whole reactor. The excess reactivity, the regeneration factor, and the radial and axial flux shapes of the MSCR were calculated at different concentrations of the fissile isotope 235U in the uranium fuel of the fast neutron spectrum core. The effect of variation of the fissile isotope concentration on the fluxes in both cores in each energy bin has been studied. (author)

  9. Design of a Multi-Spectrum CANDU-based Reactor, MSCR, with 37-element fuel bundles using SERPENT code

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, M.S.; Bonin, H.W.; Lewis, B.J.; Chan, P., E-mail: mohamed.hussein@rmc.ca, E-mail: bonin-h@rmc.ca, E-mail: lewis-b@rmc.ca, E-mail: Paul.Chan@rmc.ca [Royal Military College of Canada, Dept. of Chemistry and Chemical Engineering, Kingston, ON (Canada)

    2015-07-01

    The burning of highly-enriched uranium and plutonium from dismantled nuclear warhead material in new-design nuclear power plants represents an important step towards nonproliferation. Blending this highly enriched uranium and plutonium with uranium dioxide from the spent fuel of CANDU reactors, or mixing it with depleted uranium, would take a very long time to dispose of this material. Consequently, considering that more efficient transmutation of actinides occurs in fast neutron reactors, a novel Multi-Spectrum CANDU Reactor has been designed on the basis of the CANDU6 reactor with two concentric regions. The simulations of the MSCR were carried out using the SERPENT code. The inner, fast neutron spectrum core is fuelled with different levels of enriched uranium oxides. Helium is used as the coolant in the fast neutron core. The outer, thermal neutron spectrum core is fuelled with natural uranium, with heavy water as both moderator and coolant. Both cores use 37-element fuel bundles. The size of the two cores and the enrichment level of the fresh fuel in the fast core were optimized according to the criticality safety of the whole reactor. The excess reactivity, the regeneration factor, and the radial and axial flux shapes of the MSCR were calculated at different concentrations of the fissile isotope {sup 235}U in the uranium fuel of the fast neutron spectrum core. The effect of variation of the fissile isotope concentration on the fluxes in both cores in each energy bin has been studied. (author)

  10. Code Cactus; Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: -1) Analysis of thermal experiments on a water loop at high or low pressure, in steady state or transient behavior; -2) Analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates: - Flowrate in parallel channels, coupled or not by conduction across plates, with conditions of pressure drop or flowrate, variable or not with respect to time, imposed; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, containing a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code, and has as its complement (FLID), a one-channel, two-dimensional code. (authors)

  11. Architecture and Film

    OpenAIRE

    Mohammad Javaheri, Saharnaz

    2016-01-01

    Film does not exist without architecture. In every movie that has ever been made throughout history, the cinematic image of architecture is embedded within the picture. Throughout my studies and research, I began to see that there is no director who can consciously or unconsciously deny the use of architectural elements in his or her movies. Architecture offers a strong profile to distinguish characters and story. In the early days, films were shot in streets surrounde...

  12. Coded Network Function Virtualization

    DEFF Research Database (Denmark)

    Al-Shuwaili, A.; Simone, O.; Kliewer, J.

    2016-01-01

    Network function virtualization (NFV) prescribes the instantiation of network functions on general-purpose network devices, such as servers and switches. While yielding a more flexible and cost-effective network architecture, NFV is potentially limited by the fact that commercial off-the-shelf hardware is less reliable than the dedicated network elements used in conventional cellular deployments. The typical solution for this problem is to duplicate network functions across geographically distributed hardware in order to ensure diversity. In contrast, this letter proposes to leverage channel coding in order to enhance the robustness of NFV to hardware failure. The proposed approach targets the network function of uplink channel decoding, and builds on the algebraic structure of the encoded data frames in order to perform in-network coding on the signals to be processed at different servers...
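The letter's core idea, using an erasure code across servers instead of full duplication, can be illustrated with the simplest possible scheme, a single XOR parity. This sketch is purely illustrative and is not the algebraic code of the letter, which operates on channel-encoded uplink frames rather than raw bytes as here.

```python
# Illustrative single-parity erasure code: a frame is split across
# three "servers"; a fourth holds the XOR parity, so any single
# server failure can be tolerated. (Hypothetical example, not the
# scheme proposed in the letter.)

def encode(shards: list[bytes]) -> bytes:
    """Return the XOR parity of equal-length shards."""
    parity = bytearray(len(shards[0]))
    for shard in shards:
        for i, b in enumerate(shard):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild the single missing shard from the survivors plus parity."""
    return encode(surviving + [parity])

shards = [b"abcd", b"efgh", b"ijkl"]   # data placed on three servers
parity = encode(shards)                # stored on a fourth server

# Server 1 fails; its shard is recomputed from the rest.
rebuilt = recover([shards[0], shards[2]], parity)
assert rebuilt == shards[1]
```

The parity costs one extra server, versus N extra under full duplication, which is the efficiency argument for coding over replication.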

  13. Probabilistic evaluation of fuel element performance by the combined use of a fast running simplistic and a detailed deterministic fuel performance code

    International Nuclear Information System (INIS)

    Misfeldt, I.

    1980-01-01

    A comprehensive evaluation of fuel element performance requires a probabilistic fuel code supported by a well-benchmarked deterministic code. This paper presents an analysis of an SGHWR ramp experiment in which the probabilistic fuel code FRP is utilized in combination with the deterministic fuel models FFRS and SLEUTH/SEER. The statistical methods employed in FRP are Monte Carlo simulation or a low-order Taylor approximation. The fast-running simplistic fuel code FFRS is used for the deterministic simulations, whereas simulations with SLEUTH/SEER are used to verify the predictions of FFRS. The ramp test was performed with an SGHWR fuel element in which 9 of the 36 fuel pins failed. There seemed to be good agreement between the deterministic simulations and the experiment, but the statistical evaluation shows that the uncertainty on the important performance parameters is too large for this 'nice' result. The analysis does therefore indicate a discrepancy between the experiment and the deterministic code predictions. Possible explanations for this disagreement are discussed. (author)
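The probabilistic layer described in the abstract, a statistical wrapper driving many runs of a fast deterministic model, can be sketched generically. The toy centre-line temperature model and the input uncertainties below are hypothetical stand-ins (FRP and FFRS are not publicly available); only the Monte Carlo propagation pattern itself is the point.

```python
# Sketch of Monte Carlo uncertainty propagation around a fast
# deterministic model, as the abstract describes for FRP + FFRS.
# The model and input distributions are hypothetical.
import math
import random
import statistics

def fuel_centre_temp(power, conductivity, coolant_temp):
    """Toy steady-state model: T_cl = T_cool + q' / (4*pi*k)."""
    return coolant_temp + power / (4 * math.pi * conductivity)

random.seed(1)
samples = []
for _ in range(10_000):
    q = random.gauss(30e3, 1.5e3)   # linear power, W/m (assumed 5% sd)
    k = random.gauss(3.0, 0.3)      # thermal conductivity, W/(m K)
    samples.append(fuel_centre_temp(q, k, 560.0))

mean = statistics.mean(samples)
sd = statistics.stdev(samples)
print(f"centre-line temp: {mean:.0f} K +/- {sd:.0f} K")
```

The spread of the output sample is exactly the kind of uncertainty band that, in the paper, turns a seemingly good deterministic match into an inconclusive one.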

  14. The NPOESS Preparatory Project Science Data Segment (SDS) Data Depository and Distribution Element (SD3E) System Architecture

    Science.gov (United States)

    Ho, Evelyn L.; Schweiss, Robert J.

    2008-01-01

    The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP) Science Data Segment (SDS) will make daily data requests for approximately six terabytes of NPP science products for each of its six environmental assessment elements from the operational data providers. As a result, issues associated with duplicate data requests, data transfers of large volumes of diverse products, and data transfer failures raised concerns with respect to the network traffic and bandwidth consumption. The NPP SDS Data Depository and Distribution Element (SD3E) was developed to provide a mechanism for efficient data exchange, alleviate duplicate network traffic, and reduce operational costs.

  15. Development of a computer code 'CRACK' for elastic and elastoplastic fracture mechanics analysis of 2-D structures by finite element technique

    International Nuclear Information System (INIS)

    Dutta, B.K.; Kakodkar, A.; Maiti, S.K.

    1986-01-01

    The fracture mechanics analysis of nuclear components is required to ensure the prevention of sudden failure due to dynamic loadings. Linear elastic analysis near a crack tip shows the presence of a stress singularity at the crack tip. Simulating this singularity in numerical methods enhances convergence. In the finite element technique this can be achieved by placing the mid-side nodes of 8-noded or 6-noded isoparametric elements at one-quarter distance from the crack tip. The present report details this characteristic of the finite element, the implementation of this element in the code 'CRACK', the implementation of the J-integral to compute the stress intensity factor, and the solution of a number of cases for elastic and elastoplastic fracture mechanics analysis. 6 refs., 6 figures. (author)
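The quarter-point placement described in the abstract can be checked numerically in one dimension: for a quadratic element with nodes at x = 0, L/4 and L, the isoparametric map becomes x(ξ) = L(1+ξ)²/4, so the Jacobian equals √(Lx) and vanishes at the crack tip, which is what produces the 1/√r strain singularity. This is a 1-D sketch of the mechanism, not the 2-D elements actually used in CRACK.

```python
# 1-D illustration of the quarter-point trick: with the mid node at
# L/4, the map is x(xi) = L*(1+xi)**2/4, so the Jacobian
# dx/dxi = L*(1+xi)/2 = sqrt(L*x) vanishes at the tip (xi = -1).
# Strains carry a 1/J factor and therefore behave like 1/sqrt(r).
import math

L = 8.0                      # element length (arbitrary units)
nodes = [0.0, L / 4, L]      # mid-side node moved to the quarter point

def shape(xi):
    """Quadratic Lagrange shape functions on [-1, 1]."""
    return [xi * (xi - 1) / 2, 1 - xi**2, xi * (xi + 1) / 2]

def dshape(xi):
    """Their derivatives with respect to xi."""
    return [xi - 0.5, -2 * xi, xi + 0.5]

for xi in [-0.9, -0.5, 0.0, 0.5, 0.9]:
    x = sum(N * xn for N, xn in zip(shape(xi), nodes))
    J = sum(dN * xn for dN, xn in zip(dshape(xi), nodes))
    assert math.isclose(J, math.sqrt(L * x))   # J ~ sqrt(r) near the tip
```

With the mid node at the usual L/2 the Jacobian is constant and no singularity is reproduced, which is why the quarter-point shift improves convergence at crack tips.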

  16. Production of Curved Precast Concrete Elements for Shell Structures and Free-form Architecture using the Flexible Mould Method

    NARCIS (Netherlands)

    Schipper, H.R.; Grünewald, S.; Eigenraam, P.; Raghunath, P.; Kok, M.A.D.

    2014-01-01

    Free-form buildings tend to be expensive. By optimizing the production process, economical and well-performing precast concrete structures can be manufactured. In this paper, a method is presented that allows producing highly accurate double curved-elements without the need for milling two expensive

  17. Identification of an ICP27-responsive element in the coding region of a herpes simplex virus type 1 late gene.

    Science.gov (United States)

    Sedlackova, Lenka; Perkins, Keith D; Meyer, Julia; Strain, Anna K; Goldman, Oksana; Rice, Stephen A

    2010-03-01

    During productive herpes simplex virus type 1 (HSV-1) infection, a subset of viral delayed-early (DE) and late (L) genes require the immediate-early (IE) protein ICP27 for their expression. However, the cis-acting regulatory sequences in DE and L genes that mediate their specific induction by ICP27 are unknown. One viral L gene that is highly dependent on ICP27 is that encoding glycoprotein C (gC). We previously demonstrated that this gene is posttranscriptionally transactivated by ICP27 in a plasmid cotransfection assay. Based on our past results, we hypothesized that the gC gene possesses a cis-acting inhibitory sequence and that ICP27 overcomes the effects of this sequence to enable efficient gC expression. To test this model, we systematically deleted sequences from the body of the gC gene and tested the resulting constructs for expression. In so doing, we identified a 258-bp "silencing element" (SE) in the 5' portion of the gC coding region. When present, the SE inhibits gC mRNA accumulation from a transiently transfected gC gene, unless ICP27 is present. Moreover, the SE can be transferred to another HSV-1 gene, where it inhibits mRNA accumulation in the absence of ICP27 and confers high-level expression in the presence of ICP27. Thus, for the first time, an ICP27-responsive sequence has been identified in a physiologically relevant ICP27 target gene. To see if the SE functions during viral infection, we engineered HSV-1 recombinants that lack the SE, either in a wild-type (WT) or ICP27-null genetic background. In an ICP27-null background, deletion of the SE led to ICP27-independent expression of the gC gene, demonstrating that the SE functions during viral infection. Surprisingly, the ICP27-independent gC expression seen with the mutant occurred even in the absence of viral DNA synthesis, indicating that the SE helps to regulate the tight DNA replication-dependent expression of gC.

  18. Dissection of cis-regulatory element architecture of the rice oleosin gene promoters to assess abscisic acid responsiveness in suspension-cultured rice cells.

    Science.gov (United States)

    Kim, Sol; Lee, Soo-Bin; Han, Chae-Seong; Lim, Mi-Na; Lee, Sung-Eun; Yoon, In Sun; Hwang, Yong-Sic

    2017-08-01

    Oleosins are the most abundant proteins in the monolipid layer surrounding neutral storage lipids that form oil bodies in plants. Several lines of evidence indicate that they are physiologically important for the maintenance of oil body structure and for mobilization of the lipids stored inside. Rice has six oleosin genes in its genome, the expression of all of which was found to be responsive to abscisic acid (ABA) in our examination of mature embryo and aleurone tissues. The 5'-flanking region of OsOle5 was initially characterized for its responsiveness to ABA through a transient expression assay system using the protoplasts from suspension-cultured rice cells. A series of successive deletions and site-directed mutations identified five regions critical for the hormonal induction of its promoter activity. A search for cis-acting elements in these regions deposited in a public database revealed that they contain various promoter elements previously reported to be involved in the ABA response of various genes. A gain-of-function experiment indicated that multiple copies of all five regions were sufficient to provide the minimal promoter with a distinct ABA responsiveness. Comparative sequence analysis of the short, but still ABA-responsive, promoters of OsOle genes revealed no common modular architecture shared by them, indicating that various distinct promoter elements and independent trans-acting factors are involved in the ABA responsiveness of rice oleosin multigenes. Copyright © 2017 Elsevier GmbH. All rights reserved.

  19. Architectural elements from Lower Proterozoic braid-delta and high-energy tidal flat deposits in the Magaliesberg Formation, Transvaal Supergroup, South Africa

    Science.gov (United States)

    Eriksson, Patrick G.; Reczko, Boris F. F.; Jaco Boshoff, A.; Schreiber, Ute M.; Van der Neut, Markus; Snyman, Carel P.

    1995-06-01

    Three architectural elements are identified in the Lower Proterozoic Magaliesberg Formation (Pretoria Group, Transvaal Supergroup) of the Kaapvaal craton, South Africa: (1) medium- to coarse-grained sandstone sheets; (2) fine- to medium-grained sandstone sheets; and (3) mudrock elements. Both sandstone sheet elements are characterised by horizontal lamination and planar cross-bedding, with lesser trough cross-bedding, channel-fills and wave ripples, as well as minor desiccated mudrock partings, double-crested and flat-topped ripples. Due to the local unimodal palaeocurrent patterns in the medium- to coarse-grained sandstone sheets, they are interpreted as ephemeral braid-delta deposits, which were subjected to minor marine reworking. The predominantly bimodal to polymodal palaeocurrent trends in the fine- to medium-grained sandstone sheets are inferred to reflect high-energy macrotidal processes and more complete reworking of braid-delta sands. The suspension deposits of mudrocks point to either braid-delta channel abandonment, or uppermost tidal flat sedimentation. The depositional model comprises ephemeral braid-delta systems which debouched into a high-energy peritidal environment, around the margins of a shallow epeiric sea on the Kaapvaal craton. Braid-delta and tidal channel dynamics are inferred to have been similar. Fine material in the Magaliesberg Formation peritidal complexes indicates that extensive aeolian removal of clay does not seem applicable to this example of the early Proterozoic.

  20. Software architecture evolution

    DEFF Research Database (Denmark)

    Barais, Olivier; Le Meur, Anne-Francoise; Duchien, Laurence

    2008-01-01

    Software architectures must frequently evolve to cope with changing requirements, and this evolution often implies integrating new concerns. Unfortunately, when the new concerns are crosscutting, existing architecture description languages provide little or no support for this kind of evolution....... The software architect must modify multiple elements of the architecture manually, which risks introducing inconsistencies. This chapter provides an overview, comparison and detailed treatment of the various state-of-the-art approaches to describing and evolving software architectures. Furthermore, we discuss...... one particular framework named TranSAT, which addresses the above problems of software architecture evolution. TranSAT provides a new element in the software architecture description language, called an architectural aspect, for describing new concerns and their integration into an existing...

  1. FINEDAN - an explicit finite-element calculation code for two-dimensional analyses of fast dynamic transients in nuclear reactor technology

    International Nuclear Information System (INIS)

    Adamik, V.; Matejovic, P.

    1989-01-01

The problems of nonstationary, nonlinear dynamics of the continuum are discussed. A survey is presented of calculation methods in the given area, with emphasis on impact problems. The explicit finite element method is described, together with its application to two-dimensional Cartesian and cylindrical configurations. Using this method, the explicit calculation code FINEDAN was written and tested in a series of verification calculations for different configurations and different types of continuum. The main characteristics of the code and of some of its practical applications are presented. Envisaged trends of the development of the code and its possible applications in nuclear reactor technology are given. (author). 9 figs., 4 tabs., 10 refs

  2. The frequency-dependent elements in the code SASSI: A bridge between civil engineers and the soil-structure interaction specialists

    International Nuclear Information System (INIS)

    Tyapin, Alexander

    2007-01-01

After four decades of intensive studies of soil-structure interaction (SSI) effects in the field of NPP seismic analysis, there is a certain gap between SSI specialists and civil engineers. The results obtained using advanced SSI codes like SASSI are often rather far from the results obtained using general-purpose codes (though they match the experimental and field data). The reasons for the discrepancies are not clear, because neither party can reproduce the results of the 'other party' and investigate the influence of the various contributing factors step by step. As a result, civil engineers neither feel the SSI effects nor control them. The author believes that the SSI specialists should take the first step forward by (a) reconciling 'viscous' damping in the structures with the 'material' one, and (b) convoluting all the SSI wave effects into the format of 'soil springs and dashpots', more or less clear to civil engineers. The tool for both tasks could be a special finite element with frequency-dependent stiffness developed by the author for the code SASSI. This element can represent both soil and structure in the SSI model and help to split the various factors influencing the seismic response. The paper presents the theory and some practical issues concerning the new element.

  3. ARDISC (Argonne Dispersion Code): computer programs to calculate the distribution of trace element migration in partially equilibrating media

    International Nuclear Information System (INIS)

    Strickert, R.; Friedman, A.M.; Fried, S.

    1979-04-01

    A computer program (ARDISC, the Argonne Dispersion Code) is described which simulates the migration of nuclides in porous media and includes first order kinetic effects on the retention constants. The code allows for different absorption and desorption rates and solves the coupled migration equations by arithmetic reiterations. Input data needed are the absorption and desorption rates, equilibrium surface absorption coefficients, flow rates and volumes, and media porosities
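The coupled solution/surface exchange described in the abstract can be sketched compactly. The cell count, rate constants and plug-flow advection below are illustrative assumptions, not ARDISC's actual numerics:

```python
# Illustrative sketch: nuclide migration through a column of cells with
# first-order absorption (ka) and desorption (kd) kinetics, as in the
# abstract's description. All parameter values are made up.

def migrate(n_cells=50, n_steps=200, dt=0.1, ka=0.5, kd=0.1):
    """March a dissolved (c) / sorbed (s) nuclide pair down a column of cells."""
    c = [0.0] * n_cells          # concentration in solution
    s = [0.0] * n_cells          # concentration held on the solid surface
    c[0] = 1.0                   # unit pulse injected at the inlet
    for _ in range(n_steps):
        # first-order exchange between the two phases
        for i in range(n_cells):
            dc = (-ka * c[i] + kd * s[i]) * dt
            c[i] += dc
            s[i] -= dc
        # plug-flow advection: the solution phase moves one cell downstream,
        # the sorbed phase stays put; mass leaving the last cell is outflow
        c = [0.0] + c[:-1]
    return c, s

c, s = migrate()
```

Because the exchange step conserves mass cell by cell, the sorbed inventory lags behind the advected pulse, which is the retention effect the code models.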

  4. Architectural element analysis within the Kayenta Formation (Lower Jurassic) using ground-probing radar and sedimentological profiling, southwestern Colorado

    Science.gov (United States)

    Stephens, Mark

    1994-05-01

    A well exposed outcrop in the Kayenta Formation (Lower Jurassic) in southwestern Colorado was examined in order to delineate the stratigraphy in the subsurface and test the usefulness of ground-probing radar (GPR) in three-dimensional architectural studies. Two fluvial styles are present within the Kayenta Formation. Sandbodies within the lower third of the outcrop are characterized by parallel laminations that can be followed in the cliff-face for well over 300 m. These sandbodies are sheet-like in appearance, and represent high-energy flood deposits that most likely resulted from episodic floods. The remainder of the outcrop is characterized by concave-up channel deposits with bank-attached and mid-channel macroforms. Their presence suggests a multiple channel river system. The GPR data collected on the cliff-top, together with sedimentological data, provided a partial three-dimensional picture of the paleo-river system within the Kayenta Formation. The 3-D picture consists of stacked channel-bar lenses approximately 50 m in diameter. The GPR technique offers a very effective means of delineating the subsurface stratigraphy. Its high resolution capabilities, easy mobility, and rapid rate of data collection make it a useful tool. Its shallow penetration depth and limitation to low-conductivity environments are its only drawbacks.

  5. Maxwell's equations in axisymmetrical geometry: coupling H(curl) finite element in volume and H(div) finite element in surface. The numerical code FuMel

    International Nuclear Information System (INIS)

    Cambon, S.; Lacoste, P.

    2011-01-01

We propose a finite element method to solve the axisymmetric scattering problem posed on a regular bounded domain. We show how to reduce the initial 3D problem to a truncated sum of 2D independent problems posed in a meridian plane of the object. Each of these problems results in the coupling of a partial differential equation in the interior domain and an integral equation on the surface simulating free space. Variational volume and boundary integral formulations of Maxwell's equations on regular surfaces are then derived. We introduce general finite elements adapted to cylindrical coordinates, constructed from nodal and mixed finite elements, both for the interior (volume) problem and for the integral equation (surface). (authors)

  6. Methodology for bus layout for topological quantum error correcting codes

    Energy Technology Data Exchange (ETDEWEB)

    Wosnitzka, Martin; Pedrocchi, Fabio L.; DiVincenzo, David P. [RWTH Aachen University, JARA Institute for Quantum Information, Aachen (Germany)

    2016-12-15

Most quantum computing architectures can be realized as two-dimensional lattices of qubits that interact with each other. We take transmon qubits and transmission line resonators as promising candidates for qubits and couplers, and use them as the basic building elements of a quantum code. We then propose a simple framework to determine the optimal experimental layout to realize quantum codes. We show that this engineering optimization problem can be reduced to the solution of standard binary linear programs. While solving such programs is an NP-hard problem, we propose a way to find scalable optimal architectures that requires solving the linear program for only a restricted number of qubits and couplers. We apply our methods to two celebrated quantum codes, namely the surface code and the Fibonacci code. (orig.)
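The reduction to a binary linear program can be illustrated on a toy instance. The interactions, couplers and covering-style constraints below are invented for illustration, and the tiny 0-1 program is solved by brute-force enumeration rather than an LP solver:

```python
from itertools import product

# Toy 0-1 linear program in the spirit of the abstract (not the paper's
# actual formulation): choose a minimal set of couplers (binary variables
# x_j) so that every required qubit-qubit interaction is served.

interactions = ["q0-q1", "q1-q2", "q2-q3"]
# which interactions each candidate coupler can serve (illustrative data)
couplers = {0: {"q0-q1", "q1-q2"}, 1: {"q1-q2"}, 2: {"q2-q3"}}

best, best_cost = None, None
for x in product([0, 1], repeat=len(couplers)):       # enumerate binaries
    served = set().union(*(couplers[j] for j in couplers if x[j]))
    if all(i in served for i in interactions):        # feasibility check
        cost = sum(x)                                 # linear objective
        if best_cost is None or cost < best_cost:
            best, best_cost = x, cost
print(best, best_cost)   # prints: (1, 0, 1) 2
```

For realistic qubit counts the enumeration is replaced by an integer-programming solver; the point here is only the shape of the binary program.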

  7. Time-history simulation of civil architecture earthquake disaster relief based on the three-dimensional dynamic finite element method

    Directory of Open Access Journals (Sweden)

    Liu Bing

    2014-10-01

    Earthquake action is the main external factor influencing the long-term safe operation of civil construction, especially high-rise buildings. Applying the time-history method to simulate the earthquake response of a civil construction foundation and its surrounding rock is an effective approach to the aseismic study of civil buildings. Therefore, this paper develops a three-dimensional dynamic finite element numerical simulation system for civil building earthquake disasters. The system adopts the explicit central difference method. The strengthening characteristics of materials under high strain rate and the damage characteristics of surrounding rock under cyclic loading are considered. On this basis, a dynamic constitutive model of rock mass suitable for the aseismic analysis of civil buildings is put forward. Finally, through a time-history simulation of the earthquake disaster at the Shenzhen Children's Palace, the reliability and practicability of the system program in the analysis of practical engineering problems are verified.
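The explicit central difference scheme the system is built on can be shown on a single degree of freedom under a hypothetical ground motion; the real code applies the same update at every finite-element node, and all numbers below are made up:

```python
import math

# Explicit central-difference time integration for an undamped SDOF
# oscillator under base excitation: m*u'' + k*u = -m*a_g(t).
# Parameters and the sinusoidal ground motion are illustrative only.

def central_difference(m=1.0, k=4.0 * math.pi**2, dt=0.001, n=5000):
    omega = math.sqrt(k / m)
    assert dt < 2.0 / omega                    # explicit stability limit
    a_g = lambda t: 0.3 * math.sin(5.0 * t)    # hypothetical ground motion
    u_prev, u = 0.0, 0.0                       # start at rest
    history = []
    for i in range(n):
        t = i * dt
        acc = (-m * a_g(t) - k * u) / m        # acceleration from equilibrium
        u_next = 2.0 * u - u_prev + dt * dt * acc   # central difference
        u_prev, u = u, u_next
        history.append(u)
    return history

h = central_difference()
```

The scheme is conditionally stable (dt below 2/omega for the highest mode), which is why explicit codes of this kind take many small steps.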

  8. RAP-3A Computer code for thermal and hydraulic calculations in steady state conditions for fuel element clusters

    International Nuclear Information System (INIS)

    Popescu, C.; Biro, L.; Iftode, I.; Turcu, I.

    1975-10-01

The RAP-3A computer code is designed for calculating the main steady-state thermo-hydraulic parameters of multirod fuel clusters with liquid-metal cooling. The programme provides a double-precision computation of the axial distributions of temperature and enthalpy, of pressure losses, and of the axial heat flux in fuel clusters before boiling conditions occur. The physical and mathematical models as well as a sample problem are presented. The code is written in FORTRAN-4 and runs on an IBM-370/135 computer
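The kind of steady-state single-channel energy balance such a code performs can be sketched as a marching integration of the axial heat flux. The cosine power shape and all parameter values below are illustrative, not taken from the report:

```python
import math

# Steady-state axial enthalpy distribution in one coolant channel:
# dh/dz = q'(z) / mdot, integrated by the midpoint rule.
# q0 (peak linear power, W/m), mdot (kg/s) and h_in (J/kg) are made up.

def axial_enthalpy(q0=2.0e5, L=1.0, mdot=0.05, h_in=4.0e5, n=100):
    dz = L / n
    h = [h_in]
    for i in range(n):
        z = (i + 0.5) * dz                              # midpoint of slice
        q_lin = q0 * math.cos(math.pi * (z / L - 0.5))  # cosine power shape
        h.append(h[-1] + q_lin * dz / mdot)             # dh = q' dz / mdot
    return h

profile = axial_enthalpy()
```

Temperatures then follow from the enthalpy via coolant property tables, which is where the bulk of a real thermo-hydraulics code's complexity lives.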

  9. Architecture Governance: The Importance of Architecture Governance for Achieving Operationally Responsive Ground Systems

    Science.gov (United States)

    Kolar, Mike; Estefan, Jeff; Giovannoni, Brian; Barkley, Erik

    2011-01-01

Topics covered: (1) Why Governance and Why Now? (2) Characteristics of Architecture Governance. (3) Strategic Elements: (3a) Architectural Principles, (3b) Architecture Board, (3c) Architecture Compliance. (4) Architecture Governance Infusion Process. Governance is concerned with decision making (i.e., setting directions, establishing standards and principles, and prioritizing investments). Architecture governance is the practice and orientation by which enterprise architectures and other architectures are managed and controlled at an enterprise-wide level

  10. Structural evaluation method for class 1 vessels by using elastic-plastic finite element analysis in code case of JSME rules on design and construction

    International Nuclear Information System (INIS)

    Asada, Seiji; Hirano, Takashi; Nagata, Tetsuya; Kasahara, Naoto

    2008-01-01

A structural evaluation method using elastic-plastic finite element analysis has been developed and published as a code case of the Rules on Design and Construction for Nuclear Power Plants (The First Part: Light Water Reactor Structural Design Standard) in the JSME Codes for Nuclear Power Generation Facilities. Its title is 'Alternative Structural Evaluation Criteria for Class 1 Vessels Based on Elastic-Plastic Finite Element Analysis' (NC-CC-005). This code case applies elastic-plastic analysis to the evaluation of such failure modes as plastic collapse, thermal ratchet, fatigue and so on. The advantages of this evaluation method are freedom from stress classification, consistent use of Mises stress, and applicability to complex 3-dimensional structures which are hard to treat by the conventional stress classification method. The evaluation of plastic collapse has such variants as the Lower Bound Approach Method, the Twice-Elastic-Slope Method and the Elastic Compensation Method. The Cyclic Yield Area (CYA) based on elastic analysis is applied to a screening evaluation of thermal ratchet instead of secondary stress evaluation, and elastic-plastic analysis is performed when the CYA screening criterion is not satisfied. Strain concentration factors can be calculated directly from the elastic-plastic analysis. (author)
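One of the plastic-collapse variants named above, the Twice-Elastic-Slope method, amounts to a curve intersection: the collapse load is read where the load-deflection curve crosses a straight line through the origin with half the elastic stiffness (twice the elastic angle measured from the load axis). A sketch with a synthetic load-deflection curve:

```python
import math

# Twice-Elastic-Slope collapse load for a synthetic, exponentially
# saturating load-deflection curve. k (elastic stiffness), p_lim (limit
# load of the curve) and the curve shape itself are illustrative stand-ins,
# not data from the code case.

def collapse_load_tes(k=100.0, p_lim=10.0, d_max=1.0, n=100000):
    curve = lambda d: p_lim * (1.0 - math.exp(-k * d / p_lim))
    for i in range(1, n + 1):
        d = d_max * i / n
        if curve(d) <= 0.5 * k * d:      # curve meets the TES line
            return curve(d)
    return None                          # no collapse within d_max

p_collapse = collapse_load_tes()
```

In practice the load-deflection curve comes from the elastic-plastic finite element run, and the same intersection is read off graphically or numerically.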

  11. Architectural prototyping

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2004-01-01

    A major part of software architecture design is learning how specific architectural designs balance the concerns of stakeholders. We explore the notion of "architectural prototypes", correspondingly architectural prototyping, as a means of using executable prototypes to investigate stakeholders...

  12. A Systematic Review of Software Architecture Visualization Techniques

    NARCIS (Netherlands)

    Shahin, M.; Liang, P.; Ali Babar, M.

    2014-01-01

Context: Given the increased interest in using visualization techniques (VTs) to help communicate and understand the software architecture (SA) of large-scale complex systems, several VTs and tools have been reported to represent architectural elements (such as architecture design, architectural

  13. A benchmark comparison of the Canadian Supercritical Water-Cooled Reactor (SCWR) 64-element fuel lattice cell parameters using various computer codes

    Energy Technology Data Exchange (ETDEWEB)

    Sharpe, J.; Salaun, F.; Hummel, D.; Moghrabi, A., E-mail: sharpejr@mcmaster.ca [McMaster University, Hamilton, ON (Canada); Nowak, M. [McMaster University, Hamilton, ON (Canada); Institut National Polytechnique de Grenoble, Phelma, Grenoble (France); Pencer, J. [McMaster University, Hamilton, ON (Canada); Canadian Nuclear Laboratories, Chalk River, ON, (Canada); Novog, D.; Buijs, A. [McMaster University, Hamilton, ON (Canada)

    2015-07-01

Discrepancies in key lattice physics parameters have been observed between various deterministic (e.g. DRAGON and WIMS-AECL) and stochastic (MCNP, KENO) neutron transport codes in modeling previous versions of the Canadian SCWR lattice cell. Further, inconsistencies in these parameters have also been observed when using different nuclear data libraries. In this work, the predictions of k∞, various reactivity coefficients, and relative ring-averaged pin powers have been re-evaluated using these codes and libraries with the most recent 64-element fuel assembly geometry. A benchmark problem has been defined to quantify the dissimilarities between code results for a number of responses along the fuel channel under prescribed hot full power (HFP), hot zero power (HZP) and cold zero power (CZP) conditions and at several fuel burnups (0, 25 and 50 MW·d·kg⁻¹ [HM]). Results from deterministic (TRITON, DRAGON) and stochastic codes (MCNP6, KENO V.a and KENO-VI) are presented. (author)

  14. Architecture on Architecture

    DEFF Research Database (Denmark)

    Olesen, Karen

    2016-01-01

This paper will discuss the challenges faced by architectural education today. It takes as its starting point the double commitment of any school of architecture: on the one hand the task of preserving the particular knowledge that belongs to the discipline of architecture, and on the other hand... a knowledge that is not scientific or academic but is more like a latent body of data that we find embedded in existing works of architecture. This information, it is argued, is not limited by the historical context of the work. It can be thought of as a virtual capacity – a reservoir of spatial configurations that can... correlation between the study of existing architectures and the training of competences to design for present-day realities.

  15. Development of finite element code for the analysis of coupled thermo-hydro-mechanical behaviors of a saturated-unsaturated medium

    International Nuclear Information System (INIS)

Ohnishi, Y.; Shibata, H.; Kobayashi, A.

    1987-01-01

    A model is presented which describes fully coupled thermo-hydro-mechanical behavior of a porous geologic medium. The mathematical formulation for the model utilizes the Biot theory for the consolidation and the energy balance equation. If the medium is in the condition of saturated-unsaturated flow, then the free surfaces are taken into consideration in the model. The model, incorporated in a finite element numerical procedure, was implemented in a two-dimensional computer code. The code was developed under the assumptions that the medium is poro-elastic and in the plane strain condition; that water in the ground does not change its phase; and that heat is transferred by conductive and convective flow. Analytical solutions pertaining to consolidation theory for soils and rocks, thermoelasticity for solids and hydrothermal convection theory provided verification of stress and fluid flow couplings, respectively, in the coupled model. Several types of problems are analyzed
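The consolidation benchmark mentioned for verification has a compact classical core: Terzaghi's one-dimensional problem reduces the coupled equations to a diffusion equation for excess pore pressure, p_t = c_v p_zz. A minimal explicit finite-difference sketch, with an illustrative coefficient and discretisation:

```python
# Terzaghi 1-D consolidation: drained surface at z=0, impervious base at
# z=H, uniform initial excess pore pressure p0. cv, H, p0 and t_end are
# illustrative values, not from the paper.

def terzaghi(cv=1.0e-7, H=1.0, p0=100.0, t_end=1.0e6, n=50):
    """Excess pore pressure profile p(z) after t_end seconds."""
    dz = H / n
    dt = 0.4 * dz * dz / cv          # explicit stability: dt <= dz^2 / (2 cv)
    p = [p0] * (n + 1)
    p[0] = 0.0                       # drained boundary at the surface
    for _ in range(int(t_end / dt)):
        new = p[:]
        for i in range(1, n):
            new[i] = p[i] + cv * dt / dz**2 * (p[i+1] - 2.0*p[i] + p[i-1])
        new[n] = new[n-1]            # impervious base: zero pressure gradient
        p = new
    return p

p = terzaghi()
```

The dissipation of this pressure field is what couples back into effective stress and settlement in the full Biot formulation.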

  16. FTS2000 network architecture

    Science.gov (United States)

    Klenart, John

    1991-01-01

The network architecture of FTS2000 is graphically depicted. A map of network A topology is provided, with interservice nodes. Next, the four basic elements of the architecture are laid out. Then, the FTS2000 time line is reproduced. A list of equipment supporting FTS2000 dedicated transmissions is given. Finally, access alternatives are shown.

  17. Architectural Physics: Lighting.

    Science.gov (United States)

    Hopkinson, R. G.

    The author coordinates the many diverse branches of knowledge which have dealt with the field of lighting--physiology, psychology, engineering, physics, and architectural design. Part I, "The Elements of Architectural Physics", discusses the physiological aspects of lighting, visual performance, lighting design, calculations and measurements of…

  18. Experimental and numerical investigation of water flow through spacer grids of nuclear fuel elements using the OpenFOAM code

    International Nuclear Information System (INIS)

    Vidal, Guilherme A.M.; Vieira, Tiago A.S.; Castro, Higor F.P.

    2017-01-01

With the advancement of computational tools, studies of the thermo-fluid-dynamic behavior of nuclear fuel elements have developed considerably in recent years. Among the devices present in these elements, the spacer grids have received the most attention. They keep the fuel rods equally spaced and have fins intended to improve the heat transfer between the water and the fuel element; the grids therefore serve an important structural and thermal function. This work was carried out with the purpose of verifying and validating simulations of spacer grids using the OpenFOAM (2017) Computational Fluid Dynamics (CFD) software. The simulations were validated against results obtained with the commercial CFD program Ansys CFX, and against experiments available in the literature and obtained in test sections assembled on the Water-Air Circuit (CCA) of the CDTN thermo-hydraulic laboratory

  19. Utilizing elements of the CSAU phenomena identification and ranking table (PIRT) to qualify a PWR non-LOCA transients system code

    Energy Technology Data Exchange (ETDEWEB)

    Greene, K.R.; Fletcher, C.D.; Gottula, R.C.; Lindquist, T.R.; Stitt, B.D. [Framatome ANP, Richland, WA (United States)

    2001-07-01

Licensing analyses of Nuclear Regulatory Commission (NRC) Standard Review Plan (SRP) Chapter 15 non-LOCA transients are an important part of establishing operational safety limits and design limits for nuclear power plants. The applied codes and methods are generally qualified using traditional methods of benchmarking and assessment, sample problems, and demonstration of conservatism. Rigorous formal methods for developing codes and methodologies have been created and applied to qualify realistic methods for Large Break Loss-of-Coolant Accidents (LBLOCAs). This methodology, Code Scaling, Applicability, and Uncertainty (CSAU), is a very demanding, resource-intensive process to apply. It would be challenging to apply a comprehensive and complete CSAU-level analysis, individually, to each of the more than 30 non-LOCA transients that comprise Chapter 15 events. However, certain elements of the process can be easily adapted to improve the quality of the codes and methods used to analyze non-LOCA transients. One of these elements is the Phenomena Identification and Ranking Table (PIRT). This paper presents the results of an informally constructed PIRT that applies to non-LOCA transients for Pressurized Water Reactors (PWRs) of the Westinghouse and Combustion Engineering designs. A group of experts in thermal-hydraulics and safety analysis identified and ranked the phenomena. To begin the process, the PIRT was initially performed individually by each expert. Then, through group interaction and discussion, a consensus was reached on both the significant phenomena and the appropriate ranking. The paper also discusses using the PIRT as an aid to qualify a 'conservative' system code and methodology. Once agreement was obtained on the phenomena and ranking, the table was divided into six functional groups, by nature of the transients, along the same lines as Chapter 15. Then, assessment and disposition of the significant phenomena were performed. The PIRT and
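Operationally, a PIRT is a ranked table that can be filtered and sorted mechanically; the phenomena, ranks and knowledge levels below are invented placeholders, not the expert panel's results:

```python
# A minimal PIRT as a data structure. Entries are invented for illustration:
# (phenomenon, importance rank 1..9, knowledge level 1..4).

pirt = [
    ("pressurizer level response",    8, 3),
    ("steam generator heat transfer", 9, 4),
    ("loop natural circulation",      5, 2),
    ("fuel gap conductance",          3, 2),
]

# Disposition: highly ranked phenomena with the poorest knowledge level
# drive the assessment priorities for the code and methodology.
high_priority = sorted(
    (row for row in pirt if row[1] >= 7),     # keep significant phenomena
    key=lambda row: (row[2], -row[1]),        # least known, then highest rank
)
for name, rank, knowledge in high_priority:
    print(f"{name}: rank {rank}, knowledge level {knowledge}")
```

The real value of a PIRT is the expert consensus behind the ranks; the table itself, as above, is simple to process once agreed.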

  20. User's manual for DYNA2D: an explicit two-dimensional hydrodynamic finite-element code with interactive rezoning

    Energy Technology Data Exchange (ETDEWEB)

    Hallquist, J.O.

    1982-02-01

This revised report provides an updated user's manual for DYNA2D, an explicit two-dimensional axisymmetric and plane strain finite element code for analyzing the large deformation dynamic and hydrodynamic response of inelastic solids. A contact-impact algorithm permits gaps and sliding along material interfaces. By a specialization of this algorithm, such interfaces can be rigidly tied to admit variable zoning without the need of transition regions. Spatial discretization is achieved by the use of 4-node solid elements, and the equations of motion are integrated by the central difference method. An interactive rezoner eliminates the need to terminate the calculation when the mesh becomes too distorted. Rather, the mesh can be rezoned and the calculation continued. The command structure for the rezoner is described and illustrated by an example.

  1. ARKAS: A three-dimensional finite element code for the analysis of core distortions and mechanical behaviour

    International Nuclear Information System (INIS)

    Nakagawa, M.

    1984-01-01

Computer program ARKAS has been developed for predicting core distortions and mechanical behaviour in a cluster of subassemblies under steady-state conditions in LMFBR cores. This report describes the analytical models and numerical procedures employed in the code, together with some typical results of analyses made on large LMFBR cores. ARKAS is programmed in FORTRAN-IV and is capable of treating up to 260 assemblies in a cluster with flexible boundary conditions, including mirror and rotational symmetry. The nonlinearity of the problem due to contact and separation is solved by a step-iterative procedure based on the Newton-Raphson method. In each iteration, the linear matrix equation must be reconstructed and then solved directly. To save computer time and memory, the substructure method is adopted in the step of reconstructing the linear matrix equation, and the block successive over-relaxation method is adopted in the step of solving it. The program ARKAS computes, at every time step, the 3-dimensional displacements and rotations of the subassemblies in the core and the interduct forces, including those at the nozzle tips and nozzle bases, with friction effects. The code also has the ability to deal with the refueling and shuffling of subassemblies and to calculate the values of withdrawal forces. For the qualitative validation of the code, sample calculations were performed on several bundle arrays. In these calculations, contact and separation processes under the influence of friction forces, off-center loading, duct rotation and torsion, thermal expansion, and irradiation-induced swelling and creep were analyzed. These results are quite reasonable in the light of the expected behaviour. This work was performed under the sponsorship of Toshiba Corporation.
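The contact/separation nonlinearity and the Newton-Raphson step iteration described above can be miniaturised to one duct pushed across a gap against a stiff one-sided contact spring. All stiffnesses and loads below are made-up values:

```python
# Newton-Raphson solution of a 1-D contact problem: a duct with lateral
# stiffness k is pushed by f_ext toward a neighbour across a gap; contact
# is modelled as a stiff one-sided spring k_contact. Illustrative only.

def newton_contact(f_ext=100.0, k=10.0, k_contact=1.0e4, gap=1.0,
                   tol=1e-10, max_iter=50):
    u = 0.0
    for _ in range(max_iter):
        pen = max(0.0, u - gap)                       # contact penetration
        residual = k * u + k_contact * pen - f_ext    # out-of-balance force
        if abs(residual) < tol:
            break
        # tangent stiffness switches when the gap closes
        stiffness = k + (k_contact if u > gap else 0.0)
        u -= residual / stiffness                     # Newton update
    return u

u = newton_contact()
```

As in the full code, the tangent (here, a scalar) changes whenever a contact opens or closes, so the linearised system must be rebuilt between iterations.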

  2. Color coding of televised task elements in remote work: a literature review with practical recommendations for a fuel reprocessing facility

    International Nuclear Information System (INIS)

    Clarke, M.M.; Preston-Anderson, A.

    1981-11-01

The experimental literature on the effects of color in visual displays was reviewed, with particular reference to the performance of remote work in a Hot Experimental Facility (HEF) using real-scene closed-circuit television systems. It was also reviewed with more general reference to the broader range of work-related issues of operator learning and preference, and display specifications. Color has been shown to enhance the performance of tasks requiring search and location, and may also enhance tracking/transportation tasks. However, both HEF large-volume searching and tracking can be computer-augmented, alleviating some of the necessity for a color code to assist an operator. Although color enhances long-term memory and is preferred to black-and-white displays, it has not been shown to have a specific advantage in the performance of unique tasks (where computer augmentation is more problematic and visual input to the operator is critical). Practical display specifications are discussed with reference to hue and size of the color code, target size, ambient illumination, multiple displays, and coatings. The authors conclude that the disadvantages of color television in the HEF far outweigh any possible advantages and recommend the use of high-resolution black-and-white systems, unless future experiments unequivocally indicate that (1) color is superior to black and white for in-situ task performance or (2) it is imperative in terms of long-range psychological well-being

  3. Predictions of the thermomechanical code 'RESTA' compared with fuel element examinations after irradiation in the BR3 reactor

    International Nuclear Information System (INIS)

    Petitgrand, S.

    1980-01-01

A large number of fuel rods have been irradiated in the small power plant BR3. Many of them have been examined in hot cells after irradiation, thus yielding valuable experimental information. On the other hand, a thermomechanical code named RESTA has been developed by the C.E.A. to describe and predict the behaviour of a fuel pin in a PWR environment under stationary conditions. The models used in the code derive chiefly from the C.E.A.'s own experience and are briefly reviewed in this paper. The comparison between prediction and experiment has been performed for four power history classes: (1) moderate (average linear rating ≈ 20 kW·m⁻¹) and short (≈ 300 days); (2) moderate (≈ 20 kW·m⁻¹) and long (≈ 600 days); (3) high (25-30 kW·m⁻¹) and long (≈ 600 days); and (4) very high (30-40 kW·m⁻¹) and long (≈ 600 days). Satisfactory agreement has been found between experimental and calculated results in all cases, concerning fuel structural change, fission gas release, pellet-clad interaction as well as clad permanent strain. (author)

  4. Numerical modelling of the long-term evolution of EDZ. Development of material models, implementation in finite-element codes, and validation

    International Nuclear Information System (INIS)

    Pudewills, A.

    2005-11-01

Construction of deep underground structures disturbs the initial stress field in the surrounding rock. This effect can generate microcracks and alter the hydromechanical properties of the rock salt around the excavations. For the long-term performance of an underground repository in rock salt, the evolution of the 'Excavation Disturbed Zone' (EDZ) and the hydromechanical behaviour of this zone represent important issues with respect to the integrity of the geological and technical barriers. Within the framework of the NF-PRO project, WP 4.4, attention focuses on the mathematical modelling of the development and evolution of the EDZ in the rock near a disposal drift, due to its relevance to the integrity of the geological and technical barriers. To perform this task, finite-element codes containing a set of time- and temperature-dependent constitutive models have been improved. A new viscoplastic constitutive model for rock salt that can describe the damage of the rock has been implemented in the available finite-element codes. The model parameters were evaluated based on experimental results. Additionally, the long-term evolution of the EDZ around a gallery in a salt mine at about 700 m below the surface was analysed and the numerical results were compared with in-situ measurements. The calculated room closure, stress distribution and increase of rock permeability in the EDZ were compared with in-situ data, thus providing confidence in the model used. (orig.)

  5. The one-dimensional transport code CHET2, taking into account nonlinear, element-specific equilibrium sorption

    International Nuclear Information System (INIS)

    Luehrmann, L.; Noseck, U.

    1996-03-01

While the verification report on CHET1 primarily focused on aspects such as the correctness of the algorithms modeling advection, dispersion and diffusion, the report in hand primarily deals with nonlinear sorption and its numerical modeling. Another aspect discussed is the correct treatment of decay within established radioactive decay chains. First, the physical fundamentals are explained of the processes that determine radionuclide transport in the cap rock and hence underlie the program. The numerical algorithms on which the CHET2 code is based are then explained, showing the details of their realisation and the function of the various defaults and corrections. The iterative coupling of the transport and sorption computations is illustrated by means of a program flowchart. Furthermore, the activities for verification of the program are described, as well as qualitative effects of computations assuming concentration-dependent sorption. The computation of decay within decay chains is verified, and an application using nonlinear sorption isotherms as well as the entire process of transport calculations with CHET2 are shown. (orig./DG)
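The nonlinear equilibrium sorption that distinguishes CHET2 from CHET1 can be illustrated with a toy scheme: a Freundlich isotherm s = K·c^n folded into a concentration-dependent retardation factor that divides an explicit upwind advection-dispersion update. The parameters and the simple discretisation are illustrative assumptions, not CHET2's actual algorithm:

```python
# 1-D advection-dispersion with nonlinear equilibrium (Freundlich) sorption.
# All parameter values (K, n_f, bulk density rho_b, porosity theta, v, D)
# are made up for illustration.

def chet_like_step(c, v, D, dx, dt, K=1.0, n_f=0.7, rho_b=1.6, theta=0.3):
    """One explicit step of retarded advection-dispersion."""
    new = c[:]
    for i in range(1, len(c) - 1):
        # Freundlich isotherm s = K * c**n_f gives the retardation factor
        # R(c) = 1 + (rho_b/theta) * K * n_f * c**(n_f - 1)
        R = 1.0 + (rho_b / theta) * K * n_f * max(c[i], 1e-12) ** (n_f - 1.0)
        adv = -v * (c[i] - c[i - 1]) / dx                    # upwind advection
        disp = D * (c[i + 1] - 2.0 * c[i] + c[i - 1]) / dx**2
        new[i] = c[i] + dt * (adv + disp) / R
    return new

c = [0.0] * 100
c[5] = 1.0                       # initial pulse near the inlet
for _ in range(200):
    c = chet_like_step(c, v=1.0e-2, D=1.0e-4, dx=0.1, dt=1.0)
```

Because R grows as the concentration falls (n_f below 1), dilute tails travel more slowly than the peak, which is the qualitative signature of nonlinear sorption the report discusses.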

  6. Highly conserved non-coding elements on either side of SOX9 associated with Pierre Robin sequence.

    Science.gov (United States)

    Benko, Sabina; Fantes, Judy A; Amiel, Jeanne; Kleinjan, Dirk-Jan; Thomas, Sophie; Ramsay, Jacqueline; Jamshidi, Negar; Essafi, Abdelkader; Heaney, Simon; Gordon, Christopher T; McBride, David; Golzio, Christelle; Fisher, Malcolm; Perry, Paul; Abadie, Véronique; Ayuso, Carmen; Holder-Espinasse, Muriel; Kilpatrick, Nicky; Lees, Melissa M; Picard, Arnaud; Temple, I Karen; Thomas, Paul; Vazquez, Marie-Paule; Vekemans, Michel; Roest Crollius, Hugues; Hastie, Nicholas D; Munnich, Arnold; Etchevers, Heather C; Pelet, Anna; Farlie, Peter G; Fitzpatrick, David R; Lyonnet, Stanislas

    2009-03-01

    Pierre Robin sequence (PRS) is an important subgroup of cleft palate. We report several lines of evidence for the existence of a 17q24 locus underlying PRS, including linkage analysis results, a clustering of translocation breakpoints 1.06-1.23 Mb upstream of SOX9, and microdeletions both approximately 1.5 Mb centromeric and approximately 1.5 Mb telomeric of SOX9. We have also identified a heterozygous point mutation in an evolutionarily conserved region of DNA with in vitro and in vivo features of a developmental enhancer. This enhancer is centromeric to the breakpoint cluster and maps within one of the microdeletion regions. The mutation abrogates the in vitro enhancer function and alters binding of the transcription factor MSX1 as compared to the wild-type sequence. In the developing mouse mandible, the 3-Mb region bounded by the microdeletions shows a regionally specific chromatin decompaction in cells expressing Sox9. Some cases of PRS may thus result from developmental misexpression of SOX9 due to disruption of very-long-range cis-regulatory elements.

  7. Progress on the development of a new fuel management code to simulate the movement of pebble and block type fuel elements in a very high temperature reactor core

    Energy Technology Data Exchange (ETDEWEB)

    Xhonneux, Andre, E-mail: a.xhonneux@fz-juelich.de [Forschungszentrum Jülich, 52425 Jülich (Germany); Institute for Reactor Safety and Reactor Technology, RWTH-Aachen, 52064 Aachen (Germany); Kasselmann, Stefan; Rütten, Hans-Jochem [Forschungszentrum Jülich, 52425 Jülich (Germany); Becker, Kai [Institute for Reactor Safety and Reactor Technology, RWTH-Aachen, 52064 Aachen (Germany); Allelein, Hans-Josef [Forschungszentrum Jülich, 52425 Jülich (Germany); Institute for Reactor Safety and Reactor Technology, RWTH-Aachen, 52064 Aachen (Germany)

    2014-05-01

    The history of gas-cooled high-temperature reactor prototypes in Germany is closely related to Forschungszentrum Jülich and its “Institute of Nuclear Waste Disposal and Reactor Safety (IEK-6)”. A variety of computer codes have been developed, validated and optimized to simulate the different safety and operational aspects of V/HTRs. In order to overcome the present limitations of these codes and to exploit the advantages of modern computer clusters, a project has been initiated to integrate these individual programs into a consistent V/HTR code package (VHCP) applying state-of-the-art programming techniques and standards. One important aspect in the simulation of a V/HTR is the modeling of a continuously moving pebble bed or the periodic rearrangement of prismatic block type fuel. Present models are either too coarse to take special issues (e.g. pebble piles) into account or are too detailed and therefore too time consuming to be applicable in the VHCP. The new Software for Handling Universal Fuel Elements (SHUFLE), currently under development, is well suited to close this gap. Although the code has initially been designed for pebble bed reactors, it can in principle be applied to all other types of nuclear fuel. The granularity of the mesh grid meets the requirements for considering these special issues while keeping the required computing power within reasonable limits. New features are, for example, the possibility to consider azimuthally differing flow velocities in the case of a pebble bed reactor, or individual void factors to simulate the effects of seismic events. The general idea behind this new approach to the simulation of pebble bed reactors is the following: In the preprocessing step, experimental flow lines or flow lines simulated by more detailed codes serve as an input. For each radial mesh column a representative flow line is then determined by interpolation. These representative flow lines are finally mapped to a user defined rectangular grid forming chains of meshes
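    The preprocessing step described above (one representative flow line per radial mesh column, then a mapping onto a rectangular grid of meshes) can be sketched as follows. This is an illustrative reconstruction, not SHUFLE code: the function names, the uniform resampling, and the linear interpolation between the two bracketing flow lines are all assumptions.

```python
import numpy as np

def representative_flowline(flowlines, r_column, n_points=50):
    """Interpolate a representative flow line for one radial mesh column.

    flowlines: list of (r, z) polylines (arrays of shape (k, 2)), e.g.
    measured experimentally or produced by a more detailed code.
    Assumes the lines have distinct starting radii.
    """
    # Resample every flow line onto a common parameterisation.
    resampled = []
    for line in flowlines:
        line = np.asarray(line, dtype=float)
        t = np.linspace(0.0, 1.0, n_points)
        s = np.linspace(0.0, 1.0, len(line))
        resampled.append(np.column_stack([np.interp(t, s, line[:, 0]),
                                          np.interp(t, s, line[:, 1])]))
    # Sort lines by starting radius, then blend the two bracketing lines.
    start_r = np.array([l[0, 0] for l in resampled])
    order = np.argsort(start_r)
    start_r, resampled = start_r[order], [resampled[i] for i in order]
    j = int(np.clip(np.searchsorted(start_r, r_column), 1, len(resampled) - 1))
    w = (r_column - start_r[j - 1]) / (start_r[j] - start_r[j - 1])
    return (1.0 - w) * resampled[j - 1] + w * resampled[j]

def map_to_grid(line, r_edges, z_edges):
    """Map a flow line onto a rectangular (r, z) grid: the ordered chain
    of mesh indices the line passes through, without repeats."""
    chain = []
    for r, z in line:
        i = min(int(np.searchsorted(r_edges, r, side="right")) - 1, len(r_edges) - 2)
        k = min(int(np.searchsorted(z_edges, z, side="right")) - 1, len(z_edges) - 2)
        if not chain or chain[-1] != (i, k):
            chain.append((i, k))
    return chain
```

For two vertical flow lines at r = 0.1 and r = 0.9, the representative line for a column at r = 0.5 lies midway between them, and its grid chain simply walks through a single mesh column.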

  8. Progress on the development of a new fuel management code to simulate the movement of pebble and block type fuel elements in a very high temperature reactor core

    International Nuclear Information System (INIS)

    Xhonneux, Andre; Kasselmann, Stefan; Rütten, Hans-Jochem; Becker, Kai; Allelein, Hans-Josef

    2014-01-01

    The history of gas-cooled high-temperature reactor prototypes in Germany is closely related to Forschungszentrum Jülich and its “Institute of Nuclear Waste Disposal and Reactor Safety (IEK-6)”. A variety of computer codes have been developed, validated and optimized to simulate the different safety and operational aspects of V/HTRs. In order to overcome the present limitations of these codes and to exploit the advantages of modern computer clusters, a project has been initiated to integrate these individual programs into a consistent V/HTR code package (VHCP) applying state-of-the-art programming techniques and standards. One important aspect in the simulation of a V/HTR is the modeling of a continuously moving pebble bed or the periodic rearrangement of prismatic block type fuel. Present models are either too coarse to take special issues (e.g. pebble piles) into account or are too detailed and therefore too time consuming to be applicable in the VHCP. The new Software for Handling Universal Fuel Elements (SHUFLE), currently under development, is well suited to close this gap. Although the code has initially been designed for pebble bed reactors, it can in principle be applied to all other types of nuclear fuel. The granularity of the mesh grid meets the requirements for considering these special issues while keeping the required computing power within reasonable limits. New features are, for example, the possibility to consider azimuthally differing flow velocities in the case of a pebble bed reactor, or individual void factors to simulate the effects of seismic events. The general idea behind this new approach to the simulation of pebble bed reactors is the following: In the preprocessing step, experimental flow lines or flow lines simulated by more detailed codes serve as an input. For each radial mesh column a representative flow line is then determined by interpolation. These representative flow lines are finally mapped to a user defined rectangular grid forming chains of meshes

  9. High-Fidelity RF Gun Simulations with the Parallel 3D Finite Element Particle-In-Cell Code Pic3P

    Energy Technology Data Exchange (ETDEWEB)

    Candel, A; Kabel, A.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Schussman, G.; Ko, K.; /SLAC

    2009-06-19

    SLAC's Advanced Computations Department (ACD) has developed the first parallel Finite Element 3D Particle-In-Cell (PIC) code, Pic3P, for simulations of RF guns and other space-charge dominated beam-cavity interactions. Pic3P solves the complete set of Maxwell-Lorentz equations and thus includes space charge, retardation and wakefield effects from first principles. Pic3P uses higher-order Finite Element methods on unstructured conformal meshes. A novel scheme for causal adaptive refinement and dynamic load balancing enables unprecedented simulation accuracy, aiding the design and operation of the next generation of accelerator facilities. Application to the Linac Coherent Light Source (LCLS) RF gun is presented.
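    The particle-mesh coupling at the heart of any PIC code can be illustrated with the lowest-order 1D analogue of the finite-element shape functions Pic3P employs: linear (hat-function) charge deposition onto a uniform mesh. A minimal sketch under those assumptions, not Pic3P's higher-order unstructured-mesh scheme:

```python
import numpy as np

def deposit_charge(positions, charges, n_nodes, dx):
    """Deposit particle charge onto mesh nodes with linear (hat-function)
    weighting, giving a nodal charge density rho.

    Assumes a uniform 1D mesh with node spacing dx and
    0 <= x < (n_nodes - 1) * dx for every particle position x.
    """
    rho = np.zeros(n_nodes)
    for x, q in zip(positions, charges):
        j = int(x // dx)              # left node of the hosting cell
        w = x / dx - j                # fractional position within the cell
        rho[j]     += q * (1.0 - w) / dx
        rho[j + 1] += q * w / dx
    return rho
```

Linear weighting conserves charge exactly: integrating rho over the mesh (sum times dx) recovers the total deposited charge.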

  10. VISCOT: a two-dimensional and axisymmetric nonlinear transient thermoviscoelastic and thermoviscoplastic finite-element code for modeling time-dependent viscous mechanical behavior of a rock mass

    International Nuclear Information System (INIS)

    1983-04-01

    VISCOT is a non-linear, transient, thermal-stress finite-element code designed to determine the viscoelastic, viscoplastic, or elastoplastic deformation of a rock mass due to mechanical and thermal loading. The numerical solution of the nonlinear incremental equilibrium equations within VISCOT is performed by using an explicit Euler time-stepping scheme. The rock mass may be modeled as a viscoplastic or viscoelastic material. The viscoplastic material model can be described by a Tresca, von Mises, Drucker-Prager or Mohr-Coulomb yield criterion (with or without strain hardening) with an associated flow rule, which can be a power or an exponential law. The viscoelastic material model within VISCOT is a temperature- and stress-dependent law which has been developed specifically for salt rock masses by Pfeifle, Mellegard and Senseny in the ONWI-314 topical report (1981). Site-specific parameters for this creep law at the Richton, Permian, Paradox and Vacherie salt sites have been calculated and are given in the ONWI-314 topical report (1981). A major application of VISCOT (in conjunction with a SCEPTER heat transfer code such as DOT) is the thermomechanical analysis of a rock mass such as salt in which significant time-dependent nonlinear deformations are expected to occur. Such problems include room- and canister-scale studies during the excavation, operation, and long-term post-closure stages in a salt repository. In Section 1.5 of this document the code custodianship and control is described along with the status of verification, validation and peer review of this report
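    The explicit Euler time-stepping named above can be illustrated on the simplest viscoelastic model. The sketch below integrates 1-D Maxwell stress relaxation at fixed total strain; VISCOT's actual material law is the temperature- and stress-dependent salt creep model of ONWI-314, so this is only a structural analogue with assumed names.

```python
def relax_stress(sigma0, E, eta, dt, n_steps):
    """Explicit Euler integration of 1-D Maxwell viscoelastic relaxation
    at fixed total strain:  d(sigma)/dt = -(E / eta) * sigma.

    sigma0: initial stress, E: elastic modulus, eta: viscosity.
    Returns the stress history over n_steps increments of size dt.
    """
    sigma = sigma0
    history = [sigma]
    for _ in range(n_steps):
        sigma += dt * (-(E / eta) * sigma)   # explicit Euler increment
        history.append(sigma)
    return history
```

The explicit scheme is only conditionally stable: for this model the step size must stay below 2*eta/E, which is why explicit codes of this kind must limit their time (load) steps.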

  11. Software Architecture Reconstruction Method, a Survey

    OpenAIRE

    Zainab Nayyar; Nazish Rafique

    2014-01-01

    Architecture reconstruction belongs to the reverse engineering process, in which we move from the code to the architecture level in order to reconstruct the architecture. Software architectures are the blueprints of projects, which depict the external overview of the software system. Mostly maintenance and testing cause the software to deviate from its original architecture, because sometimes, for enhancing the functionality of a system, the software deviates from its documented specifications; some new modules a...

  12. Generic programming for deterministic neutron transport codes

    International Nuclear Information System (INIS)

    Plagne, L.; Poncot, A.

    2005-01-01

    This paper discusses the implementation of neutron transport codes via generic programming techniques. Two different Boltzmann equation approximations have been implemented, namely the Sn and SPn methods. This implementation experiment shows that generic programming allows us to improve the maintainability and readability of source codes with no performance penalties compared to classical approaches. In the present implementation, matrices and vectors as well as linear algebra algorithms are treated separately from the rest of the source code and gathered in a tool library called 'Generic Linear Algebra Solver System' (GLASS). Such a code architecture, based on a linear algebra library, allows us to separate the three different scientific fields involved in transport code design: numerical analysis, reactor physics and computer science. Our library handles matrices with optional storage policies and thus applies both to the Sn code, where the matrix elements are computed on the fly, and to the SPn code, where stored matrices are used. Thus, using GLASS allows us to share a large fraction of source code between the Sn and SPn implementations. Moreover, GLASS's high level of abstraction allows the writing of numerical algorithms in a form which is very close to their textbook descriptions. Hence the GLASS algorithms collection, disconnected from computer science considerations (e.g. storage policy), is very easy to read, to maintain and to extend. (authors)
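    The storage-policy idea (one algorithm serving both an Sn-style solver that computes matrix elements on the fly and an SPn-style solver that stores them) can be sketched in Python with duck typing. GLASS itself is a C++ generic-programming library, so all class and function names below are illustrative assumptions, not its API.

```python
import numpy as np

class StoredMatrix:
    """Storage policy for an SPn-style solver: entries kept in memory."""
    def __init__(self, a):
        self.a = np.asarray(a, dtype=float)
    def matvec(self, x):
        return self.a @ x

class OnTheFlyMatrix:
    """Storage policy for an Sn-style solver: entries computed on demand
    from a closure entry(i, j); nothing is ever assembled."""
    def __init__(self, n, entry):
        self.n, self.entry = n, entry
    def matvec(self, x):
        return np.array([sum(self.entry(i, j) * x[j] for j in range(self.n))
                         for i in range(self.n)])

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Textbook conjugate gradients written only against the matvec
    interface, so the same source is shared by both storage policies."""
    x = np.zeros_like(b, dtype=float)
    r = b - A.matvec(x)
    p = r.copy()
    rr = r @ r
    for _ in range(max_iter):
        Ap = A.matvec(p)
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if rr_new < tol ** 2:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x
```

Both policies plug into the same solver unchanged, which is the point of the paper's architecture: the algorithm reads like its textbook description and is independent of how the matrix is stored.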

  13. Product Architecture Modularity Strategies

    DEFF Research Database (Denmark)

    Mikkola, Juliana Hsuan

    2003-01-01

    The focus of this paper is to integrate various perspectives on product architecture modularity into a general framework, and also to propose a way to measure the degree of modularization embedded in product architectures. Various trade-offs between modular and integral product architectures...... and how components and interfaces influence the degree of modularization are considered. In order to gain a better understanding of product architecture modularity as a strategy, a theoretical framework and propositions are drawn from various academic literature sources. Based on the literature review......, the following key elements of product architecture are identified: components (standard and new-to-the-firm), interfaces (standardization and specification), degree of coupling, and substitutability. A mathematical function, termed modularization function, is introduced to measure the degree of modularization...

  14. Synthesis and characterization of f-element iodate architectures with variable dimensionality, alpha- and beta-Am(IO3)3.

    Science.gov (United States)

    Runde, Wolfgang; Bean, Amanda C; Brodnax, Lia F; Scott, Brian L

    2006-03-20

    Two americium(III) iodates, beta-Am(IO3)3 (I) and alpha-Am(IO3)3 (II), have been prepared from the aqueous reactions of Am(III) with KIO4 at 180 °C and have been characterized by single-crystal X-ray diffraction, diffuse reflectance, and Raman spectroscopy. The alpha-form is consistent with the known structure type I of anhydrous lanthanide iodates. It consists of a three-dimensional network of pyramidal iodate groups bridging [AmO8] polyhedra where each of the americium ions is coordinated to eight iodate ligands. The beta-form reveals a novel architecture that is unknown within the f-element iodate series. beta-Am(IO3)3 exhibits a two-dimensional layered structure with nine-coordinate Am(III) atoms. Three crystallographically unique pyramidal iodate anions link the Am atoms into corrugated sheets that interact with one another through intermolecular IO3-...IO3- interactions forming dimeric I2O10 units. One of these anions utilizes all three O atoms to simultaneously bridge three Am atoms. The other two iodate ligands bridge only two Am atoms and have one terminal O atom. In contrast to alpha-Am(IO3)3, where the [IO3] ligands are solely corner-sharing with [AmO8] polyhedra, a complex arrangement of corner- and edge-sharing mu2- and mu3-[IO3] pyramids can be found in beta-Am(IO3)3. Crystallographic data: I, monoclinic, space group P2(1)/n, a = 8.871(3) Å, b = 5.933(2) Å, c = 15.315(4) Å, beta = 96.948(4)°, V = 800.1(4) Å³, Z = 4; II, monoclinic, space group P2(1)/c, a = 7.243(2) Å, b = 8.538(3) Å, c = 13.513(5) Å, beta = 100.123(6)°, V = 822.7(5) Å³, Z = 4.

  15. Essential software architecture

    CERN Document Server

    Gorton, Ian

    2011-01-01

    Job titles like "Technical Architect" and "Chief Architect" nowadays abound in the software industry, yet many people suspect that "architecture" is one of the most overused and least understood terms in professional software development. Gorton's book tries to resolve this dilemma. It concisely describes the essential elements of knowledge and key skills required to be a software architect. The explanations encompass the essentials of architecture thinking, practices, and supporting technologies. They range from a general understanding of structure and quality attributes through technical i

  16. Dynamic Weather Routes Architecture Overview

    Science.gov (United States)

    Eslami, Hassan; Eshow, Michelle

    2014-01-01

    Dynamic Weather Routes Architecture Overview presents the high-level software architecture of DWR, based on the CTAS software framework and the Direct-To automation tool. The document also covers external and internal data flows, required datasets, changes to the Direct-To software for DWR, collection of software statistics, and the code structure.

  17. The Influence of Building Codes on Recreation Facility Design.

    Science.gov (United States)

    Morrison, Thomas A.

    1989-01-01

    Implications of building codes upon design and construction of recreation facilities are investigated (national building codes, recreation facility standards, and misperceptions of design requirements). Recreation professionals can influence architectural designers to correct past deficiencies, but they must understand architectural and…

  18. Architectural Prototyping in Industrial Practice

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2008-01-01

    Architectural prototyping is the process of using executable code to investigate stakeholders’ software architecture concerns with respect to a system under development. Previous work has established this as a useful and cost-effective way of exploration and learning of the design space of a system......, in addressing issues regarding quality attributes, in addressing architectural risks, and in addressing the problem of knowledge transfer and conformance. Little work has been reported so far on the actual industrial use of architectural prototyping. In this paper, we report from an ethnographical study...... and focus group involving architects from four companies in which we have focused on architectural prototypes. Our findings conclude that architectural prototypes play an important role in resolving problems experimentally, but less so in exploring alternative solutions. Furthermore, architectural...

  19. Validation of finite element code DELFIN by means of the zero power experiences at the nuclear power plant of Atucha I; Convalidacion del codigo DELFIN por medio de las experiencias a potencia cero de la central nuclear Atucha I

    Energy Technology Data Exchange (ETDEWEB)

    Grant, C R [Comision Nacional de Energia Atomica, San Martin (Argentina). Unidad de Actividad Reactores y Centrales Nucleares

    1997-12-31

    Code DELFIN, developed in CNEA, treats the spatial discretization using heterogeneous finite elements, allowing a correct treatment of the continuity of fluxes and currents among elements and a more realistic representation of the hexagonal lattice of the reactor. It can be used for fuel management calculation, Xenon oscillation and spatial kinetics. Using the HUEMUL code for cell calculation (which uses a generalized two dimensional collision probability theory and has the WIMS library incorporated in a data base), the zero power experiences performed in 1974 were calculated. (author). 8 refs., 9 figs., 3 tabs.

  20. Use of environmental isotopes in studying surface and groundwaters in the Upper Orontes basin: A case study of modeling elements and pollutants transport using the code PHREEQM

    International Nuclear Information System (INIS)

    Kattan, Z.

    2001-06-01

    This report evaluates the chemical and isotopic characteristics of surface water and groundwater in the upper Orontes basin, together with a study of the precipitation behavior at the Bloudan, Homs and Tartous stations. It also presents the results obtained so far through the application of the geochemical code PHREEQM to studying element and pollutant transport in the groundwater of this basin. The results show that the rainfall chemistry had a moderate dissolved content, accompanied by low pH values and high sulfate contents as a result of domestic and industrial pollution. The altitude effect shows up as a depletion of heavy stable isotopes of about -0.18% and -1.39% per 100 m of elevation for δ18O and δD, respectively. Surface water in the Orontes River, up to Qattineh Lake, was characterized by a low solute content, high pH values (higher than 8), a high dissolved oxygen content, depleted concentrations of heavy stable isotopes, and natural mineralization in 15N and organic pollutants (N and P). On the contrary, the water of this river was more saline and more enriched in organic pollutants such as nitrogen and phosphorus after leaving Qattineh Lake. The river water was then also characterized by low pH values and low concentrations of dissolved oxygen, as a consequence of organic matter oxidation. The depleted concentration of heavy stable isotopes in the Cenomanian-Turonian aquifer system reveals that the altitude of the recharge zone is rather higher than 1000 m, which corresponds to an exposure of these rocks in Lebanon; the altitude of the recharge zones for the continental and volcanic Pliocene aquifers is not lower than 500 m. The mean turnover time (residence time) of groundwater in the Cenomanian-Turonian aquifer was evaluated to be about 40-50 years. On the basis of this evaluation, a value of about 0.8 billion cubic meters was obtained for the maximum groundwater reservoir size. The results of geochemical modeling of elements and

  1. Robotic architectures

    CSIR Research Space (South Africa)

    Mtshali, M

    2010-01-01

    Full Text Available In the development of mobile robotic systems, a robotic architecture plays a crucial role in interconnecting all the sub-systems and controlling the system. The design of robotic architectures for mobile autonomous robots is a challenging...

  2. Evolution of the Petasis-Ferrier union/rearrangement tactic: construction of architecturally complex natural products possessing the ubiquitous cis-2,6-substituted tetrahydropyran structural element.

    Science.gov (United States)

    Smith, Amos B; Fox, Richard J; Razler, Thomas M

    2008-05-01

    The frequent low abundance of architecturally complex natural products possessing significant bioregulatory properties mandates the development of rapid, efficient, and stereocontrolled synthetic tactics, not only to provide access to the biologically rare target but also to enable elaboration of analogues for the development of new therapeutic agents with improved activities and/or pharmacokinetic properties. In this Account, the genesis and evolution of the Petasis-Ferrier union/rearrangement tactic, in the context of natural product total syntheses, is described. The reaction sequence comprises a powerful tactic for the construction of the 2,6-cis-substituted tetrahydropyran ring system, a ubiquitous structural element often found in complex natural products possessing significant bioactivities. The three-step sequence, developed in our laboratory, extends two independent methods introduced by Ferrier and Petasis and now comprises: condensation between a chiral, nonracemic beta-hydroxy acid and an aldehyde to furnish a dioxanone; carbonyl olefination; and Lewis-acid-induced rearrangement of the resultant enol acetal to generate the 2,6-cis-substituted tetrahydropyranone system in a highly stereocontrolled fashion. To demonstrate the envisioned versatility and robustness of the Petasis-Ferrier union/rearrangement tactic in complex molecule synthesis, we exploited the method as the cornerstone in our now successful total syntheses of (+)-phorboxazole A, (+)-zampanolide, (+)-dactylolide, (+)-spongistatins 1 and 2, (-)-kendomycin, (-)-clavosolide A, and most recently, (-)-okilactomycin. Although each target comprises a number of synthetic challenges, this Account focuses on the motivation, excitement, and frustrations associated with the evolution and implementation of the Petasis-Ferrier union/rearrangement tactic. For example, during our (+)-phorboxazole A endeavor, we recognized and exploited the inherent pseudosymmetry of the 2,6-cis

  3. Architecture & Environment

    Science.gov (United States)

    Erickson, Mary; Delahunt, Michael

    2010-01-01

    Most art teachers would agree that architecture is an important form of visual art, but they do not always include it in their curriculums. In this article, the authors share core ideas from "Architecture and Environment," a teaching resource that they developed out of a long-term interest in teaching architecture and their fascination with the…

  4. Numerical study of effects of the beam tube on laser fields with a three-dimensional simulation code using the finite element method

    CERN Document Server

    Sobajima, M; Yamazaki, T; Yoshikawa, K; Ohnishi, M; Toku, H; Masuda, K; Kitagaki, J; Nakamura, T

    1999-01-01

    In January 1997, the Beijing FEL observed large laser amplification at 8-18 μm. However, through the collaborative work, it was found from both experiments and numerical simulations that the laser loss on the beam tube wall was not negligible, and that saturation was not seen in the relatively long wavelength range because of this loss. This calls for further investigation of the effects of a beam tube of finite size. In order to include such effects self-consistently, we have developed a new three-dimensional code that can solve the equations with the boundary conditions of the beam tube by using the Finite Element Method. Results show that the beam tube effects are dominant in deriving higher laser modes in the tube, compared with the optical guiding effects, and consequently reduce the gain especially in the longer wavelength range, where the beam tube effects are greatly emphasized. It is also found that the TEM02 mode is the most dominant higher mode in the beam tube, and is also the main cause of...

  5. Minimalism in architecture: Architecture as a language of its identity

    Directory of Open Access Journals (Sweden)

    Vasilski Dragana

    2012-01-01

    Full Text Available Every architectural work is created on a principle that includes its meaning, and the work is then read as an artifact of that particular meaning. The resources by which meaning is primarily built, susceptible to transformation, as well as the routing of understanding (the decoding of messages carried by a work of architecture), are the subject of semiotics and communication theories, which have played a significant role for architecture and the architect. Minimalism in architecture, as a paradigm of XXI century architecture, means searching for the essence located in the irreducible minimum. The inspired use of architectural units (archetypal elements), through the phantasm of simplicity, assumes the primary responsibility for providing the object's identity, because it participates in the formation of its language and therefore in its reading. Volume is formed by a clean language that builds the expression of fluid areas liberated from recharge needs. A reduced architectural language is appropriate to an age marked by electronic communications.

  6. Low-Level Space Optimization of an AES Implementation for a Bit-Serial Fully Pipelined Architecture

    Science.gov (United States)

    Weber, Raphael; Rettberg, Achim

    A previously developed AES (Advanced Encryption Standard) implementation is optimized and described in this paper. The special architecture for which this implementation is targeted comprises synchronous and systematic bit-serial processing without a central controlling instance. In order to shrink the design in terms of logic utilization we deeply analyzed the architecture and the AES implementation to identify the most costly logic elements. We propose to merge certain parts of the logic to achieve better area efficiency. The approach was integrated into an existing synthesis tool which we used to produce synthesizable VHDL code. For testing purposes, we simulated the generated VHDL code and ran tests on an FPGA board.

  7. Enterprise Architecture Analysis with XML

    OpenAIRE

    Boer, Frank; Bonsangue, Marcello; Jacob, Joost; Stam, A.; Torre, Leon

    2005-01-01

    This paper shows how XML can be used for static and dynamic analysis of architectures. Our analysis is based on the distinction between symbolic and semantic models of architectures. The core of a symbolic model consists of its signature that specifies symbolically its structural elements and their relationships. A semantic model is defined as a formal interpretation of the symbolic model. This provides a formal approach to the design of architectural description languages and a g...

  8. Iraqi architecture in mogul period

    Directory of Open Access Journals (Sweden)

    Hasan Shatha

    2018-01-01

    Full Text Available Iraqi architecture has passed through many periods up to now, each of which has its own architectural style; over time these styles have also interacted, creating kinds of space forming, space relationships, and architectural elements (detailed treatments). The research problem arises from the multiple interacting architectural styles, which blur the general characteristics by which each style could be distinguished. The research studies the architectural style of the period of the Mogul conquest of Baghdad. Its aim is to trace the main characteristics of the architectural style of the Mogul period at the level of form, elements, and treatments. The research depends on describing and analyzing all buildings belonging to this period: by analyzing their style through the general form of the building, its architectural elements, and its architectural treatments, and repeating this procedure for every building, we obtain similarities, and from these similarities we can draw conclusions about the pure characteristics of the style of this period. On the other side, we also discover some dissimilarities among the buildings of the period, which lead the research to identify the interaction among styles in this period. After all that, we can clearly draw the main characteristics of the architectural style of the Mogul conquest in Baghdad.

  9. Enterprise Architecture Evaluation

    DEFF Research Database (Denmark)

    Andersen, Peter; Carugati, Andrea

    2014-01-01

    By being holistically preoccupied with coherency among organizational elements such as organizational strategy, business needs and the IT functions role in supporting the business, enterprise architecture (EA) has grown to become a core competitive advantage. Though EA is a maturing research area...

  10. National Positioning, Navigation, and Timing Architecture

    National Research Council Canada - National Science Library

    Huested, Patrick; Popejoy, Paul D

    2008-01-01

    .... The strategy is supported by vectors, or enterprise architecture elements, for using multiple PNT-related phenomenologies and interchangeable PNT solutions, PNT and Communications synergy, and co...

  11. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  12. Architecture Descriptions. A Contribution to Modeling of Production System Architecture

    DEFF Research Database (Denmark)

    Jepsen, Allan Dam; Hvam, Lars

    a proper understanding of the architecture phenomenon and the ability to describe it in a manner that allow the architecture to be communicated to and handled by stakeholders throughout the company. Despite the existence of several design philosophies in production system design such as Lean, that focus...... a diverse set of stakeholder domains and tools in the production system life cycle. To support such activities, a contribution is made to the identification and referencing of production system elements within architecture descriptions as part of the reference architecture framework. The contribution...

  13. WARP3D-Release 10.8: Dynamic Nonlinear Analysis of Solids using a Preconditioned Conjugate Gradient Software Architecture

    Science.gov (United States)

    Koppenhoefer, Kyle C.; Gullerud, Arne S.; Ruggieri, Claudio; Dodds, Robert H., Jr.; Healy, Brian E.

    1998-01-01

    This report describes theoretical background material and commands necessary to use the WARP3D finite element code. WARP3D is under continuing development as a research code for the solution of very large-scale, 3-D solid models subjected to static and dynamic loads. Specific features in the code oriented toward the investigation of ductile fracture in metals include a robust finite strain formulation, a general J-integral computation facility (with inertia, face loading), an element extinction facility to model crack growth, nonlinear material models including viscoplastic effects, and the Gurson-Tvergaard dilatant plasticity model for void growth. The nonlinear, dynamic equilibrium equations are solved using an incremental-iterative, implicit formulation with full Newton iterations to eliminate residual nodal forces. The history integration of the nonlinear equations of motion is accomplished with Newmark's Beta method. A central feature of WARP3D involves the use of a linear-preconditioned conjugate gradient (LPCG) solver implemented in an element-by-element format to replace a conventional direct linear equation solver. This software architecture dramatically reduces both the memory requirements and CPU time for very large, nonlinear solid models, since formation of the assembled (dynamic) stiffness matrix is avoided. Analyses thus exhibit the numerical stability for large time (load) steps provided by the implicit formulation, coupled with the low memory requirements characteristic of an explicit code. In addition to the much lower memory requirements of the LPCG solver, the CPU time required for solution of the linear equations during each Newton iteration is generally one-half or less of the CPU time required for a traditional direct solver. All other computational aspects of the code (element stiffnesses, element strains, stress updating, element internal forces) are implemented in the element-by-element, blocked architecture. This greatly improves
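    The element-by-element idea (solving K x = b without ever assembling the global stiffness matrix K) can be sketched as a matrix-free, Jacobi-preconditioned conjugate gradient over per-element stiffness contributions. This is a minimal illustration with assumed names, not the WARP3D solver, which uses a more elaborate preconditioner and blocked data structures.

```python
import numpy as np

def ebe_matvec(elements, x):
    """Element-by-element product K @ x: each (dofs, ke) pair contributes
    ke @ x[dofs], so the global stiffness K is never formed."""
    y = np.zeros_like(x)
    for dofs, ke in elements:
        y[dofs] += ke @ x[dofs]
    return y

def ebe_diagonal(elements, n):
    """Jacobi (diagonal) preconditioner, also assembled element-by-element."""
    d = np.zeros(n)
    for dofs, ke in elements:
        d[dofs] += np.diag(ke)
    return d

def lpcg(elements, b, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients in matrix-free,
    element-by-element form (assumes the assembled K is SPD)."""
    n = len(b)
    d = ebe_diagonal(elements, n)
    x = np.zeros(n)
    r = b - ebe_matvec(elements, x)
    z = r / d
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = ebe_matvec(elements, p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = r / d
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Only per-element arrays and a few global vectors are held in memory, which is exactly the trade-off the abstract describes: implicit-method stability with explicit-code storage.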

  14. A surface code quantum computer in silicon

    Science.gov (United States)

    Hill, Charles D.; Peretz, Eldad; Hile, Samuel J.; House, Matthew G.; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y.; Hollenberg, Lloyd C. L.

    2015-01-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel—posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310

  15. A surface code quantum computer in silicon.

    Science.gov (United States)

    Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L

    2015-10-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel-posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited.
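    The core mechanism the architecture implements (measuring parity checks and decoding the resulting syndrome without reading the data directly) can be illustrated with a far simpler classical cousin, the distance-3 repetition code. This toy stand-in is not the surface code itself, but it shows how a syndrome locates an error:

```python
import numpy as np

# Distance-3 repetition code: stabilizers Z1Z2 and Z2Z3 become
# classical parity checks over GF(2).
H = np.array([[1, 1, 0],
              [0, 1, 1]])            # parity-check matrix

def syndrome(errors):
    """Parity-check outcomes for a given bit-flip pattern."""
    return tuple(H @ errors % 2)

# Look-up decoder: every single-bit error has a unique syndrome.
table = {syndrome(e): e for e in np.eye(3, dtype=int)}
table[(0, 0)] = np.zeros(3, dtype=int)   # trivial syndrome: do nothing

e = np.array([0, 1, 0])                  # a single flip on the middle bit
correction = table[syndrome(e)]
corrected = (e + correction) % 2         # applying the correction clears it
```

    The surface code does the analogous thing in two dimensions, with the shared-control lines of the proposed architecture activating the parity measurements in parallel across the whole lattice.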

  16. An architectural analysis of the elongation of field-grown sunflower root systems. Elements for modelling the effects of temperature and intercepted radiation

    International Nuclear Information System (INIS)

    Aguirrezabal, L.A.N.; Tardieu, F.

    1996-01-01

    The effects of photosynthetic photon flux density (PPFD) and soil temperature on root system elongation rate have been analysed by using an architectural framework. Root elongation rate was analysed by considering three terms, (i) the branch appearance rate, (ii) the individual elongation rates of the taproot and branches and (iii) the proportion of branches which stop elongating. Large ranges of PPFD and soil temperature were obtained in a series of field and growth chamber experiments. In the field, the growth of root systems experiencing day-to-day natural fluctuation of PPFD and temperature was followed, and some of the plants under study were shaded. In the growth chamber, plants experienced contrasting and constant PPFDs and root temperatures. The direct effect of apex temperature on individual root elongation rate was surprisingly low in the range 13–25°C, except for the first days after germination. Root elongation rate was essentially related to intercepted PPFD and to distance to the source, both in the field and in the growth chamber. Branch appearance rate substantially varied among days and environmental conditions. It was essentially linked to taproot elongation rate, as the profile of branch density along the taproot was quite stable. The length of the taproot segment carrying newly appeared branches on a given day was equal to taproot elongation on this day, plus a 'buffering term' which transiently increased if taproot elongation rate slowed down. The proportion of branches which stopped elongating a short distance from the taproot ranged from 50–80% and was, therefore, a major architectural variable, although it is not taken into account in current architectural models. A set of equations accounting for the variabilities in elongation rate, branch appearance rate and proportion of branches which stop elongating, as a function of intercepted PPFD and apex temperature is proposed. These equations apply for both field and growth
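    The proposed set of equations relates elongation rate to intercepted PPFD and apex temperature. The toy function below only illustrates the general shape of such a model; the functional forms and coefficients are placeholders, not the fitted relations from these experiments.

```python
# Illustrative placeholder model, not the paper's fitted equations:
# elongation driven mainly by intercepted PPFD, weakly modulated by
# apex temperature, consistent with the reported low temperature effect.
def taproot_elongation_rate(ppfd_intercepted, apex_temp_c,
                            k_light=0.004, t_base=4.0, k_temp=0.02):
    """Toy rate (cm/day) from intercepted PPFD (mol/day) and apex
    temperature (degC); all parameters are hypothetical."""
    light_term = k_light * ppfd_intercepted
    temp_term = 1.0 + k_temp * (apex_temp_c - t_base)
    return max(light_term * temp_term, 0.0)

# In a model of this shape, doubling intercepted light doubles the
# elongation rate, while temperature changes it only modestly.
low = taproot_elongation_rate(200.0, 13.0)
high = taproot_elongation_rate(400.0, 13.0)
```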

  17. Architectural Contestation

    NARCIS (Netherlands)

    Merle, J.

    2012-01-01

    This dissertation addresses the reductive reading of Georges Bataille's work done within the field of architectural criticism and theory which tends to set aside the fundamental ‘broken’ totality of Bataille's oeuvre and also to narrowly interpret it as a mere critique of architectural form,

  18. Architecture Sustainability

    NARCIS (Netherlands)

    Avgeriou, Paris; Stal, Michael; Hilliard, Rich

    2013-01-01

    Software architecture is the foundation of software system development, encompassing a system's architects' and stakeholders' strategic decisions. A special issue of IEEE Software is intended to raise awareness of architecture sustainability issues and increase interest and work in the area. The

  19. Memory architecture

    NARCIS (Netherlands)

    2012-01-01

    A memory architecture is presented. The memory architecture comprises a first memory and a second memory. The first memory has at least a bank with a first width addressable by a single address. The second memory has a plurality of banks of a second width, said banks being addressable by components

  20. Architectural Narratives

    DEFF Research Database (Denmark)

    Kiib, Hans

    2010-01-01

    a functional framework for these concepts, but tries increasingly to endow the main idea of the cultural project with a spatially aesthetic expression - a shift towards “experience architecture.” A great number of these projects typically recycle and reinterpret narratives related to historical buildings......In this essay, I focus on the combination of programs and the architecture of cultural projects that have emerged within the last few years. These projects are characterized as “hybrid cultural projects,” because they intend to combine experience with entertainment, play, and learning. This essay...... and architectural heritage; another group tries to embed new performative technologies in expressive architectural representation. Finally, this essay provides a theoretical framework for the analysis of the political rationales of these projects and for the architectural representation bridges the gap between...

  1. Minimalism in architecture: Abstract conceptualization of architecture

    Directory of Open Access Journals (Sweden)

    Vasilski Dragana

    2015-01-01

    Full Text Available Minimalism in architecture contains the idea of the minimum as a leading creative tendency, to be considered and interpreted in working through the phenomena of empathy and abstraction. In Western culture, the root of this idea is found in the empathy of Wilhelm Worringer and the abstraction of Kasimir Malevich. In his dissertation 'Abstraction and Empathy', Worringer presented his thesis on the psychology of style, through which he explained the two opposing basic forms: abstraction and empathy. His conclusion on empathy as a psychological basis of observational expression is significant due to its verbal congruence with contemporary minimalist expression. His intuition was enhanced further by the figure of Malevich. Abstraction, as an expression of inner unfettered inspiration, has played a crucial role in the development of modern art and architecture of the twentieth century. Abstraction, which is one of the basic methods of learning in psychology (separating relevant from irrelevant features, after Carl Jung), is used to discover ideas. Minimalism in architecture emphasizes the level of abstraction to which the individual functions are reduced. Different types of abstraction are present: in the form as well as the function of the basic elements: walls and windows. The case study is an example of Sou Fujimoto, who is unequivocal in his commitment to the autonomy of abstract conceptualization of architecture.

  2. Layered architecture for quantum computing

    OpenAIRE

    Jones, N. Cody; Van Meter, Rodney; Fowler, Austin G.; McMahon, Peter L.; Kim, Jungsang; Ladd, Thaddeus D.; Yamamoto, Yoshihisa

    2010-01-01

    We develop a layered quantum-computer architecture, which is a systematic framework for tackling the individual challenges of developing a quantum computer while constructing a cohesive device design. We discuss many of the prominent techniques for implementing circuit-model quantum computing and introduce several new methods, with an emphasis on employing surface-code quantum error correction. In doing so, we propose a new quantum-computer architecture based on optical control of quantum dot...

  3. Une approche de coloriage d’arrêtes pour la conception d’architectures parallèles d’entrelaceurs matériels

    OpenAIRE

    Awais Hussein , Sani

    2012-01-01

    Nowadays, Turbo and LDPC codes are two families of codes that are extensively used in current communication standards due to their excellent error correction capabilities. However, hardware design of coders and decoders for high data rate applications is not a straightforward process. For high data rates, decoders are implemented on parallel architectures in which more than one processing elements decode the received data. To achieve high memory bandwidth, the main memory is divided into smal...

  4. Huffman coding in advanced audio coding standard

    Science.gov (United States)

    Brzuchalski, Grzegorz

    2012-05-01

    This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations and a working implementation. Much attention has been paid to optimising the demand for hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
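    As a software counterpart to the hardware encoder above, a minimal Huffman code builder is sketched below. Note that the AAC standard actually uses fixed, pre-computed codebooks, so this ad-hoc tree construction only illustrates the underlying noiseless-coding principle.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Map each symbol to a prefix-free bit string based on frequency."""
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in
            enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    tie = len(heap)                      # unique tie-breaker for equal freqs
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

code = huffman_code("abracadabra")
bits = "".join(code[s] for s in "abracadabra")
```

    The more frequent a symbol, the shorter its code; here 'a' (5 of 11 occurrences) receives the shortest codeword, and the whole string compresses to 23 bits.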

  5. Islamic Architecture and Arch

    Directory of Open Access Journals (Sweden)

    Mohammed Mahbubur Rahman

    2015-01-01

    Full Text Available The arch, an essential architectural element since the early civilizations, permitted the construction of lighter walls and vaults, often covering a large span. Visually it was an important decorative feature that was transmitted from architectural decoration to other forms of art worldwide. In the early Islamic period, Muslims were receiving from many civilizations, which they improved and re-introduced to bring about the Renaissance. Arches appeared in the Mesopotamian, Indus, Egyptian, Babylonian, Greek and Assyrian civilizations; but the Romans applied the technique to a wide range of structures. The Muslims mastered the use and design of the arch, employed for structural and functional purposes, progressively meeting decorative and symbolic purposes. Islamic architecture is characterized by arches employed in all types of buildings; the most common use being in arcades. This paper discusses the process of assimilation and charts how they contributed to other civilizations.

  6. System architectures for telerobotic research

    Science.gov (United States)

    Harrison, F. Wallace

    1989-01-01

    Several activities are performed related to the definition and creation of telerobotic systems. The effort and investment required to create architectures for these complex systems can be enormous; however, the magnitude of the process can be reduced if structured design techniques are applied. A number of informal methodologies supporting certain aspects of the design process are available. More recently, prototypes of integrated tools supporting all phases of system design, from requirements analysis to code generation and hardware layout, have begun to appear. Activities related to the system architecture of telerobots are described, including current activities which are designed to provide a methodology for the comparison and quantitative analysis of alternative system architectures.

  7. The DACC system. Code burnup of cell for projection of the fuel elements in the power net work PWR and BWR

    International Nuclear Information System (INIS)

    Cepraga, D.; Boeriu, St.; Gheorghiu, E.; Cristian, I.; Patrulescu, I.; Cimporescu, D.; Ciuvica, P.; Velciu, E.

    1975-01-01

    The calculation system DACC-5 is a zero-dimensional reactor physics code used to calculate the criticality and burn-up of light-water reactors. The code requires as input essentially extensive reactor parameters (fuel rod radius, water density, etc.). The nuclear constants (intensive parameters) are calculated with a five-group model (2 thermal and 3 fast groups). A fitting procedure is systematically employed to reduce the computation time of the code. Zero-dimensional burn-up calculations are made in an automatic way. Part one of the paper contains the code's physical model and computational structure. Part two of the paper will contain tests of DACC-5 credibility for different light-water power lattices
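    A zero-dimensional burn-up calculation of the kind DACC-5 automates reduces, per nuclide, to integrating dN/dt = -σφN over each time step. The single-nuclide sketch below uses illustrative values, not DACC-5 data, and checks the stepped solution against the exact exponential.

```python
import math

# Toy zero-dimensional burn-up step: deplete one fissile nuclide under
# constant flux. All values are illustrative placeholders.
sigma_a = 600e-24   # absorption cross-section, cm^2
phi = 1e14          # neutron flux, n/cm^2/s
N0 = 1e21           # initial nuclide density, 1/cm^3
dt = 3600.0         # one-hour burn-up step
steps = 24          # one day of burn-up

N = N0
for _ in range(steps):                 # explicit Euler with sub-stepping
    for _ in range(100):
        N -= sigma_a * phi * N * (dt / 100)

# Exact solution of dN/dt = -sigma*phi*N for comparison.
exact = N0 * math.exp(-sigma_a * phi * steps * dt)
```

    Because σφΔt is tiny per sub-step, the stepped depletion tracks the analytical exponential to well below 0.1%.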

  8. Architectural technology

    DEFF Research Database (Denmark)

    2005-01-01

    The booklet offers an overall introduction to the Institute of Architectural Technology and its projects and activities, and an invitation to the reader to contact the institute or the individual researcher for further information. The research, which takes place at the Institute of Architectural...... Technology at the Roayl Danish Academy of Fine Arts, School of Architecture, reflects a spread between strategic, goal-oriented pilot projects, commissioned by a ministry, a fund or a private company, and on the other hand projects which originate from strong personal interests and enthusiasm of individual...

  9. Humanizing Architecture

    DEFF Research Database (Denmark)

    Toft, Tanya Søndergaard

    2015-01-01

    The article proposes the urban digital gallery as an opportunity to explore the relationship between ‘human’ and ‘technology,’ through the programming of media architecture. It takes a curatorial perspective when proposing an ontological shift from considering media facades as visual spectacles...... agency and a sense of being by way of dematerializing architecture. This is achieved by way of programming the symbolic to provide new emotional realizations and situations of enlightenment in the public audience. This reflects a greater potential to humanize the digital in media architecture....

  10. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed; Ghanem, Bernard; Wonka, Peter

    2018-01-01

    coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements
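    The CSC objective referred to above builds on ordinary sparse coding. A minimal unsupervised sketch is given below, using ISTA (iterative soft-thresholding) on random toy data; the paper's supervised regularization term and convolutional structure are omitted, and all values are illustrative.

```python
import numpy as np

# ISTA for the basic sparse-coding problem
#   min_z 0.5*||x - D z||^2 + lam*||z||_1
# with a random toy dictionary; not the paper's supervised CSC model.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary atoms
x = D[:, 3] * 2.0                       # signal built from atom 3 only
lam = 0.1
step = 1.0 / np.linalg.norm(D.T @ D, 2) # 1 / Lipschitz constant

z = np.zeros(50)
for _ in range(500):                    # ISTA iterations
    grad = D.T @ (D @ z - x)            # gradient of the data term
    z = z - step * grad
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrink
```

    The recovered code is sparse and dominated by the atom the signal was built from; a supervised term would additionally push the dictionary atoms toward class-discriminative directions.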

  11. Architectural Theatricality

    DEFF Research Database (Denmark)

    Tvedebrink, Tenna Doktor Olsen

    environments and a knowledge gap therefore exists in present hospital designs. Consequently, the purpose of this thesis has been to investigate if any research-based knowledge exist supporting the hypothesis that the interior architectural qualities of eating environments influence patient food intake, health...... and well-being, as well as outline a set of basic design principles ‘predicting’ the future interior architectural qualities of patient eating environments. Methodologically the thesis is based on an explorative study employing an abductive approach and hermeneutic-interpretative strategy utilizing tactics...... and food intake, as well as a series of references exist linking the interior architectural qualities of healthcare environments with the health and wellbeing of patients. On the basis of these findings, the thesis presents the concept of Architectural Theatricality as well as a set of design principles...

  12. Enterprise architecture evaluation using architecture framework and UML stereotypes

    Directory of Open Access Journals (Sweden)

    Narges Shahi

    2014-08-01

    Full Text Available There is an increasing need for enterprise architecture in numerous organizations with complicated systems and various processes. The need for information technology support in organizational units whose elements maintain complex relationships is also increasing. Enterprise architecture is so effective that its non-use in organizations is regarded as an institutional inability in efficient information technology management. The enterprise architecture process generally consists of three phases: strategic programming of information technology, enterprise architecture programming and enterprise architecture implementation. Each phase must be implemented sequentially, and one single flaw in each phase may result in a flaw in the whole architecture and, consequently, in extra costs and time. If a model is mapped for the issue and then evaluated before enterprise architecture implementation in the second phase, possible flaws in the implementation process are prevented. In this study, the processes of enterprise architecture are illustrated through UML diagrams, and the architecture is evaluated in the programming phase through transforming the UML diagrams to Petri nets. The results indicate that the high costs of the implementation phase will be reduced.
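    Evaluation by transforming UML diagrams to Petri nets rests on executing the net: places hold tokens, and a transition fires when all of its input places are marked. The minimal token-game interpreter below illustrates this; the two-transition net is a placeholder, not the paper's architecture model.

```python
# Minimal Petri-net interpreter: markings are dicts place -> token count,
# transitions are (pre, post) pairs of place -> weight dicts.
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Consume tokens from input places, produce tokens on output places."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Illustrative two-step process: plan -> design -> implement.
transitions = {
    "design":    ({"plan": 1}, {"designed": 1}),
    "implement": ({"designed": 1}, {"done": 1}),
}
m = {"plan": 1, "designed": 0, "done": 0}
for name, (pre, post) in transitions.items():
    if enabled(m, pre):
        m = fire(m, pre, post)
```

    Evaluating an architecture model this way amounts to checking reachability and deadlock-freedom of such markings before anything is implemented.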

  13. The Simulation Intranet Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Holmes, V.P.; Linebarger, J.M.; Miller, D.J.; Vandewart, R.L.

    1998-12-02

    The Simulation Intranet (SI) is a term which is being used to describe one element of a multidisciplinary distributed and distance computing initiative known as DisCom2 at Sandia National Laboratory (Holmes et al. 1998). The Simulation Intranet is an architecture for satisfying Sandia's long-term goal of providing an end-to-end set of services for high fidelity full physics simulations in a high performance, distributed, and distance computing environment. The Intranet Architecture group was formed to apply current distributed object technologies to this problem. For the hardware architectures and software models involved with the current simulation process, a CORBA-based architecture is best suited to meet Sandia's needs. This paper presents the initial design and implementation of this Intranet based on a three-tier Network Computing Architecture (NCA). The major parts of the architecture include: the Web Client, the Business Objects, and Data Persistence.

  14. DOD Business Systems Modernization: Military Departments Need to Strengthen Management of Enterprise Architecture Programs

    National Research Council Canada - National Science Library

    Hite, Randolph C; Johnson, Tonia; Eagle, Timothy; Epps, Elena; Holland, Michael; Lakhmani, Neela; LaPaza, Rebecca; Le, Anh; Paintsil, Freda

    2008-01-01

    .... Our framework for managing and evaluating the status of architecture programs consists of 31 core elements related to architecture governance, content, use, and measurement that are associated...

  15. Data Element Registry Services

    Data.gov (United States)

    U.S. Environmental Protection Agency — Data Element Registry Services (DERS) is a resource for information about value lists (aka code sets / pick lists), data dictionaries, data elements, and EPA data...

  16. French RSE-M and RCC-MR code appendices for flaw analysis: Presentation of the fracture parameters calculation-Part V: Elements of validation

    International Nuclear Information System (INIS)

    Marie, S.; Chapuliot, S.; Kayser, Y.; Lacire, M.H.; Drubay, B.; Barthelet, B.; Le Delliou, P.; Rougier, V.; Naudin, C.; Gilles, P.; Triay, M.

    2007-01-01

    French nuclear codes include flaw assessment procedures: the RSE-M Code 'Rules for In-service Inspection of Nuclear Power Plant Components' and the RCC-MR code 'Design and Construction Rules for Mechanical Components of FBR Nuclear Islands and High Temperature Applications'. Development of analytical methods has been carried out over the last 10 years in the framework of a collaboration between CEA, EDF and AREVA-NP, and through R and D actions involving CEA and IRSN. These activities have led to a unification of the common methods of the two codes. The calculation of fracture mechanics parameters, in particular the stress intensity factor K I and the J integral, has been widely developed for industrial configurations. All the developments have been integrated in the 2005 edition of RSE-M and in the 2007 edition of RCC-MR. This series of articles consists of 5 parts: the first part presents an overview of the methods proposed in the RCC-MR and RSE-M codes. Parts II-IV provide the compendia for specific components. The geometries are plates (part II), pipes (part III) and elbows (part IV). This part presents the validation of the methods, with details on the process followed for their development and on the evaluation of the accuracy of the proposed analytical methods
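    The compendia tabulate analytical stress intensity factor solutions for cataloged geometries. As an indication of what such closed-form estimates look like, here is the classical handbook formula for a centre-cracked plate in remote tension with the secant finite-width correction; this is a textbook expression, not taken from the RSE-M/RCC-MR compendia.

```python
import math

def K_I(sigma, a, W):
    """Stress intensity factor (MPa*sqrt(m)) for a centre-cracked plate:
    sigma = remote stress (MPa), a = half crack length (m),
    W = half plate width (m). Secant finite-width correction."""
    Y = math.sqrt(1.0 / math.cos(math.pi * a / (2.0 * W)))
    return Y * sigma * math.sqrt(math.pi * a)

# Sanity check: as a/W -> 0 the correction Y -> 1 and K_I approaches
# the infinite-plate value sigma*sqrt(pi*a).
k = K_I(sigma=100.0, a=0.01, W=10.0)
```

    Code compendia of this kind collect such polynomial or closed-form influence functions per geometry so that K_I and J can be evaluated without a finite element analysis for each flaw.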

  17. The Political Economy of Architectural Research : Dutch Architecture, Architects and the City, 2000-2012

    NARCIS (Netherlands)

    Djalali, A.

    2016-01-01

    The status of architectural research has not yet been clearly defined. Nevertheless, architectural research has surely become a core element in the profession of architecture. In fact, the tendency seem for architects to be less and less involved with building design and construction services, which

  18. Architectural freedom and industrialized architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    to explain that architecture can be thought of as a complex and diverse design through customization, telling exactly the revitalized story about the change to a contemporary sustainable and better performing expression in direct relation to the given context. Through the last couple of years we have...... proportions, to organize the process on site choosing either one room wall components or several rooms wall components – either horizontally or vertically. Combined with the seamless joint the playing with these possibilities the new industrialized architecture can deliver variations in choice of solutions...... for retrofit design. If we add the question of the installations e.g. ventilation to this systematic thinking of building technique we get a diverse and functional architecture, thereby creating a new and clearer storytelling about new and smart system based thinking behind architectural expression....

  19. Architectural freedom and industrialized architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    to explain that architecture can be thought of as a complex and diverse design through customization, telling exactly the revitalized story about the change to a contemporary sustainable and better performing expression in direct relation to the given context. Through the last couple of years we have...... expression in the specific housing area. It is the aim of this article to expand the different design strategies which architects can use – to give the individual project attitudes and designs with architectural quality. Through the customized component production it is possible to choose different...... for retrofit design. If we add the question of the installations e.g. ventilation to this systematic thinking of building technique we get a diverse and functional architecture, thereby creating a new and clearer storytelling about new and smart system based thinking behind architectural expression....

  20. Review of the surface architecture of the equine neopallium: Principle elements of a cartographic pattern of sulci revisited and further elaborated.

    Science.gov (United States)

    Lang, A; Wirth, G; Gasse, H

    2018-03-14

    The surface architecture of the equine telencephalon is far more complex and complicated than, for example, that of the carnivore's brain, and basic organization patterns are more difficult to recognize. This is due to species differences, to interindividual variations and even to asymmetries between right and left hemispheres. Moreover, a very heterogeneous anatomical terminology, especially in the pioneering older literature, does not allow easy access to a unanimous topographical orientation. This review article presents the key features of this heterogeneity and its anatomical and terminological backgrounds, focusing on the cerebral sulci. The abundant, often divergent data from the reviewed literature are displayed by means of graphical illustrations highlighting the key issues and comparing them with the terminology of the present Nomina Anatomica Veterinaria. These illustrations are supposed to convey the relevant conformities and discrepancies regarding locations, courses and names of cerebral sulci in an easier and more effective manner than written texts could possibly do with such a complex and heterogeneous matter. The data from the selected literature are supplemented by and discussed together with photographs and drawings of brains from our own collection. This combination of a classic review article and own findings is supposed to confirm, to further elaborate and to evaluate the key sulci serving as landmarks for an orientation on the equine neopallium. These are, laterally, the Sulcus suprasylvius, coronalis and praesylvius; dorsally, the Sulcus marginalis; and medially, the Sulcus genualis, cinguli and splenialis. Special attention is also given to the Fissura sylvia; a Fissura sylvia accessoria is proposed. © 2018 Blackwell Verlag GmbH.

  1. PICNIC Architecture.

    Science.gov (United States)

    Saranummi, Niilo

    2005-01-01

    The PICNIC architecture aims at supporting inter-enterprise integration and the facilitation of collaboration between healthcare organisations. The concept of a Regional Health Economy (RHE) is introduced to illustrate the varying nature of inter-enterprise collaboration between healthcare organisations collaborating in providing health services to citizens and patients in a regional setting. The PICNIC architecture comprises a number of PICNIC IT Services and the interfaces between them, and presents a way to assemble these into a functioning Regional Health Care Network meeting the needs and concerns of its stakeholders. The PICNIC architecture is presented through a number of views relevant to different stakeholder groups. The stakeholders of the first view are national and regional health authorities and policy makers. The view describes how the architecture enables the implementation of national and regional health policies, strategies and organisational structures. The stakeholders of the second view, the service viewpoint, are the care providers, health professionals, patients and citizens. The view describes how the architecture supports and enables regional care delivery and process management, including continuity of care (shared care) and citizen-centred health services. The stakeholders of the third view, the engineering view, are those who design, build and implement the RHCN. The view comprises four sub-views: software engineering, IT services engineering, security and data. The proposed architecture is grounded in the mainstream of how distributed computing environments are evolving. The architecture is realised using the web services approach. A number of well established technology platforms and generic standards exist that can be used to implement the software components. The software components that are specified in PICNIC are implemented in Open Source.

  2. Architectural geometry

    KAUST Repository

    Pottmann, Helmut; Eigensatz, Michael; Vaxman, Amir; Wallner, Johannes

    2014-01-01

    Around 2005 it became apparent in the geometry processing community that freeform architecture contains many problems of a geometric nature to be solved, and many opportunities for optimization which however require geometric understanding. This area of research, which has been called architectural geometry, meanwhile contains a great wealth of individual contributions which are relevant in various fields. For mathematicians, the relation to discrete differential geometry is significant, in particular the integrable system viewpoint. Besides, new application contexts have become available for quite some old-established concepts. Regarding graphics and geometry processing, architectural geometry yields interesting new questions but also new objects, e.g. replacing meshes by other combinatorial arrangements. Numerical optimization plays a major role but in itself would be powerless without geometric understanding. Summing up, architectural geometry has become a rewarding field of study. We here survey the main directions which have been pursued, we show real projects where geometric considerations have played a role, and we outline open problems which we think are significant for the future development of both theory and practice of architectural geometry.

  3. Architectural geometry

    KAUST Repository

    Pottmann, Helmut

    2014-11-26

    Around 2005 it became apparent in the geometry processing community that freeform architecture contains many problems of a geometric nature to be solved, and many opportunities for optimization which however require geometric understanding. This area of research, which has been called architectural geometry, meanwhile contains a great wealth of individual contributions which are relevant in various fields. For mathematicians, the relation to discrete differential geometry is significant, in particular the integrable system viewpoint. Besides, new application contexts have become available for quite some old-established concepts. Regarding graphics and geometry processing, architectural geometry yields interesting new questions but also new objects, e.g. replacing meshes by other combinatorial arrangements. Numerical optimization plays a major role but in itself would be powerless without geometric understanding. Summing up, architectural geometry has become a rewarding field of study. We here survey the main directions which have been pursued, we show real projects where geometric considerations have played a role, and we outline open problems which we think are significant for the future development of both theory and practice of architectural geometry.

  4. Analysis of experiments performed at University of Hannover with Relap5/Mod2 and Cathare codes on fluid dynamic effects in the fuel element top nozzle area during refilling and reflooding

    International Nuclear Information System (INIS)

    Ambrosini, W.; D'Auria, F.; Di Marco, P.; Fantappie, G.; Giot, G.; Emmerechts, D.; Seynhaeve, J.M.; Zhang, J.

    1989-11-01

    The experimental data on flooding and CCFL in the fuel element top nozzle area collected at the University of Hannover have been analyzed with the RELAP5/MOD2 and CATHARE V.1.3 codes. Preliminary sensitivity calculations were performed to evaluate the influence of various parameters and code options on the results. In addition, an a priori rational assessment procedure was applied to those parameters not specified in the experimental data (e.g. energy loss coefficients in flow restrictions). This procedure is based on single-phase flow pressure drops, and no further tuning was performed to fit the experimental data. The reported experimental data, and others, demonstrate the complex relationship among the physical quantities involved (film thickness, pressure drop, etc.) even in a simple geometrical configuration with well-defined boundary conditions. In applying the two advanced codes to the selected CCFL experiments it appears that sophisticated models do not satisfactorily simulate the measured phenomena, especially in situations similar to nuclear reactors (rod bundles). This result should be evaluated considering that: - multi-dimensional phenomena occurring in flooding experiments are not well reproduced by the one-dimensional models implemented in the two codes; - a rational and reproducible procedure was used to fix some boundary conditions (K-tuning), and there is evidence that further tuning could yield results closer to the experimental ones in each specific situation; - the uncertainty bands of the measured experimental results are not (entirely) specified. The work performed demonstrates that further applications of the present codes to CCFL experiments appear to be of little use. New models should be tested and implemented before any attempt to reproduce CCFL in experimental facilities with system codes.

  5. Multiprocessor architecture: Synthesis and evaluation

    Science.gov (United States)

    Standley, Hilda M.

    1990-01-01

    Multiprocessor computer architecture evaluation for structural computations is the focus of the research effort described. Results obtained are expected to lead to more efficient use of existing architectures and to suggest designs for new, application-specific architectures. The brief descriptions given outline a number of related efforts directed toward this purpose. The difficulty in analyzing an existing architecture, or in designing a new computer architecture, lies in the fact that the performance of a particular architecture, within the context of a given application, is determined by a number of factors. These include, but are not limited to, the efficiency of the computation algorithm, the programming language and support environment, the quality of the program written in the programming language, the multiplicity of the processing elements, the characteristics of the individual processing elements, the interconnection network connecting processors and non-local memories, and the shared memory organization, covering the spectrum from no shared memory (all local memory) to one global access memory. These performance determinants may be loosely classified as software or hardware related, although this distinction is not clear, or even appropriate, in many cases. The effect of the choice of algorithm is set aside by assuming that the algorithm is specified as given. Effort directed toward removing the effect of the programming language and program resulted in the design of a high-level parallel programming language. Two characteristics of the fundamental structure of the architecture (memory organization and interconnection network) are examined.

  6. Architectural Anthropology

    DEFF Research Database (Denmark)

    Stender, Marie

    Architecture and anthropology have always had a common focus on dwelling, housing, urban life and spatial organisation. Current developments in both disciplines make it even more relevant to explore their boundaries and overlaps. Architects are inspired by anthropological insights and methods......, while recent material and spatial turns in anthropology have also brought an increasing interest in design, architecture and the built environment. Understanding the relationship between the social and the physical is at the heart of both disciplines, and they can obviously benefit from further...... collaboration: How can qualitative anthropological approaches contribute to contemporary architecture? And just as importantly: What can anthropologists learn from architects’ understanding of spatial and material surroundings? Recent theoretical developments in anthropology stress the role of materials...

  7. Architectural Engineers

    DEFF Research Database (Denmark)

    Petersen, Rikke Premer

    engineering is addressed from two perspectives – as an educational response and an occupational constellation. Architecture and engineering are two of the traditional design professions and they frequently meet in the occupational setting, but at educational institutions they remain largely estranged....... The paper builds on a multi-sited study of an architectural engineering program at the Technical University of Denmark and an architectural engineering team within an international engineering consultancy based in Denmark. They are both responding to new tendencies within the building industry where...... the role of engineers and architects increasingly overlaps during the design process, but their approaches reflect different perceptions of the consequences. The paper discusses some of the challenges that design education, not only within engineering, is facing today: young designers must be equipped...

  8. Reframing Architecture

    DEFF Research Database (Denmark)

    Riis, Søren

    2013-01-01

    I would like to thank Prof. Stephen Read (2011) and Prof. Andrew Benjamin (2011) for both giving inspiring and elaborate comments on my article “Dwelling in-between walls: the architectural surround”. As I will try to demonstrate below, their two different responses not only supplement my article...... focuses on how the absence of an initial distinction might threaten the endeavour of my paper. In my reply to Read and Benjamin, I will discuss their suggestions and arguments, while at the same time hopefully clarifying the postphenomenological approach to architecture....

  9. Layered Architecture for Quantum Computing

    Directory of Open Access Journals (Sweden)

    N. Cody Jones

    2012-07-01

    Full Text Available We develop a layered quantum-computer architecture, which is a systematic framework for tackling the individual challenges of developing a quantum computer while constructing a cohesive device design. We discuss many of the prominent techniques for implementing circuit-model quantum computing and introduce several new methods, with an emphasis on employing surface-code quantum error correction. In doing so, we propose a new quantum-computer architecture based on optical control of quantum dots. The time scales of physical-hardware operations and logical, error-corrected quantum gates differ by several orders of magnitude. By dividing functionality into layers, we can design and analyze subsystems independently, demonstrating the value of our layered architectural approach. Using this concrete hardware platform, we provide resource analysis for executing fault-tolerant quantum algorithms for integer factoring and quantum simulation, finding that the quantum-dot architecture we study could solve such problems on the time scale of days.

  10. CALIPSO - a computer code for the calculation of fluid dynamics, thermohydraulics and changes of geometry in failing fuel elements of a fast breeder reactor

    International Nuclear Information System (INIS)

    Kedziur, F.

    1982-07-01

    The computer code CALIPSO was developed for the calculation of a hypothetical accident in an LMFBR (Liquid Metal Fast Breeder Reactor) in which the failure of fuel pins is assumed. It calculates, in two dimensions, the thermodynamics, fluid dynamics and changes in geometry of a single fuel pin and its coolant channel in the time period between the failure of the pin and a state at which the geometry is nearly destroyed. The determination of temperature profiles in the fuel pin cladding and the channel wall makes it possible to take melting and freezing processes into account. Further features of CALIPSO are the variable channel cross section, used to model disturbances of the channel geometry, as well as the calculation of two velocity fields, including the consideration of virtual mass effects. The documented version of CALIPSO is especially suited for the calculation of the SIMBATH experiments carried out at the Kernforschungszentrum Karlsruhe, which simulate the above-mentioned accident. The report contains the complete documentation of the CALIPSO code: the modeling of the geometry, the equations used, the structure of the code and the solution procedure, as well as instructions for use with an application example. (orig.) [de

  11. The DANTE Boltzmann transport solver: An unstructured mesh, 3-D, spherical harmonics algorithm compatible with parallel computer architectures

    International Nuclear Information System (INIS)

    McGhee, J.M.; Roberts, R.M.; Morel, J.E.

    1997-01-01

    A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second-order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion-based preconditioner for scattering-dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated.
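The core solver named in the abstract is a preconditioned conjugate gradient iteration. As an illustrative sketch only (not DANTE's actual implementation, and in Python rather than the code's own language), the method for a symmetric positive-definite operator `A` with preconditioner inverse `M_inv` looks like this:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for an SPD operator.

    A and M_inv are callables returning matrix-vector products;
    M_inv applies the (approximate) inverse of a preconditioner M ~ A,
    e.g. a diffusion-based preconditioner in the scattering-dominated case.
    """
    x = np.zeros_like(b)
    r = b - A(x)              # initial residual
    z = M_inv(r)              # preconditioned residual
    p = z.copy()              # first search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A(p)
        alpha = rz / (p @ Ap)
        x = x + alpha * p     # update solution
        r = r - alpha * Ap    # update residual
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p   # new conjugate direction
        rz = rz_new
    return x
```

A matrix-free formulation like this matches the abstract's in-line matrix construction: only the action of the operator is needed, never the assembled matrix.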

  12. Textile Architecture

    DEFF Research Database (Denmark)

    Heimdal, Elisabeth Jacobsen

    2010-01-01

    Textiles can be used as building skins, adding new aesthetic and functional qualities to architecture. Just like we as humans can put on a coat, buildings can also get dressed. Depending on our mood, or on the weather, we can change coat, and so can the building. But the idea of using textiles...

  13. Hijazi Architectural Object Library (haol)

    Science.gov (United States)

    Baik, A.; Boehm, J.

    2017-02-01

    As with many historical buildings around the world, building façades are of special interest; the details of windows, stonework, and ornaments give each historic building its individual character. Each object of these buildings must be classified in an architectural object library. Recently, a number of research efforts have focused on this topic in Europe and Canada. From this standpoint, the Hijazi Architectural Objects Library (HAOL) has reproduced Hijazi elements as 3D computer models, built as Revit Families (RFA). The HAOL is based on image surveys and point cloud data. Hijazi objects such as the Roshan and the Mashrabiyah have become part of the vocabulary of many Islamic cities in the Hijazi region, such as Jeddah in Saudi Arabia, and even of historic Islamic cities such as Istanbul and Cairo. These architectural vocabularies are a main source of the beauty of this heritage. However, there is a large gap in both the Islamic and the Hijazi architectural libraries when it comes to providing these unique elements. Moreover, both Islamic and Hijazi architecture contain a huge amount of information which has not yet been digitally classified according to period and style. For this reason, this paper focuses on developing Heritage BIM (HBIM) standards and the HAOL library to reduce the cost and delivery time of heritage and new projects that involve Hijazi architectural styles. The paper provides the fundamentals of Hijazi architecture informatics by developing a framework for HBIM models and standards. This framework supplies schema and critical information, for example classifying the different shapes, models, and forms of structure, construction, and ornamentation of Hijazi architecture, in order to digitalize a parametric building identity.

  14. A CORBA BASED ARCHITECTURE FOR ACCESSING REUSABLE SOFTWARE COMPONENTS ON THE WEB.

    Directory of Open Access Journals (Sweden)

    R. Cenk ERDUR

    2003-01-01

    Full Text Available In the very near future, as a result of the continuous growth of the Internet and advances in networking technologies, the Internet will become the common software repository for people and organizations who employ a component-based reuse approach in their software development life cycles. In order to use reusable components such as source code, analyses, designs, and design patterns during new software development processes, environments that support the identification of components over the Internet are needed. The basic elements of such an environment are coordinator programs which deliver user requests to appropriate component libraries, user interfaces for querying, and programs that wrap the component libraries. First, a CORBA-based architecture is proposed for such an environment. Then, an alternative architecture based on the Java 2 platform technologies is given for the same environment. Finally, the two architectures are compared.
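The coordinator/wrapper structure described in the abstract can be sketched, independently of CORBA or Java, as a simple dispatch layer. The class and method names below are hypothetical illustrations, not the paper's API:

```python
class ComponentLibrary:
    """Wrapper around one repository of reusable components
    (source code, designs, design patterns, ...)."""

    def __init__(self, name, components):
        self.name = name
        self._components = components  # e.g. {"quicksort": "<source code>"}

    def search(self, keyword):
        """Return the names of components matching a keyword query."""
        return [k for k in self._components if keyword.lower() in k.lower()]


class Coordinator:
    """Delivers user queries to the appropriate component libraries
    and merges their results, playing the role of the coordinator
    programs described in the abstract."""

    def __init__(self, libraries):
        self.libraries = libraries

    def query(self, keyword):
        return {lib.name: lib.search(keyword) for lib in self.libraries}
```

In the CORBA architecture of the paper, the wrapper and coordinator would be remote objects invoked through an ORB; the dispatch logic itself is the same.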

  15. From green architecture to architectural green

    DEFF Research Database (Denmark)

    Earon, Ofri

    2011-01-01

    that describes the architectural exclusivity of this particular architecture genre. The adjective green expresses architectural qualities differentiating green architecture from non-green architecture. Currently, adding trees and vegetation to the building’s facade is the main architectural characteristics...... they have overshadowed the architectural potential of green architecture. The paper questions how a green space should perform, look like and function. Two examples are chosen to demonstrate thorough integrations between green and space. The examples are public buildings categorized as pavilions. One......The paper investigates the topic of green architecture from an architectural point of view and not an energy point of view. The purpose of the paper is to establish a debate about the architectural language and spatial characteristics of green architecture. In this light, green becomes an adjective...

  16. 38 CFR 39.22 - Architectural design standards.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Architectural design...-16-10) Standards and Requirements for Project § 39.22 Architectural design standards. The..., Ontario, CA 91761-2816. (a) Architectural and structural requirements—(1) Life Safety Code. Standards must...

  17. Accuracy Test of Software Architecture Compliance Checking Tools : Test Instruction

    NARCIS (Netherlands)

    Prof.dr. S. Brinkkemper; Dr. Leo Pruijt; C. Köppe; J.M.E.M. van der Werf

    2015-01-01

    Author supplied: "Abstract Software Architecture Compliance Checking (SACC) is an approach to verify conformance of implemented program code to high-level models of architectural design. Static SACC focuses on the modular software architecture and on the existence of rule violating dependencies

  18. Towards architectural information in implementation (NIER track)

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2011-01-01

    in a fast-paced agile project. We propose to embed as much architectural information as possible in the central artefact of the agile universe, the code. We argue that thereby valuable architectural information is retained for (automatic) documentation, validation, and further analysis, based......Agile development methods favor speed and feature-producing iterations. Software architecture, on the other hand, is ripe with techniques that are slow and not oriented directly towards implementation of customers’ needs. Thus, there is a major challenge in retaining architectural information...

  19. SURF: a subroutine code to draw the axonometric projection of a surface generated by a scalar function over a discretized plane domain using finite element computations

    International Nuclear Information System (INIS)

    Giuliani, Giovanni; Giuliani, Silvano.

    1980-01-01

    The FORTRAN IV subroutine SURF has been designed to help visualise the results of finite element computations. It draws the axonometric projection of a surface generated in 3-dimensional space by a scalar function over a discretized plane domain. The most important characteristic of the routine is that it removes hidden lines, thereby enabling a clear view of the details of the generated surface.
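As a sketch of the underlying mapping (illustrative axis angles in Python, not SURF's actual FORTRAN routine), a point of the surface z = f(x, y) projects to 2D drawing coordinates like this:

```python
import math

def axonometric(x, y, z, ax=math.radians(30), ay=math.radians(150)):
    """Project a 3D point onto 2D axonometric drawing coordinates.

    ax, ay are the angles the projected x- and y-axes make with the
    horizontal of the drawing plane (illustrative defaults); the z-axis
    of the model maps straight up on the paper.
    """
    u = x * math.cos(ax) + y * math.cos(ay)      # horizontal drawing coordinate
    v = x * math.sin(ax) + y * math.sin(ay) + z  # vertical drawing coordinate
    return u, v
```

For the hidden-line removal that SURF performs, a common approach when a surface is drawn profile by profile is the floating-horizon algorithm: keep running upper and lower envelopes of v as a function of u, and draw only the segments lying outside both envelopes.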

  20. Participation in the MATIS-H benchmark of NEA/OECD: use of CFD codes applied to nuclear safety. Study of the spacer grids in the fuel elements

    International Nuclear Information System (INIS)

    Pena-Monferrer, C.; Chiva, S.; Munoz-cobo, J. L.; Vela, E.

    2012-01-01

    This paper describes participation in the MATIS-H benchmark, promoted by the NEA/OECD and KAERI, involving the study of turbulent flow in a rod bundle with spacer grids in an experimental facility. Its aim is the analysis of the hydraulic behavior of turbulent flow in the subchannels of the fuel elements, which is essential for improving safety margins in normal and transient operations and for maximizing the use of nuclear energy through an optimal design of the grids.

  1. MUF architecture /art London

    DEFF Research Database (Denmark)

    Svenningsen Kajita, Heidi

    2009-01-01

    On MUF architecture, including an interview with Liza Fior and Katherine Clarke, partners in muf architecture/art.

  2. Architectural fragments

    DEFF Research Database (Denmark)

    Bang, Jacob Sebastian

    2018-01-01

    I have created a large collection of plaster models: a collection of Obstructions, errors and opportunities that may develop into architecture. The models are fragments of different complex shapes as well as more simple circular models with different profiling and diameters. In this context I have....... I try to invent ways of drawing the models - ways that decode and unfold them into architectural fragments - into future buildings or constructions in the landscape. [1] Luigi Moretti: Italian architect, 1907 - 1973 [2] Man Ray: American artist, 1890 - 1976. In 2015, I saw the wonderful exhibition "Man Ray - Human Equations" at the Glyptotek in Copenhagen, organized by the Phillips Collection in Washington D.C. and the Israel Museum in Jerusalem (in 2013). See also: the "Man Ray - Human Equations" catalogue published by Hatje Cantz Verlag, Germany, 2014....

  3. Kosmos = architecture

    Directory of Open Access Journals (Sweden)

    Tine Kurent

    1985-12-01

    Full Text Available The old Greek word "kosmos" means not only "cosmos", but also "the beautiful order", "the way of building", "building", "scenography", "mankind", and, in the time of the New Testament, also "pagans". The word "arhitekton", meaning at first "master of theatrical scenography", acquired the meaning of "builder" when the words "kosmos" and "kosmetes" became pejorative. The fear that architecture was not considered one of the arts before the Renaissance, since none of the Muses supervised the art of building, results from a misunderstanding of the word "kosmos". Urania was the goddess of the activity implied in the verb "kosmein", meaning "to put in beautiful order" - everything from the universe to man-made space, i.e. architecture.

  4. Metabolistic Architecture

    DEFF Research Database (Denmark)

    2013-01-01

    Textile Spaces presents different approaches to using textile as a spatial definer and artistic medium. The publication collages images and text, art and architecture, science, philosophy and literature, process and product, past, present and future. It forms an insight into soft materials' functional and poetic potentials, linking the disciplines through fragments that aim to inspire a further look into the artists' and architects' practices, while simultaneously framing these textile visions in a wider context.

  5. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Peiyuan [Univ. of Colorado, Boulder, CO (United States); Brown, Timothy [Univ. of Colorado, Boulder, CO (United States); Fullmer, William D. [Univ. of Colorado, Boulder, CO (United States); Hauser, Thomas [Univ. of Colorado, Boulder, CO (United States); Hrenya, Christine [Univ. of Colorado, Boulder, CO (United States); Grout, Ray [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sitaraman, Hariswaran [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-29

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations, and cover a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approx. 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is spent on particle-particle force calculations, drag force calculations and interpolation between discrete particle and continuum fields. A hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments of the code as well as a preliminary indicator of where to best focus performance optimizations.
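A weak scaling analysis of the kind reported compares wall times while the problem size grows in proportion to the core count, so ideal scaling keeps the wall time constant. A minimal way to reduce such timings to efficiencies (the numbers in the usage example are hypothetical, not MFiX results) is:

```python
def weak_scaling_efficiency(timings):
    """Weak-scaling efficiency relative to the smallest run.

    timings: dict mapping core count -> wall time, for a problem whose
    size grows proportionally with the core count. Ideal weak scaling
    keeps wall time constant, so efficiency(p) = t(p_min) / t(p).
    """
    p_min = min(timings)
    t_ref = timings[p_min]
    return {p: t_ref / t for p, t in sorted(timings.items())}
```

For example, hypothetical timings `{1: 10.0, 8: 12.5, 64: 20.0}` reduce to efficiencies of 1.0, 0.8, and 0.5, making the scaling falloff explicit.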

  6. Self-Contained Cross-Cutting Pipeline Software Architecture

    OpenAIRE

    Patwardhan, Amol; Patwardhan, Rahul; Vartak, Sumalini

    2016-01-01

    Layered software architecture contains several intra-layer and inter-layer dependencies. Each layer depends on shared components, making it difficult to release a code change, bug fix or feature without exhaustive testing and having to build the entire software code base. This paper proposes a self-contained, cross-cutting pipeline architecture (SCPA) that is independent of the existing layers. We chose 2 open source projects and 3 internal intern projects that used n-tier architecture and applied t...

  7. Executable Architecture Research at Old Dominion University

    Science.gov (United States)

    Tolk, Andreas; Shuman, Edwin A.; Garcia, Johnny J.

    2011-01-01

    Executable architectures allow the evaluation of system architectures not only regarding their static, but also their dynamic behavior. However, the systems engineering community does not agree on a common formal specification of executable architectures. Closing this gap and identifying the necessary elements of an executable architecture, a modeling language, and a modeling formalism is the topic of ongoing PhD research. In addition, systems are generally defined and applied in an operational context to provide capabilities and enable missions. To maximize the benefits of executable architectures, a second PhD effort introduces the idea of creating an executable context in addition to the executable architecture. The results move the validation of architectures from the current information domain into the knowledge domain and improve the reliability of such validation efforts. The paper presents research and results of both doctoral research efforts and puts them into a common context of the state of the art of systems engineering methods supporting more agility.

  8. Transverse pumped laser amplifier architecture

    Science.gov (United States)

    Bayramian, Andrew James; Manes, Kenneth; Deri, Robert; Erlandson, Al; Caird, John; Spaeth, Mary

    2013-07-09

    An optical gain architecture includes a pump source and a pump aperture. The architecture also includes a gain region including a gain element operable to amplify light at a laser wavelength. The gain region is characterized by a first side intersecting an optical path, a second side opposing the first side, a third side adjacent the first and second sides, and a fourth side opposing the third side. The architecture further includes a dichroic section disposed between the pump aperture and the first side of the gain region. The dichroic section is characterized by low reflectance at a pump wavelength and high reflectance at the laser wavelength. The architecture additionally includes a first cladding section proximate to the third side of the gain region and a second cladding section proximate to the fourth side of the gain region.

  9. Assessment of Fuel Analysis Methodology and Fission Product Release for 37-Element Fuel by Using the Latest IST Codes during Stagnation Feeder Break in CANDU

    International Nuclear Information System (INIS)

    Park, Joo Hwan; Jung, Jong Yeob

    2009-09-01

    A feeder break accident is regarded as one of the design basis accidents in a CANDU reactor that can result in fuel failure. For a particular range of inlet feeder break sizes, the flow in the channel is reduced sufficiently that fuel and fuel channel integrity in the affected channel can be significantly damaged, while the remainder of the core remains adequately cooled. The flow in the downstream channel can be more or less stagnated owing to a balance between the pressure at the break on the upstream side and the reverse driving pressure between the break and the downstream end. In the extreme, this can lead to rapid fuel heatup, fuel damage, and failure of the fuel channel similar to that associated with a severe channel flow blockage. Such an inlet feeder break scenario is called a stagnation break. In this report, the fuel analysis methodology and the assessment results for fission product inventory and release during a stagnation feeder break are described for a conservatively assumed limiting channel. The accident was assumed to occur in the refurbished Wolsong unit 1, and the latest safety codes were used in the analysis. Fission product inventories during the steady state were calculated with the ELESTRES-IST 1.2 code. The whole analysis process was carried out by a script programmed in Perl. The Perl script generates the ELESTRES input files for each bundle and each ring, based on the given power-burnup history and thermal-hydraulic conditions of the limiting channel, and performs the fuel analysis automatically. The fission product release during the transient period of the stagnation feeder break was evaluated by applying the Gehl model. The amount of each isotope released is conservatively evaluated for an additional 2 seconds after channel failure. The calculated fission product releases are provided as a source term for the subsequent dose assessment.

  10. Implementation of constitutive equations for creep damage mechanics into the ABAQUS finite element code - some practical cases in high temperature component design and life assessment

    International Nuclear Information System (INIS)

    Segle, P.; Samuelson, L.Aa.; Andersson, Peder; Moberg, F.

    1996-01-01

    Constitutive equations for creep damage mechanics are implemented into the finite element program ABAQUS using a user-supplied subroutine, UMAT. A modified Kachanov-Rabotnov constitutive equation which accounts for inhomogeneity in creep damage is used. With a user-defined material, a number of benchmark tests are analyzed for verification. In the cases where analytical solutions exist, the numerical results agree very well. In other cases, the creep damage evolution response appears realistic in comparison with laboratory creep tests. The appropriateness of using the creep damage mechanics concept in design and life assessment of high temperature components is demonstrated. 18 refs
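The classical (unmodified) Kachanov-Rabotnov model couples a creep strain rate to a scalar damage variable ω that grows toward 1 at rupture. A minimal explicit time-integration sketch of that classical form follows; the constants, exponents, and the explicit-Euler scheme are illustrative assumptions, not the modified equation or the implicit integration a production UMAT would use:

```python
def kachanov_rabotnov_step(sigma, omega, dt, A, chi, phi, B, n):
    """One explicit-Euler step of the classical Kachanov-Rabotnov
    creep-damage model (illustrative form):

        d(eps)/dt   = B * sigma**n   / (1 - omega)**n
        d(omega)/dt = A * sigma**chi / (1 - omega)**phi

    Returns the creep strain increment and the updated damage,
    capped below 1 to avoid the rupture singularity.
    """
    deps = B * sigma**n / (1.0 - omega)**n * dt
    domega = A * sigma**chi / (1.0 - omega)**phi * dt
    return deps, min(omega + domega, 0.999)
```

As damage accumulates, the effective stress σ/(1-ω) rises, so the same applied stress produces an accelerating creep rate, which is the tertiary-creep behavior the model is designed to capture.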

  11. A "Language Lab" for Architectural Design.

    Science.gov (United States)

    Mackenzie, Arch; And Others

    This paper discusses a "language lab" strategy in which traditional studio learning may be supplemented by language lessons using computer graphics techniques to teach architectural grammar, a body of elements and principles that govern the design of buildings belonging to a particular architectural theory or style. Two methods of…

  12. Automatic code generation in practice

    DEFF Research Database (Denmark)

    Adam, Marian Sorin; Kuhrmann, Marco; Schultz, Ulrik Pagh

    2016-01-01

    -specific language to specify those requirements and to allow for generating a safety-enforcing layer of code, which is deployed to the robot. The paper at hand reports experiences in practically applying code generation to mobile robots. For two cases, we discuss how we addressed challenges, e.g., regarding weaving......Mobile robots often use a distributed architecture in which software components are deployed to heterogeneous hardware modules. Ensuring the consistency with the designed architecture is a complex task, notably if functional safety requirements have to be fulfilled. We propose to use a domain...... code generation into proprietary development environments and testing of manually written code. We find that a DSL based on the same conceptual model can be used across different kinds of hardware modules, but a significant adaptation effort is required in practical scenarios involving different kinds...
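The approach sketched in the abstract, a small declarative specification from which a safety-enforcing layer of code is generated, can be illustrated as follows. The rule format and the generated guard function are hypothetical stand-ins, not the authors' DSL:

```python
def generate_guard(rules):
    """Generate Python source for a safety-guard function from
    declarative rules, in the spirit of a safety-enforcing layer
    produced by a code generator.

    rules: list of (signal_name, min_value, max_value) limits that
    the generated layer must enforce before commands reach hardware.
    """
    lines = ["def check_safety(state):"]
    for name, lo, hi in rules:
        lines.append(f"    if not ({lo!r} <= state[{name!r}] <= {hi!r}):")
        lines.append(f"        return False  # {name} out of range")
    lines.append("    return True")
    return "\n".join(lines)
```

The generated source can then be woven into the deployed component, which is where the challenges the paper discusses (weaving generated code into proprietary environments, testing it against manually written code) arise.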

  13. GAUDI-Architecture design document

    CERN Document Server

    Mato, P

    1998-01-01

    98-064 This document is the result of the architecture design phase for the LHCb event data processing applications project. The architecture of the LHCb software system includes its logical and physical structure, which has been forged by all the strategic and tactical decisions applied during development. The strategic decisions should be made explicitly, with consideration of the trade-offs of each alternative. The other purpose of this document is to serve as the main material for the scheduled architecture review that will take place in the next weeks. The architecture review will allow us to identify the weaknesses and strengths of the proposed architecture, and we hope to obtain a list of suggested changes to improve it, all well before the system is realized in code. It is in our interest to identify possible problems at the architecture design phase of the software project, before much of the software is implemented. Strategic decisions must be cross checked caref...

  14. An Empirical Investigation of Architectural Prototyping

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2010-01-01

    Architectural prototyping is the process of using executable code to investigate stakeholders’ software architecture concerns with respect to a system under development. Previous work has established this as a useful and cost-effective way of exploration and learning of the design space of a system...... and in addressing issues regarding quality attributes, architectural risks, and the problem of knowledge transfer and conformance. However, the actual industrial use of architectural prototyping has not been thoroughly researched so far. In this article, we report from three studies of architectural prototyping...... in practice. First, we report findings from an ethnographic study of practicing software architects. Secondly, we report from a focus group on architectural prototyping involving architects from four companies. And, thirdly, we report from a survey study of 20 practicing software architects and software...

  15. SimTrack: A compact c++ code for particle orbit and spin tracking in accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Yun

    2015-11-21

    SimTrack is a compact C++ code for 6-d symplectic element-by-element particle tracking in accelerators, originally designed for head-on beam–beam compensation simulation studies in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. It provides 6-d symplectic orbit tracking with 4th-order symplectic integration for magnet elements and the 6-d symplectic synchro-beam map for beam–beam interaction. Since its inception in 2009, SimTrack has been used intensively for dynamic aperture calculations with beam–beam interaction for RHIC. Recently, proton spin tracking and electron energy loss due to synchrotron radiation were added. In this paper, I present the code architecture, physics models, and selected examples of its applications to RHIC and a future electron-ion collider design, eRHIC.
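    The 4th-order symplectic integration mentioned above can be illustrated with Yoshida's triple-jump composition of the 2nd-order leapfrog map. The sketch below applies it to a toy harmonic oscillator, not to SimTrack's actual accelerator maps, and shows the bounded energy error characteristic of symplectic schemes:

```python
def leapfrog(q, p, h, dVdq):
    """One 2nd-order kick-drift-kick step for H = p^2/2 + V(q)."""
    p -= 0.5 * h * dVdq(q)
    q += h * p
    p -= 0.5 * h * dVdq(q)
    return q, p

def yoshida4(q, p, h, dVdq):
    """One 4th-order symplectic step: Yoshida's triple-jump composition
    of leapfrog with sub-steps w1*h, w0*h, w1*h."""
    w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    w0 = 1.0 - 2.0 * w1
    for w in (w1, w0, w1):
        q, p = leapfrog(q, p, w * h, dVdq)
    return q, p

# Toy harmonic oscillator V(q) = q^2/2 (so dV/dq = q), not an accelerator map.
q, p, h = 1.0, 0.0, 0.1
e0 = 0.5 * (p * p + q * q)
for _ in range(10000):
    q, p = yoshida4(q, p, h, lambda x: x)
e1 = 0.5 * (p * p + q * q)
# Symplectic schemes show a bounded energy error rather than secular drift.
print(abs(e1 - e0))
```

    A non-symplectic scheme of the same order would instead show the energy drifting steadily over such a long run, which is why symplectic maps are the standard choice for long-term tracking.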

  16. A bacterial genetic screen identifies functional coding sequences of the insect mariner transposable element Famar1 amplified from the genome of the earwig, Forficula auricularia.

    Science.gov (United States)

    Barry, Elizabeth G; Witherspoon, David J; Lampe, David J

    2004-02-01

    Transposons of the mariner family are widespread in animal genomes and have apparently infected them by horizontal transfer. Most species carry only old defective copies of particular mariner transposons that have diverged greatly from their active horizontally transferred ancestor, while a few contain young, very similar, and active copies. We report here the use of a whole-genome screen in bacteria to isolate somewhat diverged Famar1 copies from the European earwig, Forficula auricularia, that encode functional transposases. Functional and nonfunctional coding sequences of Famar1 and nonfunctional copies of Ammar1 from the European honey bee, Apis mellifera, were sequenced to examine their molecular evolution. No selection for sequence conservation was detected in any clade of a tree derived from these sequences, not even on branches leading to functional copies. This agrees with the current model for mariner transposon evolution that expects neutral evolution within particular hosts, with selection for function occurring only upon horizontal transfer to a new host. Our results further suggest that mariners are not finely tuned genetic entities and that a greater amount of sequence diversification than had previously been appreciated can occur in functional copies in a single host lineage. Finally, this method of isolating active copies can be used to isolate other novel active transposons without resorting to reconstruction of ancestral sequences.

  17. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we consider also some relationships between coding partitions and varieties of codes.
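    For readers unfamiliar with the UD property that coding partitions weaken, the classical Sardinas–Patterson test decides whether a finite code is uniquely decipherable. The sketch below is a standard textbook construction, not an algorithm from the paper itself:

```python
def is_uniquely_decipherable(code):
    """Sardinas-Patterson test: True iff every concatenation of codewords
    has a unique factorization (the code is UD)."""
    code = set(code)

    def dangling(u, w):
        # If u is a proper prefix of w, return the leftover suffix of w.
        return w[len(u):] if w != u and w.startswith(u) else None

    # Seed: suffixes left when one codeword is a proper prefix of another.
    suffixes = {d for u in code for w in code
                if (d := dangling(u, w)) is not None}
    seen = set()
    while suffixes:
        if suffixes & code:  # a dangling suffix is itself a codeword -> ambiguous
            return False
        seen |= suffixes
        nxt = set()
        for d in suffixes:
            for w in code:
                for a, b in ((d, w), (w, d)):
                    t = dangling(a, b)
                    if t is not None:
                        nxt.add(t)
        suffixes = nxt - seen
    return True

print(is_uniquely_decipherable({"0", "01", "11"}))  # → True
print(is_uniquely_decipherable({"0", "01", "10"}))  # → False ("010" has two parses)
```

    The loop terminates because dangling suffixes are always suffixes of codewords, a finite set, and each is visited at most once.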

  18. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs and advanced error correcting techniques.

  19. Medical Data Architecture Project Status

    Science.gov (United States)

    Krihak, M.; Middour, C.; Lindsey, A.; Marker, N.; Wolfe, S.; Winther, S.; Ronzano, K.; Bolles, D.; Toscano, W.; Shaw, T.

    2017-01-01

    The Medical Data Architecture (MDA) project supports the Exploration Medical Capability (ExMC) goal of minimizing or reducing the risk of adverse health outcomes and decrements in performance due to limitations of in-flight medical capabilities on human exploration missions. To mitigate this risk, the ExMC MDA project addresses the technical limitations identified in ExMC Gap Med 07: We do not have the capability to comprehensively process medically-relevant information to support medical operations during exploration missions. This gap identifies that current International Space Station (ISS) medical data management comprises a combination of data collection and distribution methods that are minimally integrated with on-board medical devices and systems. Furthermore, there are a variety of data sources and methods of data collection. For an exploration mission, the seamless management of such data will enable a more autonomous crew than the current ISS paradigm allows. The MDA will develop capabilities that support automated data collection and the functionality necessary to execute a self-contained medical system that approaches crew health care delivery without assistance from ground support. To attain this goal, the first year of the MDA project focused on reducing technical risk, developing documentation, and instituting iterative development processes that established the basis for the first version of the MDA software (or Test Bed 1). Test Bed 1 is based on a nominal operations scenario authored by the ExMC Element Scientist. This narrative was decomposed into a Concept of Operations that formed the basis for Test Bed 1 requirements. These requirements were successfully vetted through the MDA Test Bed 1 System Requirements Review, which permitted the MDA project to begin software code development and component integration. This paper highlights the MDA objectives, development processes, and accomplishments, and identifies the fiscal year 2017 milestones and

  20. gEVE: a genome-based endogenous viral element database provides comprehensive viral protein-coding sequences in mammalian genomes.

    Science.gov (United States)

    Nakagawa, So; Takahashi, Mahoko Ueda

    2016-01-01

    In mammals, approximately 10% of genome sequences correspond to endogenous viral elements (EVEs), which are derived from ancient viral infections of germ cells. Although most EVEs have been inactivated, some open reading frames (ORFs) of EVEs have acquired functions in the hosts. However, EVE ORFs usually remain unannotated in the genomes, and no databases are available for EVE ORFs. To investigate the function and evolution of EVEs in mammalian genomes, we developed EVE ORF databases for 20 genomes of 19 mammalian species. A total of 736,771 non-overlapping EVE ORFs were identified and archived in a database named gEVE (http://geve.med.u-tokai.ac.jp). The gEVE database provides nucleotide and amino acid sequences, genomic loci and functional annotations of EVE ORFs for all 20 genomes. In analyzing RNA-seq data with the gEVE database, we successfully identified expressed EVE genes, suggesting that the gEVE database facilitates genomic analyses of various mammalian species. Database URL: http://geve.med.u-tokai.ac.jp. © The Author(s) 2016. Published by Oxford University Press.
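    As a toy illustration of the kind of ORF identification that underlies such a database, the sketch below scans a single strand for ATG-to-stop spans in each reading frame. It is a simplification, not the gEVE pipeline, which also handles the reverse strand, frameshifts, and viral-protein homology:

```python
def find_orfs(seq, min_len=6):
    """Toy single-strand ORF scan: report (start, end, sequence) for each
    ATG...stop span, per reading frame. Real EVE pipelines also scan the
    reverse complement; this is only an illustration."""
    stops = {"TAA", "TAG", "TGA"}
    orfs = []
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i
            elif start is not None and codon in stops:
                if i + 3 - start >= min_len:
                    orfs.append((start, i + 3, seq[start:i + 3]))
                start = None
    return orfs

print(find_orfs("CCATGAAATGA"))  # → [(2, 11, 'ATGAAATGA')]
```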

  1. Study of Anti-Sliding Stability of a Dam Foundation Based on the Fracture Flow Method with 3D Discrete Element Code

    Directory of Open Access Journals (Sweden)

    Chong Shi

    2017-10-01

    Full Text Available Fractured seepage is an important factor affecting the interface stability of rock mass. It is closely related to fracture properties and hydraulic conditions. In this study, the law of seepage in a single fracture surface is described based on the modified cubic law, and the three-dimensional discrete element method is used to simulate the dam foundation structure of the Capulin San Pablo (Costa Rica) hydropower station. The effect of construction joints and the developed structure on dam stability is studied, and its permeability law and sliding stability are also evaluated. It is found that hydraulic-mechanical coupling with the strength reduction method in DEM is more appropriate for studying seepage-related problems of fractured rock mass, since it considers practical conditions such as the roughness and width of the fracture. The strength reduction method provides a more accurate safety factor of the dam when considering deformation coordination with the bedrock. It is an important method with which to study the stability of seepage conditions in complex structures. The discrete method also provides an effective and reasonable way of determining seepage control measures.
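    The modified cubic law referenced above builds on the classical cubic law for laminar flow between smooth parallel plates, in which flow per unit width scales with the cube of the fracture aperture. A minimal sketch with illustrative values (not the paper's model, which adds roughness corrections) is:

```python
def cubic_law_flow(aperture, dp_dx, mu=1.0e-3):
    """Flow rate per unit fracture width (m^2/s) from the cubic law
    q = -(w^3 / (12*mu)) * dp/dx, for smooth parallel plates.
    mu is an assumed dynamic viscosity of water (Pa*s)."""
    return -(aperture ** 3) / (12.0 * mu) * dp_dx

# Illustrative numbers: 0.1 mm aperture, 10 kPa/m pressure gradient.
q = cubic_law_flow(aperture=1.0e-4, dp_dx=-1.0e4)
print(q)  # ~8.33e-7 m^2/s; flow scales with the cube of the aperture
```

    The cubic dependence explains why seepage analyses are so sensitive to aperture: doubling the aperture increases flow eightfold.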

  2. Tandem Mirror Reactor Systems Code (Version I)

    International Nuclear Information System (INIS)

    Reid, R.L.; Finn, P.A.; Gohar, M.Y.

    1985-09-01

    A computer code was developed to model a Tandem Mirror Reactor. This is the first Tandem Mirror Reactor model to couple, in detail, the highly linked physics, magnetics, and neutronic analysis into a single code. This report describes the code architecture, provides a summary description of the modules comprising the code, and includes an example execution of the Tandem Mirror Reactor Systems Code. Results from this code for two sensitivity studies are also included. These studies are: (1) to determine the impact of center cell plasma radius, length, and ion temperature on reactor cost and performance at constant fusion power; and (2) to determine the impact of reactor power level on cost

  3. Coding as literacy metalithikum IV

    CERN Document Server

    Bühlmann, Vera; Moosavi, Vahid

    2015-01-01

    Recent developments in computer science, particularly "data-driven procedures" have opened a new level of design and engineering. This has also affected architecture. The publication collects contributions on Coding as Literacy by computer scientists, mathematicians, philosophers, cultural theorists, and architects. "Self-Organizing Maps" (SOM) will serve as the concrete reference point for all further discussions.

  4. VLSI Architectures for Computing DFT's

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Reed, I. S.; Pei, D. Y.

    1986-01-01

    Simplifications result from use of residue Fermat number systems. System of finite arithmetic over residue Fermat number systems enables calculation of discrete Fourier transform (DFT) of series of complex numbers with reduced number of multiplications. Computer architectures based on approach suitable for design of very-large-scale integrated (VLSI) circuits for computing DFT's. General approach not limited to DFT's; applicable to decoding of error-correcting codes and other transform calculations. System readily implemented in VLSI.
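    The appeal of arithmetic over Fermat numbers can be illustrated with a number-theoretic transform modulo the Fermat prime F3 = 257: a DFT-like transform whose "roots of unity" are integer powers, so every operation is exact. The naive O(n²) sketch below is an illustration of the arithmetic, not the record's VLSI architecture:

```python
P = 257  # Fermat prime F3 = 2^8 + 1: all arithmetic is exact, no rounding

def ntt(a, root):
    """Naive O(n^2) number-theoretic transform mod P (a DFT analogue with
    powers of `root` in place of complex roots of unity)."""
    n = len(a)
    return [sum(a[j] * pow(root, i * j, P) for j in range(n)) % P
            for i in range(n)]

n = 8
w = pow(3, (P - 1) // n, P)    # 3 is a primitive root mod 257, so w has order 8
a = [1, 2, 3, 4, 0, 0, 0, 0]
A = ntt(a, w)                  # forward transform
inv_w = pow(w, P - 2, P)       # modular inverses via Fermat's little theorem
inv_n = pow(n, P - 2, P)
b = [(x * inv_n) % P for x in ntt(A, inv_w)]
print(b)  # → [1, 2, 3, 4, 0, 0, 0, 0]
```

    Because multiplication by powers of 2 modulo a Fermat number reduces to bit shifts, hardware implementations can trade many multiplications for shifts, which is the simplification the record alludes to.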

  5. Numerical simulation of cracks and interfaces with cohesive zone models in the extended finite element method, with EDF R and D software Code Aster

    International Nuclear Information System (INIS)

    Ferte, Guilhem

    2014-01-01

    In order to assess the harmfulness of detected defects in some nuclear power plants, EDF Group is led to develop advanced simulation tools. Among the targeted mechanisms are 3D non-planar quasi-static crack propagation, but also dynamic transients during unstable phases. In the present thesis, quasi-brittle crack growth is simulated based on the combination of the XFEM and cohesive zone models. These are inserted over large potential crack surfaces, so that the cohesive law will naturally separate adherent and de-bonding zones, resulting in an implicit update of the crack front, which constitutes the originality of the approach. This requires a robust insertion of non-smooth interface laws in the XFEM, which is achieved in quasi-statics with the use of XFEM-suited multiplier spaces in a consistent formulation, block-wise diagonal interface operators and an augmented Lagrangian formalism to write the cohesive law. Based on this concept and a novel directional criterion appealing to cohesive integrals, a propagation procedure over non-planar crack paths is proposed and compared with literature benchmarks. As for dynamics, an initially perfectly adherent cohesive law is implicitly treated within an explicit time-stepping scheme, resulting in an analytical determination of interface tractions if appropriate discrete spaces are used. The implementation is validated on a tapered DCB test. Extension to quadratic elements is then investigated. For stress-free cracks, it was found that a subdivision into quadratic sub-cells is needed for optimality. Theory expects enriched quadrature to be necessary for distorted sub-cells, but this could not be observed in practice. For adherent interfaces, a novel discrete multiplier space was proposed which is numerically stable and produces quadratic convergence if used along with quadratic sub-cells. (author)

  6. Towards advanced code simulators

    International Nuclear Information System (INIS)

    Scriven, A.H.

    1990-01-01

    The Central Electricity Generating Board (CEGB) uses advanced thermohydraulic codes extensively to support PWR safety analyses. A system has been developed to allow fully interactive execution of any code with graphical simulation of the operator desk and mimic display. The system operates in a virtual machine environment, with the thermohydraulic code executing in one virtual machine, communicating via interrupts with any number of other virtual machines each running other programs and graphics drivers. The driver code itself does not have to be modified from its normal batch form. Shortly following the release of RELAP5 MOD1 in IBM compatible form in 1983, this code was used as the driver for this system. When RELAP5 MOD2 became available, it was adopted with no changes needed in the basic system. Overall the system has been used for some 5 years for the analysis of LOBI tests, full scale plant studies and for simple what-if studies. For gaining rapid understanding of system dependencies it has proved invaluable. The graphical mimic system, being independent of the driver code, has also been used with other codes to study core rewetting, to replay results obtained from batch jobs on a CRAY2 computer system and to display suitably processed experimental results from the LOBI facility to aid interpretation. For the above work real-time execution was not necessary. Current work now centers on implementing the RELAP 5 code on a true parallel architecture machine. Marconi Simulation have been contracted to investigate the feasibility of using upwards of 100 processors, each capable of a peak of 30 MIPS to run a highly detailed RELAP5 model in real time, complete with specially written 3D core neutronics and balance of plant models. This paper describes the experience of using RELAP5 as an analyzer/simulator, and outlines the proposed methods and problems associated with parallel execution of RELAP5

  7. Towards Product Lining Model-Driven Development Code Generators

    OpenAIRE

    Roth, Alexander; Rumpe, Bernhard

    2015-01-01

    A code generator systematically transforms compact models to detailed code. Today, code generation is regarded as an integral part of model-driven development (MDD). Despite its relevance, the development of code generators is an inherently complex task, and common methodologies and architectures are lacking. Additionally, reuse and extension of existing code generators exist only for individual parts. A systematic development and reuse based on a code generator product line is still in its inf...

  8. Architectural Drawing

    DEFF Research Database (Denmark)

    Steinø, Nicolai

    2018-01-01

    In a time of computer aided design, computer graphics and parametric design tools, the art of architectural drawing is in a state of neglect. But design and drawing are inseparably linked in ways which often go unnoticed. Essentially, it is very difficult, if not impossible, to conceive of a design...... is that computers can represent graphic ideas both faster and better than most medium-skilled draftsmen, drawing in design is not only about representing final designs. In fact, several steps involving the capacity to draw lie before the representation of a final design. Not only are drawing skills an important...... prerequisite for learning about the nature of existing objects and spaces, and thus to build a vocabulary of design. It is also a prerequisite for both reflecting and communicating about design ideas. In this paper, a taxonomy of notation, reflection, communication and presentation drawing is presented...

  9. Architectural Theatricality

    DEFF Research Database (Denmark)

    Tvedebrink, Tenna Doktor Olsen; Fisker, Anna Marie; Kirkegaard, Poul Henning

    2013-01-01

    In the attempt to improve patient treatment and recovery, researchers focus on applying concepts of hospitality to hospitals. Often these concepts are dominated by hotel-metaphors focusing on host–guest relationships or concierge services. Motivated by a project trying to improve patient treatment...... is known for his writings on theatricality, understood as a holistic design approach emphasizing the contextual, cultural, ritual and social meanings rooted in architecture. Relative hereto, the International Food Design Society recently argued, in a similar holistic manner, that the methodology used...... to provide an aesthetic eating experience includes knowledge on both food and design. Based on a hermeneutic reading of Semper’s theory, our thesis is that this holistic design approach is important when debating concepts of hospitality in hospitals. We use this approach to argue for how ‘food design...

  10. Lab architecture

    Science.gov (United States)

    Crease, Robert P.

    2008-04-01

    There are few more dramatic illustrations of the vicissitudes of laboratory architecture than the contrast between Building 20 at the Massachusetts Institute of Technology (MIT) and its replacement, the Ray and Maria Stata Center. Building 20 was built hurriedly in 1943 as temporary housing for MIT's famous Rad Lab, the site of wartime radar research, and it remained a productive laboratory space for over half a century. A decade ago it was demolished to make way for the Stata Center, an architecturally striking building designed by Frank Gehry to house MIT's computer science and artificial intelligence labs. But in 2004 - just two years after the Stata Center officially opened - the building was criticized for being unsuitable for research and became the subject of still ongoing lawsuits alleging design and construction failures.

  11. Code Modernization of VPIC

    Science.gov (United States)

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite the breakthroughs in the areas of mini-app development, portable performance, and cache-oblivious algorithms the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsics-based vectorization with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learnt. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  12. Development of MCNP interface code in HFETR

    International Nuclear Information System (INIS)

    Qiu Liqing; Fu Rong; Deng Caiyu

    2007-01-01

    In order to describe the HFETR core with the MCNP method, the interface code MCNPIP between HFETR and the MCNP code was developed. This paper introduces the core DXSY and the flowchart of the MCNPIP code, the handling of the compositions of fuel elements, and the requirements on hardware and software. Finally, the MCNPIP code is validated through practical application. (authors)

  13. Architecture for Teraflop Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Breckenridge, A.R.; Haynes, R.A.

    1999-04-09

    Sandia Laboratories' computational scientists are addressing a very important question: How do we get insight from the human combined with the computer-generated information? The answer inevitably leads to using scientific visualization. Going one technology leap further is teraflop visualization, where the computing model and interactive graphics are an integral whole to provide computing for insight. In order to implement our teraflop visualization architecture, all hardware installed or software coded will be based on open modules and dynamic extensibility principles. We will illustrate these concepts with examples in our three main research areas: (1) authoring content (the computer), (2) enhancing precision and resolution (the human), and (3) adding behaviors (the physics).

  14. Detecting non-coding selective pressure in coding regions

    Directory of Open Access Journals (Sweden)

    Blanchette Mathieu

    2007-02-01

    Full Text Available Abstract Background Comparative genomics approaches, where orthologous DNA regions are compared and inter-species conserved regions are identified, have proven extremely powerful for identifying non-coding regulatory regions located in intergenic or intronic regions. However, non-coding functional elements can also be located within coding regions, as is common for exonic splicing enhancers, some transcription factor binding sites, and RNA secondary structure elements affecting mRNA stability, localization, or translation. Since these functional elements are located in regions that are themselves highly conserved because they code for a protein, they generally escape detection by comparative genomics approaches. Results We introduce a comparative genomics approach for detecting non-coding functional elements located within coding regions. Codon evolution is modeled as a mixture of codon substitution models, where each component of the mixture describes the evolution of codons under a specific type of coding selective pressure. We show how to compute the posterior distribution of the entropy and parsimony scores under this null model of codon evolution. The method is applied to a set of growth hormone 1 orthologous mRNA sequences and a known exonic splicing element is detected. The analysis of a set of CORTBP2 orthologous genes reveals a region of several hundred base pairs under strong non-coding selective pressure whose function remains unknown. Conclusion Non-coding functional elements, in particular those involved in post-transcriptional regulation, are likely to be much more prevalent than is currently known. With the numerous genome sequencing projects underway, comparative genomics approaches like that proposed here are likely to become increasingly powerful at detecting such elements.
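    The entropy score mentioned in the method can be illustrated in miniature as per-column Shannon entropy over a codon alignment. This is a simplification (the paper computes a posterior distribution of entropy under a codon substitution mixture model), and the three-species alignment below is made-up data:

```python
import math
from collections import Counter

def codon_columns(seqs):
    """Split equal-length, in-frame aligned coding sequences into codon columns."""
    L = len(seqs[0])
    assert all(len(s) == L for s in seqs) and L % 3 == 0
    return [[s[i:i + 3] for s in seqs] for i in range(0, L, 3)]

def column_entropy(col):
    """Shannon entropy (bits) of the codon distribution in one column:
    0 = perfectly conserved, higher = more variable."""
    n = len(col)
    return -sum((c / n) * math.log2(c / n) for c in Counter(col).values())

# Hypothetical 3-species alignment of a 3-codon stretch (illustrative only).
seqs = ["ATGAAACCC",
        "ATGAAACCA",
        "ATGAAGCCG"]
ent = [column_entropy(col) for col in codon_columns(seqs)]
print([round(abs(e), 3) for e in ent])  # column 1 is fully conserved (entropy 0)
```

    Unusually low entropy at synonymous positions, relative to what a coding-only substitution model predicts, is the signature of the overlapping non-coding constraint the paper looks for.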

  15. The architecture of a modern military health information system.

    Science.gov (United States)

    Mukherji, Raj J; Egyhazy, Csaba J

    2004-06-01

    This article describes a melding of a government-sponsored architecture for complex systems with open systems engineering architecture developed by the Institute of Electrical and Electronics Engineers (IEEE). Our experience in using these two architectures in building a complex healthcare system is described in this paper. The work described shows that it is possible to combine these two architectural frameworks in describing the systems, operational, and technical views of a complex automation system. The advantage in combining the two architectural frameworks lies in the simplicity of implementation and ease of understanding of automation system architectural elements by medical professionals.

  16. Structural elements design manual

    CERN Document Server

    Draycott, Trevor

    2012-01-01

    Gives clear explanations of the logical design sequence for structural elements. The Structural Engineer says: 'The book explains, in simple terms, and with many examples, Code of Practice methods for sizing structural sections in timber, concrete, masonry and steel. It is the combination into one book of section sizing methods in each of these materials that makes this text so useful. ... Students will find this an essential support text to the Codes of Practice in their study of element sizing'.

  17. Relating business intelligence and enterprise architecture - A method for combining operational data with architectural metadata

    NARCIS (Netherlands)

    Veneberg, R.K.M.; Iacob, Maria Eugenia; van Sinderen, Marten J.; Bodenstaff, L.

    Combining enterprise architecture and operational data is complex (especially when considering the actual ‘matching’ of data with enterprise architecture elements), and little has been written on how to do this. In this paper we aim to fill this gap, and propose a method to combine operational data

  18. SUSTAINABLE ARCHITECTURE : WHAT ARCHITECTURE STUDENTS THINK

    OpenAIRE

    SATWIKO, PRASASTO

    2013-01-01

    Sustainable architecture has become a hot issue lately as the impacts of climate change become more intense. Architecture education has responded by integrating knowledge of sustainable design into its curriculum. However, in real life, new buildings keep coming with designs that completely ignore sustainable principles. This paper discusses the results of two national competitions on sustainable architecture targeted at architecture students (conducted in 2012 and 2013). The results a...

  19. Change Impact Analysis of Crosscutting in Software Architectural Design

    NARCIS (Netherlands)

    van den Berg, Klaas

    2006-01-01

    Software architectures should be amenable to changes in user requirements and implementation technology. The analysis of the impact of these changes can be based on traceability of architectural design elements. Design elements have dependencies with other software artifacts but also evolve in time.

  20. Lightweight enterprise architectures

    CERN Document Server

    Theuerkorn, Fenix

    2004-01-01

    STATE OF ARCHITECTURE: Architectural Chaos; Relation of Technology and Architecture; The Many Faces of Architecture; The Scope of Enterprise Architecture; The Need for Enterprise Architecture; The History of Architecture; The Current Environment; Standardization Barriers; The Need for Lightweight Architecture in the Enterprise; The Cost of Technology; The Benefits of Enterprise Architecture; The Domains of Architecture; The Gap between Business and IT; Where Does LEA Fit?; LEA's Framework; Frameworks, Methodologies, and Approaches; The Framework of LEA; Types of Methodologies; Types of Approaches; Actual System Environmen...

  1. Lean Architecture for Agile Software Development

    CERN Document Server

    Coplien, James O

    2010-01-01

    More and more Agile projects are seeking architectural roots as they struggle with complexity and scale - and they're seeking lightweight ways to do it: Still seeking? In this book the authors help you to find your own path; Taking cues from Lean development, they can help steer your project toward practices with longstanding track records; Up-front architecture? Sure. You can deliver an architecture as code that compiles and that concretely guides development without bogging it down in a mass of documents and guesses about the implementation; Documentation? Even a whiteboard diagram, or a CRC

  2. Information architecture for building digital library | Obuh ...

    African Journals Online (AJOL)

    The paper provided an overview of constituent elements of a digital library and explained the underlying information architecture and building blocks for a digital library. It specifically proffered meaning to the various elements or constituents of a digital library system. The paper took a look at the structure of information as a ...

  3. The NIMROD Code

    Science.gov (United States)

    Schnack, D. D.; Glasser, A. H.

    1996-11-01

    NIMROD is a new code system that is being developed for the analysis of modern fusion experiments. It is being designed from the beginning to make the maximum use of massively parallel computer architectures and computer graphics. The NIMROD physics kernel solves the three-dimensional, time-dependent two-fluid equations with neo-classical effects in toroidal geometry of arbitrary poloidal cross section. The NIMROD system also includes a pre-processor, a grid generator, and a post processor. User interaction with NIMROD is facilitated by a modern graphical user interface (GUI). The NIMROD project is using Quality Function Deployment (QFD) team management techniques to minimize re-engineering and reduce code development time. This paper gives an overview of the NIMROD project. Operation of the GUI is demonstrated, and the first results from the physics kernel are given.

  4. Verifying Architectural Design Rules of the Flight Software Product Line

    Science.gov (United States)

    Ganesan, Dharmalingam; Lindvall, Mikael; Ackermann, Chris; McComas, David; Bartholomew, Maureen

    2009-01-01

This paper presents experiences of verifying architectural design rules of the NASA Core Flight Software (CFS) product line implementation. The goal of the verification is to check whether the implementation is consistent with the CFS architectural rules derived from the developer's guide. The results indicate that consistency checking helps (a) identify architecturally significant deviations that eluded code reviews, (b) clarify the design rules to the team, and (c) assess the overall implementation quality. Furthermore, it helps connect business goals to architectural principles, and these to the implementation. This paper is the first step in the definition of a method for analyzing and evaluating product line implementations from an architecture-centric perspective.

  5. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  6. Project Integration Architecture

    Science.gov (United States)

    Jones, William Henry

    2008-01-01

The Project Integration Architecture (PIA) is a distributed, object-oriented, conceptual, software framework for the generation, organization, publication, integration, and consumption of all information involved in any complex technological process in a manner that is intelligible to both computers and humans. In the development of PIA, it was recognized that in order to provide a single computational environment in which all information associated with any given complex technological process could be viewed, reviewed, manipulated, and shared, it is necessary to formulate all the elements of such a process on the most fundamental level. In this formulation, any such element is regarded as being composed of any or all of three parts: input information, some transformation of that input information, and some useful output information. Another fundamental principle of PIA is the assumption that no consumer of information, whether human or computer, can be assumed to have any useful foreknowledge of an element presented to it. Consequently, a PIA-compliant computing system is required to be ready to respond to any questions, posed by the consumer, concerning the nature of the proffered element. In colloquial terms, a PIA-compliant system must be prepared to provide all the information needed to place the element in context. To satisfy this requirement, PIA extends the previously established object-oriented-programming concept of self-revelation and applies it on a grand scale. To enable pervasive use of self-revelation, PIA exploits another previously established object-oriented-programming concept - that of semantic infusion through class derivation. By means of self-revelation and semantic infusion through class derivation, a consumer of information can inquire about the contents of all information entities (e.g., databases and software) and can interact appropriately with those entities. Other key features of PIA are listed.

7. Converter of a continuous code into the Gray code

    International Nuclear Information System (INIS)

    Gonchar, A.I.; TrUbnikov, V.R.

    1979-01-01

Described is a converter of a continuous code into the Gray code, used in a 12-bit precision amplitude-to-digital converter to decrease the digital component of spectrometer differential nonlinearity to ±0.7% over 98% of the measured range. To convert the continuous code corresponding to the input signal amplitude into the Gray code, the converter exploits the regular alternation of ones and zeros in each bit of the Gray code as the pulse count of the continuous code changes. The converter is built from 155-series logic elements; the continuous-code pulse rate at the converter input is 25 MHz.
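The low differential nonlinearity of such a converter rests on the defining property of the binary-reflected Gray code: successive values differ in exactly one bit. A minimal sketch of the standard binary/Gray mapping (the general relation only, not the hardware scheme of the record above):

```python
def binary_to_gray(n: int) -> int:
    """Binary-reflected Gray code: adjacent values differ in exactly one bit."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the mapping by cascading XORs from the most significant bit down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# The single-bit-change property that keeps the digital contribution to
# differential nonlinearity small in a counting-type converter:
for i in range(255):
    step = binary_to_gray(i) ^ binary_to_gray(i + 1)
    assert bin(step).count("1") == 1
```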

  8. The application of diagrams in architectural design

    Directory of Open Access Journals (Sweden)

    Dulić Olivera

    2014-01-01

Diagrams in architecture represent the visualization of the thinking process, or a selective abstraction of concepts or ideas translated into the form of drawings. In addition, they provide insight into ways of thinking about and in architecture, thus creating a balance between the visual and the conceptual. The subject of the research presented in this paper is diagrams as a specific kind of architectural representation, and the possibilities and importance of their application in the design process. Diagrams are almost as old as architecture itself, and they are an element of some of the most important studies of architecture in all periods of history - which results in a large number of different definitions of diagrams, but also very different conceptualizations of their features, functions and applications. Diagrams became part of contemporary architectural discourse during the eighties and nineties of the twentieth century, especially through the work of architects like Bernard Tschumi, Peter Eisenman, Rem Koolhaas, SANAA and others. The use of diagrams in the design process allows the unification of some essential aspects of the profession: architectural representation and the design process, as well as the question of the concept of architectural and urban design at a time of rapid change at all levels of contemporary society. The aim of the research is the analysis of the diagram as a specific medium for processing the large amount of information that the architect must consider and incorporate into the architectural work. On that basis, it is assumed that an architectural diagram allows its creator to identify and analyze specific elements or ideas of physical form, thereby constantly maintaining the integrity of the architectural work as a concept.

  9. Lunar Navigation Architecture Design Considerations

    Science.gov (United States)

    D'Souza, Christopher; Getchius, Joel; Holt, Greg; Moreau, Michael

    2009-01-01

The NASA Constellation Program is aiming to establish a long-term presence on the lunar surface. The Constellation elements (Orion, Altair, Earth Departure Stage, and Ares launch vehicles) will require a lunar navigation architecture for navigation state updates during lunar-class missions. Orion in particular has baselined Earth-based ground direct tracking as the primary source for much of its absolute navigation needs. However, due to the uncertainty in the lunar navigation architecture, the Orion program has had to make certain assumptions on the capabilities of such architectures in order to adequately scale the vehicle design trade space. The following paper outlines lunar navigation requirements, the Orion program assumptions, and the impacts of these assumptions on the lunar navigation architecture design. The selection of potential sites was based upon geometric baselines, logistical feasibility, redundancy, and abort support capability. Simulated navigation covariances mapped to entry interface flight-path-angle uncertainties were used to evaluate knowledge errors. A minimum ground station architecture was identified consisting of Goldstone, Madrid, Canberra, Santiago, Hartebeesthoek, Dongara, Hawaii, Guam, and Ascension Island (or the geometric equivalent).

  10. The plasma automata network (PAN) architecture

    International Nuclear Information System (INIS)

    Cameron-Carey, C.M.

    1991-01-01

Conventional neural networks consist of processing elements which are interconnected according to a specified topology. Typically, the number of processing elements and the interconnection topology are fixed. A neural network's information processing capability lies mainly in the variability of interconnection strengths, which directly influence activation patterns; these patterns represent entities and their interrelationships. Contrast this architecture, with its fixed topology and variable interconnection strengths, against one having dynamic topology and fixed connection strength. This paper reports on this proposed architecture, in which there are no connections between processing elements. Instead, the processing elements form a plasma, exchanging information upon collision. A plasma can be populated with several different types of processing elements, each with its own activation function and self-modification mechanism. The activation patterns that are the plasma's response to stimulation drive natural selection among processing elements, which evolve to optimize performance

  11. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed

    2018-04-08

    Convolutional Sparse Coding (CSC) is a well-established image representation model especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative. Experimental results show that using supervised convolutional learning results in two key advantages. First, we learn more semantically relevant filters in the dictionary and second, we achieve improved image reconstruction on unseen data.

  12. Architectural design decisions

    NARCIS (Netherlands)

    Jansen, Antonius Gradus Johannes

    2008-01-01

    A software architecture can be considered as the collection of key decisions concerning the design of the software of a system. Knowledge about this design, i.e. architectural knowledge, is key for understanding a software architecture and thus the software itself. Architectural knowledge is mostly

  13. Information Integration Architecture Development

    OpenAIRE

    Faulkner, Stéphane; Kolp, Manuel; Nguyen, Duy Thai; Coyette, Adrien; Do, Thanh Tung; 16th International Conference on Software Engineering and Knowledge Engineering

    2004-01-01

    Multi-Agent Systems (MAS) architectures are gaining popularity for building open, distributed, and evolving software required by systems such as information integration applications. Unfortunately, despite considerable work in software architecture during the last decade, few research efforts have aimed at truly defining patterns and languages for designing such multiagent architectures. We propose a modern approach based on organizational structures and architectural description lan...

  14. Fragments of Architecture

    DEFF Research Database (Denmark)

    Bang, Jacob Sebastian

    2016-01-01

Topic 3: “Case studies dealing with the artistic and architectural work of architects worldwide, and the ties between specific artistic and architectural projects, methodologies and products”

  15. Simulation of the Intake and Compression Strokes of a Motored 4-Valve Si Engine with a Finite Element Code Simulation de l'admission et de la compression dans un moteur 4-soupapes AC entraîné à l'aide d'un code de calcul à éléments finis

    Directory of Open Access Journals (Sweden)

    Bailly O.

    2006-12-01

A CFD code, using a mixed finite-volume - finite-element method on tetrahedra, is now available for engine simulations. The code takes into account the displacement of moving walls such as pistons and valves in a fully automatic way: a single mesh is used for the full computation and no intervention by the user is necessary. A fourth-order implicit spatial scheme and a first-order implicit temporal scheme are used. The work presented in this paper is part of a larger program for the validation of this new numerical tool for engine applications. Here, comparisons between computations and experiments for the intake and compression strokes of a four-valve engine were carried out. The experimental investigations were conducted on a single-cylinder four-valve optical research engine. The turbulence intensity, mean velocity components, and tumble and swirl ratios in the combustion chamber are deduced from the LDV measurements. The comparisons between computations and experiments are made on the mean velocity flow field at different locations inside the chamber and for different crank angles. We also present some global comparisons (swirl and tumble ratios). The simulations show excellent agreement with the experiments.

  16. Geochemical computer codes. A review

    International Nuclear Information System (INIS)

    Andersson, K.

    1987-01-01

In this report a review of available codes is performed and some code intercomparisons are also discussed. The number of codes treating natural waters (groundwater, lake water, sea water) is large. Most geochemical computer codes treat equilibrium conditions, although some codes with kinetic capability are available. A geochemical equilibrium model consists of a computer code, solving a set of equations by some numerical method, and a database of the thermodynamic data required for the calculations. There are some codes which treat coupled geochemical and transport modeling. Some of these codes solve the equilibrium and transport equations simultaneously, while others solve the equations separately. The coupled codes require a large computer capacity and have thus far seen limited use. Three code intercomparisons have been found in the literature. It may be concluded that there are many codes available for geochemical calculations, but most of them require a user who is quite familiar with the code. The user also has to know the geochemical system in order to judge the reliability of the results. A high-quality database is necessary to obtain a reliable result. The best results may be expected for the major species of natural waters. For more complicated problems, including trace elements, precipitation/dissolution, adsorption, etc., the results seem to be less reliable. (With 44 refs.) (author)

  17. Temporal Architecture: Poetic Dwelling in Japanese buildings

    Directory of Open Access Journals (Sweden)

    Michael Lazarin

    2014-07-01

Heidegger’s thinking about poetic dwelling and Derrida’s impressions of Freudian estrangement are employed to provide a constitutional analysis of the experience of Japanese architecture, in particular the Japanese vestibule (genkan). This analysis is supplemented by writings by Japanese architects and poets. The principal elements of Japanese architecture are: (1) ma, and (2) en. Ma is usually translated as ‘interval’ because, like the English word, it applies to both space and time. However, in Japanese thinking, it is not so much an either/or, but rather a both/and. In other words, Japanese architecture emphasises the temporal aspect of dwelling in a way that Western architectural thinking usually does not. En means ‘joint, edge, the in-between’: an ambiguous, often asymmetrical spanning of interior and exterior, rather than a demarcation of these regions. Both elements are aimed at producing an experience of temporality and transiency.

  18. A Scalable Architecture of a Structured LDPC Decoder

    Science.gov (United States)

    Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon

    2004-01-01

    We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2dB implementation loss relative to a floating point decoder.
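The replication step that turns a small protograph into a full decoding graph can be sketched as follows. This is a generic lifting with random circulant (cyclic-permutation) blocks; the toy protograph, lift size Z, and shift choices are illustrative assumptions, not the specific design behind the (4096,2048) code above.

```python
import numpy as np

def lift_protograph(B, Z, seed=0):
    """Replace each edge of an (r x n) protograph biadjacency matrix B
    with a random Z x Z cyclic permutation, yielding the (Z*r x Z*n)
    parity-check matrix of a (Z x n, Z x r) structured LDPC code."""
    rng = np.random.default_rng(seed)
    r, n = B.shape
    H = np.zeros((Z * r, Z * n), dtype=np.uint8)
    eye = np.eye(Z, dtype=np.uint8)
    for i in range(r):
        for j in range(n):
            if B[i, j]:
                # a cyclic shift of the identity is one Z x Z permutation block
                H[i * Z:(i + 1) * Z, j * Z:(j + 1) * Z] = np.roll(eye, int(rng.integers(Z)), axis=1)
    return H

# Toy (n=4, r=2) protograph lifted by Z=4 -> checks for a (16, 8) structure.
B = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]], dtype=np.uint8)
H = lift_protograph(B, Z=4)
```

Because every edge becomes a permutation block, the variable- and check-node degrees of the protograph are preserved exactly in the lifted graph, which is what makes the decoder architecture scalable in Z.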

  19. KENO-V code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

The KENO-V code is the current release of the Oak Ridge multigroup Monte Carlo criticality code development. The original KENO, with 16-group Hansen-Roach cross sections and P1 scattering, was one of the first multigroup Monte Carlo codes, and it and its successors have always been a much-used research tool for criticality studies. KENO-V is able to accept large neutron cross section libraries (a 218-group set is distributed with the code) and has a general PN scattering capability. A supergroup feature allows execution of large problems on small computers, but at the expense of increased calculation time and system input/output operations. This supergroup feature is activated automatically by the code in a manner which utilizes as much computer memory as is available. The primary purpose of KENO-V is to calculate the system k_eff, from small bare critical assemblies to large reflected arrays of differing fissile and moderator elements. In this respect KENO-V neither has nor requires the many options and sophisticated biasing techniques of general Monte Carlo codes

  20. Space and Architecture's Current Line of Research? A Lunar Architecture Workshop With An Architectural Agenda.

    Science.gov (United States)

    Solomon, D.; van Dijk, A.

space context that will be useful on Earth on a conceptual and practical level? * In what ways could architecture's field of reference offer building on the Moon (and other celestial bodies) a paradigm shift? In addition to their models and designs, workshop participants will begin authoring a design recommendation for the building of (infra-)structures and habitats on celestial bodies, in particular the Moon and Mars. The design recommendation, a substantiated aesthetic code of conduct (not legally binding), will address long-term planning and incorporate issues of sustainability, durability, bio-diversity, infrastructure, change, and techniques that lend themselves to Earth-bound applications. It will also address the cultural implications that architectural design might have within the context of space exploration. The design recommendation will ultimately be presented for peer review to both the space and architecture communities. What would the endorsement of such a document by the architectural community mean to the space community? The Lunar Architecture Workshop is conceptualised, produced and organised by (in alphabetical order): Alexander van Dijk, Art Race in Space; Barbara Imhof, ESCAPE*spHERE, Vienna University of Technology, Institute for Design and Building Construction, Vienna; Bernard Foing, ESA SMART1 Project Scientist; Susmita Mohanty, MoonFront, LLC; Hans Schartner, Vienna University of Technology, Institute for Design and Building Construction; Debra Solomon, Art Race in Space, Dutch Art Institute; Paul van Susante, Lunar Explorers Society. Workshop locations: ESTEC, Noordwijk, NL and V2_Lab, Rotterdam, NL. Workshop dates: June 3-16, 2002 (a Call for Participation will be made in March-April 2002.)

  1. Future city architecture for optimal living

    CERN Document Server

    Pardalos, Panos

    2015-01-01

This book offers a wealth of interdisciplinary approaches to urbanization strategies in architecture centered on growing concerns about the future of cities and their impacts on essential elements of architectural optimization, livability, energy consumption and sustainability. It portrays the urban condition in architectural terms, as well as the living condition in human terms, both of which can be optimized by mathematical modeling as well as mathematical calculation and assessment. Special features include: new research on the construction of future cities and smart cities; and discussions of sustainability and new technologies designed to advance ideas to future city developments. Graduate students and researchers in architecture, engineering, mathematical modeling, and building physics will be engaged by the contributions written by eminent international experts from a variety of disciplines including architecture, engineering, modeling, optimization, and relat...

  2. Coding Labour

    Directory of Open Access Journals (Sweden)

    Anthony McCosker

    2014-03-01

As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.

  3. Coded aperture subreflector array for high resolution radar imaging

    Science.gov (United States)

    Lynch, Jonathan J.; Herrault, Florian; Kona, Keerti; Virbila, Gabriel; McGuire, Chuck; Wetzel, Mike; Fung, Helen; Prophet, Eric

    2017-05-01

    HRL Laboratories has been developing a new approach for high resolution radar imaging on stationary platforms. High angular resolution is achieved by operating at 235 GHz and using a scalable tile phased array architecture that has the potential to realize thousands of elements at an affordable cost. HRL utilizes aperture coding techniques to minimize the size and complexity of the RF electronics needed for beamforming, and wafer level fabrication and integration allow tiles containing 1024 elements to be manufactured with reasonable costs. This paper describes the results of an initial feasibility study for HRL's Coded Aperture Subreflector Array (CASA) approach for a 1024 element micromachined antenna array with integrated single-bit phase shifters. Two candidate electronic device technologies were evaluated over the 170 - 260 GHz range, GaN HEMT transistors and GaAs Schottky diodes. Array structures utilizing silicon micromachining and die bonding were evaluated for etch and alignment accuracy. Finally, the overall array efficiency was estimated to be about 37% (not including spillover losses) using full wave array simulations and measured device performance, which is a reasonable value at 235 GHz. Based on the measured data we selected GaN HEMT devices operated passively with 0V drain bias due to their extremely low DC power dissipation.
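A toy numerical illustration of why single-bit phase shifters can still support beamforming: quantizing each element's steering phase to the nearest of {0, π} keeps a strong main beam, at the cost of a few dB of gain and a quantization lobe. The uniform linear array, element count, and half-wavelength spacing here are illustrative assumptions, not the tile design described in the record.

```python
import numpy as np

def array_factor(phases, theta, d=0.5):
    """Normalized far-field response of an N-element uniform linear array
    with element spacing d (in wavelengths) and per-element phase settings."""
    n = np.arange(len(phases))
    steer = 2 * np.pi * d * n * np.sin(theta)
    return abs(np.sum(np.exp(1j * (steer + phases)))) / len(phases)

N, target = 32, np.deg2rad(20.0)
ideal = -2 * np.pi * 0.5 * np.arange(N) * np.sin(target)   # exact steering phases
onebit = np.where(np.cos(ideal) >= 0.0, 0.0, np.pi)        # nearest of {0, pi}

gain_ideal = array_factor(ideal, target)   # 1.0 by construction
gain_1bit = array_factor(onebit, target)   # reduced, but still a clear beam
```

The residual gain with one-bit phases is on the order of 2/π in amplitude (roughly 4 dB of loss), which is why single-bit control can be an attractive trade for minimizing RF electronics complexity.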

  4. Home networking architecture for IPv6

    OpenAIRE

    Arkko, Jari; Weil, Jason; Troan, Ole; Brandt, Anders

    2012-01-01

    This text describes evolving networking technology within increasingly large residential home networks. The goal of this document is to define an architecture for IPv6-based home networking while describing the associated principles, considerations and requirements. The text briefly highlights the specific implications of the introduction of IPv6 for home networking, discusses the elements of the architecture, and suggests how standard IPv6 mechanisms and addressing can be employed in home ne...

  5. The Walk-Man Robot Software Architecture

    Directory of Open Access Journals (Sweden)

    Mirko Ferrati

    2016-05-01

A software and control architecture for a humanoid robot is a complex and large project, which involves a team of developers/researchers to be coordinated and requires many hard design choices. If such a project has to be done in a very limited time, i.e., less than 1 year, more constraints are added and concepts such as modular design, code reusability, and API definition need to be used as much as possible. In this work, we describe the software architecture developed for Walk-Man, a robot participant in the DARPA Robotics Challenge. The challenge required the robot to execute many different tasks, such as walking, driving a car, and manipulating objects. These tasks need to be solved by robotics specialists in their corresponding research field, such as humanoid walking, motion planning, or object manipulation. The proposed architecture was developed in 10 months, provided boilerplate code for most of the functionalities required to control a humanoid robot and allowed robotics researchers to produce their control modules for DRC tasks in a short time. Additional capabilities of the architecture include firmware and hardware management, mixing of different middlewares, unreliable network management, and operator control station GUI. All the source code related to the architecture and some control modules have been released as open source projects.

  6. Vectorization and parallelization of a production reactor assembly code

    International Nuclear Information System (INIS)

    Vujic, J.L.; Martin, W.R.; Michigan Univ., Ann Arbor, MI

    1991-01-01

In order to efficiently use the new features of supercomputers, production codes, usually written 10-20 years ago, must be tailored for modern computer architectures. We have chosen to optimize the CPM-2 code, a production reactor assembly code based on the collision probability transport method. Substantial speedup in the execution times was obtained with the parallel/vector version of the CPM-2 code. In addition, we have developed a new transfer probability method, which removes some of the modelling limitations of the collision probability method encoded in the CPM-2 code, and can fully utilize the parallel/vector architecture of a multiprocessor IBM 3090. (author)

  7. Vectorization and parallelization of a production reactor assembly code

    International Nuclear Information System (INIS)

    Vujic, J.L.; Martin, W.R.

    1991-01-01

In order to efficiently use new features of supercomputers, production codes, usually written 10-20 years ago, must be tailored for modern computer architectures. We have chosen to optimize the CPM-2 code, a production reactor assembly code based on the collision probability transport method. Substantial speedups in the execution times were obtained with the parallel/vector version of the CPM-2 code. In addition, we have developed a new transfer probability method, which removes some of the modelling limitations of the collision probability method encoded in the CPM-2 code, and can fully utilize the parallel/vector architecture of a multiprocessor IBM 3090. (author)

  8. Gray Code for Cayley Permutations

    Directory of Open Access Journals (Sweden)

    J.-L. Baril

    2003-10-01

A length-n Cayley permutation p of a totally ordered set S is a length-n sequence of elements from S, subject to the condition that if an element x appears in p then all elements y < x also appear in p. In this paper, we give a Gray code list for the set of length-n Cayley permutations. Two successive permutations in this list differ in at most two positions.
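The objects themselves are easy to enumerate directly from the defining condition (this brute-force enumeration is a sketch for illustration; it does not reproduce the paper's Gray code ordering):

```python
from itertools import product

def cayley_permutations(n):
    """All length-n sequences over {1..n} whose set of values is an
    initial segment {1..k}: if x appears, every y < x appears too."""
    return [p for p in product(range(1, n + 1), repeat=n)
            if set(p) == set(range(1, max(p) + 1))]
```

For n = 1, 2, 3 the counts are 1, 3, 13 (the Fubini, or ordered Bell, numbers); the paper's contribution is an ordering of such a list in which successive permutations differ in at most two positions.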

  9. Advanced Architectures for Astrophysical Supercomputing

    Science.gov (United States)

    Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.

    2010-12-01

    Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding-up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.

  10. ArchE - An Architecture Design Assistant

    Science.gov (United States)

    2007-08-02

Len Bass, August 2, 2007. What is ArchE? ArchE is a software architecture design assistant, which: Takes quality and

  11. A data acquisition architecture for the SSC

    International Nuclear Information System (INIS)

    Partridge, R.

    1990-01-01

An SSC data acquisition architecture applicable to high-p_T detectors is described. The architecture is based upon a small set of design principles that were chosen to simplify communication between data acquisition elements while providing the required level of flexibility and performance. The architecture features an integrated system for data collection, event building, and communication with a large processing farm. The interface to the front end electronics system is also discussed. A set of design parameters is given for a data acquisition system that should meet the needs of high-p_T detectors at the SSC

  12. Architecture-driven Migration of Legacy Systems to Cloud-enabled Software

    DEFF Research Database (Denmark)

    Ahmad, Aakash; Babar, Muhammad Ali

    2014-01-01

of legacy systems to cloud computing. The framework leverages software reengineering concepts that aim to recover the architecture from legacy source code. The framework then exploits software evolution concepts to support architecture-driven migration of legacy systems to cloud-based architectures. The Legacy-to-Cloud Migration Horseshoe comprises four processes: (i) architecture migration planning, (ii) architecture recovery and consistency, (iii) architecture transformation and (iv) architecture-based development of cloud-enabled software. We aim to discover, document and apply the migration

  13. Evaluation of computational endomicroscopy architectures for minimally-invasive optical biopsy

    Science.gov (United States)

    Dumas, John P.; Lodhi, Muhammad A.; Bajwa, Waheed U.; Pierce, Mark C.

    2017-02-01

    We are investigating compressive sensing architectures for applications in endomicroscopy, where the narrow diameter probes required for tissue access can limit the achievable spatial resolution. We hypothesize that the compressive sensing framework can be used to overcome the fundamental pixel number limitation in fiber-bundle based endomicroscopy by reconstructing images with more resolvable points than fibers in the bundle. An experimental test platform was assembled to evaluate and compare two candidate architectures, based on introducing a coded amplitude mask at either a conjugate image or Fourier plane within the optical system. The benchtop platform consists of a common illumination and object path followed by separate imaging arms for each compressive architecture. The imaging arms contain a digital micromirror device (DMD) as a reprogrammable mask, with a CCD camera for image acquisition. One arm has the DMD positioned at a conjugate image plane ("IP arm"), while the other arm has the DMD positioned at a Fourier plane ("FP arm"). Lenses were selected and positioned within each arm to achieve an element-to-pixel ratio of 16 (230,400 mask elements mapped onto 14,400 camera pixels). We discuss our mathematical model for each system arm and outline the importance of accounting for system non-idealities. Reconstruction of a 1951 USAF resolution target using optimization-based compressive sensing algorithms produced images with higher spatial resolution than bicubic interpolation for both system arms when system non-idealities are included in the model. Furthermore, images generated with image plane coding appear to exhibit higher spatial resolution, but more noise, than images acquired through Fourier plane coding.
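
    The measure-with-a-coded-mask, then reconstruct-by-optimization pipeline described above can be sketched in a few lines. The following is a minimal sketch, assuming a ±1 (DMD-like) mask model and ISTA as the sparse solver; the sizes, seed, and solver choice are illustrative assumptions, not the paper's actual system model.

```python
import numpy as np

# Toy compressive-sensing recovery, loosely analogous to coded-mask
# endomicroscopy: take fewer coded measurements than unknowns, then
# reconstruct by l1-regularized optimization (ISTA).

rng = np.random.default_rng(0)
n, m, k = 64, 32, 4            # unknowns, measurements, nonzeros

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.choice([-2.0, -1.0, 1.0, 2.0], size=k)

A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # +/-1 mask rows
y = A @ x_true                                          # coded measurements

# ISTA: gradient step on ||Ax - y||^2 / 2, then soft thresholding.
lam = 0.01
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(3000):
    x = x - (A.T @ (A @ x - y)) / L
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

    With far fewer measurements than unknowns, the l1 penalty is what makes the reconstruction well posed; this is the same principle that lets the bench system resolve more points than the fiber bundle has fibers.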

  14. A Framework for Reverse Engineering Large C++ Code Bases

    NARCIS (Netherlands)

    Telea, Alexandru; Byelas, Heorhiy; Voinea, Lucian

    2009-01-01

    When assessing the quality and maintainability of large C++ code bases, tools are needed for extracting several facts from the source code, such as: architecture, structure, code smells, and quality metrics. Moreover, these facts should be presented in such ways so that one can correlate them and

  15. A Framework for Reverse Engineering Large C++ Code Bases

    NARCIS (Netherlands)

    Telea, Alexandru; Byelas, Heorhiy; Voinea, Lucian

    2008-01-01

    When assessing the quality and maintainability of large C++ code bases, tools are needed for extracting several facts from the source code, such as: architecture, structure, code smells, and quality metrics. Moreover, these facts should be presented in such ways so that one can correlate them and

  16. Design requirements of communication architecture of SMART safety system

    International Nuclear Information System (INIS)

    Park, H. Y.; Kim, D. H.; Sin, Y. C.; Lee, J. Y.

    2001-01-01

    To develop the communication network architecture of the SMART safety system, evaluation elements for reliability and performance factors were extracted from commercial networks and classified by required level according to importance. Predictable determinacy, a status- and fixed-based architecture, separation and isolation from other systems, high reliability, and verification and validation are introduced as the essential requirements of a safety system communication network. Based on the suggested requirements, optical cable, star topology, synchronous transmission, point-to-point physical links, connection-oriented logical links, and MAC (medium access control) with fixed allocation are selected as the design elements. The proposed architecture will be applied as the basic communication network architecture of the SMART safety system.

  17. Latest improvements on TRACPWR six-equations thermohydraulic code

    International Nuclear Information System (INIS)

    Rivero, N.; Batuecas, T.; Martinez, R.; Munoz, J.; Lenhardt, G.; Serrano, P.

    1999-01-01

    The paper presents the latest improvements to TRACPWR, aimed at adapting the code to present trends in computer platforms, architectures and training requirements, as well as extending the scope of the code and its applicability to technologies other than the Westinghouse PWR. First, the major features of TRACPWR as a best-estimate and real-time simulation code are summarized; then the areas where TRACPWR is being improved are presented. These areas comprise: (1) Architecture: integrating the TRACPWR and RELAP5 codes, (2) Code scope enhancement: modelling Mid-Loop operation, (3) Code speed-up: applying parallelization techniques, (4) Code platform downsizing: porting to the Windows NT platform, (5) On-line performance: allowing simulation initialisation from a Plant Process Computer, and (6) Code scope extension: using the code for modelling VVER and PHWR technology. (author)

  18. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of a digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service-provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
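
    The waveform-coding branch mentioned above can be illustrated with mu-law companding, the idea behind G.711-style telephone speech coding. The continuous mu-law formula below is standard; the uniform 8-bit quantizer and the synthetic test signal are simplifying assumptions, not the exact G.711 segment encoding.

```python
import numpy as np

# Minimal waveform-coding sketch: mu-law companding followed by
# uniform 8-bit quantization of the companded signal.

MU = 255.0

def mulaw_encode(x):
    """Compress samples in [-1, 1] with the continuous mu-law curve."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mulaw_decode(y):
    """Invert the mu-law curve back to linear amplitude."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

def quantize(y, bits=8):
    """Uniform quantization of the companded signal to 2**bits levels."""
    levels = 2 ** bits
    return np.round((y + 1.0) / 2.0 * (levels - 1)) / (levels - 1) * 2.0 - 1.0

# Round-trip a decaying sinusoid standing in for a speech segment.
t = np.linspace(0.0, 1.0, 8000, endpoint=False)
x = 0.5 * np.sin(2 * np.pi * 200 * t) * np.exp(-2 * t)
x_hat = mulaw_decode(quantize(mulaw_encode(x)))
snr_db = 10 * np.log10(np.mean(x**2) / np.mean((x - x_hat) ** 2))
```

    Companding before quantization is what keeps the signal-to-noise ratio roughly constant over the wide dynamic range of speech, which a plain uniform quantizer cannot do.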

  19. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...
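
    A Tanner graph is simply a bipartite view of a parity-check matrix: variable nodes are columns, check nodes are rows. As a hedged, generic illustration of that membership test — using the classic (7,4) Hamming parity-check matrix as a stand-in, not the article's affine variety component codes — a codeword check is:

```python
import numpy as np

# Parity-check view of a Tanner graph: H rows are check nodes, columns
# are variable nodes; c is a codeword iff H c = 0 (mod 2).

H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])

def is_codeword(c):
    """True when every check node (row of H) is satisfied mod 2."""
    return not np.any(H @ np.asarray(c) % 2)

valid = is_codeword([0, 0, 0, 0, 0, 0, 0])
```

    The Tanner construction in the article replaces each check node of the graph with a full component code, which is how optimality can be obtained from simple graphs.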

  20. Fast underdetermined BSS architecture design methodology for real time applications.

    Science.gov (United States)

    Mopuri, Suresh; Reddy, P Sreenivasa; Acharyya, Amit; Naik, Ganesh R

    2015-01-01

    In this paper, we propose a high-speed architecture design methodology for the Under-determined Blind Source Separation (UBSS) algorithm using our recently proposed high-speed Discrete Hilbert Transform (DHT), targeting real-time applications. In the UBSS algorithm, unlike typical BSS, the number of sensors is less than the number of sources, which is of more interest in real-time applications. The DHT architecture has been implemented based on a sub-matrix multiplication method to compute an M-point DHT, which uses an N-point architecture recursively, where M is an integer multiple of N. The DHT architecture and the state-of-the-art architecture are coded in VHDL for a 16-bit word length, and ASIC implementation is carried out using UMC 90-nm technology @ VDD = 1 V and a 1 MHz clock frequency. The implementation and experimental comparison results show that the proposed DHT design is two times faster than the state-of-the-art architecture.
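
    As a software reference for what a hardware DHT block computes, the standard FFT-based construction of the analytic signal can be sketched as follows. This does not reproduce the paper's sub-matrix/recursive hardware scheme; it is only the textbook definition of the transform being accelerated.

```python
import numpy as np

# FFT-based discrete Hilbert transform via the analytic signal:
# zero out negative frequencies, double positive ones, inverse FFT.

def discrete_hilbert(x):
    """Return the Hilbert transform of a real 1-D signal."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)   # analytic signal x + j*H{x}
    return analytic.imag

# Sanity check: the Hilbert transform of cos is sin.
n = np.arange(256)
ht = discrete_hilbert(np.cos(2 * np.pi * 8 * n / 256))
```

    For a tone on an exact FFT bin, as here, the result matches the ideal transform to floating-point precision.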

  1. ISLAMIZATION OF CONTEMPORARY ARCHITECTURE: SHIFTING THE PARADIGM OF ISLAMIC ARCHITECTURE

    Directory of Open Access Journals (Sweden)

    Mustapha Ben- Hamouche

    2012-03-01

    Full Text Available Islamic architecture is often thought of as a history course and thus finds its material limited to the cataloguing and studying of the legacies of successive empires or the various geographic regions of the Islamic world. In practice, adherent professionals tend to reproduce high styles such as Umayyad, Abbasid, Fatimid, Ottoman, etc., or recycle well-known elements such as minarets, courtyards, and mashrabiyyahs. This approach, endorsed by the present comprehensive Islamic revival, is believed to be the way to defend and revitalize the identity of Muslim societies that was initially affected by colonization and is now being offended by globalization. However, this approach often clashes with contemporary trends in architecture that do not necessarily oppose the essence of Islamic architecture. Furthermore, it sometimes leads to an erroneous belief that consists of relating a priori forms to Islam, which clashes with the timeless and universal character of the Islamic religion. The key question to be asked, then, is: beyond this historicist view, what would an “Islamic architecture” of nowadays be that originates from the essence of Islam and responds to the contemporary conditions, needs and aspirations of present Muslim societies and individuals? To what extent can Islamic architecture benefit from modern progress and contemporary thought in resurrecting itself without losing its essence? The hypothesis of the study is that, just as early Muslim architecture started from the adoption, use and re-use of early pre-Islamic architectures before reaching originality, this process, called Islamization, could also take place nowadays with contemporary thought that is mostly developed in Western and non-Islamic environments. Mechanisms in Islam that allowed the “absorption” of pre-existing civilizations should thus structure the Islamization approach and serve scholars and professionals in reaching the new Islamic architecture. The

  2. Monte Carlo simulations on SIMD computer architectures

    International Nuclear Information System (INIS)

    Burmester, C.P.; Gronsky, R.; Wille, L.T.

    1992-01-01

    In this paper, algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique on single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next-nearest, and long-range screened Coulomb interactions on the SIMD-architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.
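
    The lattice-partitioning idea mentioned above can be sketched with a checkerboard (two-color) Metropolis update for the nearest-neighbour Ising model: all sites of one sublattice are updated simultaneously, exactly the kind of data-parallel step a SIMD machine executes in lockstep. The lattice size, temperature, and sweep count below are illustrative choices, not the paper's benchmark configuration.

```python
import numpy as np

# Checkerboard Metropolis sweeps for the 2-D Ising model with
# periodic boundaries; each colour's sites update in parallel.

rng = np.random.default_rng(1)
L, T, sweeps = 32, 1.0, 200          # T below Tc ~ 2.269: ordered phase
spins = np.ones((L, L))              # start fully magnetized

i, j = np.indices((L, L))
checker = (i + j) % 2                # 0/1 sublattice colouring

for _ in range(sweeps):
    for colour in (0, 1):
        # Sum of the four nearest neighbours (periodic boundaries).
        nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
               + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nbr       # energy cost of flipping each spin
        accept = rng.random((L, L)) < np.exp(-dE / T)
        flip = accept & (checker == colour)
        spins = np.where(flip, -spins, spins)

magnetization = abs(spins.mean())
```

    The checkerboard split is what makes the simultaneous update valid: within one colour, no two updated spins are neighbours, so detailed balance is preserved.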

  3. Performance Analysis of Multiradio Transmitter with Polar or Cartesian Architectures Associated with High Efficiency Switched-Mode Power Amplifiers (invited paper

    Directory of Open Access Journals (Sweden)

    F. Robert

    2010-12-01

    Full Text Available This paper deals with wireless multi-radio transmitter architectures operating in the frequency band of 800 MHz - 6 GHz. As a consequence of the constant evolution of communication systems, mobile transmitters must be able to operate in different frequency bands and modes according to existing standards' specifications. The concept of a unique multi-radio architecture is an evolution of the multistandard transceiver, which is characterized by a parallelization of circuits for each standard. The multi-radio concept optimizes silicon area and power consumption. Transmitter architectures using sampling techniques and baseband ΣΔ or PWM coding of signals before their amplification appear as good candidates for multi-radio transmitters for several reasons. They allow the use of high-efficiency power amplifiers such as switched-mode PAs. They are highly flexible and easy to integrate because of their digital nature. But when transmitter efficiency is considered, many elements have to be taken into account: signal coding efficiency, PA efficiency, and the RF filter. This paper investigates the interest of these architectures for a multi-radio transmitter able to support existing wireless communications standards between 800 MHz and 6 GHz. It evaluates and compares the possible architectures for the WiMAX and LTE standards in terms of signal quality and transmitter power efficiency.
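
    The baseband ΣΔ coding mentioned above can be sketched with a first-order sigma-delta modulator: it turns a sample stream into a 1-bit (±1) stream whose local average tracks the input, which is what allows a switched-mode PA to amplify it. A single integrator loop is the simplest possible sketch; the transmitter coders the paper evaluates are more elaborate.

```python
import numpy as np

# First-order sigma-delta modulator: integrate the error between the
# input and the previous output bit, then output the sign.

def sigma_delta(x):
    """1-bit sigma-delta encoding of samples in (-1, 1)."""
    integrator = 0.0
    bits = np.empty_like(x)
    for k, sample in enumerate(x):
        integrator += sample - (bits[k - 1] if k else 0.0)
        bits[k] = 1.0 if integrator >= 0.0 else -1.0
    return bits

# For a DC input, the bit-stream duty cycle converges to the input level.
x = np.full(20000, 0.3)
bits = sigma_delta(x)
```

    Because the quantization noise is pushed to high frequencies, the RF filter after the switched-mode PA can recover the wanted signal; that filter's contribution to overall efficiency is exactly one of the elements the paper weighs.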

  4. Aztheca Code

    International Nuclear Information System (INIS)

    Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.

    2017-09-01

    This paper presents the Aztheca code, which comprises mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated against plant data, as well as against predictions from the manufacturer when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and with the Aztheca model. The results show that both RELAP-5 and the Aztheca code are able to adequately predict the behavior of the reactor. (Author)

  5. Modeling Architectural Patterns’ Behavior Using Architectural Primitives

    NARCIS (Netherlands)

    Waqas Kamal, Ahmad; Avgeriou, Paris

    2008-01-01

    Architectural patterns have an impact on both the structure and the behavior of a system at the architecture design level. However, it is challenging to model patterns’ behavior in a systematic way because modeling languages do not provide the appropriate abstractions and because each pattern

  6. Vocable Code

    DEFF Research Database (Denmark)

    Soon, Winnie; Cox, Geoff

    2018-01-01

    a computational and poetic composition for two screens: on one of these, texts and voices are repeated and disrupted by mathematical chaos, together exploring the performativity of code and language; on the other, is a mix of a computer programming syntax and human language. In this sense queer code can...... be understood as both an object and subject of study that intervenes in the world’s ‘becoming' and how material bodies are produced via human and nonhuman practices. Through mixing the natural and computer language, this article presents a script in six parts from a performative lecture for two persons...

  7. NSURE code

    International Nuclear Information System (INIS)

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, the governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault and their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a User's manual, describing simulation procedures, input data preparation, output and example test cases.

  8. Fault-tolerant architectures for superconducting qubits

    International Nuclear Information System (INIS)

    DiVincenzo, David P

    2009-01-01

    In this short review, I draw attention to new developments in the theory of fault tolerance in quantum computation that may give concrete direction to future work in the development of superconducting qubit systems. The basics of quantum error-correction codes, which I will briefly review, have not significantly changed since their introduction 15 years ago. But an interesting picture has emerged of an efficient use of these codes that may put fault-tolerant operation within reach. It is now understood that two-dimensional surface codes, close relatives of the original toric code of Kitaev, can be adapted, as shown by Raussendorf and Harrington, to effectively perform logical gate operations in a very simple planar architecture, with error thresholds for fault-tolerant operation simulated to be 0.75%. This architecture uses topological ideas in its functioning, but it is not 'topological quantum computation': there are no non-abelian anyons in sight. I offer some speculations on the crucial pieces of superconducting hardware that could be demonstrated in the next couple of years that would be clear stepping stones towards this surface-code architecture.
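
    The measure-syndrome-then-correct loop that a surface code runs at scale has a tiny classical analogue: the three-bit repetition code, whose two parity checks play the role of stabilizer measurements. The sketch below is that analogue only, an assumption-free toy, not a surface-code decoder.

```python
# Classical toy of stabilizer-style error correction: two parity checks
# locate any single bit flip on a three-bit codeword, which is then
# corrected without ever reading the encoded value directly.

def syndrome(bits):
    """Two parity checks, analogous to Z1Z2 and Z2Z3 stabilizers."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Flip the single bit identified by the syndrome, if any."""
    fix = {(1, 0): 0, (1, 1): 1, (0, 1): 2}
    s = syndrome(bits)
    if s in fix:
        bits = list(bits)
        bits[fix[s]] ^= 1
        bits = tuple(bits)
    return tuple(bits)

noisy = (0, 1, 0)          # encoded 0 with a flip on the middle bit
repaired = correct(noisy)
```

    A surface code replaces these two checks with a planar lattice of local four-body stabilizers, which is what makes it compatible with the 2D layouts natural to superconducting hardware.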

  9. The Aster code

    International Nuclear Information System (INIS)

    Delbecq, J.M.

    1999-01-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D division of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss-of-load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  10. A Reference Architecture for Space Information Management

    Science.gov (United States)

    Mattmann, Chris A.; Crichton, Daniel J.; Hughes, J. Steven; Ramirez, Paul M.; Berrios, Daniel C.

    2006-01-01

    We describe a reference architecture for space information management systems that elegantly overcomes the rigid design of common information systems in many domains. The reference architecture consists of a set of flexible, reusable, independent models and software components that function in unison, but remain separately managed entities. The main guiding principle of the reference architecture is to separate the various models of information (e.g., data, metadata, etc.) from implemented system code, allowing each to evolve independently. System modularity, systems interoperability, and dynamic evolution of information system components are the primary benefits of the design of the architecture. The architecture requires the use of information models that are substantially more advanced than those used by the vast majority of information systems. These models are more expressive and can be more easily modularized, distributed and maintained than simpler models, e.g., configuration files and data dictionaries. Our current work focuses on formalizing the architecture within a CCSDS Green Book and evaluating the architecture within the context of the C3I initiative.

  11. Simply architecture or bioclimatic architecture?; Arquitectura bioclimatica o simplemente Arquitectura?

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Torres, Juan Manuel [Universidad de Guanajuato (Mexico)

    2006-10-15

    Bioclimatic architecture is architecture that profits from its position in the environment and from its architectural elements to take advantage of the climate, with the aim of reaching internal thermal comfort without using mechanical systems. This article recounts the history of this singular kind of architecture over the centuries, and also emphasizes the use of sunlight to achieve the desired thermal well-being in buildings. [Spanish] The type of architecture that takes advantage of its placement in the surroundings and of its architectural elements to exploit the climate, with the goal of achieving interior thermal comfort without using mechanical systems, is called bioclimatic. This article discusses the history of this very singular type of architecture over the centuries, and also emphasizes sunlight as a very efficient means through which buildings can be designed to achieve the desired thermal well-being.

  12. Religious architecture: anthropological perspectives

    NARCIS (Netherlands)

    Verkaaik, O.

    2013-01-01

    Religious Architecture: Anthropological Perspectives develops an anthropological perspective on modern religious architecture, including mosques, churches and synagogues. Borrowing from a range of theoretical perspectives on space-making and material religion, this volume looks at how religious

  13. Avionics Architecture for Exploration

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of the AES Avionics Architectures for Exploration (AAE) project is to develop a reference architecture that is based on standards and that can be scaled and...

  14. RATS: Reactive Architectures

    National Research Council Canada - National Science Library

    Christensen, Marc

    2004-01-01

    This project had two goals: To build an emulation prototype board for a tiled architecture and to demonstrate the utility of a global inter-chip free-space photonic interconnection fabric for polymorphous computer architectures (PCA...

  15. Rhein-Ruhr architecture

    DEFF Research Database (Denmark)

    2002-01-01

    Catalogue for the exhibition 'Rhein - Ruhr architecture', Meldahls Smedie, 15 March - 28 April 2002. 99 pages.

  16. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    This report contains the evaluation and documentation of the Coding Class project. The Coding Class project was initiated in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the volunteer association... Coding Pirates. The report was written by Mikala Hansbøl, Docent in digital learning resources and research coordinator of the research and development environment Digitalisering i Skolen (DiS), Institut for Skole og Læring, Professionshøjskolen Metropol; and Stine Ejsing-Duun, Associate Professor in learning technology, interaction design..., design thinking and design pedagogy, Forskningslab: It og Læringsdesign (ILD-LAB), Institut for kommunikation og psykologi, Aalborg University, Copenhagen. We followed and carried out the evaluation and documentation of the Coding Class project from November 2016 to May 2017...

  17. Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.

  18. ANIMAL code

    International Nuclear Information System (INIS)

    Lindemuth, I.R.

    1979-01-01

    This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code, and presents ANIMAL's physical model. Temporal and spatial finite-difference equations are formulated in a manner that facilitates implementation of the algorithm. The functions of the algorithm's FORTRAN subroutines and variables are outlined.

  19. Network Coding

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 15; Issue 7. Network Coding. K V Rashmi Nihar B Shah P Vijay Kumar. General Article Volume 15 Issue 7 July 2010 pp 604-621. Fulltext. Click here to view fulltext PDF. Permanent link: https://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621 ...

  20. MCNP code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States which is available in the public domain. The present code is the direct descendant of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids.

  1. Expander Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 10; Issue 1. Expander Codes - The Sipser–Spielman Construction. Priti Shankar. General Article Volume 10 ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science Bangalore 560 012, India.

  2. An Evaluation of Automated Code Generation with the PetriCode Approach

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge

    2014-01-01

    Automated code generation is an important element of model-driven development methodologies. We have previously proposed an approach for code generation based on Coloured Petri Net models annotated with textual pragmatics for the network protocol domain. In this paper, we present and evaluate three important properties of our approach: platform independence, code integratability, and code readability. The evaluation shows that our approach can generate code for a wide range of platforms which is integratable and readable.

  3. Accuracy Test of Software Architecture Compliance Checking Tools – Test Instruction

    NARCIS (Netherlands)

    Pruijt, Leo; van der Werf, J.M.E.M.|info:eu-repo/dai/nl/36950674X; Brinkkemper., Sjaak|info:eu-repo/dai/nl/07500707X

    2015-01-01

    Software Architecture Compliance Checking (SACC) is an approach to verify the conformance of implemented program code to high-level models of architectural design. Static SACC focuses on the modular software architecture and on the existence of rule-violating dependencies between modules. Accurate tool
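
    The core of static SACC — comparing extracted dependencies against allowed architectural rules — can be sketched in a few lines. The module names and rule format below are invented for illustration; real SACC tools extract the dependency facts from source code automatically.

```python
# Minimal static architecture-compliance check: any extracted module
# dependency that is not covered by an allowed rule is a violation.

allowed = {("ui", "logic"), ("logic", "data")}       # intended layering
extracted = [("ui", "logic"), ("logic", "data"),
             ("ui", "data"),                          # layer-skipping call
             ("data", "logic")]                       # upward dependency

violations = [dep for dep in extracted if dep not in allowed]
```

    Accuracy in a real tool hinges on the extraction step: missed or spurious dependencies directly become false negatives or false positives in the violation report, which is precisely what the test instruction above measures.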

  4. Space Internet Architectures and Technologies for NASA Enterprises

    Science.gov (United States)

    Bhasin, Kul; Hayden, Jeffrey L.

    2001-01-01

    NASA's future communications services will be supplied through a space communications network that mirrors the terrestrial Internet in its capabilities and flexibility. The notional requirements for future data gathering and distribution by this Space Internet have been gathered from NASA's Earth Science Enterprise (ESE), the Human Exploration and Development in Space (HEDS), and the Space Science Enterprise (SSE). This paper describes a communications infrastructure for the Space Internet, the architectures within the infrastructure, and the elements that make up the architectures. The architectures meet the requirements of the enterprises beyond 2010 with Internet-compatible technologies and functionality. The elements of an architecture include the backbone, access, inter-spacecraft and proximity communication parts. From the architectures, technologies have been identified which have the most impact and are critical for the implementation of the architectures.

  5. An Embodied Architecture

    Directory of Open Access Journals (Sweden)

    Frances Downing

    2012-10-01

    it is our body boundary. The “flesh” or the lived body (Merleau-Ponty, 1968) is, moreover, an in-between concept that articulates the subjective mind to the objective world. It bridges the boundaries separating inside from outside. Thus, it could act as a metaphor for introducing the notion of edge in architectural place. The edge itself, then, embodies the embodied being. Buildings have boundaries of foundation, wall, or roof, parts of which could be thought of as the “skin.” In today’s practice, the various skins of a building have become more complicated and porous as the field of architecture extends itself into “systemic” conditions, within and without. It follows, then, that the body survives the interaction and communication between mind and the external world if it inhabits the edge of place, embodying localized boundary metaphors. Architecture is beginning the process of aligning itself with a new moral code, one that is inclusive of our biological reality, the embodiment of ideas, systemic evolution, and ecological necessities. This paper is situated within this new moral code of systemic ecological and biological interactions.

  6. Vital architecture, slow momentum policy

    DEFF Research Database (Denmark)

    Braae, Ellen Marie

    2010-01-01

    A reflection on the relation between Danish landscape architecture policy and the statements made through current landscape architectural project.

  7. The analysis of cultural architectural trends in Crisan locality

    Directory of Open Access Journals (Sweden)

    SELA Florentina

    2010-09-01

    The paper presents data on the identification and analysis of traditional architectural elements in Crisan locality, where tourism activity is in continuous development. The field research (during November 2007) enabled us to develop a qualitative and quantitative analysis in terms of the identification of traditional architecture elements, their conservation status, the frequency of use of traditional building materials, decorative elements and specific colors used in construction architecture. Further, based on the collected data, a chart was produced showing the distribution of the Traditional Architecture Index (TAI) against the distance from the center of Crisan locality; it shows that in Crisan the houses were and are built without taking into account any rule, thus destroying the traditional architecture.

  8. Energy and architecture. [Denmark]; Energi + arkitektur

    Energy Technology Data Exchange (ETDEWEB)

    Lehrskov, H. [Ingenioerhoejskolen i Aarhus, Aarhus (Denmark); Oehlenschlaeger, R. [AplusB, Aarhus (Denmark); Kappel, K. [Solar City Copenhagen, Copenhagen (Denmark); Kleis, B. [Arkitekturformidling.dk, Vanloese (Denmark); Klint, J. [Kuben Management, Aarhus (Denmark); Vejsig Pedersen, P. [Cenergia, Herlev (Denmark)

    2011-07-01

    The book presents the best examples of Danish energy-oriented architecture with a focus on architectural and energy measures in the integrated design process, resulting in architectural quality. The book falls in two parts: first an introduction to the challenges and tools of low-energy building, then a catalog of a wide range of building projects in the categories of housing, business, education, institutions and sports. The book contains examples of new buildings that, as a minimum, meet the requirements of low-energy class LE1 of the building code BR08. It also contains suggestions for renovation projects that meet LE2 of BR08, as the energy optimization of the existing building stock is a pressing task with great constructional and aesthetic challenges. The selected projects were designed and built in the period 2009 to 2011 and include both everyday architecture, created in a highly competitive economic environment, and more exclusive development projects. The objectives of the projects are often higher than required by the building code, and in many projects measurements were made to find out what works. These examples show that the stricter energy requirements can serve as inspiration for a holistic architecture and contribute to a paradigm shift in the cooperation process between the project parties. But it is also clear that a significant reduction of energy consumption requires a focused commitment from all players in the construction industry - clients, consultants, contractors and building product manufacturers. (LN)

  9. Tourists' Transformation Experience: From Destination Architecture to Identity Formation

    DEFF Research Database (Denmark)

    Ye, Helen Yi; Tussyadiah, Iis

    2010-01-01

    Today’s tourists seek unique destinations that they can associate with their self-identity in a profound way. It is meaningful for destinations to design unique physical elements that offer transformational travel experiences. This study aims at identifying how tourists encounter architecture...... in a destination and whether architecture facilitates tourists’ self-transformation. Based on a narrative structure analysis through deconstruction of travel blog posts, the results suggest that tourists perceive the architectural landscape as an important feature that reflects a destination’s identity. Four different interaction...

  10. Business model driven service architecture design for enterprise application integration

    OpenAIRE

    Gacitua-Decar, Veronica; Pahl, Claus

    2008-01-01

    Increasingly, organisations are using a Service-Oriented Architecture (SOA) as an approach to Enterprise Application Integration (EAI), which is required for the automation of business processes. This paper presents an architecture development process which guides the transition from business models to a service-based software architecture. The process is supported by business reference models and patterns. Firstly, the business process models are enhanced with domain model elements, applicat...

  11. Requirement analysis and architecture of data communication system for integral reactor

    International Nuclear Information System (INIS)

    Jeong, K. I.; Kwon, H. J.; Park, J. H.; Park, H. Y.; Koo, I. S.

    2005-05-01

    When digitalizing the Instrumentation and Control (I and C) systems in Nuclear Power Plants (NPP), a communication network is required for exchanging the digitalized data between I and C equipment in an NPP. A requirements analysis and an analysis of design elements and techniques are required for the design of a communication network. Through the requirements analysis of code and regulation documents such as NUREG/CR-6082, section 7.9 of NUREG-0800, IEEE Standard 7-4.3.2 and IEEE Standard 603, the extracted requirements can be used as a design basis and design concept for a detailed design of a communication network in the I and C system of an integral reactor. Design elements and techniques such as physical topology, protocol, transmission media and interconnection devices should be considered in designing a communication network. Each design element and technique should be analyzed and evaluated as a portion of the integrated communication network design. In this report, the basic design requirements related to the design of the communication network are investigated using the code and regulation documents, and an analysis of the design elements and techniques is performed. Based on this investigation and analysis, an overall architecture comprising the safety communication network and the non-safety communication network is proposed for an integral reactor.

  12. Panda code

    International Nuclear Information System (INIS)

    Altomare, S.; Minton, G.

    1975-02-01

    PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)
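    The abstract above describes a two-group diffusion code with criticality search. As a hedged illustration of the underlying numerics (not PANDA itself, whose two-group equations, feedback models and search facility are far richer), the sketch below solves the simplest analogue: a one-group, one-dimensional bare-slab diffusion eigenvalue problem by power iteration. All cross-section values are invented for demonstration.

```python
import numpy as np

def k_effective_slab(width, n=200, D=1.0, sigma_a=0.07, nu_sigma_f=0.08):
    """Power iteration for k_eff of a bare 1-D slab, one-group diffusion,
    zero-flux boundary conditions: -D phi'' + sigma_a phi = (1/k) nu_sigma_f phi.
    A toy analogue of two-group codes such as PANDA."""
    h = width / (n + 1)
    # Finite-difference loss operator A (tridiagonal): leakage + absorption
    main = np.full(n, 2 * D / h**2 + sigma_a)
    off = np.full(n - 1, -D / h**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    phi = np.ones(n)
    k = 1.0
    for _ in range(500):
        src = nu_sigma_f * phi / k          # fission source for this outer iteration
        phi_new = np.linalg.solve(A, src)   # invert the loss operator
        k_new = k * phi_new.sum() / phi.sum()  # update k from the fission-source ratio
        converged = abs(k_new - k) < 1e-8
        k, phi = k_new, phi_new
        if converged:
            break
    return k
```

For a bare slab this can be checked against the analytic one-group result k = nu_sigma_f / (sigma_a + D B^2) with buckling B = pi/width.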

  13. CANAL code

    International Nuclear Information System (INIS)

    Gara, P.; Martin, E.

    1983-01-01

    The CANAL code presented here optimizes a realistic iron free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts and move in a restricted transversal area; terminal connectors may be added, images of the bars in pole pieces may be included. A special option optimizes a real set of circular coils [fr

  14. Exporting Humanist Architecture

    DEFF Research Database (Denmark)

    Nielsen, Tom

    2016-01-01

    The article is a chapter in the catalogue for the Danish exhibition at the 2016 Architecture Biennale in Venice. The catalogue is conceived as an independent book exploring the theme Art of Many - The Right to Space. The chapter is an essay in this anthology tracing and discussing the different...... values and ethical stands involved in the export of Danish Architecture. Abstract: Danish architecture has, in a sense, been driven by an unwritten contract between the architects and the democratic state and its institutions. This contract may be viewed as an ethos – an architectural tradition...... with inherent aesthetic and moral values. Today, however, Danish architecture is also an export commodity. That raises questions, which should be debated as openly as possible. What does it mean for architecture and architects to practice in cultures and under political systems that do not use architecture...

  15. Architectures for wrist-worn energy harvesting

    Science.gov (United States)

    Rantz, R.; Halim, M. A.; Xue, T.; Zhang, Q.; Gu, L.; Yang, K.; Roundy, S.

    2018-04-01

    This paper reports the simulation-based analysis of six dynamical structures with respect to their wrist-worn vibration energy harvesting capability. This work approaches the problem of maximizing energy harvesting potential at the wrist by considering multiple mechanical substructures; rotational and linear motion-based architectures are examined. Mathematical models are developed and experimentally corroborated. An optimization routine is applied to the proposed architectures to maximize average power output and allow for comparison. The addition of a linear spring element to the structures has the potential to improve power output; for example, in the case of rotational structures, a 211% improvement in power output was estimated under real walking excitation. The analysis concludes that a sprung rotational harvester architecture outperforms a sprung linear architecture by 66% when real walking data is used as input to the simulations.
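    The lumped-parameter modelling described above can be illustrated with a minimal sketch of the linear (sprung, non-rotational) architecture: a spring-mass-damper under sinusoidal base excitation, with average power taken as the power dissipated in the transduction damper. All parameter values and the sinusoidal input are invented for demonstration; the paper itself optimizes six architectures against measured walking data.

```python
import numpy as np

def harvester_avg_power(m=0.01, k=40.0, c=0.05, Y=0.005, f_hz=10.0,
                        t_end=10.0, dt=1e-4):
    """Average power absorbed by the damper of a linear spring-mass harvester
    under base excitation y = Y sin(w t). The relative coordinate z = x - y
    obeys m z'' + c z' + k z = -m y''."""
    w = 2 * np.pi * f_hz
    z, v = 0.0, 0.0
    powers = []
    for i in range(int(t_end / dt)):
        t = i * dt
        a_base = -Y * w**2 * np.sin(w * t)       # base acceleration y''
        acc = (-c * v - k * z - m * a_base) / m  # relative acceleration z''
        v += acc * dt                            # semi-implicit Euler step
        z += v * dt
        if t > t_end / 2:                        # discard the start-up transient
            powers.append(c * v**2)              # instantaneous damper power
    return np.mean(powers)
```

At steady state this matches the closed-form response of a base-excited oscillator, P = c (w Z)^2 / 2 with Z = m Y w^2 / sqrt((k - m w^2)^2 + (c w)^2), which makes the sketch easy to sanity-check.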

  16. Architecture of absurd (forms, positions, apposition)

    Directory of Open Access Journals (Sweden)

    Fedorov Viktor Vladimirovich

    2014-04-01

    In everyday life we constantly face absurd things, which seem to lack common sense. The notion of the absurd acts as: (a) an aesthetic category; (b) an element of logic; (c) a metaphysical phenomenon. The possibility of overcoming it is achieved through an understanding of the situation, faith in the existence of sense and hope for its understanding. The architecture of the absurd should be considered as a loss of sense in a part of the architectural landscape (urban environment). The ways of organizing the architecture of the absurd: exaggerated forms and proportions, the unnatural position and apposition of various objects. These are usually small-scale facilities that have local spatial and temporal value. There are no large absurd architectural spaces, as the natural architectural environment dampens perturbations of the sense-sphere. The architecture of the absurd is considered a «pathology» of the environment. «Nonsense» objects and the hope (or even faith) to detect sense generate the fruitful paradox of the presence of the architecture of the absurd in the world.

  17. Chemistry of superheavy elements

    International Nuclear Information System (INIS)

    Schaedel, M.

    2012-01-01

    The chemistry of superheavy elements - or transactinides from their position in the Periodic Table - is summarized. After giving an overview over historical developments, nuclear aspects about synthesis of neutron-rich isotopes of these elements, produced in hot-fusion reactions, and their nuclear decay properties are briefly mentioned. Specific requirements to cope with the one-atom-at-a-time situation in automated chemical separations and recent developments in aqueous-phase and gas-phase chemistry are presented. Exciting, current developments, first applications, and future prospects of chemical separations behind physical recoil separators ('pre-separator') are discussed in detail. The status of our current knowledge about the chemistry of rutherfordium (Rf, element 104), dubnium (Db, element 105), seaborgium (Sg, element 106), bohrium (Bh, element 107), hassium (Hs, element 108), copernicium (Cn, element 112), and element 114 is discussed from an experimental point of view. Recent results are emphasized and compared with empirical extrapolations and with fully-relativistic theoretical calculations, especially also under the aspect of the architecture of the Periodic Table. (orig.)

  18. The Walk-Man Robot Software Architecture

    OpenAIRE

    Mirko Ferrati; Alessandro Settimi; Alessandro Settimi; Luca Muratore; Alberto Cardellino; Alessio Rocchi; Enrico Mingo Hoffman; Corrado Pavan; Dimitrios Kanoulas; Nikos G. Tsagarakis; Lorenzo Natale; Lucia Pallottino

    2016-01-01

    A software and control architecture for a humanoid robot is a complex and large project, which involves a team of developers/researchers to be coordinated and requires many hard design choices. If such a project has to be done in a very limited time, i.e., less than 1 year, more constraints are added, and concepts such as modular design, code reusability, and API definition need to be used as much as possible. In this work, we describe the software architecture developed for Walk-Man, a robot ...

  19. QCA Gray Code Converter Circuits Using LTEx Methodology

    Science.gov (United States)

    Mukherjee, Chiradeep; Panda, Saradindu; Mukhopadhyay, Asish Kumar; Maji, Bansibadan

    2018-04-01

    The Quantum-dot Cellular Automata (QCA) is a prominent nanotechnology paradigm considered to continue computation into the deep sub-micron regime. QCA realizations of several multilevel circuits of the arithmetic logic unit have been introduced in recent years. However, although high fan-in Binary-to-Gray (B2G) and Gray-to-Binary (G2B) converters exist in processor-based architectures, no attention has been paid to QCA instantiations of the Gray code converters, which are anticipated to be used in 8-bit, 16-bit, 32-bit or even wider addressable machines with Gray code addressing schemes. In this work the two-input Layered T module is presented to exploit the operation of an Exclusive-OR gate (namely, the LTEx module) as an elemental block. A defect-tolerant analysis of the two-input LTEx module has been carried out to establish the scalability and reproducibility of the LTEx module in complex circuits. Novel formulations exploiting the operability of the LTEx module are proposed to instantiate area- and delay-efficient B2G and G2B converters which can be exclusively used in Gray code addressing schemes. Moreover, this work formulates QCA design metrics such as O-Cost, effective area, delay and Cost α for the n-bit converter layouts.
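    The B2G and G2B conversions that these QCA circuits realize in hardware have a compact software statement: each Gray bit is the XOR of two adjacent binary bits, and recovering binary requires a prefix XOR over the higher Gray bits. A sketch of the standard bitwise forms (not the paper's LTEx layouts):

```python
def binary_to_gray(n: int) -> int:
    # MSB passes through; each lower Gray bit is the XOR of adjacent binary bits
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    # Each binary bit is the XOR of all Gray bits at that position and above
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

For example, binary 101 (5) maps to Gray 111 (7), and the round trip recovers 5; consecutive codewords differ in exactly one bit, which is what makes Gray code addressing attractive for low switching activity.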

  20. Prevalence of transcription promoters within archaeal operons and coding sequences.

    Science.gov (United States)

    Koide, Tie; Reiss, David J; Bare, J Christopher; Pang, Wyming Lee; Facciotti, Marc T; Schmid, Amy K; Pan, Min; Marzolf, Bruz; Van, Phu T; Lo, Fang-Yin; Pratap, Abhishek; Deutsch, Eric W; Peterson, Amelia; Martin, Dan; Baliga, Nitin S

    2009-01-01

    Despite the knowledge of complex prokaryotic transcription mechanisms, generalized rules, such as the simplified organization of genes into operons with well-defined promoters and terminators, have had a significant role in systems analysis of regulatory logic in both bacteria and archaea. Here, we have investigated the prevalence of alternate regulatory mechanisms through genome-wide characterization of the transcript structures of approximately 64% of all genes, including putative non-coding RNAs, in Halobacterium salinarum NRC-1. Our integrative analysis of transcriptome dynamics and protein-DNA interaction data sets showed widespread environment-dependent modulation of operon architectures, transcription initiation and termination inside coding sequences, and extensive overlap in the 3' ends of transcripts for many convergently transcribed genes. A significant fraction of these alternate transcriptional events correlate with the binding locations of 11 transcription factors and regulators (TFs) inside operons and annotated genes, events usually considered spurious or non-functional. Using experimental validation, we illustrate the prevalence of overlapping genomic signals in archaeal transcription, casting doubt on the general perception of rigid boundaries between coding sequences and regulatory elements.