WorldWideScience

Sample records for performance analysis architecture

  1. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    Science.gov (United States)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is at a disadvantage compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
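
    As an illustration of the kind of two-part model described above, the sketch below queues randomly arriving service requests against a capacity-constrained server pool using the SimPy discrete event simulation library. All distributions, rates and capacities are invented placeholders, not the authors' model or data; sweeping the capacity parameter merely mimics the provisioning trade-off the paper studies.

```python
# Minimal two-part demand/capacity sketch: request arrivals and service times
# are drawn from probability distributions, and the provisioned compute pool
# is a capacity-constrained resource. All numbers are illustrative placeholders.
import random
import simpy

RNG = random.Random(42)

def request(env, pool, service_mean, latencies):
    arrival = env.now
    with pool.request() as slot:                 # wait for a free server slot
        yield slot
        yield env.timeout(RNG.expovariate(1.0 / service_mean))
    latencies.append(env.now - arrival)          # end-to-end response time

def workload(env, pool, arrival_mean, service_mean, latencies):
    while True:                                  # Poisson-like request stream
        yield env.timeout(RNG.expovariate(1.0 / arrival_mean))
        env.process(request(env, pool, service_mean, latencies))

def simulate(capacity, arrival_mean=1.0, service_mean=3.0, horizon=10_000):
    env = simpy.Environment()
    pool = simpy.Resource(env, capacity=capacity)
    latencies = []
    env.process(workload(env, pool, arrival_mean, service_mean, latencies))
    env.run(until=horizon)
    return sum(latencies) / len(latencies)

for servers in (4, 6, 8):                        # provisioning sweep
    print(f"{servers} servers -> mean response time {simulate(servers):.2f}")
```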

  2. Virtual Prototyping and Performance Analysis of Two Memory Architectures

    Directory of Open Access Journals (Sweden)

    Huda S. Muhammad

    2009-01-01

    Full Text Available The gap between CPU and memory speed has always been a critical concern that motivated researchers to study and analyze the performance of memory hierarchical architectures. In the early stages of the design cycle, performance evaluation methodologies can be used to leverage exploration at the architectural level and assist in making early design tradeoffs. In this paper, we use simulation platforms developed using the VisualSim tool to compare the performance of two memory architectures, namely, the Direct Connect architecture of the Opteron, and the Shared Bus of the Xeon multicore processors. Key variations exist between the two memory architectures and both design approaches provide rich platforms that call for the early use of virtual system prototyping and simulation techniques to assess performance at an early stage in the design cycle.

  3. Performance Analysis of GFDL's GCM Line-By-Line Radiative Transfer Model on GPU and MIC Architectures

    Science.gov (United States)

    Menzel, R.; Paynter, D.; Jones, A. L.

    2017-12-01

    Due to their relatively low computational cost, radiative transfer models in global climate models (GCMs) run on traditional CPU architectures generally consist of shortwave and longwave parameterizations over a small number of wavelength bands. With the rise of newer GPU and MIC architectures, however, the performance of high resolution line-by-line radiative transfer models may soon approach that of the physical parameterizations currently employed in GCMs. Here we present an analysis of the current performance of a new line-by-line radiative transfer model currently under development at GFDL. Although originally designed to specifically exploit GPU architectures through the use of CUDA, the radiative transfer model has recently been extended to include OpenMP in an effort to also effectively target MIC architectures such as Intel's Xeon Phi. Using input data provided by the upcoming Radiative Forcing Model Intercomparison Project (RFMIP, as part of CMIP 6), we compare model results and performance data for various model configurations and spectral resolutions run on both GPU and Intel Knights Landing architectures to analogous runs of the standard Oxford Reference Forward Model on traditional CPUs.

  4. Analysis of Different Blade Architectures on small VAWT Performance

    Science.gov (United States)

    Battisti, L.; Brighenti, A.; Benini, E.; Raciti Castelli, M.

    2016-09-01

    The present paper aims at describing and comparing different small Vertical Axis Wind Turbine (VAWT) architectures, in terms of performance and loads. These characteristics can be highlighted by resorting to the Blade Element-Momentum (BE-M) model, commonly adopted for rotor pre-design and controller assessment. After validating the model with experimental data, the paper focuses on the analysis of VAWT loads depending on some relevant rotor features: blade number (2 and 3), airfoil camber line (comparing symmetrical and asymmetrical profiles) and blade inclination (straight versus helical blade). The effect of such characteristics on both power and thrusts (in the streamwise direction and in the crosswise one), as a function of both the blades' azimuthal position and their Tip Speed Ratio (TSR), is presented and widely discussed.
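
    As a hedged illustration of the kinematics underlying such a BE-M analysis, the sketch below computes the local angle of attack and relative wind speed seen by a straight VAWT blade over one revolution, neglecting induced velocities (a full blade element-momentum model iterates on those). The wind speed and tip speed ratios are arbitrary example values, not the paper's cases.

```python
# Kinematics of a straight VAWT blade over one revolution, neglecting induced
# velocities: tan(alpha) = sin(theta) / (TSR + cos(theta)). Values are
# illustrative; a full BE-M model would iterate on the induction factors.
import math

def blade_kinematics(theta_deg, tsr, v_inf=8.0):
    theta = math.radians(theta_deg)
    alpha = math.degrees(math.atan2(math.sin(theta), tsr + math.cos(theta)))
    w_rel = v_inf * math.hypot(tsr + math.cos(theta), math.sin(theta))
    return alpha, w_rel                          # local angle of attack, relative speed

for tsr in (2.0, 3.0, 4.0):
    peak_alpha = max(abs(blade_kinematics(t, tsr)[0]) for t in range(0, 360, 5))
    print(f"TSR {tsr}: peak |angle of attack| ~ {peak_alpha:.1f} deg")
```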

  5. A performance analysis of advanced I/O architectures for PC-based network file servers

    Science.gov (United States)

    Huynh, K. D.; Khoshgoftaar, T. M.

    1994-12-01

    In the personal computing and workstation environments, more and more I/O adapters are becoming complete functional subsystems that are intelligent enough to handle I/O operations on their own without much intervention from the host processor. The IBM Subsystem Control Block (SCB) architecture has been defined to enhance the potential of these intelligent adapters by defining services and conventions that deliver command information and data to and from the adapters. In recent years, a new storage architecture, the Redundant Array of Independent Disks (RAID), has been quickly gaining acceptance in the world of computing. In this paper, we would like to discuss critical system design issues that are important to the performance of a network file server. We then present a performance analysis of the SCB architecture and disk array technology in typical network file server environments based on personal computers (PCs). One of the key issues investigated in this paper is whether a disk array can outperform a group of disks (of same type, same data capacity, and same cost) operating independently, not in parallel as in a disk array.
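
    The array-versus-independent-disks question can be framed with a back-of-envelope throughput model like the sketch below; the per-disk bandwidth and IOPS figures are illustrative placeholders rather than numbers from the study.

```python
# Toy throughput model: an N-disk stripe versus N disks of the same type used
# independently. All per-disk figures are illustrative placeholders.
N_DISKS = 4
SEQ_MB_PER_S_PER_DISK = 5.0        # sequential bandwidth of one disk
RANDOM_IOPS_PER_DISK = 60.0        # small random requests are seek-limited
REQUEST_KB = 4

# One large sequential stream: striping aggregates the members' bandwidth,
# while a single stream on independent disks touches only one spindle.
stripe_seq = N_DISKS * SEQ_MB_PER_S_PER_DISK
indep_seq = SEQ_MB_PER_S_PER_DISK

# Many small independent requests: independent disks also serve them in
# parallel, so the array's advantage largely disappears in this first-order view.
stripe_rand = N_DISKS * RANDOM_IOPS_PER_DISK * REQUEST_KB / 1024
indep_rand = N_DISKS * RANDOM_IOPS_PER_DISK * REQUEST_KB / 1024

print(f"sequential : stripe {stripe_seq:.1f} MB/s vs independent {indep_seq:.1f} MB/s")
print(f"random 4 KB: stripe {stripe_rand:.2f} MB/s vs independent {indep_rand:.2f} MB/s")
```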

  6. Improving Software Performance in the Compute Unified Device Architecture

    Directory of Open Access Journals (Sweden)

    Alexandru PIRJAN

    2010-01-01

    Full Text Available This paper analyzes several aspects regarding the improvement of software performance for applications written in the Compute Unified Device Architecture (CUDA). We address an issue of great importance when programming a CUDA application: the Graphics Processing Unit's (GPU's) memory management through transpose kernels. We also benchmark and evaluate the performance of progressively optimizing a matrix transpose application in CUDA. One particular interest was to research how well the optimization techniques, applied to software applications written in CUDA, scale to the latest generation of general-purpose graphics processing units (GPGPU), like the Fermi architecture implemented in the GTX480 and the previous architecture implemented in the GTX280. Lately, there has been a lot of interest in the literature in this type of optimization analysis, but none of the works so far (to the best of our knowledge) has tried to validate whether the optimizations apply to a GPU from the latest Fermi architecture and how well the Fermi architecture scales to these software performance improving techniques.
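
    For readers unfamiliar with the optimization being benchmarked, the sketch below shows a generic shared-memory tiled transpose kernel written with Numba's CUDA support (it requires a CUDA-capable GPU). It is not the authors' kernel; the tile size, matrix size and padding are conventional choices used only to illustrate coalesced loads and stores.

```python
# Generic shared-memory tiled transpose in Numba CUDA (requires a CUDA GPU).
# Global-memory reads and writes are both coalesced; the +1 column of padding
# avoids shared-memory bank conflicts.
import numpy as np
from numba import cuda, float32

TILE = 32

@cuda.jit
def transpose_tiled(src, dst):
    tile = cuda.shared.array(shape=(TILE, TILE + 1), dtype=float32)
    x, y = cuda.grid(2)                      # global column, row in src
    tx, ty = cuda.threadIdx.x, cuda.threadIdx.y
    if x < src.shape[1] and y < src.shape[0]:
        tile[ty, tx] = src[y, x]             # coalesced load
    cuda.syncthreads()
    xo = cuda.blockIdx.y * TILE + tx         # column in dst
    yo = cuda.blockIdx.x * TILE + ty         # row in dst
    if xo < dst.shape[1] and yo < dst.shape[0]:
        dst[yo, xo] = tile[tx, ty]           # coalesced store of transposed tile

a = np.random.rand(2048, 2048).astype(np.float32)
d_src = cuda.to_device(a)
d_dst = cuda.device_array_like(a)
blocks = (a.shape[1] // TILE, a.shape[0] // TILE)
transpose_tiled[blocks, (TILE, TILE)](d_src, d_dst)
assert np.allclose(d_dst.copy_to_host(), a.T)
```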

  7. ARCHITECTURE OF A SENTIMENT ANALYSIS PLATFORM

    Directory of Open Access Journals (Sweden)

    CRISTIAN BUCUR

    2015-06-01

    Full Text Available A new domain of research, called sentiment analysis, has evolved in the last decade; it tries to extract knowledge from opinionated text documents. The article presents an overview of the domain and presents an architecture of a system that could perform sentiment analysis processes. Based on previous research, two methods for performing classification are presented, together with the results obtained.

  8. The ATLAS Analysis Architecture

    International Nuclear Information System (INIS)

    Cranmer, K.S.

    2008-01-01

    We present an overview of the ATLAS analysis architecture including the relevant aspects of the computing model and the major architectural aspects of the Athena framework. Emphasis will be given to the interplay between the analysis use cases and the technical aspects of the architecture including the design of the event data model, transient-persistent separation, data reduction strategies, analysis tools, and ROOT interoperability

  9. Performative Urban Architecture

    DEFF Research Database (Denmark)

    Thomsen, Bo Stjerne; Jensen, Ole B.

    The paper explores how performative urban architecture can enhance community-making and public domain using socio-technical systems and digital technologies to constitute an urban reality. Digital media developed for the web are now increasingly occupying the urban realm as a tool for navigating... the physical world, e.g. as exemplified by the Google Walk Score and the mobile extension of Google Maps to the iPhone. At the same time, the development in pervasive technologies and situated computing extends the built environment with digital feedback systems that are increasingly embedded and deployed... using sensor technologies, opening up new access considerations in architecture as well as the ability for a local environment to act as a real-time source of information and facilities. Starting from the NoRA pavilion for the 10th International Architecture Biennale in Venice, the paper discusses...

  10. Software architecture analysis tool : software architecture metrics collection

    NARCIS (Netherlands)

    Muskens, J.; Chaudron, M.R.V.; Westgeest, R.

    2002-01-01

    The Software Engineering discipline lacks the ability to evaluate software architectures. Here we describe a tool for software architecture analysis that is based on metrics. Metrics can be used to detect possible problems and bottlenecks in software architectures. Even though metrics do not give a

  11. Architectural design and analysis of a programmable image processor

    International Nuclear Information System (INIS)

    Siyal, M.Y.; Chowdhry, B.S.; Rajput, A.Q.K.

    2003-01-01

    In this paper we present an architectural design and analysis of a programmable image processor, nicknamed Snake. The processor was designed with a high degree of parallelism to speed up a range of image processing operations. Data parallelism found in array processors has been included into the architecture of the proposed processor. The implementation of commonly used image processing algorithms and their performance evaluation are also discussed. The performance of Snake is also compared with other types of processor architectures. (author)

  12. Analysis of mobile fronthaul bandwidth and wireless transmission performance in split-PHY processing architecture.

    Science.gov (United States)

    Miyamoto, Kenji; Kuwano, Shigeru; Terada, Jun; Otaka, Akihiro

    2016-01-25

    We analyze the mobile fronthaul (MFH) bandwidth and the wireless transmission performance in the split-PHY processing (SPP) architecture, which redefines the functional split of centralized/cloud RAN (C-RAN) while preserving high wireless coordinated multi-point (CoMP) transmission/reception performance. The SPP architecture splits the base station (BS) functions between wireless channel coding/decoding and wireless modulation/demodulation, and employs its own CoMP joint transmission and reception schemes. Simulation results show that the SPP architecture reduces the MFH bandwidth by up to 97% compared with conventional C-RAN while matching the wireless bit error rate (BER) performance of conventional C-RAN in uplink joint reception with only a 2-dB signal-to-noise ratio (SNR) penalty.
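
    A rough sense of why a coding/modulation split shrinks the fronthaul can be had from the back-of-envelope arithmetic below. The carrier bandwidth, sample width, antenna count and bits-per-symbol are illustrative assumptions, not the paper's accounting; they merely show how an I/Q-sample interface ends up one to two orders of magnitude above a bit-level interface.

```python
# Back-of-envelope fronthaul comparison (all parameters are assumptions).
SAMPLE_RATE_SPS = 30.72e6      # 20 MHz LTE carrier sampling rate
IQ_BITS = 2 * 15               # 15-bit I and Q per sample
ANTENNAS = 8

# Conventional C-RAN: time-domain I/Q samples carried for every antenna.
cpri_like_gbps = SAMPLE_RATE_SPS * IQ_BITS * ANTENNAS / 1e9

# Split at coding/modulation: roughly the coded-bit rate of the carrier
# (bandwidth * bits per symbol * spatial streams), independent of antenna count.
split_gbps = 20e6 * 6 * 2 / 1e9

reduction = 1 - split_gbps / cpri_like_gbps
print(f"I/Q fronthaul : {cpri_like_gbps:5.2f} Gb/s")
print(f"split-PHY-like: {split_gbps:5.2f} Gb/s  (~{reduction:.0%} lower)")
```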

  13. Performance analysis of IMS based LTE and WIMAX integration architectures

    Directory of Open Access Journals (Sweden)

    A. Bagubali

    2016-12-01

    Full Text Available In the current networking field, much research is ongoing regarding the integration of different wireless technologies, with the aim of providing uninterrupted connectivity to the user anywhere, with high data rates due to increased demand. However, the number of objects such as smart devices, industrial machines and smart homes connected by a wireless interface is dramatically increasing due to the evolution of cloud computing and Internet of Things technology. This paper begins with the challenges involved in such integrations and then explains the role of different couplings and different architectures. This paper also gives further improvements to the LTE and WiMAX integration architectures to provide seamless vertical handover and flexible quality of service for supporting voice, video and multimedia services over IP networks, and mobility management with the help of IMS networks. Evaluation of various parameters such as handover delay, cost of signalling and packet loss is carried out, and the performance of the interworking architecture is analysed from the simulation results. Finally, it concludes that the cross-layer scenario is better than the non-cross-layer scenario.

  14. Integrating acoustic analysis in the architectural design process using parametric modelling

    DEFF Research Database (Denmark)

    Peters, Brady

    2011-01-01

    This paper discusses how parametric modeling techniques can be used to provide architectural designers with a better understanding of the acoustic performance of their designs and provide acoustic engineers with models that can be analyzed using computational acoustic analysis software. Architects......, acoustic performance can inform the geometry and material logic of the design. In this way, the architectural design and the acoustic analysis model become linked....

  15. Enterprise Architecture Analysis with XML

    OpenAIRE

    Boer, Frank; Bonsangue, Marcello; Jacob, Joost; Stam, A.; Torre, Leon

    2005-01-01

    This paper shows how XML can be used for static and dynamic analysis of architectures. Our analysis is based on the distinction between symbolic and semantic models of architectures. The core of a symbolic model consists of its signature that specifies symbolically its structural elements and their relationships. A semantic model is defined as a formal interpretation of the symbolic model. This provides a formal approach to the design of architectural description languages and a g...

  16. Enterprise Architecture Analysis with XML

    NARCIS (Netherlands)

    F.S. de Boer (Frank); M.M. Bonsangue (Marcello); J.F. Jacob (Joost); A. Stam; L.W.N. van der Torre (Leon)

    2005-01-01

    This paper shows how XML can be used for static and dynamic analysis of architectures. Our analysis is based on the distinction between symbolic and semantic models of architectures. The core of a symbolic model consists of its signature that specifies symbolically its structural

  17. Analysis OpenMP performance of AMD and Intel architecture for breaking waves simulation using MPS

    Science.gov (United States)

    Alamsyah, M. N. A.; Utomo, A.; Gunawan, P. H.

    2018-03-01

    Simulation of breaking waves by using the Navier-Stokes equations via the moving particle semi-implicit (MPS) method over a closed domain is given. The results show that parallel computing on a multicore architecture using the OpenMP platform can reduce the computational time to almost half of the serial time. Here, a comparison using two computer architectures (AMD and Intel) is performed. The results show that the Intel architecture performs better than the AMD architecture in CPU time. However, in efficiency, the computer with the AMD architecture gives slightly higher results than the Intel one. For the simulation with 1512 particles, the CPU times using Intel and AMD are 12662.47 and 28282.30, respectively. Moreover, for a similar number of particles, the efficiency using AMD reaches 50.09% and Intel up to 49.42%.
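
    The speedup and efficiency figures in such OpenMP studies follow the usual definitions sketched below. The parallel times are those quoted above; the serial times and the assumption of 4 threads are placeholders chosen only to show how efficiencies near 50% arise, since the abstract does not state them.

```python
# speedup = T_serial / T_parallel, efficiency = speedup / n_threads.
# Parallel times are the ones quoted above; serial times and the 4-thread
# count are illustrative placeholders (the abstract does not state them).
def scaling(t_serial, t_parallel, n_threads):
    speedup = t_serial / t_parallel
    return speedup, speedup / n_threads

for label, t_serial, t_parallel in [("Intel", 25000.0, 12662.47),
                                    ("AMD", 56600.0, 28282.30)]:
    s, e = scaling(t_serial, t_parallel, n_threads=4)
    print(f"{label}: speedup {s:.2f}x, parallel efficiency {e:.1%}")
```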

  18. Modeling, analysis and optimization of network-on-chip communication architectures

    CERN Document Server

    Ogras, Umit Y

    2013-01-01

    Traditionally, design space exploration for Systems-on-Chip (SoCs) has focused on the computational aspects of the problem at hand. However, as the number of components on a single chip and their performance continue to increase, the communication architecture plays a major role in the area, performance and energy consumption of the overall system. As a result, a shift from computation-based to communication-based design becomes mandatory. Towards this end, network-on-chip (NoC) communication architectures have emerged recently as a promising alternative to classical bus and point-to-point communication architectures. This book explores outstanding research problems related to modeling, analysis and optimization of NoC communication architectures. More precisely, we present novel design methodologies, software tools and FPGA prototypes to aid the design of application-specific NoCs.

  19. Surveillance and Datalink Communication Performance Analysis for Distributed Separation Assurance System Architectures

    Science.gov (United States)

    Chung, William W.; Linse, Dennis J.; Alaverdi, Omeed; Ifarraguerri, Carlos; Seifert, Scott C.; Salvano, Dan; Calender, Dale

    2012-01-01

    This study investigates the effects of two technical enablers of the Federal Aviation Administration's Next Generation Air Transportation System (NextGen), Automatic Dependent Surveillance - Broadcast (ADS-B) and digital datalink communication, on overall separation assurance performance under two separation assurance (SA) system architectures: ground-based SA and airborne SA. Datalink performance, such as successful reception probability of both surveillance and communication messages, and surveillance accuracy are examined under various operational conditions. Required SA performance is evaluated as a function of subsystem performance, using availability, continuity, and integrity metrics to establish overall required separation assurance performance under normal and off-nominal conditions.
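
    As a hedged illustration of how availability metrics roll up in this kind of assessment, the sketch below multiplies assumed subsystem availabilities for a surveillance/datalink chain treated as a series system; all values are invented placeholders, not the study's results.

```python
# Series roll-up of assumed subsystem availabilities (values are invented).
from functools import reduce

subsystems = {
    "ADS-B surveillance": 0.9995,
    "datalink":           0.9990,
    "ground automation":  0.9999,
}
a_end_to_end = reduce(lambda a, b: a * b, subsystems.values(), 1.0)
print(f"end-to-end availability ~ {a_end_to_end:.4f} "
      f"(expected downtime ~ {(1 - a_end_to_end) * 8760:.1f} h/year)")
```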

  20. Performative Architecture and Urban Spaces

    DEFF Research Database (Denmark)

    Kiib, Hans

    2008-01-01

    3 workshops, one exhibition. Three conceptual architectural workshops took place in parallel from August 16th to 22nd, 2008. Each workshop carried a specific methodology, and the goal was to come up with conceptual proposals that could be further developed for selected sites in the city of Aalb... This workshop focused on temporary architecture and urban catalysts. Informal spaces and the interface between the built and the void are foremost in the development of performative urban environments and cultural interaction. ... The workshop model includes an open workshop where a handful of international architects are invited to spend five days with local architects, engineers and scholars contributing to a work of architectural vision and quality. The workshop includes presentations and discussions and development of projects...

  1. Cost and performance analysis of physical security systems

    International Nuclear Information System (INIS)

    Hicks, M.J.; Yates, D.; Jago, W.H.; Phillips, A.W.

    1998-04-01

    Analysis of cost and performance of physical security systems can be a complex, multi-dimensional problem. There are a number of point tools that address various aspects of cost and performance analysis. Increased interest in cost tradeoffs of physical security alternatives has motivated development of an architecture called Cost and Performance Analysis (CPA), which takes a top-down approach to aligning cost and performance metrics. CPA incorporates results generated by existing physical security system performance analysis tools, and utilizes an existing cost analysis tool. The objective of this architecture is to offer comprehensive visualization of complex data to security analysts and decision-makers

  2. Architectural Analysis of Dynamically Reconfigurable Systems

    Science.gov (United States)

    Lindvall, Mikael; Godfrey, Sally; Ackermann, Chris; Ray, Arnab; Yonkwa, Lyly

    2010-01-01

    Topics include: the problem (increased flexibility of architectural styles decreases analyzability, behavior emerges and varies depending on the configuration, does the resulting system run according to the intended design, and architectural decisions can impede or facilitate testing); top-down approach to architecture analysis, detection of defects and deviations, and architecture and its testability; currently targeted projects GMSEC and CFS; analyzing software architectures; analyzing runtime events; actual architecture recognition; GMPUB in Dynamic SAVE; sample output from new approach; taking message timing delays into account; CFS examples of architecture and testability; some recommendations for improved testability; CFS examples of abstract interfaces and testability; CFS example of opening some internal details.

  3. Performance evaluation of enterprise architecture using fuzzy sequence diagram

    Directory of Open Access Journals (Sweden)

    Mohammad Atasheneh

    2014-01-01

    Full Text Available Developing an enterprise architecture is a complex task, and to control the complexity of the regulatory framework we need to measure the relative performance of one system against other available systems. On the other hand, enterprise architecture cannot be organized without the use of a logical structure. The framework provides a logical structure for classifying architectural output. Among the common architectural frameworks, the C4ISR framework and its product methodology is one of the most popular techniques. In this paper, given the existing uncertainties in system development and information systems, a new version of UML called Fuzzy-UML is proposed for enterprise architecture development based on fuzzy Petri nets. In addition, the performance of the system is also evaluated based on the fuzzy sequence diagram.

  4. From Smart-Eco Building to High-Performance Architecture: Optimization of Energy Consumption in Architecture of Developing Countries

    Science.gov (United States)

    Mahdavinejad, M.; Bitaab, N.

    2017-08-01

    The search for high-performance architecture and visions of future architecture have resulted in attempts to achieve energy-efficient architecture and planning in different respects. Recent trends meant to shape the future legacy of architecture are based on the idea of innovative technologies for resource-efficient buildings, performative design, bio-inspired technologies, etc., while there are meaningful differences between the architecture of developed and developing countries. The significance of the issue becomes apparent when emerging cities are found to be interested in Dubaization and other related booming development doctrines. This paper analyzes the extent to which developing countries succeed in achieving the goals and objectives of smart-eco buildings. Emerging cities of West Asia are selected as the case studies of the paper. The results of the paper show that the concepts of high-performance architecture and smart-eco buildings differ in developing countries in comparison with developed countries. The paper identifies five essential issues for improving the future architecture of developing countries: 1- Integrated Strategies for Energy Efficiency, 2- Contextual Solutions, 3- Embedded and Initial Energy Assessment, 4- Staff and Occupancy Wellbeing, 5- Life-Cycle Monitoring.

  5. Explaining the gap between theoretical peak performance and real performance for supercomputer architectures

    International Nuclear Information System (INIS)

    Schoenauer, W.; Haefner, H.

    1993-01-01

    The basic architectures of vector and parallel computers with their properties are presented. Then the memory size and the arithmetic operations in the context of memory bandwidth are discussed. For the exemplary discussion of a single operation, micro-measurements of the vector triad for the IBM 3090 VF and the CRAY Y-MP/8 are presented. They reveal the details of the losses for a single operation. Then we analyze the global performance of a whole supercomputer by identifying reduction factors that bring down the theoretical peak performance to the poor real performance. The responsibilities of the manufacturer and of the user for these losses are discussed. Then the price-performance ratio for different architectures in a snapshot of January 1991 is briefly mentioned. Finally, some remarks on a user-friendly architecture for a supercomputer are made. (orig.)
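
    The reduction-factor argument can be made concrete with a small sketch like the one below; the peak rate and the individual factors are invented for illustration and are not the paper's measurements.

```python
# Peak performance multiplied by a chain of reduction factors < 1 gives an
# estimate of real performance. Peak and factors are invented for illustration.
PEAK_MFLOPS = 2667.0                      # illustrative vector-machine peak
factors = {
    "memory bandwidth / cache reuse": 0.45,
    "average vector length":          0.80,
    "non-vectorized code (Amdahl)":   0.70,
    "load imbalance and overheads":   0.85,
}
real = PEAK_MFLOPS
for name, f in factors.items():
    real *= f
    print(f"after {name:32s}: {real:7.1f} MFLOPS")
print(f"overall: {real / PEAK_MFLOPS:.0%} of theoretical peak")
```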

  6. Information architecture. Volume 2, Part 1: Baseline analysis summary

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-01

    The Department of Energy (DOE) Information Architecture, Volume 2, Baseline Analysis, is a collaborative and logical next-step effort in the processes required to produce a Departmentwide information architecture. The baseline analysis serves a diverse audience of program management and technical personnel and provides an organized way to examine the Department's existing or de facto information architecture. A companion document to Volume 1, The Foundations, it furnishes the rationale for establishing a Departmentwide information architecture. This volume, consisting of the Baseline Analysis Summary (part 1), Baseline Analysis (part 2), and Reference Data (part 3), is of interest to readers who wish to understand how the Department's current information architecture technologies are employed. The analysis identifies how and where current technologies support business areas, programs, sites, and corporate systems.

  7. Factoring symmetric indefinite matrices on high-performance architectures

    Science.gov (United States)

    Jones, Mark T.; Patrick, Merrell L.

    1990-01-01

    The Bunch-Kaufman algorithm is the method of choice for factoring symmetric indefinite matrices in many applications. However, the Bunch-Kaufman algorithm does not take advantage of high-performance architectures such as the Cray Y-MP. Three new algorithms, based on Bunch-Kaufman factorization, that take advantage of such architectures are described. Results from an implementation of the third algorithm are presented.

  8. Progress in a novel architecture for high performance processing

    Science.gov (United States)

    Zhang, Zhiwei; Liu, Meng; Liu, Zijun; Du, Xueliang; Xie, Shaolin; Ma, Hong; Ding, Guangxin; Ren, Weili; Zhou, Fabiao; Sun, Wenqin; Wang, Huijuan; Wang, Donglin

    2018-04-01

    The high performance processing (HPP) is an innovative architecture which targets high performance computing with excellent power efficiency and computing performance. It is suitable for data-intensive applications like supercomputing, machine learning and wireless communication. An example chip with four application-specific integrated circuit (ASIC) cores, which is the first generation of HPP cores, has been taped out successfully under the Taiwan Semiconductor Manufacturing Company (TSMC) 40 nm low power process. The innovative architecture shows great energy efficiency over the traditional central processing unit (CPU) and general-purpose computing on graphics processing units (GPGPU). Compared with MaPU, HPP has made great improvements in architecture. The chip with 32 HPP cores is being developed under the TSMC 16 nm field effect transistor (FFC) technology process and is planned for commercial use. The peak performance of this chip can reach 4.3 teraFLOPS (TFLOPS) and its power efficiency is up to 89.5 gigaFLOPS per watt (GFLOPS/W).
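
    The quoted peak rate and power efficiency imply a peak power draw, as the short calculation below shows (only the two figures from the abstract are used):

```python
# Peak power implied by the quoted figures: 4.3 TFLOPS at 89.5 GFLOPS/W.
peak_gflops = 4.3 * 1000
print(f"~{peak_gflops / 89.5:.0f} W at peak")    # roughly 48 W
```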

  9. Performances of multiprocessor multidisk architectures for continuous media storage

    Science.gov (United States)

    Gennart, Benoit A.; Messerli, Vincent; Hersch, Roger D.

    1996-03-01

    Multimedia interfaces increase the need for large image databases, capable of storing and reading streams of data with strict synchronicity and isochronicity requirements. In order to fulfill these requirements, we consider a parallel image server architecture which relies on arrays of intelligent disk nodes, each disk node being composed of one processor and one or more disks. This contribution analyzes through bottleneck performance evaluation and simulation the behavior of two multi-processor multi-disk architectures: a point-to-point architecture and a shared-bus architecture similar to current multiprocessor workstation architectures. We compare the two architectures on the basis of two multimedia algorithms: the compute-bound frame resizing by resampling and the data-bound disk-to-client stream transfer. The results suggest that the shared bus is a potential bottleneck despite its very high hardware throughput (400 Mbytes/s) and that an architecture with addressable local memories located closely to their respective processors could partially remove this bottleneck. The point-to-point architecture is scalable and able to sustain high throughputs for simultaneous compute-bound and data-bound operations.
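
    A bottleneck evaluation of the shared-bus design can start from arithmetic like the sketch below; the per-stream data rates are illustrative assumptions, not values from the paper.

```python
# Rough bottleneck check for the shared-bus design: how many concurrent media
# streams a 400 Mbytes/s bus could carry, ignoring protocol and contention
# overheads. The per-stream rates are illustrative, not from the paper.
BUS_MBYTES_PER_S = 400.0
streams = {
    "MPEG-1 video (~1.5 Mbit/s)":   1.5 / 8,
    "MPEG-2 video (~6 Mbit/s)":     6.0 / 8,
    "uncompressed CCIR 601 video": 21.0,
}
for name, mbytes_per_s in streams.items():
    print(f"{name:30s}: ~{int(BUS_MBYTES_PER_S // mbytes_per_s)} streams")
```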

  10. Performance Analysis of MYSEA

    Science.gov (United States)

    2012-09-01

    [Acronym-list front matter (fragment): ...Services; FSD, Federated Services Daemon; I&A, Identification and Authentication; IKE, Internet Key Exchange; KPI, Key Performance Indicator; LAN, Local Area...] ...inspection takes place in different processes in the server architecture. Key Performance Indicators (KPIs) associated with the system need to be... application and risk analysis of security controls. Thus, measurement of the KPIs is needed before an informed tradeoff between the performance penalties...

  11. Lightweight metrics for enterprise architecture analysis

    NARCIS (Netherlands)

    Singh, Prince Mayurank; van Sinderen, Marten J.; Abramowicz, Witold

    2015-01-01

    The role of an Enterprise Architecture model is not limited to a graphical representation of an organization and its dynamics. Rather, it is also a tool for analysis and rational decision making. If firms do not use their enterprise architecture model to aid decision making then they run the risk of

  12. Advanced Concept Architecture Design and Integrated Analysis (ACADIA)

    Science.gov (United States)

    2017-11-03

    Research report for the period 20161001 - 20161030: Advanced Concept Architecture Design and Integrated Analysis (ACADIA), submitted to the National Institute of Aerospace (NIA) under award W911NF-16-2-0229; authors include Cedric Justin, Youngjun...

  13. NDARC-NASA Design and Analysis of Rotorcraft Theoretical Basis and Architecture

    Science.gov (United States)

    Johnson, Wayne

    2010-01-01

    The theoretical basis and architecture of the conceptual design tool NDARC (NASA Design and Analysis of Rotorcraft) are described. The principal tasks of NDARC are to design (or size) a rotorcraft to satisfy specified design conditions and missions, and then analyze the performance of the aircraft for a set of off-design missions and point operating conditions. The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated. The aircraft attributes are obtained from the sum of the component attributes. NDARC provides a capability to model general rotorcraft configurations, and estimate the performance and attributes of advanced rotor concepts. The software has been implemented with low-fidelity models, typical of the conceptual design environment. Incorporation of higher-fidelity models will be possible, as the architecture of the code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis and optimization.

  14. A Methodology for Making Early Comparative Architecture Performance Evaluations

    Science.gov (United States)

    Doyle, Gerald S.

    2010-01-01

    Complex and expensive systems' development suffers from a lack of method for making good system-architecture-selection decisions early in the development process. Failure to make a good system-architecture-selection decision increases the risk that a development effort will not meet cost, performance and schedule goals. This research provides a…

  15. Parametric Approach to Assessing Performance of High-Lift Device Active Flow Control Architectures

    Directory of Open Access Journals (Sweden)

    Yu Cai

    2017-02-01

    Full Text Available Active Flow Control is at present an area of considerable research, with multiple potential aircraft applications. While the majority of research has focused on the performance of the actuators themselves, a system-level perspective is necessary to assess the viability of proposed solutions. This paper demonstrates such an approach, in which major system components are sized based on system flow and redundancy considerations, with the impacts linked directly to the mission performance of the aircraft. Considering the case of a large twin-aisle aircraft, four distinct active flow control architectures that facilitate the simplification of the high-lift mechanism are investigated using the demonstrated approach. The analysis indicates a very strong influence of system total mass flow requirement on architecture performance, both for a typical mission and also over the entire payload-range envelope of the aircraft.

  16. Digital Architecture – Results From a Gap Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna Helene [Idaho National Lab. (INL), Idaho Falls, ID (United States); Thomas, Kenneth David [Idaho National Lab. (INL), Idaho Falls, ID (United States); Fitzgerald, Kirk [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The digital architecture is defined as a collection of IT capabilities needed to support and integrate a wide spectrum of real-time digital capabilities for nuclear power plant performance improvements. The digital architecture can be thought of as an integration of the separate I&C and information systems already in place in NPPs, brought together for the purpose of creating new levels of automation in NPP work activities. In some cases, it might be an extension of the current communication systems, to provide digital communications where they are currently analog only. This collection of IT capabilities must in turn be based on a set of user requirements that must be supported for the interconnected technologies to operate in an integrated manner. These requirements, simply put, are a statement of what sorts of digital work functions will be exercised in a fully-implemented seamless digital environment and how much they will be used. The goal of the digital architecture research is to develop a methodology for mapping nuclear power plant operational and support activities into the digital architecture, which includes the development of a consensus model for advanced information and control architecture. The consensus model should be developed at a level of detail that is useful to the industry. In other words, not so detailed that it specifies specific protocols and not so vague that it only provides a high-level description of the technology. The next step towards the model development is to determine the current state of digital architecture at typical NPPs. To investigate the current state, the researchers conducted a gap analysis to determine to what extent the NPPs can support the future digital technology environment with their existing I&C and IT structure, and where gaps exist with respect to the full deployment of technology over time. The methodology, results, and conclusions from the gap analysis are described in this report.

  17. A Disciplined Architectural Approach to Scaling Data Analysis for Massive, Scientific Data

    Science.gov (United States)

    Crichton, D. J.; Braverman, A. J.; Cinquini, L.; Turmon, M.; Lee, H.; Law, E.

    2014-12-01

    Data collections across remote sensing and ground-based instruments in astronomy, Earth science, and planetary science are outpacing scientists' ability to analyze them. Furthermore, the distribution, structure, and heterogeneity of the measurements themselves pose challenges that limit the scalability of data analysis using traditional approaches. Methods for developing science data processing pipelines, distribution of scientific datasets, and performing analysis will require innovative approaches that integrate cyber-infrastructure, algorithms, and data into more systematic approaches that can more efficiently compute and reduce data, particularly distributed data. This requires the integration of computer science, machine learning, statistics and domain expertise to identify scalable architectures for data analysis. The size of data returned from Earth Science observing satellites and the magnitude of data from climate model output are predicted to grow into the tens of petabytes, challenging current data analysis paradigms. This same kind of growth is present in astronomy and planetary science data. One of the major challenges in data science and related disciplines is defining new approaches to scaling systems and analysis in order to increase scientific productivity and yield. Specific needs include: 1) identification of optimized system architectures for analyzing massive, distributed data sets; 2) algorithms for systematic analysis of massive data sets in distributed environments; and 3) the development of software infrastructures that are capable of performing massive, distributed data analysis across a comprehensive data science framework. NASA/JPL has begun an initiative in data science to address these challenges. Our goal is to evaluate how scientific productivity can be improved through optimized architectural topologies that identify how to deploy and manage the access, distribution, computation, and reduction of massive, distributed data, while

  18. Do Performance-Based Codes Support Universal Design in Architecture?

    DEFF Research Database (Denmark)

    Grangaard, Sidse; Frandsen, Anne Kathrine

    2016-01-01

    – Universal Design (UD). The empirical material consists of input from six workshops to which all 700 Danish Architectural firms were invited, as well as eight group interviews. The analysis shows that the current prescriptive requirements are criticized for being too homogenous and possibilities...... for differentiation and zoning are required. Therefore, a majority of professionals are interested in a performance-based model because they think that such a model will support ‘accessibility zoning’, achieving flexibility because of different levels of accessibility in a building due to its performance. The common...... of educational objectives is suggested as a tool for such a boost. The research project has been financed by the Danish Transport and Construction Agency....

  19. FY1995 study of design methodology and environment of high-performance processor architectures; 1995 nendo koseino processor architecture sekkeiho to sekkei kankyo no kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The aim of our project is to develop high-performance processor architectures for both general-purpose and application-specific purposes. We also plan to develop basic software, such as compilers, and various design aid tools for those architectures. We are particularly interested in performance evaluation at the architecture design phase, design optimization, automatic generation of compilers from processor designs, and architecture design methodologies combined with circuit layout. We have investigated both microprocessor architectures and design methodologies / environments for the processors. Our goal is to establish design technologies for high-performance, low-power, low-cost and highly-reliable systems in the system-on-silicon era. We have proposed the PPRAM architecture for high-performance systems using DRAM and logic mixture technology, the Softcore processor architecture for special-purpose processors in embedded systems, and the Power-Pro architecture for low-power systems. We also developed design methodologies and design environments for the above architectures as well as a new method for design verification of microprocessors. (NEDO)

  20. Improvements to Integrated Tradespace Analysis of Communications Architectures (ITACA) Network Loading Analysis Tool

    Science.gov (United States)

    Lee, Nathaniel; Welch, Bryan W.

    2018-01-01

    NASA's SCENIC project aims to simplify and reduce the cost of space mission planning by replicating the analysis capabilities of commercially licensed software, which are integrated with relevant analysis parameters specific to SCaN assets and SCaN-supported user missions. SCENIC differs from current tools that perform similar analyses in that it 1) does not require any licensing fees, and 2) will provide an all-in-one package for various analysis capabilities that normally require add-ons or multiple tools to complete. As part of SCENIC's capabilities, the ITACA network loading analysis tool will be responsible for assessing the loading on a given network architecture and generating a network service schedule. ITACA will allow users to evaluate the quality of service of a given network architecture and determine whether or not the architecture will satisfy the mission's requirements. ITACA is currently under development, and the following improvements were made during the fall of 2017: optimization of runtime, augmentation of network asset pre-service configuration time, augmentation of Brent's method of root finding, augmentation of network asset FOV restrictions, augmentation of mission lifetimes, and the integration of a SCaN link budget calculation tool. The improvements resulted in (a) 25% reduction in runtime, (b) more accurate contact window predictions when compared to STK (Registered Trademark) contact window predictions, and (c) increased fidelity through the use of specific SCaN asset parameters.
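
    The abstract mentions Brent's method for root finding; a generic way such a method locates contact-window edges is sketched below using scipy.optimize.brentq on a toy elevation profile. The elevation model, mask angle and time brackets are invented stand-ins, not SCENIC/ITACA internals.

```python
# Generic sketch: locate contact-window edges as zero crossings of an
# elevation-minus-mask function using Brent's method. The elevation model
# below is a toy sinusoid standing in for real orbit geometry.
import math
from scipy.optimize import brentq

MASK_DEG = 10.0                     # assumed ground-station elevation mask

def elevation_deg(t_min):
    # toy pass profile: rises above the horizon near t=10, sets near t=20
    return 80.0 * math.sin(math.pi * (t_min - 10.0) / 10.0)

f = lambda t: elevation_deg(t) - MASK_DEG
rise = brentq(f, 10.0, 15.0)        # bracket around the rising edge
set_ = brentq(f, 15.0, 20.0)        # bracket around the setting edge
print(f"contact window: {rise:.2f}-{set_:.2f} min (duration {set_ - rise:.2f} min)")
```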

  1. Power efficient and high performance VLSI architecture for AES algorithm

    Directory of Open Access Journals (Sweden)

    K. Kalaiselvi

    2015-09-01

    Full Text Available The Advanced Encryption Standard (AES) algorithm has been widely deployed in cryptographic applications. This work proposes a low power and high throughput implementation of the AES algorithm using a key expansion approach. We minimize the power consumption and critical path delay using the proposed high performance architecture. It supports both encryption and decryption using 256-bit keys with a throughput of 0.06 Gbps. The VHDL language is utilized for simulating the design and an FPGA chip has been used for the hardware implementations. Experimental results reveal that the proposed AES architectures offer superior performance to the existing VLSI architectures in terms of power, throughput and critical path delay.

  2. Predictors of Future Performance in Architectural Design Education

    Science.gov (United States)

    Roberts, A. S.

    2007-01-01

    The link between academic performance in secondary education and the subsequent performance of students studying architecture at university level is commonly questioned by educators and admissions tutors. This paper investigates the potential for using measures of cognitive style and spatial ability as predictors of future potential in…

  3. A High Performance COTS Based Computer Architecture

    Science.gov (United States)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation hardened components and COTS components is so significant that COTS components are very attractive for use in mass and power constrained systems. However, using COTS components in space is not straightforward as one must account for the effects of the space environment on the COTS components' behavior. In the frame of the ESA funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS based architecture for high performance processing. The rest of the paper is organized as follows: in a first section we will start by recapitulating the interests and constraints of using COTS components for space applications; then we will briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we will describe the prototyping activities executed during the HiP CBC project.

  4. Performance Analysis of an Astrophysical Simulation Code on the Intel Xeon Phi Architecture

    OpenAIRE

    Noormofidi, Vahid; Atlas, Susan R.; Duan, Huaiyu

    2015-01-01

    We have developed the astrophysical simulation code XFLAT to study neutrino oscillations in supernovae. XFLAT is designed to utilize multiple levels of parallelism through MPI, OpenMP, and SIMD instructions (vectorization). It can run on both CPU and Xeon Phi co-processors based on the Intel Many Integrated Core Architecture (MIC). We analyze the performance of XFLAT on configurations with CPU only, Xeon Phi only and both CPU and Xeon Phi. We also investigate the impact of I/O and the multi-n...

  5. Architecture and performance of neural networks for efficient A/C control in buildings

    International Nuclear Information System (INIS)

    Mahmoud, Mohamed A.; Ben-Nakhi, Abdullatif E.

    2003-01-01

    The feasibility of using neural networks (NNs) for optimizing air conditioning (AC) setback scheduling in public buildings was investigated. The main focus is on optimizing the network architecture in order to achieve best performance. To save energy, the temperature inside public buildings is allowed to rise after business hours by setting back the thermostat. The objective is to predict the time of the end of thermostat setback (EoS) such that the design temperature inside the building is restored in time for the start of business hours. State of the art building simulation software, ESP-r, was used to generate a database that covered the years 1995-1999. The software was used to calculate the EoS for two office buildings using the climate records in Kuwait. The EoS data for 1995 and 1996 were used for training and testing the NNs. The robustness of the trained NN was tested by applying them to a 'production' data set (1997-1999), which the networks have never 'seen' before. For each of the six different NN architectures evaluated, parametric studies were performed to determine the network parameters that best predict the EoS. External hourly temperature readings were used as network inputs, and the thermostat end of setback (EoS) is the output. The NN predictions were improved by developing a neural control scheme (NC). This scheme is based on using the temperature readings as they become available. For each NN architecture considered, six NNs were designed and trained for this purpose. The performance of the NN analysis was evaluated using a statistical indicator (the coefficient of multiple determination) and by statistical analysis of the error patterns, including ANOVA (analysis of variance). The results show that the NC, when used with a properly designed NN, is a powerful instrument for optimizing AC setback scheduling based only on external temperature records
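
    The sketch below is not the authors' ESP-r-driven networks; it only illustrates, on synthetic data, the general shape of the task: a small regression network mapping 24 hourly external temperatures to an end-of-setback time. The data-generating rule, network size and library (scikit-learn) are assumptions made purely for illustration.

```python
# Synthetic stand-in for the EoS regression task (not the authors' networks).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = 20.0 + 15.0 * rng.random((2000, 24))          # fake hourly outdoor temps (C)
# Invented rule: the EoS target depends mostly on afternoon temperatures.
y = 2.0 + 0.2 * (X[:, 12:18].mean(axis=1) - 20.0) + 0.05 * rng.standard_normal(2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8),
                                   max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```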

  6. Using Multimedia for Teaching Analysis in History of Modern Architecture.

    Science.gov (United States)

    Perryman, Garry

    This paper presents a case for the development and support of a computer-based interactive multimedia program for teaching analysis in community college architecture design programs. Analysis in architecture design is an extremely important strategy for the teaching of higher-order thinking skills, which senior schools of architecture look for in…

  7. Thermal performance measurement and application of a multilayer insulator for emergency architecture

    International Nuclear Information System (INIS)

    Salvalai, Graziano; Imperadori, Marco; Scaccabarozzi, Diego; Pusceddu, Cristina

    2015-01-01

    [Highlights fragment] ...for emergency architecture. • Shelter model performance analysis by means of the TRNSYS 17 environment.

  8. High-performance, scalable optical network-on-chip architectures

    Science.gov (United States)

    Tan, Xianfang

    The rapid advance of technology enables a large number of processing cores to be integrated into a single chip which is called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic wired electronic networks-on-chip (NoC) became a bottleneck because of the unbearable latency in data transmission and extremely high energy consumption on chip. Optical networks-on-chip (ONoC) has been proposed as a promising alternative paradigm for electronic NoC with the benefits of optical signaling communication such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focus on the design of high-performance and scalable ONoC architectures and the contributions are highlighted as follow: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed. A method for developing any sized GWOR is introduced. GWOR is a scalable non-blocking ONoC architecture with simple structure, low cost and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different type of GWORs into one network. 3. The redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keep the topology of GWOR unchanged by replacing the general MRRs with comb switches. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed in which GWORs are used for global communication and electronic routers are used for local communication. The proposed HONoC uses less numbers of electronic routers and links than its counterpart of electronic BFT-based NoC. It takes the advantages of

  9. INFORMATION ARCHITECTURE ANALYSIS USING BUSINESS INTELLIGENCE TOOLS BASED ON THE INFORMATION NEEDS OF EXECUTIVES

    Directory of Open Access Journals (Sweden)

    Fabricio Sobrosa Affeldt

    2013-08-01

    Full Text Available Devising an information architecture system that enables an organization to centralize information regarding its operational, managerial and strategic performance is one of the challenges currently facing information technology. The present study aimed to analyze an information architecture system developed using Business Intelligence (BI technology. The analysis was performed based on a questionnaire enquiring as to whether the information needs of executives were met during the process. A theoretical framework was applied consisting of information architecture and BI technology, using a case study methodology. Results indicated that the transaction processing systems studied did not meet the information needs of company executives. Information architecture using data warehousing, online analytical processing (OLAP tools and data mining may provide a more agile means of meeting these needs. However, some items must be included and others modified, in addition to improving the culture of information use by company executives.

  10. Roofline model toolkit: A practical tool for architectural and program analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Yu Jung [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Van Straalen, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ligocki, Terry J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cordery, Matthew J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wright, Nicholas J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hall, Mary W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-04-18

    We present preliminary results of the Roofline Toolkit for multicore, many core, and accelerated architectures. This paper focuses on the processor architecture characterization engine, a collection of portable instrumented micro benchmarks implemented with Message Passing Interface (MPI), and OpenMP used to express thread-level parallelism. These benchmarks are specialized to quantify the behavior of different architectural features. Compared to previous work on performance characterization, these microbenchmarks focus on capturing the performance of each level of the memory hierarchy, along with thread-level parallelism, instruction-level parallelism and explicit SIMD parallelism, measured in the context of the compilers and run-time environments. We also measure sustained PCIe throughput with four GPU memory managed mechanisms. By combining results from the architecture characterization with the Roofline model based solely on architectural specifications, this work offers insights for performance prediction of current and future architectures and their software systems. To that end, we instrument three applications and plot their resultant performance on the corresponding Roofline model when run on a Blue Gene/Q architecture.
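
    The Roofline bound itself is a one-line formula, sketched below with illustrative ceilings (not the measured Blue Gene/Q values from the paper): attainable performance is the lesser of the compute peak and the bandwidth peak times arithmetic intensity.

```python
# Attainable performance = min(compute peak, bandwidth peak * arithmetic
# intensity). The two ceilings below are illustrative, not measured values.
PEAK_GFLOPS = 200.0
PEAK_GB_PER_S = 30.0

def roofline(arithmetic_intensity):              # FLOPs per byte moved
    return min(PEAK_GFLOPS, PEAK_GB_PER_S * arithmetic_intensity)

for ai in (0.25, 1.0, 4.0, 16.0):
    print(f"AI = {ai:5.2f} FLOP/byte -> bound = {roofline(ai):6.1f} GFLOP/s")
```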

  11. (Invited) Wavy Channel TFT Architecture for High Performance Oxide Based Displays

    KAUST Repository

    Hanna, Amir; Hussain, Aftab M.; Ghoneim, Mohamed T.; Rojas, Jhonathan Prieto; Sevilla, Galo T.; Hussain, Muhammad Mustafa

    2015-01-01

    We show the effectiveness of wavy channel architecture for thin film transistor application for increased output current. This specific architecture allows increased width of the device by adopting a corrugated shape of the substrate without any further real estate penalty. The performance improvement is attributed not only to the increased transistor width, but also to enhanced applied electric field in the channel due to the wavy architecture.

  12. (Invited) Wavy Channel TFT Architecture for High Performance Oxide Based Displays

    KAUST Repository

    Hanna, Amir

    2015-05-22

    We show the effectiveness of wavy channel architecture for thin film transistor application for increased output current. This specific architecture allows increased width of the device by adopting a corrugated shape of the substrate without any further real estate penalty. The performance improvement is attributed not only to the increased transistor width, but also to enhanced applied electric field in the channel due to the wavy architecture.

  13. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    Science.gov (United States)

    Maly, K.

    1998-01-01

    Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during its execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and could be distributed in various locations in the applications environment which complicates the management decisions process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications such as debugging and reactive control tools to improve the application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system which is a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an intrinsic component integrated

  14. Analysis and design of software ecosystem architectures – Towards the 4S telemedicine ecosystem

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius; Kyng, Morten

    2014-01-01

    ...application stove-pipes that inhibit the adoption of telemedical solutions. To which extent can a software ecosystem approach to telemedicine alleviate this? Objective: In this article, we define the concept of software ecosystem architecture as the structure(s) of a software ecosystem comprising elements... For i), we performed a descriptive, revelatory case study of the Danish telemedicine ecosystem and, for ii), we experimentally designed, implemented, and evaluated the architecture of 4S. Results: We contribute in three areas. First, we define the software ecosystem architecture concept that captures organization... experience in creating and evolving the 4S telemedicine ecosystem. Conclusion: The concept of software ecosystem architecture can be used analytically and constructively in, respectively, the analysis and design of software ecosystems.

  15. In-Depth Modeling of the UNIX Operating System for Architectural Cyber Security Analysis

    OpenAIRE

    Vernotte, Alexandre; Johnson, Pontus; Ekstedt, Mathias; Lagerström, Robert

    2017-01-01

    ICT systems have become an integral part of business and life. At the same time, these systems have become extremely complex. Such systems contain numerous vulnerabilities waiting to be exploited by potential threat actors. pwnPr3d is a novel modelling approach that performs automated architectural analysis with the objective of measuring the cyber security of the modeled architecture. Its integrated modelling language allows users to model software and hardware components with a great level of detail...

  16. Architectural Narratives

    DEFF Research Database (Denmark)

    Kiib, Hans

    2010-01-01

    In this essay, I focus on the combination of programs and the architecture of cultural projects that have emerged within the last few years. These projects are characterized as "hybrid cultural projects," because they intend to combine experience with entertainment, play, and learning. This essay ... a functional framework for these concepts, but tries increasingly to endow the main idea of the cultural project with a spatially aesthetic expression - a shift towards "experience architecture." A great number of these projects typically recycle and reinterpret narratives related to historical buildings and architectural heritage; another group tries to embed new performative technologies in expressive architectural representation. Finally, this essay provides a theoretical framework for the analysis of the political rationales of these projects and for the architectural representation that bridges the gap between...

  17. Architecture for interlock systems: reliability analysis with regard to safety and availability

    International Nuclear Information System (INIS)

    Wagner, S.; Apollonio, A.; Schmidt, R.; Zerlauth, M.; Vergara-Fernandez, A.

    2012-01-01

    For particle accelerators like the LHC and other large experimental physics facilities like ITER, machine protection relies on complex interlock systems. In the design of interlock loops for the signal exchange in machine protection systems, the choice of the hardware architecture impacts machine safety and availability. The reliable performance of a machine stop (leaving the machine in a safe state) in case of an emergency is an inherent requirement. The constraints in terms of machine availability, on the other hand, may differ from one facility to another. Spurious machine stops, lowering machine availability, may to a certain extent be tolerated in facilities where they do not cause undue equipment wear-out. In order to compare various interlock loop architectures in terms of safety and availability, the occurrence frequencies of the related scenarios have been calculated in a reliability analysis, using a generic analytical model. This paper presents the results and illustrates the potential of the analysis method for supporting the choice of interlock system architectures. The results show the advantages of a 2oo3 architecture (3 redundant lines with 2-out-of-3 voting) over the six architectures under consideration for systems with high requirements in both safety and availability.
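    As a back-of-the-envelope illustration of why 2-out-of-3 voting helps on both axes, the sketch below compares a single channel with a 2oo3 arrangement of independent channels; the per-channel probabilities are placeholders, not values from the analysis.

```python
from math import comb

def k_out_of_n(p_fail, k, n):
    """Probability that at least k of n independent channels are in the failed state."""
    return sum(comb(n, i) * p_fail**i * (1 - p_fail)**(n - i) for i in range(k, n + 1))

p_miss = 1e-4      # placeholder: a channel fails to execute a required stop (unsafe failure)
p_spurious = 1e-2  # placeholder: a channel requests a stop when none is needed (false trip)

# With 2oo3 voting, a required stop is missed only if at least 2 channels miss it,
# and a spurious stop occurs only if at least 2 channels trip spuriously.
print("missed stop, single vs 2oo3 :", p_miss, k_out_of_n(p_miss, 2, 3))
print("false stop,  single vs 2oo3 :", p_spurious, k_out_of_n(p_spurious, 2, 3))
```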

  18. L-Band Digital Aeronautical Communications System Engineering - Concepts of Use, Systems Performance, Requirements, and Architectures

    Science.gov (United States)

    Zelkin, Natalie; Henriksen, Stephen

    2010-01-01

    This NASA Contractor Report summarizes and documents the work performed to develop concepts of use (ConUse) and high-level system requirements and architecture for the proposed L-band (960 to 1164 MHz) terrestrial en route communications system. This work was completed as a follow-on to the technology assessment conducted by NASA Glenn Research Center and ITT for the Future Communications Study (FCS). ITT assessed air-to-ground (A/G) communications concepts of use and operations presented in relevant NAS-level, international, and NAS-system-level documents to derive the appropriate ConUse relevant to potential A/G communications applications and services for domestic continental airspace. ITT also leveraged prior concepts of use developed during the earlier phases of the FCS. A middle-out functional architecture was adopted by merging the functional system requirements identified in the bottom-up assessment of existing requirements with those derived as a result of the top-down analysis of ConUse and higher level functional requirements. Initial end-to-end system performance requirements were derived to define system capabilities based on the functional requirements and on NAS-SR-1000 and the Operational Performance Assessment conducted as part of the COCR. A high-level notional architecture of the L-DACS supporting A/G communication was derived from the functional architecture and requirements.

  19. Benchmarking high performance computing architectures with CMS’ skeleton framework

    Science.gov (United States)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high-throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Threading Building Blocks (TBB) library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  20. UMA/GAN network architecture analysis

    Science.gov (United States)

    Yang, Liang; Li, Wensheng; Deng, Chunjian; Lv, Yi

    2009-07-01

    This paper critically analyzes the architecture of UMA, which is one of the Fixed Mobile Convergence (FMC) solutions and is also included by the Third Generation Partnership Project (3GPP). In the UMA/GAN network architecture, the UMA Network Controller (UNC) is the key equipment that connects the cellular core network and the mobile station (MS). A UMA network can be easily integrated into existing cellular networks without influencing the mobile core network, and can provide high-quality mobile services with preferentially priced indoor voice and data usage. This helps to improve the subscriber's experience. On the other hand, the UMA/GAN architecture helps to integrate other radio technologies into the cellular network, including WiFi, Bluetooth, WiMAX and so on. This offers traditional mobile operators an opportunity to integrate WiMAX technology into the cellular network. At the end of this article, we also give an analysis of the potential influence on the cellular core networks caused by the UMA network.

  1. Time-Predictable Computer Architecture

    Directory of Open Access Journals (Sweden)

    Schoeberl Martin

    2009-01-01

    Today's general-purpose processors are optimized for maximum throughput. Real-time systems need a processor with both a reasonable and a known worst-case execution time (WCET). Features such as pipelines with instruction dependencies, caches, branch prediction, and out-of-order execution complicate WCET analysis and lead to very conservative estimates. In this paper, we evaluate the issues of current architectures with respect to WCET analysis. Then, we propose solutions for a time-predictable computer architecture. The proposed architecture is evaluated with implementation of some features in a Java processor. The resulting processor is a good target for WCET analysis and still performs well in the average case.

  2. A State-Based Modeling Approach for Efficient Performance Evaluation of Embedded System Architectures at Transaction Level

    Directory of Open Access Journals (Sweden)

    Anthony Barreteau

    2012-01-01

    Abstract models are necessary to assist system architects in the evaluation process of hardware/software architectures and to cope with the still increasing complexity of embedded systems. Efficient methods are required to create reliable models of system architectures and to allow early performance evaluation and fast exploration of the design space. In this paper, we present a specific transaction level modeling approach for performance evaluation of hardware/software architectures. This approach relies on a generic execution model that requires light modeling effort. Created models are used to evaluate by simulation the expected processing and memory resources according to various architectures. The proposed execution model relies on a specific computation method defined to improve the simulation speed of transaction level models. The benefits of the proposed approach are highlighted through two case studies. The first case study is a didactic example illustrating the modeling approach. In this example, a simulation speed-up by a factor of 7.62 is achieved by using the proposed computation method. The second case study concerns the analysis of a communication receiver supporting part of the physical layer of the LTE protocol. In this case study, architecture exploration is conducted in order to improve the allocation of processing functions.

  3. Utilizing a multiprocessor architecture - The performance of MIDAS

    International Nuclear Information System (INIS)

    Maples, C.; Logan, D.; Meng, J.; Rathbun, W.; Weaver, D.

    1983-01-01

    The MIDAS architecture organizes multiple CPUs into clusters called distributed subsystems. Each subsystem consists of an array of processors controlled by a supervisory CPU. The multiprocessor array is composed of commercial CPUs (with floating point hardware) and specialized processing elements. Interprocessor communication within the array may occur either through switched memory modules or common shared memory. The architecture permits multiple processors to be focused on single problems. A distributed subsystem has been constructed and tested. It currently consists of a supervisor CPU; 16 blocks of independently switchable memory; 9 general purpose, VAX-class CPUs; and 2 specialized pipelined processors to handle I/O. Results on a variety of problems indicate that the subsystem performs 8 to 15 times faster than a standard computer with an identical CPU. The difference in performance represents the effect of differing CPU and I/O requirements

  4. CALS Baseline Architecture Analysis of Weapons System. Technical Information: Army, Draft. Volume 8

    Science.gov (United States)

    1989-09-01

    This effort was performed to provide a common framework for analysis and planning of CALS initiatives across the military services, leading eventually to the development of a common DoD-wide architecture for CALS. This study addresses Army technical ...

  5. Performance evaluation of scientific programs on advanced architecture computers

    International Nuclear Information System (INIS)

    Walker, D.W.; Messina, P.; Baille, C.F.

    1988-01-01

    Recently a number of advanced architecture machines have become commercially available. These new machines promise better cost-performance than traditional computers, and some of them have the potential of competing with current supercomputers, such as the Cray X-MP, in terms of maximum performance. This paper describes an on-going project to evaluate a broad range of advanced architecture computers using a number of complete scientific application programs. The computers to be evaluated include distributed-memory machines such as the NCUBE, INTEL and Caltech/JPL hypercubes and the MEIKO computing surface; shared-memory, bus architecture machines such as the Sequent Balance and the Alliant; very long instruction word machines such as the Multiflow Trace 7/200 computer; traditional supercomputers such as the Cray X-MP and Cray-2; and SIMD machines such as the Connection Machine. Currently 11 application codes from a number of scientific disciplines have been selected, although it is not intended to run all codes on all machines. Results are presented for two of the codes (QCD and missile tracking), and future work is proposed.

  6. Performance Evaluation of 14 Neural Network Architectures Used for Predicting Heat Transfer Characteristics of Engine Oils

    Science.gov (United States)

    Al-Ajmi, R. M.; Abou-Ziyan, H. Z.; Mahmoud, M. A.

    2012-01-01

    This paper reports the results of a comprehensive study that aimed at identifying the best neural network architecture and parameters to predict subcooled boiling characteristics of engine oils. A total of 57 different neural networks (NNs), derived from 14 different NN architectures, were evaluated for four different prediction cases. The NNs were trained on experimental datasets obtained for five engine oils of different chemical compositions. The performance of each NN was evaluated using a rigorous statistical analysis as well as careful examination of the smoothness of the predicted boiling curves. One NN, out of the 57 evaluated, correctly predicted the boiling curves for all cases considered, either for individual oils or for all oils taken together. It was found that the pattern selection and weight update techniques strongly affect the performance of the NNs. It was also revealed that the use of descriptive statistical analysis, such as R2, mean error, standard deviation, and T and slope tests, is a necessary but not sufficient condition for evaluating NN performance. The performance criteria should also include inspection of the smoothness of the predicted curves, either visually or by plotting the slopes of these curves.
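    As a small illustration of the descriptive statistics mentioned above (not the authors' exact evaluation code), the basic metrics plus a crude smoothness check can be computed as follows:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """R^2, mean error, standard deviation of the error, plus a crude smoothness
    indicator (number of sign changes in the slope of the predicted curve)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "mean_error": float(err.mean()),
        "std_error": float(err.std(ddof=1)),
        "slope_sign_changes": int(np.sum(np.diff(np.sign(np.diff(y_pred))) != 0)),
    }

# regression_metrics(measured_heat_flux, predicted_heat_flux)  # hypothetical arrays
```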

  7. Performance Evaluation of Computation and Communication Kernels of the Fast Multipole Method on Intel Manycore Architecture

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed; Al Farhan, Mohammed; Yokota, Rio; Keyes, David E.

    2017-01-01

    Manycore optimizations are essential for achieving performance worthy of anticipated exascale systems. Utilization of manycore chips is inevitable to attain the desired floating point performance of these energy-austere systems. In this work, we revisit ExaFMM, the open source Fast Multipole Method (FMM) library, in light of highly tuned shared-memory parallelization and detailed performance analysis on the new highly parallel Intel manycore architecture, Knights Landing (KNL). We assess scalability and performance gain using task-based parallelism of the FMM tree traversal. We also provide an in-depth analysis of the most computationally intensive part of the traversal kernel (i.e., the particle-to-particle (P2P) kernel), by comparing its performance across KNL and Broadwell architectures. We quantify different configurations that exploit the on-chip 512-bit vector units within different task-based threading paradigms. MPI communication-reducing and NUMA-aware approaches for the FMM’s global tree data exchange are examined with different cluster modes of KNL. By applying several algorithm- and architecture-aware optimizations for FMM, we show that the N-Body kernel on 256 threads of KNL achieves on average 2.8× speedup compared to the non-vectorized version, whereas on 56 threads of Broadwell, it achieves on average 2.9× speedup. In addition, the tree traversal kernel on KNL scales monotonically up to 256 threads with task-based programming models. The MPI-based communication-reducing algorithms show expected improvements of the data locality across the KNL on-chip network.
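    To make concrete what the P2P (particle-to-particle) kernel mentioned above computes, here is a minimal direct-sum sketch in NumPy; ExaFMM's production kernel is vectorized, task-parallel C++, so this only shows the arithmetic, not the implementation.

```python
import numpy as np

def p2p(targets, sources, charges, eps=1e-12):
    """Direct particle-to-particle interaction: potential at each target point,
    phi_i = sum_j q_j / |x_i - x_j|  (softened by eps to avoid division by zero)."""
    diff = targets[:, None, :] - sources[None, :, :]          # shape (n_targets, n_sources, 3)
    r = np.sqrt(np.sum(diff * diff, axis=-1) + eps)
    return (charges[None, :] / r).sum(axis=1)

rng = np.random.default_rng(0)
src, q, tgt = rng.random((1000, 3)), rng.random(1000), rng.random((100, 3))
print(p2p(tgt, src, q)[:3])
```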

  8. Performance Evaluation of Computation and Communication Kernels of the Fast Multipole Method on Intel Manycore Architecture

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed

    2017-07-31

    Manycore optimizations are essential for achieving performance worthy of anticipated exascale systems. Utilization of manycore chips is inevitable to attain the desired floating point performance of these energy-austere systems. In this work, we revisit ExaFMM, the open source Fast Multipole Method (FMM) library, in light of highly tuned shared-memory parallelization and detailed performance analysis on the new highly parallel Intel manycore architecture, Knights Landing (KNL). We assess scalability and performance gain using task-based parallelism of the FMM tree traversal. We also provide an in-depth analysis of the most computationally intensive part of the traversal kernel (i.e., the particle-to-particle (P2P) kernel), by comparing its performance across KNL and Broadwell architectures. We quantify different configurations that exploit the on-chip 512-bit vector units within different task-based threading paradigms. MPI communication-reducing and NUMA-aware approaches for the FMM’s global tree data exchange are examined with different cluster modes of KNL. By applying several algorithm- and architecture-aware optimizations for FMM, we show that the N-Body kernel on 256 threads of KNL achieves on average 2.8× speedup compared to the non-vectorized version, whereas on 56 threads of Broadwell, it achieves on average 2.9× speedup. In addition, the tree traversal kernel on KNL scales monotonically up to 256 threads with task-based programming models. The MPI-based communication-reducing algorithms show expected improvements of the data locality across the KNL on-chip network.

  9. A Cross-Platform Infrastructure for Scalable Runtime Application Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jack Dongarra; Shirley Moore; Bart Miller, Jeffrey Hollingsworth; Tracy Rafferty

    2005-03-15

    The purpose of this project was to build an extensible cross-platform infrastructure to facilitate the development of accurate and portable performance analysis tools for current and future high performance computing (HPC) architectures. Major accomplishments include tools and techniques for multidimensional performance analysis, as well as improved support for dynamic performance monitoring of multithreaded and multiprocess applications. Previous performance tool development has been limited by the burden of having to re-write a platform-dependent low-level substrate for each architecture/operating system pair in order to obtain the necessary performance data from the system. Manual interpretation of performance data is not scalable for large-scale long-running applications. The infrastructure developed by this project provides a foundation for building portable and scalable performance analysis tools, with the end goal being to provide application developers with the information they need to analyze, understand, and tune the performance of terascale applications on HPC architectures. The backend portion of the infrastructure provides runtime instrumentation capability and access to hardware performance counters, with thread-safety for shared memory environments and a communication substrate to support instrumentation of multiprocess and distributed programs. Front end interfaces provides tool developers with a well-defined, platform-independent set of calls for requesting performance data. End-user tools have been developed that demonstrate runtime data collection, on-line and off-line analysis of performance data, and multidimensional performance analysis. The infrastructure is based on two underlying performance instrumentation technologies. These technologies are the PAPI cross-platform library interface to hardware performance counters and the cross-platform Dyninst library interface for runtime modification of executable images. The Paradyn and KOJAK

  10. Secure thin client architecture for DICOM image analysis

    Science.gov (United States)

    Mogatala, Harsha V. R.; Gallet, Jacqueline

    2005-04-01

    This paper presents a concept of Secure Thin Client (STC) Architecture for Digital Imaging and Communications in Medicine (DICOM) image analysis over the Internet. The STC Architecture provides in-depth analysis and design of customized reports for DICOM images using drag-and-drop and data warehouse technology. Using a personal computer and a common set of browsing software, STC can be used for analyzing and reporting detailed patient information, type of examination, date, Computed Tomography (CT) dose index, and other relevant information stored within the image header files as well as in the hospital databases. The STC Architecture is a three-tier architecture. The first tier consists of a drag-and-drop web-based interface and web server, which provides customized analysis and reporting ability to the users. The second tier consists of an online analytical processing (OLAP) server and database system, which serves fast, real-time, aggregated multi-dimensional data using OLAP technology. The third tier consists of a smart algorithm-based software program which extracts DICOM tags from CT images in this particular application, irrespective of CT vendor, and transfers these tags into a secure database system. This architecture provides the Winnipeg Regional Health Authority (WRHA) with quality indicators for CT examinations in the hospitals. It also provides health care professionals with an analytical tool to optimize radiation dose and image quality parameters. The information is provided to the user by way of a secure socket layer (SSL) and role-based security criteria over the Internet. Although this particular application has been developed for the WRHA, this paper also discusses the effort to extend the architecture to other hospitals in the region. Any DICOM tag from any imaging modality could be tracked with this software.
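    As an illustration of the tag-extraction step in the third tier, the sketch below uses the open-source pydicom library to pull a few header fields from a CT file; the field list and the database hand-off are assumptions for this example, not the WRHA implementation.

```python
import pydicom  # pip install pydicom

def extract_ct_tags(path):
    """Read a DICOM header and return a few fields relevant to dose/QA reporting.
    Tags that are absent in the file come back as None."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)  # header only, skip pixel data
    keywords = ["Modality", "Manufacturer", "StudyDate", "KVP", "CTDIvol"]
    return {kw: ds.get(kw) for kw in keywords}

# row = extract_ct_tags("example_ct.dcm")   # hypothetical file; insert `row` into the reporting DB
```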

  11. Performance evaluation of enterprise architecture with a formal fuzzy model (FPN

    Directory of Open Access Journals (Sweden)

    Ashkan Marahel

    2012-10-01

    Preparing an enterprise architecture is a complicated procedure, in which a framework imposes regularity on the structure and a style directs the behavior, so that complexity can be controlled. Because behavior takes precedence over structure in architecture, evaluating the architecture's performance is necessary in order to distinguish one candidate behavior from the others. An enterprise architecture cannot be organized without the benefit of a logical structure, and a framework provides a logical structure for classifying architectural outputs. Among the common architectural frameworks, C4ISR is one of the most appropriate because of the methodology of its production, its level of aggregation capability, and its minor revisions. The C4ISR framework describes the architecture in three views, using documents called products. Since there are always uncertainties in developing information systems, this paper uses a fuzzy extension of UML (Fuzzy-UML) that covers both the structure and the behavior of the system, and applies Fuzzy Petri nets to analyze the developed system.

  12. Power and performance software analysis and optimization

    CERN Document Server

    Kukunas, Jim

    2015-01-01

    Power and Performance: Software Analysis and Optimization is a guide to solving performance problems in modern Linux systems. Power-efficient chips are no help if the software those chips run on is inefficient. Starting with the necessary architectural background as a foundation, the book demonstrates the proper usage of performance analysis tools in order to pinpoint the cause of performance problems, and includes best practices for handling common performance issues those tools identify. Provides expert perspective from a key member of Intel's optimization team on how processors and memory

  13. Architectural design and energy performance; Conception architecturale et performance energetique

    Energy Technology Data Exchange (ETDEWEB)

    Beaud, Ph. [Agence de l' Environnement et de la Maitrise de l' Energie, (ADEME), 06 - Valbonne (France); Pouget, A. [Bureau Etude Thermique, 75 - Paris (France); Sesolis, B. [TRIBU, 75 - Paris (France)] [and others

    2000-07-01

    This seminar day was organized around the energy performance of architecture, in three parts. The first part dealt with the design of new buildings and private houses; simulation tools for energy optimization and design practice were discussed. The second part was devoted to the new 2000 regulation, with an open discussion on the regulatory costs. The last part looked ahead to 2015, taking into account the French programme to combat the greenhouse effect, the limitation of air-conditioning consumption, and the definition of a quality label for energy performance. (A.L.B.)

  14. Architecture Students' Perceptions of Their Learning Environment and Their Academic Performance

    Science.gov (United States)

    Oluwatayo, Adedapo Adewunmi; Aderonmu, Peter A.; Aduwo, Egidario B.

    2015-01-01

    Scholars have agreed that the way in which students perceive their learning environments influences their academic performance. Empirical studies that focus on architecture students, however, have been very scarce. This is the gap that this study attempts to fill. A questionnaire survey of 273 students in a school of architecture in Nigeria…

  15. The analysis of cultural architectural trends in Crisan locality

    Directory of Open Access Journals (Sweden)

    SELA Florentina

    2010-09-01

    The paper presents data on the identification and analysis of traditional architectural elements in Crisan locality, where tourism activity is in continuous development. The field research (during November 2007) enabled us to develop a qualitative and quantitative analysis in terms of the identification of traditional architecture elements, their conservation status, the frequency of use of traditional building materials, and the decorative elements and specific colors used in construction architecture. Further, based on the collected data, a chart of the distribution of the Traditional Architecture Index (TAI) versus the distance from the center of Crisan locality was produced, showing that in Crisan locality houses were and are built without taking into account any rule, thus destroying the traditional architecture.

  16. Performance Analysis of FEM Algorithms on GPU and Many-Core Architectures

    KAUST Repository

    Khurram, Rooh

    2015-04-27

    The roadmaps of the leading supercomputer manufacturers are based on hybrid systems, which consist of a mix of conventional processors and accelerators. This trend is mainly due to the fact that the power consumption cost of future CPU-only exascale systems will be unsustainable, thus accelerators such as graphics processing units (GPUs) and many-integrated-core (MIC) processors will likely be an integral part of the TOP500 (http://www.top500.org/) supercomputers beyond 2020. The emerging supercomputer architecture will bring new challenges for code developers. Continuum mechanics codes will be particularly affected, because the traditional synchronous implicit solvers will probably not scale on hybrid exascale machines. In the previous study [1], we reported on the performance of a conjugate gradient based mesh motion algorithm [2] on Sandy Bridge, Xeon Phi, and K20c. In the present study we report on a comparative study of finite element codes, using the PETSc and AmgX solvers on CPUs and GPUs, respectively [3,4]. We believe this study will be a good starting point for FEM code developers who are contemplating a CPU to accelerator transition.
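    For readers unfamiliar with the solver being benchmarked, a bare-bones (unpreconditioned) conjugate gradient iteration looks like the sketch below; the production codes referenced above use PETSc/AmgX with preconditioning, so this is only a reference implementation of the basic algorithm.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x           # residual
    p = r.copy()            # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]]); b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # approximately [0.0909, 0.6364]
```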

  17. Change Impact Analysis of Crosscutting in Software Architectural Design

    NARCIS (Netherlands)

    van den Berg, Klaas

    2006-01-01

    Software architectures should be amenable to changes in user requirements and implementation technology. The analysis of the impact of these changes can be based on traceability of architectural design elements. Design elements have dependencies with other software artifacts but also evolve in time.

  18. A price and performance comparison of three different storage architectures for data in cloud-based systems

    Science.gov (United States)

    Gallagher, J. H. R.; Jelenak, A.; Potter, N.; Fulker, D. W.; Habermann, T.

    2017-12-01

    Providing data services based on cloud computing technology that are equivalent to those developed for traditional computing and storage systems is critical for successful migration to cloud-based architectures for data production, scientific analysis and storage. OPeNDAP Web-service capabilities (comprising the Data Access Protocol (DAP) specification plus open-source software for realizing DAP in servers and clients) are among the most widely deployed means for achieving data-as-a-service functionality in the Earth sciences. OPeNDAP services are especially common in traditional data center environments where servers offer access to datasets stored in (very large) file systems, and a preponderance of the source data for these services is stored in the Hierarchical Data Format Version 5 (HDF5). Three candidate architectures for serving NASA satellite Earth Science HDF5 data via Hyrax running on Amazon Web Services (AWS) were developed and their performance examined for a set of representative use cases. The performance was assessed in terms of both runtime and incurred cost. The three architectures differ in how the HDF5 files are stored in the Amazon Simple Storage Service (S3) and how the Hyrax server (as an EC2 instance) retrieves their data. Results for both serial and parallel access to HDF5 data in S3 will be presented. While the study focused on HDF5 data, OPeNDAP and the Hyrax data server, the architectures are generic and the analysis can be extrapolated to many different data formats, web APIs, and data servers.
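    One of the access patterns such architectures rely on, reading a byte range of an HDF5 file stored as a single S3 object, can be sketched with boto3 as follows; the bucket, key, and offsets are placeholders for this example.

```python
import boto3  # AWS SDK for Python

def read_byte_range(bucket, key, start, length):
    """Fetch `length` bytes of an S3 object starting at `start` via an HTTP Range GET,
    the way a data server can subset a large HDF5 file without downloading all of it."""
    s3 = boto3.client("s3")
    resp = s3.get_object(Bucket=bucket, Key=key,
                         Range=f"bytes={start}-{start + length - 1}")
    return resp["Body"].read()

# chunk = read_byte_range("my-hdf5-bucket", "granule.h5", start=4096, length=1 << 20)
```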

  19. Analysis of Architecture Pattern Usage in Legacy System Architecture Documentation

    NARCIS (Netherlands)

    Harrison, Neil B.; Avgeriou, Paris

    2008-01-01

    Architecture patterns are an important tool in architectural design. However, while many architecture patterns have been identified, there is little in-depth understanding of their actual use in software architectures. For instance, there is no overview of how many patterns are used per system or

  20. NETRA: A parallel architecture for integrated vision systems 2: Algorithms and performance evaluation

    Science.gov (United States)

    Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra

    1989-01-01

    In part 1 architecture of NETRA is presented. A performance evaluation of NETRA using several common vision algorithms is also presented. Performance of algorithms when they are mapped on one cluster is described. It is shown that SIMD, MIMD, and systolic algorithms can be easily mapped onto processor clusters, and almost linear speedups are possible. For some algorithms, analytical performance results are compared with implementation performance results. It is observed that the analysis is very accurate. Performance analysis of parallel algorithms when mapped across clusters is presented. Mappings across clusters illustrate the importance and use of shared as well as distributed memory in achieving high performance. The parameters for evaluation are derived from the characteristics of the parallel algorithms, and these parameters are used to evaluate the alternative communication strategies in NETRA. Furthermore, the effect of communication interference from other processors in the system on the execution of an algorithm is studied. Using the analysis, performance of many algorithms with different characteristics is presented. It is observed that if communication speeds are matched with the computation speeds, good speedups are possible when algorithms are mapped across clusters.

  1. Enterprise architecture availability analysis using fault trees and stakeholder interviews

    Science.gov (United States)

    Närman, Per; Franke, Ulrik; König, Johan; Buschle, Markus; Ekstedt, Mathias

    2014-01-01

    The availability of enterprise information systems is a key concern for many organisations. This article describes a method for availability analysis based on Fault Tree Analysis and constructs from the ArchiMate enterprise architecture (EA) language. To test the quality of the method, several case studies within the banking and electrical utility industries were performed. Input data were collected through stakeholder interviews. The results from the case studies were compared with availability figures derived from log data to determine the accuracy of the method's predictions. In the five cases where accurate log data were available, the yearly downtime estimates were within eight hours of the actual downtimes. The cost of performing the analysis was low; no case study required more than 20 man-hours of work, making the method ideal for practitioners with an interest in obtaining rapid availability estimates of their enterprise information systems.
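    The core arithmetic of such a fault-tree-based availability analysis reduces to combining component availabilities through AND/OR gates; a minimal sketch with made-up numbers (not figures from the case studies) follows.

```python
def and_gate(avails):
    """All components must be up (series combination): A = prod(A_i)."""
    a = 1.0
    for x in avails:
        a *= x
    return a

def or_gate(avails):
    """The service survives if any redundant component is up: A = 1 - prod(1 - A_i)."""
    u = 1.0
    for x in avails:
        u *= (1.0 - x)
    return 1.0 - u

# Placeholder model: an application needs the network AND a database cluster,
# where the cluster consists of two redundant nodes.
availability = and_gate([0.999, or_gate([0.99, 0.99])])
print(availability, "=>", round((1 - availability) * 8760, 1), "hours of downtime per year")
```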

  2. The Sentinel-4 detectors: architecture and performance

    Science.gov (United States)

    Skegg, Michael P.; Hermsen, Markus; Hohn, Rüdiger; Williges, Christian; Woffinden, Charles; Levillain, Yves; Reulke, Ralf

    2017-09-01

    The Sentinel-4 instrument is an imaging spectrometer, developed by Airbus under ESA contract in the frame of the joint European Union (EU)/ESA COPERNICUS program. SENTINEL-4 will provide accurate measurements of trace gases from geostationary orbit, including key atmospheric constituents such as ozone, nitrogen dioxide, sulfur dioxide, formaldehyde, as well as aerosol and cloud properties. Key to achieving these atmospheric measurements are the two CCD detectors, covering the wavelengths in the ranges 305 nm to 500 nm (UVVIS) and 750 to 775 nm (NIR) respectively. The paper describes the architecture, and operation of these two CCD detectors, which have an unusually high full-well capacity and a very specific architecture and read-out sequence to match the requirements of the Sentinel- 4 instrument. The key performance aspects and their verification through measurement are presented, with a focus on an unusual, bi-modal dark signal generation rate observed during test.

  3. Detection of architectural distortion in prior screening mammograms using Gabor filters, phase portraits, fractal dimension, and texture analysis

    International Nuclear Information System (INIS)

    Rangayyan, Rangaraj M.; Prajna, Shormistha; Ayres, Fabio J.; Desautels, J.E.L.

    2008-01-01

    Mammography is a widely used screening tool for the early detection of breast cancer. One of the commonly missed signs of breast cancer is architectural distortion. The purpose of this study is to explore the application of fractal analysis and texture measures for the detection of architectural distortion in screening mammograms taken prior to the detection of breast cancer. A method based on Gabor filters and phase portrait analysis was used to detect initial candidates for sites of architectural distortion. A total of 386 regions of interest (ROIs) were automatically obtained from 14 "prior mammograms", including 21 ROIs related to architectural distortion. From the corresponding set of 14 "detection mammograms", 398 ROIs were obtained, including 18 related to breast cancer. For each ROI, the fractal dimension and Haralick's texture features were computed. The fractal dimension of the ROIs was calculated using the circular average power spectrum technique. The average fractal dimension of the normal (false-positive) ROIs was significantly higher than that of the ROIs with architectural distortion (p = 0.006). For the "prior mammograms", the best receiver operating characteristics (ROC) performance achieved, in terms of the area under the ROC curve, was 0.80 with a Bayesian classifier using four features including fractal dimension, entropy, sum entropy, and inverse difference moment. Analysis of the performance of the methods with free-response receiver operating characteristics indicated a sensitivity of 0.79 at 8.4 false positives per image in the detection of sites of architectural distortion in the "prior mammograms". Fractal dimension offers a promising way to detect the presence of architectural distortion in prior mammograms. (orig.)
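    The circular (radially averaged) power spectrum estimate of fractal dimension mentioned above can be sketched roughly as follows; the spectral-exponent-to-dimension relation used here (FD = (8 - beta)/2 for 2-D images) and the fitting range are assumptions of this illustration, not necessarily the paper's exact procedure.

```python
import numpy as np

def fractal_dimension_psd(roi):
    """Estimate the fractal dimension of a 2-D ROI from the slope of its
    radially averaged power spectrum, assuming P(f) ~ f**(-beta)."""
    roi = roi - roi.mean()
    psd = np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2
    cy, cx = psd.shape[0] // 2, psd.shape[1] // 2
    y, x = np.indices(psd.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    # radial (circular) average of the power spectrum
    radial_psd = np.bincount(r.ravel(), weights=psd.ravel()) / np.maximum(np.bincount(r.ravel()), 1)
    freqs = np.arange(1, min(cy, cx))                     # skip DC, stay below Nyquist
    slope, _ = np.polyfit(np.log(freqs), np.log(radial_psd[freqs]), 1)
    beta = -slope
    return (8.0 - beta) / 2.0

# fd = fractal_dimension_psd(roi_array)   # roi_array: 2-D NumPy array of the ROI
```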

  4. Paramedir: A Tool for Programmable Performance Analysis

    Science.gov (United States)

    Jost, Gabriele; Labarta, Jesus; Gimenez, Judit

    2004-01-01

    Performance analysis of parallel scientific applications is time consuming and requires great expertise in areas such as programming paradigms, system software, and computer hardware architectures. In this paper we describe a tool that facilitates the programmability of performance metric calculations thereby allowing the automation of the analysis and reducing the application development time. We demonstrate how the system can be used to capture knowledge and intuition acquired by advanced parallel programmers in order to be transferred to novice users.

  5. Performance Metrics Development Analysis for Information and Communications Technology Outsourcing: A Case Study

    Science.gov (United States)

    Travis, James L., III

    2014-01-01

    This study investigated how and to what extent the development and use of the OV-5a operational architecture decomposition tree (OADT) from the Department of Defense (DoD) Architecture Framework (DoDAF) affects requirements analysis with respect to complete performance metrics for performance-based services acquisition of ICT under rigid…

  6. Analysis of the new architecture proposal for the CMM control system

    International Nuclear Information System (INIS)

    Heikkilae, L.; Saarinen, H.; Aha, L.; Viinikainen, M.; Mattila, J.; Hahto, A.; Siuko, M.; Semeraro, L.

    2011-01-01

    While developing divertor remote handling maintenance systems at the Divertor Test Platform 2 facility, some risks and sensitivity points related to the Cassette Multifunctional Mover control system software were discovered and evaluated. The control system architecture has to simultaneously fulfill the demanding ITER remote handling requirements and face new requirements being uncovered during the trials. In particular, evolving non-functional requirements such as reliability and safety affect the control system architecture as it matures. An evaluation of the implications of architectural decisions is necessary before implementation efforts, as an architecture left to develop without evaluation may lead to a dead end and therefore to soaring development costs. After studying existing architecture analysis methods, an analysis method was developed to gain confidence to carry out the proposed changes.

  7. Laboratory infrastructure driven key performance indicator development using the smart grid architecture model

    DEFF Research Database (Denmark)

    Syed, Mazheruddin H.; Guillo-Sansano, Efren; Blair, Steven M.

    2017-01-01

    This study presents a methodology for collaboratively designing laboratory experiments and developing key performance indicators for the testing and validation of novel power system control architectures in multiple laboratory environments. The contribution makes use of the smart grid architecture model...

  8. From green architecture to architectural green

    DEFF Research Database (Denmark)

    Earon, Ofri

    2011-01-01

    The paper investigates the topic of green architecture from an architectural point of view and not an energy point of view. The purpose of the paper is to establish a debate about the architectural language and spatial characteristics of green architecture. In this light, green becomes an adjective that describes the architectural exclusivity of this particular architecture genre. The adjective green expresses architectural qualities differentiating green architecture from non-green architecture. Currently, adding trees and vegetation to the building's facade is the main architectural characteristic... they have overshadowed the architectural potential of green architecture. The paper questions how a green space should perform, look and function. Two examples are chosen to demonstrate thorough integrations between green and space. The examples are public buildings categorized as pavilions. One...

  9. Analysis of Parallel Burn Without Crossfeed TSTO RLV Architectures and Comparison to Parallel Burn With Crossfeed and Series Burn Architectures

    Science.gov (United States)

    Smith, Garrett; Phillips, Alan

    2002-01-01

    There are currently three dominant TSTO class architectures: Series Burn (SB), Parallel Burn with crossfeed (PBw/cf), and Parallel Burn without crossfeed (PBncf). The goal of this study was to determine what factors uniquely affect PBncf architectures, how each of these factors interact, and to determine from a performance perspective whether a PBncf vehicle could be competitive with a PBw/cf or SB vehicle using equivalent technology and assumptions. In all cases, performance was evaluated on a relative basis for a fixed payload and mission by comparing gross and dry vehicle masses of a closed vehicle. Propellant combinations studied were LOX:LH2 propelled orbiter and booster (HH) and LOX:kerosene booster with LOX:LH2 orbiter (KH). The study conclusions were: 1) a PBncf orbiter should be throttled as deeply as possible after launch until the staging point; 2) a detailed structural model is essential to accurate architecture analysis and evaluation; 3) a PBncf TSTO architecture is feasible for systems that stage at Mach 7; 3a) HH architectures can achieve a mass growth relative to PBw/cf of ratio and to the position of the orbiter required to align the nozzle heights at liftoff; 5) thrust-to-weight ratios of 1.3 at liftoff and between 1.0 and 0.9 when staging at Mach 7 appear to be close to ideal for PBncf vehicles; 6) performance for all vehicles studied is better when staged at Mach 7 instead of Mach 5. The study showed that a Series Burn architecture has the lowest gross mass for HH cases, and has the lowest dry mass for KH cases. The potential disadvantages of SB are the required use of an air-start for the orbiter engines and potential CG control issues. A Parallel Burn with crossfeed architecture solves both these problems, but the mechanics of a large bipropellant crossfeed system pose significant technical difficulties. Parallel Burn without crossfeed vehicles start both booster and orbiter engines on the ground and thus avoid both the risk of

  10. Hardware Architectures for the Correspondence Problem in Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Thomas Eide

    Method"has been developed in conjunction with the work on this thesis and has not previously been described. Also, during this project a combined image acquisition and compression board has been developed for a NASA sounding rocket. This circuit, a so-called Lightning Imager, is also described. Finally...... an optimized hardware architecture has been proposed in relation to the three matching methods mentioned above. Because of the cost required to physically implement and test the developed architecture, it has been decided todocument the performance of the architecture through theoretical proofs only....

  11. Network Analysis, Architecture, and Design

    CERN Document Server

    McCabe, James D

    2007-01-01

    Traditionally, networking has had little or no basis in analysis or architectural development, with designers relying on technologies they are most familiar with or being influenced by vendors or consultants. However, the landscape of networking has changed so that network services have now become one of the most important factors in the success of many third-generation networks. It has become an important part of the designer's job to define the problems that exist in the network, choose and analyze several optimization parameters during the analysis process, and then prioritize and evaluate...

  12. System architectures for telerobotic research

    Science.gov (United States)

    Harrison, F. Wallace

    1989-01-01

    Several activities related to the definition and creation of telerobotic systems are performed. The effort and investment required to create architectures for these complex systems can be enormous; however, the magnitude of the process can be reduced if structured design techniques are applied. A number of informal methodologies supporting certain aspects of the design process are available. More recently, prototypes of integrated tools supporting all phases of system design, from requirements analysis to code generation and hardware layout, have begun to appear. Activities related to the system architecture of telerobots are described, including current activities designed to provide a methodology for the comparison and quantitative analysis of alternative system architectures.

  13. LISA Mission and System architectures and performances

    International Nuclear Information System (INIS)

    Gath, Peter F; Weise, Dennis; Schulte, Hans-Reiner; Johann, Ulrich

    2009-01-01

    In the context of the LISA Mission Formulation Study, the LISA System was studied in detail and a new baseline architecture for the whole mission was established. This new baseline is the result of trade-offs at both mission and system level. The paper gives an overview of the different mission scenarios and configurations that were studied, together with their corresponding advantages and disadvantages as well as performance estimates. Differences in the required technologies and their influence on the overall performance budgets are highlighted for all configurations. For the selected baseline concept, a more detailed description of the configuration is given and open issues in the technologies involved are discussed.

  14. LISA Mission and System architectures and performances

    Energy Technology Data Exchange (ETDEWEB)

    Gath, Peter F; Weise, Dennis; Schulte, Hans-Reiner; Johann, Ulrich, E-mail: peter.gath@astrium.eads.ne [Astrium GmbH Satellites, 88039 Friedrichshafen (Germany)

    2009-03-01

    In the context of the LISA Mission Formulation Study, the LISA System was studied in detail and a new baseline architecture for the whole mission was established. This new baseline is the result of trade-offs at both mission and system level. The paper gives an overview of the different mission scenarios and configurations that were studied, together with their corresponding advantages and disadvantages as well as performance estimates. Differences in the required technologies and their influence on the overall performance budgets are highlighted for all configurations. For the selected baseline concept, a more detailed description of the configuration is given and open issues in the technologies involved are discussed.

  15. Marshall Application Realignment System (MARS) Architecture

    Science.gov (United States)

    Belshe, Andrea; Sutton, Mandy

    2010-01-01

    The Marshall Application Realignment System (MARS) Architecture project was established to meet the certification requirements of the Department of Defense Architecture Framework (DoDAF) V2.0 Federal Enterprise Architecture Certification (FEAC) Institute program and to provide added value to the Marshall Space Flight Center (MSFC) Application Portfolio Management process. The MARS Architecture aims to: (1) address the NASA MSFC Chief Information Officer (CIO) strategic initiative to improve Application Portfolio Management (APM) by optimizing investments and improving portfolio performance, and (2) develop a decision-aiding capability by which applications registered within the MSFC application portfolio can be analyzed and considered for retirement or decommission. The MARS Architecture describes a to-be target capability that supports application portfolio analysis against scoring measures (based on value) and overall portfolio performance objectives (based on enterprise needs and policies). This scoring and decision-aiding capability supports the process by which MSFC application investments are realigned or retired from the application portfolio. The MARS Architecture is a multi-phase effort to: (1) conduct strategic architecture planning and knowledge development based on the DoDAF V2.0 six-step methodology, (2) describe one architecture through multiple viewpoints, (3) conduct portfolio analyses based on a defined operational concept, and (4) enable a new capability to support the MSFC enterprise IT management mission, vision, and goals. This report documents Phase 1 (Strategy and Design), which includes discovery, planning, and development of initial architecture viewpoints. Phase 2 will move forward the process of building the architecture, widening the scope to include application realignment (in addition to application retirement), and validating the underlying architecture logic before moving into Phase 3. The MARS Architecture key stakeholders are most

  16. Framework for Architecture Trade Study Using MBSE and Performance Simulation

    Science.gov (United States)

    Ryan, Jessica; Sarkani, Shahram; Mazzuchi, Thomas

    2012-01-01

    Increasing complexity in modern systems as well as cost and schedule constraints require a new paradigm of system engineering to fulfill stakeholder needs. Challenges facing efficient trade studies include poor tool interoperability, lack of simulation coordination (design parameters) and requirements flowdown. A recent trend toward Model Based System Engineering (MBSE) includes flexible architecture definition, program documentation, requirements traceability and system engineering reuse. As a new domain MBSE still lacks governing standards and commonly accepted frameworks. This paper proposes a framework for efficient architecture definition using MBSE in conjunction with Domain Specific simulation to evaluate trade studies. A general framework is provided followed with a specific example including a method for designing a trade study, defining candidate architectures, planning simulations to fulfill requirements and finally a weighted decision analysis to optimize system objectives.
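    As a minimal illustration of the weighted decision analysis step mentioned above (not the authors' actual scoring model), candidate architectures can be ranked with a weighted sum of normalized criterion scores:

```python
def weighted_score(scores, weights):
    """Weighted-sum ranking: scores[name][criterion] is a normalized score in [0, 1]."""
    return {name: sum(weights[c] * s for c, s in crit.items())
            for name, crit in scores.items()}

weights = {"performance": 0.5, "cost": 0.3, "risk": 0.2}        # placeholder weights
scores = {
    "arch_A": {"performance": 0.9, "cost": 0.4, "risk": 0.8},
    "arch_B": {"performance": 0.7, "cost": 0.8, "risk": 0.6},
}
ranking = sorted(weighted_score(scores, weights).items(), key=lambda kv: -kv[1])
print(ranking)   # arch_A (0.73) ahead of arch_B (0.71) under these weights
```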

  17. High Performance Systolic Array Core Architecture Design for DNA Sequencer

    Directory of Open Access Journals (Sweden)

    Saiful Nurdin Dayana

    2018-01-01

    This paper presents a high performance systolic array (SA) core architecture design for a Deoxyribonucleic Acid (DNA) sequencer. The core implements the affine gap penalty score Smith-Waterman (SW) algorithm. This time-consuming local alignment algorithm guarantees optimal alignment between DNA sequences, but it requires quadratic computation time when performed on standard desktop computers. The use of a linear SA decreases the time complexity from quadratic to linear. In addition, with the exponential growth of DNA databases, the SA architecture is used to overcome the timing issue. In this work, the SW algorithm has been captured using the Verilog Hardware Description Language (HDL) and simulated using the Xilinx ISIM simulator. The proposed design has been implemented in a Xilinx Virtex-6 Field Programmable Gate Array (FPGA) and achieves a 90% reduction in core area.
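    For reference, the affine-gap Smith-Waterman recurrences that each systolic-array cell evaluates can be sketched in software as follows (Gotoh formulation); the scoring parameters are placeholders, and this plain-Python version only shows the arithmetic, not the hardware design.

```python
def smith_waterman_affine(a, b, match=2, mismatch=-1, gap_open=-2, gap_extend=-1):
    """Affine-gap local alignment: returns the best local alignment score between a and b."""
    n, m = len(a), len(b)
    NEG = float("-inf")
    H = [[0.0] * (m + 1) for _ in range(n + 1)]   # best score of an alignment ending at (i, j)
    E = [[NEG] * (m + 1) for _ in range(n + 1)]   # best score ending with a gap in sequence a
    F = [[NEG] * (m + 1) for _ in range(n + 1)]   # best score ending with a gap in sequence b
    best = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            E[i][j] = max(E[i][j - 1] + gap_extend, H[i][j - 1] + gap_open)
            F[i][j] = max(F[i - 1][j] + gap_extend, H[i - 1][j] + gap_open)
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0.0, H[i - 1][j - 1] + s, E[i][j], F[i][j])
            best = max(best, H[i][j])
    return best

print(smith_waterman_affine("ACACACTA", "AGCACACA"))
```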

  18. Changes in muscle architecture and performance during a competitive season in female softball players.

    Science.gov (United States)

    Nimphius, Sophia; McGuigan, Michael R; Newton, Robert U

    2012-10-01

    The purpose of this research was (a) to examine the performance changes that occur in elite female softball players during 20 weeks of softball training (which included 14 weeks of periodized resistance training [RT]) and (b) to examine the relationship between percent change (%change) in muscle architecture variables and %change in strength, speed, and change of direction performance. Ten female softball players (age = 18.1 ± 1.6 years, height = 166.5 ± 8.9 cm, weight = 72.4 ± 10.8 kg) from a state Australian Institute of Sport softball team were tested for maximal lower-body strength using a 3 repetition maximum to predict the 1 repetition maximum (1RM); peak force, peak velocity (PV), and peak power (PP) were measured during unloaded and loaded jump squats (JS). In addition, first base (1B) and second base (2B) sprint performance, change of direction (505) on the dominant (D) and nondominant (ND) sides, aerobic capacity, and muscle architecture characteristics of the vastus lateralis (VL), including muscle thickness (MT), fascicle length (FL), and pennation angle (θp), were examined. The testing sessions occurred pre, mid, and post training (a total 20-week pre- and in-season training period). Changes over time were analyzed by repeated-measures analysis of variance. The relationship between %change in muscle architecture variables and strength, speed, and change of direction variables from pre to post was assessed by Pearson product-moment correlation coefficient. Significant improvements in PV and PP occurred at all JS loads from pre- to mid-testing and from pre- to post-testing. Significant increases occurred pre-post in absolute 1RM, relative 1RM, 505 ND, and 2B sprint. The strongest relationships were found between %change in VL MT and 1B sprint (r = -0.80, p = 0.06), %change in VL FL and 2B sprint (r = -0.835, p = 0.02), and %change in relative 1RM and 505 D (r = -0.70, p = 0.04). In conclusion, gains in strength, power, and performance can occur during the

  19. Self-Healing Many-Core Architecture: Analysis and Evaluation

    Directory of Open Access Journals (Sweden)

    Arezoo Kamran

    2016-01-01

    More pronounced aging effects, more frequent early-life failures, and incomplete testing and verification processes due to time-to-market pressure in new fabrication technologies impose reliability challenges on forthcoming systems. A promising solution to these reliability challenges is self-test and self-reconfiguration with no or limited external control. In this work a scalable self-test mechanism for periodic online testing of a many-core processor is proposed. This test mechanism facilitates autonomous detection and omission of faulty cores and makes graceful degradation of the many-core architecture possible. Several test components are incorporated in the many-core architecture that distribute test stimuli, suspend normal operation of individual processing cores, apply the test, and detect faulty cores. The test is performed concurrently with the system's normal operation, without any noticeable downtime at the application level. Experimental results show that the proposed test architecture is extensively scalable in terms of hardware overhead and performance overhead, which makes it applicable to many-cores with more than a thousand processing cores.

  20. WDM packet switch architectures and analysis of the influence of tunable wavelength converters on the performance

    DEFF Research Database (Denmark)

    Danielsen, Søren Lykke; Mikkelsen, Benny; Jørgensen, Carsten

    1997-01-01

    A detailed analytical traffic model for a photonic wavelength division multiplexing (WDM) packet switch block is presented and the requirements on the buffer size are analyzed. Three different switch architectures are considered, each of them representing a different complexity in terms of components... i.e., the possibility of several outlets sharing the same physical buffer. For the three architectures presented here, a tradeoff in the buffer architectures is addressed: a buffer physically shared among all outlets requires many wavelengths internally in the switch block, whereas architectures with buffers dedicated...

  1. A systematic review and meta-analysis of sleep architecture and chronic traumatic brain injury.

    Science.gov (United States)

    Mantua, Janna; Grillakis, Antigone; Mahfouz, Sanaa H; Taylor, Maura R; Brager, Allison J; Yarnell, Angela M; Balkin, Thomas J; Capaldi, Vincent F; Simonelli, Guido

    2018-02-02

    Sleep quality appears to be altered by traumatic brain injury (TBI). However, whether persistent post-injury changes in sleep architecture are present is unknown and relatively unexplored. We conducted a systematic review and meta-analysis to assess the extent to which chronic TBI (>6 months since injury) is characterized by changes to sleep architecture. We also explored the relationship between sleep architecture and TBI severity. In the fourteen included studies, sleep was assessed with at least one night of polysomnography in both chronic TBI participants and controls. Statistical analyses, performed using Comprehensive Meta-Analysis software, revealed that chronic TBI is characterized by relatively increased slow wave sleep (SWS). A meta-regression showed moderate-severe TBI is associated with elevated SWS, reduced stage 2, and reduced sleep efficiency. In contrast, mild TBI was not associated with any significant alteration of sleep architecture. The present findings are consistent with the hypothesis that increased SWS after moderate-severe TBI reflects post-injury cortical reorganization and restructuring. Suggestions for future research are discussed, including adoption of common data elements in future studies to facilitate cross-study comparability, reliability, and replicability, thereby increasing the likelihood that meaningful sleep (and other) biomarkers of TBI will be identified. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. A relational approach to support software architecture analysis

    NARCIS (Netherlands)

    Feijs, L.M.G.; Krikhaar, R.L.; van Ommering, R.C.

    1998-01-01

    This paper reports on our experience with a relational approach to support the analysis of existing software architectures. The analysis options provide for visualization and view calculation. The approach has been applied for reverse engineering. It is also possible to check concrete designs
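
    The relation calculus used in the paper is not included in the record; the sketch below illustrates the general idea, representing an extracted "uses" relation between components as a set of pairs and deriving a reachability view via transitive closure (the component names are hypothetical):

```python
def compose(r, s):
    """Relational composition: (a, c) whenever (a, b) in r and (b, c) in s."""
    by_first = {}
    for b, c in s:
        by_first.setdefault(b, set()).add(c)
    return {(a, c) for a, b in r for c in by_first.get(b, ())}

def transitive_closure(r):
    """Smallest transitive relation containing r (a reachability view)."""
    closure = set(r)
    while True:
        new = compose(closure, r) - closure
        if not new:
            return closure
        closure |= new

# Hypothetical extracted 'uses' relation between components
uses = {("gui", "logic"), ("logic", "db"), ("logic", "net")}
print(sorted(transitive_closure(uses)))  # gui transitively uses db and net
```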

  3. Staggered Dslash Performance on Intel Xeon Phi Architecture

    OpenAIRE

    Li, Ruizi; Gottlieb, Steven

    2014-01-01

    The conjugate gradient (CG) algorithm is among the most essential and time consuming parts of lattice calculations with staggered quarks. We test the performance of CG and dslash, the key step in the CG algorithm, on the Intel Xeon Phi, also known as the Many Integrated Core (MIC) architecture. We try different parallelization strategies using MPI, OpenMP, and the vector processing units (VPUs).
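
    For readers unfamiliar with the kernel being benchmarked, here is a generic textbook conjugate gradient iteration in NumPy; the actual staggered dslash operator, its even-odd structure, and the MIC-specific vectorization studied in the paper are far more involved and are not modeled here:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A (textbook CG)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p                      # in lattice QCD this matrix-vector product is built from dslash
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small random SPD test problem
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b, atol=1e-6))
```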

  4. Researching on knowledge architecture of design by analysis based on ASME code

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan

    2003-01-01

    The quality of a knowledge-based system's knowledge architecture is one of the decisive factors in the system's validity and rationality. For designing the ASME code knowledge-based system, this paper presents a knowledge acquisition method that extracts knowledge through document analysis combined with consultation of domain experts. The paper then describes the knowledge architecture of design by analysis based on the related rules in the ASME code. The knowledge of the knowledge architecture is divided into two categories: empirical knowledge and ASME code knowledge. Serving as the basis of the knowledge architecture, a general procedural process of design by analysis that meets the engineering design requirements and designers' conventional mode of work is generalized and explained in detail in the paper. For the sake of improving inference efficiency and concurrent computation of the KBS, a kind of knowledge Petri net (KPN) model is proposed and adopted for expressing the knowledge architecture. Furthermore, for validating and verifying the empirical rules, five knowledge validation and verification theorems are given in the paper. Moreover, the research product is applicable to designing the knowledge architecture of ASME codes or other engineering standards. (author)

  5. Sprinting performance on the Woodway Curve 3.0 is related to muscle architecture.

    Science.gov (United States)

    Mangine, Gerald T; Fukuda, David H; Townsend, Jeremy R; Wells, Adam J; Gonzalez, Adam M; Jajtner, Adam R; Bohner, Jonathan D; LaMonica, Michael; Hoffman, Jay R; Fragala, Maren S; Stout, Jeffrey R

    2015-01-01

    To determine if unilateral measures of muscle architecture in the rectus femoris (RF) and vastus lateralis (VL) were related to (and predictive of) sprinting speed and unilateral (and bilateral) force (FRC) and power (POW) during a 30 s maximal sprint on the Woodway Curve 3.0 non-motorized treadmill. Twenty-eight healthy, physically active men (n = 14) and women (n = 14) (age = 22.9 ± 2.4 years; body mass = 77.1 ± 16.2 kg; height = 171.6 ± 11.2 cm; body-fat = 19.4 ± 8.1%) completed one familiarization and one 30-s maximal sprint on the TM to obtain maximal sprinting speed, POW and FRC. Muscle thickness (MT), cross-sectional area (CSA) and echo intensity (ECHO) of the RF and VL in the dominant (DOM; determined by unilateral sprinting power) and non-dominant (ND) legs were measured via ultrasound. Pearson correlations indicated several significant relationships between sprinting performance and muscle architecture. Stepwise regression indicated that POW(DOM) was predictive of ipsilateral RF (MT and CSA) and VL (CSA and ECHO), while POW(ND) was predictive of ipsilateral RF (MT and CSA) and VL (CSA); sprinting power/force asymmetry was not predictive of architecture asymmetry. Sprinting time was best predicted by peak power and peak force, though muscle quality (ECHO) and the bilateral percent difference in VL (CSA) were strong architectural predictors. Muscle architecture is related to (and predictive of) TM sprinting performance, while unilateral POW is predictive of ipsilateral architecture. However, the extent to which architecture and other factors (i.e. neuromuscular control and sprinting technique) affect TM performance remains unknown.

  6. Automated Improvement of Software Architecture Models for Performance and Other Quality Attributes

    OpenAIRE

    Koziolek, Anne

    2013-01-01

    Quality attributes, such as performance or reliability, are crucial for the success of a software system and largely influenced by the software architecture. Their quantitative prediction supports systematic, goal-oriented software design and forms a base of an engineering approach to software design. This thesis proposes a method and tool to automatically improve component-based software architecture (CBA) models based on such quantitative quality prediction techniques.

  7. A coherent and non-invasive open analysis architecture and framework with applications in CMS

    International Nuclear Information System (INIS)

    Alverson, G.; Osborne, I.; Taylor, L.; Tuura, L.A.

    2001-01-01

    The CMS IGUANA project has implemented an open analysis architecture that enables the creation of an integrated analysis environment. In this 'analysis desktop' environment a physicist is able to perform most analysis-related tasks, not just the presentation and visualisation steps usually associated with analysis tools. The motivation behind IGUANA's approach is that physics analysis includes much more than just the visualisation and data presentation. Many factors contribute to the increasing importance of making analysis and visualisation software an integral part of the experiment's software: object oriented and ever more advanced data models, GRID, and automated hierarchical storage management systems to name just a few. At the same time the analysis toolkits should be modular and non-invasive to be usable in different contexts within one experiment and generally across experiments. Ideally the analysis environment would appear to be perfectly customised to the experiment and the context, but would mostly consist of generic components. The authors describe how the IGUANA project is addressing these issues and present both the architecture and examples of how different aspects of analysis appear to the users and the developers.

  8. Cost and performance analysis of physical security systems

    International Nuclear Information System (INIS)

    Hicks, M.J.; Yates, D.; Jago, W.H.

    1997-01-01

    CPA - Cost and Performance Analysis - is a prototype integration of existing PC-based cost and performance analysis tools: ACEIT (Automated Cost Estimating Integrated Tools) and ASSESS (Analytic System and Software for Evaluating Safeguards and Security). ACE is an existing DOD PC-based tool that supports cost analysis over the full life cycle of a system; that is, the cost to procure, operate, maintain and retire the system and all of its components. ASSESS is an existing DOE PC-based tool for analysis of performance of physical protection systems. Through CPA, the cost and performance data are collected into Excel workbooks, making the data readily available to analysts and decision makers in both tabular and graphical formats and at both the system and subsystem levels. The structure of the cost spreadsheets incorporates an activity-based approach to cost estimation. Activity-based costing (ABC) is an accounting philosophy used by industry to trace direct and indirect costs to the products or services of a business unit. By tracing costs through security sensors and procedures and then mapping the contributions of the various sensors and procedures to system effectiveness, the CPA architecture can provide security managers with information critical for both operational and strategic decisions. The architecture, features and applications of the CPA prototype are presented. 5 refs., 3 figs
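
    The CPA workbooks themselves are not shown in the record. As a toy, hedged illustration of the activity-based costing idea described above (all sensors, activities, cost shares, and dollar figures are hypothetical), the sketch below traces resource costs to security activities and reports a cost-per-effectiveness view:

```python
# Hypothetical activity-based costing rollup: trace annual costs of
# sensors/procedures to the detection activities they support, then
# report cost per unit of assessed effectiveness contribution.
costs = {                      # annual cost of each resource (hypothetical $)
    "perimeter_sensor": 120_000,
    "cctv": 80_000,
    "guard_patrol": 300_000,
}
activity_share = {             # fraction of each resource consumed by each activity
    "detect_intrusion": {"perimeter_sensor": 1.0, "cctv": 0.6, "guard_patrol": 0.4},
    "assess_alarm":     {"cctv": 0.4, "guard_patrol": 0.6},
}
effectiveness = {"detect_intrusion": 0.85, "assess_alarm": 0.90}  # hypothetical scores

for activity, shares in activity_share.items():
    activity_cost = sum(costs[r] * f for r, f in shares.items())
    print(f"{activity}: ${activity_cost:,.0f} "
          f"(${activity_cost / effectiveness[activity]:,.0f} per effectiveness unit)")
```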

  9. SCinet Architecture: Featured at the International Conference for High Performance Computing,Networking, Storage and Analysis 2016

    Energy Technology Data Exchange (ETDEWEB)

    Lyonnais, Marc; Smith, Matt; Mace, Kate P.

    2017-02-06

    SCinet is the purpose-built network that operates during the International Conference for High Performance Computing,Networking, Storage and Analysis (Super Computing or SC). Created each year for the conference, SCinet brings to life a high-capacity network that supports applications and experiments that are a hallmark of the SC conference. The network links the convention center to research and commercial networks around the world. This resource serves as a platform for exhibitors to demonstrate the advanced computing resources of their home institutions and elsewhere by supporting a wide variety of applications. Volunteers from academia, government and industry work together to design and deliver the SCinet infrastructure. Industry vendors and carriers donate millions of dollars in equipment and services needed to build and support the local and wide area networks. Planning begins more than a year in advance of each SC conference and culminates in a high intensity installation in the days leading up to the conference. The SCinet architecture for SC16 illustrates a dramatic increase in participation from the vendor community, particularly those that focus on network equipment. Software-Defined Networking (SDN) and Data Center Networking (DCN) are present in nearly all aspects of the design.

  10. Reliability analysis of multicellular system architectures for low-cost satellites

    Science.gov (United States)

    Erlank, A. O.; Bridges, C. P.

    2018-06-01

    Multicellular system architectures are proposed as a solution to the problem of low reliability currently seen amongst small, low cost satellites. In a multicellular architecture, a set of independent k-out-of-n systems mimic the cells of a biological organism. In order to be beneficial, a multicellular architecture must provide more reliability per unit of overhead than traditional forms of redundancy. The overheads include power consumption, volume and mass. This paper describes the derivation of an analytical model for predicting a multicellular system's lifetime. The performance of such architectures is compared against that of several common forms of redundancy and proven to be beneficial under certain circumstances. In addition, the problem of peripheral interfaces and cross-strapping is investigated using a purpose-developed, multicellular simulation environment. Finally, two case studies are presented based on a prototype cell implementation, which demonstrate the feasibility of the proposed architecture.
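
    The paper's analytical lifetime model is not reproduced here, but it rests on the textbook k-out-of-n reliability calculation, which the following sketch evaluates for independent cells with exponential lifetimes (cell counts and failure rates are hypothetical):

```python
from math import comb, exp

def k_out_of_n_reliability(k, n, p):
    """Probability that at least k of n independent cells survive,
    given per-cell survival probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def cell_survival(t_years, failure_rate_per_year):
    """Exponential lifetime model for a single cell."""
    return exp(-failure_rate_per_year * t_years)

# Hypothetical example: the mission needs 3 working cells out of 5,
# each with a 10% annual failure rate.
for t in (1, 3, 5):
    p = cell_survival(t, 0.10)
    print(f"t={t} y  single cell={p:.3f}  3-out-of-5 system={k_out_of_n_reliability(3, 5, p):.3f}")
```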

  11. Performance Aided Design

    DEFF Research Database (Denmark)

    Parigi, Dario

    2014-01-01

    paradigm where the increasing integration of parametric tools and performative analysis is changing the way we learn and design. The term Performance Aided Architectural Design (PAD) is proposed at the Master of Science of Architecture and Design at Aalborg University, with the aim of extending a tectonic...... tradition of architecture with computational tools, preparing the basis for the creation of the figure of a modern master builder, sitting at the boundary of the disciplines of architecture and engineering. Performance Aided Design focuses on the role of performative analysis, embedded tectonics......, and computational methods and tools to trigger creativity and innovative understanding of the relation between form, material and an increasingly wide range of performances in architectural design. The ultimate goal is to pursue a design approach that aims at embracing rather than excluding the complexity implicit...

  12. Architecture-Based Reliability Analysis of Web Services

    Science.gov (United States)

    Rahmani, Cobra Mariam

    2012-01-01

    In a Service Oriented Architecture (SOA), the hierarchical complexity of Web Services (WS) and their interactions with the underlying Application Server (AS) create new challenges in providing a realistic estimate of WS performance and reliability. The current approaches often treat the entire WS environment as a black-box. Thus, the sensitivity…

  13. Improving the energy performance of historic buildings with architectural and cultural values

    DEFF Research Database (Denmark)

    Hansen, Ernst Jan de Place

    2017-01-01

    The thermal performance of solid walls of historic buildings can be improved by external or internal insulation. External insulation is preferred from a technical perspective, but is often disregarded as many such buildings have architectural or cultural values leaving internal insulation... e.g. improvement of thermal indoor climate. The paper discusses different motivating factors for improving the thermal performance of solid walls in historic buildings with architectural and cultural values. It is argued that internal insulation, provided that it can be done without resulting in critical moisture...... as the only possible solution. As internal insulation is considered a risky way of improving the thermal performance from a moisture perspective, technically feasible solutions are needed. Further, other arguments than energy saving could convince a building owner to carry out internal insulation, e...

  14. Integrating Sound Scattering Measurements in the Design of Complex Architectural Surfaces

    DEFF Research Database (Denmark)

    Peters, Brady

    2010-01-01

    Digital tools present the opportunity for incorporating performance analysis into the architectural design process. Acoustic performance is an important criterion for architectural design. There is much known about sound absorption but little about sound scattering, even though scattering is reco...

  15. Combining Performance and Flexibility for RMS with a Hybrid Architecture

    NARCIS (Netherlands)

    Dennis Koole; Arjan Groenewegen; Daniël Telgen; Patrick Wit; Leo van Moergestel; Arjan van Zanten; John-Jules Meyer; Ing. Erik Puik; Dick van der Steen; Pascal Muller

    2013-01-01

    Author supplied: Combining Performance and Flexibility for RMS with a Hybrid Architecture. Daniël Telgen, Leo van Moergestel, Erik Puik, Pascal Muller, Arjan Groenewegen, Dick van der Steen, Dennis Koole, Patrick de Wit, Arjen van Zanten, and John-Jules Meyer.

  16. Analysis for Parallel Execution without Performing Hardware/Software Co-simulation

    OpenAIRE

    Muhammad Rashid

    2014-01-01

    Hardware/software co-simulation improves the performance of embedded applications by executing the applications on a virtual platform before the actual hardware is available in silicon. However, the virtual platform of the target architecture is often not available during early stages of the embedded design flow. Consequently, analysis for parallel execution without performing hardware/software co-simulation is required. This article presents an analysis methodology for parallel execution of ...

  17. Behavioral Simulation and Performance Evaluation of Multi-Processor Architectures

    Directory of Open Access Journals (Sweden)

    Ausif Mahmood

    1996-01-01

    Full Text Available The development of multi-processor architectures requires extensive behavioral simulations to verify the correctness of design and to evaluate its performance. A high level language can provide maximum flexibility in this respect if the constructs for handling concurrent processes and a time mapping mechanism are added. This paper describes a novel technique for emulating hardware processes involved in a parallel architecture such that an object-oriented description of the design is maintained. The communication and synchronization between hardware processes is handled by splitting the processes into their equivalent subprograms at the entry points. The proper scheduling of these subprograms is coordinated by a timing wheel which provides a time mapping mechanism. Finally, a high level language pre-processor is proposed so that the timing wheel and the process emulation details can be made transparent to the user.
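
    The pre-processor described above is not available in this record; the sketch below is a minimal illustration of the timing-wheel idea, scheduling process "subprograms" (plain callables) into discrete time slots so that their continuations run at the intended simulated times:

```python
class TimingWheel:
    """Minimal discrete-time timing wheel: callables are scheduled a given
    number of ticks in the future and run when the wheel reaches their slot."""

    def __init__(self, n_slots=64):
        self.slots = [[] for _ in range(n_slots)]
        self.now = 0

    def schedule(self, delay, fn):
        self.slots[(self.now + delay) % len(self.slots)].append(fn)

    def tick(self):
        idx = self.now % len(self.slots)
        ready, self.slots[idx] = self.slots[idx], []
        for fn in ready:
            fn(self)            # a subprogram may schedule its own continuation
        self.now += 1

# Two emulated hardware processes exchanging a token with a 3-tick latency
def producer(wheel):
    print(f"t={wheel.now}: producer sends token")
    wheel.schedule(3, consumer)

def consumer(wheel):
    print(f"t={wheel.now}: consumer receives token")

wheel = TimingWheel()
wheel.schedule(0, producer)
for _ in range(6):
    wheel.tick()
```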

  18. An Integrated Architecture for On-Board Aircraft Engine Performance Trend Monitoring and Gas Path Fault Diagnostics

    Science.gov (United States)

    Simon, Donald L.

    2010-01-01

    Aircraft engine performance trend monitoring and gas path fault diagnostics are closely related technologies that assist operators in managing the health of their gas turbine engine assets. Trend monitoring is the process of monitoring the gradual performance change that an aircraft engine will naturally incur over time due to turbomachinery deterioration, while gas path diagnostics is the process of detecting and isolating the occurrence of any faults impacting engine flow-path performance. Today, performance trend monitoring and gas path fault diagnostic functions are performed by a combination of on-board and off-board strategies. On-board engine control computers contain logic that monitors for anomalous engine operation in real-time. Off-board ground stations are used to conduct fleet-wide engine trend monitoring and fault diagnostics based on data collected from each engine each flight. Continuing advances in avionics are enabling the migration of portions of the ground-based functionality on-board, giving rise to more sophisticated on-board engine health management capabilities. This paper reviews the conventional engine performance trend monitoring and gas path fault diagnostic architecture commonly applied today, and presents a proposed enhanced on-board architecture for future applications. The enhanced architecture gains real-time access to an expanded quantity of engine parameters, and provides advanced on-board model-based estimation capabilities. The benefits of the enhanced architecture include the real-time continuous monitoring of engine health, the early diagnosis of fault conditions, and the estimation of unmeasured engine performance parameters. A future vision to advance the enhanced architecture is also presented and discussed

  19. Management of microbial community composition, architecture and performance in autotrophic nitrogen removing bioreactors through aeration regimes

    DEFF Research Database (Denmark)

    Mutlu, A. Gizem

    to describe aggregation and architectural evolution in nitritation/anammox reactors, incorporating the possible influences of intermediates formed with intermittent aeration. Community analysis revealed an abundant fraction of heterotrophic types despite the absence of organic carbon in the feed. The aerobic...... and anaerobic ammonia oxidizing guilds were dominated by fast-growing Nitrosomonas spp. and Ca. Brocadia spp., while the nitrite oxidizing guild was dominated by high affinity Nitrospira spp. Emission of nitrous oxide (N2O) was evaluated from both reactors under dynamic aeration regimes. Contrary to the widely...... impacts could be isolated, increasing process understanding. It was demonstrated that aeration strategy can be used as a powerful tool to manipulate the microbial community composition, its architecture and reactor performance. We suggest operation via intermittent aeration with short aerated periods...

  20. A performance requirements analysis of the SSC control system

    International Nuclear Information System (INIS)

    Hunt, S.M.; Low, K.

    1992-01-01

    This paper presents the results of analysis of the performance requirements of the Superconducting Super Collider Control System. We quantify the performance requirements of the system in terms of response time, throughput and reliability. We then examine the effect of distance and traffic patterns on control system performance and examine how these factors influence the implementation of the control network architecture and compare the proposed system against those criteria. (author)

  1. Building and measuring a high performance network architecture

    Energy Technology Data Exchange (ETDEWEB)

    Kramer, William T.C.; Toole, Timothy; Fisher, Chuck; Dugan, Jon; Wheeler, David; Wing, William R; Nickless, William; Goddard, Gregory; Corbato, Steven; Love, E. Paul; Daspit, Paul; Edwards, Hal; Mercer, Linden; Koester, David; Decina, Basil; Dart, Eli; Paul Reisinger, Paul; Kurihara, Riki; Zekauskas, Matthew J; Plesset, Eric; Wulf, Julie; Luce, Douglas; Rogers, James; Duncan, Rex; Mauth, Jeffery

    2001-04-20

    Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest performance networks in the world. At SC2000, large-scale and complex local and wide area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. This testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high performance networking technologies and to accumulate measurements that will give insights into the networks of the future.

  2. High-performance Sonitopia (Sonic Utopia): Hyper intelligent Material-based Architectural Systems for Acoustic Energy Harvesting

    Science.gov (United States)

    Heidari, F.; Mahdavinejad, M.

    2017-08-01

    The rate of energy consumption all over the world, based on reliable statistics from international institutions such as the International Energy Agency (IEA), shows a significant increase in energy demand in recent years. Periodically recorded data show a continuously increasing trend in energy consumption, especially in developed countries as well as recently emerged developing economies such as China and India. Air pollution and water contamination, as results of the high consumption of fossil energy resources, can be considered a menace to civic ideals such as livability, conviviality and people-oriented cities. On the other hand, automobile dependency, car-oriented design and other noisy activities in urban spaces are considered threats to urban life. Thus contemporary urban design and planning concentrates on rethinking the ecology of sound, reorganizing the soundscape of neighborhoods, and redesigning the sonic order of urban space. It seems that contemporary architecture and planning trends, through soundscape mapping, look for sonitopia (Sonic + Utopia). This paper proposes some interactive hyper intelligent material-based architectural systems for acoustic energy harvesting. The proposed architectural design system may result in high-performance architecture and planning strategies for future cities. The ultimate aim of the research is to develop a comprehensive system for acoustic energy harvesting which covers the aim of noise reduction as well as being in harmony with architectural design. The research methodology is based on a literature review as well as experimental and quasi-experimental strategies according to the paradigm of designerly ways of doing and knowing. While architectural design has a solution-focused essence in the problem-solving process, the proposed systems had better be hyper intelligent rather than predefined procedures. Therefore, the steps of the inference mechanism of the research include: 1- understanding sonic energy and noise potentials as energy

  3. Rapid architecture alternative modeling (RAAM): A framework for capability-based analysis of system of systems architectures

    Science.gov (United States)

    Iacobucci, Joseph V.

    The research objective for this manuscript is to develop a Rapid Architecture Alternative Modeling (RAAM) methodology to enable traceable Pre-Milestone A decision making during the conceptual phase of design of a system of systems. Rather than following current trends that place an emphasis on adding more analysis which tends to increase the complexity of the decision making problem, RAAM improves on current methods by reducing both runtime and model creation complexity. RAAM draws upon principles from computer science, system architecting, and domain specific languages to enable the automatic generation and evaluation of architecture alternatives. For example, both mission dependent and mission independent metrics are considered. Mission dependent metrics are determined by the performance of systems accomplishing a task, such as Probability of Success. In contrast, mission independent metrics, such as acquisition cost, are solely determined and influenced by the other systems in the portfolio. RAAM also leverages advances in parallel computing to significantly reduce runtime by defining executable models that are readily amenable to parallelization. This allows the use of cloud computing infrastructures such as Amazon's Elastic Compute Cloud and the PASTEC cluster operated by the Georgia Institute of Technology Research Institute (GTRI). Also, the amount of data that can be generated when fully exploring the design space can quickly exceed the typical capacity of computational resources at the analyst's disposal. To counter this, specific algorithms and techniques are employed. Streaming algorithms and recursive architecture alternative evaluation algorithms are used that reduce computer memory requirements. Lastly, a domain specific language is created to provide a reduction in the computational time of executing the system of systems models. A domain specific language is a small, usually declarative language that offers expressive power focused on a particular
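
    RAAM's actual domain-specific language and models are not reproduced in the record. As a hedged illustration of automatically generating and evaluating architecture alternatives with one mission-dependent and one mission-independent metric, the sketch below enumerates a small hypothetical design space (all system names and figures are invented):

```python
from itertools import product

# Hypothetical design space: each capability can be met by one of several systems,
# each with (probability of task success, acquisition cost in $M).
design_space = {
    "sensor":  {"sat_A": (0.90, 120), "uav_B": (0.80, 40)},
    "shooter": {"ship_C": (0.85, 300), "aircraft_D": (0.92, 450)},
    "network": {"link_E": (0.95, 25), "link_F": (0.99, 60)},
}

def evaluate(alternative):
    """Mission-dependent metric: probability all roles succeed (independence assumed).
    Mission-independent metric: total acquisition cost."""
    p_success, cost = 1.0, 0.0
    for role, choice in alternative.items():
        p, c = design_space[role][choice]
        p_success *= p
        cost += c
    return p_success, cost

alternatives = [dict(zip(design_space, combo))
                for combo in product(*(design_space[r] for r in design_space))]
for alt in sorted(alternatives, key=lambda a: -evaluate(a)[0]):
    p, c = evaluate(alt)
    print(f"P(success)={p:.3f}  cost=${c:.0f}M  {alt}")
```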

  4. Confabulation Based Real-time Anomaly Detection for Wide-area Surveillance Using Heterogeneous High Performance Computing Architecture

    Science.gov (United States)

    2015-06-01

    Report documentation page excerpt (performing organization: Syracuse...; contract number FA8750-12-1-0251): Confabulation based real-time anomaly detection for wide-area surveillance using a heterogeneous high performance computing architecture, with processors including graphics processor units (GPUs) and Intel Xeon Phi processors. Experimental results showed significant speedups, which can enable

  5. The role of FFM accumulation and skeletal muscle architecture in powerlifting performance.

    Science.gov (United States)

    Brechue, William F; Abe, Takashi

    2002-02-01

    The purpose of this study was to determine the distribution and architectural characteristics of skeletal muscle in elite powerlifters, and to investigate their relationship to fat-free mass (FFM) accumulation and powerlifting performance. Twenty elite male powerlifters (including four world and three US national champions) volunteered for this study. FFM, skeletal muscle distribution (muscle thickness at 13 anatomical sites), and isolated muscle thickness and fascicle pennation angle (PAN) of the triceps long-head (TL), vastus lateralis, and gastrocnemius medialis (MG) muscles were measured with B-mode ultrasound. Fascicle length (FAL) was calculated. Best lifting performance in the bench press (BP), squat lift (SQT), and dead lift (DL) was recorded from competition performance. Significant correlations were found between lifting performance and both FFM and FFM relative to standing height (r = 0.86 to 0.95), and muscle dimensions were related to FFM (r = 0.59). Powerlifting performance was closely related to FFM and, therefore, may be limited by the ability to accumulate FFM. Additionally, muscle architecture appears to play an important role in powerlifting performance in that greater fascicle lengths are associated with greater FFM accumulation and powerlifting performance.

  6. Analysis of Employment Flow of Landscape Architecture Graduates in Agricultural Universities

    Science.gov (United States)

    Yao, Xia; He, Linchun

    2012-01-01

    A statistical analysis of the employment flow of landscape architecture graduates was conducted on the employment data of graduates majoring in landscape architecture from 2008 to 2011. The employment flows of graduates covered admission to graduate study, industry direction and regional distribution, etc. Then, the features of talent flow and factors…

  7. Fault tolerant architecture for artificial olfactory system

    International Nuclear Information System (INIS)

    Lotfivand, Nasser; Hamidon, Mohd Nizar; Abdolzadeh, Vida

    2015-01-01

    In this paper, to cover and mask the faults that occur in the sensing unit of an artificial olfactory system, a novel architecture is offered. The proposed architecture is able to tolerate failures in the sensors of the array and the faults that occur are masked. The proposed architecture for extracting the correct results from the output of the sensors can provide the quality of service for generated data from the sensor array. The results of various evaluations and analysis proved that the proposed architecture has acceptable performance in comparison with the classic form of the sensor array in gas identification. According to the results, achieving a high odor discrimination based on the suggested architecture is possible. (paper)

  8. High Performance Descriptive Semantic Analysis of Semantic Graph Databases

    Energy Technology Data Exchange (ETDEWEB)

    Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan; Feo, John T.; Haglin, David J.; Mackey, Greg E.; Mizell, David W.

    2011-06-02

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.
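
    As a small illustration of one of the analyses mentioned (connected components over an RDF-style graph), the sketch below runs a union-find pass over a toy set of triples; the Cray XMT implementation and the Billion Triple Challenge data set are, of course, at an entirely different scale:

```python
def connected_components(triples):
    """Union-find over the undirected graph induced by (subject, predicate, object) triples."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for s, _p, o in triples:
        union(s, o)

    comps = {}
    for node in parent:
        comps.setdefault(find(node), set()).add(node)
    return list(comps.values())

# Toy triples (hypothetical resources abbreviated to short names)
triples = [("alice", "knows", "bob"), ("bob", "worksAt", "acme"),
           ("carol", "knows", "dave")]
print(connected_components(triples))   # two components
```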

  9. Joint C4ISR Architecture Planning/Analysis System (JCAPS)

    National Research Council Canada - National Science Library

    Wostbrock, Bill

    2002-01-01

    The contractor satisfactorily completed all tasks under both efforts, providing the technology and technical expertise in the development of the Joint C4ISR Architecture Planning/Analysis System (JCAPS) Database Tool...

  10. Energy and architecture: improvement of energy performance in existing buildings

    Energy Technology Data Exchange (ETDEWEB)

    Haase, Matthias; Wycmans, Annemie; Solbraa, Anne; Grytli, Eir

    2011-07-01

    This book aims to give an overview of different aspects of retrofitting existing buildings. The target group is students of architecture and building engineering as well as building professionals. Eight out of ten buildings which we will inhabit in 2050 already exist. This means that a great potential for reducing our carbon footprint lies in the existing building stock. Students from NTNU have used the renovation of a 1950s school building at Linesoeya in Soer-Trondelag as a case to increase their awareness and knowledge about the challenges building professionals need to overcome to unite technical details and high user quality into good environmental performance. The students were invited by the building owners and initiators of LIPA Eco Project to contribute to its development: By retrofitting an existing building to passive house standards and combining this with energy generated on site, LIPA Eco Project aims to provide a hands-on example with regard to energy efficiency, architectural design and craftsmanship for a low carbon society. The overall goal for this project is to raise awareness regarding resource efficiency measures in architecture and particularly in existing building mass.(au)

  11. Model-based security analysis of the German health card architecture.

    Science.gov (United States)

    Jürjens, J; Rumm, R

    2008-01-01

    Health-care information systems are particularly security-critical. In order to make these applications secure, the security analysis has to be an integral part of the system design and IT management process for such systems. This work presents the experiences and results from the security analysis of the system architecture of the German Health Card, by making use of an approach to model-based security engineering that is based on the UML extension UMLsec. The focus lies on the security mechanisms and security policies of the smart-card-based architecture which were analyzed using the UMLsec method and tools. Main results of the paper include a report on the employment of the UMLsec method in an industrial health information systems context as well as indications of its benefits and limitations. In particular, two potential security weaknesses were detected and countermeasures discussed. The results indicate that it can be feasible to apply a model-based security analysis using UMLsec to an industrial health information system like the German Health Card architecture, and that doing so can have concrete benefits (such as discovering potential weaknesses, and an increased confidence that no further vulnerabilities of the kind that were considered are present).

  12. Software architecture 2

    CERN Document Server

    Oussalah, Mourad Chabanne

    2014-01-01

    Over the past 20 years, software architectures have significantly contributed to the development of complex and distributed systems. Nowadays, it is recognized that one of the critical problems in the design and development of any complex software system is its architecture, i.e. the organization of its architectural elements. Software Architecture presents the software architecture paradigms based on objects, components, services and models, as well as the various architectural techniques and methods, the analysis of architectural qualities, models of representation of architectural templa

  13. Software architecture 1

    CERN Document Server

    Oussalah , Mourad Chabane

    2014-01-01

    Over the past 20 years, software architectures have significantly contributed to the development of complex and distributed systems. Nowadays, it is recognized that one of the critical problems in the design and development of any complex software system is its architecture, i.e. the organization of its architectural elements. Software Architecture presents the software architecture paradigms based on objects, components, services and models, as well as the various architectural techniques and methods, the analysis of architectural qualities, models of representation of architectural template

  14. Performance anomaly detection in microservice architectures under continuous change

    OpenAIRE

    Düllmann, Thomas F.

    2017-01-01

    The idea of DevOps and agile approaches like Continuous Integration (CI) and microservice architectures are becoming more and more popular as the demand for flexible and scalable solutions is increasing. By raising the degree of automation and distribution, new challenges in terms of application performance monitoring arise because microservices are possibly short-lived and may be replaced within seconds. The fact that microservices are added and removed on a regular basis brings new requireme...

  15. Architectural communication: Intra and extra activity of architecture

    Directory of Open Access Journals (Sweden)

    Stamatović-Vučković Slavica

    2013-01-01

    Full Text Available Apart from a brief overview of architectural communication viewed from the standpoint of the theory of information and semiotics, this paper contains two forms of dualistically viewed architectural communication. The duality denotation/connotation (”primary” and ”secondary” architectural communication) is one of the semiotic postulates taken from Umberto Eco, who viewed architectural communication as a semiotic phenomenon. In addition, architectural communication can be viewed as an intra and an extra activity of architecture where the overall activity of the edifice performed through its spatial manifestation may be understood as an act of communication. In that respect, the activity may be perceived as the ”behavior of architecture”, which corresponds to Lefebvre’s production of space.

  16. Cost and performance analysis of conceptual designs of physical protection systems

    International Nuclear Information System (INIS)

    Hicks, M.J.; Snell, M.S.; Sandoval, J.S.; Potter, C.S.

    1998-01-01

    CPA -- Cost and Performance Analysis -- is a methodology that joins Activity Based Cost (ABC) estimation with performance based analysis of physical protection systems. CPA offers system managers an approach that supports both tactical decision making and strategic planning. Current exploratory applications of the CPA methodology are addressing analysis of alternative conceptual designs. To support these activities, the original architecture for CPA is being expanded to incorporate results from a suite of performance and consequence analysis tools such as JTS (Joint Tactical Simulation), ERAD (Explosive Release Atmospheric Dispersion) and blast effect models. The process flow for applying CPA to the development and analysis of conceptual designs is illustrated graphically.

  17. DOE's Institute for Advanced Architecture and Algorithms: An application-driven approach

    International Nuclear Information System (INIS)

    Murphy, Richard C

    2009-01-01

    This paper describes an application-driven methodology for understanding the impact of future architecture decisions at the end of the MPP era. Fundamental transistor device limitations combined with application performance characteristics have created the switch to multicore/multithreaded architectures. Designing large-scale supercomputers to match application demands is particularly challenging since performance characteristics are highly counter-intuitive. In fact, data movement more than FLOPS dominates. This work discusses some basic performance analysis for a set of DOE applications, the limits of CMOS technology, and the impact of both on future architectures.
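
    To make the point that data movement rather than FLOPS dominates concrete, here is a back-of-the-envelope arithmetic-intensity check for a simple triad-style kernel against hypothetical machine numbers (the figures are illustrative, not measurements from the paper):

```python
# Roofline-style sanity check for y[i] = a*x[i] + y[i] in double precision.
flops_per_elem = 2                        # one multiply and one add per element
bytes_per_elem = 3 * 8                    # read x[i], read y[i], write y[i]
intensity = flops_per_elem / bytes_per_elem            # ~0.083 FLOP/byte

peak_flops = 500e9                        # hypothetical 500 GFLOP/s node
mem_bw = 50e9                             # hypothetical 50 GB/s memory bandwidth
machine_balance = peak_flops / mem_bw     # FLOP/byte needed to stay compute-bound

attainable = min(peak_flops, intensity * mem_bw)
print(f"kernel intensity = {intensity:.3f} FLOP/byte, machine balance = {machine_balance:.1f} FLOP/byte")
print(f"attainable ~ {attainable / 1e9:.1f} GFLOP/s of {peak_flops / 1e9:.0f} GFLOP/s peak "
      f"(bandwidth-bound by a factor of {peak_flops / attainable:.0f}x)")
```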

  18. Building Information Modeling (BIM) for Indoor Environmental Performance Analysis

    DEFF Research Database (Denmark)

    The report is a part of a research assignment carried out by students in the 5 ECTS course “Project Byggeri – [entitled as: Building Information Modeling (BIM) – Modeling & Analysis]”, during the 3rd semester of the master's degree in Civil and Architectural Engineering, Department of Engineering, Aarhus University. This includes seven papers describing BIM for Sustainability, concentrating specifically on individual topics regarding Indoor Environment Performance Analysis....

  19. High performance integer arithmetic circuit design on FPGA architecture, implementation and design automation

    CERN Document Server

    Palchaudhuri, Ayan

    2016-01-01

    This book describes the optimized implementations of several arithmetic datapath, controlpath and pseudorandom sequence generator circuits for realization of high performance arithmetic circuits targeted towards a specific family of the high-end Field Programmable Gate Arrays (FPGAs). It explores regular, modular, cascadable, and bit-sliced architectures of these circuits, by directly instantiating the target FPGA-specific primitives in the HDL. Every proposed architecture is justified with detailed mathematical analyses. Simultaneously, constrained placement of the circuit building blocks is performed, by placing the logically related hardware primitives in close proximity to one another by supplying relevant placement constraints in the Xilinx proprietary “User Constraints File”. The book covers the implementation of a GUI-based CAD tool named FlexiCore integrated with the Xilinx Integrated Software Environment (ISE) for design automation of platform-specific high-performance arithmetic circuits from us...

  20. Use of communication architecture test bed to evaluate data network performance

    International Nuclear Information System (INIS)

    Clapp, N.E. Jr.; Swail, B.K.; Naser, J.A.

    1994-01-01

    Local area networks (LANs) are becoming more prevalent in nuclear power plants. Traditionally, LANs were only used as information highways, providing office automation services. LANs are now being used as data highways for applications in plant data acquisition and control systems. A communication architecture test bed, which contains network simulators, is needed to allow network performance studies and to resolve design issues prior to equipment purchase. Two levels of granularity of simulation are needed to provide the dynamic information about network performance. A coarse-grain simulator is used to estimate the dynamic performance of the network due to major resources such as workstations, gateways, and data acquisition systems. A fine-grain simulator allows a greater level of detail about the underlying network protocol and resources to be simulated. The combination of coarse-grain and fine-grain simulation packages provides the network designer with the required tools to thoroughly understand the behavior of the modeled network. This paper describes the development of a communication architecture test bed using commercial network simulation packages. Network simulators allow the resolution of major design issues in software without the expense of purchasing costly hardware components

  1. Dependability analysis of proposed I and C architecture for safety systems of a large PWR

    International Nuclear Information System (INIS)

    Kabra, Ashutosh; Karmakar, G.; Tiwari, A.P.; Manoj Kumar; Marathe, P.P.

    2014-01-01

    Instrumentation and Control (I and C) systems in a reactor provide protection against unsafe operation during steady-state and transient power operations. Indian reactors traditionally adopted the 2-out-of-3 (2oo3) architecture for safety systems. But contemporary reactor safety systems are employing the 2-out-of-4 (2oo4) architecture in spite of the increased cost due to the additional channel. This motivated us to carry out a comparative study of the 2oo3 and 2oo4 architectures, especially for their dependability attributes - safety and availability. Quantitative estimation of safety and availability has been used to adjudge the worthiness of adopting the 2oo4 architecture in I and C safety systems of a large PWR. Our analysis using a Markov model shows that the 2oo4 architecture, even with lower diagnostic coverage and a longer proof test interval, can provide better safety and availability in comparison with the 2oo3 architecture. This reduces the total life cycle cost of the system during the development phase and the complexity and frequency of surveillance tests during the operational phase. The paper also describes the proposed architecture for the Reactor Protection System (RPS), a representative safety system, and determines its dependability using Markov analysis and Failure Mode Effect Analysis (FMEA). The proposed I and C safety system architecture has also been qualitatively analyzed for its effectiveness against common cause failures (CCFs). (author)
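
    The Markov model and FMEA from the paper are not reproduced in this record. As a simplified, hedged comparison of the two voting schemes, the sketch below evaluates static binomial probabilities of failure on demand and of spurious trip for hypothetical per-channel failure probabilities; it already shows the safety gain of 2oo4, while the availability conclusion additionally depends on repair, proof testing, diagnostics, and channel bypass, which require the full Markov treatment:

```python
from math import comb

def p_at_least(k, n, p):
    """Probability that at least k of n independent channels are in a given state."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical per-channel probabilities over one proof-test interval
q_dangerous = 0.01    # channel fails such that it cannot vote for a trip
q_spurious = 0.005    # channel raises a spurious trip vote

for name, (m, n) in {"2oo3": (2, 3), "2oo4": (2, 4)}.items():
    # The system fails on demand when more than n - m channels are dead,
    # i.e. fewer than m healthy channels remain to form a trip vote.
    pfd = p_at_least(n - m + 1, n, q_dangerous)
    # A spurious plant trip needs at least m spurious channel votes.
    p_spurious_trip = p_at_least(m, n, q_spurious)
    print(f"{name}: P(fail on demand) ~ {pfd:.2e}, P(spurious trip) ~ {p_spurious_trip:.2e}")
```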

  2. Deep learning—Accelerating Next Generation Performance Analysis Systems?

    Directory of Open Access Journals (Sweden)

    Heike Brock

    2018-02-01

    Full Text Available Deep neural network architectures show superior performance in recognition and prediction tasks of the image, speech and natural language domains. The success of such multi-layered networks encourages their implementation in further application scenarios such as the retrieval of relevant motion information for performance enhancement in sports. However, to date, deep learning is only seldom applied to activity recognition problems of the human motion domain. Therefore, its use for sports data analysis might remain abstract to many practitioners. This paper provides a survey on recent works in the field of high-performance motion data and examines relevant technologies for subsequent deployment in real training systems. In particular, it discusses aspects of data acquisition, processing and network modeling. Analysis suggests the advantage of deep neural networks under difficult and noisy data conditions. However, further research is necessary to confirm the benefit of deep learning for next generation performance analysis systems.

  3. An assistive technology for hearing-impaired persons: analysis, requirements and architecture.

    Science.gov (United States)

    Mielke, Matthias; Grunewald, Armin; Bruck, Rainer

    2013-01-01

    In this contribution, a concept of an assistive technology for hearing-impaired and deaf persons is presented. The concept applies pattern recognition algorithms and makes use of modern communication technology to analyze the acoustic environment around a user, identify critical acoustic signatures and give an alert to the user when an event of interest happened. A detailed analysis of the needs of deaf and hearing-impaired people has been performed. Requirements for an adequate assisting device have been derived from the results of the analysis, and have been turned into an architecture for its implementation that will be presented in this article. The presented concept is the basis for an assistive system which is now under development at the Institute of Microsystem Engineering at the University of Siegen.

  4. High-performance full adder architecture in quantum-dot cellular automata

    Directory of Open Access Journals (Sweden)

    Hamid Rashidi

    2017-06-01

    Full Text Available Quantum-dot cellular automata (QCA is a new and promising computation paradigm, which can be a viable replacement for the complementary metal–oxide–semiconductor technology at nano-scale level. This technology provides a possible solution for improving the computation in various computational applications. Two QCA full adder architectures are presented and evaluated: a new and efficient 1-bit QCA full adder architecture and a 4-bit QCA ripple carry adder (RCA architecture. The proposed architectures are simulated using QCADesigner tool version 2.0.1. These architectures are implemented with the coplanar crossover approach. The simulation results show that the proposed 1-bit QCA full adder and 4-bit QCA RCA architectures utilise 33 and 175 QCA cells, respectively. Our simulation results show that the proposed architectures outperform most results so far in the literature.
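
    For context on what such an adder computes, here is a plain software model of the majority-gate formulation commonly used in QCA designs (the carry is a single three-input majority; the sum is shown as XOR, which QCA realizes with further majority and inverter gates), extended to a 4-bit ripple carry adder. Cell counts, clocking, and layout are the actual contribution of the paper and are not modeled here:

```python
def majority(a, b, c):
    """Three-input majority gate, the basic QCA logic primitive."""
    return (a & b) | (b & c) | (a & c)

def full_adder(a, b, cin):
    cout = majority(a, b, cin)          # carry out is a single majority gate in QCA
    s = a ^ b ^ cin                     # sum; realized in QCA with majority and inverter gates
    return s, cout

def ripple_carry_add(x, y, n_bits=4):
    """n-bit ripple carry adder built from the 1-bit cell above."""
    carry, result = 0, 0
    for i in range(n_bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry

# Exhaustive check against ordinary integer addition for all 4-bit inputs
assert all(ripple_carry_add(x, y) == ((x + y) & 0xF, (x + y) >> 4)
           for x in range(16) for y in range(16))
print(ripple_carry_add(9, 7))   # (0, 1): 9 + 7 = 16 overflows 4 bits
```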

  5. Architectural freedom and industrialized architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    to explain that architecture can be thought as a complex and diverse design through customization, telling exactly the revitalized storey about the change to a contemporary sustainable and better performing expression in direct relation to the given context. Through the last couple of years we have...... proportions, to organize the process on site choosing either one room wall components or several rooms wall components – either horizontally or vertically. Combined with the seamless joint the playing with these possibilities the new industrialized architecture can deliver variations in choice of solutions...... for retrofit design. If we add the question of the installations e.g. ventilation to this systematic thinking of building technique we get a diverse and functional architecture, thereby creating a new and clearer story telling about new and smart system based thinking behind architectural expression....

  6. Architectural freedom and industrialized architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    to explain that architecture can be thought as a complex and diverse design through customization, telling exactly the revitalized storey about the change to a contemporary sustainable and better performing expression in direct relation to the given context. Through the last couple of years we have...... expression in the specific housing area. It is the aim of this article to expand the different design strategies which architects can use – to give the individual project attitudes and designs with architectural quality. Through the customized component production it is possible to choose different...... for retrofit design. If we add the question of the installations e.g. ventilation to this systematic thinking of building technique we get a diverse and functional architecture, thereby creating a new and clearer story telling about new and smart system based thinking behind architectural expression....

  7. Architectural freedom and industrialised architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    Architectural freedom and industrialized architecture. Inge Vestergaard, Associate Professor, Cand. Arch. Aarhus School of Architecture, Denmark Noerreport 20, 8000 Aarhus C Telephone +45 89 36 0000 E-mail inge.vestergaard@aarch.dk Based on the repetitive architecture from the "building boom" 1960...... customization, telling exactly the revitalized storey about the change to a contemporary sustainable and better performed expression in direct relation to the given context. Through the last couple of years we have in Denmark been focusing a more sustainable and low energy building technique, which also include...... to the building physic problems a new industrialized period has started based on light weight elements basically made of wooden structures, faced with different suitable materials meant for individual expression for the specific housing area. It is the purpose of this article to widen up the different design...

  8. Hardware Architecture of Polyphase Filter Banks Performing Embedded Resampling for Software-Defined Radio Front-Ends

    DEFF Research Database (Denmark)

    Awan, Mehmood-Ur-Rehman; Le Moullec, Yannick; Koch, Peter

    2012-01-01

    In this paper, we describe resource-efficient hardware architectures for software-defined radio (SDR) front-ends. These architectures are made efficient by using a polyphase channelizer that performs arbitrary sample rate changes, frequency selection, and bandwidth control. We discuss area, time, and power optimization for field programmable gate array (FPGA) based architectures in an M-path polyphase filter bank with modified N-path polyphase filter. Such systems allow resampling by arbitrary ratios while simultaneously performing baseband aliasing from center frequencies at Nyquist zones that are not multiples of the output sample rate. A non-maximally decimated polyphase filter bank, where the number of data loads is not equal to the number of M subfilters, processes M subfilters in a time period that is either less than or greater than the M data-load’s time period. We present a load...
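
    The embedded-resampling channelizer itself is beyond a short sketch, but the polyphase decomposition it builds on can be illustrated directly. The NumPy/SciPy sketch below splits a prototype low-pass filter into M phases and checks that the polyphase decimator matches plain filter-then-downsample (the filter length and decimation factor are arbitrary choices):

```python
import numpy as np
from scipy.signal import firwin, lfilter

M = 4                                    # decimation factor / number of phases
h = firwin(64, 1.0 / M)                  # prototype low-pass filter (length is a multiple of M here)
x = np.random.default_rng(0).standard_normal(1024)

# Reference: filter at the input rate, then keep every M-th sample
reference = lfilter(h, 1.0, x)[::M]

# Polyphase: phase k of the filter is h[k], h[k+M], h[k+2M], ...
phases = h.reshape(-1, M).T              # shape (M, len(h) // M)
n_out = len(x) // M
y = np.zeros(n_out)
for k in range(M):
    if k == 0:
        xk = x[0::M]                                          # x[0], x[M], x[2M], ...
    else:
        xk = np.concatenate(([0.0], x[M - k::M]))[:n_out]     # x[-k]=0, x[M-k], x[2M-k], ...
    y += lfilter(phases[k], 1.0, xk)[:n_out]

print(np.allclose(y, reference))         # True: same output, each branch runs at the low rate
```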

  9. A Proposed Data Fusion Architecture for Micro-Zone Analysis and Data Mining

    Energy Technology Data Exchange (ETDEWEB)

    Kevin McCarthy; Milos Manic

    2012-08-01

    Data Fusion requires the ability to combine or “fuse” data from multiple data sources. Time Series Analysis is a data mining technique used to predict future values from a data set based upon past values. Unlike other data mining techniques, however, Time Series places special emphasis on periodicity and how seasonal and other time-based factors tend to affect trends over time. One of the difficulties encountered in developing generic time series techniques is the wide variability of the data sets available for analysis. This presents challenges all the way from the data gathering stage to results presentation. This paper presents an architecture designed and used to facilitate the collection of disparate data sets well suited to Time Series analysis as well as other predictive data mining techniques. Results show this architecture provides a flexible, dynamic framework for the capture and storage of a myriad of dissimilar data sets and can serve as a foundation from which to build a complete data fusion architecture.

  10. Unified transform architecture for AVC, AVS, VC-1 and HEVC high-performance codecs

    Science.gov (United States)

    Dias, Tiago; Roma, Nuno; Sousa, Leonel

    2014-12-01

    A unified architecture for fast and efficient computation of the set of two-dimensional (2-D) transforms adopted by the most recent state-of-the-art digital video standards is presented in this paper. Contrasting to other designs with similar functionality, the presented architecture is supported on a scalable, modular and completely configurable processing structure. This flexible structure not only allows to easily reconfigure the architecture to support different transform kernels, but it also permits its resizing to efficiently support transforms of different orders (e.g. order-4, order-8, order-16 and order-32). Consequently, not only is it highly suitable to realize high-performance multi-standard transform cores, but it also offers highly efficient implementations of specialized processing structures addressing only a reduced subset of transforms that are used by a specific video standard. The experimental results that were obtained by prototyping several configurations of this processing structure in a Xilinx Virtex-7 FPGA show the superior performance and hardware efficiency levels provided by the proposed unified architecture for the implementation of transform cores for the Advanced Video Coding (AVC), Audio Video coding Standard (AVS), VC-1 and High Efficiency Video Coding (HEVC) standards. In addition, such results also demonstrate the ability of this processing structure to realize multi-standard transform cores supporting all the standards mentioned above and that are capable of processing the 8k Ultra High Definition Television (UHDTV) video format (7,680 × 4,320 at 30 fps) in real time.
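
    As a reference point for what a unified transform core computes, the sketch below applies an order-4 integer kernel (the 4×4 core matrix used by AVC/H.264) in the separable row-column fashion that such hardware typically exploits; the order-8/16/32 kernels of HEVC and the other standards slot into the same C·X·Cᵀ pattern:

```python
import numpy as np

# Order-4 integer core transform matrix used by AVC/H.264. In a multi-standard
# core, kernels of other orders and standards plug into the same separable
# Y = C @ X @ C.T computation.
C4 = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]])

def forward_2d(block, C):
    """Separable 2-D transform via row-column decomposition."""
    return C @ block @ C.T

rng = np.random.default_rng(0)
X = rng.integers(-128, 128, size=(4, 4))
Y = forward_2d(X, C4)

# Sanity check with the exact matrix inverse; a real codec instead folds the
# required scaling into quantization and uses an integer inverse kernel.
X_back = np.linalg.inv(C4) @ Y @ np.linalg.inv(C4).T
print(np.allclose(X_back, X))
```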

  11. Enterprise Architecture Modeling of Core Administrative Systems at KTH : A Modifiability Analysis

    OpenAIRE

    Rosell, Peter

    2012-01-01

    This project presents a case study of modifiability analysis on the Information Systems which are central to the core business processes of the Royal Institute of Technology (KTH) in Stockholm, Sweden, by creating, updating and using models. The case study was limited to modifiability regarding only specified Information Systems. The method selected was Enterprise Architecture together with Enterprise Architecture Analysis research results and tools from the Industrial Information and Control Systems ...

  12. Finite Element Analysis of Film Stack Architecture for Complementary Metal-Oxide-Semiconductor Image Sensors.

    Science.gov (United States)

    Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang

    2017-05-02

    Image sensors are the core components of computer, communication, and consumer electronic products. Complementary metal oxide semiconductor (CMOS) image sensors have become the mainstay of image-sensing developments, but are prone to leakage current. In this study, we simulate the CMOS image sensor (CIS) film stacking process by finite element analysis. To elucidate the relationship between the leakage current and stack architecture, we compare the simulated and measured leakage currents in the elements. Based on the analysis results, we further improve the performance by optimizing the architecture of the film stacks or changing the thin-film material. The material parameters are then corrected to improve the accuracy of the simulation results. The simulated and experimental results confirm a positive correlation between measured leakage current and stress. This trend is attributed to the structural defects induced by high stress, which generate leakage. Using this relationship, we can change the structure of the thin-film stack to reduce the leakage current and thereby improve the component life and reliability of the CIS components.

  13. Three-dimensional architectural and structural analysis--a transition in concept and design from Delaire's cephalometric analysis.

    Science.gov (United States)

    Lee, S-H; Kil, T-J; Park, K-R; Kim, B C; Kim, J-G; Piao, Z; Corre, P

    2014-09-01

    The aim of this study was to present a systematic sequence for three-dimensional (3D) measurement and cephalometry, provide the norm data for computed tomography-based 3D architectural and structural cephalometric analysis, and validate the 3D data through comparison with Delaire's two-dimensional (2D) lateral cephalometric data for the same Korean adults. 2D and 3D cephalometric analyses were performed for 27 healthy subjects and the measurements of both analyses were then individually and comparatively analyzed. Essential diagnostic tools for 3D cephalometry with modified definitions of the points, planes, and measurements were set up based on a review of the conceptual differences between two and three dimensions. Some 2D and 3D analysis results were similar, though significant differences were found with regard to craniofacial angle (C1-F1), incisal axis angles, cranial base length (C2), and cranial height (C3). The discrepancy in C2 and C3 appeared to be directly related to the magnification of 2D cephalometric images. Considering measurement discrepancies between 2D and 3D Delaire's analyses due to differences in concept and design, 3D architectural and structural analysis needs to be conducted based on norms and a sound 3D basis for the sake of its accurate application and widespread adoption. Copyright © 2014 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  14. Performance evaluation of microservices architectures using containers

    OpenAIRE

    Amaral, Marcelo; Polo, Jordà; Carrera Pérez, David; Mohomed, Iqbal; Unuvar, Merve; Steinder, Malgorzata

    2015-01-01

    Microservices architecture has started a new trend for application development for a number of reasons: (1) to reduce complexity by using tiny services; (2) to scale, remove and deploy parts of the system easily; (3) to improve flexibility to use different frameworks and tools; (4) to increase the overall scalability; and (5) to improve the resilience of the system. Containers have empowered the usage of microservices architectures by being lightweight, providing fast start-up times, and havi...
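
    One of the properties the abstract highlights, fast container start-up, is easy to measure directly. The rough sketch below is not the paper's benchmark; it assumes a local Docker daemon and an already-pulled alpine image, and the repetition count is arbitrary.

    ```python
    # Rough start-up latency probe; assumes Docker is installed and the `alpine`
    # image is available locally. Image name and repetition count are arbitrary.
    import statistics
    import subprocess
    import time

    def container_startup_seconds(image="alpine", runs=5):
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            subprocess.run(["docker", "run", "--rm", image, "true"],
                           check=True, capture_output=True)
            samples.append(time.perf_counter() - start)
        return statistics.median(samples)

    if __name__ == "__main__":
        print(f"median start-up: {container_startup_seconds():.3f} s")
    ```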

  15. A software architectural framework specification for neutron activation analysis

    International Nuclear Information System (INIS)

    Preston, J.A.; Grant, C.N.

    2013-01-01

    Neutron Activation Analysis (NAA) is a sensitive multi-element nuclear analytical technique that has been routinely applied by research reactor (RR) facilities to environmental, nutritional, health-related, geological and geochemical studies. As RR facilities face calls to increase their research output and impact, with existing or reducing budgets, automation of NAA offers a possible solution. However, automation has many challenges, not the least of which is a lack of system architecture standards to establish acceptable mechanisms for the various hardware/software and software/software interactions among data acquisition systems, specialised hardware such as sample changers, sample loaders, and data processing modules. This lack of standardization often results in automation hardware and software being incompatible with the existing system components of a facility looking to automate its NAA operations. This limits the availability of automation to a few RR facilities with adequate budgets or in-house engineering resources. What is needed is a modern open system architecture for NAA that provides the required set of functionalities. This paper describes such an 'architectural framework' (OpenNAA), and portions of a reference implementation. As an example of the benefits, calculations indicate that applying this architecture to the compilation and QA steps associated with the analysis of 35 elements in 140 samples, with 14 SRMs, can reduce the time required by over 80%. The adoption of open standards in the nuclear industry has been very successful over the years in promoting interchangeability and maximising the lifetime and output of nuclear measurement systems. OpenNAA will provide similar benefits within the NAA application space, safeguarding user investments in their current system, while providing a solid path for development into the future. (author)

  16. Time analysis of interconnection network implemented on the honeycomb architecture

    Energy Technology Data Exchange (ETDEWEB)

    Milutinovic, D [Inst. Michael Pupin, Belgrade (Yugoslavia)

    1996-12-31

    Problems of time-domain analysis of the mapping of interconnection networks for parallel processing onto one form of uniform, massively parallel architecture of the cellular type are considered. The results of the time analysis are discussed. It is found that changing the technology results in changes to the mapping rules. 17 refs.

  17. Stability and performance of propulsion control systems with distributed control architectures and failures

    Science.gov (United States)

    Belapurkar, Rohit K.

    Future aircraft engine control systems will be based on a distributed architecture, in which, the sensors and actuators will be connected to the Full Authority Digital Engine Control (FADEC) through an engine area network. Distributed engine control architecture will allow the implementation of advanced, active control techniques along with achieving weight reduction, improvement in performance and lower life cycle cost. The performance of a distributed engine control system is predominantly dependent on the performance of the communication network. Due to the serial data transmission policy, network-induced time delays and sampling jitter are introduced between the sensor/actuator nodes and the distributed FADEC. Communication network faults and transient node failures may result in data dropouts, which may not only degrade the control system performance but may even destabilize the engine control system. Three different architectures for a turbine engine control system based on a distributed framework are presented. A partially distributed control system for a turbo-shaft engine is designed based on ARINC 825 communication protocol. Stability conditions and control design methodology are developed for the proposed partially distributed turbo-shaft engine control system to guarantee the desired performance under the presence of network-induced time delay and random data loss due to transient sensor/actuator failures. A fault tolerant control design methodology is proposed to benefit from the availability of an additional system bandwidth and from the broadcast feature of the data network. It is shown that a reconfigurable fault tolerant control design can help to reduce the performance degradation in presence of node failures. A T-700 turbo-shaft engine model is used to validate the proposed control methodology based on both single input and multiple-input multiple-output control design techniques.
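
    The destabilizing effect of network-induced delay that the thesis guards against can be seen even in a toy model. The sketch below simulates a discrete integrator plant under proportional feedback that arrives d samples late; all numbers are invented and this is not the T-700 engine model. With the gain chosen here the loop tolerates a two-sample delay and diverges for longer delays.

    ```python
    # Toy model (not the thesis' engine model): discrete integrator plant
    # x[k+1] = x[k] + u[k-d] with proportional feedback u = -K*x delivered d
    # samples late by the network. All numbers are invented; with K = 0.5 the
    # loop remains stable up to a two-sample delay and diverges for larger ones.
    def closed_loop_response(delay, gain=0.5, steps=80):
        """Return |x| after `steps` samples with the feedback delayed by `delay` samples."""
        x = 1.0
        in_transit = [0.0] * delay            # control samples still on the network
        for _ in range(steps):
            in_transit.append(-gain * x)      # controller sends a new command
            x = x + in_transit.pop(0)         # plant applies the oldest command
        return abs(x)

    for d in range(6):
        print(f"delay = {d} samples -> |x| after 80 steps = {closed_loop_response(d):.3g}")
    ```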

  18. Performance of 3-D architecture silicon sensors after intense proton irradiation

    CERN Document Server

    Parker, S I

    2001-01-01

    Silicon detectors with a three-dimensional architecture, in which the n- and p-electrodes penetrate through the entire substrate, have been successfully fabricated. The electrodes can be separated from each other by distances that are less than the substrate thickness, allowing short collection paths, low depletion voltages, and large current signals from rapid charge collection. While no special hardening steps were taken in this initial fabrication run, these features of three dimensional architectures produce an intrinsic resistance to the effects of radiation damage. Some performance measurements are given for detectors that are fully depleted and working after exposures to proton beams with doses equivalent to that from slightly more than ten years at the B-layer radius (50 mm) in the planned Atlas detector at the Large Hadron Collider at CERN. (41 refs).

  19. Architectural design and reliability analysis of a fail-operational brake-by-wire system from ISO 26262 perspectives

    International Nuclear Information System (INIS)

    Sinha, Purnendu

    2011-01-01

    Next generation drive-by-wire automotive systems enabling autonomous driving will build on the fail-operational capabilities of electronics, control and software (ECS) architectural solutions. Developing such architectural designs that would meet dependability requirements and satisfy other system constraints is a challenging task and will possibly lead to a paradigm shift in automotive ECS architecture design and development activities. This aspect is becoming quite relevant while designing battery-driven electric vehicles with integrated in-wheel drive-train and chassis subsystems. In such highly integrated dependable systems, many of the primary features and functions are attributed to the highest safety critical ratings. Brake-by-wire is one such system that interfaces with active safety features built into an automobile, and which in turn is expected to provide fail-operational capabilities. In this paper, building up on the basic concepts of fail-silent and fail-operational systems design we propose a system-architecture for a brake-by-wire system with fail-operational capabilities. The design choices are supported with proper rationale and design trade-offs. Safety and reliability analysis of the proposed system architecture is performed as per the ISO 26262 standard for functional safety of electrical/electronic systems in road vehicles.
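
    As a back-of-the-envelope companion to the reliability analysis mentioned above (and not a substitute for the paper's ISO 26262 assessment), the sketch below compares a single brake-by-wire channel with a duplex fail-operational pair, assuming independent exponential failures, perfect failover and no common-cause faults; the failure rate and mission time are invented.

    ```python
    # Single channel vs. duplex (1-out-of-2) fail-operational pair, assuming
    # independent exponential failures and perfect failover; common-cause faults,
    # which the paper discusses, are deliberately ignored. Numbers are invented.
    import math

    LAMBDA = 1e-5        # assumed channel failure rate, failures per hour
    MISSION = 10_000     # assumed mission time in hours

    def r_single(t, lam=LAMBDA):
        return math.exp(-lam * t)

    def r_duplex(t, lam=LAMBDA):
        # The system survives if at least one of the two channels survives.
        return 1.0 - (1.0 - r_single(t, lam)) ** 2

    print(f"single channel : {r_single(MISSION):.6f}")
    print(f"duplex (1oo2)  : {r_duplex(MISSION):.6f}")
    ```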

  20. An ODMG-compatible testbed architecture for scalable management and analysis of physics data

    International Nuclear Information System (INIS)

    Malon, D.M.; May, E.N.

    1997-01-01

    This paper describes a testbed architecture for the investigation and development of scalable approaches to the management and analysis of massive amounts of high energy physics data. The architecture has two components: an interface layer that is compliant with a substantial subset of the ODMG-93 Version 1.2 specification, and a lightweight object persistence manager that provides flexible storage and retrieval services on a variety of single- and multi-level storage architectures, and on a range of parallel and distributed computing platforms.

  1. Context Aware Middleware Architectures: Survey and Challenges

    Directory of Open Access Journals (Sweden)

    Xin Li

    2015-08-01

    Context aware applications, which can adapt their behaviors to changing environments, are attracting more and more attention. To simplify the complexity of developing applications, context aware middleware, which introduces context awareness into the traditional middleware, is highlighted to provide a homogeneous interface involving generic context management solutions. This paper provides a survey of state-of-the-art context aware middleware architectures proposed during the period from 2009 through 2015. First, a preliminary background, such as the principles of context, context awareness, context modelling, and context reasoning, is provided for a comprehensive understanding of context aware middleware. On this basis, an overview of eleven carefully selected middleware architectures is presented and their main features explained. Then, thorough comparisons and analysis of the presented middleware architectures are performed based on technical parameters including architectural style, context abstraction, context reasoning, scalability, fault tolerance, interoperability, service discovery, storage, security & privacy, context awareness level, and cloud-based big data analytics. The analysis shows that there is actually no context aware middleware architecture that complies with all requirements. Finally, challenges are pointed out as open issues for future work.

  2. Link Performance Analysis and monitoring - A unified approach to divergent requirements

    Science.gov (United States)

    Thom, G. A.

    Link Performance Analysis and real-time monitoring are generally covered by a wide range of equipment. Bit Error Rate testers provide digital link performance measurements but are not useful during real-time data flows. Real-time performance monitors utilize the fixed overhead content but vary widely from format to format. Link quality information is also available from signal reconstruction equipment in the form of receiver AGC, bit synchronizer AGC, and bit synchronizer soft decision level outputs, but no general approach to utilizing this information exists. This paper presents an approach to link tests, real-time data quality monitoring, and results presentation that utilizes a set of general purpose modules in a flexible architectural environment. The system operates over a wide range of bit rates (up to 150 Mb/s) and employs several measurement techniques, including P/N code errors or fixed PCM format errors, real-time BER derived from frame sync errors, and Data Quality Analysis derived by counting significant sync status changes. The architecture performs with a minimum of elements in place to permit a phased update of the user's unit in accordance with the user's needs.
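
    The "BER derived from frame sync errors" technique mentioned above boils down to counting bit errors in the known sync pattern; the sketch below illustrates the arithmetic on an invented 16-bit pattern and frame stream.

    ```python
    # Estimate BER from bit errors observed in the known frame-sync words.
    # The sync pattern, frame count and observed words are invented.
    def ber_from_sync(received_syncs, expected_sync, sync_bits):
        """BER estimate = (bit errors in sync words) / (sync bits observed)."""
        errors = sum(bin(word ^ expected_sync).count("1") for word in received_syncs)
        return errors / (len(received_syncs) * sync_bits)

    EXPECTED = 0xEB90                              # hypothetical 16-bit sync word
    observed = [0xEB90, 0xEB91, 0xEB90, 0xAB90]    # two flipped bits in four frames
    print(f"estimated BER over sync bits: {ber_from_sync(observed, EXPECTED, 16):.3e}")
    ```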

  3. Biologically-Inspired Control Architecture for Musical Performance Robots

    Directory of Open Access Journals (Sweden)

    Jorge Solis

    2014-10-01

    At Waseda University, since 1990, the authors have been developing anthropomorphic musical performance robots as a means for understanding human control, introducing novel ways of interaction between musical partners and robots, and proposing applications for humanoid robots. In this paper, the design of a biologically-inspired control architecture for both an anthropomorphic flutist robot and a saxophone playing robot is described. As for the flutist robot, the authors have focused on implementing an auditory feedback system to improve the calibration procedure for the robot in order to play all the notes correctly during a performance. In particular, the proposed auditory feedback system is composed of three main modules: an Expressive Music Generator, a Feed Forward Air Pressure Control System and a Pitch Evaluation System. As for the saxophone-playing robot, a pressure-pitch controller (based on the feedback error learning) to improve the sound produced by the robot during a musical performance was proposed and implemented. In both cases studied, a set of experiments is described to verify the improvements achieved while considering biologically-inspired control approaches.

  4. Architectural Analysis of Systems Based on the Publisher-Subscriber Style

    Science.gov (United States)

    Ganesun, Dharmalingam; Lindvall, Mikael; Ruley, Lamont; Wiegand, Robert; Ly, Vuong; Tsui, Tina

    2010-01-01

    Architectural styles impose constraints on both the topology and the interaction behavior of involved parties. In this paper, we propose an approach for analyzing implemented systems based on the publisher-subscriber architectural style. From the style definition, we derive a set of reusable questions and show that some of them can be answered statically whereas others are best answered using dynamic analysis. The paper explains how the results of static analysis can be used to orchestrate dynamic analysis. The proposed method was successfully applied to NASA's Goddard Mission Services Evolution Center (GMSEC) software product line. The results show that the GMSEC has a) a novel reusable vendor-independent middleware abstraction layer that allows NASA's missions to configure the middleware of interest without changing the publishers' or subscribers' source code, and b) some high-priority bugs due to behavioral discrepancies among different implementations of the same APIs for different vendors, which had eluded testing and code reviews.

  5. PIXE analysis of Moroccan architectural glazed ceramics of 14th-18th centuries

    International Nuclear Information System (INIS)

    Zucchiatti, A.; Azzou, A.; El Amraoui, M.; Haddad, M.; Bejjit, L.; Ait Lyazidi, S.

    2009-01-01

    The PIXE analysis of glazes and ceramic bodies of a set of architectural glazed ceramics (mostly the zellige mosaics), sampled from seven Moroccan monuments from the 14th to the 18th century AD, has been performed. We have identified high lead glazes, opacified with tin-oxide, laid over a calciferous body to produce hard tiles easy to chisel as required by the zellige technique. The analysis has revealed significant differences between the monuments examined: in particular in the formulation of the base glass and in the use of stains to produce coloured glazes. We observed the peculiarity of materials used in Marrakech and we could distinguish, both in terms of glazes and ceramic bodies, the two almost contemporary Madersas dedicated to the sultan Bou Inan, one in Meknes the other in Fez. The PIXE measurements integrate a broad range of spectrometric investigations performed in the past few years. (author)

  6. A supportive architecture for CFD-based design optimisation

    Science.gov (United States)

    Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong

    2014-03-01

    Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO requiring the analysis of fluid dynamics raises a special challenge due to its extremely intensive computation. The rapid development of computational fluid dynamic (CFD) techniques has led to a rise in their applications in various fields. Especially for the exterior designs of vehicles, CFD has become one of the three main design tools, comparable to analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under the given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward. It is a challenge to solve a CFD-based design problem, which usually has high dimensionality and multiple objectives and constraints. It is desirable to have an integrated architecture for CFD-based design optimisation. However, our review of existing works has found that very few researchers have studied assistive tools to facilitate CFD-based design optimisation. In the paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing techniques and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or the data level to fully utilise the capabilities of different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation. To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided, and the result has shown that the proposed architecture
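
    The integration pattern described above, an optimiser driving an expensive black-box CFD evaluation, can be sketched generically as below. The objective function is a stand-in (a real implementation would launch a CFD job and parse its output), and the random-search strategy, bounds and names are arbitrary illustrations, not the paper's algorithms.

    ```python
    # Generic optimiser-around-a-black-box pattern (not the paper's framework).
    # `cfd_objective` is a placeholder for launching a CFD run and parsing a
    # figure of merit such as drag; here a smooth surrogate stands in for it.
    import random
    from concurrent.futures import ProcessPoolExecutor

    def cfd_objective(design):
        x, y = design
        return (x - 1.5) ** 2 + (y + 0.5) ** 2     # hypothetical surrogate

    def random_search(n_iters=4, pop=8, seed=0):
        rng = random.Random(seed)
        best = None
        with ProcessPoolExecutor() as pool:        # candidates evaluated in parallel
            for _ in range(n_iters):
                candidates = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop)]
                scores = list(pool.map(cfd_objective, candidates))
                gen_best = min(zip(scores, candidates))
                best = gen_best if best is None or gen_best < best else best
        return best

    if __name__ == "__main__":
        score, design = random_search()
        print(f"best design {design} with objective {score:.3f}")
    ```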

  7. The Hi-Ring DCN Architecture

    DEFF Research Database (Denmark)

    Galili, Michael; Kamchevska, Valerija; Ding, Yunhong

    2016-01-01

    We will review recent work on the proposed hierarchical ring-based architecture (HiRing) for data center networks. We will discuss the architecture and initial demonstrations of optical switching performance and time-domain synchronization.

  8. Understanding Extraordinary Architectural Experiences through Content Analysis of Written Narratives

    Directory of Open Access Journals (Sweden)

    Brandon Richard Ro

    2015-12-01

    This study (a) identifies how people describe, characterize, and communicate in written form Extraordinary Architectural Experiences (EAE), and (b) expands the traditional qualitative approach to architectural phenomenology by demonstrating a quantitative method to analyze written narratives. Specifically, this study reports on the content analysis of 718 personal accounts of EAEs. Using a deductive, ‘theory-driven’ approach, these narratives were read, coded, and statistically analyzed to identify storyline structure, convincing power, and the relationship between subjective and objective experiential qualities used in the story-telling process. Statistical intercoder agreement tests were conducted to verify the reliability of the interpretations to approach the hard problem of “extraordinary aesthetics” in architecture empirically. The results of this study confirm the aesthetic nature of EAE narratives (and of told experiences) by showing their higher dependence on external objective content (e.g., a building’s features and location) rather than its internal subjective counterpart (e.g., emotions and sensations), which makes them more outwardly focused. The strong interrelationships and intercoder agreement between the thematic realms provide a unique aesthetic construct revealing EAE narratives as memorable, embodied, emotional events mapped by the externally focused content of place, social setting, time, and building features. A majority of EAE narratives were found to possess plot-structure along with significant relationships to objective-subjective content that further grounded their storylines. This study concludes that content analysis provides not only a valid method to understand written narratives about extraordinary architectural experiences quantitatively, but also a view as to how to map the unique nature of aesthetic phenomenology empirically.
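
    The record reports intercoder agreement tests without naming the statistic; a common such measure is Cohen's kappa, sketched below on invented codes from two hypothetical readers (this is a generic illustration, not the study's data or necessarily its chosen statistic).

    ```python
    # Cohen's kappa on made-up codes from two hypothetical readers; a generic
    # illustration of an intercoder agreement test, not the study's data.
    from collections import Counter

    def cohens_kappa(coder_a, coder_b):
        n = len(coder_a)
        observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
        freq_a, freq_b = Counter(coder_a), Counter(coder_b)
        expected = sum(freq_a[c] * freq_b[c] for c in set(coder_a) | set(coder_b)) / n ** 2
        return (observed - expected) / (1 - expected)

    a = ["emotion", "place", "place", "light", "emotion", "place", "light", "place", "emotion", "place"]
    b = ["emotion", "place", "light", "light", "emotion", "place", "light", "place", "place", "place"]
    print(f"Cohen's kappa = {cohens_kappa(a, b):.2f}")   # 0.68 for these made-up codes
    ```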

  9. Architectural and growth traits differ in effects on performance of clonal plants: an analysis using a field-parameterized simulation model

    Czech Academy of Sciences Publication Activity Database

    Wildová, Radka; Gough, L.; Herben, Tomáš; Hershock, Ch.; Goldberg, D. E.

    2007-01-01

    Roč. 116, č. 5 (2007), s. 836-852 ISSN 0030-1299 R&D Projects: GA ČR(CZ) GA206/02/0953; GA ČR(CZ) GA206/02/0578 Grant - others:NSF(US) DEB99-74296; NSF(US) DEB99-74284 Institutional research plan: CEZ:AV0Z60050516 Keywords : individual-based model * performance * plant architecture * competitive response * resource allocation Subject RIV: EF - Botanics Impact factor: 3.136, year: 2007

  10. Critical analysis of key determinants and barriers to digital innovation adoption among architectural organizations

    Directory of Open Access Journals (Sweden)

    Runddy Ramilo

    2014-12-01

    The development and use of design technology for architecture in the modern world have led to the emergence of various design methodologies. Current design research has focused on a computationally mediated design process. This method is essentially concerned with finding forms and building performance simulation, i.e., structural, environmental, constructional, and cost performance, by integrating physics and algorithms. From the emergence of this process, design practices have been increasingly aided by and dependent on the technology, which has resulted in a major paradigm shift. Advancement of the new technology has the potential to improve design and productivity dramatically. However, related literature shows that substantial technical and organizational barriers exist. These barriers inhibit the effective adoption of these technologies. The effect of these obstacles on architectural practice varies depending on the size of an architectural organization. To further understand the problem, we conducted an in-depth study on several small, medium, and large architectural organizations. This study involves in-depth evaluation of technological, financial, organizational, governmental, psychological, and process barriers encountered in the adoption of digital innovation. Results reveal relevant attributes and patterns of variables, which can be used to establish a framework for digital innovation adoption. Valuable findings of this study reveal that smaller architectural organizations present more barriers to digital innovation compared with their larger counterparts. This study is important because it contributes to the research on digital innovation in architecture and addresses the barriers faced by different sizes of architectural organizations.

  11. Architecture effects of glucose oxidase/Au nanoparticle composite Langmuir-Blodgett films on glucose sensing performance

    Science.gov (United States)

    Wang, Ke-Hsuan; Wu, Jau-Yann; Chen, Liang-Huei; Lee, Yuh-Lang

    2016-03-01

    The Langmuir-Blodgett (LB) deposition technique is employed to prepare nano-composite films consisting of glucose oxidase (GOx) and gold nanoparticles (AuNPs) for glucose sensing applications. The GOx and AuNPs are co-adsorbed from an aqueous solution onto an air/liquid interface in the presence of an octadecylamine (ODA) template monolayer, forming a mixed (GOx-AuNP) monolayer. Alternatively, a composite film with a cascade architecture (AuNP/GOx) is also prepared by sequentially depositing monolayers of AuNPs and GOx. The architecture effects of the composite LB films on the glucose sensing are studied. The results show that the presence of AuNPs in the co-adsorption system does not affect the adsorption amount and preferred conformation (α-helix) of GOx. Furthermore, the incorporation of AuNPs in both composite films can significantly improve the sensing performance. However, the enhancement effects of the AuNPs in the two architectures are distinct. The major effect of the AuNPs is on the facilitation of charge-transfer in the (GOx-AuNP) film, but on the increase of catalytic activity in the (AuNP/GOx) one. Therefore, the sensing performance can be greatly improved by utilizing a film combining both architectures (AuNP/GOx-AuNP).

  12. Advanced Architectures for Astrophysical Supercomputing

    Science.gov (United States)

    Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.

    2010-12-01

    Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding-up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.

  13. Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 2: Army fault tolerant architecture design and analysis

    Science.gov (United States)

    Harper, R. E.; Alger, L. S.; Babikyan, C. A.; Butler, B. P.; Friend, S. A.; Ganska, R. J.; Lala, J. H.; Masotto, T. K.; Meyer, A. J.; Morton, D. P.

    1992-01-01

    Described here are the Army Fault Tolerant Architecture (AFTA) hardware architecture and components, and the operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed. The test and maintenance strategy developed for use in fielded AFTA installations is presented. An approach to be used in reducing the probability of AFTA failure due to common mode faults is described. Analytical models for AFTA performance, reliability, availability, life cycle cost, weight, power, and volume are developed. An approach is presented for using VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware. A plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions.
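
    As a generic taste of the analytical reliability models listed above (not AFTA's actual redundancy scheme), the sketch below compares a simplex channel with a triple-modular-redundant channel that survives while at least two of its three members do; the failure rate and mission times are invented.

    ```python
    # Simplex channel vs. triple modular redundancy with majority voting, under
    # independent exponential failures. Rates and times are invented; this is a
    # generic reliability building block, not AFTA's configuration.
    import math

    def r_simplex(t, lam):
        return math.exp(-lam * t)

    def r_tmr(t, lam):
        r = r_simplex(t, lam)
        return 3 * r**2 - 2 * r**3        # P(at least 2 of 3 members alive)

    LAM = 1e-4   # assumed failures/hour per member
    for hours in (10, 100, 1000):
        print(f"t={hours:5d} h  simplex={r_simplex(hours, LAM):.6f}  TMR={r_tmr(hours, LAM):.6f}")
    ```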

  14. Specification, Design, and Analysis of Advanced HUMS Architectures

    Science.gov (United States)

    Mukkamala, Ravi

    2004-01-01

    During the two-year project period, we have worked on several aspects of domain-specific architectures for HUMS. In particular, we looked at using scenario-based approach for the design and designed a language for describing such architectures. The language is now being used in all aspects of our HUMS design. In particular, we have made contributions in the following areas. 1) We have employed scenarios in the development of HUMS in three main areas. They are: (a) To improve reusability by using scenarios as a library indexing tool and as a domain analysis tool; (b) To improve maintainability by recording design rationales from two perspectives - problem domain and solution domain; (c) To evaluate the software architecture. 2) We have defined a new architectural language called HADL or HUMS Architectural Definition Language. It is a customized version of xArch/xADL. It is based on XML and, hence, is easily portable from domain to domain, application to application, and machine to machine. Specifications written in HADL can be easily read and parsed using the currently available XML parsers. Thus, there is no need to develop a plethora of software to support HADL. 3) We have developed an automated design process that involves two main techniques: (a) Selection of solutions from a large space of designs; (b) Synthesis of designs. However, the automation process is not an absolute Artificial Intelligence (AI) approach though it uses a knowledge-based system that epitomizes a specific HUMS domain. The process uses a database of solutions as an aid to solve the problems rather than creating a new design in the literal sense. Since searching is adopted as the main technique, the challenges involved are: (a) To minimize the effort in searching the database where a very large number of possibilities exist; (b) To develop representations that could conveniently allow us to depict design knowledge evolved over many years; (c) To capture the required information that aid the

  15. Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali; Vishwanath, Venkatram; Kumaran, Kalyan

    2017-01-01

    Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel’s second-generation Xeon Phi architecture code-named Knights Landing (KNL) for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.
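
    A heavily simplified cousin of such an analytical model is a roofline-style bound, sketched below; the peak compute and bandwidth figures are rough assumed values for a KNL-class part, not vendor-verified numbers, and the streaming kernel is hypothetical (the SKOPE/KNL model itself is far more detailed).

    ```python
    # Roofline-style lower bound on kernel time: limited either by compute
    # throughput or by memory traffic. Peak figures are rough assumed values
    # for a KNL-class part; the streaming kernel below is hypothetical.
    PEAK_FLOPS = 3.0e12     # assumed double-precision peak, FLOP/s
    PEAK_BW = 400.0e9       # assumed MCDRAM bandwidth, bytes/s

    def roofline_time(flops, bytes_moved):
        return max(flops / PEAK_FLOPS, bytes_moved / PEAK_BW)

    n = 100_000_000
    flops, traffic = 2 * n, 3 * 8 * n     # daxpy-like: 2 FLOPs and 24 bytes per element
    t = roofline_time(flops, traffic)
    limit = "memory" if traffic / PEAK_BW >= flops / PEAK_FLOPS else "compute"
    print(f"lower bound {t*1e3:.2f} ms, {flops / t / 1e9:.1f} GFLOP/s ({limit}-bound)")
    ```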

  16. Service oriented architecture assessment based on software components

    Directory of Open Access Journals (Sweden)

    Mahnaz Amirpour

    2016-01-01

    Enterprise architecture, with detailed descriptions of the functions of information technology in the organization, tries to reduce the complexity of technology applications, resulting in tools with greater efficiency in achieving the objectives of the organization. Enterprise architecture consists of a set of models describing the performance of this technology in different components as well as various aspects of the applications in any organization. Therefore, information technology development and maintenance management can perform well within organizations. This study aims to suggest a method to identify different types of services in the service-oriented architecture analysis step that applies some previous approaches in an integrated form and, based on the principles of software engineering, to provide a simpler and more transparent approach through the expression of analysis details. Advantages and disadvantages of proposals should be evaluated before implementation and cost allocation. Evaluation methods can better identify strengths and weaknesses of the current situation, apart from selecting an appropriate model out of several suggestions, and clarify this technology development solution for organizations in the future. We will be able to simulate data and process flows within the organization by converting the output of the model to colored Petri nets, and to evaluate and test it by examining various inputs to the enterprise architecture, in terms of reliability and response time, before it is implemented. An application of the proposed model has been studied, and the results can describe and design an architecture for data.

  17. Analysis and Design of a Context Adaptable SAD/MSE Architecture

    Directory of Open Access Journals (Sweden)

    Arvind Sudarsanam

    2009-01-01

    Design of flexible multimedia accelerators that can cater to multiple algorithms is being aggressively pursued in the media processors community. Such an approach is justified in the era of sub-45 nm technology, where an increasingly dominating leakage power component is forcing designers to make the best possible use of on-chip resources. In this paper we present an analysis of two commonly used window-based operations (sum of absolute differences and mean squared error) across a variety of search patterns and block sizes (2×3, 5×5, etc.). We propose a context adaptable architecture that has (i) a configurable 2D systolic array and (ii) a 2D Configurable Register Array (CRA). The CRA can cater to variable pixel access patterns while reusing fetched pixels across search windows. Benefits of the proposed architecture when compared to 15 other published architectures are adaptability, high throughput, and low latency, at a cost of increased footprint, when ported on a Xilinx FPGA.
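
    A plain software reference for the two window operations the architecture accelerates is given below; the block size, search area and data are invented, and the exhaustive scan is only meant to define the computation, not to mirror the hardware.

    ```python
    # Software reference (not the hardware architecture) for SAD and MSE of a
    # candidate block against a reference block, plus an exhaustive block search.
    import numpy as np

    def sad(block, candidate):
        return int(np.abs(block.astype(int) - candidate.astype(int)).sum())

    def mse(block, candidate):
        diff = block.astype(float) - candidate.astype(float)
        return float((diff ** 2).mean())

    def best_match(block, search_area):
        """Scan the search area and return the (cost, offset) with minimum SAD."""
        bh, bw = block.shape
        best = None
        for dy in range(search_area.shape[0] - bh + 1):
            for dx in range(search_area.shape[1] - bw + 1):
                cost = sad(block, search_area[dy:dy + bh, dx:dx + bw])
                if best is None or cost < best[0]:
                    best = (cost, (dy, dx))
        return best

    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)   # hypothetical search area
    blk = ref[5:10, 7:12].copy()                                # 5x5 block taken from it
    print(best_match(blk, ref))                                 # expect SAD 0 at offset (5, 7)
    ```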

  18. Connecting Architecture and Implementation

    Science.gov (United States)

    Buchgeher, Georg; Weinreich, Rainer

    Software architectures are still typically defined and described independently from implementation. To avoid architectural erosion and drift, architectural representation needs to be continuously updated and synchronized with system implementation. Existing approaches for architecture representation like informal architecture documentation, UML diagrams, and Architecture Description Languages (ADLs) provide only limited support for connecting architecture descriptions and implementations. Architecture management tools like Lattix, SonarJ, and Sotoarc and UML-tools tackle this problem by extracting architecture information directly from code. This approach works for low-level architectural abstractions like classes and interfaces in object-oriented systems but fails to support architectural abstractions not found in programming languages. In this paper we present an approach for linking and continuously synchronizing a formalized architecture representation to an implementation. The approach is a synthesis of functionality provided by code-centric architecture management and UML tools and higher-level architecture analysis approaches like ADLs.

  19. A Coarse-Grained Reconfigurable Architecture with Compilation for High Performance

    Directory of Open Access Journals (Sweden)

    Lu Wan

    2012-01-01

    We propose a fast data relay (FDR) mechanism to enhance existing CGRA (coarse-grained reconfigurable architecture) designs. FDR can not only provide multicycle data transmission concurrently with computations but also convert resource-demanding inter-processing-element global data accesses into local data accesses to avoid communication congestion. We also propose the supporting compiler techniques that can efficiently utilize the FDR feature to achieve higher performance for a variety of applications. Our results on FDR-based CGRA are compared with two other works in this field: ADRES and RCP. Experimental results for various multimedia applications show that FDR combined with the new compiler delivers up to 29% and 21% higher performance than ADRES and RCP, respectively.

  20. Italian bioclimatic architecture

    Energy Technology Data Exchange (ETDEWEB)

    D' Errico, E

    1987-04-01

    This paper deals with the results of solar space heating research developed within the Finalized Energy Project of the National Research Council of Italy. Energy and cost/benefit parameters were compared for a certain number of Italian buildings incorporating solar architecture. The technical and economic analysis was performed on 31 buildings, of which 24 are residential, and 7 are schools, with different solar devices (direct gain, Trombe walls, sunspaces, hybrid systems). The buildings were constructed between 1976 and 1982. The results emphasize that simple technologies with lower costs and good design usually have a higher performance/cost ratio.

  1. RISC. A new style in the design of architectures

    International Nuclear Information System (INIS)

    Cortadella, J.; Gonzalez, A.; Llaberia, J.M.

    1988-01-01

    In the 1980s a new architecture design and implementation style appeared: the RISC style. It proposes an overall view of the system where the processor is included. For each function, an extensive analysis has to be performed in order to evaluate the advantages and disadvantages that hardware and software introduce in the design. An optimum design involves a compromise between both levels and has to take into account cost, performance, and technological factors. In this paper, the main features of this new architecture design style are presented. (Author)

  2. Domain architecture conservation in orthologs

    Science.gov (United States)

    2011-01-01

    Background As orthologous proteins are expected to retain function more often than other homologs, they are often used for functional annotation transfer between species. However, ortholog identification methods do not take into account changes in domain architecture, which are likely to modify a protein's function. By domain architecture we refer to the sequential arrangement of domains along a protein sequence. To assess the level of domain architecture conservation among orthologs, we carried out a large-scale study of such events between human and 40 other species spanning the entire evolutionary range. We designed a score to measure domain architecture similarity and used it to analyze differences in domain architecture conservation between orthologs and paralogs relative to the conservation of primary sequence. We also statistically characterized the extents of different types of domain swapping events across pairs of orthologs and paralogs. Results The analysis shows that orthologs exhibit greater domain architecture conservation than paralogous homologs, even when differences in average sequence divergence are compensated for, for homologs that have diverged beyond a certain threshold. We interpret this as an indication of a stronger selective pressure on orthologs than paralogs to retain the domain architecture required for the proteins to perform a specific function. In general, orthologs as well as the closest paralogous homologs have very similar domain architectures, even at large evolutionary separation. The most common domain architecture changes observed in both ortholog and paralog pairs involved insertion/deletion of new domains, while domain shuffling and segment duplication/deletion were very infrequent. Conclusions On the whole, our results support the hypothesis that function conservation between orthologs demands higher domain architecture conservation than other types of homologs, relative to primary sequence conservation. This supports the

  3. Implementation and Performance of a GPS/INS Tightly Coupled Assisted PLL Architecture Using MEMS Inertial Sensors

    Directory of Open Access Journals (Sweden)

    Youssef Tawk

    2014-02-01

    The use of global navigation satellite system receivers for navigation still presents many challenges in urban canyon and indoor environments, where satellite availability is typically reduced and received signals are attenuated. To improve the navigation performance in such environments, several enhancement methods can be implemented. For instance, external aid provided through coupling with other sensors has proven to contribute substantially to enhancing navigation performance and robustness. Within this context, coupling a very simple GPS receiver with an Inertial Navigation System (INS) based on low-cost micro-electro-mechanical systems (MEMS) inertial sensors is considered in this paper. In particular, we propose a GPS/INS Tightly Coupled Assisted PLL (TCAPLL) architecture, and present most of the associated challenges that need to be addressed when dealing with very-low-performance MEMS inertial sensors. In addition, we propose a data monitoring system in charge of checking the quality of the measurement flow in the architecture. The implementation of the TCAPLL is discussed in detail, and its performance under different scenarios is assessed. Finally, the architecture is evaluated through a test campaign using a vehicle that is driven in urban environments, with the purpose of highlighting the pros and cons of combining MEMS inertial sensors with GPS over GPS alone.
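
    The TCAPLL couples GPS and INS at the tracking-loop level; as a far simpler illustration of why inertial aiding helps at all, the sketch below blends an INS-propagated position with a noisy GPS fix using a scalar Kalman-style update. All noise figures and the trajectory are invented, and this loosely coupled toy is not the paper's architecture.

    ```python
    # Scalar Kalman-style blend of an INS-propagated position with a noisy GPS
    # measurement. Velocities, noise variances and the trajectory are invented.
    import random

    def fuse(ins_pred, p_pred, gps_meas, r_gps):
        """One measurement update: return the fused estimate and its variance."""
        k = p_pred / (p_pred + r_gps)                 # Kalman gain
        est = ins_pred + k * (gps_meas - ins_pred)
        return est, (1 - k) * p_pred

    random.seed(1)
    true_pos, vel, dt = 0.0, 1.0, 1.0
    est, p = 0.0, 1.0
    Q_INS, R_GPS = 0.05, 4.0                          # assumed process/measurement noise
    for step in range(5):
        true_pos += vel * dt
        ins_pred, p_pred = est + vel * dt, p + Q_INS  # INS propagation (drifts slowly)
        gps = true_pos + random.gauss(0.0, R_GPS ** 0.5)
        est, p = fuse(ins_pred, p_pred, gps, R_GPS)
        print(f"step {step}: truth={true_pos:.2f}  GPS={gps:.2f}  fused={est:.2f}")
    ```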

  4. Implementation and Performance of a GPS/INS Tightly Coupled Assisted PLL Architecture Using MEMS Inertial Sensors

    Science.gov (United States)

    Tawk, Youssef; Tomé, Phillip; Botteron, Cyril; Stebler, Yannick; Farine, Pierre-André

    2014-01-01

    The use of global navigation satellite system receivers for navigation still presents many challenges in urban canyon and indoor environments, where satellite availability is typically reduced and received signals are attenuated. To improve the navigation performance in such environments, several enhancement methods can be implemented. For instance, external aid provided through coupling with other sensors has proven to contribute substantially to enhancing navigation performance and robustness. Within this context, coupling a very simple GPS receiver with an Inertial Navigation System (INS) based on low-cost micro-electro-mechanical systems (MEMS) inertial sensors is considered in this paper. In particular, we propose a GPS/INS Tightly Coupled Assisted PLL (TCAPLL) architecture, and present most of the associated challenges that need to be addressed when dealing with very-low-performance MEMS inertial sensors. In addition, we propose a data monitoring system in charge of checking the quality of the measurement flow in the architecture. The implementation of the TCAPLL is discussed in detail, and its performance under different scenarios is assessed. Finally, the architecture is evaluated through a test campaign using a vehicle that is driven in urban environments, with the purpose of highlighting the pros and cons of combining MEMS inertial sensors with GPS over GPS alone. PMID:24569773

  5. Coupling SIMD and SIMT architectures to boost performance of a phylogeny-aware alignment kernel

    Directory of Open Access Journals (Sweden)

    Alachiotis Nikolaos

    2012-08-01

    Background Aligning short DNA reads to a reference sequence alignment is a prerequisite for detecting their biological origin and analyzing them in a phylogenetic context. With the PaPaRa tool we introduced a dedicated dynamic programming algorithm for simultaneously aligning short reads to reference alignments and corresponding evolutionary reference trees. The algorithm aligns short reads to phylogenetic profiles that correspond to the branches of such a reference tree. The algorithm needs to perform an immense number of pairwise alignments. Therefore, we explore vector intrinsics and GPUs to accelerate the PaPaRa alignment kernel. Results We optimized and parallelized PaPaRa on CPUs and GPUs. Via SSE 4.1 SIMD (Single Instruction, Multiple Data) intrinsics for x86 SIMD architectures and multi-threading, we obtained a 9-fold acceleration on a single core as well as linear speedups with respect to the number of cores. The peak CPU performance amounts to 18.1 GCUPS (Giga Cell Updates per Second) using all four physical cores on an Intel i7 2600 CPU running at 3.4 GHz. The average CPU performance (averaged over all test runs) is 12.33 GCUPS. We also used OpenCL to execute PaPaRa on a GPU SIMT (Single Instruction, Multiple Threads) architecture. A NVIDIA GeForce 560 GPU delivered peak and average performance of 22.1 and 18.4 GCUPS respectively. Finally, we combined the SIMD and SIMT implementations into a hybrid CPU-GPU system that achieved an accumulated peak performance of 33.8 GCUPS. Conclusions This accelerated version of PaPaRa (available at http://www.exelixis-lab.org/software.html) provides a significant performance improvement that allows for analyzing larger datasets in less time. We observe that state-of-the-art SIMD and SIMT architectures deliver comparable performance for this dynamic programming kernel when the “competing programmer approach” is deployed. Finally, we show that overall performance can be substantially increased
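
    For readers unfamiliar with the GCUPS figures quoted above, the unit is simply dynamic-programming cell updates per second; the sketch below does the arithmetic for an invented workload (the sizes and runtime are not measurements from the paper).

    ```python
    # GCUPS = (query length x reference length x number of alignments) / runtime / 1e9.
    # The workload below is invented, not a measurement from the paper.
    def gcups(query_len, ref_len, alignments, seconds):
        cell_updates = query_len * ref_len * alignments
        return cell_updates / seconds / 1e9

    # e.g. 10,000 reads of length 100 aligned against 3,000 reference columns in 2.5 s
    print(f"{gcups(100, 3000, 10_000, 2.5):.1f} GCUPS")
    ```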

  6. Using Enterprise Architecture for Analysis of a Complex Adaptive Organization's Risk Inducing Characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Salguero, Laura Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Huff, Johnathon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Matta, Anthony R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Collins, Sue S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-08-01

    Sandia National Laboratories is an organization with a wide range of research and development activities that include nuclear, explosives, and chemical hazards. In addition, Sandia has over 2000 labs and over 40 major test facilities, such as the Thermal Test Complex, the Lightning Test Facility, and the Rocket Sled Track. In order to support safe operations, Sandia has a diverse Environment, Safety, and Health (ES&H) organization that provides expertise to support engineers and scientists in performing work safely. With such a diverse organization to support, the ES&H program continuously seeks opportunities to improve the services provided for Sandia by using various methods as part of their risk management strategy. One of the methods being investigated is using enterprise architecture analysis to mitigate risk inducing characteristics such as normalization of deviance, organizational drift, and problems in information flow. This paper is a case study for how a Department of Defense Architecture Framework (DoDAF) model of the ES&H enterprise, including information technology applications, can be analyzed to understand the level of risk associated with the risk inducing characteristics discussed above. While the analysis is not complete, we provide proposed analysis methods that will be used for future research as the project progresses.

  7. INVESTIGATION OF FLIP-FLOP PERFORMANCE ON DIFFERENT TYPE AND ARCHITECTURE IN SHIFT REGISTER WITH PARALLEL LOAD APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Dwi Purnomo

    2015-08-01

    The register is one of the computer components that have a key role in computer organisation. Every computer contains millions of registers that are realised with flip-flops. This research focuses on the investigation of flip-flop performance based on its type (D, T, S-R, and J-K) and architecture (structural, behavioural, and hybrid). Each type of flip-flop, in each architecture, was tested in shift register with parallel load applications of different bit widths. The experimental criteria assessed are power consumption, resources required, memory required, latency, and efficiency. Based on the experiments, it could be shown that the D flip-flop and the hybrid architecture showed the best performance in required memory, latency, power consumption, and efficiency. In addition, the experimental results showed that the greater the register size, the less efficient the system would be.
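
    A behavioural Python model of the benchmark circuit, an n-bit shift register with parallel load built from D flip-flops, is sketched below for reference; it is not HDL and not one of the paper's FPGA implementations, and the width and stimulus are arbitrary.

    ```python
    # Behavioural model of a shift register with parallel load (one D flip-flop
    # per bit). Not HDL and not the paper's designs; width and inputs are arbitrary.
    class ShiftRegisterPL:
        def __init__(self, width):
            self.q = [0] * width

        def clock(self, serial_in, load, data):
            """One rising edge: parallel load when `load` is 1, otherwise shift."""
            if load:
                self.q = list(data)
            else:
                self.q = [serial_in] + self.q[:-1]   # shift by one, insert serial_in
            return list(self.q)

    sr = ShiftRegisterPL(4)
    print(sr.clock(serial_in=0, load=1, data=[1, 0, 1, 1]))   # load  -> [1, 0, 1, 1]
    print(sr.clock(serial_in=0, load=0, data=[0, 0, 0, 0]))   # shift -> [0, 1, 0, 1]
    ```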

  8. The Present of Architectural Psychology Researches in China- Based on the Bibliometric Analysis and Knowledge Mapping

    Science.gov (United States)

    Zhu, LeiYe; Wang, Qi; Xu, JunHua; Wu, Qing; Jin, MeiDong; Liao, RongJun; Wang, HaiBin

    2018-03-01

    Architectural Psychology is an interdisciplinary subject of psychology and architecture that focuses on architectural design by using Gestalt psychology, cognitive psychology and other related psychology principles. Researchers from China have achieved fruitful results in the field of architectural psychology during the past thirty-three years. To reveal the current situation of the field in China, 129 related papers from the China National Knowledge Infrastructure (CNKI) were analyzed with the CiteSpace II software. The results show that: (1) studies in the field in China began in 1984, and the annual number of papers increased dramatically from 2008 and reached a historical peak in 2016. Shanxi Architecture tops the list of contributing publishing journals; Wuhan University, Southwest Jiaotong University and Chongqing University are the best performers among the contributing organizations. (2) “Environmental Psychology”, “Architectural Design” and “Architectural Psychology” are the most frequent keywords. The frontiers of the field in China are “architectural creation” and “environmental psychology”, while the popular research topics were “residential environment”, “spatial environment”, “environmental psychology”, “architectural theory” and “architectural psychology”.

  9. Performance evaluation of OpenFOAM on many-core architectures

    International Nuclear Information System (INIS)

    Brzobohatý, Tomáš; Říha, Lubomír; Karásek, Tomáš; Kozubek, Tomáš

    2015-01-01

    In this article, the application of the Open Source Field Operation and Manipulation (OpenFOAM) C++ libraries to solving engineering problems on many-core architectures is presented. The objective of this article is to present the scalability of OpenFOAM on parallel platforms solving real engineering problems of fluid dynamics. Scalability tests of OpenFOAM are performed using various hardware and different implementations of the standard PCG and PBiCG Krylov iterative methods. Speed-ups of various implementations of the linear solvers using GPU and MIC accelerators are presented in this paper. Numerical experiments on 3D lid-driven cavity flow for several cases with various numbers of cells are presented.
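
    The scalability results reported above reduce to two standard figures, speed-up S(p) = T(1)/T(p) and parallel efficiency E(p) = S(p)/p; the sketch below computes them for an invented set of wall-clock timings (not data from the paper).

    ```python
    # Speed-up and parallel efficiency from wall-clock timings; the timings below
    # are invented for illustration, not measurements from the paper.
    timings = {1: 1000.0, 2: 520.0, 4: 270.0, 8: 150.0, 16: 95.0}   # seconds, hypothetical

    t1 = timings[1]
    for p, tp in sorted(timings.items()):
        speedup = t1 / tp
        efficiency = speedup / p
        print(f"p={p:2d}: T={tp:7.1f} s  S={speedup:5.2f}  E={efficiency:4.2f}")
    ```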

  10. Performance evaluation of OpenFOAM on many-core architectures

    Energy Technology Data Exchange (ETDEWEB)

    Brzobohatý, Tomáš; Říha, Lubomír; Karásek, Tomáš, E-mail: tomas.karasek@vsb.cz; Kozubek, Tomáš [IT4Innovations National Supercomputing Center, VŠB-Technical University of Ostrava (Czech Republic)

    2015-03-10

    In this article, the application of the Open Source Field Operation and Manipulation (OpenFOAM) C++ libraries to solving engineering problems on many-core architectures is presented. The objective of this article is to present the scalability of OpenFOAM on parallel platforms solving real engineering problems of fluid dynamics. Scalability tests of OpenFOAM are performed using various hardware and different implementations of the standard PCG and PBiCG Krylov iterative methods. Speed-ups of various implementations of the linear solvers using GPU and MIC accelerators are presented in this paper. Numerical experiments on 3D lid-driven cavity flow for several cases with various numbers of cells are presented.

  11. SCALEA-G: A Unified Monitoring and Performance Analysis System for the Grid

    Directory of Open Access Journals (Sweden)

    Hong-Linh Truong

    2004-01-01

    This paper describes SCALEA-G, a unified monitoring and performance analysis system for the Grid. SCALEA-G is implemented as a set of grid services based on the Open Grid Services Architecture (OGSA). SCALEA-G provides an infrastructure for conducting online monitoring and performance analysis of a variety of Grid services including computational and network resources, and Grid applications. Both push and pull models are supported, providing flexible and scalable monitoring and performance analysis. Source code and dynamic instrumentation are implemented to perform profiling and monitoring of Grid applications. A novel instrumentation request language for dynamic instrumentation and a standardized intermediate representation for binary code have been developed to facilitate the interaction between client and instrumentation services.

  12. Space Station data system analysis/architecture study. Task 1: Functional requirements definition, DR-5

    Science.gov (United States)

    1985-01-01

    The initial task in the Space Station Data System (SSDS) Analysis/Architecture Study is the definition of the functional and key performance requirements for the SSDS. The SSDS is the set of hardware and software, both on the ground and in space, that provides the basic data management services for Space Station customers and systems. The primary purpose of the requirements development activity was to provide a coordinated, documented requirements set as a basis for the system definition of the SSDS and for other subsequent study activities. These requirements should also prove useful to other Space Station activities in that they provide an indication of the scope of the information services and systems that will be needed in the Space Station program. The major results of the requirements development task are as follows: (1) identification of a conceptual topology and architecture for the end-to-end Space Station Information Systems (SSIS); (2) development of a complete set of functional requirements and design drivers for the SSIS; (3) development of functional requirements and key performance requirements for the Space Station Data System (SSDS); and (4) definition of an operating concept for the SSIS. The operating concept was developed both from a Space Station payload customer and operator perspective in order to allow a requirements practicality assessment.

  13. A high performance architecture for accelerator controls

    International Nuclear Information System (INIS)

    Allen, M.; Hunt, S.M.; Lue, H.; Saltmarsh, C.G.; Parker, C.R.C.B.

    1991-01-01

    The demands placed on the Superconducting Super Collider (SSC) control system due to large distances, high bandwidth and fast response time required for operation will require a fresh approach to the data communications architecture of the accelerator. The prototype design effort aims at providing deterministic communication across the accelerator complex with a response time of < 100 ms and total bandwidth of 2 Gbits/sec. It will offer a consistent interface for a large number of equipment types, from vacuum pumps to beam position monitors, providing appropriate communications performance for each equipment type. It will consist of highly parallel links to all equipment: those with computing resources, non-intelligent direct control interfaces, and data concentrators. This system will give each piece of equipment a dedicated link of fixed bandwidth to the control system. Application programs will have access to all accelerator devices which will be memory mapped into a global virtual addressing scheme. Links to devices in the same geographical area will be multiplexed using commercial Time Division Multiplexing equipment. Low-level access will use reflective memory techniques, eliminating processing overhead and complexity of traditional data communication protocols. The use of commercial standards and equipment will enable a high performance system to be built at low cost

  14. A high performance architecture for accelerator controls

    International Nuclear Information System (INIS)

    Allen, M.; Hunt, S.M.; Lue, H.; Saltmarsh, C.G.; Parker, C.R.C.B.

    1991-03-01

    The demands placed on the Superconducting Super Collider (SSC) control system due to large distances, high bandwidth and fast response time required for operation will require a fresh approach to the data communications architecture of the accelerator. The prototype design effort aims at providing deterministic communication across the accelerator complex with a response time of <100 ms and total bandwidth of 2 Gbits/sec. It will offer a consistent interface for a large number of equipment types, from vacuum pumps to beam position monitors, providing appropriate communications performance for each equipment type. It will consist of highly parallel links to all equipment: those with computing resources, non-intelligent direct control interfaces, and data concentrators. This system will give each piece of equipment a dedicated link of fixed bandwidth to the control system. Application programs will have access to all accelerator devices which will be memory mapped into a global virtual addressing scheme. Links to devices in the same geographical area will be multiplexed using commercial Time Division Multiplexing equipment. Low-level access will use reflective memory techniques, eliminating processing overhead and complexity of traditional data communication protocols. The use of commercial standards and equipment will enable a high performance system to be built at low cost.
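
    The abstract's idea of devices being "memory mapped into a global virtual addressing scheme" can be illustrated with a small POSIX memory-mapping sketch. The device path, register offset and layout below are hypothetical placeholders, not part of the SSC design; the point is only how an application reads a device register through its own virtual address space.

```cpp
// Sketch: reading a device register through a memory-mapped window (POSIX mmap).
// The device file, offsets and register layout are hypothetical.
#include <cstdint>
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    const char*  devPath   = "/dev/bpm0";   // hypothetical beam-position-monitor device
    const off_t  mapOffset = 0;             // page-aligned offset into the device
    const size_t mapLength = 4096;          // one page of register space

    int fd = open(devPath, O_RDONLY);
    if (fd < 0) { std::perror("open"); return 1; }

    void* base = mmap(nullptr, mapLength, PROT_READ, MAP_SHARED, fd, mapOffset);
    if (base == MAP_FAILED) { std::perror("mmap"); close(fd); return 1; }

    // Registers now appear as ordinary memory; 'volatile' prevents caching of reads.
    volatile const uint32_t* regs = static_cast<volatile const uint32_t*>(base);
    uint32_t position = regs[0];            // hypothetical: word 0 holds a position reading
    std::printf("raw position register: 0x%08x\n", position);

    munmap(base, mapLength);
    close(fd);
    return 0;
}
```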

  15. Energy and architecture — An overview

    Science.gov (United States)

    Sonetti, G.

    2013-06-01

    This paper aims to provide a short overview of the complex aspects of, and growing concern about, energy in architecture by gradually zooming into it: starting from a macro-scale analysis of buildings' contribution to total EU energy consumption, related policies, user-behaviour impacts and vernacular architecture techniques; then looking at the meso scale of building energy performance during use, dynamic simulations of heat transfer and insights from a whole life cycle analysis of the energy involved in the construction and disposal phases; and finally, at the building-element micro-scale, describing local heat transfer and human thermal comfort measurements. The conclusions gather recommendations and further scenarios in which different stakeholders and techniques can play their part in a wiser and more sustainable use of energy, and a better built environment for us and those to come.

  16. Performance simulation and analysis of a CMOS/nano hybrid nanoprocessor system

    International Nuclear Information System (INIS)

    Cabe, Adam C; Das, Shamik

    2009-01-01

    This paper provides detailed simulation results and analysis of the prospective performance of hybrid CMOS/nanoelectronic processor systems based upon the field-programmable nanowire interconnect (FPNI) architecture. To evaluate this architecture, a complete design was developed for an FPNI implementation using 90 nm CMOS with 15 nm wide nanowire interconnects. Detailed simulations of this design illustrate that critical design choices and tradeoffs exist beyond those specified by the architecture. These include the selection of the types of junction nanodevices, as well as the implementation of low-level circuits. In particular, the simulation results presented here show that only nanodevices with an 'on/off' current ratio of 200 or more are suitable to produce correct system-level behaviour. Furthermore, the design of the CMOS logic gates in the FPNI system must be customized to accommodate the resistances of both 'on'-state and 'off'-state nanodevices. Using these customized designs together with models of suitable nanodevices, additional simulations demonstrate that, relative to conventional 90 nm CMOS FPGA systems, performance gains of up to 70% greater speed, or up to a ninefold reduction in energy consumption, can be obtained.

  17. Accurate performance analysis of opportunistic decode-and-forward relaying

    KAUST Repository

    Tourki, Kamel

    2011-07-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path may be considered unusable, and the destination may use a selection combining technique. We first derive the exact statistics of each hop, in terms of probability density function (PDF). Then, the PDFs are used to determine accurate closed-form expressions for the end-to-end outage probability for a transmission rate R. Furthermore, we carry out an asymptotic performance analysis and deduce the diversity order. Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures. © 2011 IEEE.
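
    As a point of reference for the kind of metric being derived, a generic outage-probability definition for a half-duplex decode-and-forward link with selection combining at the destination can be written as below. This is a textbook form given for orientation only; the paper's exact closed-form expressions depend on its specific channel statistics and are not reproduced here.

```latex
% Generic outage probability at target rate R for half-duplex relaying with
% selection combining; the factor 1/2 reflects the two transmission phases.
P_{\mathrm{out}}(R)
  = \Pr\left[\tfrac{1}{2}\log_{2}\bigl(1+\max(\gamma_{SD},\,\gamma_{SRD})\bigr) < R\right]
  = \Pr\left[\max(\gamma_{SD},\,\gamma_{SRD}) < 2^{2R}-1\right]
```

    Here γ_SD denotes the instantaneous SNR of the direct source-destination link and γ_SRD that of the relayed path (commonly taken as min(γ_SR, γ_RD) for decode-and-forward); the paper derives its exact expressions from the per-hop PDFs.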

  18. Optimal artificial neural network architecture selection for performance prediction of compact heat exchanger with the EBaLM-OTR technique

    Energy Technology Data Exchange (ETDEWEB)

    Wijayasekara, Dumidu, E-mail: wija2589@vandals.uidaho.edu [Department of Computer Science, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83402 (United States); Manic, Milos [Department of Computer Science, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83402 (United States); Sabharwall, Piyush [Idaho National Laboratory, Idaho Falls, ID (United States); Utgikar, Vivek [Department of Chemical Engineering, University of Idaho, Idaho Falls, ID 83402 (United States)

    2011-07-15

    Highlights: > Performance prediction of PCHE using artificial neural networks. > Evaluating artificial neural network performance for PCHE modeling. > Selection of over-training resilient artificial neural networks. > Artificial neural network architecture selection for modeling problems with small data sets. - Abstract: Artificial Neural Networks (ANN) have been used in the past to predict the performance of printed circuit heat exchangers (PCHE) with satisfactory accuracy. Typically published literature has focused on optimizing ANN using a training dataset to train the network and a testing dataset to evaluate it. Although this may produce outputs that agree with experimental results, there is a risk of over-training or over-learning the network rather than generalizing it, which should be the ultimate goal. An over-trained network is able to produce good results with the training dataset but fails when new datasets with subtle changes are introduced. In this paper we present EBaLM-OTR (error back propagation and Levenberg-Marquardt algorithms for over training resilience) technique, which is based on a previously discussed method of selecting neural network architecture that uses a separate validation set to evaluate different network architectures based on mean square error (MSE), and standard deviation of MSE. The method uses k-fold cross validation. Therefore in order to select the optimal architecture for the problem, the dataset is divided into three parts which are used to train, validate and test each network architecture. Then each architecture is evaluated according to their generalization capability and capability to conform to original data. The method proved to be a comprehensive tool in identifying the weaknesses and advantages of different network architectures. The method also highlighted the fact that the architecture with the lowest training error is not always the most generalized and therefore not the optimal. Using the method the testing
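
    The architecture-selection logic summarized in the abstract can be sketched generically: split the data into folds, train on some folds, validate on another, and rank candidate architectures by the mean and standard deviation of their validation MSE. Only the train/validate folding and the mean/std ranking are shown below; the candidate list, fold count and the trainAndValidate stub are placeholder assumptions, and the actual EBaLM-OTR training (error back-propagation and Levenberg-Marquardt) is not reproduced here.

```cpp
// Sketch of k-fold selection of a network architecture by mean/std of validation MSE.
// The training routine is a stub; candidate architectures and k are arbitrary assumptions.
#include <cmath>
#include <cstdio>
#include <vector>

struct Architecture { int hidden1, hidden2; };

// Placeholder: a real implementation would train on 'trainIdx' and return MSE on 'valIdx'.
double trainAndValidate(const Architecture& arch,
                        const std::vector<int>& trainIdx,
                        const std::vector<int>& valIdx) {
    // Hypothetical stand-in so the sketch runs end to end.
    return 0.01 * (arch.hidden1 + arch.hidden2) +
           0.001 * valIdx.size() / (trainIdx.size() + 1.0);
}

int main() {
    const int nSamples = 120, k = 5;                      // assumed data-set size and fold count
    std::vector<Architecture> candidates = {{4, 0}, {8, 4}, {16, 8}};

    for (const auto& arch : candidates) {
        std::vector<double> foldMse;
        for (int fold = 0; fold < k; ++fold) {
            std::vector<int> trainIdx, valIdx;            // assign each sample to a fold
            for (int i = 0; i < nSamples; ++i)
                ((i % k) == fold ? valIdx : trainIdx).push_back(i);
            foldMse.push_back(trainAndValidate(arch, trainIdx, valIdx));
        }
        double mean = 0.0, var = 0.0;
        for (double m : foldMse) mean += m;
        mean /= foldMse.size();
        for (double m : foldMse) var += (m - mean) * (m - mean);
        double stddev = std::sqrt(var / foldMse.size());
        std::printf("arch (%d,%d): mean MSE = %.4f, std = %.4f\n",
                    arch.hidden1, arch.hidden2, mean, stddev);
    }
}
```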

  19. Optimal artificial neural network architecture selection for performance prediction of compact heat exchanger with the EBaLM-OTR technique

    International Nuclear Information System (INIS)

    Wijayasekara, Dumidu; Manic, Milos; Sabharwall, Piyush; Utgikar, Vivek

    2011-01-01

    Highlights: → Performance prediction of PCHE using artificial neural networks. → Evaluating artificial neural network performance for PCHE modeling. → Selection of over-training resilient artificial neural networks. → Artificial neural network architecture selection for modeling problems with small data sets. - Abstract: Artificial Neural Networks (ANN) have been used in the past to predict the performance of printed circuit heat exchangers (PCHE) with satisfactory accuracy. Typically published literature has focused on optimizing ANN using a training dataset to train the network and a testing dataset to evaluate it. Although this may produce outputs that agree with experimental results, there is a risk of over-training or over-learning the network rather than generalizing it, which should be the ultimate goal. An over-trained network is able to produce good results with the training dataset but fails when new datasets with subtle changes are introduced. In this paper we present EBaLM-OTR (error back propagation and Levenberg-Marquardt algorithms for over training resilience) technique, which is based on a previously discussed method of selecting neural network architecture that uses a separate validation set to evaluate different network architectures based on mean square error (MSE), and standard deviation of MSE. The method uses k-fold cross validation. Therefore in order to select the optimal architecture for the problem, the dataset is divided into three parts which are used to train, validate and test each network architecture. Then each architecture is evaluated according to their generalization capability and capability to conform to original data. The method proved to be a comprehensive tool in identifying the weaknesses and advantages of different network architectures. The method also highlighted the fact that the architecture with the lowest training error is not always the most generalized and therefore not the optimal. Using the method the

  20. Performance monitoring and analysis of task-based OpenMP.

    Directory of Open Access Journals (Sweden)

    Yi Ding

    Full Text Available OpenMP, a typical shared memory programming paradigm, has been extensively applied in the high performance computing community due to the popularity of multicore architectures in recent years. The most significant feature of the OpenMP 3.0 specification is the introduction of the task constructs to express parallelism at a much finer level of detail. This feature, however, has posed new challenges for performance monitoring and analysis. In particular, task creation is separated from its execution, causing traditional monitoring methods to be ineffective. This paper presents a mechanism to monitor task-based OpenMP programs with interposition and proposes two demonstration graphs for performance analysis as well. The results of two experiments are discussed to evaluate the overhead of the monitoring mechanism and to verify the effectiveness of the demonstration graphs using the BOTS benchmarks.
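
    The separation of task creation from task execution that complicates monitoring can be seen directly in a small OpenMP program. The sketch below simply timestamps each task at creation and at execution start; it is an illustration of the phenomenon, not the interposition-based mechanism or the demonstration graphs proposed in the paper.

```cpp
// Sketch: OpenMP tasks are created by one thread and may execute later on another.
// Compile with e.g.  g++ -fopenmp tasks.cpp
#include <cstdio>
#include <omp.h>

int main() {
    const int nTasks = 8;                      // arbitrary number of tasks for the demo
    #pragma omp parallel
    {
        #pragma omp single                     // one thread creates all the tasks
        for (int i = 0; i < nTasks; ++i) {
            double created = omp_get_wtime();
            int    creator = omp_get_thread_num();
            #pragma omp task firstprivate(i, created, creator)
            {
                double started = omp_get_wtime();
                std::printf("task %d: created by thread %d, executed by thread %d, "
                            "delay %.6f s\n",
                            i, creator, omp_get_thread_num(), started - created);
            }
        }
        // implicit barrier at the end of the parallel region waits for all tasks
    }
    return 0;
}
```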

  1. Architectural Physics: Lighting.

    Science.gov (United States)

    Hopkinson, R. G.

    The author coordinates the many diverse branches of knowledge which have dealt with the field of lighting--physiology, psychology, engineering, physics, and architectural design. Part I, "The Elements of Architectural Physics", discusses the physiological aspects of lighting, visual performance, lighting design, calculations and measurements of…

  2. AN ARCHITECTURAL ANALYSIS: THE MUSEUM OF CONTEMPORARY ART, TEHRAN, IRAN

    Directory of Open Access Journals (Sweden)

    Kambiz Navai

    2010-03-01

    Full Text Available Kamran Tabatabai Diba is an Iranian architect whose works of the '60s and '70s are well known among architects and scholars. His works are mostly considered examples of the Modern Style, scented by Iranian architecture. His efforts to create public socio-cultural centers in Iran were a result of his concern for social matters, as well as of his search for a national, contemporary architecture. Tehran's Museum of Contemporary Art is one of Diba's most popular and well-known works. In this article an effort has been made to reach a better understanding of this remarkable piece of work, and to shed light on the architect's intentions and the architectural methods he used to express them. The critique concentrates on the two aspects of Diba's work mentioned above: “integrating the Modern Style and traditional Iranian architecture”, and “creating socio-cultural centers and institutions well related to society.” The analysis is based on the most important features of every work of architecture: “space” and “form”. The author seeks the meaning by “watching” the whole complex carefully, “giving descriptive information” about it, and in the meantime “analyzing the data” with the help of “basic design methods” together with knowledge of the “Modern Style”, the “characteristics of the Late Modern Movement” and “traditional Iranian architecture.” The text is accompanied by drawings and figures that help the reader get to know the complex better. An effort is made to use simple language, understandable not only by architects and scholars, but by every other interested non-specialist reader.

  3. High performance 3D neutron transport on peta scale and hybrid architectures within APOLLO3 code

    International Nuclear Information System (INIS)

    Jamelot, E.; Dubois, J.; Lautard, J-J.; Calvin, C.; Baudron, A-M.

    2011-01-01

    APOLLO3 code is a common project of CEA, AREVA and EDF for the development of a new generation system for core physics analysis. We present here the parallelization of two deterministic transport solvers of APOLLO3: MINOS, a simplified 3D transport solver on structured Cartesian and hexagonal grids, and MINARET, a transport solver based on triangular meshes in 2D and prismatic ones in 3D. We used two different techniques to accelerate MINOS: a domain decomposition method, combined with an accelerated algorithm using GPUs. The domain decomposition is based on the Schwarz iterative algorithm, with Robin boundary conditions to exchange information. The Robin parameters influence the convergence and we detail how we optimized the choice of these parameters. The MINARET parallelization is based on the calculation of angular directions using explicit message passing. Fine-grain parallelization is also available for each angular direction using shared-memory multithreaded acceleration. Many performance results are presented on massively parallel architectures using more than 10³ cores and on hybrid architectures using some tens of GPUs. This work contributes to the HPC development in reactor physics at the CEA Nuclear Energy Division. (author)

  4. A new method for performance evaluation of enterprise architecture using stereotypes

    Directory of Open Access Journals (Sweden)

    Samaneh Khamseh

    2013-11-01

    Full Text Available These days, many organizations are extremely complex systems of processes, organizational units, individuals, and supporting information technology, with complex relationships among their various elements. In such organizations, poor architecture reduces efficiency and flexibility. Enterprise architecture, by fully describing the functions of information technology in the organization, attempts to reduce this complexity and provide efficient means of reaching organizational objectives, and it allows the conditions for achieving organizational goals to be assessed more effectively. Evaluating an enterprise architecture requires an executable model, which is created from the static architectural views and the documents that describe them; the enterprise architecture products are therefore needed as inputs for building such a model. In this paper, an object-oriented approach is used to produce the enterprise architecture, and we present an algorithm that uses stereotypes while taking reliability assessment into account. The approach taken in this algorithm is to improve reliability by placing additional components in parallel and using redundancy techniques, while keeping the number of added components to a minimum. Furthermore, we apply the proposed algorithm to a case study and compare the results with previous algorithms.
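
    The redundancy idea referred to above rests on the standard reliability relation for components placed in parallel, given below for orientation; the paper's own stereotype-based evaluation procedure is not reproduced here.

```latex
% Reliability of a block of n redundant components in parallel (the block fails
% only if every component fails), assuming independent component reliabilities R_i:
R_{\text{parallel}} = 1 - \prod_{i=1}^{n} (1 - R_i)
```

    For example, duplicating a component with R = 0.9 raises the reliability of that branch to 1 - 0.1 × 0.1 = 0.99, at the cost of one extra component, which is the kind of trade-off the minimum-component consideration in the abstract refers to.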

  5. METRIC context unit architecture

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, R.O.

    1988-01-01

    METRIC is an architecture for a simple but powerful Reduced Instruction Set Computer (RISC). Its speed comes from the simultaneous processing of several instruction streams, with instructions from the various streams being dispatched into METRIC's execution pipeline as they become available for execution. The pipeline is thus kept full, with a mix of instructions for several contexts in execution at the same time. True parallel programming is supported within a single execution unit, the METRIC Context Unit. METRIC's architecture provides for expansion through the addition of multiple Context Units and of specialized Functional Units. The architecture thus spans a range of size and performance from a single-chip microcomputer up through large and powerful multiprocessors. This research concentrates on the specification of the METRIC Context Unit at the architectural level. Performance tradeoffs made during METRIC's design are discussed, and projections of METRIC's performance are made based on simulation studies.

  6. A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    Science.gov (United States)

    Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2016-05-01

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
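
    The two performance ideas named in the abstract, dynamic assignment of independent MOC tracks to threads for load balancing and a wide vectorizable inner loop, can be sketched with OpenMP as below. The track/segment data layout, the placeholder update and the group count are invented; the optimized proxy applications referred to in the paper are not reproduced.

```cpp
// Sketch: dynamic scheduling over independent tracks with a vectorizable
// inner loop over energy groups (placeholder physics, not real MOC kernels).
// Compile with e.g.  g++ -fopenmp -O2 moc_sketch.cpp
#include <cstdio>
#include <vector>

constexpr int kGroups = 64;                  // assumed number of energy groups

struct Segment { float length; int region; };
struct Track   { std::vector<Segment> segments; std::vector<float> exitFlux; };

void sweep(std::vector<Track>& tracks,
           const std::vector<std::vector<float>>& sigmaT)     // [region][group]
{
    // Dynamic schedule: tracks differ in length, so threads pick them up as they
    // finish, which keeps the load balanced (the task-based idea in the abstract).
    #pragma omp parallel for schedule(dynamic)
    for (int t = 0; t < static_cast<int>(tracks.size()); ++t) {
        float angular[kGroups] = {};                           // per-track angular flux
        for (const Segment& s : tracks[t].segments) {
            const float* st = sigmaT[s.region].data();
            // Wide, unit-stride inner loop over energy groups: amenable to SIMD.
            #pragma omp simd
            for (int g = 0; g < kGroups; ++g)
                angular[g] += st[g] * s.length;                // placeholder attenuation update
        }
        tracks[t].exitFlux.assign(angular, angular + kGroups);
    }
}

int main() {
    std::vector<std::vector<float>> sigmaT(2, std::vector<float>(kGroups, 0.5f));
    std::vector<Track> tracks(4);
    for (auto& tr : tracks) tr.segments = {{1.0f, 0}, {2.0f, 1}};
    sweep(tracks, sigmaT);
    std::printf("group 0 exit flux of track 0: %f\n", tracks[0].exitFlux[0]);
}
```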

  7. Performance Analysis of Faulty Gallager-B Decoding of QC-LDPC Codes with Applications

    Directory of Open Access Journals (Sweden)

    O. Al Rasheed

    2014-06-01

    Full Text Available In this paper we evaluate the performance of the Gallager-B algorithm, used for decoding low-density parity-check (LDPC) codes, under unreliable message computation. Our analysis is restricted to LDPC codes constructed from circulant matrices (QC-LDPC codes). Using Monte Carlo simulation we investigate the effects of different code parameters on coding-system performance, under a binary symmetric communication channel and an independent transient-fault model. One possible application of the presented analysis, to designing memory architectures with unreliable components, is considered.
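
    For orientation, the sketch below implements a simplified hard-decision bit-flipping decoder over a tiny parity-check matrix: bits taking part in a strict majority of unsatisfied checks are flipped on each iteration. The full Gallager-B algorithm exchanges extrinsic messages along the edges of the Tanner graph and is not reproduced here; the example matrix, the flipping rule and the injected error position are arbitrary assumptions.

```cpp
// Simplified hard-decision bit-flipping decoding over GF(2). Illustrative only:
// the full Gallager-B algorithm passes extrinsic messages along Tanner-graph edges,
// and parallel bit flipping is only reliable for sparse (LDPC-like) matrices.
#include <cstdio>
#include <vector>

using Bits = std::vector<int>;
using Matrix = std::vector<Bits>;

bool decode(const Matrix& H, Bits& word, int maxIter = 20) {
    const int M = static_cast<int>(H.size());
    const int N = static_cast<int>(word.size());
    for (int it = 0; it < maxIter; ++it) {
        Bits unsat(M, 0);
        bool clean = true;
        for (int m = 0; m < M; ++m) {                 // syndrome: which checks are violated?
            int parity = 0;
            for (int n = 0; n < N; ++n) parity ^= (H[m][n] & word[n]);
            unsat[m] = parity;
            if (parity) clean = false;
        }
        if (clean) return true;
        Bits next = word;                             // flip bits involved in a strict
        for (int n = 0; n < N; ++n) {                 // majority of unsatisfied checks
            int bad = 0, total = 0;
            for (int m = 0; m < M; ++m)
                if (H[m][n]) { ++total; bad += unsat[m]; }
            if (total > 0 && 2 * bad > total) next[n] ^= 1;
        }
        word = next;
    }
    return false;
}

int main() {
    // Toy parity-check matrix (the (7,4) Hamming code) and a word with one flipped bit.
    Matrix H = {{1,1,0,1,1,0,0},
                {1,0,1,1,0,1,0},
                {0,1,1,1,0,0,1}};
    Bits word = {1,0,1,1,1,1,0};                      // codeword 1011010 with bit 4 flipped
    bool ok = decode(H, word);
    std::printf("decoded %s:", ok ? "successfully" : "unsuccessfully");
    for (int b : word) std::printf(" %d", b);
    std::printf("\n");
}
```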

  8. Performance assessment of distributed communication architectures in smart grid.

    OpenAIRE

    Jiang, Jing; Sun, Hongjian

    2016-01-01

    The huge amount of smart meters and growing frequent data readings have become a big challenge on data acquisition and processing in smart grid advanced metering infrastructure systems. This requires a distributed communication architecture in which multiple distributed meter data management systems (MDMSs) are deployed and meter data are processed locally. In this paper, we present the network model for supporting this distributed communication architecture and propos...

  9. Point Cloud Analysis for Conservation and Enhancement of Modernist Architecture

    Science.gov (United States)

    Balzani, M.; Maietti, F.; Mugayar Kühl, B.

    2017-02-01

    Documentation of cultural assets through improved acquisition processes for advanced 3D modelling is one of the main challenges to be faced in order to address, through digital representation, advanced analysis of the shape, appearance and conservation condition of cultural heritage. 3D modelling can open new avenues in the way tangible cultural heritage is studied, visualized, curated, displayed and monitored, improving key features such as the analysis and visualization of material degradation and state of conservation. Applied research focused on the analysis of surface specifications and material properties by means of a 3D laser scanner survey has been developed within the Digital Preservation project of the FAUUSP building, Faculdade de Arquitetura e Urbanismo da Universidade de São Paulo, Brazil. The integrated 3D survey has been performed by the DIAPReM Center of the Department of Architecture of the University of Ferrara in cooperation with the FAUUSP. The 3D survey has allowed the realization of a point cloud model of the external surfaces, as the basis to investigate in detail the formal characteristics, geometric textures and surface features. The digital geometric model was also the basis for processing the intensity values acquired by the laser scanning instrument; this method of analysis was an essential complement to the macroscopic investigations, making it possible to manage additional information related to surface characteristics displayable on the point cloud.

  10. Architecture and Stages of the Experience City

    DEFF Research Database (Denmark)

    This book presents more than 41 articles on ‘Architecture and Stages of the Experience City'. The aim of the book is to investigate current challenges related to architecture, art and city life in the ‘Experience City', and it presents cutting-edge knowledge and experiences within the following themes: Experience City Making; Digital Architecture; Stages in the Experience City; The City as a Learning Lab; Experience City Architecture; Performative Architecture; Art and Performance; Urban Catalyst and Temporary Use...

  11. A Study Effects Architectural Marketing Capabilities on Performance Marketing unit Based on: Morgan et al case: Past Industry in Tehran

    OpenAIRE

    Mohammad Reza Dalvi; Robabe Seifi

    2014-01-01

    Over a period of time, combinations of knowledge and skills develop into architectural marketing capabilities. These architectural marketing capabilities have been identified as one of the important ways firms can achieve a competitive advantage. The following research tests the effects of architectural marketing capabilities on the performance of the marketing unit. Based on a survey, a structural equation model was developed to test our hypotheses; the study develops a structural model linking arch...

  12. A computer architecture for the implementation of SDL

    Energy Technology Data Exchange (ETDEWEB)

    Crutcher, L A

    1989-01-01

    Finite State Machines (FSMs) are a part of well-established automata theory. The FSM model is useful in all stages of system design, from abstract specification to implementation in hardware. The FSM model has been studied as a technique in software design, and the implementation of this type of software considered. The Specification and Description Language (SDL) has been considered in detail as an example of this approach. The complexity of systems designed using SDL warrants their implementation through a programmed computer. A benchmark for the implementation of SDL has been established and the performance of SDL on three particular computer architectures investigated. Performance is judged according to this benchmark and also the ease of implementation, which is related to the confidence of a correct implementation. The implementation on 68000s and transputers is considered as representative of established and state-of-the-art microprocessors respectively. A third architecture that uses a processor that has been proposed specifically for the implementation of SDL is considered as a high-level custom architecture. Analysis and measurements of the benchmark on each architecture indicates that the execution time of SDL decreases by an order of magnitude from the 68000 to the transputer to the custom architecture. The ease of implementation is also greater when the execution time is reduced. A study of some real applications of SDL indicates that the benchmark figures are reflected in user-oriented measures of performance such as data throughput and response time. A high-level architecture such as the one proposed here for SDL can provide benefits in terms of execution time and correctness.
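
    Since the work builds on the FSM model underlying SDL, a minimal state machine is sketched below: explicit states, an input signal and a transition function. The states and signals are invented for illustration and do not correspond to any SDL process from the paper; real SDL processes additionally have input queues, timers and so on.

```cpp
// Minimal finite state machine sketch: explicit states, signals and a transition function.
// States and signals are invented placeholders.
#include <cstdio>

enum class State  { Idle, Connecting, Connected };
enum class Signal { ConnectRequest, ConnectConfirm, Disconnect };

State transition(State s, Signal in) {
    switch (s) {
        case State::Idle:
            return (in == Signal::ConnectRequest) ? State::Connecting : State::Idle;
        case State::Connecting:
            if (in == Signal::ConnectConfirm) return State::Connected;
            if (in == Signal::Disconnect)     return State::Idle;
            return State::Connecting;
        case State::Connected:
            return (in == Signal::Disconnect) ? State::Idle : State::Connected;
    }
    return s;   // unreachable; keeps compilers happy
}

int main() {
    State s = State::Idle;
    const Signal trace[] = {Signal::ConnectRequest, Signal::ConnectConfirm, Signal::Disconnect};
    for (Signal in : trace) {
        s = transition(s, in);
        std::printf("state is now %d\n", static_cast<int>(s));
    }
}
```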

  13. Predicting the academic success of architecture students by pre-enrolment requirement: using machine-learning techniques

    Directory of Open Access Journals (Sweden)

    Ralph Olusola Aluko

    2016-12-01

    Full Text Available In recent years, there has been an increase in the number of applicants seeking admission into architecture programmes. As expected, prior academic performance (also referred to as the pre-enrolment requirement) is a major factor considered during the process of selecting applicants. In the present study, machine learning models were used to predict academic success of architecture students based on information provided in prior academic performance. Two modeling techniques, namely K-nearest neighbour (k-NN) and linear discriminant analysis, were applied in the study. It was found that K-nearest neighbour (k-NN) outperforms the linear discriminant analysis model in terms of accuracy. In addition, grades obtained in mathematics (at ordinary level examinations) had a significant impact on the academic success of undergraduate architecture students. This paper makes a modest contribution to the ongoing discussion on the relationship between prior academic performance and academic success of undergraduate students by evaluating this proposition. One of the issues that emerges from these findings is that prior academic performance can be used as a predictor of academic success in undergraduate architecture programmes. Overall, the developed k-NN model can serve as a valuable tool during the process of selecting new intakes into undergraduate architecture programmes in Nigeria.
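
    As a reminder of how the better-performing technique works, the sketch below classifies a new applicant by a majority vote among the k nearest training examples in feature space. The features (two normalized prior grades), the tiny training set and the value of k are invented placeholders; the study's actual data set and preprocessing are not reproduced here.

```cpp
// Minimal k-nearest-neighbour classifier (Euclidean distance, majority vote).
// Features, labels and k are illustrative placeholders only.
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdio>
#include <vector>

struct Sample {
    std::array<double, 2> features;   // e.g. two normalized prior-grade scores (assumed)
    int label;                        // 1 = "successful", 0 = "not successful"
};

int knnPredict(const std::vector<Sample>& training,
               const std::array<double, 2>& query, int k) {
    std::vector<std::pair<double, int>> distLabel;            // (distance, label)
    for (const Sample& s : training) {
        double dx = s.features[0] - query[0];
        double dy = s.features[1] - query[1];
        distLabel.push_back({std::sqrt(dx * dx + dy * dy), s.label});
    }
    std::partial_sort(distLabel.begin(), distLabel.begin() + k, distLabel.end());
    int votes = 0;
    for (int i = 0; i < k; ++i) votes += distLabel[i].second;
    return (2 * votes > k) ? 1 : 0;                           // majority vote
}

int main() {
    std::vector<Sample> training = {
        {{0.9, 0.8}, 1}, {{0.8, 0.9}, 1}, {{0.7, 0.6}, 1},
        {{0.4, 0.5}, 0}, {{0.3, 0.2}, 0}, {{0.5, 0.4}, 0},
    };
    std::array<double, 2> applicant = {0.75, 0.7};            // hypothetical new applicant
    std::printf("predicted class: %d\n", knnPredict(training, applicant, 3));
}
```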

  14. Information architecture: study and analysis of the Public Medical (PubMed) database

    Directory of Open Access Journals (Sweden)

    Odete Máyra Mesquita Sales

    2016-07-01

    Full Text Available Objective. Based on the principles proposed by Rosenfeld and Morville (2006), the present study examined the PubMed database interface, since a well-structured information architecture contributes to good usability in any digital environment. Method. The research was developed through a literature review and an empirical study analysing the information architecture in terms of the organization, navigation, labeling and search systems recommended by Rosenfeld and Morville (2006), with a view to the usability of PubMed. For a better understanding and description of these principles, the technique of content analysis was used. Results. The results showed that the database interface meets the criteria established by the elements of information architecture, such as organization based on a hypertext structure, a horizontal menu and local content divided into categories, identification of active links, global navigation, breadcrumbs, textual and iconographic labeling, and a prominent search engine. Conclusions. This research showed that the PubMed database interface is well structured, friendly and objective, with numerous possibilities for search and information retrieval. However, there is a need to adopt accessibility standards on this website, so that it fulfils more efficiently its purpose of facilitating access to the information organized and stored in the PubMed database.

  15. Performance of Modular Prefabricated Architecture: Case Study-Based Review and Future Pathways

    Directory of Open Access Journals (Sweden)

    Fred Edmond Boafo

    2016-06-01

    Full Text Available Even though tightened building energy efficiency standards are implemented periodically in many countries, existing buildings continually consume a significant share of total primary energy. Energy efficiency solutions range from material components to bulk systems. A technique of building construction, referred to as prefabricated architecture (prefab), is growing in popularity. Prefab encompasses the offsite fabrication of building components to a greater degree of finish as bulk building structures and systems, and their assembly on-site. In this context, prefab improves the speed of construction, quality of architecture, efficiency of materials, and worker safety, while limiting environmental impacts of construction, as compared to conventional site-built construction practices. Quite recently, a 57-story skyscraper was built in 19 days using prefabricated modules. From the building physics point of view, the bulk systems and tighter integration method of prefab minimize thermal bridges. This study seeks to clearly characterize the levels of prefab and to investigate the performance of modular prefab, considering acoustic constraints, seismic resistance, thermal behavior, energy consumption, and life cycle analysis of existing prefab cases, and thus provides a dynamic case study-based review. Generally, prefab can be categorized into components, panels (2D), modules (3D), hybrids, and unitized whole buildings. On average, greenhouse gas emissions from conventional construction were higher than for modular construction, not discounting some individual discrepancies. Few studies have focused on monitored data on prefab and occupants’ comfort, but additional studies are required to understand the public’s perception of the technology. The scope of the work examined will be of interest to building engineers, manufacturers, and energy experts, as well as serve as a foundational reference for future study.

  16. Blaze-DEMGPU: Modular high performance DEM framework for the GPU architecture

    Directory of Open Access Journals (Sweden)

    Nicolin Govender

    2016-01-01

    Full Text Available Blaze-DEMGPU is a modular GPU-based discrete element method (DEM) framework that supports polyhedral-shaped particles. The high performance is attributed to the lightweight, Single Instruction Multiple Data (SIMD) execution model that the GPU architecture offers. Blaze-DEMGPU offers suitable algorithms to conduct DEM simulations on the GPU, and these algorithms can be extended and modified. Since a large number of scientific simulations are particle based, many of the algorithms and strategies for GPU implementation present in Blaze-DEMGPU can be applied to other fields. Blaze-DEMGPU will make it easier for new researchers to use high performance GPU computing as well as stimulate wider GPU research efforts by the DEM community.

  17. Semivariogram Analysis of Bone Images Implemented on FPGA Architectures.

    Science.gov (United States)

    Shirvaikar, Mukul; Lagadapati, Yamuna; Dong, Xuanliang

    2017-03-01

    Osteoporotic fractures are a major concern for the healthcare of elderly and female populations. Early diagnosis of patients with a high risk of osteoporotic fractures can be enhanced by introducing second-order statistical analysis of bone image data using techniques such as variogram analysis. Such analysis is computationally intensive, thereby creating an impediment for introduction into imaging machines found in common clinical settings. This paper investigates the fast implementation of the semivariogram algorithm, which has been proven to be effective in modeling bone strength, and should be of interest to readers in the areas of computer-aided diagnosis and quantitative image analysis. The semivariogram is a statistical measure of the spatial distribution of data, and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real-time implementation of the algorithm. A semi-variance, γ(h), is defined as half of the expected squared difference of pixel values between any two data locations with a lag distance of h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates measured in a few hundreds of megahertz. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current
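
    The quantity being accelerated is straightforward to state in software. The sketch below computes γ(h) for a purely horizontal integer lag over a small grey-level image held in a plain array; the image values and the restriction to horizontal lags are illustrative assumptions, and the FPGA architectures themselves are of course not reproduced.

```cpp
// Semivariance gamma(h) for a horizontal lag h over a 2D image:
// gamma(h) = 1/(2*N(h)) * sum over pairs of (z(x,y) - z(x+h,y))^2.
// Image contents and the horizontal-lag restriction are illustrative only.
#include <cstdio>
#include <vector>

double semivariance(const std::vector<std::vector<double>>& img, int h) {
    double sum   = 0.0;
    long   pairs = 0;
    for (std::size_t y = 0; y < img.size(); ++y) {
        for (std::size_t x = 0; x + h < img[y].size(); ++x) {
            double d = img[y][x] - img[y][x + h];
            sum += d * d;
            ++pairs;
        }
    }
    return pairs ? sum / (2.0 * pairs) : 0.0;
}

int main() {
    // Tiny grey-level "image" (rows of pixel intensities), purely for demonstration.
    std::vector<std::vector<double>> img = {
        {10, 12, 15, 14, 11},
        {11, 13, 16, 15, 12},
        {10, 11, 14, 13, 10},
    };
    for (int h = 1; h <= 3; ++h)
        std::printf("gamma(%d) = %.3f\n", h, semivariance(img, h));
}
```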

  18. Energy and architecture — An overview

    Directory of Open Access Journals (Sweden)

    Sonetti G.

    2013-06-01

    Full Text Available This paper aims to provide a short overview of the complex aspects of, and growing concern about, energy in architecture by gradually zooming into it: starting from a macro-scale analysis of buildings' contribution to total EU energy consumption, related policies, user-behaviour impacts and vernacular architecture techniques; then looking at the meso scale of building energy performance during use, dynamic simulations of heat transfer and insights from a whole life cycle analysis of the energy involved in the construction and disposal phases; and finally, at the building-element micro-scale, describing local heat transfer and human thermal comfort measurements. The conclusions gather recommendations and further scenarios in which different stakeholders and techniques can play their part in a wiser and more sustainable use of energy, and a better built environment for us and those to come.

  19. Motion/imagery secure cloud enterprise architecture analysis

    Science.gov (United States)

    DeLay, John L.

    2012-06-01

    Cloud computing with storage virtualization and new service-oriented architectures brings a new perspective to a distributed motion imagery and persistent surveillance enterprise. Our existing research is focused mainly on content management, distributed analytics, and WAN distributed cloud networking performance issues of cloud-based technologies. The potential of leveraging cloud-based technologies for hosting motion imagery, imagery and analytics workflows for DOD and security applications is relatively unexplored. This paper will examine technologies for managing, storing, processing and disseminating motion imagery and imagery within a distributed network environment. Finally, we propose areas for future research in the area of distributed cloud content management enterprises.

  20. Analysis of central enterprise architecture elements in models of six eHealth projects.

    Science.gov (United States)

    Virkanen, Hannu; Mykkänen, Juha

    2014-01-01

    Large-scale initiatives for eHealth services have been established in many countries on regional or national level. The use of Enterprise Architecture has been suggested as a methodology to govern and support the initiation, specification and implementation of large-scale initiatives including the governance of business changes as well as information technology. This study reports an analysis of six health IT projects in relation to Enterprise Architecture elements, focusing on central EA elements and viewpoints in different projects.

  1. Exploring the architectural trade space of NASA's Space Communication and Navigation Program

    Science.gov (United States)

    Sanchez, M.; Selva, D.; Cameron, B.; Crawley, E.; Seas, A.; Seery, B.

    NASA's Space Communication and Navigation (SCaN) Program is responsible for providing communication and navigation services to space missions and other users in and beyond low Earth orbit. The current SCaN architecture consists of three independent networks: the Space Network (SN), which contains the TDRS relay satellites in GEO; the Near Earth Network (NEN), which consists of several NASA-owned and commercially operated ground stations; and the Deep Space Network (DSN), with three ground stations in Goldstone, Madrid, and Canberra. The first task of this study is the stakeholder analysis. The goal of the stakeholder analysis is to identify the main stakeholders of the SCaN system and their needs. Twenty-one main groups of stakeholders have been identified and put on a stakeholder map. Their needs are currently being elicited by means of interviews and an extensive literature review. The data will then be analyzed by applying Cameron and Crawley's stakeholder analysis theory, with a view to highlighting dominant needs and conflicting needs. The second task of this study is the architectural tradespace exploration of the next generation TDRSS. The space of possible architectures for SCaN is represented by a set of architectural decisions, each of which has a discrete set of options. A computational tool is used to automatically synthesize a very large number of possible architectures by enumerating different combinations of decisions and options. The same tool contains models to evaluate the architectures in terms of performance and cost. The performance model uses the stakeholder needs and requirements identified in the previous steps as inputs, and it is based on the VASSAR methodology presented in a companion paper. This paper summarizes the current status of the MIT SCaN architecture study. It starts by motivating the need to perform tradespace exploration studies in the context of relay data systems through a description of the history of NASA's space communicati

  2. Photoperiodic envelope: application of generative design based on the performance of architectural envelopes, exploring its shape and performance optimization

    International Nuclear Information System (INIS)

    Viquez Alas, Ernesto Alonso

    2013-01-01

    An alternative design method for creating an architectural envelope is demonstrated through the application of tools and techniques such as algorithms, optimization, parametrization and simulation. The aesthetic criteria of the form are enriched while achieving a decrease in solar radiation rates. The methods and techniques of optimization, simulation, analysis and synthesis are adopted through the study of the contemporary paradigms of generative design and performance-based design. Some of the potential benefits of applying an alternative design method, and the conditions to be met to facilitate its application to the design of envelopes, are outlined. An application and testing study is presented to explore the topology of the envelope. The optimization results in relation to reducing solar incidence are examined in a simulated environment [es

  3. Organizational capabilities and bottom line performance : The relationship between organizational architecture and strategic performance of business units in Dutch headquartered multinationals

    NARCIS (Netherlands)

    Eikelenboom, B.L.

    2005-01-01

    This study addresses a key question in business: do organizational capabilities relate to bottom line performance? It is a hard struggle to assess intangible, organizational capabilities, but due to web-based technology, serious advances have been made to measure organizational architecture, a

  4. Clock generators for SOC processors circuits and architectures

    CERN Document Server

    Fahim, Amr

    2004-01-01

    This book explores the design of fully-integrated frequency synthesizers suitable for system-on-a-chip (SOC) processors. The text takes a more global design perspective in jointly examining the design space at the circuit level as well as at the architectural level. The comprehensive coverage includes summary chapters on circuit theory as well as feedback control theory relevant to the operation of phase locked loops (PLLs). On the circuit level, the discussion includes low-voltage analog design in deep submicron digital CMOS processes, effects of supply noise, substrate noise, as well device noise. On the architectural level, the discussion includes PLL analysis using continuous-time as well as discrete-time models, linear and nonlinear effects of PLL performance, and detailed analysis of locking behavior. The book provides numerous real world applications, as well as practical rules-of-thumb for modern designers to use at the system, architectural, as well as the circuit level.

  5. Fractal analysis of bone architecture at distal radius

    International Nuclear Information System (INIS)

    Tomomitsu, Tatsushi; Mimura, Hiroaki; Murase, Kenya; Sone, Teruki; Fukunaga, Masao

    2005-01-01

    Bone strength depends on bone quality (architecture, turnover, damage accumulation, and mineralization) as well as bone mass. In this study, human bone architecture was analyzed using fractal image analysis, and the clinical relevance of this method was evaluated. The subjects were 12 healthy female controls and 16 female patients suspected of having osteoporosis (age range, 22-70 years; mean age, 49.1 years). High-resolution CT images of the distal radius were acquired and analyzed using a peripheral quantitative computed tomography (pQCT) system. On the same day, bone mineral densities of the lumbar spine (L-BMD), proximal femur (F-BMD), and distal radius (R-BMD) were measured by dual-energy X-ray absorptiometry (DXA). We examined the correlation between the fractal dimension and six bone mass indices. Subjects diagnosed with osteopenia or osteoporosis were divided into two groups (with and without vertebral fracture), and we compared measured values between these two groups. The fractal dimension correlated most closely with L-BMD (r=0.744). The coefficient of correlation between the fractal dimension and L-BMD was very similar to the coefficient of correlation between L-BMD and F-BMD (r=0.783) and the coefficient of correlation between L-BMD and R-BMD (r=0.742). The fractal dimension was the only measured value that differed significantly between both the osteopenic and the osteoporotic subjects with and without vertebral fracture. The present results suggest that the fractal dimension of the distal radius can be reliably used as a bone strength index that reflects bone architecture as well as bone mass. (author)
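
    The record does not define the fractal dimension it uses, so as orientation the sketch below shows one common estimator, the box-counting dimension of a binary (thresholded) image: occupied boxes are counted at several box sizes and the dimension is taken from the slope of log N(s) versus log(1/s). The toy image and the choice of box sizes are assumptions; the paper's pQCT processing pipeline is not reproduced.

```cpp
// Box-counting estimate of fractal dimension for a binary 2D pattern.
// The test pattern and box sizes are illustrative; real use would start from a
// thresholded bone image.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <vector>

using Image = std::vector<std::vector<int>>;   // 1 = structure present, 0 = background

long countOccupiedBoxes(const Image& img, int box) {
    const int H = static_cast<int>(img.size());
    const int W = static_cast<int>(img[0].size());
    long count = 0;
    for (int y0 = 0; y0 < H; y0 += box)
        for (int x0 = 0; x0 < W; x0 += box) {
            bool occupied = false;
            for (int y = y0; y < std::min(y0 + box, H) && !occupied; ++y)
                for (int x = x0; x < std::min(x0 + box, W); ++x)
                    if (img[y][x]) { occupied = true; break; }
            count += occupied;
        }
    return count;
}

int main() {
    const int N = 64;
    Image img(N, std::vector<int>(N, 0));
    for (int y = 0; y < N; ++y)                 // toy pattern: a thin diagonal band
        for (int x = 0; x < N; ++x)
            if (std::abs(x - y) < 3) img[y][x] = 1;

    // Least-squares slope of log N(s) against log(1/s) over a few box sizes.
    std::vector<int> sizes = {1, 2, 4, 8, 16};
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int s : sizes) {
        double lx = std::log(1.0 / s);
        double ly = std::log(static_cast<double>(countOccupiedBoxes(img, s)));
        sx += lx; sy += ly; sxx += lx * lx; sxy += lx * ly;
    }
    const double n = static_cast<double>(sizes.size());
    double dimension = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    std::printf("estimated box-counting dimension: %.2f\n", dimension);
}
```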

  6. Non-Planar Nanotube and Wavy Architecture Based Ultra-High Performance Field Effect Transistors

    KAUST Repository

    Hanna, Amir

    2016-11-01

    This dissertation presents a unique concept for a device architecture named the nanotube (NT) architecture, which is capable of higher drive current compared to the Gate-All-Around Nanowire architecture when applied to heterostructure Tunnel Field Effect Transistors. Through the use of inner/outer core-shell gates, the heterostructure NT TFET leverages a physically larger tunneling area, thus achieving higher drive current (ION) and saving real estate by eliminating the arraying requirement. We discuss the physics of p-type (Silicon/Indium Arsenide) and n-type (Silicon/Germanium heterostructure) based TFETs. Numerical TCAD simulations have shown that NT TFETs have 5× and 1.6× higher normalized ION when compared to the GAA NW TFET for p- and n-type TFETs, respectively. This is due to the availability of a larger tunneling junction cross-sectional area and lower Shockley-Read-Hall recombination, while achieving sub-60 mV/dec performance for more than 5 orders of magnitude of drain current, thus enabling scaling down of Vdd to 0.5 V. This dissertation also introduces a novel thin-film-transistor architecture named the Wavy Channel (WC) architecture, which allows for extending device width by integrating vertical fin-like substrate corrugations, giving rise to up to 50% larger device width without occupying extra chip area. The novel architecture shows 2× higher output drive current per unit chip area when compared to the conventional planar architecture. The current increase is attributed to both the extra device width and a 50% enhancement in field-effect mobility due to electrostatic gating effects. Digital circuits are fabricated to demonstrate the potential of integrating WC TFT based circuits. WC inverters have shown 2× the peak-to-peak output voltage for the same input, and ~2× the operation frequency of the planar inverters for the same peak-to-peak output voltage. WC NAND circuits have shown 2× higher peak-to-peak output voltage, and 3× lower high-to-low propagation

  7. Transcriptomic Analysis Using Olive Varieties and Breeding Progenies Identifies Candidate Genes Involved in Plant Architecture.

    Science.gov (United States)

    González-Plaza, Juan J; Ortiz-Martín, Inmaculada; Muñoz-Mérida, Antonio; García-López, Carmen; Sánchez-Sevilla, José F; Luque, Francisco; Trelles, Oswaldo; Bejarano, Eduardo R; De La Rosa, Raúl; Valpuesta, Victoriano; Beuzón, Carmen R

    2016-01-01

    Plant architecture is a critical trait in fruit crops that can significantly influence yield, pruning, planting density and harvesting. Little is known about how plant architecture is genetically determined in olive, where most of the existing varieties are traditional, with an architecture poorly suited for modern growing and harvesting systems. In the present study, we have carried out microarray analysis of meristematic tissue to compare expression profiles of olive varieties displaying differences in architecture, as well as seedlings from their cross, pooled on the basis of their shared architecture-related phenotypes. The microarray used, previously developed by our group, has already been applied to identify candidate genes involved in regulating the juvenile to adult transition in the shoot apex of seedlings. Varieties with distinct architecture phenotypes and individuals from segregating progenies displaying opposite architecture features were used to link phenotype to expression. Here, we identify 2252 differentially expressed genes (DEGs) associated with differences in plant architecture. Microarray results were validated by quantitative RT-PCR carried out on genes with functional annotation likely related to plant architecture. Twelve of these genes were further analyzed in individual seedlings of the corresponding pool. We also examined Arabidopsis mutants in putative orthologs of these targeted candidate genes, finding altered architecture for most of them. This supports a functional conservation between species and potential biological relevance of the candidate genes identified. This study is the first to identify genes associated with plant architecture in olive, and the results obtained could be of great help in future programs aimed at selecting phenotypes adapted to modern cultivation practices in this species.

  8. Improving the Performance of CPU Architectures by Reducing the Operating System Overhead (Extended Version)

    Directory of Open Access Journals (Sweden)

    Zagan Ionel

    2016-07-01

    Full Text Available The predictable CPU architectures that run hard real-time tasks must be executed with isolation in order to provide a timing-analyzable execution for real-time systems. The major problems for real-time operating systems are determined by an excessive jitter, introduced mainly through task switching. This can alter deadline requirements, and, consequently, the predictability of hard real-time tasks. New requirements also arise for a real-time operating system used in mixed-criticality systems, when the executions of hard real-time applications require timing predictability. The present article discusses several solutions to improve the performance of CPU architectures and eventually overcome the Operating Systems overhead inconveniences. This paper focuses on the innovative CPU implementation named nMPRA-MT, designed for small real-time applications. This implementation uses the replication and remapping techniques for the program counter, general purpose registers and pipeline registers, enabling multiple threads to share a single pipeline assembly line. In order to increase predictability, the proposed architecture partially removes the hazard situation at the expense of larger execution latency per one instruction.

  9. Security Analysis of Dynamic SDN Architectures Based on Game Theory

    Directory of Open Access Journals (Sweden)

    Chao Qi

    2018-01-01

    Full Text Available Security evaluation of SDN architectures is of critical importance to develop robust systems and address attacks. Focused on a novel-proposed dynamic SDN framework, a game-theoretic model is presented to analyze its security performance. This model can represent several kinds of players’ information, simulate approximate attack scenarios, and quantitatively estimate systems’ reliability. And we explore several typical game instances defined by system’s capability, players’ objects, and strategies. Experimental results illustrate that the system’s detection capability is not a decisive element to security enhancement as introduction of dynamism and redundancy into SDN can significantly improve security gain and compensate for its detection weakness. Moreover, we observe a range of common strategic actions across environmental conditions. And analysis reveals diverse defense mechanisms adopted in dynamic systems have different effect on security improvement. Besides, the existence of equilibrium in particular situations further proves the novel structure’s feasibility, flexibility, and its persistent ability against long-term attacks.
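
    To make the game-theoretic framing concrete, the sketch below checks a small two-player attacker/defender matrix game for pure-strategy Nash equilibria by testing mutual best responses. The payoff numbers and strategy names are invented placeholders and are unrelated to the paper's model of the dynamic SDN framework.

```cpp
// Pure-strategy Nash equilibrium check for a small attacker/defender matrix game.
// Payoffs and strategies are invented placeholders for illustration.
#include <cstdio>

int main() {
    const char* defender[] = {"static config", "dynamic reconfig"};
    const char* attacker[] = {"probe", "exploit"};

    // payoff[d][a]: defPay goes to the defender, atkPay to the attacker (assumed numbers).
    const double defPay[2][2] = {{ 2, -3}, { 1,  0}};
    const double atkPay[2][2] = {{-1,  4}, {-2, -1}};

    for (int d = 0; d < 2; ++d) {
        for (int a = 0; a < 2; ++a) {
            bool defBest = true, atkBest = true;
            for (int d2 = 0; d2 < 2; ++d2)            // can the defender gain by deviating?
                if (defPay[d2][a] > defPay[d][a]) defBest = false;
            for (int a2 = 0; a2 < 2; ++a2)            // can the attacker gain by deviating?
                if (atkPay[d][a2] > atkPay[d][a]) atkBest = false;
            if (defBest && atkBest)
                std::printf("pure Nash equilibrium: defender '%s', attacker '%s'\n",
                            defender[d], attacker[a]);
        }
    }
    return 0;
}
```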

  10. Crosstalk performance of integrated optical cross-connects

    NARCIS (Netherlands)

    Herben, C.G.P.; Leijtens, X.J.M.; Maat, D.H.P.; Blok, H.; Smit, M.K.

    1999-01-01

    Crosstalk performance of monolithically integrated multiwavelength optical cross-connects (OXC's) depends strongly on their architecture. In this paper, a semiquantitative analysis of crosstalk in 11 different architectures is presented. Two architectures are analyzed numerically in more detail and

  11. Fractionated Spacecraft Architectures Seeding Study

    National Research Council Canada - National Science Library

    Mathieu, Charlotte; Weigel, Annalisa

    2006-01-01

    .... Models were developed from a customer-centric perspective to assess different fractionated spacecraft architectures relative to traditional spacecraft architectures using multi-attribute analysis...

  12. Exploration Space Suit Architecture: Destination Environmental-Based Technology Development

    Science.gov (United States)

    Hill, Terry R.

    2010-01-01

    This paper picks up where EVA Space Suit Architecture: Low Earth Orbit Vs. Moon Vs. Mars (Hill, Johnson, IEEEAC paper #1209) left off in the development of a space suit architecture that is modular in design and interfaces and could be reconfigured to meet the mission, or during any given mission, depending on the tasks or destination. This paper will walk through the continued development of a space suit system architecture and how it should evolve to meet the future exploration EVA needs of the United States space program. In looking forward to future US space exploration and determining how the work performed to date in the CxP would map to a future space suit architecture with maximum re-use of technology and functionality, a series of thought exercises and analyses has provided a strong indication that the CxP space suit architecture is well postured to provide a viable solution for future exploration missions. Through the destination environmental analysis presented in this paper, the modular architecture approach provides the lowest mass and lowest mission cost for the protection of the crew for any human mission outside of low Earth orbit. Some of the studies presented here provide a look at, and a validation of, the non-environmental design drivers that will become ever more important the further away from Earth humans venture and the longer they are away. Additionally, the analysis demonstrates a logical clustering of design environments that allows a very focused approach to technology prioritization, development and design that will maximize the return on investment independent of any particular program and provide architecture and design solutions for space suit systems in time or ahead of being required for any particular manned flight program in the future. Regarding the new approach to space suit design and interface definition, the discussion will show how the architecture is very adaptable to programmatic and funding changes with

  13. Requirement analysis and architecture of data communication system for integral reactor

    International Nuclear Information System (INIS)

    Jeong, K. I.; Kwon, H. J.; Park, J. H.; Park, H. Y.; Koo, I. S.

    2005-05-01

    When digitalizing the Instrumentation and Control (I and C) systems in Nuclear Power Plants (NPPs), a communication network is required for exchanging the digitalized data between I and C equipment in an NPP. A requirements analysis and an analysis of design elements and techniques are required for the design of a communication network. Through the requirements analysis of code and regulation documents such as NUREG/CR-6082, section 7.9 of NUREG-0800, IEEE Standard 7-4.3.2 and IEEE Standard 603, the extracted requirements can be used as a design basis and design concept for a detailed design of a communication network in the I and C system of an integral reactor. Design elements and techniques such as the physical topology, protocol, transmission media and interconnection devices should be considered for designing a communication network. Each design element and technique should be analyzed and evaluated as a portion of the integrated communication network design. In this report, the basic design requirements related to the design of the communication network are investigated by using the code and regulation documents, and an analysis of the design elements and techniques is performed. Based on this investigation and analysis, the overall architecture, including the safety communication network and the non-safety communication network, is proposed for an integral reactor.

  14. Performance Analysis of Multiradio Transmitter with Polar or Cartesian Architectures Associated with High Efficiency Switched-Mode Power Amplifiers (invited paper)

    Directory of Open Access Journals (Sweden)

    F. Robert

    2010-12-01

    Full Text Available This paper deals with wireless multi-radio transmitter architectures operating in the frequency band of 800 MHz – 6 GHz. As a consequence of the constant evolution of communication systems, mobile transmitters must be able to operate in different frequency bands and modes according to existing standards specifications. The concept of a unique multiradio architecture is an evolution of the multistandard transceiver, which is characterized by a parallelization of circuits for each standard. The multiradio concept optimizes circuit area and power consumption. Transmitter architectures using sampling techniques and baseband ΣΔ or PWM coding of signals before their amplification appear as good candidates for multiradio transmitters for several reasons. They allow the use of high-efficiency power amplifiers such as switched-mode PAs. They are highly flexible and easy to integrate because of their digital nature. But when the transmitter efficiency is considered, many elements have to be taken into account: signal coding efficiency, PA efficiency, and the RF filter. This paper investigates the interest of these architectures for a multiradio transmitter able to support existing wireless communications standards between 800 MHz and 6 GHz. It evaluates and compares the different possible architectures for the WiMAX and LTE standards in terms of signal quality and transmitter power efficiency.

  15. Matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    Science.gov (United States)

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2013-11-05

    Mechanisms for performing matrix multiplication operations with data pre-conditioning in a high performance computing architecture are provided. A vector load operation is performed to load a first vector operand of the matrix multiplication operation into a first target vector register. A load-and-splat operation is performed to load an element of a second vector operand and replicate the element to each of a plurality of elements of a second target vector register. A multiply-add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the matrix multiplication operation. The partial product of the matrix multiplication operation is accumulated with other partial products of the matrix multiplication operation.
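    The vector-load / load-and-splat / multiply-add sequence described above can be mimicked in plain NumPy. The sketch below is illustrative only: register widths, the pre-conditioning step and the actual vector instructions are abstracted away, and all names are hypothetical; it simply shows how replicating (splatting) a single matrix element against a vector-loaded row accumulates partial products of C = A·B.

```python
import numpy as np

def matmul_load_splat(A, B):
    """Emulate the vector-load / load-and-splat / multiply-add pattern.

    For each output row, a row of B is treated as the vector-loaded operand,
    a single element of A is splat (replicated) across a virtual register,
    and a multiply-add accumulates the partial products.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n))
    for i in range(m):
        for p in range(k):
            a_splat = np.full(n, A[i, p])      # load-and-splat: one element replicated
            C[i, :] += a_splat * B[p, :]       # multiply-add accumulating partial products
    return C

# Quick check against NumPy's reference implementation.
A = np.random.rand(6, 5)
B = np.random.rand(5, 7)
assert np.allclose(matmul_load_splat(A, B), A @ B)
```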

  16. Architecture and performance of radiation-hardened 64-bit SOS/MNOS memory

    International Nuclear Information System (INIS)

    Kliment, D.C.; Ronen, R.S.; Nielsen, R.L.; Seymour, R.N.; Splinter, M.R.

    1976-01-01

    This paper discusses the circuit architecture and performance of a nonvolatile 64-bit MNOS memory fabricated on silicon on sapphire (SOS). The circuit is a test vehicle designed to demonstrate the feasibility of a high-performance, high-density, radiation-hardened MNOS/SOS memory. The array is organized as 16 words by 4 bits and is fully decoded. It utilizes a two-(MNOS)-transistor-per-bit cell with a differential sensing scheme and is realized in PMOS static resistor-load logic. The circuit was fabricated and tested as both a fast-write random access memory (RAM) and an electrically alterable read-only memory (EAROM) to demonstrate design and process flexibility. Discrete device parameters such as retention, circuit electrical characteristics, and tolerance to total-dose and transient radiation are presented.

  17. Wavy channel thin film transistor architecture for area efficient, high performance and low power displays

    KAUST Repository

    Hanna, Amir

    2013-12-23

    We demonstrate a new thin film transistor (TFT) architecture that allows expansion of the device width using continuous fin features - termed wavy channel (WC) architecture. This architecture allows expansion of the transistor width in a direction perpendicular to the substrate, thus not consuming extra chip area and achieving area efficiency. The devices show a 13% increase in device width, resulting in a maximum 2.5× increase in the 'ON' current of the WCTFT when compared to planar devices consuming the same chip area, while using atomic layer deposition based zinc oxide (ZnO) as the channel material. The WCTFT devices also maintain a similar 'OFF' current value, ~100 pA, when compared to planar devices, thus not compromising power consumption for performance, which usually happens with larger-width devices. This work offers an interesting opportunity to use WCTFTs as backplane circuitry for large-area high-resolution display applications. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Raexplore: Enabling Rapid, Automated Architecture Exploration for Full Applications

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yao [Argonne National Lab. (ANL), Argonne, IL (United States); Balaprakash, Prasanna [Argonne National Lab. (ANL), Argonne, IL (United States); Meng, Jiayuan [Argonne National Lab. (ANL), Argonne, IL (United States); Morozov, Vitali [Argonne National Lab. (ANL), Argonne, IL (United States); Parker, Scott [Argonne National Lab. (ANL), Argonne, IL (United States); Kumaran, Kalyan [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-12-01

    We present Raexplore, a performance modeling framework for architecture exploration. Raexplore enables rapid, automated, and systematic search of the architecture design space by combining hardware counter-based performance characterization and analytical performance modeling. We demonstrate Raexplore for two recent manycore processors, the IBM BlueGene/Q compute chip and the Intel Xeon Phi, targeting a set of scientific applications. Our framework is able to capture complex interactions between architectural components including the instruction pipeline, cache, and memory, and to achieve a 3–22% error for same-architecture and cross-architecture performance predictions. Furthermore, we apply our framework to assess the two processors, and discover and evaluate a list of architectural scaling options for future processor designs.
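    As a flavor of what counter-driven analytical modeling looks like, the sketch below predicts runtime from two measured quantities (floating-point operations and DRAM traffic) and two architectural parameters. This is a roofline-style simplification written for illustration, not Raexplore's actual model, and the peak numbers and counter values are made up.

```python
def predict_runtime(flops, dram_bytes, peak_gflops, peak_gbs):
    """Runtime lower bound: the slower of the compute-bound and memory-bound
    estimates wins (a roofline-style simplification)."""
    compute_s = flops / (peak_gflops * 1e9)
    memory_s = dram_bytes / (peak_gbs * 1e9)
    return max(compute_s, memory_s)

# Counters measured once, projected onto two hypothetical target designs.
flops, dram_bytes = 4.0e12, 1.2e12
for name, gflops, gbs in [("BG/Q-like", 204.8, 42.6), ("Xeon-Phi-like", 1010.0, 170.0)]:
    print(f"{name:14s} predicted {predict_runtime(flops, dram_bytes, gflops, gbs):6.2f} s")
```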

  19. High-performance bidiagonal reduction using tile algorithms on homogeneous multicore architectures

    KAUST Repository

    Ltaief, Hatem

    2013-04-01

    This article presents a new high-performance bidiagonal reduction (BRD) for homogeneous multicore architectures. This article is an extension of the high-performance tridiagonal reduction implemented by the same authors [Luszczek et al., IPDPS 2011] to the BRD case. The BRD is the first step toward computing the singular value decomposition of a matrix, which is one of the most important algorithms in numerical linear algebra due to its broad impact in computational science. The high performance of the BRD described in this article comes from the combination of four important features: (1) tile algorithms with tile data layout, which provide an efficient data representation in main memory; (2) a two-stage reduction approach that allows most of the computation during the first stage (reduction to band form) to be cast into calls to Level 3 BLAS and reduces the memory traffic during the second stage (reduction from band to bidiagonal form) by using high-performance kernels optimized for cache reuse; (3) a data dependence translation layer that maps the general algorithm with column-major data layout onto the tile data layout; and (4) a dynamic runtime system that efficiently schedules the newly implemented kernels across the processing units and ensures that the data dependencies are not violated. A detailed analysis is provided to understand the critical impact of the tile size on the total execution time, which also corresponds to the matrix bandwidth size after the first-stage reduction. The performance results show a significant improvement over currently established alternatives. The new high-performance BRD achieves up to a 30-fold speedup on a 16-core Intel Xeon machine with a 12000×12000 matrix size against the state-of-the-art open source and commercial numerical software packages, namely LAPACK compiled with optimized and multithreaded BLAS from MKL, as well as Intel MKL version 10.2. © 2013 ACM.
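    Feature (1) above, the tile data layout, simply repacks the matrix so that each nb-by-nb tile is contiguous in memory. The sketch below is a minimal illustration of that repacking; it assumes dimensions divisible by the tile size and is not the implementation used in the article.

```python
import numpy as np

def to_tile_layout(A, nb):
    """Repack a matrix into contiguous nb-by-nb tiles.

    Returns a dict mapping (tile_row, tile_col) -> a contiguous copy of that
    tile, so a kernel working on one tile touches one contiguous block of
    memory instead of nb strided column segments.
    """
    m, n = A.shape
    assert m % nb == 0 and n % nb == 0, "assumes dimensions divisible by nb"
    return {
        (ti, tj): np.ascontiguousarray(A[ti * nb:(ti + 1) * nb, tj * nb:(tj + 1) * nb])
        for ti in range(m // nb)
        for tj in range(n // nb)
    }

A = np.arange(64, dtype=float).reshape(8, 8)
tiles = to_tile_layout(A, nb=4)
assert np.array_equal(tiles[(1, 0)], A[4:8, 0:4])
```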

  20. Architectural Quality of Low Energy Houses

    DEFF Research Database (Denmark)

    Lauring, Michael; Marsh, Rob

    2008-01-01

    This paper expounds a systematic vocabulary concerning architectural quality in houses in general and low energy houses in particular. The vocabulary consists of nine themes. Inside each theme, examples are given of how to achieve both architectural quality and good environmental performance. The purpose is to provide a useful tool for communication and argumentation in order to further the integrated design of houses with good architecture and good environmental performance.

  1. Architecture for improved mass transport and system performance in redox flow batteries

    Science.gov (United States)

    Houser, Jacob; Pezeshki, Alan; Clement, Jason T.; Aaron, Douglas; Mench, Matthew M.

    2017-05-01

    In this work, electrochemical performance and parasitic losses are combined in an overall system-level efficiency metric for a high performance, all-vanadium redox flow battery. It was found that pressure drop and parasitic pumping losses are relatively negligible for high performance cells, i.e., those capable of operating at a high current density while at a low flow rate. Based on this finding, the Equal Path Length (EPL) flow field architecture was proposed and evaluated. This design has superior mass transport characteristics in comparison with the standard serpentine and interdigitated designs, at the expense of increased pressure drop. An Aspect Ratio (AR) design is discussed and evaluated, which demonstrates decreased pressure drop compared to the EPL design while maintaining similar electrochemical performance under most conditions. This AR design can lead to improved system energy efficiency for flow batteries of all chemistries.
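    One plausible way to fold pumping losses into a system-level round-trip efficiency, along the lines of the metric described above, is sketched below. The exact bookkeeping used in the paper may differ; the formula and the numbers here are assumptions for illustration.

```python
def system_energy_efficiency(e_discharge_wh, e_charge_wh,
                             pump_discharge_wh, pump_charge_wh):
    """Round-trip efficiency with parasitic pumping losses included:
    useful output = discharge energy minus pumping energy spent while
    discharging; input = charge energy plus pumping energy spent while
    charging (an assumed bookkeeping, for illustration)."""
    return (e_discharge_wh - pump_discharge_wh) / (e_charge_wh + pump_charge_wh)

# Illustrative numbers: for a high-performance cell the pumping terms are a
# small fraction of the electrochemical energy, so they barely move the result.
print(f"{system_energy_efficiency(85.0, 100.0, 0.5, 0.5):.3f}")   # ~0.841
```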

  2. Globalization and Landscape Architecture

    Directory of Open Access Journals (Sweden)

    Robert R. Hewitt

    2014-02-01

    Full Text Available The literature review examines globalization and landscape architecture as discourse, samples its various meanings, and proposes methods to identify and contextualize its specific literature. Methodologically, the review surveys published articles and books by leading authors and within the WorldCat.org Database associated with landscape architecture and globalization, analyzing survey results for comprehensive conceptual and co-relational frameworks. Three “higher order” dimensions frame the review’s conceptual organization, facilitating the organization of subordinate/subtopical areas of interest useful for comparative analysis. Comparative analysis of the literature suggests an uneven clustering of discipline-related subject matter across the literature’s “higher order” dimensions, with a much smaller body of literature related to landscape architecture confined primarily to topics associated with the dispersion of global phenomena. A subcomponent of this smaller body of literature is associated with other fields of study, but inferentially related to landscape architecture. The review offers separate references and bibliographies for globalization literature in general and globalization and landscape architecture literature, specifically.

  3. High Performance Processing and Analysis of Geospatial Data Using CUDA on GPU

    Directory of Open Access Journals (Sweden)

    STOJANOVIC, N.

    2014-11-01

    Full Text Available In this paper, the high-performance processing of massive geospatial data on a many-core GPU (Graphics Processing Unit) is presented. We use the CUDA (Compute Unified Device Architecture) programming framework to implement parallel processing of common Geographic Information Systems (GIS) algorithms, such as viewshed analysis and map-matching. Experimental evaluation indicates an improvement in performance with respect to CPU-based solutions and shows the feasibility of using GPUs and CUDA for the parallel implementation of GIS algorithms over large-scale geospatial datasets.
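    The per-cell test that a CUDA viewshed kernel parallelizes (one thread per target cell, in a typical formulation) can be written as a short CPU reference. The sketch below is such a reference in plain NumPy, not the paper's implementation; grid spacing, Earth curvature and observer height handling are simplified.

```python
import numpy as np

def line_of_sight(dem, observer, target, observer_height=1.8):
    """Return True if `target` is visible from `observer` on a gridded DEM.

    Samples the straight line between the two cells and checks that no
    intermediate terrain rises above the interpolated sight line.
    """
    (r0, c0), (r1, c1) = observer, target
    z0 = dem[r0, c0] + observer_height
    z1 = dem[r1, c1]
    steps = max(abs(r1 - r0), abs(c1 - c0))
    for s in range(1, steps):
        t = s / steps
        r = int(round(r0 + t * (r1 - r0)))
        c = int(round(c0 + t * (c1 - c0)))
        if dem[r, c] > z0 + t * (z1 - z0):    # terrain blocks the sight line
            return False
    return True

dem = np.zeros((50, 50))
dem[25, 25] = 100.0                           # a single peak in the middle
print(line_of_sight(dem, (0, 0), (49, 49)))   # False: the peak blocks the view
print(line_of_sight(dem, (0, 0), (0, 49)))    # True: flat terrain along this line
```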

  4. Weighted Components of i-Government Enterprise Architecture

    Science.gov (United States)

    Budiardjo, E. K.; Firmansyah, G.; Hasibuan, Z. A.

    2017-01-01

    Lack of government performance is due, among other things, to the lack of coordination and communication among government agencies. Meanwhile, Enterprise Architecture (EA) in government can be used as a strategic planning tool to improve productivity, efficiency, and effectiveness. However, the existing components of Government Enterprise Architecture (GEA) do not indicate their level of importance, which causes difficulty in implementing good e-government for good governance. This study explores the weights of GEA components using Principal Component Analysis (PCA) in order to discover an inherent structure of e-government. The results show that the IT governance component of GEA plays a major role in the GEA. The remaining components, consisting of e-government system, e-government regulation, e-government management, and key operational applications, contribute more or less equally. In addition, GEAs from other countries are analyzed comparatively based on common enterprise architecture components. These weighted components are used to construct the i-Government enterprise architecture and show the relative importance of each component in order to establish priorities in developing e-government.
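    The PCA-based weighting described above can be sketched with scikit-learn. The component names, the synthetic assessment matrix and the normalization of first-component loadings into weights are all assumptions made for illustration; the study's actual instrument and computation may differ.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical assessment matrix: rows are assessed agencies, columns are GEA
# components (names and data are illustrative, not the study's instrument).
components = ["IT governance", "e-government system", "e-government regulation",
              "e-government management", "key operational applications"]
rng = np.random.default_rng(0)
X = rng.normal(size=(40, len(components)))

pca = PCA(n_components=1)
pca.fit(X)

# One simple way to turn loadings into weights: normalise the absolute
# loadings on the first principal component so they sum to one.
loadings = np.abs(pca.components_[0])
weights = loadings / loadings.sum()
for name, w in zip(components, weights):
    print(f"{name:32s} {w:.3f}")
```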

  5. Nest-like LiFePO4/C architectures for high performance lithium ion batteries

    International Nuclear Information System (INIS)

    Deng Honggui; Jin Shuangling; Zhan Liang; Qiao Wenming; Ling Licheng

    2012-01-01

    Highlights: ► Nest-like LiFePO4/C architectures (nest-like LPCs) were synthesized by a solvothermal method. ► The microstructures of nest-like LPCs, constructed from many nanosheets, are very stable. ► The unique structures give the nest-like LPC electrode high rate performance. ► The reversible capacity of the nest-like LPC electrode is as high as 120 mAh g−1 at 10 C. - Abstract: A novel kind of microsized nest-like LiFePO4/C architecture was synthesized by a solvothermal method using an inexpensive and stable Fe3+ salt as the iron source and ethylene glycol as the reaction medium. A layer of carbon could be coated directly on the surface of the LiFePO4 crystals, and the unique nest-like structures give the cathode material high reversible capacity, excellent cycling stability and high rate performance. The reversible capacity is maintained at 159 mAh g−1 at 0.1 C and 120 mAh g−1 at 10 C.

  6. The Archaeology of Architecture. New models of analysis applied to structures of Alta Andalusia in the Iberian period

    OpenAIRE

    Sánchez, Julia

    1998-01-01

    New theories of architectural space, based on the philosophy of Lao-Tsé, emerged at the end of the nineteenth century. Interior space was now considered the core of architecture. Developing concepts of the movement of the human body in this space, new contributions focused on the detailed study of architecture have led to the creation of a new discipline called the Archaeology of Architecture. New models of analysis, based on access and visibility, are applied to the interior space of Iberian dome...

  7. Analyzing Resiliency of the Smart Grid Communication Architectures

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2016-08-01

    Smart grids are susceptible to cyber-attack as a result of the new communication, control and computation techniques employed in the grid. In this paper, we characterize and analyze the resiliency of smart grid communication architecture, specifically an RF mesh based architecture, under cyber attacks. We analyze the resiliency of the communication architecture by studying the performance of high-level smart grid functions such as metering and demand response, which depend on communication. Disrupting the operation of these functions impacts the operational resiliency of the smart grid. Our analysis shows that an attacker needs only a small fraction of the meters to compromise the communication resiliency of the smart grid. We discuss the implications of our results for critical smart grid functions and for the overall security of the smart grid.
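    The headline finding, that compromising a small fraction of meters degrades communication resiliency, can be explored with a toy graph model. The sketch below uses a random regular graph as a stand-in for the RF mesh and measures how many meters can still reach the collector; the topology, parameters and attack model are assumptions, not the paper's.

```python
import random
import networkx as nx

def metering_reachability(n_meters=500, degree=4, compromised_fraction=0.05, seed=1):
    """Fraction of meters still able to reach the collector after a random
    fraction of meters is compromised (modeled as node removal)."""
    g = nx.random_regular_graph(degree, n_meters, seed=seed)
    collector = 0
    rng = random.Random(seed)
    victims = rng.sample([n for n in g.nodes if n != collector],
                         int(compromised_fraction * n_meters))
    g.remove_nodes_from(victims)
    return len(nx.node_connected_component(g, collector)) / n_meters

for frac in (0.01, 0.05, 0.10, 0.20):
    print(f"{frac:4.0%} compromised -> {metering_reachability(compromised_fraction=frac):5.1%} still reporting")
```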

  8. Architecture and Intelligentsia

    Directory of Open Access Journals (Sweden)

    Alexander Rappaport

    2015-08-01

    Full Text Available The article examines the intellectual and cultural level of architecture and its important functions in the social process. Historical analysis shows a constant decline in the intellectual level of the profession, as a reaction to radical changes in its social functions and mass scale, leading to the degradation of individual critical reflection and the growing dependence of architecture on political and economic bureaucracy.

  9. Architecture and Intelligentsia

    OpenAIRE

    Alexander Rappaport

    2015-01-01

    The article examines the intellectual and cultural level of architecture and its important functions in the social process. Historical analysis shows a constant decline in the intellectual level of the profession, as a reaction to radical changes in its social functions and mass scale, leading to the degradation of individual critical reflection and the growing dependence of architecture on political and economic bureaucracy.

  10. Improvement of Wear Performance of Nano-Multilayer PVD Coatings under Dry Hard End Milling Conditions Based on Their Architectural Development

    Directory of Open Access Journals (Sweden)

    Shahereen Chowdhury

    2018-02-01

    Full Text Available The TiAlCrSiYN-based family of PVD (physical vapor deposition) hard coatings was specially designed for extreme conditions involving the dry ultra-performance machining of hardened tool steels. However, there is strong potential for further advances in the wear performance of the coatings through improvements in their architecture. A few different coating architectures (monolayer, multilayer, bi-multilayer, and bi-multilayer with an increased number of alternating nano-layers) were studied in relation to cutting-tool life. Comprehensive characterization of the structure and properties of the coatings has been performed using XRD, SEM, TEM, micro-mechanical studies and tool-life evaluation. The wear performance was then related to the ability of the coating layer to exhibit minimal surface damage under operation, which is directly associated with various micro-mechanical characteristics (such as hardness, elastic modulus and related characteristics; nano-impact; and scratch-test-based characteristics). The results presented show that a substantial increase in tool life, as well as an improvement in mechanical properties, could be achieved through the architectural development of the coatings.

  11. Efficient Sorting on the Tilera Manycore Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Morari, Alessandro; Tumeo, Antonino; Villa, Oreste; Secchi, Simone; Valero, Mateo

    2012-10-24

    We present an efficient implementation of the radix sort algorithm for the Tilera TILEPro64 processor. The TILEPro64 is one of the first successful commercial manycore processors. It is composed of 64 tiles interconnected through multiple fast Networks-on-Chip and features a fully coherent, shared distributed cache. The architecture has a large degree of flexibility and allows various optimization strategies. We describe how we mapped the algorithm to this architecture. We present an in-depth analysis of the optimizations for each phase of the algorithm with respect to the processor's sustained performance. We discuss the overall throughput reached by our radix sort implementation (up to 132 MK/s) and show that it provides comparable or better performance-per-watt with respect to state-of-the-art implementations on x86 processors and graphics processing units.
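    For reference, the serial least-significant-digit radix sort that the TILEPro64 implementation parallelizes across tiles looks like the sketch below. Digit width and key width are arbitrary choices here; the paper's bucketing, partitioning and on-chip communication strategy are not reproduced.

```python
import random

def radix_sort(keys, radix_bits=8, key_bits=32):
    """Serial LSD radix sort over unsigned integer keys: one stable bucket
    pass per radix_bits-wide digit, from least to most significant."""
    mask = (1 << radix_bits) - 1
    for shift in range(0, key_bits, radix_bits):
        buckets = [[] for _ in range(1 << radix_bits)]
        for k in keys:
            buckets[(k >> shift) & mask].append(k)
        keys = [k for bucket in buckets for k in bucket]   # stable gather
    return keys

data = [random.getrandbits(32) for _ in range(10_000)]
assert radix_sort(data) == sorted(data)
```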

  12. The architectural design of networks of protein domain architectures.

    Science.gov (United States)

    Hsu, Chia-Hsin; Chen, Chien-Kuo; Hwang, Ming-Jing

    2013-08-23

    Protein domain architectures (PDAs), in which single domains are linked to form multiple-domain proteins, are a major molecular form used by evolution for the diversification of protein functions. However, the design principles of PDAs remain largely uninvestigated. In this study, we constructed networks to connect domain architectures that had grown out from the same single domain for every single domain in the Pfam-A database and found that there are three main distinctive types of these networks, which suggests that evolution can exploit PDAs in three different ways. Further analysis showed that these three different types of PDA networks are each adopted by different types of protein domains, although many networks exhibit the characteristics of more than one of the three types. Our results shed light on nature's blueprint for protein architecture and provide a framework for understanding architectural design from a network perspective.

  13. Architecture for high performance stereoscopic game rendering on Android

    Science.gov (United States)

    Flack, Julien; Sanderson, Hugh; Shetty, Sampath

    2014-03-01

    Stereoscopic gaming is a popular source of content for consumer 3D display systems. There has been a significant shift in the gaming industry towards casual games for mobile devices running on the Android™ Operating System and driven by ARM™ and other low-power processors. Such systems are now being integrated directly into the next generation of 3D TVs, potentially removing the requirement for an external games console. Although native stereo support has been integrated into some high-profile titles on established platforms like Windows PC and PS3, there is a lack of GPU-independent 3D support for the emerging Android platform. We describe a framework for enabling stereoscopic 3D gaming on Android for applications on mobile devices, set top boxes and TVs. A core component of the architecture is a 3D game driver, which is integrated into the Android OpenGL™ ES graphics stack to convert existing 2D graphics applications into stereoscopic 3D in real time. The architecture includes a method of analyzing 2D games and using rule-based Artificial Intelligence (AI) to position separate objects in 3D space. We describe an innovative stereo 3D rendering technique that separates the views in the depth domain and renders directly into the display buffer. The advantages of the stereo renderer are demonstrated by characterizing its performance in comparison to more traditional rendering techniques, including depth-based image rendering, both in terms of frame rates and impact on battery consumption.
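    The core geometric idea behind separating the two views in the depth domain is that an object's depth maps to a horizontal pixel disparity between the left and right images. The sketch below uses a simple parallel-axis parallax model with assumed viewing parameters; it is not the driver's actual mapping.

```python
def pixel_disparity(depth_m, eye_separation=0.065, screen_distance=0.6,
                    screen_width_m=0.31, screen_width_px=1920):
    """Horizontal pixel shift between the left and right views for an object
    at `depth_m` metres: zero at the screen plane, positive (uncrossed)
    behind it, negative (crossed) in front of it."""
    parallax_m = eye_separation * (depth_m - screen_distance) / depth_m
    return parallax_m * screen_width_px / screen_width_m

for d in (0.3, 0.6, 2.0, 10.0):
    print(f"depth {d:5.1f} m -> disparity {pixel_disparity(d):7.1f} px")
```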

  14. The Trombe Wall during the 1970s: technological device or architectural space? Critical inquiry on the Trombe Wall in Europe and the role of architectural magazines

    Directory of Open Access Journals (Sweden)

    Piero Medici

    2017-12-01

    Full Text Available During the 1970s, before and after the international oil crisis of 1973, some European architectural periodicals were critical of the standard construction methods and architecture of the time. They described how architects and engineers reacted to the crisis, proposing new techniques and projects in order to intervene innovatively in the built environment, using energy and natural resources more efficiently. This article provides a critical analysis of the role of the architectural magazines of the time in describing the technological innovation of the Trombe Wall in Europe: when, how, and which specific aspects were described. It also carries out a critical analysis of the Trombe Wall itself: its performance, its evolution throughout the 1970s, its integration in different houses, and its influence on inhabitants' behaviour. Using three houses as case studies, the architects' efforts to integrate the technology of the Trombe Wall with architectural elements such as shape, aesthetics, materiality, and natural light are analysed. Though this article is historical in character, it aims to inform the contemporary debate, especially concerning issues of the built environment meeting the Paris agreement on climate change (AA, 2015).

  15. Study of Solid State Drives performance in PROOF distributed analysis system

    Science.gov (United States)

    Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.

    2010-04-01

    Solid State Drives (SSDs) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited to situations where multiple jobs concurrently access data located on the same drive. They also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDDs), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility - PROOF - is a distributed analysis system which allows the inherent event-level parallelism of high energy physics data to be exploited. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in the PROOF environment. We will compare the performance of HDDs with SSDs in I/O-intensive analysis scenarios. In particular we will discuss PROOF system performance scaling with the number of simultaneously running analysis jobs.

  16. High-performance field emission device utilizing vertically aligned carbon nanotubes-based pillar architectures

    Science.gov (United States)

    Gupta, Bipin Kumar; Kedawat, Garima; Gangwar, Amit Kumar; Nagpal, Kanika; Kashyap, Pradeep Kumar; Srivastava, Shubhda; Singh, Satbir; Kumar, Pawan; Suryawanshi, Sachin R.; Seo, Deok Min; Tripathi, Prashant; More, Mahendra A.; Srivastava, O. N.; Hahm, Myung Gwan; Late, Dattatray J.

    2018-01-01

    Vertically aligned carbon nanotube (CNT)-based pillar architectures were created on a laminated silicon oxide/silicon (SiO2/Si) wafer substrate at 775 °C by using water-assisted chemical vapor deposition under low-pressure process conditions. The lamination was carried out by sequentially depositing aluminum (Al, 10.0 nm thickness) as a barrier layer and iron (Fe, 1.5 nm thickness) as a catalyst precursor layer on the silicon wafer substrate. Scanning electron microscope (SEM) images show that the synthesized CNTs are vertically aligned and uniformly distributed with a high density. The CNTs have approximately 2-30 walls with an inner diameter of 3-8 nm. Raman spectrum analysis shows a G-band at 1580 cm-1 and a D-band at 1340 cm-1. The G-band is higher than the D-band, which indicates that the CNTs are highly graphitized. The field emission analysis of the CNTs revealed a high field emission current density (4 mA/cm2 at 1.2 V/μm), a low turn-on field (0.6 V/μm) and a high field enhancement factor (6917), with good stability and long lifetime. The emitter morphology results in improved field emission performance, which is a crucial factor for the fabrication of pillar-shaped, vertically aligned CNT bundles as practical electron sources.

  17. High-performance field emission device utilizing vertically aligned carbon nanotubes-based pillar architectures

    Directory of Open Access Journals (Sweden)

    Bipin Kumar Gupta

    2018-01-01

    Full Text Available Vertically aligned carbon nanotube (CNT)-based pillar architectures were created on a laminated silicon oxide/silicon (SiO2/Si) wafer substrate at 775 °C by using water-assisted chemical vapor deposition under low-pressure process conditions. The lamination was carried out by sequentially depositing aluminum (Al, 10.0 nm thickness) as a barrier layer and iron (Fe, 1.5 nm thickness) as a catalyst precursor layer on the silicon wafer substrate. Scanning electron microscope (SEM) images show that the synthesized CNTs are vertically aligned and uniformly distributed with a high density. The CNTs have approximately 2–30 walls with an inner diameter of 3–8 nm. Raman spectrum analysis shows a G-band at 1580 cm−1 and a D-band at 1340 cm−1. The G-band is higher than the D-band, which indicates that the CNTs are highly graphitized. The field emission analysis of the CNTs revealed a high field emission current density (4 mA/cm2 at 1.2 V/μm), a low turn-on field (0.6 V/μm) and a high field enhancement factor (6917), with good stability and long lifetime. The emitter morphology results in improved field emission performance, which is a crucial factor for the fabrication of pillar-shaped, vertically aligned CNT bundles as practical electron sources.
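    For records 16 and 17 above, the quoted field enhancement factor is the kind of number that comes out of a standard Fowler-Nordheim analysis: ln(J/E^2) plotted against 1/E is fitted with a line whose slope is -B*phi^1.5/beta. The sketch below recovers a known beta from synthetic data; the work function value and the prefactor are assumptions, and this is not the authors' analysis code.

```python
import numpy as np

def field_enhancement_factor(E, J, work_function_eV=5.0):
    """Estimate beta from the slope of a Fowler-Nordheim plot.

    E is the applied field in V/m, J the current density in A/m^2;
    B ~ 6.83e9 eV^-1.5 V/m, and the work function default (5 eV) is an
    assumed, graphite-like value.
    """
    B = 6.83e9
    slope = np.polyfit(1.0 / E, np.log(J / E**2), 1)[0]
    return -B * work_function_eV**1.5 / slope

# Synthetic check: build an ideal F-N characteristic with a known beta and
# recover it (the arbitrary prefactor drops out of the slope).
beta_true, phi = 7000.0, 5.0
E = np.linspace(0.6e6, 1.2e6, 20)
J = 1e-6 * (beta_true * E) ** 2 * np.exp(-6.83e9 * phi ** 1.5 / (beta_true * E))
print(round(field_enhancement_factor(E, J, phi)))   # ~7000
```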

  18. Effects of classrooms’ architecture on academic performance in view of telic versus paratelic motivation: a review

    NARCIS (Netherlands)

    Lewinski, P.

    2015-01-01

    This mini literature review analyzes research papers from many countries that directly or indirectly test how classrooms’ architecture influences academic performance. These papers evaluate and explain specific characteristics of classrooms, with an emphasis on how they affect learning processes.

  19. Association of collagen architecture with glioblastoma patient survival.

    Science.gov (United States)

    Pointer, Kelli B; Clark, Paul A; Schroeder, Alexandra B; Salamat, M Shahriar; Eliceiri, Kevin W; Kuo, John S

    2017-06-01

    OBJECTIVE Glioblastoma (GBM) is the most malignant primary brain tumor. Collagen is present in low amounts in normal brain, but in GBMs, collagen gene expression is reportedly upregulated. However, to the authors' knowledge, direct visualization of collagen architecture has not been reported. The authors sought to perform the first direct visualization of GBM collagen architecture, identify clinically relevant collagen signatures, and link them to differential patient survival. METHODS Second-harmonic generation microscopy was used to detect collagen in a GBM patient tissue microarray. Focal and invasive GBM mouse xenografts were stained with Picrosirius red. Quantitation of collagen fibers was performed using custom software. Multivariate survival analysis was done to determine if collagen is a survival marker for patients. RESULTS In focal xenografts, collagen was observed at tumor brain boundaries. For invasive xenografts, collagen was intercalated with tumor cells. Quantitative analysis showed significant differences in collagen fibers for focal and invasive xenografts. The authors also found that GBM patients with more organized collagen had a longer median survival than those with less organized collagen. CONCLUSIONS Collagen architecture can be directly visualized and is different in focal versus invasive GBMs. The authors also demonstrate that collagen signature is associated with patient survival. These findings suggest that there are collagen differences in focal versus invasive GBMs and that collagen is a survival marker for GBM.
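    The survival comparison reported above (more organized vs. less organized collagen) follows a standard workflow: estimate median survival per group and test the difference. The sketch below assumes the lifelines package and uses entirely synthetic numbers, so it illustrates the method, not the paper's data or results.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Synthetic stand-in data: months of survival and event flags for patients
# grouped by a hypothetical collagen-organization score.
rng = np.random.default_rng(42)
organized = pd.DataFrame({"months": rng.exponential(18, 60), "event": 1})
disorganized = pd.DataFrame({"months": rng.exponential(12, 60), "event": 1})

kmf = KaplanMeierFitter()
kmf.fit(organized["months"], organized["event"], label="organized collagen")
print("median survival (organized):   ", kmf.median_survival_time_)
kmf.fit(disorganized["months"], disorganized["event"], label="disorganized collagen")
print("median survival (disorganized):", kmf.median_survival_time_)

result = logrank_test(organized["months"], disorganized["months"],
                      organized["event"], disorganized["event"])
print("log-rank p-value:", result.p_value)
```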

  20. Design and Analysis of Architectures for Structural Health Monitoring Systems

    Science.gov (United States)

    Mukkamala, Ravi; Sixto, S. L. (Technical Monitor)

    2002-01-01

    During the two-year project period, we have worked on several aspects of Health and Usage Monitoring Systems for structural health monitoring. In particular, we have made contributions in the following areas. 1. Reference HUMS architecture: We developed a high-level architecture for health and usage monitoring systems (HUMS). The proposed reference architecture is compatible with the Generic Open Architecture (GOA) proposed as a standard for avionics systems. 2. HUMS kernel: One of the critical layers of the HUMS reference architecture is the HUMS kernel. We developed a detailed design of a kernel to implement the high-level architecture. 3. Prototype implementation of HUMS kernel: We have implemented a preliminary version of the HUMS kernel on a Unix platform. We have implemented both a centralized system version and a distributed version. 4. SCRAMNet and HUMS: SCRAMNet (Shared Common Random Access Memory Network) is a system that is found to be suitable for implementing HUMS. For this reason, we have conducted a simulation study to determine its stability in handling the input data rates in HUMS. 5. Architectural specification.

  1. Architecture on Architecture

    DEFF Research Database (Denmark)

    Olesen, Karen

    2016-01-01

    This paper will discuss the challenges faced by architectural education today. It takes as its starting point the double commitment of any school of architecture: on the one hand the task of preserving the particular knowledge that belongs to the discipline of architecture, and on the other hand ... that is not scientific or academic but is more like a latent body of data that we find embedded in existing works of architecture. This information, it is argued, is not limited by the historical context of the work. It can be thought of as a virtual capacity – a reservoir of spatial configurations that can ... correlation between the study of existing architectures and the training of competences to design for present-day realities.

  2. Integrated Optical Interconnect Architectures for Embedded Systems

    CERN Document Server

    Nicolescu, Gabriela

    2013-01-01

    This book provides a broad overview of current research in optical interconnect technologies and architectures. Introductory chapters on high-performance computing and the associated issues in conventional interconnect architectures, and on the fundamental building blocks for integrated optical interconnect, provide the foundations for the bulk of the book which brings together leading experts in the field of optical interconnect architectures for data communication. Particular emphasis is given to the ways in which the photonic components are assembled into architectures to address the needs of data-intensive on-chip communication, and to the performance evaluation of such architectures for specific applications.   Provides state-of-the-art research on the use of optical interconnects in Embedded Systems; Begins with coverage of the basics for high-performance computing and optical interconnect; Includes a variety of on-chip optical communication topologies; Features coverage of system integration and opti...

  3. Abstract interfaces for data analysis - component architecture for data analysis tools

    International Nuclear Information System (INIS)

    Barrand, G.; Binko, P.; Doenszelmann, M.; Pfeiffer, A.; Johnson, A.

    2001-01-01

    The fast turnover of software technologies, in particular in the domain of interactivity (covering user interfaces and visualisation), makes it difficult for a small group of people to produce complete and polished software tools before the underlying technologies make them obsolete. At the HepVis'99 workshop, a working group was formed to improve the production of software tools for data analysis in HENP. Besides promoting a distributed development organisation, one goal of the group is to systematically design a set of abstract interfaces based on modern OO analysis and OO design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, Analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimising re-use and maintainability of each component individually. The interfaces have been defined in Java and C++, and implementations exist in the form of libraries and tools using C++ (Anaphe/Lizard, OpenScientist) and Java (Java Analysis Studio). A special implementation aims at accessing the Java libraries (through their Abstract Interfaces) from C++. The authors give an overview of the architecture and design of the various components for data analysis as discussed in AIDA

  4. Analysis of fault tolerance and reliability in distributed real-time system architectures

    International Nuclear Information System (INIS)

    Philippi, Stephan

    2003-01-01

    Safety critical real-time systems are becoming ubiquitous in many areas of our everyday life. Failures of such systems potentially have catastrophic consequences on different scales, in the worst case even the loss of human life. Therefore, safety critical systems have to meet maximum fault tolerance and reliability requirements. As the design of such systems is far from being trivial, this article focuses on concepts to specifically support the early architectural design. In detail, a simulation based approach for the analysis of fault tolerance and reliability in distributed real-time system architectures is presented. With this approach, safety related features can be evaluated in the early development stages and thus prevent costly redesigns in later ones

  5. The mathematics of the modernist villa architectural analysis using space syntax and isovists

    CERN Document Server

    Ostwald, Michael J

    2018-01-01

    This book presents the first detailed mathematical analysis of the social, cognitive and experiential properties of Modernist domestic architecture. The Modern Movement in architecture, which came to prominence during the first half of the twentieth century, may have been famous for its functional forms and machine-made aesthetic, but it also sought to challenge the way people inhabit, understand and experience space. Ludwig Mies van der Rohe’s buildings were not only minimalist and transparent, they were designed to subvert traditional social hierarchies. Frank Lloyd Wright’s organic Modernism not only attempted to negotiate a more responsive relationship between nature and architecture, but also shape the way people experience space. Richard Neutra’s Californian Modernism is traditionally celebrated for its sleek, geometric forms, but his intention was to use design to support a heightened understanding of context. Glenn Murcutt’s pristine pavilions, seemingly the epitome of regional Modernism, actu...

  6. New Developments in Modeling MHD Systems on High Performance Computing Architectures

    Science.gov (United States)

    Germaschewski, K.; Raeder, J.; Larson, D. J.; Bhattacharjee, A.

    2009-04-01

    Modeling the wide range of time and length scales present even in fluid models of plasmas like MHD and X-MHD (Extended MHD including two-fluid effects like the Hall term, electron inertia, and the electron pressure gradient) is challenging even on state-of-the-art supercomputers. In recent years, HPC capacity has continued to grow exponentially, but at the expense of making the computer systems more and more difficult to program in order to get maximum performance. In this paper, we will present a new approach to managing the complexity caused by the need to write efficient codes: separating the numerical description of the problem, in our case a discretized right hand side (r.h.s.), from the actual implementation of efficiently evaluating it. An automatic code generator is used to describe the r.h.s. in a quasi-symbolic form while leaving the translation into efficient and parallelized code to a computer program itself. We implemented this approach for OpenGGCM (Open General Geospace Circulation Model), a model of the Earth's magnetosphere, which was accelerated by a factor of three on a regular x86 architecture and a factor of 25 on the Cell BE architecture (commonly known for its deployment in Sony's PlayStation 3).

  7. A high performance 90 nm CMOS SAR ADC with hybrid architecture

    International Nuclear Information System (INIS)

    Tong Xingyuan; Zhu Zhangming; Yang Yintang; Chen Jianming

    2010-01-01

    A 10-bit 2.5 MS/s SAR A/D converter is presented. In the circuit design, an R-C hybrid architecture D/A converter, a pseudo-differential comparison architecture and low power voltage level shifters are utilized. Design challenges and considerations are also discussed. In the layout design, each unit resistor is flanked by dummies for good matching performance, and the capacitors are routed with a common-centroid symmetry method to reduce the nonlinearity error. The proposed converter is implemented in a 90 nm CMOS logic process. With a 3.3 V analog supply and a 1.0 V digital supply, the differential and integral nonlinearity are measured to be less than 0.36 LSB and 0.69 LSB respectively. With an input frequency of 1.2 MHz at a 2.5 MS/s sampling rate, the SFDR and ENOB are measured to be 72.86 dB and 9.43 bits respectively, and the power dissipation is measured to be 6.62 mW including the output drivers. This SAR A/D converter occupies an area of 238 × 214 μm2. The design results show that the converter is suitable for multi-supply embedded SoC applications. (semiconductor integrated circuits)
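    For context on the reported figures, the effective number of bits is conventionally derived from SINAD rather than SFDR via ENOB = (SINAD - 1.76 dB) / 6.02. The snippet below just evaluates that relation; the 58.5 dB value is back-calculated from the quoted 9.43-bit ENOB, not a number taken from the paper.

```python
def enob(sinad_db):
    """Effective number of bits from SINAD: ENOB = (SINAD - 1.76) / 6.02."""
    return (sinad_db - 1.76) / 6.02

# A 9.43-bit ENOB corresponds to a SINAD of roughly 58.5 dB; note that the
# 72.86 dB figure quoted in the abstract is SFDR, a different metric.
print(f"{enob(58.5):.2f} bits")
```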

  8. A Short Survey on the State of the Art in Architectures and Platforms for Large Scale Data Analysis and Knowledge Discovery from Data

    Energy Technology Data Exchange (ETDEWEB)

    Begoli, Edmon [ORNL

    2012-01-01

    Intended as a survey for practicing architects and researchers seeking an overview of the state-of-the-art architectures for data analysis, this paper provides an overview of the emerging data management and analytic platforms including parallel databases, Hadoop-based systems, High Performance Computing (HPC) platforms and platforms popularly referred to as NoSQL platforms. Platforms are presented based on their relevance, the analysis they support and the data organization model they support.

  9. Tectonic Indexicality and Architectural Semiosis

    NARCIS (Netherlands)

    Lee, S.

    A work of architecture occupies a delicate position between functional performance and the production of certain meanings and experiences. The design of architecture becomes stifling when an architect attempts to harmonize the two facets. Modernist architects argued that a building should express

  10. Optimizing the Performance of Reactive Molecular Dynamics Simulations for Multi-core Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Aktulga, Hasan Metin [Michigan State Univ., East Lansing, MI (United States); Coffman, Paul [Argonne National Lab. (ANL), Argonne, IL (United States); Shan, Tzu-Ray [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knight, Chris [Argonne National Lab. (ANL), Argonne, IL (United States); Jiang, Wei [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-12-01

    Hybrid parallelism allows high performance computing applications to better leverage the increasing on-node parallelism of modern supercomputers. In this paper, we present a hybrid parallel implementation of the widely used LAMMPS/ReaxC package, where the construction of bonded and nonbonded lists and evaluation of complex ReaxFF interactions are implemented efficiently using OpenMP parallelism. Additionally, the performance of the QEq charge equilibration scheme is examined and a dual-solver is implemented. We present the performance of the resulting ReaxC-OMP package on a state-of-the-art multi-core architecture Mira, an IBM BlueGene/Q supercomputer. For system sizes ranging from 32 thousand to 16.6 million particles, speedups in the range of 1.5-4.5x are observed using the new ReaxC-OMP software. Sustained performance improvements have been observed for up to 262,144 cores (1,048,576 processes) of Mira with a weak scaling efficiency of 91.5% in larger simulations containing 16.6 million particles.
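    The weak-scaling efficiency quoted above is computed from runtimes at a fixed per-core problem size: ideal scaling keeps the time constant, so efficiency is the baseline time over the time at scale. The numbers below are illustrative only; they are chosen to reproduce a 91.5% figure and are not taken from the paper.

```python
def weak_scaling_efficiency(t_baseline_s, t_scaled_s):
    """Weak-scaling efficiency: baseline runtime divided by runtime at scale,
    with the problem size grown in proportion to the core count."""
    return t_baseline_s / t_scaled_s

print(f"{weak_scaling_efficiency(100.0, 109.3):.1%}")   # ~91.5%
```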

  11. Timing Analysis of Mixed-Criticality Hard Real-Time Applications Implemented on Distributed Partitioned Architectures

    DEFF Research Database (Denmark)

    Marinescu, Sorin Ovidiu; Tamas-Selicean, Domitian; Acretoaie, Vlad

    In this paper we are interested in the timing analysis of mixed-criticality embedded real-time applications mapped on distributed heterogeneous architectures. Mixed-criticality tasks can be integrated onto the same architecture only if there is enough spatial and temporal separation among them. We consider that the separation is provided by partitioning, such that applications run in separate partitions, and each partition is allocated several time slots on a processor. Each partition can have its own scheduling policy. We are interested in determining the worst-case response times of tasks scheduled in partitions using fixed-priority preemptive scheduling. We have extended the state-of-the-art algorithms for schedulability analysis to take the partitions into account. The proposed algorithm has been evaluated using several synthetic and real-life benchmarks.
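    The quantity being computed, the worst-case response time under fixed-priority preemptive scheduling, is classically obtained by a fixed-point iteration over higher-priority interference. The sketch below implements that textbook recurrence without partitions or mixed-criticality extensions, so it is only the starting point the paper builds on.

```python
def response_times(tasks):
    """Worst-case response times for tasks given as (C, T) pairs ordered from
    highest to lowest priority, with deadlines equal to periods.

    Solves R_i = C_i + sum_j ceil(R_i / T_j) * C_j over higher-priority tasks j
    by fixed-point iteration; returns None if some task misses its deadline.
    """
    results = []
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            interference = sum(-(-r // t_j) * c_j for c_j, t_j in tasks[:i])  # ceiling division
            r_next = c_i + interference
            if r_next > t_i:
                return None               # unschedulable: deadline (= period) missed
            if r_next == r:
                results.append(r)
                break
            r = r_next
    return results

print(response_times([(1, 4), (2, 6), (3, 12)]))   # [1, 3, 10]
```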

  12. Neutron-activation analysis of wall soils of ancient architectural monuments

    International Nuclear Information System (INIS)

    Khatamov, Sh.; Zhumamuratov, A.; Ibragimov, T.; Tillyaev, T.; Osinskaya, N.S.; Rakhmanova, T.P.; Pulatov, D.D.

    2001-01-01

    Simplified, relatively inexpensive, and productive multielemental neutron activation techniques for the analysis of wall soils of the architectural monuments of Karakalpakstan have been elaborated. A comparison of the elemental composition of the wall soils of the ancient buildings, constructed in different historical periods, with the composition of agricultural soils allows us to estimate the present ecological and agrogeochemical states of the agricultural soils and to trace the changing dynamics of about 30 chemical elements. (author)

  13. Real-time FPGA architectures for computer vision

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar

    2000-03-01

    This paper presents an architecture for real-time generic convolution of a mask and an image. The architecture is intended for fast low-level image processing. The FPGA-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and is based on parallel modules with internal pipeline operation in order to improve its performance. The architecture is prototyped in an FPGA, but it can be implemented on a dedicated VLSI to reach higher clock frequencies. Complexity issues, FPGA resource utilization, FPGA limitations, and real-time performance are discussed. Some results are presented and discussed.
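    Functionally, the hardware computes the same arithmetic as the straightforward software reference below; the FPGA version keeps the mask and a sliding window of pixels in registers so each output needs only one new image access per clock. The code is a plain NumPy reference for checking results, not a model of the pipeline.

```python
import numpy as np

def convolve2d_ref(image, mask):
    """Reference 2D convolution of an image with an odd-sized mask,
    edge-padded so the output has the same shape as the input."""
    kh, kw = mask.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    flipped = mask[::-1, ::-1]                 # true convolution flips the mask
    out = np.zeros(image.shape, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = np.sum(padded[r:r + kh, c:c + kw] * flipped)
    return out

img = np.random.rand(32, 32)
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
print(convolve2d_ref(img, sobel_x).shape)      # (32, 32)
```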

  14. Genome-wide association mapping and agronomic impact of cowpea root architecture.

    Science.gov (United States)

    Burridge, James D; Schneider, Hannah M; Huynh, Bao-Lam; Roberts, Philip A; Bucksch, Alexander; Lynch, Jonathan P

    2017-02-01

    Genetic analysis of data produced by novel root phenotyping tools was used to establish relationships between cowpea root traits and performance indicators, as well as between root traits and Striga tolerance. Selection and breeding for better root phenotypes can improve acquisition of soil resources and hence crop production in marginal environments. We hypothesized that biologically relevant variation is measurable in cowpea root architecture. This study implemented manual phenotyping (shovelomics) and automated image phenotyping (DIRT) on a 189-entry diversity panel of cowpea to reveal biologically important variation and genome regions affecting root architecture phenes. Significant variation in root phenes was found, and relatively high heritabilities were detected for root traits assessed manually (0.4 for nodulation and 0.8 for number of larger laterals), while repeatabilities for traits phenotyped via DIRT were 0.5 for a measure of root width and 0.3 for a measure of root tips. A genome-wide association study identified 11 significant quantitative trait loci (QTL) from manually scored root architecture traits and 21 QTL from root architecture traits phenotyped by DIRT image analysis. Subsequent comparisons of results from this root study with other field studies revealed QTL co-localizations between root traits and performance indicators including seed weight per plant, pod number, and Striga (Striga gesnerioides) tolerance. The data suggest that selection for root phenotypes could be employed by breeding programs to improve production in multiple-constraint environments.

  15. A Microwave Photonic Interference Canceller: Architectures, Systems, and Integration

    Science.gov (United States)

    Chang, Matthew P.

    This thesis is a comprehensive portfolio of work on a Microwave Photonic Self-Interference Canceller (MPC), a specialized optical system designed to eliminate interference from radio-frequency (RF) receivers. The novelty and value of the microwave photonic system lie in its ability to operate over bandwidths and frequencies that are orders of magnitude larger than what is possible using existing RF technology. The work begins, in 2012, with a discrete fiber-optic microwave photonic canceller, which prior work had demonstrated as a proof of concept, and culminates, in 2017, with the first ever monolithically integrated microwave photonic canceller. With an eye towards practical implementation, the thesis establishes novelty through three major project thrusts: (1) Extensive RF and system analysis to develop a full understanding of how, and through what mechanisms, MPCs affect an RF receiver. The first investigations of how a microwave photonic canceller performs in an actual wireless environment and a digital radio are also presented. (2) New architectures to improve the performance and functionality of MPCs, based on the analysis performed in Thrust 1. A novel balanced microwave photonic canceller architecture is developed and experimentally demonstrated. The balanced architecture shows significant improvements in link gain, noise figure, and dynamic range. Its main advantage is its ability to suppress common-mode noise and reduce the noise figure by increasing the optical power. (3) Monolithic integration of the microwave photonic canceller into a photonic integrated circuit. This thrust presents the progression of integrating individual discrete devices into their semiconductor equivalents, as well as a full functional and RF analysis of the first ever integrated microwave photonic canceller.

  16. Model-centric software architecture reconstruction

    NARCIS (Netherlands)

    Stoermer, C.; Rowe, A.; O'Brien, L.; Verhoef, C.

    2006-01-01

    Much progress has been achieved in defining methods, techniques, and tools for software architecture reconstruction (SAR). However, less progress has been achieved in constructing reasoning frameworks from existing systems that support organizations in architecture analysis and design decisions.

  17. World-wide architecture of osteoporosis research: density-equalizing mapping studies and gender analysis.

    Science.gov (United States)

    Brüggmann, D; Mäule, L-S; Klingelhöfer, D; Schöffel, N; Gerber, A; Jaque, J M; Groneberg, D A

    2016-10-01

    While research activities on osteoporosis grow constantly, no concise description of the global research architecture exists. Hence, we aim to analyze and depict the world-wide scientific output on osteoporosis combining bibliometric tools, density-equalizing mapping projections and gender analysis. Using the NewQIS platform, we analyzed all osteoporosis-related publications authored from 1900 to 2012 and indexed by the Web of Science. Bibliometric details were analyzed related to quantitative and semi-qualitative aspects. The majority of 57 453 identified publications were original research articles. The USA and Western Europe dominated the field regarding cooperation activity, publication and citation performance. Asia, Africa and South America played a minimal role. Gender analysis revealed a dominance of male scientists in almost all countries except Brazil. Although the scientific performance on osteoporosis is increasing world-wide, a significant disparity in terms of research output was visible between developed and low-income countries. This finding is particularly concerning since epidemiologic evaluations of future osteoporosis prevalences predict enormous challenges for the health-care systems in low-resource countries. Hence, our study underscores the need to address these disparities by fostering future research endeavors in these nations with the aim to successfully prevent a growing global burden related to osteoporosis.

  18. Memory controllers for mixed-time-criticality systems architectures, methodologies and trade-offs

    CERN Document Server

    Goossens, Sven; Akesson, Benny; Goossens, Kees

    2016-01-01

    This book discusses the design and performance analysis of SDRAM controllers that cater to both real-time and best-effort applications, i.e. mixed-time-criticality memory controllers. The authors describe the state of the art, and then focus on an architecture template for reconfigurable memory controllers that addresses effectively the quickly evolving set of SDRAM standards, in terms of worst-case timing and power analysis, as well as implementation. A prototype implementation of the controller in SystemC and synthesizable VHDL for an FPGA development board are used as a proof of concept of the architecture template.

  19. MRSA: a density-equalizing mapping analysis of the global research architecture.

    Science.gov (United States)

    Addicks, Johann P; Uibel, Stefanie; Jensen, Anna-Maria; Bundschuh, Matthias; Klingelhoefer, Doris; Groneberg, David A

    2014-09-30

    Methicillin-resistant Staphylococcus aureus (MRSA) has evolved into an alarming public health threat due to its global spread as a hospital and community pathogen. Despite this role, a scientometric analysis has not been performed yet. Therefore, the NewQIS platform was used to conduct a combined density-equalizing mapping and scientometric study. The Web of Science was used as the database, and all entries between 1961 and 2007 were analyzed. In total, 7671 entries were identified. Density-equalizing mapping demonstrated a distortion of the world map in favor of the USA as the leading country with a total output of 2374 publications, followed by the UK (1030) and Japan (862). Citation rate analysis revealed Portugal as the leading country with a rate of 35.47 citations per article, followed by New Zealand and Denmark. Country cooperation network analyses showed 743 collaborations, with US-UK being the most frequent. Network citation analyses indicated the publications that arose from the cooperation of the USA and France as well as the USA and Japan as the most cited (75.36 and 74.55 citations per collaboration article, respectively). The present study provides the first combined density-equalizing mapping and scientometric analysis of MRSA research. It illustrates the global MRSA research architecture. It can be assumed that this highly relevant topic for public health will achieve even greater dimensions in the future.

  20. Real-Time Shop-Floor Production Performance Analysis Method for the Internet of Manufacturing Things

    Directory of Open Access Journals (Sweden)

    Yingfeng Zhang

    2014-04-01

    Full Text Available Typical challenges that manufacturing enterprises are facing now are compounded by a lack of timely, accurate, and consistent information about manufacturing resources. As a result, it is difficult to analyze the real-time production performance of the shop floor. In this paper, the definition and overall architecture of the internet of manufacturing things are presented to provide a new paradigm by extending the techniques of the internet of things (IoT) to the manufacturing field. Under this architecture, the real-time primitive events that occur at different manufacturing things such as operators, machines, pallets, key materials, and so forth can be easily sensed. Based on these distributed primitive events, a critical event model is established to automatically analyze the real-time production performance. Here, the top-level production performance analysis is regarded as a series of critical events, and the real-time value of each critical event can be easily calculated according to the logical and sequence relationships among these multilevel events. Finally, a case study is used to illustrate how to apply the designed methods to analyze real-time production performance.

  1. A Performance Analytical Strategy for Network-on-Chip Router with Input Buffer Architecture

    Directory of Open Access Journals (Sweden)

    WANG, J.

    2012-11-01

    Full Text Available In this paper, a performance analytical strategy is proposed for a Network-on-Chip router with an input buffer architecture. First, an analytical model is developed based on a semi-Markov process. For a non-work-conserving router with a small buffer size, the model can be used to analyze the schedule delay and the average service time for each buffer, given the related parameters. Then, the average packet delay in the router is calculated using the model. Finally, we validate the effectiveness of our strategy by simulation. By comparing our analytical results to simulation results, we show that our strategy successfully captures Network-on-Chip router performance and performs better than the state-of-the-art techniques. Therefore, our strategy can be used as an efficient performance analysis tool for Network-on-Chip design.
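
    The semi-Markov model itself is not given in the abstract; as a simple stand-in that conveys the flavour of such analytical delay models, the sketch below uses a textbook M/M/1 queue to estimate the average packet delay at one input buffer. It does not claim to reproduce the authors' formulation.

        def mm1_average_delay(arrival_rate, service_rate):
            """Mean time a packet spends in an M/M/1 queue (waiting + service).
            A generic queueing stand-in, not the semi-Markov model of the paper."""
            if arrival_rate >= service_rate:
                raise ValueError("queue is unstable: arrival rate must be below service rate")
            return 1.0 / (service_rate - arrival_rate)

        # Example: packets arrive at 0.6 per cycle, the output port serves 0.9 per cycle.
        print(f"average delay: {mm1_average_delay(0.6, 0.9):.2f} cycles")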

  2. Architectural and Algorithmic Requirements for a Next-Generation System Analysis Code

    Energy Technology Data Exchange (ETDEWEB)

    V.A. Mousseau

    2010-05-01

    This document presents high-level architectural and system requirements for a next-generation system analysis code (NGSAC) to support reactor safety decision-making by plant operators and others, especially in the context of light water reactor plant life extension. The capabilities of NGSAC will be different from those of current-generation codes, not only because computers have evolved significantly in the generations since the current paradigm was first implemented, but because the decision-making processes that need the support of next-generation codes are very different from the decision-making processes that drove the licensing and design of the current fleet of commercial nuclear power reactors. The implications of these newer decision-making processes for NGSAC requirements are discussed, and resulting top-level goals for the NGSAC are formulated. From these goals, the general architectural and system requirements for the NGSAC are derived.

  3. Software Architectures – Present and Visions

    Directory of Open Access Journals (Sweden)

    Catalin STRIMBEI

    2015-01-01

    Full Text Available Nowadays, software system architectures are increasingly important because they can determine the success of the entire system. In this article we rigorously analyze the most common types of system architectures and present a personal opinion about the specifics of university architectures. After analyzing monolithic architectures, SOA architectures and microservice-based architectures, we present issues and criteria specific to university software systems. Each type of architecture is reviewed and analyzed against specific academic challenges. In the analysis, we took into account the factors that determine the success of each architecture as well as the common causes of failure. At the end of the article, we objectively decide which architecture is best suited to be implemented in the university area.

  4. The microcomputer workstation - An alternate hardware architecture for remotely sensed image analysis

    Science.gov (United States)

    Erickson, W. K.; Hofman, L. B.; Donovan, W. E.

    1984-01-01

    Difficulties regarding the digital image analysis of remotely sensed imagery can arise in connection with the extensive calculations required. In the past, an expensive large to medium mainframe computer system was needed for performing these calculations. For image-processing applications smaller minicomputer-based systems are now used by many organizations. The costs for such systems are still in the range from $100K to $300K. Recently, as a result of new developments, the use of low-cost microcomputers for image processing and display systems appeared to have become feasible. These developments are related to the advent of the 16-bit microprocessor and the concept of the microcomputer workstation. Earlier 8-bit microcomputer-based image processing systems are briefly examined, and a computer workstation architecture is discussed. Attention is given to a microcomputer workstation developed by Stanford University, and the design and implementation of a workstation network.

  5. Multiprocessor architecture: Synthesis and evaluation

    Science.gov (United States)

    Standley, Hilda M.

    1990-01-01

    Multiprocessor computer architecture evaluation for structural computations is the focus of the research effort described. Results obtained are expected to lead to more efficient use of existing architectures and to suggest designs for new, application-specific architectures. The brief descriptions given outline a number of related efforts directed toward this purpose. The difficulty in analyzing an existing architecture or in designing a new computer architecture lies in the fact that the performance of a particular architecture, within the context of a given application, is determined by a number of factors. These include, but are not limited to, the efficiency of the computation algorithm, the programming language and support environment, the quality of the program written in the programming language, the multiplicity of the processing elements, the characteristics of the individual processing elements, the interconnection network connecting processors and non-local memories, and the shared memory organization, covering the spectrum from no shared memory (all local memory) to one global access memory. These performance determiners may be loosely classified as being software or hardware related. This distinction is not clear or even appropriate in many cases. The effect of the choice of algorithm is ignored by assuming that the algorithm is specified as given. Effort directed toward removing the effect of the programming language and program resulted in the design of a high-level parallel programming language. Two characteristics of the fundamental structure of the architecture (memory organization and interconnection network) are examined.

  6. Analysis of facility needs level in architecture studio for students’ studio grades

    Science.gov (United States)

    Lubis, A. S.; Hamid, B.; Pane, I. F.; Marpaung, B. O. Y.

    2018-03-01

    Architects must be able to play an active role and contribute to the realization of a sustainable environment, and architectural education is where this begins. The research used qualitative and quantitative methods. The data were gathered by conducting (a) observation, (b) interviews, (c) documentation, (d) literature study, and (e) a questionnaire. The gathered data were analyzed qualitatively to find out what equipment is needed in the learning process in the Architecture Studio, USU. Questionnaires and MS Excel were used for the quantitative analysis. The tabulation of quantitative data was correlated with the students' studio grades. The result of the research showed that the equipment with the highest level of need was (1) drawing tables, (2) a dedicated space for each student, (3) an internet connection, (4) air conditioning, and (5) sufficient lighting.

  7. Computer architecture evaluation for structural dynamics computations: Project summary

    Science.gov (United States)

    Standley, Hilda M.

    1989-01-01

    The intent of the proposed effort is to examine the impact of the elements of parallel architectures on the performance realized in a parallel computation. To this end, three major projects are developed: a language for the expression of high-level parallelism, a statistical technique for the synthesis of multicomputer interconnection networks based upon performance prediction, and a queueing model for the analysis of shared memory hierarchies.
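
    The project's queueing model is not reproduced in the abstract; as a simplified stand-in for analysing a shared memory hierarchy, the sketch below computes the average memory access time of a multi-level hierarchy from assumed hit rates and latencies.

        def average_access_time(levels):
            """Average memory access time for a hierarchy, assuming each level is
            characterized by a hit rate and an access latency (cycles). This is a
            textbook-style stand-in, not the project's queueing model."""
            amat, reach_prob = 0.0, 1.0
            for hit_rate, latency in levels:
                amat += reach_prob * latency      # accesses reaching this level pay its latency
                reach_prob *= (1.0 - hit_rate)    # misses continue to the next level
            return amat

        # Hypothetical 3-level hierarchy: local cache, shared memory, global memory.
        hierarchy = [(0.90, 2), (0.95, 20), (1.00, 200)]
        print(f"average access time: {average_access_time(hierarchy):.1f} cycles")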

  8. Preserving urban objects of historicaland architectural heritage

    Directory of Open Access Journals (Sweden)

    Bal'zannikova Ekaterina Mikhailovna

    2014-01-01

    structural elements, delivering building materials, preparing the construction site and the basic period when condemned structures are demolished, new design elements are formed and assembled, interior finishing work is performed and the object facade is restored. In contrast, our method includes additional periods and a performance list. In particular, it is proposed to carry out a research period prior to the preparatory period, and after the basic period there should be an ending period. Thus, during the research period it is necessary to study urban development features in the architectural and town-planning environment, to identify the historical and architectural value of the object, to assess its state of disrepair and whether it is habitable, to determine the relationship of the object with the architectural and aesthetic image of surrounding objects, and to develop a conservation program; during the ending period it is proposed to assess the historical and architectural significance of the reconstructed object in relation to the aesthetic and architectural image of the surrounding area. The proposed complex method will increase the attractiveness of a historical and architectural heritage object and its surrounding area for tourists and, consequently, raise the cultural level of the visitors. Furthermore, the method will ensure the construction of recreation zones, their more frequent use, and more visits to surrounding objects of social infrastructure, because more opportunities for cultural and aesthetic pastimes will be offered. The method will also provide a more reasonable and effective use of available funding due to the careful analysis and proper choice of the methods to preserve objects of historical and architectural heritage.

  9. BUILDING A COMPETITIVE BUSINESS INTELLIGENCE ARCHITECTURE THAT CAN FOSTER PERFORMANCE IN THE ROMANIAN NATIONAL RAILWAY COMPANY

    Directory of Open Access Journals (Sweden)

    Dragan George Bogdan

    2015-05-01

    Full Text Available Today many industry players from banking, financial services, insurance, IT, healthcare, telecommunications and transportation are deploying competitive business intelligence to grow their companies' financial results. The use of such advanced business applications is a key enabler of this growth and provides them with an edge over their competitors. Companies of the future are building a new culture based on fact-based decisions (BusinessWeek Research Services, 2009). These decisions are made through analysis using business analytics systems, which encourage anticipation in solving complex business problems across the entire organization. Embracing this approach, these companies focus on their most profitable customers, define the right pricing, achieve faster product innovation, optimize supply chains and identify the real drivers of financial performance. This research paper details the theoretical importance of using competitive business intelligence architectures to gain competitive advantage.

  10. Holey Nanocarbon Architectures for High-Performance Lithium-Air Batteries

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this proposal is to develop 3-dimensional hierarchical mesoporous nanocarbon architecture using primarily our unique holey nanocarbon platforms...

  11. Communication-Oriented Design Space Exploration for Reconfigurable Architectures

    Directory of Open Access Journals (Sweden)

    Gogniat Guy

    2007-01-01

    Full Text Available Many academic works in computer engineering focus on reconfigurable architectures and associated tools. Fine-grain architectures, i.e. field programmable gate arrays (FPGAs), are the best-known reconfigurable hardware structures. Dedicated tools (generic or specific) allow for the exploration of their design space, to choose the best architecture characteristics and/or to explore the application characteristics. The aim is to increase the synergy between the application and the architecture in order to get the best performance. However, there is no generic tool to perform such an exploration for coarse-grain or heterogeneous-grain architectures; only a small number of very specific tools are able to explore a limited set of architectures. To address this major gap, in this paper we propose a new design space exploration approach adapted to fine and coarse granularities. Our approach combines algorithmic and architecture exploration. It relies on an automatic estimation tool which computes the hierarchical distribution of communications and the utilization rate of the architecture's processing resources for the architecture under exploration. Such an approach supports the rapid definition of efficient reconfigurable architectures dedicated to one or several applications.

  12. TAD-free analysis of architectural proteins and insulators.

    Science.gov (United States)

    Mourad, Raphaël; Cuvier, Olivier

    2018-03-16

    The three-dimensional (3D) organization of the genome is intimately related to numerous key biological functions including the regulation of gene expression and DNA replication. The mechanisms by which molecular drivers functionally organize the 3D genome, such as topologically associating domains (TADs), remain to be explored. Current approaches assess the enrichment or influence of proteins at TAD borders. Here, we propose a TAD-free model to directly estimate the blocking effects of architectural proteins, insulators and DNA motifs on long-range contacts, making the model intuitive and biologically meaningful. In addition, the model allows analysis of the whole Hi-C information content (2D information) instead of focusing only on TAD borders (1D information). The model outperforms multiple logistic regression at TAD borders in terms of parameter estimation accuracy and is validated by enhancer-blocking assays. In Drosophila, the results support the insulating role of simple sequence repeats and suggest that the blocking effects depend on the number of repeats. Motif analysis uncovered the roles of the transcription factors pannier and tramtrack in blocking long-range contacts. In humans, the results suggest that the blocking effects of the well-known architectural proteins CTCF, cohesin and ZNF143 depend on the distance between loci, where each protein may participate at different scales of the 3D chromatin organization.

  13. Neural Architectures for Control

    Science.gov (United States)

    Peterson, James K.

    1991-01-01

    The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance are developed for three hardware platforms, the Macintosh, the IBM PC, and the SUN workstation. All algorithm development was done using the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real time on an MS-DOS 386 PC. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog-valued obstacle fields. The method constructs a coarse-resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine-scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back propagation through time, and adaptive critic designs.
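
    For readers unfamiliar with the CMAC, the following minimal Python sketch shows the core tile-coding idea (overlapping offset tilings with local weight updates) on a toy function-approximation task; it is a generic illustration, not the controllers or software tools described in the report.

        import numpy as np

        class CMAC:
            """Minimal CMAC (tile-coding) function approximator for a 1-D input."""
            def __init__(self, n_tilings=8, tiles_per_dim=10, lr=0.1, x_range=(0.0, 1.0)):
                self.n_tilings = n_tilings
                self.tiles = tiles_per_dim + 1          # +1 so offset tilings stay in range
                self.lr = lr / n_tilings
                self.lo, self.hi = x_range
                self.w = np.zeros((n_tilings, self.tiles))

            def _active_tiles(self, x):
                scaled = (x - self.lo) / (self.hi - self.lo) * (self.tiles - 1)
                for t in range(self.n_tilings):
                    offset = t / self.n_tilings          # each tiling is shifted slightly
                    yield t, int(min(scaled + offset, self.tiles - 1))

            def predict(self, x):
                return sum(self.w[t, i] for t, i in self._active_tiles(x))

            def update(self, x, target):
                err = target - self.predict(x)
                for t, i in self._active_tiles(x):
                    self.w[t, i] += self.lr * err

        # Train the CMAC online to approximate sin(2*pi*x) on [0, 1].
        cmac = CMAC()
        rng = np.random.default_rng(0)
        for x in rng.uniform(0.0, 1.0, 5000):
            cmac.update(x, np.sin(2 * np.pi * x))
        print(round(cmac.predict(0.25), 2))   # close to 1.0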

  14. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr. (.,; .); Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  15. Innovative architecture design for high performance organic and hybrid multi-junction solar cells

    Science.gov (United States)

    Li, Ning; Spyropoulos, George D.; Brabec, Christoph J.

    2017-08-01

    The multi-junction concept is especially attractive to the photovoltaic (PV) research community owing to its potential to overcome the Shockley-Queisser limit of single-junction solar cells. Tremendous research interest is now focused on the development of high-performance absorbers and novel device architectures for emerging PV technologies, such as organic and perovskite PVs. It has been predicted that the multi-junction concept can push organic and perovskite PV technologies toward the 20% and 30% benchmarks, respectively, pointing to a bright future for the commercialization of these emerging PV technologies. In this contribution, we demonstrate innovative architecture designs for solution-processed, highly functional organic and hybrid multi-junction solar cells. A simple but elegant approach to fabricating organic and hybrid multi-junction solar cells is introduced. By laminating single organic/hybrid solar cells together through an intermediate layer, the manufacturing cost and complexity of large-scale multi-junction solar cells can be significantly reduced. This smart approach to balancing the photocurrents as well as the open-circuit voltages in multi-junction solar cells is demonstrated and discussed in detail.

  16. The Simulation Intranet Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Holmes, V.P.; Linebarger, J.M.; Miller, D.J.; Vandewart, R.L.

    1998-12-02

    The Simulation Intranet (SI) is a term used to describe one element of a multidisciplinary distributed and distance computing initiative known as DisCom2 at Sandia National Laboratory. The Simulation Intranet is an architecture for satisfying Sandia's long-term goal of providing an end-to-end set of services for high-fidelity, full-physics simulations in a high performance, distributed, and distance computing environment. The Intranet Architecture group was formed to apply current distributed object technologies to this problem. For the hardware architectures and software models involved in the current simulation process, a CORBA-based architecture is best suited to meet Sandia's needs. This paper presents the initial design and implementation of this Intranet based on a three-tier Network Computing Architecture (NCA). The major parts of the architecture include the Web Client, the Business Objects, and Data Persistence.

  17. High-performance reconfigurable hardware architecture for restricted Boltzmann machines.

    Science.gov (United States)

    Ly, Daniel Le; Chow, Paul

    2010-11-01

    Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.
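
    To make the computation being accelerated concrete, the sketch below shows one contrastive-divergence (CD-1) update for a tiny binary RBM in plain NumPy; this is the kind of node-update arithmetic the FPGA engines parallelize, not the authors' hardware design.

        import numpy as np

        rng = np.random.default_rng(0)
        sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

        def cd1_step(v0, W, b_v, b_h, lr=0.05):
            """One contrastive-divergence (CD-1) update for a small binary RBM."""
            # Positive phase: sample hidden units given the data vector.
            h_prob0 = sigmoid(v0 @ W + b_h)
            h0 = (rng.random(h_prob0.shape) < h_prob0).astype(float)
            # Negative phase: one Gibbs step back to visible, then hidden probabilities.
            v_prob1 = sigmoid(h0 @ W.T + b_v)
            h_prob1 = sigmoid(v_prob1 @ W + b_h)
            # Parameter updates from the difference of correlations.
            W += lr * (np.outer(v0, h_prob0) - np.outer(v_prob1, h_prob1))
            b_v += lr * (v0 - v_prob1)
            b_h += lr * (h_prob0 - h_prob1)

        # Toy 6-visible x 4-hidden RBM trained on one repeating binary pattern.
        W = 0.01 * rng.standard_normal((6, 4))
        b_v, b_h = np.zeros(6), np.zeros(4)
        pattern = np.array([1., 0., 1., 0., 1., 0.])
        for _ in range(1000):
            cd1_step(pattern, W, b_v, b_h)
        h = sigmoid(pattern @ W + b_h)
        print(np.round(sigmoid(h @ W.T + b_v), 2))   # reconstruction is close to the pattern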

  18. VERNACULAR ARCHITECTURE: AN INTRODUCTORY COURSE TO LEARN ARCHITECTURE IN INDIA

    Directory of Open Access Journals (Sweden)

    Miki Desai

    2010-07-01

    -climatic forces, human and material resources, and techniques that satisfy the socio-cultural needs and desires of a given people. Research, analysis, large-scale model making, simulation, actual-size mockups and the like engage the students in a make-believe world of architectural learning in this course.

  19. Big Data Analytics Embedded Smart City Architecture for Performance Enhancement through Real-Time Data Processing and Decision-Making

    Directory of Open Access Journals (Sweden)

    Bhagya Nathali Silva

    2017-01-01

    Full Text Available The concept of the smart city is widely favored, as it enhances the quality of life of urban citizens, involving multiple disciplines, that is, smart community, smart transportation, smart healthcare, smart parking, and many more. Continuous growth of complex urban networks is significantly challenged by real-time data processing and intelligent decision-making capabilities. Therefore, in this paper, we propose a smart city framework based on Big Data analytics. The proposed framework operates on three levels: (1) a data generation and acquisition level collecting heterogeneous data related to city operations, (2) a data management and processing level filtering, analyzing, and storing data to make decisions and trigger events autonomously, and (3) an application level initiating execution of the events corresponding to the received decisions. In order to validate the proposed architecture, we analyze a few major types of datasets based on the proposed three-level architecture. Further, we tested authentic datasets on the Hadoop ecosystem to determine the threshold, and the analysis shows that the proposed architecture offers useful insights to community development authorities for improving the existing smart city architecture.
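
    A toy sketch of the three-level flow (generation, processing, application) is given below; the event types, fields and threshold are invented for illustration and are unrelated to the datasets analysed in the paper.

        # Level 1: data generation and acquisition (hypothetical sensor readings).
        SENSOR_READINGS = [
            {"source": "parking", "lot": "A", "occupancy": 0.95},
            {"source": "traffic", "road": "R1", "speed_kmh": 12},
            {"source": "parking", "lot": "B", "occupancy": 0.40},
        ]

        def process(readings, occupancy_threshold=0.9):
            """Level 2: filter and analyse readings, emitting decisions."""
            decisions = []
            for r in readings:
                if r["source"] == "parking" and r["occupancy"] > occupancy_threshold:
                    decisions.append(("redirect_drivers", r["lot"]))
                if r["source"] == "traffic" and r["speed_kmh"] < 20:
                    decisions.append(("retime_signals", r["road"]))
            return decisions

        def execute(decisions):
            """Level 3: application level initiates the corresponding events."""
            for action, target in decisions:
                print(f"executing {action} for {target}")

        execute(process(SENSOR_READINGS))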

  20. New energy storage option: toward ZnCo2O4 nanorods/nickel foam architectures for high-performance supercapacitors.

    Science.gov (United States)

    Liu, Bin; Liu, Boyang; Wang, Qiufan; Wang, Xianfu; Xiang, Qingyi; Chen, Di; Shen, Guozhen

    2013-10-23

    Hierarchical ZnCo2O4/nickel foam architectures were fabricated for the first time via a simple, scalable solution approach, exhibiting outstanding electrochemical performance in supercapacitors with high specific capacitance (∼1400 F g(-1) at 1 A g(-1)), excellent rate capability (72.5% capacity retention at 20 A g(-1)), and good cycling stability (only 3% loss after 1000 cycles at 6 A g(-1)). All-solid-state supercapacitors were also fabricated by assembling two pieces of the ZnCo2O4-based electrodes, showing superior performance in terms of high specific capacitance and long cycling stability. Our work confirms that the as-prepared architectures can not only be applied in high-energy-density fields, but can also be used in high-power-density applications, such as electric vehicles, flexible electronics, and energy storage devices.

  1. Benchmarking the energy performance of office buildings: A data envelopment analysis approach

    Directory of Open Access Journals (Sweden)

    Molinos-Senante, María

    2016-12-01

    Full Text Available The achievement of energy efficiency in buildings is an important challenge facing both developed and developing countries. Very few papers have assessed the energy efficiency of office buildings using real data. To overcome this limitation, this paper proposes an energy efficiency index for buildings having a large window-to-wall ratio, and uses this index to identify the main architectural factors affecting energy performance. This paper assesses, for the first time, the energy performances of 34 office buildings in Santiago, Chile, by using data envelopment analysis. Overall energy efficiency is decomposed into two indices: the architectural energy efficiency index, and the management energy efficiency index. This decomposition is an essential step in identifying the main drivers of energy inefficiency and designing measures for improvement. Office buildings examined here have significant room for improving their energy efficiencies, saving operational costs and reducing greenhouse gas emissions. The methodology and results of this study will be of great interest to building managers and policymakers seeking to increase the sustainability of cities.
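
    For readers unfamiliar with data envelopment analysis, the sketch below solves the standard input-oriented CCR efficiency problem for a toy set of buildings with one input (energy use) and one output (floor area); the data and model configuration are illustrative and are not those of the paper.

        import numpy as np
        from scipy.optimize import linprog

        def dea_efficiency(inputs, outputs, j):
            """Input-oriented CCR efficiency of unit j (envelopment form)."""
            x, y = np.atleast_2d(inputs), np.atleast_2d(outputs)   # shapes (m, n), (s, n)
            n = x.shape[1]
            c = np.r_[1.0, np.zeros(n)]                            # minimise theta
            # inputs:  sum_k lambda_k * x[i,k] - theta * x[i,j] <= 0
            A_in = np.hstack([-x[:, [j]], x])
            # outputs: -sum_k lambda_k * y[r,k] <= -y[r,j]
            A_out = np.hstack([np.zeros((y.shape[0], 1)), -y])
            A_ub = np.vstack([A_in, A_out])
            b_ub = np.r_[np.zeros(x.shape[0]), -y[:, j]]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(None, None)] + [(0, None)] * n)
            return res.fun

        # Toy data: one input (annual energy use, MWh) and one output (floor area, m2).
        energy = [[100, 80, 120, 90]]
        area = [[2000, 1800, 2100, 1500]]
        for j in range(4):
            print(f"building {j}: efficiency = {dea_efficiency(energy, area, j):.3f}")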

  2. Flexible weapons architecture design

    Science.gov (United States)

    Pyant, William C., III

    Present-day air-delivered weapons have a closed architecture, with little to no ability to tailor the weapon to an individual engagement. Closed architectures require weaponeers to make the target fit the weapon instead of fitting individual weapons to a target. The flexible weapons concept aims to modularize weapons design using an open-architecture shell into which different modules are inserted to achieve the desired target fractional damage while reducing cost and civilian casualties. This thesis shows that the architectural design factors of damage mechanism, fusing, weapons weight, guidance, and propulsion are significant in enhancing weapon performance objectives and would benefit from modularization. Additionally, this thesis constructs an algorithm that can be used to design a weapon set for a particular target class based on these modular components.

  3. Service Modularity and Architecture

    DEFF Research Database (Denmark)

    Brax, Saara A.; Bask, Anu; Hsuan, Juliana

    2017-01-01

    Purpose: Services are highly important in a world economy which has increasingly become service driven. There is a growing need to better understand the possibilities for, and requirements of, designing modular service architectures. The purpose of this paper is to elaborate on the roots of the emerging research stream on service modularity, provide a concise overview of existing work on the subject, and outline an agenda for future research on service modularity and architecture. The articles in the special issue offer four diverse sets of research on service modularity and architecture. Topics on the research agenda include platform-based and mass-customized service business models, comparative research designs, customer perspectives and service experience, performance in the context of modular services, empirical evidence of benefits and challenges, architectural innovation in services, and modularization in multi-provider contexts. Design...

  4. Space-Based Information Infrastructure Architecture for Broadband Services

    Science.gov (United States)

    Price, Kent M.; Inukai, Tom; Razdan, Rajendev; Lazeav, Yvonne M.

    1996-01-01

    This study addressed four tasks: (1) identify satellite-addressable information infrastructure markets; (2) perform network analysis for space-based information infrastructure; (3) develop conceptual architectures; and (4) economic assessment of architectures. The report concludes that satellites will have a major role in the national and global information infrastructure, requiring seamless integration between terrestrial and satellite networks. The proposed LEO, MEO, and GEO satellite systems have satellite characteristics that vary widely. They include delay, delay variations, poorer link quality and beam/satellite handover. The barriers against seamless interoperability between satellite and terrestrial networks are discussed. These barriers are the lack of compatible parameters, standards and protocols, which are presently being evaluated and reduced.

  5. Electromagnetic Physics Models for Parallel Computing Architectures

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  6. CMOL/CMOS hardware architectures and performance/price for Bayesian memory - The building block of intelligent systems

    Science.gov (United States)

    Zaveri, Mazad Shaheriar

    The semiconductor/computer industry has been following Moore's law for several decades and has reaped the benefits in speed and density of the resultant scaling. Transistor density has reached almost one billion per chip, and transistor delays are in picoseconds. However, scaling has slowed down, and the semiconductor industry is now facing several challenges. Hybrid CMOS/nano technologies, such as CMOL, are considered as an interim solution to some of the challenges. Another potential architectural solution includes specialized architectures for applications/models in the intelligent computing domain, one aspect of which includes abstract computational models inspired from the neuro/cognitive sciences. Consequently in this dissertation, we focus on the hardware implementations of Bayesian Memory (BM), which is a (Bayesian) Biologically Inspired Computational Model (BICM). This model is a simplified version of George and Hawkins' model of the visual cortex, which includes an inference framework based on Judea Pearl's belief propagation. We then present a "hardware design space exploration" methodology for implementing and analyzing the (digital and mixed-signal) hardware for the BM. This particular methodology involves: analyzing the computational/operational cost and the related micro-architecture, exploring candidate hardware components, proposing various custom hardware architectures using both traditional CMOS and hybrid nanotechnology - CMOL, and investigating the baseline performance/price of these architectures. The results suggest that CMOL is a promising candidate for implementing a BM. Such implementations can utilize the very high density storage/computation benefits of these new nano-scale technologies much more efficiently; for example, the throughput per 858 mm2 (TPM) obtained for CMOL based architectures is 32 to 40 times better than the TPM for a CMOS based multiprocessor/multi-FPGA system, and almost 2000 times better than the TPM for a PC

  7. Assessing the genetic architecture of epithelial ovarian cancer histological subtypes

    DEFF Research Database (Denmark)

    Cuellar-Partida, Gabriel; Lu, Yi; Dixon, Suzanne C

    2016-01-01

    studies show that certain genetic variants confer susceptibility to all subtypes while other variants are subtype-specific. Here, we perform an extensive analysis of the genetic architecture of EOC subtypes. To this end, we used data from 10,014 invasive EOC patients and 21,233 controls from the Ovarian...

  8. Architectural Strategies for Enabling Data-Driven Science at Scale

    Science.gov (United States)

    Crichton, D. J.; Law, E. S.; Doyle, R. J.; Little, M. M.

    2017-12-01

    The analysis of large data collections from NASA or other agencies is often executed through traditional computational and data analysis approaches, which require users to bring data to their desktops and perform local data analysis. Alternatively, data are hauled to large computational environments that provide centralized data analysis via traditional High Performance Computing (HPC). Scientific data archives, however, are not only growing massive, but are also becoming highly distributed. Neither traditional approach provides a good solution for optimizing analysis into the future. Assumptions across the NASA mission and science data lifecycle, which historically assume that all data can be collected, transmitted, processed, and archived, will not scale as more capable instruments stress legacy-based systems. New paradigms are needed to increase the productivity and effectiveness of scientific data analysis. This paradigm must recognize that architectural and analytical choices are interrelated, and must be carefully coordinated in any system that aims to allow efficient, interactive scientific exploration and discovery to exploit massive data collections, from point of collection (e.g., onboard) to analysis and decision support. The most effective approach to analyzing a distributed set of massive data may involve some exploration and iteration, putting a premium on the flexibility afforded by the architectural framework. The framework should enable scientist users to assemble workflows efficiently, manage the uncertainties related to data analysis and inference, and optimize deep-dive analytics to enhance scalability. In many cases, this "data ecosystem" needs to be able to integrate multiple observing assets, ground environments, archives, and analytics, evolving from stewardship of measurements of data to using computational methodologies to better derive insight from the data that may be fused with other sets of data. This presentation will discuss

  9. Study of Selected Components of Architectural Environment of Primary Schools - Preferences of Adults and Analysis of the Specialist Literature

    Science.gov (United States)

    Halarewicz, Aleksandra

    2017-10-01

    The school is one of the oldest social institutions, designed to prepare a young person for adult life. It performs a teaching and an educational function in the child's life. Apart from home, it is the place where the child spends most of the day; it is therefore one of the most important institutions in the life of a young person. The school environment has a direct impact on the student's personality and ambition, and it shapes the attitude of the young person. Therefore, the design process preceding the establishment of school facilities carries great responsibility and should be conducted in a conscious and thoughtful way. This article summarizes and attempts to synthesize the data obtained from the survey carried out by the author in the context of the design guidelines contained in the specialist literature. The questionnaire survey was designed to determine adults' preferences, opinions and perceptions about selected components of the primary school environment, including the factors which determine the choice of school for children and the priorities among architectural components intended for early-childhood use, to specify the type and the scale of existing drawbacks and problems in school construction, and to capture expectations about the contemporary architecture of primary schools and its future changes. Moreover, based on the analysis of the available specialist literature, the article broadly discusses the general division and characterization of school spaces and issues related to the influence of selected components of the architectural environment on the physical, mental and psychological safety of children. Furthermore, the author raises the subject of the influence of architectural interiors and furniture on the mood, emotions and comfort of children of early school age, based on the anthropometric characteristics of children and issues related to the perception of space with an extra

  10. Network contingencies in the relationship between design rules and architectural innovation performance

    NARCIS (Netherlands)

    Hofman, Erwin; Halman, Johannes; van Looy, Bart

    2016-01-01

    Architectural innovation is fundamental to the renewal of technological systems. However, it can be a real challenge to organize architectural innovation, all the more so when success hinges upon close collaboration with other firms that are responsible for different subsystems of the end product.

  11. Exploring Hardware Support For Scaling Irregular Applications on Multi-node Multi-core Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Secchi, Simone; Ceriani, Marco; Tumeo, Antonino; Villa, Oreste; Palermo, Gianluca; Raffo, Luigi

    2013-06-05

    With the recent emergence of large-scale knowledge discovery, data mining and social network analysis, irregular applications have gained renewed interest. Classic cache-based high-performance architectures do not provide optimal performance with such kinds of workloads, mainly due to the very low spatial and temporal locality of the irregular control and memory access patterns. In this paper, we present a multi-node, multi-core, fine-grained multi-threaded shared-memory system architecture specifically designed for the execution of large-scale irregular applications, and built on top of three pillars that we believe are fundamental to support these workloads. First, we offer transparent hardware support for Partitioned Global Address Space (PGAS) to provide a large globally-shared address space with no software library overhead. Second, we employ multi-threaded multi-core processing nodes to achieve the necessary latency tolerance required by accessing global memory, which potentially resides in a remote node. Finally, we devise hardware support for inter-thread synchronization on the whole global address space. We first model performance using an analytical model that takes into account the main architecture and application characteristics. We describe the hardware design of the proposed custom architectural building blocks that provide support for the above-mentioned three pillars. Finally, we present a limited-scale evaluation of the system on a multi-board FPGA prototype with typical irregular kernels and benchmarks. The experimental evaluation demonstrates the architecture performance scalability for different configurations of the whole system.
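
    A back-of-the-envelope way to see why fine-grained multi-threading is one of the pillars is Little's law: the number of in-flight threads a core needs is roughly the remote-access latency divided by the interval between remote references. The figures in the sketch below are purely illustrative.

        def threads_to_hide_latency(remote_latency_ns, issue_interval_ns):
            """Little's-law estimate of how many concurrent threads a core needs so
            that remote-memory latency is fully overlapped; inputs are illustrative."""
            return max(1, round(remote_latency_ns / issue_interval_ns))

        # Hypothetical figures: a remote PGAS access takes ~1000 ns, and each thread
        # issues a new remote reference every ~50 ns of useful work.
        print(threads_to_hide_latency(1000, 50))   # -> 20 threads per core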

  12. Performative Responsive Architecture Powered by Climate

    DEFF Research Database (Denmark)

    Foged, Isak Worre; Pasold, Anke

    2010-01-01

    This paper links the thermonastic behaviour found in flower heads in nature with material research into bimetallic strips. This is to advance the discussion of environmentally responsive systems on the basis of thermal properties for advanced environmental studies within the field of architecture in general, and in the form of a responsive building skin in particular.

  13. Transitioning ISR architecture into the cloud

    Science.gov (United States)

    Lash, Thomas D.

    2012-06-01

    Emerging cloud computing platforms offer an ideal opportunity for Intelligence, Surveillance, and Reconnaissance (ISR) intelligence analysis. Cloud computing platforms help overcome challenges and limitations of traditional ISR architectures. Modern ISR architectures can benefit from examining commercial cloud applications, especially as they relate to user experience, usage profiling, and transformational business models. This paper outlines legacy ISR architectures and their limitations, presents an overview of cloud technologies and their applications to the ISR intelligence mission, and presents an idealized ISR architecture implemented with cloud computing.

  14. An Architectural Style for Closed-loop Process-Control

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Eriksen, Ole

    2003-01-01

    This report describes an architectural style for distributed closed-loop process control systems with high performance and hard real-time constraints. The style strikes a good balance between the architectural qualities of performance and modifiability/maintainability that traditionally are often...

  15. An Architectural Style for Closed-loop Process-Control

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    This report describes an architectural style for distributed closed-loop process control systems with high performance and hard real-time constraints. The style strikes a good balance between the architectural qualities of performance and modifiability/maintainability that traditionally are often...

  16. A procedure for the evaluation of 2D radiographic texture analysis to assess 3D bone micro-architecture

    International Nuclear Information System (INIS)

    Apostol, L.; Peyrin, F.; Yot, S.; Basset, O.; Odet, Ch.; Apostal, L.; Peyrin, F.; Boller, E.; Tabary, J.; Dinten, J.M.; Boudousq, V.; Kotzki, P.O.

    2004-01-01

    Although the diagnosis of osteoporosis is mainly based on dual X-ray absorptiometry, it has been shown that trabecular bone micro-architecture is also an important factor with regard to fracture risk, and it can be efficiently assessed in vitro using three-dimensional X-ray microtomography (μCT). In vivo, techniques based on high-resolution X-ray radiography associated with texture analysis have been proposed to investigate bone micro-architecture, but their relevance for providing pertinent 3D information is unclear. The purpose of this work was to develop a method for evaluating the relationships between 3D micro-architecture and 2D texture parameters, and for optimizing the conditions of radiographic imaging. Bone sample images spanning cortical to cortical bone were acquired using 3D synchrotron X-ray μCT at the ESRF. The 3D digital images were further used for two purposes: 1) quantification of three-dimensional bone micro-architecture, and 2) simulation of realistic X-ray radiographs under different acquisition conditions. Texture analysis was then applied to these 2D radiographs using a large variety of methods (co-occurrence, spectrum, fractal...). First results of the statistical analysis between 2D and 3D parameters allowed the most relevant 2D texture parameters to be identified. (authors)
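
    As an example of the kind of co-occurrence-based 2D texture parameters mentioned, the sketch below computes grey-level co-occurrence contrast and homogeneity on a synthetic image; it is a generic illustration, not the study's processing chain.

        import numpy as np

        def cooccurrence_features(img, levels=8, dx=1, dy=0):
            """Grey-level co-occurrence matrix (GLCM) contrast and homogeneity
            for one pixel displacement (dx, dy)."""
            q = (img.astype(float) / img.max() * (levels - 1)).astype(int)   # quantise
            glcm = np.zeros((levels, levels))
            h, w = q.shape
            for r in range(h - dy):
                for c in range(w - dx):
                    glcm[q[r, c], q[r + dy, c + dx]] += 1
            glcm /= glcm.sum()
            i, j = np.indices(glcm.shape)
            contrast = np.sum(glcm * (i - j) ** 2)
            homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
            return contrast, homogeneity

        # Synthetic image: smooth gradient plus noise standing in for trabecular texture.
        rng = np.random.default_rng(1)
        img = np.linspace(0, 255, 64)[None, :].repeat(64, axis=0) + rng.normal(0, 20, (64, 64))
        img = np.clip(img, 0, 255)
        print(cooccurrence_features(img))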

  17. Monte Carlo based performance assessment of different animal PET architectures using pixellated CZT detectors

    International Nuclear Information System (INIS)

    Visvikis, D.; Lefevre, T.; Lamare, F.; Kontaxakis, G.; Santos, A.; Darambara, D.

    2006-01-01

    The majority of present positron emission tomography (PET) animal systems are based on the coupling of high-density scintillators and light detectors. A disadvantage of these detector configurations is the compromise between image resolution, sensitivity and energy resolution. In addition, current combined imaging devices are based on simply placing different apparatus back-to-back and in axial alignment, without any significant level of software or hardware integration. The use of semiconductor CdZnTe (CZT) detectors is a promising alternative to scintillators for gamma-ray imaging systems. At the same time, CZT detectors have the potential properties necessary for the construction of a truly integrated imaging device (PET/SPECT/CT). The aim of this study was to assess the performance of different small-animal PET scanner architectures based on CZT pixellated detectors and to compare their performance with that of state-of-the-art existing PET animal scanners. Different scanner architectures were modelled using GATE (Geant4 Application for Tomographic Emission). Particular scanner design characteristics included an overall cylindrical scanner format of 8 and 24 cm in axial and transaxial field of view, respectively, and a temporal coincidence window of 8 ns. Different individual detector modules were investigated, considering pixel pitches down to 0.625 mm and detector thicknesses from 1 to 5 mm. Modified NEMA NU2-2001 protocols were used in order to simulate performance under mouse, rat and monkey imaging conditions. These protocols allowed us to directly compare the performance of the proposed geometries with the latest generation of current small-animal systems. Results attained demonstrate the potential for a higher NECR with CZT-based scanners in comparison to scintillator-based animal systems.
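
    For reference, the noise-equivalent count rate (NECR) cited above is commonly computed from the true (T), scattered (S) and random (R) coincidence rates as NECR = T^2 / (T + S + kR), with k = 1 or 2 depending on the randoms-correction scheme; the rates in the sketch below are invented for illustration.

        def necr(trues, scatters, randoms, k=2):
            """Noise-equivalent count rate (counts/s). k=2 corresponds to delayed-window
            randoms subtraction; k=1 to a noiseless randoms estimate."""
            return trues ** 2 / (trues + scatters + k * randoms)

        # Purely illustrative coincidence rates for a small-animal acquisition.
        print(f"NECR = {necr(trues=2.0e5, scatters=5.0e4, randoms=3.0e4):.3g} counts/s")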

  18. Digital architecture, wearable computers and providing affinity

    DEFF Research Database (Denmark)

    Guglielmi, Michel; Johannesen, Hanne Louise

    2005-01-01

    as the setting for the events of experience. Contemporary architecture is a meta-space residing almost any thinkable field, striving to blur boundaries between art, architecture, design and urbanity and break down the distinction between the material and the user or inhabitant. The presentation for this paper...... will, through research, a workshop and participation in a cumulus competition, focus on the exploration of boundaries between digital architecture, performative space and wearable computers. Our design method in general focuses on the interplay between the performing body and the environment – between...

  19. High performance architecture design for large scale fibre-optic sensor arrays using distributed EDFAs and hybrid TDM/DWDM

    Science.gov (United States)

    Liao, Yi; Austin, Ed; Nash, Philip J.; Kingsley, Stuart A.; Richardson, David J.

    2013-09-01

    A distributed amplified dense wavelength division multiplexing (DWDM) array architecture is presented for interferometric fibre-optic sensor array systems. This architecture employs a distributed erbium-doped fibre amplifier (EDFA) scheme to decrease the array insertion loss, and employs time division multiplexing (TDM) at each wavelength to increase the number of sensors that can be supported. The first experimental demonstration of this system is reported including results which show the potential for multiplexing and interrogating up to 4096 sensors using a single telemetry fibre pair with good system performance. The number can be increased to 8192 by using dual pump sources.

  20. Importance of New Use of Concrete in Iraq Analysis of Development And Use of Concrete in Architecture

    Directory of Open Access Journals (Sweden)

    Mohammed Ridha Shakir Majeed

    2015-04-01

    Full Text Available Since its invention by the ancient Romans and its further development in the mid-18th century, concrete, as both structure and finish, has been considered one of the most powerful, practical and economical construction materials able to meet a building's architectural and aesthetic requirements. By creating unique architectural forms, pioneering architects have used concrete widely to shape their innovative designs and buildings. The pre-mixed ultra-high-performance concrete manufactured by Lafarge, and the transparent concrete and cement that allow light beams to pass through them, make possible remarkably well-lit architectural spaces within the same structural criteria. These products are recyclable, sustainable, environmentally friendly and cost-efficient alternatives. Owing to its characteristics, strength, flexibility, affordability and long-term performance, concrete has been integrated into and has contributed to modern architecture, urbanism and civil development. Most 20th-century architects employed high-tech concrete methods to deliver iconic and bespoke architectural monuments worldwide. The interaction between architectural form and concrete as a buildable, executable, structural and constructional material has always been a central concern for architects across generations. Formalism in architecture was first identified with the Art Nouveau movement during the early 20th century in Europe as well as in North America. It formed, utilized and sculpted concrete to meet the use, function, and aesthetic and spatial needs of spaces. This wave generated a series of the most significant, outstanding and impressive buildings in the architectural record. It was followed by the Brutalist architecture presented by Alison and Peter Smithson in England and by Le Corbusier's works in Marseille and India. Alvar Aalto and Louis Kahn also made tremendous use of concrete to erect public-interest developments

  1. Electromagnetic Physics Models for Parallel Computing Architectures

    International Nuclear Information System (INIS)

    Amadio, G; Bianchini, C; Iope, R; Ananya, A; Apostolakis, J; Aurora, A; Bandieramonte, M; Brun, R; Carminati, F; Gheata, A; Gheata, M; Goulas, I; Nikitina, T; Bhattacharyya, A; Mohanty, A; Canal, P; Elvira, D; Jun, S Y; Lima, G; Duhem, L

    2016-01-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well. (paper)

  2. A Dual Launch Robotic and Human Lunar Mission Architecture

    Science.gov (United States)

    Jones, David L.; Mulqueen, Jack; Percy, Tom; Griffin, Brand; Smitherman, David

    2010-01-01

    This paper describes a comprehensive lunar exploration architecture developed by Marshall Space Flight Center's Advanced Concepts Office that features a science-based surface exploration strategy and a transportation architecture that uses two launches of a heavy lift launch vehicle to deliver human and robotic mission systems to the moon. The principal advantage of the dual launch lunar mission strategy is the reduced cost and risk resulting from the development of just one launch vehicle system. The dual launch lunar mission architecture may also enhance opportunities for commercial and international partnerships by using expendable launch vehicle services for robotic missions or development of surface exploration elements. Furthermore, this architecture is particularly suited to the integration of robotic and human exploration to maximize science return. For surface operations, an innovative dual-mode rover is presented that is capable of performing robotic science exploration as well as transporting human crew conducting surface exploration. The dual-mode rover can be deployed to the lunar surface to perform precursor science activities, collect samples, scout potential crew landing sites, and meet the crew at a designated landing site. With this approach, the crew is able to evaluate the robotically collected samples to select the best samples for return to Earth to maximize the scientific value. The rovers can continue robotic exploration after the crew leaves the lunar surface. The transportation system for the dual launch mission architecture uses a lunar-orbit-rendezvous strategy. Two heavy lift launch vehicles depart from Earth within a six hour period to transport the lunar lander and crew elements separately to lunar orbit. In lunar orbit, the crew transfer vehicle docks with the lander and the crew boards the lander for descent to the surface. After the surface mission, the crew returns to the orbiting transfer vehicle for the return to the Earth. This

  3. Using enterprise architecture to analyse how organisational structure impact motivation and learning

    Science.gov (United States)

    Närman, Pia; Johnson, Pontus; Gingnell, Liv

    2016-06-01

    When technology, environment, or strategies change, organisations need to adjust their structures accordingly. These structural changes do not always enhance organisational performance as intended, partly because organisational developers do not understand the performance consequences of structural changes. This article presents a model-based framework for quantitative analysis of the effect of organisational structure on organisational performance, in terms of employee motivation and learning. The model is based on Mintzberg's work on organisational structure. The quantitative analysis is formalised using the Object Constraint Language (OCL) and the Unified Modelling Language (UML) and implemented in an enterprise architecture tool.

  4. High Performance Motion-Planner Architecture for Hardware-In-the-Loop System Based on Position-Based-Admittance-Control

    Directory of Open Access Journals (Sweden)

    Francesco La Mura

    2018-02-01

    Full Text Available This article focuses on a Hardware-In-the-Loop application developed within the advanced energy project LIFES50+. The aim is to replicate, inside a wind tunnel test facility, the combined effect of aerodynamic and hydrodynamic loads on a floating wind turbine model for offshore energy production, using a force-controlled robotic device that emulates the floating substructure's behaviour. In addition to well-known real-time Hardware-In-the-Loop (HIL) issues, the particular application presented has stringent safety requirements for the HIL equipment and difficult-to-predict operating conditions, so that extra computational effort has to be spent running specific safety algorithms while achieving the desired performance. To meet the project requirements, a high-performance software architecture based on Position-Based-Admittance-Control (PBAC) is presented, combining low-level motion interpolation techniques, efficient motion planning based on buffer management and time-base control, and advanced high-level safety algorithms, implemented in a rapid real-time control architecture.
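
    The core of position-based admittance control is to integrate a virtual mass-spring-damper driven by the measured force and to stream the resulting position to the robot's motion planner. The sketch below illustrates this with made-up gains and a step force; it is not the LIFES50+ controller.

        def admittance_step(x, v, f_ext, dt, M=10.0, B=50.0, K=200.0):
            """One integration step of the virtual dynamics M*a + B*v + K*x = f_ext.
            The resulting position is what a position-based admittance controller
            would send to the robot. Gains and time step are illustrative."""
            a = (f_ext - B * v - K * x) / M
            v_new = v + a * dt
            x_new = x + v_new * dt          # semi-implicit Euler integration
            return x_new, v_new

        # Simulate the position reference produced in response to a 100 N step force.
        x, v, dt = 0.0, 0.0, 0.001
        trajectory = []
        for k in range(2000):               # 2 s at 1 kHz
            x, v = admittance_step(x, v, f_ext=100.0, dt=dt)
            trajectory.append(x)
        print(f"steady-state displacement ~ {trajectory[-1]:.3f} m (expected F/K = 0.5 m)")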

  5. Architectural analysis of Douglas-fir forests

    NARCIS (Netherlands)

    Kuiper, L.C.

    1994-01-01

    The architecture of natural and semi-natural Douglas-fir forest ecosystems in western Washington and western Oregon was analyzed by various case-studies, to yield vital information needed for the design of new silvicultural systems with a high level of biodiversity, intended for low-input

  6. Architectures for wrist-worn energy harvesting

    Science.gov (United States)

    Rantz, R.; Halim, M. A.; Xue, T.; Zhang, Q.; Gu, L.; Yang, K.; Roundy, S.

    2018-04-01

    This paper reports the simulation-based analysis of six dynamical structures with respect to their wrist-worn vibration energy harvesting capability. This work approaches the problem of maximizing energy harvesting potential at the wrist by considering multiple mechanical substructures; rotational and linear motion-based architectures are examined. Mathematical models are developed and experimentally corroborated. An optimization routine is applied to the proposed architectures to maximize average power output and allow for comparison. The addition of a linear spring element to the structures has the potential to improve power output; for example, in the case of rotational structures, a 211% improvement in power output was estimated under real walking excitation. The analysis concludes that a sprung rotational harvester architecture outperforms a sprung linear architecture by 66% when real walking data is used as input to the simulations.
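
    As background, a lumped-parameter inertial harvester driven by base excitation is commonly written as the second-order model below; the sprung rotational and linear architectures studied in the paper are more elaborate, so this is only an illustrative reference form, not the authors' model:

        m\ddot{z}(t) + (b_e + b_m)\dot{z}(t) + k\,z(t) = -m\,\ddot{y}(t),
        \qquad \bar{P}_e = b_e \,\langle \dot{z}^2(t) \rangle

    with z the proof-mass deflection relative to the wrist, \ddot{y} the measured wrist acceleration, b_e the electrical (transduction) damping, b_m the mechanical damping, and k the added spring; the optimization described above amounts to choosing parameters such as k and b_e to maximize the average electrical power under the walking excitation.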

  7. Global architecture of gestational diabetes research: density-equalizing mapping studies and gender analysis.

    Science.gov (United States)

    Brüggmann, Dörthe; Richter, Theresa; Klingelhöfer, Doris; Gerber, Alexander; Bundschuh, Matthias; Jaque, Jenny; Groneberg, David A

    2016-04-04

    Gestational diabetes mellitus (GDM) is associated with substantial morbidity for mothers and their offspring. While clinical and basic research activities on this important disease grow constantly, there is no concise analysis of global architecture of GDM research. Hence, it was the objective of this study to assess the global scientific performance chronologically, geographically and in relation to existing research networks and gender distribution of publishing authors. On the basis of the New Quality and Quantity Indices in Science (NewQIS) platform, scientometric methods were combined with modern visualizing techniques such as density equalizing mapping, and the Web of Science database was used to assess GDM-related entries from 1900 to 2012. Twelve thousand five hundred four GDM-related publications were identified and analyzed. The USA (4295 publications) and the UK (1354 publications) dominated the field concerning research activity, overall citations and country-specific Hirsch-Index, which quantified the impact of a country's published research on the scientific community. Semi-qualitative indices such as country-specific citation rates ranked New Zealand and the UK at top positions. Annual collaborative publications increased steeply between the years 1990 and 2012 (71 to 1157 respectively). Subject category analysis pointed to a minor interest of public health issues in GDM research. Gender analysis in terms of publication authorship revealed a clear dominance of the male gender until 2005; then a trend towards gender equity started and the activity of female scientists grew visibly in many countries. The country-specific gender analysis revealed large differences, i.e. female scientists dominated the scientific output in the USA, whereas the majority of research was published by male authors in countries such as Japan. This study provides the first global sketch of GDM research architecture. While North-American and Western-European countries were

  8. Economic Analysis on the Space Transportation Architecture Study (STAS) NASA Team

    Science.gov (United States)

    Shaw, Eric J.

    1999-01-01

    The National Aeronautics and Space Administration (NASA) performed the Space Transportation Architecture Study (STAS) to provide information to support end-of-the-decade decisions on possible near-term US Government (USG) investments in space transportation. To gain a clearer understanding of the costs and benefits of the broadest range of possible space transportation options, six teams, five from aerospace industry companies and one internal to NASA, were tasked to answer three primary questions: a) If the Space Shuttle system should be replaced; b) If so, when the replacement should take place and how the transition should be implemented; and c) If not, what is the upgrade strategy to continue safe and affordable flight of the Space Shuttle beyond 2010. The overall goal of the Study was "to develop investment options to be considered by the Administration for the President's FY2001 budget to meet NASA's future human space flight requirements with significant reductions in costs." This emphasis on government investment, coupled with the participation by commercial firms, required an unprecedented level of economic analysis of costs and benefits from both industry and government viewpoints. This paper will discuss the economic and market models developed by the in-house NASA Team to analyze space transportation architectures, the results of those analyses, and how those results were reflected in the conclusions and recommendations of the STAS NASA Team. Copyright 1999 by the American Institute of Aeronautics and Astronautics, Inc. No copyright is asserted in the United States under Title 17, U.S. Code. The U.S. Government has a royalty-free license to exercise all rights under the copyright claimed herein for Governmental purposes. All other rights are reserved by the copyright owner.

  9. Study on the utilization of the cognitive architecture EPIC to the task analysis of a nuclear power plant operator

    International Nuclear Information System (INIS)

    Soares, Herculano Vieira

    2003-02-01

    This work presents a study of the use of the integrative cognitive architecture EPIC (Executive-Process Interactive-Control), designed to evaluate the performance of a person performing tasks in parallel at a man-machine interface, as a methodology for Cognitive Task Analysis of a nuclear power plant operator. A comparison is made between the results obtained by simulation with EPIC and the results obtained by application of the MHP model to the tasks performed by a shift operator during the execution of the procedure PO-E-3 - Steam Generator Tube Rupture - of the Angra 1 Nuclear Power Plant. To support that comparison, an experiment was performed at the Angra 2 Nuclear Power Plant Full Scope Simulator in which three operator tasks were executed, their completion times measured, and the results compared with those of the MHP and EPIC modelling. (author)

  10. The Complexity of Architecture : An Analysis of Design Intentions and Theories in the Norwegian National Tourist Routes

    OpenAIRE

    Roan, Beck

    2013-01-01

    This thesis analyzes the design intentions of architects in the Norwegian National Tourist Routes. Since architecture is not an isolated concept, this analysis incorporates philosophical and theoretical ideas in an investigation of the Tourist Route site design. Although they were hired to build compelling architectural sites along unique Norwegian nature, the Tourist Route architects also address subjects like practicality, balance, intention, meaning, control, freedom, and reflection in the...

  11. Non-Planar Nanotube and Wavy Architecture Based Ultra-High Performance Field Effect Transistors

    KAUST Repository

    Hanna, Amir

    2016-01-01

    This dissertation also introduces a novel thin-film-transistors architecture that is named the Wavy Channel (WC) architecture, which allows for extending device width by integrating vertical fin-like substrate corrugations giving

  12. Insights into Working Memory from The Perspective of The EPIC Architecture for Modeling Skilled Perceptual-Motor and Cognitive Human Performance

    National Research Council Canada - National Science Library

    Kieras, David

    1998-01-01

    Computational modeling of human perceptual-motor and cognitive performance based on a comprehensive detailed information- processing architecture leads to new insights about the components of working memory...

  13. N-Doped Dual Carbon-Confined 3D Architecture rGO/Fe3O4/AC Nanocomposite for High-Performance Lithium-Ion Batteries.

    Science.gov (United States)

    Ding, Ranran; Zhang, Jie; Qi, Jie; Li, Zhenhua; Wang, Chengyang; Chen, Mingming

    2018-04-25

    To address the issues of low electrical conductivity, sluggish lithiation kinetics, and dramatic volume variation in Fe3O4 anodes for lithium-ion batteries, a dual carbon-confined three-dimensional (3D) nanocomposite architecture was synthesized herein by an electrostatically assisted self-assembly strategy. In the constructed architecture, the ultrafine Fe3O4 subunits (∼10 nm) self-organize into nanospheres (NSs) that are fully coated by amorphous carbon (AC), forming core-shell structural Fe3O4/AC NSs. By further encapsulation in reduced graphene oxide (rGO) layers, a 3D architecture was built as dual carbon-confined rGO/Fe3O4/AC. Such a structure restrains adverse reactions with the electrolyte, improves the electronic conductivity, and buffers the mechanical stress of the entire electrode, thus delivering excellent long-term cycling stability (99.4% capacity retention after 465 cycles relative to the second cycle at 5 A g-1). Kinetic analysis reveals that a dual lithium storage mechanism, combining a diffusion-controlled mechanism and a surface capacitive behavior mechanism, coexists in the composites. Consequently, the resulting rGO/Fe3O4/AC nanocomposite delivers a high reversible capacity (835.8 mA h g-1 over 300 cycles at 1 A g-1), as well as remarkable rate capability (436.7 mA h g-1 at 10 A g-1).
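
    The dual-mechanism kinetic analysis mentioned above is usually quantified with the standard scan-rate decomposition shown below; whether the authors used exactly this formulation is not stated in the abstract:

        i(V) = k_1 v + k_2 v^{1/2}

    where i(V) is the current at a fixed potential, v is the CV scan rate, k_1 v is the surface-capacitive contribution and k_2 v^{1/2} the diffusion-controlled contribution; fitting i/v^{1/2} against v^{1/2} at each potential yields k_1 and k_2, and hence the fraction of charge stored by each mechanism.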

  14. Multi-threaded Sparse Matrix-Matrix Multiplication for Many-Core and GPU Architectures.

    Energy Technology Data Exchange (ETDEWEB)

    Deveci, Mehmet [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Trott, Christian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

    Sparse Matrix-Matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
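
    To make the accumulator discussion concrete, the sketch below shows a plain row-wise (Gustavson-style) SpGEMM with a hash-map accumulator in Python; it illustrates the data structure being compared, not the kkSpGEMM implementation itself, which targets many-core and GPU backends.

        # Illustrative row-wise (Gustavson) SpGEMM with a hash-map accumulator over CSR arrays.
        # A minimal sketch of the accumulator idea, not the paper's kkSpGEMM kernel.

        def spgemm_csr(a_ptr, a_idx, a_val, b_ptr, b_idx, b_val, n_rows):
            c_ptr, c_idx, c_val = [0], [], []
            for i in range(n_rows):
                acc = {}                              # hash-map accumulator for row i of C
                for jj in range(a_ptr[i], a_ptr[i + 1]):
                    j, a_ij = a_idx[jj], a_val[jj]
                    for kk in range(b_ptr[j], b_ptr[j + 1]):
                        k = b_idx[kk]
                        acc[k] = acc.get(k, 0.0) + a_ij * b_val[kk]
                for k in sorted(acc):                 # emit row i in column order
                    c_idx.append(k)
                    c_val.append(acc[k])
                c_ptr.append(len(c_idx))
            return c_ptr, c_idx, c_val

        # Example: multiply two 2x2 diagonal matrices stored in CSR form.
        ptr, idx, val = spgemm_csr([0, 1, 2], [0, 1], [1.0, 2.0],
                                   [0, 1, 2], [0, 1], [3.0, 4.0], 2)
        print(ptr, idx, val)   # [0, 1, 2] [0, 1] [3.0, 8.0]

    Dense-array and sorted-list accumulators trade memory footprint against insertion cost, which is exactly the kind of data-structure choice the meta-algorithm automates.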

  15. A Hybrid Architecture for Vision-Based Obstacle Avoidance

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Güzel

    2013-01-01

    Full Text Available This paper proposes a new obstacle avoidance method, called the Hybrid Architecture, that uses a single monocular vision camera as its only sensor. This architecture integrates a high-performance appearance-based obstacle detection method into an optical flow-based navigation system. The hybrid architecture was designed and implemented to run both methods simultaneously and is able to combine the results of each method using a novel arbitration mechanism. The proposed strategy successfully fuses the two vision-based obstacle avoidance methods through this arbitration mechanism in order to permit a safer obstacle avoidance system. Accordingly, to establish the adequacy of the design of the obstacle avoidance system, a series of experiments was conducted. The results demonstrate the characteristics of the proposed architecture and show that its performance is somewhat better than that of the conventional optical flow-based architecture. In particular, the robot employing the Hybrid Architecture avoids lateral obstacles in a smoother and more robust manner than when using the conventional optical flow-based technique.

  16. Automatic Functionality Assignment to AUTOSAR Multicore Distributed Architectures

    DEFF Research Database (Denmark)

    Maticu, Florin; Pop, Paul; Axbrink, Christian

    2016-01-01

    The automotive electronic architectures have moved from federated architectures, where one function is implemented in one ECU (Electronic Control Unit), to distributed architectures, where several functions may share resources on an ECU. In addition, multicore ECUs are being adopted because of better performance, cost, size, fault-tolerance and power consumption. In this paper we present an approach for the automatic software functionality assignment to multicore distributed architectures. We consider that the systems use the AUTomotive Open System ARchitecture (AUTOSAR). The functionality

  17. Base Camp Architecture

    Directory of Open Access Journals (Sweden)

    Warebi Gabriel Brisibe

    2016-03-01

    Full Text Available Longitudinal or time line studies of change in the architecture of a particular culture are common, but an area still open to further research is change across space or place. In particular, there is need for studies on architectural change of cultures stemming from the same ethnic source split between their homeland and other Diasporas. This change may range from minor deviations to drastic shifts away from an architectural norm and the accumulation of these shifts within a time frame constitutes variations. This article focuses on identifying variations in the architecture of the Ijo fishing group that migrates along the coastline of West Africa. It examines the causes of cross-cultural variation between base camp dwellings of Ijo migrant fishermen in the Bakassi Peninsula in Cameroon and Bayelsa State in Nigeria. The study draws on the idea of the inevitability of cultural and social change over time as proposed in the theories of cultural dynamism and evolution. It tests aspects of cultural transmission theory using the principal coordinates analysis to ascertain the possible causes of variation. From the findings, this research argues that migration has enhanced the forces of cultural dynamism, which have resulted in significant variations in the architecture of this fishing group.

  18. A set-theoretic model reference adaptive control architecture for disturbance rejection and uncertainty suppression with strict performance guarantees

    Science.gov (United States)

    Arabi, Ehsan; Gruenwald, Benjamin C.; Yucelen, Tansel; Nguyen, Nhan T.

    2018-05-01

    Research in adaptive control algorithms for safety-critical applications is primarily motivated by the fact that these algorithms have the capability to suppress the effects of adverse conditions resulting from exogenous disturbances, imperfect dynamical system modelling, degraded modes of operation, and changes in system dynamics. Although government and industry agree on the potential of these algorithms in providing safety and reducing vehicle development costs, a major issue is the inability to achieve a-priori, user-defined performance guarantees with adaptive control algorithms. In this paper, a new model reference adaptive control architecture for uncertain dynamical systems is presented to address disturbance rejection and uncertainty suppression. The proposed framework is predicated on a set-theoretic adaptive controller construction using generalised restricted potential functions. The key feature of this framework allows the system error bound between the state of an uncertain dynamical system and the state of a reference model, which captures a desired closed-loop system performance, to be less than an a-priori, user-defined worst-case performance bound, and hence, it has the capability to enforce strict performance guarantees. Examples are provided to demonstrate the efficacy of the proposed set-theoretic model reference adaptive control architecture.
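
    Stated compactly, the guarantee described above is of the following type; the weighted norm, the restricted potential function and the adaptation law are defined in the paper itself, so only the shape of the bound is reproduced here:

        e(t) \triangleq x(t) - x_r(t), \qquad \|e(t)\| < \epsilon \quad \forall\, t \ge 0

    where x is the state of the uncertain system, x_r the state of the reference model, and \epsilon the a-priori, user-defined worst-case performance bound that the set-theoretic adaptation enforces.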

  19. Computer Architecture A Quantitative Approach

    CERN Document Server

    Hennessy, John L

    2007-01-01

    The era of seemingly unlimited growth in processor performance is over: single chip architectures can no longer overcome the performance limitations imposed by the power they consume and the heat they generate. Today, Intel and other semiconductor firms are abandoning the single fast processor model in favor of multi-core microprocessors--chips that combine two or more processors in a single package. In the fourth edition of Computer Architecture, the authors focus on this historic shift, increasing their coverage of multiprocessors and exploring the most effective ways of achieving parallelis

  20. Cathode architectures for alkali metal / oxygen batteries

    Science.gov (United States)

    Visco, Steven J; Nimon, Vitaliy; De Jonghe, Lutgard C; Volfkovich, Yury; Bograchev, Daniil

    2015-01-13

    Electrochemical energy storage devices, such as alkali metal-oxygen battery cells (e.g., non-aqueous lithium-air cells), have a cathode architecture with a porous structure and pore composition that is tailored to improve cell performance, especially as it pertains to one or more of the discharge/charge rate, cycle life, and delivered ampere-hour capacity. A porous cathode architecture having a pore volume that is derived from pores of varying radii wherein the pore size distribution is tailored as a function of the architecture thickness is one way to achieve one or more of the aforementioned cell performance improvements.

  1. New results on performance analysis of opportunistic regenerative relaying

    KAUST Repository

    Tourki, Kamel

    2013-12-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path may be unusable, and we take into account the effect of possibly erroneously detected and transmitted data at the selected relay. We first derive the signal-to-noise ratio (SNR) statistics for each hop, which are used to determine accurate closed-form expressions for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation and the end-to-end outage probability for a transmission rate R over Rayleigh fading channels. Furthermore, we evaluate the asymptotic performance and deduce the diversity order. Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over a linear network architecture. © 2013 Elsevier B.V.
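
    For context, the per-hop building block behind such closed forms is the classical average BER of BPSK over a Rayleigh fading channel; the paper's end-to-end expressions for the selection-based regenerative scheme are considerably more involved, so the formula below is only the standard single-link reference:

        \bar{P}_b = \frac{1}{2}\left(1 - \sqrt{\frac{\bar{\gamma}}{1 + \bar{\gamma}}}\right)

    where \bar{\gamma} is the average received SNR of the link; at high SNR this behaves as 1/(4\bar{\gamma}), i.e. diversity order one per link, which is why relay selection is needed to raise the end-to-end diversity order.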

  2. The application of diagrams in architectural design

    Directory of Open Access Journals (Sweden)

    Dulić Olivera

    2014-01-01

    Full Text Available Diagrams in architecture represent the visualization of the thinking process, or a selective abstraction of concepts or ideas translated into the form of drawings. In addition, they provide insight into the way of thinking about and in architecture, thus creating a balance between the visual and the conceptual. The subject of the research presented in this paper is diagrams as a specific kind of architectural representation, and the possibilities and importance of their application in the design process. Diagrams are almost as old as architecture itself, and they are an element of some of the most important studies of architecture during all periods of history - which results in a large number of different definitions of diagrams, but also very different conceptualizations of their features, functions and applications. Diagrams became part of contemporary architectural discourse during the eighties and nineties of the twentieth century, especially through the work of architects like Bernard Tschumi, Peter Eisenman, Rem Koolhaas, SANAA and others. The use of diagrams in the design process allows unification of some of the essential aspects of the profession: architectural representation and the design process, as well as the question of the concept of architectural and urban design at a time of rapid changes at all levels of contemporary society. The aim of the research is the analysis of the diagram as a specific medium for processing large amounts of information that the architect should consider and incorporate into the architectural work. On that basis, it is assumed that an architectural diagram allows the creator the identification and analysis of specific elements or ideas of physical form, thereby constantly maintaining the concept of the integrity of the architectural work.

  3. Multi-threaded Sparse Matrix Sparse Matrix Multiplication for Many-Core and GPU Architectures.

    Energy Technology Data Exchange (ETDEWEB)

    Deveci, Mehmet [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Trott, Christian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2018-01-01

    Sparse Matrix-Matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.

  4. SANDS: an architecture for clinical decision support in a National Health Information Network.

    Science.gov (United States)

    Wright, Adam; Sittig, Dean F

    2007-10-11

    A new architecture for clinical decision support called SANDS (Service-oriented Architecture for NHIN Decision Support) is introduced and its performance evaluated. The architecture provides a method for performing clinical decision support across a network, as in a health information exchange. Using the prototype we demonstrated that, first, a number of useful types of decision support can be carried out using our architecture; and, second, that the architecture exhibits desirable reliability and performance characteristics.

  5. Power-efficient computer architectures recent advances

    CERN Document Server

    Själander, Magnus; Kaxiras, Stefanos

    2014-01-01

    As Moore's Law and Dennard scaling trends have slowed, the challenges of building high-performance computer architectures while maintaining acceptable power efficiency levels have heightened. Over the past ten years, architecture techniques for power efficiency have shifted from primarily focusing on module-level efficiencies, toward more holistic design styles based on parallelism and heterogeneity. This work highlights and synthesizes recent techniques and trends in power-efficient computer architecture. Table of Contents: Introduction / Voltage and Frequency Management / Heterogeneity and Sp

  6. Tank waste remediation system architecture tree

    International Nuclear Information System (INIS)

    PECK, L.G.

    1999-01-01

    The TWRS Architecture Tree presented in this document is a hierarchical breakdown to support the TWRS systems engineering analysis of the TWRS physical system, including facilities, hardware and software. The purpose for this systems engineering architecture tree is to describe and communicate the system's selected and existing architecture, to provide a common structure to improve the integration of work and resulting products, and to provide a framework as a basis for TWRS Specification Tree development

  7. Optical linear algebra processors - Architectures and algorithms

    Science.gov (United States)

    Casasent, David

    1986-01-01

    Attention is given to the component design and optical configuration features of a generic optical linear algebra processor (OLAP) architecture, as well as the large number of OLAP architectures, number representations, algorithms and applications encountered in current literature. Number-representation issues associated with bipolar and complex-valued data representations, high-accuracy (including floating point) performance, and the base or radix to be employed, are discussed, together with case studies on a space-integrating frequency-multiplexed architecture and a hybrid space-integrating and time-integrating multichannel architecture.

  8. Performance analysis

    International Nuclear Information System (INIS)

    2008-05-01

    This book introduces the energy and resource technology development business together with its performance analysis, covering the business divisions and definitions, an analysis of the current state of support, the substance of the national basic plan for energy and resource technology development, the selection of analysis indices, the results of the performance analysis by index, the results of the performance investigation, and an analysis and appraisal of the energy and resource technology development business in 2007.

  9. Exploration of Heterogeneous FPGA Architectures

    Directory of Open Access Journals (Sweden)

    Umer Farooq

    2011-01-01

    Mesh- and tree-based architectures are evaluated for three sets of benchmark circuits. Experimental results show that a more flexible floor-planning in the mesh-based FPGA gives better results than the column-based floor-planning. It is also shown that, compared to different floor-plannings of the mesh-based FPGA, the tree-based architecture gives better area, performance, and power results.

  10. Optimized readout configuration for PIXE spectrometers based on Silicon Drift Detectors: Architecture and performance

    International Nuclear Information System (INIS)

    Alberti, R.; Grassi, N.; Guazzoni, C.; Klatka, T.

    2009-01-01

    An optimized readout configuration based on a charge preamplifier with pulsed-reset has been designed for Silicon Drift Detectors (SDDs) to be used in Particle Induced X-ray Emission (PIXE) measurements. The customized readout electronics is able to manage the large pulses originated by the protons backscattered from the target material that would otherwise cause significant degradation of X-ray spectra and marked increase in dead time. In this way, the excellent performance of SDDs can be exploited in high-quality proton-induced spectroscopy of low- and medium-energy X-rays. This paper describes the designed readout architecture and the performance characterization carried out in a PIXE setup with MeV proton beams.

  11. A Study On The Influence Of Illuminance Quality To Student’s Performance Of Visual Activities: Case Study Of Architecture Studio Room In Universitas Islam Indonesia

    OpenAIRE

    Bayuaji, Wisnu Hendrawan; Ayu Iswardhani, Tanty Kesuma; Risky, Alif Angga

    2017-01-01

    This research aims to measure and evaluate the level of illuminance quality in the architecture studio room in relation to the achievement of standard performance and the minimum visual comfort that should be fulfilled. This study also explores the effect of both illumination quality and quantity on the students' physical and psychological performance when conducting visual activities in the architecture studio room. This evaluation will also evaluate the deviation between the perception of brightne...

  12. Evaluating the performance of the particle finite element method in parallel architectures

    Science.gov (United States)

    Gimenez, Juan M.; Nigro, Norberto M.; Idelsohn, Sergio R.

    2014-05-01

    This paper presents a high-performance implementation of the particle-mesh based method called the particle finite element method two (PFEM-2). It consists of a material-derivative based formulation of the equations with a hybrid spatial discretization which uses an Eulerian mesh and Lagrangian particles. The main aim of PFEM-2 is to solve transport equations as fast as possible while keeping some level of accuracy. The method was found to be competitive with classical Eulerian alternatives for these targets, even in their range of optimal application. To evaluate the suitability of the method for large simulations, it is imperative to use parallel environments. Parallel strategies for the Finite Element Method have been widely studied and many libraries can be used to solve the Eulerian stages of PFEM-2. However, Lagrangian stages, such as streamline integration, must be developed considering the parallel strategy selected. The main drawback of PFEM-2 is the large amount of memory needed, which limits its application to large problems on a single computer. Therefore, a distributed-memory implementation is urgently needed. Unlike a shared-memory approach, using domain decomposition the memory is automatically isolated, thus avoiding race conditions; however, new issues appear due to data distribution over the processes. Thus, a domain decomposition strategy for both particles and mesh is adopted, which minimizes the communication between processes. Finally, performance analyses run on multicore and multinode architectures are presented. The Courant-Friedrichs-Lewy number used influences the efficiency of the parallelization and, in some cases, a weighted partitioning can be used to improve the speed-up. However, the total CPU time for the cases presented is lower than that obtained when using classical Eulerian strategies.
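
    Schematically, the material-derivative formulation referred to above rewrites a generic transport equation so that advection is integrated along particle trajectories while the remaining terms are solved on the Eulerian mesh; the exact splitting and integrator used in PFEM-2 are described in the paper, so this is only the generic form:

        \frac{D\phi}{Dt} \equiv \frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = \nu\,\nabla^2\phi + s

    The Lagrangian stage transports \phi along the streamlines of \mathbf{u} (the stage whose distributed-memory parallelization is discussed above), while the diffusive term \nu\nabla^2\phi and the source s are handled on the mesh.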

  13. Architectural slicing

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2013-01-01

    Architectural prototyping is a widely used practice, concerned with taking architectural decisions through experiments with lightweight implementations. However, many architectural decisions are only taken when systems are already (partially) implemented. This is problematic in the context of architectural prototyping since experiments with full systems are complex and expensive and thus architectural learning is hindered. In this paper, we propose a novel technique for harvesting architectural prototypes from existing systems, "architectural slicing", based on dynamic program slicing. Given a system and a slicing criterion, architectural slicing produces an architectural prototype that contains the elements in the architecture that are dependent on the elements in the slicing criterion. Furthermore, we present an initial design and implementation of an architectural slicer for Java.

  14. Changes in Muscle Architecture, Explosive Ability, and Track and Field Throwing Performance Throughout a Competitive Season and After a Taper.

    Science.gov (United States)

    Bazyler, Caleb D; Mizuguchi, Satoshi; Harrison, Alex P; Sato, Kimitake; Kavanaugh, Ashley A; DeWeese, Brad H; Stone, Michael H

    2017-10-01

    The purpose of this study was to examine the effects of an overreach and taper on measures of muscle architecture, jumping, and throwing performance in Division I collegiate throwers preparing for conference championships. Six collegiate track and field throwers (3 hammer, 2 discus, 1 javelin) trained for 12 weeks using a block-periodization model culminating with a 1-week overreach followed by a 3-week taper (ORT). Session rating of perceived exertion training load (RPETL) and strength training volume-load times bar displacement (VLd) were recorded weekly. Athletes were tested pre-ORT and post-ORT on measures of vastus lateralis architecture, unloaded and loaded squat and countermovement jump performance, underhand and overhead throwing performance, and competition throwing performance. There was a statistical reduction in weight training VLd/session (d = 1.21, p ≤ 0.05) and RPETL/session (d = 0.9, p ≤ 0.05) between the in-season and ORT training phases. Five of 6 athletes improved overhead throw and competition throwing performance after the ORT (d = 0.50, p ≤ 0.05). Vastus lateralis muscle thickness statistically increased after the in-season training phase (d = 0.28, p ≤ 0.05) but did not change after the ORT. Unloaded countermovement jump peak force and relative peak power improved significantly after the ORT (d = 0.59, p ≤ 0.05, d = 0.31, p ≤ 0.05, respectively). These findings demonstrate that an overreaching week followed by a 3-week taper is an effective means of improving explosive ability and throwing performance in collegiate track and field throwers despite the absence of detectable changes in muscle architecture.

  15. Economic assessment model architecture for AGC/AVLIS selection

    International Nuclear Information System (INIS)

    Hoglund, R.L.

    1984-01-01

    The economic assessment model architecture described provides the flexibility and completeness in economic analysis that the selection between AGC and AVLIS demands. Process models which are technology-specific will provide the first-order responses of process performance and cost to variations in process parameters. The economics models can be used to test the impacts of alternative deployment scenarios for a technology. Enterprise models provide global figures of merit for evaluating the DOE perspective on the uranium enrichment enterprise, and business analysis models compute the financial parameters from the private investor's viewpoint

  16. INTEGRATED INFORMATION SYSTEM ARCHITECTURE PROVIDING BEHAVIORAL FEATURE

    Directory of Open Access Journals (Sweden)

    Vladimir N. Shvedenko

    2016-11-01

    Full Text Available The paper deals with the creation of an integrated information system architecture capable of supporting management decisions using behavioral features. The paper considers the architecture of an information decision support system for production system management. The behavioral feature is given to the information system to ensure extraction and processing of information and management decision-making, with both automated and automatic modes of the decision-making subsystem being permitted. Practical implementation of an information system with behavior is based on a service-oriented architecture: there is a set of independent services in the information system that provide data from its subsystems or data processing by separate applications under the chosen variant of problematic-situation settlement. For the creation of an integrated information system with behavior we propose an architecture including the following subsystems: a data bus, a subsystem for interaction with the integrated applications based on metadata, a business process management subsystem, a subsystem for analysis of the current state of the enterprise and management decision-making, and a behavior training subsystem. For each problematic situation a separate logical-layer service is created in the Unified Service Bus handling problematic situations. This architecture reduces system information complexity due to the fact that, with a constant number of system elements, the number of links decreases, since each layer provides a communication center of responsibility for the resource with the services of the corresponding applications. If a similar problematic situation occurs, its resolution is automatically retrieved from the problem-situation metamodel repository and the business process metamodel of its settlement. During business process execution, commands are generated to the corresponding centers of responsibility to settle the problematic situation.

  17. Transforming the existing building stock to high performed energy efficient and experienced architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    The project Sustainable Renovation examines the challenge of the current and future architectural renovation of Danish suburbs which were designed in the period from 1945 to 1973. The research project takes its starting point in the perspectives of energy optimization and the fact that the building ... architectural heritage to energy efficiency and from architectural quality to sustainability. The first, second and third renovations are discussed from financial and sustainability viewpoints. The role of housing related to the public energy supply system and the relation between the levels of renovation ...

  18. SUSTAINABLE ARCHITECTURE : WHAT ARCHITECTURE STUDENTS THINK

    OpenAIRE

    SATWIKO, PRASASTO

    2013-01-01

    Sustainable architecture has become a hot issue lately as the impacts of climate change become more intense. Architecture education has responded by integrating knowledge of sustainable design into its curriculum. However, in real life, new buildings keep coming with designs that completely ignore sustainable principles. This paper discusses the results of two national competitions on sustainable architecture targeted at architecture students (conducted in 2012 and 2013). The results a...

  19. A performance analysis of DS-CDMA and SCPC VSAT networks

    Science.gov (United States)

    Hayes, David P.; Ha, Tri T.

    1990-01-01

    Spread-spectrum and single-channel-per-carrier (SCPC) transmission techniques work well in very small aperture terminal (VSAT) networks for multiple-access purposes while allowing the earth station antennas to remain small. Direct-sequence code-division multiple-access (DS-CDMA) is the simplest spread-spectrum technique to use in a VSAT network since a frequency synthesizer is not required for each terminal. An examination is made of the DS-CDMA and SCPC Ku-band VSAT satellite systems for low-density (64-kb/s or less) communications. A method for improving the standard link analysis of DS-CDMA satellite-switched networks by including certain losses is developed. The performance of 50-channel full mesh and star network architectures is analyzed. The selection of operating conditions producing optimum performance is demonstrated.
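
    As a point of reference, link analyses of asynchronous DS-CDMA commonly start from the standard Gaussian approximation for multiple-access interference shown below; the paper's contribution is to refine such an analysis with additional loss terms, which are not reproduced here:

        \mathrm{SNR}_{eff} \approx \left(\frac{K-1}{3N} + \frac{N_0}{2E_b}\right)^{-1},
        \qquad P_b \approx Q\!\left(\sqrt{\mathrm{SNR}_{eff}}\right)

    where K is the number of simultaneous users, N the processing gain, and E_b/N_0 the per-bit SNR of the single-user link.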

  20. An open source/real-time atomic force microscope architecture to perform customizable force spectroscopy experiments.

    Science.gov (United States)

    Materassi, Donatello; Baschieri, Paolo; Tiribilli, Bruno; Zuccheri, Giampaolo; Samorì, Bruno

    2009-08-01

    We describe the realization of an atomic force microscope architecture designed to perform customizable experiments in a flexible and automatic way. Novel technological contributions are given by the software implementation platform (RTAI-LINUX), which is free and open source, and from a functional point of view, by the implementation of hard real-time control algorithms. Some other technical solutions such as a new way to estimate the optical lever constant are described as well. The adoption of this architecture provides many degrees of freedom in the device behavior and, furthermore, allows one to obtain a flexible experimental instrument at a relatively low cost. In particular, we show how such a system has been employed to obtain measures in sophisticated single-molecule force spectroscopy experiments [Fernandez and Li, Science 303, 1674 (2004)]. Experimental results on proteins already studied using the same methodologies are provided in order to show the reliability of the measure system.

  1. Architecture

    OpenAIRE

    Clear, Nic

    2014-01-01

    When discussing science fiction’s relationship with architecture, the usual practice is to look at the architecture “in” science fiction—in particular, the architecture in SF films (see Kuhn 75-143) since the spaces of literary SF present obvious difficulties as they have to be imagined. In this essay, that relationship will be reversed: I will instead discuss science fiction “in” architecture, mapping out a number of architectural movements and projects that can be viewed explicitly as scien...

  2. Wavy Channel architecture thin film transistor (TFT) using amorphous zinc oxide for high-performance and low-power semiconductor circuits

    KAUST Repository

    Hanna, Amir; Hussain, Aftab M.; Hussain, Muhammad Mustafa

    2015-01-01

    We report a Wavy Channel (WC) architecture thin-film transistor (TFT) with extended device width achieved by integrating continuous vertical fin-like features with the lateral continuous plane of the substrate. For a WC TFT which has 50% larger device width, the enhancement in the output drive current is 100% when compared to a conventional planar TFT consuming the same chip area. This current increase is attributed to both the extra width and enhanced field-effect mobility due to corner effects. This shows the potential of the WC architecture to boost circuit performance without the need for aggressive gate-length scaling. © 2015 IEEE.

  3. Wavy Channel architecture thin film transistor (TFT) using amorphous zinc oxide for high-performance and low-power semiconductor circuits

    KAUST Repository

    Hanna, Amir

    2015-08-12

    We report a Wavy Channel (WC) architecture thin-film transistor (TFT) with extended device width achieved by integrating continuous vertical fin-like features with the lateral continuous plane of the substrate. For a WC TFT which has 50% larger device width, the enhancement in the output drive current is 100% when compared to a conventional planar TFT consuming the same chip area. This current increase is attributed to both the extra width and enhanced field-effect mobility due to corner effects. This shows the potential of the WC architecture to boost circuit performance without the need for aggressive gate-length scaling. © 2015 IEEE.
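
    The reported numbers can be read through the standard long-channel square-law drain current, used here only to illustrate the decomposition claimed in the abstract (it is not the device model of these papers):

        I_{D,\mathrm{sat}} = \tfrac{1}{2}\,\mu_{FE}\, C_{ox}\, \frac{W}{L}\,(V_{GS} - V_T)^2

    A 50% larger effective width W alone would give roughly +50% drive current, so the observed +100% implies an additional factor of about 1.33 in effective field-effect mobility \mu_{FE}, consistent with the corner-effect enhancement mentioned above.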

  4. A Massively Scalable Architecture for Instant Messaging & Presence

    NARCIS (Netherlands)

    Schippers, Jorrit; Remke, Anne Katharina Ingrid; Punt, Henk; Wegdam, M.; Haverkort, Boudewijn R.H.M.; Thomas, N.; Bradley, J.; Knottenbelt, W.; Dingle, N.; Harder, U.

    2010-01-01

    This paper analyzes the scalability of Instant Messaging & Presence (IM&P) architectures. We take a queueing-based modelling and analysis approach to find the bottlenecks of the current IM&P architecture at the Dutch social network Hyves, as well as of alternative architectures. We use the
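
    As an illustration of the kind of queueing-based bottleneck estimate such an analysis produces, the sketch below evaluates the mean waiting time of an M/M/c station in Python; the rates and server count are invented numbers, not measurements from the Hyves platform or the models of the paper.

        # Illustrative Erlang-C (M/M/c) waiting-time calculation for a message-handling tier.
        from math import factorial

        def erlang_c_wait(lam, mu, c):
            """Mean waiting time in an M/M/c queue (lam = arrival rate, mu = service rate per server)."""
            a = lam / mu                   # offered load
            rho = a / c                    # server utilisation, must be < 1 for stability
            if rho >= 1:
                return float('inf')
            p_wait = (a**c / factorial(c)) / ((1 - rho) * sum(a**k / factorial(k) for k in range(c))
                                              + a**c / factorial(c))
            return p_wait / (c * mu - lam)  # Erlang-C probability divided by residual capacity

        # Hypothetical example: 80 messages/s offered to 10 servers handling 10 messages/s each.
        print(erlang_c_wait(lam=80.0, mu=10.0, c=10))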

  5. Cyber threat impact assessment and analysis for space vehicle architectures

    Science.gov (United States)

    McGraw, Robert M.; Fowler, Mark J.; Umphress, David; MacDonald, Richard A.

    2014-06-01

    This paper covers research into an assessment of potential impacts and techniques to detect and mitigate cyber attacks that affect the networks and control systems of space vehicles. Such systems, if subverted by malicious insiders, external hackers and/or supply chain threats, can be controlled in a manner to cause physical damage to the space platforms. Similar attacks on Earth-borne cyber physical systems include the Shamoon, Duqu, Flame and Stuxnet exploits. These have been used to bring down foreign power generation and refining systems. This paper discusses the potential impacts of similar cyber attacks on space-based platforms through the use of simulation models, including custom models developed in Python using SimPy and commercial SATCOM analysis tools such as STK/SOLIS. The paper discusses the architecture and fidelity of the simulation model that has been developed for performing the impact assessment. The paper walks through the application of an attack vector at the subsystem level and how it affects the control and orientation of the space vehicle. SimPy is used to model and extract raw impact data at the bus level, while STK/SOLIS is used to extract raw impact data at the subsystem level and to visually display the effect on the physical plant of the space vehicle.
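
    Since the abstract names SimPy as the bus-level modelling tool, a minimal discrete-event sketch of an attack window degrading a subsystem is shown below; the subsystem, rates and degradation logic are purely hypothetical and are not taken from the authors' models.

        # Hypothetical SimPy sketch: a spacecraft subsystem accumulates pointing error faster
        # while an injected "attack" process keeps it in a compromised state.
        import random
        import simpy

        def adcs(env, state):
            """Attitude-control loop: each cycle drifts more if the subsystem is compromised."""
            while True:
                yield env.timeout(1.0)                     # 1 s control cycle
                drift = 0.5 if state["compromised"] else 0.01
                state["pointing_error"] += random.uniform(0, drift)

        def attack(env, state, start=20.0, duration=10.0):
            """Open a cyber-attack window [start, start + duration)."""
            yield env.timeout(start)
            state["compromised"] = True
            yield env.timeout(duration)
            state["compromised"] = False

        random.seed(1)
        state = {"compromised": False, "pointing_error": 0.0}
        env = simpy.Environment()
        env.process(adcs(env, state))
        env.process(attack(env, state))
        env.run(until=60)
        print("accumulated pointing error after 60 s:", round(state["pointing_error"], 3))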

  6. Space and place concepts analysis based on semiology approach in residential architecture

    Directory of Open Access Journals (Sweden)

    Mojtaba Parsaee

    2015-12-01

    Full Text Available Space and place are among the fundamental concepts in architecture about which many discussions have been held and the complexity and importance of these concepts were focused on. This research has introduced an approach to better cognition of the architectural concepts based on theory and method of semiology in linguistics. Hence, at first the research investigates the concepts of space and place and explains their characteristics in architecture. Then, it reviews the semiology theory and explores its concepts and ideas. After obtaining the principles of theory and also the method of semiology, they are redefined in an architectural system based on an adaptive method. Finally, the research offers a conceptual model which is called the semiology approach by considering the architectural system as a system of signs. The approach can be used to decode the content of meanings and forms and analyses of the architectural mechanism in order to obtain its meanings and concepts. In this way and based on this approach, the residential architecture of the traditional city of Bushehr – Iran was analyzed as a case of study and its concepts were extracted. The results of this research demonstrate the effectiveness of this approach in structure detection and identification of an architectural system. Besides, this approach has the capability to be used in processes of sustainable development and also be a basis for deconstruction of architectural texts. The research methods of this study are qualitative based on comparative and descriptive analyses.

  7. Towards architectural information in implementation (NIER track)

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2011-01-01

    Agile development methods favor speed and feature-producing iterations. Software architecture, on the other hand, is ripe with techniques that are slow and not oriented directly towards implementation of customers' needs. Thus, there is a major challenge in retaining architectural information in a fast-paced agile project. We propose to embed as much architectural information as possible in the central artefact of the agile universe, the code. We argue that thereby valuable architectural information is retained for (automatic) documentation, validation, and further analysis, based ...

  8. Evolution of ZnO architecture on a nanoporous TiO{sub 2} film by a hydrothermal method and the photoelectrochemical performance

    Energy Technology Data Exchange (ETDEWEB)

    Jiang Yinhua; Wu Xiaoli; Zhang Wenli; Ni Liang [School of Chemistry and Chemical Engineering, Jiangsu University, Zhenjiang 212013 (China); Sun Yueming, E-mail: yms418@126.com [School of Chemistry and Chemical Engineering, Southeast University, Nanjing 211189 (China)

    2011-03-15

    The synthesis of a ZnO architecture on fluorine-doped SnO2 (FTO) conducting glass pre-coated with a nanoporous TiO2 film has been achieved by a one-step hydrothermal method at a temperature of 70 °C. The effect of the reaction time on the morphology of the ZnO architecture has been investigated, and a possible growth mechanism for the formation of the ZnO architecture is discussed in detail. The morphology and phase structures of the as-obtained composite films have been investigated by field-emission scanning electron microscopy (FESEM) and X-ray diffraction (XRD). The results show that the growth time greatly affects the morphology of the obtained ZnO architecture. The photoelectrochemical performances of the as-prepared composite films are measured by assembling them into dye-sensitized solar cells (DSSCs). The DSSC based on the as-prepared composite film (2 h) achieved the best power conversion efficiency of 1.845%. (semiconductor materials)

  9. MDCT after balloon kyphoplasty: analysis of vertebral body architecture one year after treatment of osteoporotic fractures

    International Nuclear Information System (INIS)

    Roehrl, B.; Dueber, C.; Sadick, M.; Brocker, K.; Voggenreiter, G.; Obertacke, U.; Brade, J.

    2006-01-01

    Purpose: to evaluate the value of MDCT in the monitoring of vertebral body architecture after balloon kyphoplasty and to observe morphological changes of the vertebral body. Material and methods: during a period of 26 months, 66 osteoporotic fractures of the vertebral bodies were treated with percutaneous balloon kyphoplasty. The height of the vertebral body, the width of the spinal space, sagittal indices, the kyphosis and COBB angles, and cement leakage were evaluated by computed tomography before and after treatment and in a long-term follow-up. Statistical analysis was performed by calculating quantitative constant parameters of descriptive key data. In addition, parametric and distribution-free procedures were performed for all questions. Results: after kyphoplasty, the treated vertebral bodies showed a significant gain in the height of the leading edge (0.15 cm; p < 0.0001) and in the central part of the vertebral body (0.17 cm; p < 0.0001). The height of the trailing edge did not change significantly. A corresponding gain in the sagittal index was found. The index remained stable during follow-up. Treated vertebral bodies as well as untreated references showed a comparable loss of height over the period of one year. The shape of the vertebral bodies remained stable. In comparison to these findings, treated vertebral bodies showed a reduced loss of height. A significant change in the kyphosis and COBB angles was noted. In total, cement (Palacos) leakage was detected in 71% of cases. Conclusion: MDCT is an accurate method for evaluating vertebral body architecture after treatment with balloon kyphoplasty. (orig.)

  10. Modeling Architectural Patterns Using Architectural Primitives

    NARCIS (Netherlands)

    Zdun, Uwe; Avgeriou, Paris

    2005-01-01

    Architectural patterns are a key point in architectural documentation. Regrettably, there is poor support for modeling architectural patterns, because the pattern elements are not directly matched by elements in modeling languages, and, at the same time, patterns support an inherent variability that

  11. Application of the Life Cycle Analysis and the Building Information Modelling Software in the Architectural Climate Change-Oriented Design Process

    Science.gov (United States)

    Gradziński, Piotr

    2017-10-01

    Whereas World’s climate is changing (inter alia, under the influence of architecture activity), the author attempts to reorientations design practice primarily in a direction the use and adapt to the climatic conditions. Architectural Design using in early stages of the architectural Design Process of the building, among other Life Cycle Analysis (LCA) and digital analytical tools BIM (Building Information Modelling) defines the overriding requirements which the designer/architect should meet. The first part, the text characterized the architecture activity influences (by consumption, pollution, waste, etc.) and the use of building materials (embodied energy, embodied carbon, Global Warming Potential, etc.) within the meaning of the direct negative environmental impact. The second part, the paper presents the revision of the methods and analytical techniques prevent negative influences. Firstly, showing the study of the building by using the Life Cycle Analysis of the structure (e.g. materials) and functioning (e.g. energy consumptions) of the architectural object (stages: before use, use, after use). Secondly, the use of digital analytical tools for determining the benefits of running multi-faceted simulations in terms of environmental factors (exposure to light, shade, wind) directly affecting shaping the form of the building. The conclusion, author’s research results highlight the fact that indicates the possibility of building design using the above-mentioned elements (LCA, BIM) causes correction, early designs decisions in the design process of architectural form, minimizing the impact on nature, environment. The work refers directly to the architectural-environmental dimensions, orienting the design process of buildings in respect of widely comprehended climatic changes.

  12. Architectural Thermal Forms II: Brick Envelope

    DEFF Research Database (Denmark)

    Foged, Isak Worre

    2013-01-01

    The paper presents an architectural concept and design method that investigates the use of dynamic factors in evolutionary form-finding processes. The architectural construct, the phenotype, is based on a brick assembly and on how this can be organized based upon material properties and environmental aspects selected from the factors used in the Fanger equations to determine perceived comfort. The work finds that the developed method can be applied as a performance-oriented driver, while at the same time allowing diversity and variation in the architectural design space.

  13. Data accuracy assessment using enterprise architecture

    Science.gov (United States)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  14. Architectural and Behavioral Systems Design Methodology and Analysis for Optimal Habitation in a Volume-Limited Spacecraft for Long Duration Flights

    Science.gov (United States)

    Kennedy, Kriss J.; Lewis, Ruthan; Toups, Larry; Howard, Robert; Whitmire, Alexandra; Smitherman, David; Howe, Scott

    2016-01-01

    As our human spaceflight missions change as we reach towards Mars, the risk of an adverse behavioral outcome increases, and the requirements for crew health, safety, and performance, and for the internal architecture, will need to change to accommodate unprecedented mission demands. Evidence shows that architectural arrangement and habitability elements impact behavior. Net habitable volume is the volume available to the crew after accounting for elements that decrease the functional volume of the spacecraft. Determination of the minimum acceptable net habitable volume and associated architectural design elements, as mission duration and environment vary, is key to enabling, maintaining, and/or enhancing human performance and psychological and behavioral health. Current NASA efforts to derive minimum acceptable net habitable volumes and study the interaction of covariates and stressors, such as sensory stimulation, communication, autonomy, and privacy, and their application to internal architecture design layouts, attributes, and use of advanced accommodations will be presented. Furthermore, implications of crew adaptation to available volume as they transfer from Earth accommodations, to deep space travel, to planetary surface habitats, and return, will be discussed.

  15. Interconnection network architectures based on integrated orbital angular momentum emitters

    Science.gov (United States)

    Scaffardi, Mirco; Zhang, Ning; Malik, Muhammad Nouman; Lazzeri, Emma; Klitis, Charalambos; Lavery, Martin; Sorel, Marc; Bogoni, Antonella

    2018-02-01

    Novel architectures for two-layer interconnection networks based on concentric OAM emitters are presented. A scalability analysis is done in terms of device characteristics, power budget and optical signal-to-noise ratio by exploiting experimentally measured parameters. The analysis shows that, by exploiting optical amplification, the proposed interconnection networks can support a number of ports higher than 100. The OAM crosstalk-induced penalty, evaluated through an experimental characterization, does not significantly affect the interconnection network performance.

  16. Performance Analysis of Congestion Control Mechanism in Software Defined Network (SDN

    Directory of Open Access Journals (Sweden)

    Rahman M. Z. A.

    2017-01-01

    Full Text Available In the near future, traditional network architectures will be difficult to manage. Hence, Software Defined Network (SDN) will be an alternative in the future of programmable networks to replace the conventional network architecture. The main idea of the SDN architecture is to separate the forwarding plane and the control plane of the network system, so that network operators can program packet forwarding behaviour to improve the network performance. Congestion control is an important mechanism for network traffic to improve network capability and achieve high-end Quality of Service (QoS). In this paper, extensive simulation is conducted to analyse the performance of SDN by implementing the Link Layer Discovery Protocol (LLDP) under a congested network. The simulation was conducted on Mininet by creating four different fanouts, and the results were analysed based on differences in performance metrics. As a result, packet loss and throughput reduction were observed when the number of fanouts in the topology was increased. By using the LLDP protocol, a huge reduction in packet loss rate was achieved while maximizing the percentage packet delivery ratio. The topology construction is sketched after this paragraph.
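
    For readers unfamiliar with Mininet, a minimal fanout topology of the kind described above can be declared as in the sketch below; the fanout value and usage lines are assumptions for illustration, not the authors' actual experiment scripts.

        # Minimal single-switch "fanout" topology using Mininet's Python API.
        from mininet.topo import Topo

        class FanoutTopo(Topo):
            """One switch with `fanout` hosts attached to it."""
            def build(self, fanout=4):
                switch = self.addSwitch('s1')
                for i in range(fanout):
                    host = self.addHost('h%d' % (i + 1))
                    self.addLink(host, switch)

        # Typical use (requires root privileges and a running Mininet installation):
        #   from mininet.net import Mininet
        #   net = Mininet(topo=FanoutTopo(fanout=4))
        #   net.start(); net.pingAll(); net.stop()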

  17. An Autonomous Mobile Agent-Based Distributed Learning Architecture-A Proposal and Analytical Analysis

    Directory of Open Access Journals (Sweden)

    I. Ahmed M. J. SADIIG

    2005-10-01

    Full Text Available An Autonomous Mobile Agent-Based Distributed Learning Architecture - A Proposal and Analytical Analysis. Dr. I. Ahmed M. J. SADIIG, Department of Electrical & Computer Engineering, International Islamic University, Gombak, Kuala Lumpur, MALAYSIA. ABSTRACT: The traditional learning paradigm involving face-to-face interaction with students is shifting to highly data-intensive electronic learning with the advances in Information and Communication Technology. An important component of the e-learning process is the delivery of the learning contents to their intended audience over a network. A distributed learning system is dependent on the network for the efficient delivery of its contents to the user. However, as the demand for information provision and utilization increases on the Internet, the current information service provision and utilization methods are becoming increasingly inefficient. Although new technologies have been employed for efficient learning methodologies within the context of an e-learning environment, the overall efficiency of the learning system is dependent on the mode of distribution and utilization of its learning contents. It is therefore imperative to employ new techniques to meet the service demands of current and future e-learning systems. In this paper, an architecture based on autonomous mobile agents creating a Faded Information Field is proposed. Unlike the centralized information distribution in a conventional e-learning system, the information is decentralized in the proposed architecture, resulting in increased efficiency and fairness in the distribution and utilization of the system's learning contents. This architecture holds the potential to address heterogeneous user requirements as well as the changing conditions of the underlying network.

  18. Special Issue on Automatic Application Tuning for HPC Architectures

    Directory of Open Access Journals (Sweden)

    Siegfried Benkner

    2014-01-01

    Full Text Available High Performance Computing architectures have become incredibly complex, and exploiting their full potential is becoming more and more challenging. As a consequence, automatic performance tuning (autotuning) of HPC applications is of growing interest, and many research groups around the world are currently involved in it. Autotuning is still a rapidly evolving research field with many different approaches being taken. This special issue features selected papers presented at the Dagstuhl seminar on “Automatic Application Tuning for HPC Architectures” in October 2013, which brought together researchers from the areas of autotuning and performance analysis in order to exchange ideas and steer future collaborations.

  19. High-performance computing on the Intel Xeon Phi how to fully exploit MIC architectures

    CERN Document Server

    Wang, Endong; Shen, Bo; Zhang, Guangyong; Lu, Xiaowei; Wu, Qing; Wang, Yajuan

    2014-01-01

    The aim of this book is to explain to high-performance computing (HPC) developers how to utilize the Intel® Xeon Phi™ series products efficiently. To that end, it introduces some computing grammar, programming technology and optimization methods for using many-integrated-core (MIC) platforms and also offers tips and tricks for actual use, based on the authors' first-hand optimization experience. The material is organized in three sections. The first section, "Basics of MIC", introduces the fundamentals of MIC architecture and programming, including the specific Intel MIC programming environment

  20. Architecture of Environmental Engineering

    DEFF Research Database (Denmark)

    Wenzel, Henrik; Alting, Leo

    2006-01-01

    An architecture of Environmental Engineering has been developed comprising the various disciplines and tools involved. It identifies industry as the major actor and target group, and it builds on the concept of Eco-efficiency. To improve Eco-efficiency, there is a limited number of intervention......-efficiency is the aim of Environmental Engineering, the discipline of synthesis – design and creation of solutions – will form a core pillar of the architecture. Other disciplines of Environmental Engineering exist forming the necessary background and frame for the synthesis. Environmental Engineering, thus, in essence...... comprise the disciplines of: management, system description & inventory, analysis & assessment, prioritisation, synthesis, and communication, each existing at all levels of intervention. The developed architecture of Environmental Engineering, thus, consists of thirty individual disciplines, within each...

  1. Architecture of Environmental Engineering

    DEFF Research Database (Denmark)

    Wenzel, Henrik; Alting, Leo

    2004-01-01

    An architecture of Environmental Engineering has been developed comprising the various disciplines and tools involved. It identifies industry as the major actor and target group, and it builds on the concept of Eco-efficiency. To improve Eco-efficiency, there is a limited number of intervention...... of Eco-efficiency is the aim of Environmental Engineering, the discipline of synthesis – design and creation of solutions – will form a core pillar of the architecture. Other disciplines of Environmental Engineering exist forming the necessary background and frame for the synthesis. Environmental...... Engineering, thus, in essence comprise the disciplines of: management, system description & inventory, analysis & assessment, prioritisation, synthesis, and communication, each existing at all levels of intervention. The developed architecture of Environmental Engineering, thus, consists of thirty individual...

  2. Porous sheet-like and sphere-like nano-architectures of SnO2 nanoparticles via a solvent-thermal approach and their gas-sensing performances

    International Nuclear Information System (INIS)

    Liu, Jie; Tang, Xin-Cun; Xiao, Yuan-Hua; Jia, Hai; Gong, Mei-Li; Huang, Fu-Qin

    2013-01-01

    Highlights: • Porous sheet-like and sphere-like nano-architectures of SnO2 nanoparticles have been prepared. • A solvent-thermal approach without surfactant or polymer templates was used, simply by changing the volume ratio of DMF to water. • The formation mechanism of the nano-architectures is proposed in this article. • Porous sphere-like SnO2 nano-architectures exhibit good sensitivity to the reducing vapors tested. • Sheet-like materials show better selectivity to ethanol. -- Abstract: Porous sheet-like and sphere-like nano-architectures of SnO2 nanoparticles have been prepared via a solvent-thermal approach in the absence of any surfactant or polymer templates, simply by changing the volume ratio of DMF to water. The nano-materials have been characterized by FESEM, XRD, IR, TEM and BET. A mechanism for the formation of the nano-architectures is also proposed, based on the assembly behaviors of DMF in water. Gas sensors constructed with porous sphere-like SnO2 nano-architectures exhibit much higher sensitivity to the reducing vapors tested than those made from porous sheet-like SnO2 materials, while the sheet-like materials show better selectivity to ethanol. The nano-architectures fabricated with this facile method are promising candidates for building chemical sensors with tunable performances

  3. ARCHITECT: The architecture-based technology evaluation and capability tradeoff method

    Science.gov (United States)

    Griendling, Kelly A.

    creation of DoDAF products forward in the defense acquisition process, and (3) using DoDAF products for more than documentation by integrating them into the problem definition and analysis of alternatives phases and applying executable architecting. This research proposes and demonstrates the plausibility of a prescriptive methodology for developing executable DoDAF products which will explicitly support decision-making in the early phases of JCIDS. A set of criteria by which CBAs should be judged is proposed, and the methodology is developed with these criteria in mind. The methodology integrates existing tools and techniques for systems engineering and system of systems engineering with several new modeling and simulation tools and techniques developed as part of this research to fill gaps noted in prior CBAs. A suppression of enemy air defenses (SEAD) mission is used to demonstrate the application of ARCHITECT and to show the plausibility of the approach. For the SEAD study, metrics are derived and a gap analysis is performed. The study then identifies and quantitatively compares system and operational architecture alternatives for performing SEAD. A series of down-selections is performed to identify promising architectures, and these promising solutions are subject to further analysis where the impacts of force structure and network structure are examined. While the numerical results of the SEAD study are notional and could not be applied to an actual SEAD CBA, the example served to highlight many of the salient features of the methodology. The SEAD study presented enabled pre-Milestone A tradeoffs to be performed quantitatively across a large number of architectural alternatives in a traceable and repeatable manner. The alternatives considered included variations on operations, systems, organizational responsibilities (through the assignment of systems to tasks), network (or collaboration) structure, interoperability level, and force structure. All of the

  4. A High Performance VLSI Computer Architecture For Computer Graphics

    Science.gov (United States)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy modern computer graphics demands, e.g. high resolution, realistic animation, and real-time display. All processors share a global memory, which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With the current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  5. Quantitative analysis of the epithelial lining architecture in radicular cysts and odontogenic keratocysts

    Directory of Open Access Journals (Sweden)

    Landini Gabriel

    2006-02-01

    Full Text Available Abstract Background This paper describes a quantitative analysis of the cyst lining architecture in radicular cysts (of inflammatory aetiology) and odontogenic keratocysts (thought to be developmental or neoplastic), including its 2 counterparts: solitary and associated with the Basal Cell Naevus Syndrome (BCNS). Methods Epithelial linings from 150 images (from 9 radicular cysts, 13 solitary keratocysts and 8 BCNS keratocysts) were segmented into theoretical cells using a semi-automated partition based on the intensity of the haematoxylin stain, which defined exclusive areas relative to each detected nucleus. Various morphometrical parameters were extracted from these "cells" and epithelial layer membership was computed using a systematic clustering routine. Results Statistically significant differences were observed across the 3 cyst types both at the morphological and architectural levels of the lining. Case-wise discrimination between radicular cysts and keratocysts was highly accurate (with an error of just 3.3%). However, the odontogenic keratocyst subtypes could not be reliably separated into the original classes, achieving discrimination rates only slightly above random allocation (60%). Conclusion The methodology presented is able to provide new measures of epithelial architecture and may help to characterise and compare tissue spatial organisation, as well as provide useful procedures for automating certain aspects of histopathological diagnosis.
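
    A hedged sketch of the general idea of partitioning an epithelial lining into nucleus-centred "cells" (nearest-nucleus assignment after thresholding the haematoxylin channel); the threshold value, the input arrays and the use of SciPy are assumptions, not the authors' semi-automated pipeline.

```python
# Sketch: assign every epithelial pixel to its nearest detected nucleus.
import numpy as np
from scipy import ndimage as ndi

def partition_into_cells(haematoxylin, epithelium_mask, nucleus_thresh=0.5):
    """haematoxylin: 2-D stain intensity image; epithelium_mask: boolean lining mask."""
    # 1) Detect nuclei as connected regions of strong haematoxylin staining.
    nuclei = haematoxylin > nucleus_thresh
    labels, _ = ndi.label(nuclei)

    # 2) For each pixel, find the indices of the closest nucleus pixel via the
    #    Euclidean distance transform of the non-nucleus region.
    _, indices = ndi.distance_transform_edt(labels == 0, return_indices=True)
    row_idx, col_idx = indices
    cell_labels = labels[row_idx, col_idx]

    # 3) Restrict the partition to the epithelial lining ("theoretical cells").
    cell_labels = np.where(epithelium_mask, cell_labels, 0)

    # Example morphometric parameter: area (in pixels) of each theoretical cell.
    areas = np.bincount(cell_labels.ravel())[1:]
    return cell_labels, areas
```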

  6. Experiences from the Architectural Migration of a Joint Replacement Surgery Information System

    Directory of Open Access Journals (Sweden)

    Samuli Niiranen

    2008-01-01

    Full Text Available The goal of this study is to present the experiences gathered from the migration of an existing and deployed joint replacement surgery information system from a classical 2-tier architecture to a 4-tier architecture. These include a discussion of the motivation for the migration and the technical benefits of the chosen migration path, as well as an evaluation of user experiences. The results from the analysis of clinical end-user and administrator experiences show an increase in the perceived performance and maintainability of the system and a high level of acceptance for the new system version.

  7. Architectural approach to the energy performance of buildings in a hot-dry climate with special reference to Egypt

    Energy Technology Data Exchange (ETDEWEB)

    Hamdy, I F

    1986-01-01

    A thesis is presented on the changing approach to the architectural design of buildings in a hot, dry climate, in view of the increased recognition of the importance of energy efficiency. The thermal performance of buildings in Egypt is used as an example, and the nature of the local climate and human requirements are also studied. Other effects on thermal performance considered include building form, orientation and surrounding conditions. An evaluative computer model is constructed, and its application allows the energy performance impact of changing design parameters to be predicted.

  8. An architecture for fault tolerant controllers

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    2005-01-01

    degradation in the sense of guaranteed degraded performance. A number of fault diagnosis problems, fault tolerant control problems, and feedback control with fault rejection problems are formulated/considered, mainly from a fault modeling point of view. The method is illustrated on a servo example including......A general architecture for fault tolerant control is proposed. The architecture is based on the (primary) YJBK parameterization of all stabilizing compensators and uses the dual YJBK parameterization to quantify the performance of the fault tolerant system. The approach suggested can be applied...

  9. High Performance Flexible Pseudocapacitor based on Nano-architectured Spinel Nickel Cobaltite Anchored Multiwall Carbon Nanotubes

    International Nuclear Information System (INIS)

    Shakir, Imran

    2014-01-01

    Highlights: • Two-step fabrication method for a nano-architectured spinel nickel cobaltite (NiCo2O4) anchored MWCNTs composite. • High performance flexible energy-storage devices. • The NiCo2O4 anchored MWCNTs exhibit a capacitance of 2032 F g−1, which is 1.62 times greater than pristine NiCo2O4 at 1 A g−1. - Abstract: We demonstrate a facile two-step fabrication method for nano-architectured spinel nickel cobaltite (NiCo2O4) anchored multiwall carbon nanotube (MWCNT) based electrodes for high performance flexible energy-storage devices. As an electrode material for flexible supercapacitors, the NiCo2O4 anchored MWCNTs exhibit a high specific capacitance of 2032 F g−1, which is nearly 1.62 times greater than that of pristine NiCo2O4 nanoflakes at 1 A g−1. The synthesized NiCo2O4 anchored MWCNTs composite shows excellent rate performance (83.96% capacity retention at 30 A g−1) and stability, with coulombic efficiency over 96% after 5,000 cycles when fully charged/discharged at 1 A g−1. Furthermore, the NiCo2O4 anchored MWCNTs achieve a maximum energy density of 48.32 Wh kg−1 at a power density of 480 W kg−1, which is 60% higher than the pristine NiCo2O4 electrode and significantly outperforms NiCo2O4-based electrode materials currently used in state-of-the-art supercapacitors throughout the literature. This superior rate performance and high capacity offered by the NiCo2O4 anchored MWCNTs is mainly due to enhanced electronic and ionic conductivity, which provides a short diffusion path for ions and easy access of the electrolyte to the nickel cobaltite redox centers, in addition to the high conductivity of the MWCNTs.

  10. Organizational architecture of multinational companies

    OpenAIRE

    Sikorová, Lenka

    2009-01-01

    The main goal of the bachelor thesis Organizational Architecture of Multinational Companies is to provide an overview of the organizational structures used by modern global companies. The thesis contains an analysis of such companies' development, principles of functioning, pros and cons, and the opportunities they bring. It also contains a description of the basic concepts associated with organizational architecture, such as globalization, multinational companies and organizatio...

  11. Tank waste remediation system architecture tree; TOPICAL

    International Nuclear Information System (INIS)

    PECK, L.G.

    1999-01-01

    The TWRS Architecture Tree presented in this document is a hierarchical breakdown to support the TWRS systems engineering analysis of the TWRS physical system, including facilities, hardware and software. The purpose for this systems engineering architecture tree is to describe and communicate the system's selected and existing architecture, to provide a common structure to improve the integration of work and resulting products, and to provide a framework as a basis for TWRS Specification Tree development

  12. Achieving High Performance With TCP Over 40 GbE on NUMA Architectures for CMS Data Acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Bawej, Tomasz; et al.

    2014-01-01

    TCP and the socket abstraction have barely changed over the last two decades, but at the network layer there has been a giant leap from a few megabits to 100 gigabits in bandwidth. At the same time, CPU architectures have evolved into the multicore era and applications are expected to make full use of all available resources. Applications in the data acquisition domain based on the standard socket library, running on a Non-Uniform Memory Access (NUMA) architecture, are unable to reach full efficiency and scalability without the software being adequately aware of the IRQ (Interrupt Request), CPU and memory affinities. During the first long shutdown of the LHC, the CMS DAQ system is going to be upgraded for operation from 2015 onwards, and a new software component has been designed and developed in the CMS online framework for transferring data with sockets. This software wraps the low-level socket library to ease higher-level programming, with an API based on an asynchronous event-driven model similar to the DAT uDAPL API. It is an event-based application with NUMA optimizations that allows for a high throughput of data across a large distributed system. This paper describes the architecture, the technologies involved and the performance measurements of the software in the context of the CMS distributed event building.
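
    A minimal sketch of the kind of affinity awareness described above, using Linux CPU pinning from Python; the core set and the local socket pair standing in for the 40 GbE link are assumptions, not the CMS DAQ configuration.

```python
# Sketch: pin a socket receive loop to cores assumed to share the NIC's NUMA node.
import os
import socket
import threading

NIC_LOCAL_CORES = {0, 1, 2, 3}   # hypothetical cores on the NIC's NUMA node

def pinned_receiver(sock):
    # Pin the calling thread (pid 0) so buffers, IRQ handling and the consuming
    # code stay on one NUMA node; intersect with the allowed set to stay portable.
    allowed = os.sched_getaffinity(0)
    os.sched_setaffinity(0, (NIC_LOCAL_CORES & allowed) or allowed)
    while True:
        data = sock.recv(65536)
        if not data:
            break
        # ... hand the buffer over to the event builder ...

# Demo with a local socket pair standing in for the high-speed link.
rx, tx = socket.socketpair()
t = threading.Thread(target=pinned_receiver, args=(rx,))
t.start()
tx.sendall(b"event fragment")
tx.close()
t.join()
```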

  13. Implementation of MP_Lite for the VI Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Weiyi [Iowa State Univ., Ames, IA (United States)

    2001-01-01

    MP_Lite is a lightweight message-passing library designed to deliver maximum performance to applications in a portable and user-friendly manner. The Virtual Interface (VI) architecture is a user-level communication protocol that bypasses the operating system to provide much better performance than traditional network architectures. By combining the high efficiency of MP_Lite with the high performance of the VI architecture, a high performance message-passing library with much lower latency and better throughput can be implemented. The design and implementation of MP_Lite for M-VIA, which is a modular implementation of the VI architecture on Linux, is discussed in this thesis. By using the eager protocol for sending short messages, MP_Lite M-VIA has much lower latency on both Fast Ethernet and Gigabit Ethernet. The handshake protocol and RDMA mechanism provide double the throughput that MPICH can deliver for long messages. MP_Lite M-VIA also has the ability to channel-bond multiple network interface cards to increase the potential bandwidth between nodes. Using multiple Fast Ethernet cards can double or even triple the maximum throughput without greatly increasing the cost of a PC cluster.

  14. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Science.gov (United States)

    Hwang, Wen-Jyi; Lee, Wei-Hao; Lin, Shiow-Jyu; Lai, Sheng-Ying

    2013-01-01

    This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and the clustering in hardware. The generalized Hebbian algorithm (GHA) and the fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near-optimal clustering for spike sorting, and its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit to lower the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroids are merged into a single updating process to avoid a large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented on a field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design, attaining a high classification correct rate and high-speed computation. PMID:24189331
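
    A compact NumPy sketch of the generalized Hebbian algorithm (Sanger's rule) named above for feature extraction; the learning rate, dimensions and random data are placeholders, and the fuzzy C-means clustering stage and the hardware mapping are not shown.

```python
# Generalized Hebbian algorithm (Sanger's rule): rows of W converge to the leading
# principal components of (roughly zero-mean) spike waveforms.
import numpy as np

def gha_fit(spikes, n_components=3, lr=1e-3, epochs=10, seed=0):
    """spikes: array of shape (n_spikes, n_samples_per_spike)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(n_components, spikes.shape[1]))
    for _ in range(epochs):
        for x in spikes:
            y = W @ x
            # Hebbian term minus the lower-triangular deflation term.
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# Illustrative use: project waveforms onto the learned components; these features
# would then be clustered (the paper uses FCM for that step).
spikes = np.random.default_rng(1).normal(size=(500, 32))
features = spikes @ gha_fit(spikes).T
```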

  15. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Directory of Open Access Journals (Sweden)

    Sheng-Ying Lai

    2013-11-01

    Full Text Available This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and the clustering in hardware. The generalized Hebbian algorithm (GHA) and the fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near-optimal clustering for spike sorting, and its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit to lower the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroids are merged into a single updating process to avoid a large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented on a field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design, attaining a high classification correct rate and high-speed computation.

  16. Architectural Optimization of Digital Libraries

    Science.gov (United States)

    Biser, Aileen O.

    1998-01-01

    This work investigates performance and scaling issues relevant to large scale distributed digital libraries. Presently, performance and scaling studies focus on specific implementations of production or prototype digital libraries. Although useful information is gained to aid these designers and other researchers with insights into performance and scaling issues, the broader issues relevant to very large scale distributed libraries are not addressed. Specifically, no current studies look at the extreme or worst-case possibilities in digital library implementations. A survey of digital library research issues is presented. Scaling and performance issues are mentioned frequently in the digital library literature but are generally not the focus of much of the current research. In this thesis a model for a Generic Distributed Digital Library (GDDL) and nine cases of typical user activities are defined. This model is used to facilitate some basic analysis of scaling issues, specifically the calculation of the Internet traffic generated for different configurations of the study parameters and an estimate of the future bandwidth needed for a large scale distributed digital library implementation. This analysis demonstrates the potential impact a future distributed digital library implementation would have on the Internet traffic load and raises questions concerning the architecture decisions being made for future distributed digital library designs.

  17. A Behaviour-Based Architecture for Mapless Navigation Using Vision

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Guzel

    2012-04-01

    Full Text Available Autonomous robots operating in an unknown and uncertain environment must be able to cope with dynamic changes to that environment. For a mobile robot in a cluttered environment, navigating successfully to a goal while avoiding obstacles is a challenging problem. This paper presents a new behaviour-based architecture design for mapless navigation. The architecture is composed of several modules and each module generates behaviours. A novel method, inspired by a visual homing strategy, is adapted to a monocular vision-based system to overcome goal-based navigation problems. A neural network-based obstacle avoidance strategy is designed using a 2-D scanning laser. To evaluate the performance of the proposed architecture, the system has been tested using Microsoft Robotics Studio (MRS), which is a very powerful 3D simulation environment. In addition, real experiments to guide a Pioneer 3-DX mobile robot, equipped with a pan-tilt-zoom camera, in a cluttered environment are presented. The analysis of the results allows us to validate the proposed behaviour-based navigation strategy.

  18. A Geo-Distributed System Architecture for Different Domains

    Science.gov (United States)

    Moßgraber, Jürgen; Middleton, Stuart; Tao, Ran

    2013-04-01

    The presentation will describe work on the system-of-systems (SoS) architecture that is being developed in the EU FP7 project TRIDEC on "Collaborative, Complex and Critical Decision-Support in Evolving Crises". In this project we deal with two use-cases: Natural Crisis Management (e.g. Tsunami Early Warning) and Industrial Subsurface Development (e.g. drilling for oil). These use-cases seem quite different at first sight but share a lot of similarities, like managing and looking up available sensors, extracting data from them and annotating it semantically, intelligently managing the data (a big-data problem), running mathematical analysis algorithms on the data and finally providing decision support on this basis. The main challenge was to create a generic architecture which fits both use-cases. The requirements on the architecture are manifold and the whole spectrum of a modern, geo-distributed and collaborative system comes into play. Obviously, one cannot expect to tackle these challenges adequately with a monolithic system or with a single technology. Therefore, a system architecture providing the blueprints to implement the system-of-systems approach has to combine multiple technologies and architectural styles. The most important architectural challenges we needed to address are: 1. build a scalable communication layer for a system-of-systems; 2. build a resilient communication layer for a system-of-systems; 3. efficiently publish large volumes of semantically rich sensor data; 4. scalable and high performance storage of large distributed datasets; 5. handling federated multi-domain heterogeneous data; 6. discovery of resources in a geo-distributed SoS; 7. coordination of work between geo-distributed systems. The design decisions made for each of them will be presented. These developed concepts are also applicable to the requirements of the Future Internet (FI) and Internet of Things (IoT), which will provide services like smart grids, smart metering, logistics and

  19. Architectural prototyping

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2004-01-01

    A major part of software architecture design is learning how specific architectural designs balance the concerns of stakeholders. We explore the notion of "architectural prototypes", correspondingly architectural prototyping, as a means of using executable prototypes to investigate stakeholders...

  20. Error Resilience in Current Distributed Video Coding Architectures

    Directory of Open Access Journals (Sweden)

    Tonoli Claudia

    2009-01-01

    Full Text Available In distributed video coding, the signal prediction is shifted to the decoder side, therefore placing most of the computational complexity burden at the receiver. Moreover, since no prediction loop exists before transmission, an intrinsic robustness to transmission errors has been claimed. This work evaluates and compares the error resilience performance of two distributed video coding architectures. In particular, we have considered a video codec based on the Stanford architecture (the DISCOVER codec) and a video codec based on the PRISM architecture. Specifically, an accurate temporal and rate/distortion based evaluation of the effects of transmission errors for both of the considered DVC architectures has been performed and discussed. These approaches have also been compared with H.264/AVC, both with no error protection and with simple FEC error protection. Our evaluations have highlighted in all cases a strong dependence of the behavior of the various codecs on the content of the considered video sequence. In particular, PRISM seems to be particularly well suited for low-motion sequences, whereas DISCOVER provides better performance in the other cases.

  1. Distributed embedded smart cameras architectures, design and applications

    CERN Document Server

    Velipasalar, Senem

    2014-01-01

    This publication addresses distributed embedded smart cameras – cameras that perform onboard analysis and collaborate with other cameras. This book provides the material required to better understand the architectural design challenges of embedded smart camera systems, the hardware/software ecosystem, the design approach for, and applications of distributed smart cameras together with the state-of-the-art algorithms. The authors concentrate on the architecture, hardware/software design, and realization of smart camera networks from applications to architectures, in particular in the embedded and mobile domains. • Examines energy issues related to wireless communication, such as decreasing energy consumption to increase battery life • Discusses processing large volumes of video data in an embedded environment in real time • Covers design of realistic applications of distributed and embedded smart...

  2. Temporal Architecture: Poetic Dwelling in Japanese buildings

    Directory of Open Access Journals (Sweden)

    Michael Lazarin

    2014-07-01

    Full Text Available Heidegger’s thinking about poetic dwelling and Derrida’s impressions of Freudian estrangement are employed to provide a constitutional analysis of the experience of Japanese architecture, in particular, the Japanese vestibule (genkan). This analysis is supplemented by writings by Japanese architects and poets. The principal elements of Japanese architecture are: (1) ma and (2) en. Ma is usually translated as ‘interval’ because, like the English word, it applies to both space and time. However, in Japanese thinking, it is not so much an either/or, but rather a both/and. In other words, Japanese architecture emphasises the temporal aspect of dwelling in a way that Western architectural thinking usually does not. En means ‘joint, edge, the in-between’ as an ambiguous, often asymmetrical spanning of interior and exterior, rather than a demarcation of these regions. Both elements are aimed at producing an experience of temporality and transiency.

  3. Error-rate performance analysis of opportunistic regenerative relaying

    KAUST Repository

    Tourki, Kamel

    2011-09-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path can be considered unusable, and we take into account the effect of possibly erroneously detected and transmitted data at the best relay. We first derive the exact statistics of each hop in terms of the probability density function (PDF). The PDFs are then used to determine accurate closed-form expressions for the end-to-end bit error rate (BER) of binary phase-shift keying (BPSK) modulation, where the detector may use maximum ratio combining (MRC) or selection combining (SC). Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over a linear network (LN) architecture, considering Rayleigh fading channels. © 2011 IEEE.
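
    For orientation only, the standard average BER of BPSK on a single Rayleigh-fading link is recalled below; this is textbook background, not the paper's end-to-end expressions, which additionally account for detection errors at the selected relay and for MRC/SC combining.

```latex
% Average BER of BPSK on one Rayleigh-fading hop with mean SNR \bar{\gamma}
% (standard background result, not the paper's end-to-end derivation).
P_b = \frac{1}{2}\left(1 - \sqrt{\frac{\bar{\gamma}}{1 + \bar{\gamma}}}\right)
```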

  4. Low power adder based auditory filter architecture.

    Science.gov (United States)

    Rahiman, P F Khaleelur; Jayanthi, V S

    2014-01-01

    Cochlear devices are powered by batteries and should possess a long working life to avoid replacing the devices at regular intervals of years. Hence, devices with low power consumption are required. In cochlear devices there are numerous filters, each responsible for signals in a different frequency band, which helps in identifying speech signals of different audible ranges. In this paper, a multiplierless lookup table (LUT) based auditory filter is implemented. Power-aware adder architectures are utilized to add the output samples of the LUT, available at every clock cycle. The design is developed and modeled using Verilog HDL, simulated using the Mentor Graphics ModelSim simulator, and synthesized using the Synopsys Design Compiler tool. The design was mapped to a TSMC 65 nm technology node. The standard ASIC design methodology has been adopted to carry out the power analysis. The proposed FIR filter architecture reduces the leakage power by 15% and increases performance by 2.76%.
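
    A hedged software model of one common multiplierless LUT-based FIR scheme (distributed arithmetic), given to make the "LUT plus adders" idea concrete; the coefficients and bit width are placeholders, and the paper's actual LUT organization and power-aware adders may differ.

```python
# Distributed-arithmetic FIR: one LUT access per input bit-plane plus shift-adds,
# instead of one multiplier per tap.
COEFFS = [3, -2, 5, 1]       # integer tap coefficients (illustrative)
N_TAPS = len(COEFFS)
N_BITS = 8                   # unsigned input sample width (illustrative)

# LUT entry k holds the sum of the coefficients selected by the bits of k.
LUT = [sum(c for i, c in enumerate(COEFFS) if (k >> i) & 1) for k in range(1 << N_TAPS)]

def da_fir(samples):
    """Filter unsigned N_BITS-wide integer samples with the LUT-based scheme."""
    taps = [0] * N_TAPS                        # shift register of recent inputs
    out = []
    for s in samples:
        taps = [int(s)] + taps[:-1]
        acc = 0
        for b in range(N_BITS):                # one LUT lookup per bit-plane
            addr = sum(((taps[i] >> b) & 1) << i for i in range(N_TAPS))
            acc += LUT[addr] << b
        out.append(acc)
    return out

# Sanity check against a direct multiply-accumulate FIR.
x = [7, 12, 255, 0, 33]
ref, taps = [], [0] * N_TAPS
for s in x:
    taps = [int(s)] + taps[:-1]
    ref.append(sum(c * t for c, t in zip(COEFFS, taps)))
assert da_fir(x) == ref
```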

  5. Low Power Adder Based Auditory Filter Architecture

    Directory of Open Access Journals (Sweden)

    P. F. Khaleelur Rahiman

    2014-01-01

    Full Text Available Cochlear devices are powered by batteries and should possess a long working life to avoid replacing the devices at regular intervals of years. Hence, devices with low power consumption are required. In cochlear devices there are numerous filters, each responsible for signals in a different frequency band, which helps in identifying speech signals of different audible ranges. In this paper, a multiplierless lookup table (LUT) based auditory filter is implemented. Power-aware adder architectures are utilized to add the output samples of the LUT, available at every clock cycle. The design is developed and modeled using Verilog HDL, simulated using the Mentor Graphics ModelSim simulator, and synthesized using the Synopsys Design Compiler tool. The design was mapped to a TSMC 65 nm technology node. The standard ASIC design methodology has been adopted to carry out the power analysis. The proposed FIR filter architecture reduces the leakage power by 15% and increases performance by 2.76%.

  6. An open architecture for medical image workstation

    Science.gov (United States)

    Liang, Liang; Hu, Zhiqiang; Wang, Xiangyun

    2005-04-01

    Dealing with the difficulties of integrating various medical image viewing and processing technologies with a variety of clinical and departmental information systems and, at the same time, overcoming the performance constraints in transferring and processing large-scale and ever-increasing image data in the healthcare enterprise, we design and implement a flexible, usable and high-performance architecture for medical image workstations. This architecture is not developed for radiology only, but for any workstation, in any application environment, that may need medical image retrieval, viewing, and post-processing. The architecture contains an infrastructure named Memory PACS and different kinds of image applications built on it. The Memory PACS is in charge of image data caching, pre-fetching and management. It provides image applications with high-speed image data access and very reliable DICOM network I/O. For the image applications, we use dynamic component technology to separate the performance-constrained modules from the flexibility-constrained modules, so that different image viewing or processing technologies can be developed and maintained independently. We also develop a weakly coupled collaboration service, through which these image applications can communicate with each other or with third-party applications. We applied this architecture in developing our product line and it works well. In our clinical sites, this architecture is applied not only in the Radiology Department, but also in Ultrasound, Surgery, Clinics, and the Consultation Center. Given that each department concerned has its particular requirements and business routines, and that they all have different image processing technologies and image display devices, our workstations are still able to maintain high performance and high usability.

  7. Real-time field programmable gate array architecture for computer vision

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar

    2001-01-01

    This paper presents an architecture for real-time generic convolution of a mask and an image. The architecture is intended for fast low-level image processing. The field programmable gate array (FPGA)-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and is based on parallel modules with internal pipeline operation in order to improve its performance. The architecture is prototyped in an FPGA, but it can be implemented on dedicated very-large-scale integrated (VLSI) devices to reach higher clock frequencies. Complexity issues, FPGA resource utilization, FPGA limitations, and real-time performance are discussed. Some results are presented and discussed.
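
    A hedged software model of the memory-access pattern such convolution architectures commonly exploit: each pixel is read from image memory exactly once while the previous rows are kept in line buffers and a KxK window slides along the current row. The image size and kernel are illustrative, not the paper's design.

```python
# Streamed 2-D correlation ("valid" mode) using K line buffers: one memory read per pixel.
import numpy as np

def streamed_convolution(image, kernel):
    H, W = image.shape
    K = kernel.shape[0]                      # assume a square, odd-sized kernel
    line_buffers = np.zeros((K, W))          # K-1 past rows plus the incoming row
    out = np.zeros((H - K + 1, W - K + 1))
    for r in range(H):
        line_buffers = np.roll(line_buffers, -1, axis=0)
        line_buffers[-1] = image[r]          # the single read of row r
        if r >= K - 1:
            for c in range(W - K + 1):
                window = line_buffers[:, c:c + K]
                out[r - K + 1, c] = np.sum(window * kernel)
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
ker = np.ones((3, 3)) / 9.0
print(streamed_convolution(img, ker))        # matches a direct 'valid' correlation
```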

  8. High performance matrix inversion based on LU factorization for multicore architectures

    KAUST Repository

    Dongarra, Jack

    2011-01-01

    The goal of this paper is to present an efficient implementation of an explicit matrix inversion of general square matrices on multicore computer architectures. The inversion procedure is split into four steps: 1) computing the LU factorization, 2) inverting the upper triangular U factor, 3) solving a linear system whose solution yields the inverse of the original matrix, and 4) applying backward column pivoting on the inverted matrix. Using a tile data layout, which represents the matrix in system memory with an optimized cache-aware format, the computation of the four steps is decomposed into computational tasks. A directed acyclic graph is generated on the fly to represent the program data flow: its nodes represent tasks and its edges the data dependencies between them. Previous implementations of matrix inversion, available in state-of-the-art numerical libraries, suffer from unnecessary synchronization points; these are non-existent in our implementation in order to fully exploit the parallelism of the underlying hardware. Our algorithmic approach allows us to remove these bottlenecks and to execute the tasks with loose synchronization. A runtime environment system called QUARK is used to dynamically schedule our numerical kernels on the available processing units. The reported results from our LU-based matrix inversion implementation significantly outperform state-of-the-art numerical libraries such as LAPACK (5x), MKL (5x) and ScaLAPACK (2.5x) on a contemporary AMD platform with four sockets and a total of 48 cores for a matrix of size 24000. A power consumption analysis shows that our high performance implementation is also energy efficient and consumes substantially less power than its competitors. © 2011 ACM.
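
    A small NumPy/SciPy sketch of the four mathematical steps listed above, given only to make the algebra concrete; it does not reflect the paper's tile layout, DAG generation or QUARK scheduling.

```python
# LU-based explicit inverse: A = P L U  =>  A^{-1} = U^{-1} L^{-1} P^T.
import numpy as np
from scipy.linalg import lu, solve_triangular

def lu_based_inverse(A):
    n = A.shape[0]
    P, L, U = lu(A)                                   # step 1: LU factorization
    U_inv = solve_triangular(U, np.eye(n))            # step 2: invert the U factor
    # step 3: solve X @ L = U_inv (i.e. L.T @ X.T = U_inv.T), so X = U^{-1} L^{-1}
    X = solve_triangular(L.T, U_inv.T, lower=False).T
    return X @ P.T                                    # step 4: backward column pivoting

A = np.random.default_rng(0).normal(size=(200, 200))
assert np.allclose(lu_based_inverse(A) @ A, np.eye(200), atol=1e-6)
```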

  9. Architectural Building A Public Key Infrastructure Integrated Information Space

    Directory of Open Access Journals (Sweden)

    Vadim Ivanovich Korolev

    2015-10-01

    Full Text Available The article considers the use of a public-key cryptographic system to provide information security and to implement digital signatures. It analyses trust models for the formation and use of certificates, and describes the relationship between the trust model and the public key infrastructure architecture. It concludes with options for building a public key infrastructure for an integrated information space.
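
    A minimal sketch of the public-key signing and verification that such an infrastructure certifies, using the widely available cryptography package; in a real deployment the public key would be distributed inside an X.509 certificate issued under one of the trust models discussed, which is not shown here.

```python
# RSA-PSS sign/verify: the mechanism a PKI binds to an identity via certificates.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"document to be signed"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signature = private_key.sign(message, pss, hashes.SHA256())
# Raises InvalidSignature if the message or signature has been tampered with.
public_key.verify(signature, message, pss, hashes.SHA256())
```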

  10. Service-Oriented Architecture Approach to MAGTF Logistics Support Systems

    Science.gov (United States)

    2013-09-01

    ...Support System-Marine Corps; IT: Information Technology; KPI: Key Performance Indicators; LCE: Logistics Command Element; ITV: In-Transit Visibility; LCM: ... building blocks, options, KPIs (key performance indicators), design decisions and the corresponding physical attributes ... KPIs that they impact. Layer 8 (Information Architecture): the business intelligence layer and information architecture safeguard the inclusion

  11. Staged Event-Driven Architecture As A Micro-Architecture Of Distributed And Pluginable Crawling Platform

    Directory of Open Access Journals (Sweden)

    Leszek Siwik

    2013-01-01

    Full Text Available There are many crawling systems available on the market, but they are rather closed systems dedicated to performing a particular kind and class of task with a predefined scope, strategy, etc. In real life, however, there are meaningful groups of users (e.g. marketing, criminal or governmental analysts) requiring not just yet another crawling system dedicated to performing predefined tasks. They need an easy-to-use, user-friendly, all-in-one studio not only for executing and running internet robots and crawlers, but also for (graphically) (re)defining and (re)composing crawlers according to dynamically changing requirements and use-cases. To realize the above-mentioned idea, the Cassiopeia framework has been designed and developed. One has to remember, however, that the enormous size and unimaginable structural complexity of the WWW are the reasons that, from a technical and architectural point of view, developing effective internet robots – and the more so developing a framework supporting graphical robot composition – becomes a really challenging task. The crucial aspect in the context of crawling efficiency and scalability is the concurrency model applied. There are two typical concurrency management models: classical concurrency based on pools of threads and processes, and event-driven concurrency. Neither is an ideal approach, which is why research on alternative models is still conducted to propose an efficient and convenient architecture for concurrent and distributed applications. One promising model is the staged event-driven architecture, which to some extent mixes both of the above-mentioned classical approaches and provides additional benefits, such as splitting the application into separate stages connected by event queues – which is interesting taking the requirements for crawler (re)composition into account. The goal of this paper is to present the idea and the PoC implementation of the Cassiopeia framework, with the special
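
    A minimal sketch of the staged event-driven idea described above: independent stages connected by event queues, each served by a small pool of worker threads. The stage names and the fetch/parse/store pipeline are illustrative placeholders, not Cassiopeia components.

```python
# Staged event-driven pipeline: each stage pulls events from its input queue and
# pushes results to the next stage's queue.
import queue
import threading
import time

def stage(name, in_q, out_q, handler, workers=2):
    def loop():
        while True:
            event = in_q.get()
            if event is None:            # poison pill: re-broadcast and stop
                in_q.put(None)
                break
            result = handler(event)
            if out_q is not None and result is not None:
                out_q.put(result)
    for _ in range(workers):
        threading.Thread(target=loop, name=name, daemon=True).start()

fetch_q, parse_q, store_q = queue.Queue(), queue.Queue(), queue.Queue()

stage("fetch", fetch_q, parse_q, handler=lambda url: (url, "<html>...</html>"))
stage("parse", parse_q, store_q, handler=lambda page: {"url": page[0], "links": []})
stage("store", store_q, None,    handler=lambda rec: print("stored", rec["url"]))

for url in ("http://example.com/a", "http://example.com/b"):
    fetch_q.put(url)
time.sleep(0.5)                          # let the daemon stages drain (demo only)
```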

  12. Mars Hybrid Propulsion System Trajectory Analysis. Part I; Crew Missions

    Science.gov (United States)

    Chai, Patrick R.; Merrill, Raymond G.; Qu, Min

    2015-01-01

    NASA's Human Spaceflight Architecture Team is developing a reusable hybrid transportation architecture in which both chemical and electric propulsion systems are used to send crew and cargo to Mars destinations such as Phobos, Deimos, the surface of Mars, and other orbits around Mars. By combining chemical and electric propulsion into a single spaceship and applying each where it is most effective, the hybrid architecture enables a series of Mars trajectories that are more fuel-efficient than an all-chemical architecture without significant increases in flight times. This paper provides the analysis of the interplanetary segments of the three Evolvable Mars Campaign crew missions to Mars using the hybrid transportation architecture. The trajectory analysis provides departure and arrival dates and propellant needs for the three crew missions, which are used by the campaign analysis team for campaign build-up and logistics aggregation analysis. Sensitivity analyses were performed to investigate the impact of mass growth, departure window, and propulsion system performance on the hybrid transportation architecture. The results and system analysis from this paper contribute to analyses of the other Human Spaceflight Architecture Team tasks and feed into the definition of the Evolvable Mars Campaign.

  13. Performance evaluation for compressible flow calculations on five parallel computers of different architectures

    International Nuclear Information System (INIS)

    Kimura, Toshiya.

    1997-03-01

    A two-dimensional explicit Euler solver has been implemented on five MIMD parallel computers of different machine architectures at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. These parallel computers are the Fujitsu VPP300, NEC SX-4, CRAY T94, IBM SP2, and Hitachi SR2201. The code was parallelized by several parallelization methods, and a typical compressible flow problem has been calculated for different grid sizes while changing the number of processors. The effective performance for parallel calculations, such as calculation speed, speed-up ratio and parallel efficiency, has been investigated and evaluated. The communication time among processors has also been measured and evaluated. As a result, the differences in performance and characteristics between vector-parallel and scalar-parallel computers can be identified, providing basic data for the efficient use of parallel computers and for large-scale CFD simulations on parallel computers. (author)
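
    The speed-up ratio and parallel efficiency mentioned above are conventionally defined as below (standard definitions, with T_1 the single-processor runtime and T_p the runtime on p processors); the record does not state which variant the authors used.

```latex
% Speed-up and parallel efficiency on p processors.
S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p}
```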

  14. Enhanced Engine Performance During Emergency Operation Using a Model-Based Engine Control Architecture

    Science.gov (United States)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of an aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40k (CMAPSS40k) and features an optimal tuner Kalman filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally conservative engine operating limits may be relaxed to increase the performance of the engine and the overall survivability of the aircraft; this comes at the cost of additional risk of an engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measurable. Estimating the unknown parameters allows for tighter control over these parameters, and over the level of risk at which the engine operates. This allows the engine to achieve better performance than is possible when operating to more conservative limits on a related, measurable parameter.
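
    For orientation, a generic linear Kalman filter predict/update step is sketched below to illustrate the estimation idea behind such tuner filters; the matrices and toy measurement are placeholders, not the optimal tuner Kalman filter or the CMAPSS40k engine model.

```python
# One generic Kalman filter cycle: predict the state, then correct it with a measurement.
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """Return the updated state estimate and covariance after measurement z."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage with a constant-velocity model and a single noisy position measurement.
x, P = np.zeros(2), np.eye(2)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
x, P = kalman_step(x, P, np.array([1.2]), A, H, Q=0.01 * np.eye(2), R=np.array([[0.5]]))
```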

  15. Module Architecture for in Situ Space Laboratories

    Science.gov (United States)

    Sherwood, Brent

    2010-01-01

    The paper analyzes internal outfitting architectures for space exploration laboratory modules. ISS laboratory architecture is examined as a baseline for comparison, and applicable insights are derived. Laboratory functional programs are defined for seven planet-surface knowledge domains. Necessary and value-added departures from the ISS architecture standard are defined, and three sectional interior architecture options are assessed for practicality and potential performance. Contemporary guidelines for terrestrial analytical laboratory design are found to be applicable to the in-space functional program. Dense-packed racks of system equipment and high module volume packing ratios should not be assumed as the default solution for exploration laboratories whose primary activities include un-scriptable investigations and experimentation on the system equipment itself.

  16. Students' motivation for architecture education in Uganda

    Directory of Open Access Journals (Sweden)

    Mark R.O. Olweny

    2017-09-01

    Full Text Available Understanding the persistence and success of students has gained increasing attention in efforts to unravel the “architectural education black-box.” However, the motivation and pre-socialization of incoming students have been largely ignored, as these factors fall outside the direct control of architecture schools. Motivational factors can affect the educational process, given that the values, expectations, and career-related goals of incoming students influence their attitudes to education. This study seeks to uncover the motivational factors of applicants to an architecture program in East Africa and to appreciate the factors that lead students into architecture as a career choice. Through qualitative content analysis, the study revealed the motivational factors of applicants, which were classified into four groups: educational, external, personal, and prestige. These factors were comparable with those found in previous studies conducted in Europe and North America, but nevertheless highlight contextual variances unique to the region. The findings raise questions about the role of architecture education in engaging incoming students in discourse that aids their understanding of architecture and architectural education.

  17. Architecture and memory

    Directory of Open Access Journals (Sweden)

    Eneida de Almeida

    2015-12-01

    Full Text Available This paper investigates the links between architectural design and restoration, considering the blurry frontier that distinguishes these actions. The study focuses on the work of two contemporary architects: Lina Bo Bardi (1914-1992) and Aldo Rossi (1931-1997). The analysis of their built production, represented here by one work from each architect – Sesc Pompeia and the Teatro del Mondo – is based on the capacity for reflection on the role of memory in architecture: not only memory in the materiality of buildings and urban fabrics, but also memory as an active instrument within the mental processes adopted by the projects' authors. Drawing on the architects' writings, as well as on authors who analyse these interventions, the paper seeks to reconstitute the design development path, recognizing the strategy of reinterpreting past experiences in order to overcome the traditional contraposition between "old" and "new", protection and innovation.

  18. Architecture of a spatial data service system for statistical analysis and visualization of regional climate changes

    Science.gov (United States)

    Titov, A. G.; Okladnikov, I. G.; Gordov, E. P.

    2017-11-01

    The use of large geospatial datasets in climate change studies requires the development of a set of Spatial Data Infrastructure (SDI) elements, including geoprocessing and cartographical visualization web services. This paper presents the architecture of a geospatial OGC web service system as an integral part of the general architecture of a virtual research environment (VRE) for statistical processing and visualization of meteorological and climatic data. The architecture is a set of interconnected standalone SDI nodes with corresponding data storage systems. Each node runs specialized software, such as a geoportal, cartographical web services (WMS/WFS), a metadata catalog, and a MySQL database of technical metadata describing the geospatial datasets available on the node. It also contains geospatial data processing services (WPS) based on a modular computing backend realizing the statistical processing functionality, thus providing analysis of large datasets with visualization of the results and export to files in standard formats (XML, binary, etc.). Some cartographical web services have been developed in a system prototype to provide capabilities for working with raster and vector geospatial data based on OGC web services. The distributed architecture presented allows easy addition of new nodes, computing and data storage systems, and provides a solid computational infrastructure for regional climate change studies based on modern Web and GIS technologies.
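
    A hedged example of the kind of OGC WMS GetMap request such a node's cartographical service would answer; the endpoint URL, layer name and bounding box are hypothetical placeholders, not the system's actual services.

```python
# Fetch a rendered map image from a (hypothetical) WMS endpoint via a GetMap request.
import requests

WMS_ENDPOINT = "http://example.org/geoserver/wms"     # placeholder node endpoint

params = {
    "service": "WMS",
    "version": "1.1.1",
    "request": "GetMap",
    "layers": "climate:mean_temperature_anomaly",     # placeholder layer name
    "styles": "",
    "srs": "EPSG:4326",
    "bbox": "60,50,120,80",                           # lon/lat window (illustrative)
    "width": 800,
    "height": 400,
    "format": "image/png",
}
response = requests.get(WMS_ENDPOINT, params=params, timeout=30)
with open("anomaly_map.png", "wb") as f:
    f.write(response.content)
```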

  19. Enterprise architecture evaluation using architecture framework and UML stereotypes

    Directory of Open Access Journals (Sweden)

    Narges Shahi

    2014-08-01

    Full Text Available There is an increasing need for enterprise architecture in numerous organizations with complicated systems, various processes, information technology support, and organizational units whose elements maintain complex relationships. Enterprise architecture is so effective that its non-use in organizations is regarded as an institutional inability to manage information technology efficiently. The enterprise architecture process generally consists of three phases: strategic programming of information technology, enterprise architecture programming, and enterprise architecture implementation. Each phase must be implemented sequentially, and a single flaw in any phase may result in a flaw in the whole architecture and, consequently, in extra cost and time. If a model of the problem is produced and evaluated before enterprise architecture implementation in the second phase, possible flaws in the implementation process can be prevented. In this study, the processes of enterprise architecture are illustrated through UML diagrams, and the architecture is evaluated in the programming phase by transforming the UML diagrams into Petri nets. The results indicate that the high costs of the implementation phase will be reduced.

  20. Distributed Large Data-Object Environments: End-to-End Performance Analysis of High Speed Distributed Storage Systems in Wide Area ATM Networks

    Science.gov (United States)

    Johnston, William; Tierney, Brian; Lee, Jason; Hoo, Gary; Thompson, Mary

    1996-01-01

    We have developed and deployed a distributed-parallel storage system (DPSS) in several high speed asynchronous transfer mode (ATM) wide area network (WAN) testbeds to support several different types of data-intensive applications. Architecturally, the DPSS is a network striped disk array, but it is fairly unique in that its implementation allows applications complete freedom to determine optimal data layout, replication and/or coding redundancy strategy, security policy, and dynamic reconfiguration. In conjunction with the DPSS, we have developed a 'top-to-bottom, end-to-end' performance monitoring and analysis methodology that has allowed us to characterize all aspects of the DPSS operating in high speed ATM networks. In particular, we have run a variety of performance monitoring experiments involving the DPSS in the MAGIC testbed, which is a large scale, high speed ATM network, and we describe our experience using the monitoring methodology to identify and correct problems that limit the performance of high speed distributed applications. Finally, the DPSS is part of an overall architecture for using high speed WANs to enable the routine, location-independent use of large data-objects. Since this is part of the motivation for a distributed storage system, we describe this architecture.

  1. Supporting migration to services using software architecture reconstruction

    OpenAIRE

    O'Brien, Liam; Smith, Dennis; Lewis, Grace

    2005-01-01

    Peer-reviewed. There are many good reasons why organizations should perform software architecture reconstructions. However, few organizations are willing to pay for the effort. Software architecture reconstruction must be viewed not as an effort on its own but as a contribution in a broader technical context, such as the streamlining of products into a product line or the modernization of systems that hit their architectural borders, that is, require major restructuring. In this paper we ...

  2. Reasons for Implementing Movement in Kinetic Architecture

    Science.gov (United States)

    Cudzik, Jan; Nyka, Lucyna

    2017-10-01

    The paper gives insights into different forms of movement in contemporary architecture and examines them based on the reasons for their implementation. The main objective of the paper is to determine the degree to which the complexity of kinematic architecture results from functional and spatial needs, and what other motivations there are. The method adopted to investigate these questions involves theoretical studies and comparative analyses of architectural objects with different forms of movement embedded in their structure. Using both methods allowed delving into the reasons that lie behind the implementation of movement in contemporary kinetic architecture. As research shows, there is a constantly growing range of applications with kinematic solutions inserted in buildings’ structures. The reasons for their implementation are manifold and encompass pursuits of functional qualities, environmental performance, spatial effects, social interactions and new aesthetics. In early projects based on simple mechanisms, the main motives were focused on functional values, and in later experiments on improving buildings’ environmental performance. Additionally, in recent proposals, a significant quest can be detected toward kinematic solutions focused on factors related to alternative aesthetics and innovative spatial effects. Research reveals that the more complicated the form of movement, the more often the reason for its implementation goes beyond the traditionally understood “function”. However, research also shows that the effects resulting from investigations of the spatial qualities of architecture and new aesthetics often appear to provide creative insights into new functionalities in architecture.

  3. An Overview on SDN Architectures with Multiple Controllers

    Directory of Open Access Journals (Sweden)

    Othmane Blial

    2016-01-01

    Full Text Available Software-defined networking offers several benefits for networking by separating the control plane from the data plane. However, network scalability, reliability, and availability remain big issues. Accordingly, multicontroller architectures are important for SDN-enabled networks. This paper gives a comprehensive overview of SDN multicontroller architectures. It presents SDN and its main instantiation, OpenFlow. Then, it explains in detail the differences between multiple types of multicontroller architectures, such as the distribution method and the communication system. Furthermore, it provides examples of multicontroller architectures that are already implemented or under research, describing their design, their communication process, and their performance results.

  4. Comparative Study of Bio-implantable Acoustic Generator Architectures

    International Nuclear Information System (INIS)

    Christensen, D; Roundy, S

    2013-01-01

    This paper is a comparative study of the design spaces of two bio-implantable acoustically excited generator architectures: the thickness-stretch-mode circular piezoelectric plate and the bending-mode unimorph piezoelectric diaphragm. The generators are part of an acoustic power transfer system for implanted sensors and medical devices such as glucose monitors, metabolic monitors, drug delivery systems, etc. Our studies indicate that at small sizes the diaphragm architecture outperforms the plate architecture. This paper will present the results of simulation studies and initial experiments that explore the characteristics of the two architectures and compare their performance

  5. Lightweight enterprise architectures

    CERN Document Server

    Theuerkorn, Fenix

    2004-01-01

    STATE OF ARCHITECTURE: Architectural Chaos; Relation of Technology and Architecture; The Many Faces of Architecture; The Scope of Enterprise Architecture; The Need for Enterprise Architecture; The History of Architecture; The Current Environment; Standardization Barriers; The Need for Lightweight Architecture in the Enterprise; The Cost of Technology; The Benefits of Enterprise Architecture; The Domains of Architecture; The Gap between Business and IT; Where Does LEA Fit?; LEA's Framework; Frameworks, Methodologies, and Approaches; The Framework of LEA; Types of Methodologies; Types of Approaches; Actual System Environmen...

  6. SEMICONDUCTOR INTEGRATED CIRCUITS: A high performance 90 nm CMOS SAR ADC with hybrid architecture

    Science.gov (United States)

    Xingyuan, Tong; Jianming, Chen; Zhangming, Zhu; Yintang, Yang

    2010-01-01

    A 10-bit 2.5 MS/s SAR A/D converter is presented. In the circuit design, an R-C hybrid architecture D/A converter, a pseudo-differential comparison architecture and low power voltage level shifters are utilized. Design challenges and considerations are also discussed. In the layout design, each unit resistor is sided by dummies for good matching performance, and the capacitors are routed with a common-central symmetry method to reduce the nonlinearity error. The proposed converter is implemented in a 90 nm CMOS logic process. With a 3.3 V analog supply and a 1.0 V digital supply, the differential and integral nonlinearity are measured to be less than 0.36 LSB and 0.69 LSB respectively. With an input frequency of 1.2 MHz at a 2.5 MS/s sampling rate, the SFDR and ENOB are measured to be 72.86 dB and 9.43 bits respectively, and the power dissipation is measured to be 6.62 mW including the output drivers. This SAR A/D converter occupies an area of 238 × 214 μm2. The design results show that the converter is suitable for multi-supply embedded SoC applications.

  7. Layered Architecture for Quantum Computing

    Directory of Open Access Journals (Sweden)

    N. Cody Jones

    2012-07-01

    Full Text Available We develop a layered quantum-computer architecture, which is a systematic framework for tackling the individual challenges of developing a quantum computer while constructing a cohesive device design. We discuss many of the prominent techniques for implementing circuit-model quantum computing and introduce several new methods, with an emphasis on employing surface-code quantum error correction. In doing so, we propose a new quantum-computer architecture based on optical control of quantum dots. The time scales of physical-hardware operations and logical, error-corrected quantum gates differ by several orders of magnitude. By dividing functionality into layers, we can design and analyze subsystems independently, demonstrating the value of our layered architectural approach. Using this concrete hardware platform, we provide resource analysis for executing fault-tolerant quantum algorithms for integer factoring and quantum simulation, finding that the quantum-dot architecture we study could solve such problems on the time scale of days.

  8. Operations and maintenance performance in oil and gas production assets. Theoretical architecture and capital value theory in perspective

    Energy Technology Data Exchange (ETDEWEB)

    Liyanage, Jayantha P.

    2003-07-01

    to visualize how operations and maintenance performance makes good business sense, and more balanced information and knowledge requirements to support decision settings. The thesis emphasizes that, despite popular demand on this issue, the subject matter has not been fully explored within the oil and gas business environment, and even the few more recent contributions have not adequately addressed it. The underlying challenges are attributed in this thesis to the socio-technical complexity and causal ambiguity of operations and maintenance performance within organizational settings, and it further emphasizes that the degree of such complexity and ambiguity is defined by the extent of information and knowledge asymmetries on performance. To address the technical alienation of operations and maintenance performance in oil and gas business terms, the thesis attempts to establish a link between the oil and gas business, oil and gas production assets, and operations and maintenance performance. The underlying assertion is that the oil and gas production portfolio of any organization plays a specific role with respect to what matters for the commercial success of the business, and that this role in turn is the basis for redefining the mission of operations and maintenance. This mission remains the point of departure for the systematic development of an operations and maintenance performance architecture. The theoretical architecture brought into perspective in this thesis addresses the socio-technical complexity by dimensioning operations and maintenance performance into its constituent components, and simultaneously addresses the causal ambiguity by incorporating a logic into this dimensioning process. Equally importantly, it also pays attention to the relevance, completeness, and flexibility of the architecture as necessary. Moving further, the thesis elaborates on how this theoretical architecture can be extended

  9. Operations and maintenance performance in oil and gas production assets. Theoretical architecture and capital value theory in perspective

    International Nuclear Information System (INIS)

    Liyanage, Jayantha P.

    2003-01-01

    to visualize how operations and maintenance performance makes good business sense, and more balanced information and knowledge requirements to support decision settings. The thesis emphasizes that, despite popular demand on this issue, the subject matter has not been fully explored within the oil and gas business environment, and even the few more recent contributions have not adequately addressed it. The underlying challenges are attributed in this thesis to the socio-technical complexity and causal ambiguity of operations and maintenance performance within organizational settings, and it further emphasizes that the degree of such complexity and ambiguity is defined by the extent of information and knowledge asymmetries on performance. To address the technical alienation of operations and maintenance performance in oil and gas business terms, the thesis attempts to establish a link between the oil and gas business, oil and gas production assets, and operations and maintenance performance. The underlying assertion is that the oil and gas production portfolio of any organization plays a specific role with respect to what matters for the commercial success of the business, and that this role in turn is the basis for redefining the mission of operations and maintenance. This mission remains the point of departure for the systematic development of an operations and maintenance performance architecture. The theoretical architecture brought into perspective in this thesis addresses the socio-technical complexity by dimensioning operations and maintenance performance into its constituent components, and simultaneously addresses the causal ambiguity by incorporating a logic into this dimensioning process. Equally importantly, it also pays attention to the relevance, completeness, and flexibility of the architecture as necessary. Moving further, the thesis elaborates on how this theoretical architecture can be extended

  10. 1D Co2.18Ni0.82Si2O5(OH)4 architectures assembled by ultrathin nanoflakes for high-performance flexible solid-state asymmetric supercapacitors

    Science.gov (United States)

    Zhao, Junhong; Zheng, Mingbo; Run, Zhen; Xia, Jing; Sun, Mengjun; Pang, Huan

    2015-07-01

    1D Co2.18Ni0.82Si2O5(OH)4 architectures assembled from ultrathin nanoflakes are synthesized for the first time by a hydrothermal method. We present a self-reacting template method to synthesize 1D Co2.18Ni0.82Si2O5(OH)4 architectures using Ni(SO4)0.3(OH)1.4 nanobelts. A high-performance flexible asymmetric solid-state supercapacitor can be successfully fabricated based on the 1D Co2.18Ni0.82Si2O5(OH)4 architectures and graphene nanosheets. Interestingly, the as-assembled 1D Co2.18Ni0.82Si2O5(OH)4 architectures//graphene nanosheets asymmetric solid-state supercapacitor can achieve a maximum energy density of 0.496 mWh cm-3, which is higher than that of most reported solid-state supercapacitors. Additionally, the device shows high cycle stability over 10,000 cycles. These features make the 1D Co2.18Ni0.82Si2O5(OH)4 architectures one of the most promising candidates for high-performance energy storage devices.

  11. Indigenous architecture as a context-oriented architecture, a look at ...

    African Journals Online (AJOL)

    What has become problematic with the achievement of the International Style and the globalization of architecture over time has been the purely technological look at architecture, and an architecture without belonging to a place. In recent decades, the topic of sustainable architecture and reconsidering indigenous architecture ...

  12. New Energy Architecture. Myanmar

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-06-15

    from national and international public and private sectors and from civil society. This report is structured as follows. First, the New Energy Architecture methodology is outlined. In Step 1, the performance of the country's current energy architecture is assessed. Step 2 describes the setting of the objectives of the New Energy Architecture. Step 3 outlines insights to support the development of a New Energy Architecture, and highlights potential risks in achieving this. Step 4 then discusses the need for leadership and multistakeholder partnerships to support the implementation of a New Energy Architecture in Myanmar.

  13. New Energy Architecture. Myanmar

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-06-15

    inform their findings, which have come from national and international public and private sectors and from civil society. This report is structured as follows. First, the New Energy Architecture methodology is outlined. In Step 1, the performance of the country's current energy architecture is assessed. Step 2 describes the setting of the objectives of the New Energy Architecture. Step 3 outlines insights to support the development of a New Energy Architecture, and highlights potential risks in achieving this. Step 4 then discusses the need for leadership and multistakeholder partnerships to support the implementation of a New Energy Architecture in Myanmar.

  14. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers

  15. Architecture for the silvering society

    DEFF Research Database (Denmark)

    Andersson, Jonas E; Rönn, Magnus

    In the context of the universal ageing process that is currently taking place in western society, the organization of architecture competitions that deal with space for dependent ageing comes of relevance. Based on the welfare regime theory, it could be argued that this type ... by the Swedish Institute of Assistive Technology (SIAT), which administered the governmental allocation of 50 million SEK. The research material was accumulated by use of internet searches, interviews and questionnaires. The analysis applied pattern seeking and involved close reading, document analysis ... on ageing, eldercare and space. Consequently, architecture competitions that focus on the emerging ageing society could be seen as a restrained type of space for architects to digress. National welfare goals and existing means to achieve these goals act as inhibitors for an innovative spatial preparation ...

  16. Three-Dimensional Nanobiocomputing Architectures With Neuronal Hypercells

    Science.gov (United States)

    2007-06-01

    ... Neumann architectures, and CMOS fabrication. Novel solutions of massive parallel distributed computing and processing (pipelined due to systolic ... and processing platforms utilizing molecular hardware within an enabling organization and architecture. The design technology is based on utilizing a ... Microsystems and Nanotechnologies investigated a novel 3D3 (Hardware Software Nanotechnology) technology to design super-high performance computing ...

  17. Architecture in the Islamic Civilization: Muslim Building or Islamic Architecture

    OpenAIRE

    Yassin, Ayat Ali; Utaberta, Dr. Nangkula

    2012-01-01

    The main problem of theory in the arena of Islamic architecture is that it is affected by Western thought and by stereotyping Islamic architecture according to Western thought; this leads to the breakdown of the foundations of Islamic architecture. It is a myth that Islamic architecture is subject to influence from foreign architectures. This paper will highlight the dialectical concept of Islamic architecture or Muslim buildings and the areas of recognition in Islamic architec...

  18. Virtualized cognitive network architecture for 5G cellular networks

    KAUST Repository

    Elsawy, Hesham

    2015-07-17

    Cellular networks have preserved an application agnostic and base station (BS) centric architecture for decades. Network functionalities (e.g. user association) are decided and performed regardless of the underlying application (e.g. automation, tactile Internet, online gaming, multimedia). Such an ossified architecture imposes several hurdles against achieving the ambitious metrics of next generation cellular systems. This article first highlights the features and drawbacks of such architectural ossification. Then the article proposes a virtualized and cognitive network architecture, wherein network functionalities are implemented via software instances in the cloud, and the underlying architecture can adapt to the application of interest as well as to changes in channels and traffic conditions. The adaptation is done in terms of the network topology by manipulating connectivities and steering traffic via different paths, so as to attain the applications' requirements and network design objectives. The article presents cognitive strategies to implement some of the classical network functionalities, along with their related implementation challenges. The article further presents a case study illustrating the performance improvement of the proposed architecture as compared to conventional cellular networks, both in terms of outage probability and handover rate.

  19. National Launch System comparative economic analysis

    Science.gov (United States)

    Prince, A.

    1992-01-01

    Results are presented from an analysis of economic benefits (or losses), in the form of life-cycle cost savings, resulting from the development of the National Launch System (NLS) family of launch vehicles. The analysis was carried out by comparing various NLS-based architectures with the current Shuttle/Titan IV fleet. The basic methodology behind this NLS analysis was to develop a set of annual payload requirements for the Space Station Freedom and LEO, to design launch vehicle architectures around these requirements, and to perform life-cycle cost analyses on all of the architectures. An SEI requirement was included. Launch failure costs were estimated and combined with the relative reliability assumptions to measure the effects of losses. Based on the analysis, a Shuttle/NLS architecture evolving into a pressurized-logistics-carrier/NLS architecture appears to offer the best long-term cost benefit.
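
    A minimal sketch of the kind of reliability-weighted life-cycle cost comparison described above (Python); the cost and reliability figures are illustrative placeholders, not values from the study:

        def life_cycle_cost(dev_cost, cost_per_flight, flights_per_year, years,
                            reliability, payload_value):
            """Total cost = development + operations + expected losses from launch failures."""
            flights = flights_per_year * years
            ops = flights * cost_per_flight
            expected_failures = flights * (1.0 - reliability)
            failure_cost = expected_failures * (cost_per_flight + payload_value)
            return dev_cost + ops + failure_cost

        # Hypothetical architectures (all numbers are placeholders, in $B over a 30-year horizon).
        current_fleet = life_cycle_cost(dev_cost=0.0,  cost_per_flight=0.55, flights_per_year=12,
                                        years=30, reliability=0.98, payload_value=1.0)
        nls_evolved   = life_cycle_cost(dev_cost=12.0, cost_per_flight=0.30, flights_per_year=12,
                                        years=30, reliability=0.99, payload_value=1.0)

        print(f"current fleet: {current_fleet:.1f}  NLS-based: {nls_evolved:.1f}  "
              f"savings: {current_fleet - nls_evolved:.1f}")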

  20. How does Architecture Sound for Different Musical Instrument Performances?

    DEFF Research Database (Denmark)

    Saher, Konca; Rindel, Jens Holger

    2006-01-01

    This paper discusses how consideration of sound, in particular of a specific musical instrument, impacts the design of a room. Properly designed architectural acoustics is fundamental to improving the listening experience of an instrument in the rooms of a conservatory. Six discrete instruments (violin, c... ... different instruments and the choir experience that could fit into the same category of room. For all calculations and the auralizations, a computational model is used: ODEON 7.0.

  1. Design of a Load-Balancing Architecture For Parallel Firewalls

    National Research Council Canada - National Science Library

    Joyner, William

    1999-01-01

    .... This thesis proposes a load-balancing firewall architecture to meet the Navy's needs. It first conducts an architectural analysis of the problem and then presents a high-level system design as a solution...

  2. Program Execution on Reconfigurable Multicore Architectures

    Directory of Open Access Journals (Sweden)

    Sanjiva Prasad

    2016-06-01

    Full Text Available Based on the two observations that diverse applications perform better on different multicore architectures, and that different phases of an application may have vastly different resource requirements, Pal et al. proposed a novel reconfigurable hardware approach for executing multithreaded programs. Instead of mapping a concurrent program to a fixed architecture, the architecture adaptively reconfigures itself to meet the application's concurrency and communication requirements, yielding significant improvements in performance. Based on our earlier abstract operational framework for multicore execution with hierarchical memory structures, we describe execution of multithreaded programs on reconfigurable architectures that support a variety of clustered configurations. Such reconfiguration may not preserve the semantics of programs due to the possible introduction of race conditions arising from concurrent accesses to shared memory by threads running on the different cores. We present an intuitive partial ordering notion on the cluster configurations, and show that the semantics of multithreaded programs is always preserved for reconfigurations "upward" in that ordering, whereas semantics preservation for arbitrary reconfigurations can be guaranteed for well-synchronised programs. We further show that a simple approximate notion of efficiency of execution on the different configurations can be obtained using the notion of amortised bisimulations, and extend it to dynamic reconfiguration.

  3. Ragnarok: An Architecture Based Software Development Environment

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    of the development process. The main contributions presented in the thesis have evolved from work with two of the hypotheses: these address the problems of management of evolution, and of overview, comprehension and navigation, respectively. The first main contribution is the Architectural Software Configuration Management Model: a software configuration management model where the abstractions and hierarchy of the logical aspect of software architecture form the basis for version control and configuration management. The second main contribution is the Geographic Space Architecture Visualisation Model: a visualisation model where entities in a software architecture are organised geographically in a two-dimensional plane, their visual appearance determined by processing a subset of the data in the entities, and interaction with the project's underlying data performed by direct manipulation of the landscape...

  4. Kalman filter tracking on parallel architectures

    Science.gov (United States)

    Cerati, G.; Elmer, P.; Krutelyov, S.; Lantz, S.; Lefebvre, M.; McDermott, K.; Riley, D.; Tadel, M.; Wittich, P.; Wurthwein, F.; Yagil, A.

    2017-10-01

    We report on the progress of our studies towards a Kalman filter track reconstruction algorithm with optimal performance on manycore architectures. The combinatorial structure of these algorithms is not immediately compatible with an efficient SIMD (or SIMT) implementation; the challenge for us is to recast the existing software so it can readily generate hundreds of shared-memory threads that exploit the underlying instruction set of modern processors. We show how the data and associated tasks can be organized in a way that is conducive to both multithreading and vectorization. We demonstrate very good performance on Intel Xeon and Xeon Phi architectures, as well as promising first results on Nvidia GPUs.
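
    A minimal sketch of the structure-of-arrays idea behind such vectorization (Python/NumPy): many tracks are propagated and updated in one batched Kalman step rather than one track at a time. The 2-state constant-velocity model and scalar position measurement are illustrative choices, not the experiment's actual track model:

        import numpy as np

        def batched_kalman_step(x, P, z, dt=1.0, q=1e-3, r=0.1):
            """One predict+update for N tracks at once. x: (N,2) [pos, vel], P: (N,2,2), z: (N,) positions."""
            F = np.array([[1.0, dt], [0.0, 1.0]])
            Q = q * np.eye(2)
            # Predict (the 2x2 matrices broadcast over the track dimension).
            x = x @ F.T
            P = F @ P @ F.T + Q
            # Update with H = [1, 0] (measure position only), so S and K have closed forms.
            y = z - x[:, 0]                      # innovation, shape (N,)
            S = P[:, 0, 0] + r                   # innovation variance, shape (N,)
            K = P[:, :, 0] / S[:, None]          # Kalman gain, shape (N, 2)
            x = x + K * y[:, None]
            P = P - np.einsum('ni,nj->nij', K, P[:, 0, :])
            return x, P

        # 100k tracks updated in a single vectorized call.
        N = 100_000
        x = np.zeros((N, 2))
        P = np.tile(np.eye(2), (N, 1, 1))
        z = np.random.normal(0.0, 0.3, size=N)
        x, P = batched_kalman_step(x, P, z)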

  5. Rasch family models in e-learning: analyzing architectural sketching with a digital pen.

    Science.gov (United States)

    Scalise, Kathleen; Cheng, Nancy Yen-Wen; Oskui, Nargas

    2009-01-01

    Since architecture students studying design drawing are usually assessed qualitatively on the basis of their final products, the challenges and stages of their learning have remained masked. To clarify the challenges in design drawing, we have been using the BEAR Assessment System and Rasch family models to measure levels of understanding for individuals and groups, in order to correct pedagogical assumptions and tune teaching materials. This chapter discusses the analysis of 81 drawings created by architectural students to solve a space layout problem, collected and analyzed with digital pen-and-paper technology. The approach allows us to map developmental performance criteria and perceive achievement overlaps in learning domains assumed separate, and then re-conceptualize a three-part framework to represent learning in architectural drawing. Results and measurement evidence from the assessment and Rasch modeling are discussed.
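
    A minimal sketch of the dichotomous Rasch model on which such analyses rest (Python/NumPy); the simulated response matrix and the crude difficulty estimate are illustrative only (operational work like the BEAR system uses proper joint or marginal maximum-likelihood estimation):

        import numpy as np

        def rasch_prob(theta, b):
            """P(correct) for person abilities theta and item difficulties b (dichotomous Rasch model)."""
            return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

        rng = np.random.default_rng(0)
        n_students, n_items = 81, 10                 # 81 drawings scored on 10 illustrative criteria
        theta_true = rng.normal(0.0, 1.0, n_students)
        b_true = np.linspace(-1.5, 1.5, n_items)

        # Simulate a 0/1 response (scoring) matrix.
        responses = (rng.random((n_students, n_items)) < rasch_prob(theta_true, b_true)).astype(int)

        # Crude difficulty estimate: log-odds of the proportion answering each item correctly,
        # centred to the Rasch convention of mean item difficulty zero.
        p = responses.mean(axis=0).clip(0.01, 0.99)
        b_hat = np.log((1 - p) / p)
        b_hat -= b_hat.mean()
        print(np.round(b_hat, 2))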

  6. Fabrication of monodispersed nickel flower-like architectures via a solvent-thermal process and analysis of their magnetic and electromagnetic properties

    International Nuclear Information System (INIS)

    Kong Jing; Liu Wei; Wang Fenglong; Wang Xinzhen; Luan Liqiang; Liu Jiurong; Wang Yuan; Zhang Zijun; Itoh, Masahiro; Machida, Ken-ichi

    2011-01-01

    Monodispersed Ni flower-like architectures with sizes of 1-2 μm were synthesized through a facile solvent-thermal process in 1,2-propanediol solution in the presence of polyethylene glycol (PEG) and sodium alkali for electromagnetic absorption applications. The Ni architectures are composed of nanoflakes, which assemble to form a three-dimensional flower-like structure; the thickness of the nanoflakes is about 10-40 nm. A possible formation mechanism for the Ni flower-like architectures was proposed and confirmed by control experiments. The Ni architectures exhibited a saturation magnetization (Ms) of 47.7 emu/g and a large coercivity (Hcj) of 332.3 Oe. The epoxy resin composites with the 20 vol% Ni sample provided good electromagnetic wave absorption performance: efficient electromagnetic absorption (RL < -20 dB) was obtained in the 2.8-6.3 GHz range.

  7. Proactive Modeling of Market, Product and Production Architectures

    DEFF Research Database (Denmark)

    Mortensen, Niels Henrik; Hansen, Christian Lindschou; Hvam, Lars

    2011-01-01

    This paper presents an operational model that allows description of market, product and production architectures. The main feature of this model is the ability to describe both the structural and the functional aspects of architectures. The structural aspect is an answer to the question: What constitutes the architecture, e.g. standard designs, design units and interfaces? The functional aspect is an answer to the question: What is the behaviour of the architecture, what is it able to do, i.e. which products at which performance levels can be derived from the architecture? Among the most important benefits of this model is the explicit ability to describe what the architecture is prepared for, and what it is not prepared for, concerning development of future derivative products. The model has been applied in a large scale global product development project. Among the most important benefits is contribution to...

  8. The Experimental Physics and Industrial Control System architecture: Past, present, and future

    International Nuclear Information System (INIS)

    Dalesio, L.R.; Hill, J.O.; Kraimer, M.; Lewis, S.; Murray, D.; Hunt, S.; Claussen, M.; Watson, W.

    1993-01-01

    The Experimental Physics and Industrial Control System (EPICS) has been used at a number of sites for performing data acquisition, supervisory control, closed-loop control, sequential control, and operational optimization. The EPICS architecture was originally developed by a group with diverse backgrounds in physics and industrial control. The current architecture represents one instance of the 'standard model'. It provides distributed processing and communication from any LAN device to the front end controllers. This paper will present the genealogy, current architecture, performance envelope, current installations, and planned extensions for requirements not met by the current architecture.
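
    A minimal sketch of client-side access to such a distributed control system using Channel Access from Python (the pyepics package is assumed to be available, and the process-variable names are hypothetical):

        from epics import caget, caput  # pyepics Channel Access client (assumed installed)

        # Read a (hypothetical) beam-current process variable from a front-end controller.
        current = caget("SR:BEAM:CURRENT")
        print("beam current:", current)

        # Write a new setpoint to a (hypothetical) power-supply record, waiting for completion.
        caput("PS:QUAD1:CURRENT_SP", 12.5, wait=True)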

  9. Multi-bed patient room architectural evaluation

    Directory of Open Access Journals (Sweden)

    Evangelia Sklavou

    2016-12-01

    Full Text Available Introduction: Leveraging the physical environment's merits is crucial in healthcare settings towards fostering sustainable healing conditions. In the future, the need to retrofit hospitals already appears more probable than the need to build new facilities. In Greece, holistic healthcare architecture has significant potential and room to develop. Aim: The architectural research of the multi-bed patient room environment. Method: A sample of multi-bed patient rooms of a Greek hospital was studied through architectural documentation and a user evaluation survey. Beyond recording the existing situation and user experience, user group differences and the influence of window proximity were studied. The survey sample was based on convenience and comprised 160 patients and 136 visitors. Statistical analysis was performed in SPSS 20, using chi-square exact tests of independence. The chosen level of significance was p < 0.05. Results: Architectural documentation showed that the building morphology had a positive impact on patient rooms with regard to sunlight penetration and view. Further solar daylight control was deemed necessary to facilitate overall environmental comfort conditions. High spatial density and considerable disadvantages of the middle patient bed, compared to the one beside the window and the one further in the back of the room, were also ascertained. User groups did not evaluate their surroundings significantly differently, with the exception of ease of access to the view. Window proximity influenced both patients and visitors in evaluating ease of access to the view and visual discomfort. Patients were further affected in evaluating window size and visitors on view-related aspects. Conclusions: Synergy between building form and function contributes to creating holistic sustainable healing environments. User evaluation can deviate from objective documentation. Patients and visitors experienced the patient room in a similar manner. The middle bed was

  10. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    Science.gov (United States)

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.
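
    A minimal sketch of the kind of block/thread configuration search that a two-level parallel optimization performs (Python); the per-SM resource limits and kernel requirements below are illustrative placeholders, not Fermi/Kepler/Maxwell data from the paper:

        # Pick the block size that maximizes resident threads per SM under simple resource limits.
        SM_LIMITS = {"max_threads": 2048, "max_blocks": 16, "registers": 65536, "shared_mem": 49152}

        def best_block_size(regs_per_thread, shared_per_block, candidates=(64, 128, 256, 512, 1024)):
            best = None
            for block in candidates:
                blocks_by_threads = SM_LIMITS["max_threads"] // block
                blocks_by_regs = SM_LIMITS["registers"] // (regs_per_thread * block)
                blocks_by_smem = (SM_LIMITS["shared_mem"] // shared_per_block
                                  if shared_per_block else SM_LIMITS["max_blocks"])
                resident_blocks = min(blocks_by_threads, blocks_by_regs, blocks_by_smem,
                                      SM_LIMITS["max_blocks"])
                resident_threads = resident_blocks * block
                if best is None or resident_threads > best[1]:
                    best = (block, resident_threads)
            return best

        block, threads = best_block_size(regs_per_thread=32, shared_per_block=4096)
        print(f"choose {block} threads/block -> {threads} resident threads per SM")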

  11. EMI Security Architecture

    CERN Document Server

    White, J.; Schuller, B.; Qiang, W.; Groep, D.; Koeroo, O.; Salle, M.; Sustr, Z.; Kouril, D.; Millar, P.; Benedyczak, K.; Ceccanti, A.; Leinen, S.; Tschopp, V.; Fuhrmann, P.; Heyman, E.; Konstantinov, A.

    2013-01-01

    This document describes the various architectures of the three middlewares that comprise the EMI software stack. It also outlines the common efforts in the security area that allow interoperability between these middlewares. The assessment of the EMI Security presented in this document was performed internally by members of the Security Area of the EMI project.

  12. Applications of an architecture design and assessment system (ADAS)

    Science.gov (United States)

    Gray, F. Gail; Debrunner, Linda S.; White, Tennis S.

    1988-01-01

    A new Architecture Design and Assessment System (ADAS) tool package is introduced, and a range of possible applications is illustrated. ADAS was used to evaluate the performance of an advanced fault-tolerant computer architecture in a modern flight control application. Bottlenecks were identified and possible solutions suggested. The tool was also used to inject faults into the architecture and evaluate the synchronization algorithm, and improvements are suggested. Finally, ADAS was used as a front end research tool to aid in the design of reconfiguration algorithms in a distributed array architecture.

  13. Genome-wide association analysis reveals distinct genetic architectures for single and combined stress responses in Arabidopsis thaliana

    NARCIS (Netherlands)

    Davila Olivas, Nelson H.; Kruijer, Willem; Gort, Gerrit; Wijnen, Cris L.; Loon, van Joop J.A.; Dicke, Marcel

    2017-01-01

    Plants are commonly exposed to abiotic and biotic stresses. We used 350 Arabidopsis thaliana accessions grown under controlled conditions. We employed genome-wide association analysis to investigate the genetic architecture and underlying loci involved in genetic variation in resistance to: two

  14. Wavy Architecture Thin-Film Transistor for Ultrahigh Resolution Flexible Displays

    KAUST Repository

    Hanna, Amir Nabil; Kutbee, Arwa Talal; Subedi, Ram Chandra; Ooi, Boon S.; Hussain, Muhammad Mustafa

    2017-01-01

    A novel wavy-shaped thin-film-transistor (TFT) architecture, capable of achieving 70% higher drive current per unit chip area when compared with planar conventional TFT architectures, is reported for flexible display application. The transistor, due to its atypical architecture, does not alter the turn-on voltage or the OFF current values, leading to higher performance without compromising static power consumption. The concept behind this architecture is expanding the transistor's width vertically through grooved trenches in a structural layer deposited on a flexible substrate. Operation of zinc oxide (ZnO)-based TFTs is shown down to a bending radius of 5 mm with no degradation in the electrical performance or cracks in the gate stack. Finally, flexible low-power LEDs driven by the respective currents of the novel wavy, and conventional coplanar architectures are demonstrated, where the novel architecture is able to drive the LED at 2 × the output power, 3 versus 1.5 mW, which demonstrates the potential use for ultrahigh resolution displays in an area efficient manner.

  15. Wavy Architecture Thin-Film Transistor for Ultrahigh Resolution Flexible Displays

    KAUST Repository

    Hanna, Amir Nabil

    2017-11-13

    A novel wavy-shaped thin-film-transistor (TFT) architecture, capable of achieving 70% higher drive current per unit chip area when compared with planar conventional TFT architectures, is reported for flexible display application. The transistor, due to its atypical architecture, does not alter the turn-on voltage or the OFF current values, leading to higher performance without compromising static power consumption. The concept behind this architecture is expanding the transistor's width vertically through grooved trenches in a structural layer deposited on a flexible substrate. Operation of zinc oxide (ZnO)-based TFTs is shown down to a bending radius of 5 mm with no degradation in the electrical performance or cracks in the gate stack. Finally, flexible low-power LEDs driven by the respective currents of the novel wavy, and conventional coplanar architectures are demonstrated, where the novel architecture is able to drive the LED at 2 × the output power, 3 versus 1.5 mW, which demonstrates the potential use for ultrahigh resolution displays in an area efficient manner.

  16. Numerical linear algebra on emerging architectures: The PLASMA and MAGMA projects

    International Nuclear Information System (INIS)

    Agullo, Emmanuel; Demmel, Jim; Dongarra, Jack; Hadri, Bilel; Kurzak, Jakub; Langou, Julien; Ltaief, Hatem; Luszczek, Piotr; Tomov, Stanimire

    2009-01-01

    The emergence and continuing use of multi-core architectures and graphics processing units require changes in the existing software and sometimes even a redesign of the established algorithms in order to take advantage of the now prevailing parallelism. Parallel Linear Algebra for Scalable Multi-core Architectures (PLASMA) and Matrix Algebra on GPU and Multicore Architectures (MAGMA) are two projects that aim to achieve high performance and portability across a wide range of multi-core architectures and hybrid systems, respectively. We present in this document a comparative study of PLASMA's performance against established linear algebra packages and some preliminary results of MAGMA on hybrid multi-core and GPU systems.
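
    A minimal sketch of the tile-based formulation that underlies such libraries (Python/NumPy/SciPy): a blocked Cholesky factorization expressed as per-tile tasks (POTRF/TRSM/GEMM-like updates). This is an illustration of the algorithmic style, not PLASMA or MAGMA code:

        import numpy as np
        from scipy.linalg import solve_triangular

        def tiled_cholesky(A, nb):
            """Lower-triangular Cholesky of symmetric positive-definite A using nb x nb tiles (nb divides n)."""
            n = A.shape[0]
            L = np.tril(A.copy())
            for k in range(0, n, nb):
                ke = k + nb
                L[k:ke, k:ke] = np.linalg.cholesky(L[k:ke, k:ke])          # POTRF on the diagonal tile
                for i in range(ke, n, nb):
                    ie = i + nb
                    # TRSM: L[i,k] = A[i,k] * L[k,k]^{-T}
                    L[i:ie, k:ke] = solve_triangular(L[k:ke, k:ke], L[i:ie, k:ke].T, lower=True).T
                for i in range(ke, n, nb):
                    ie = i + nb
                    for j in range(ke, i + nb, nb):
                        je = j + nb
                        # SYRK/GEMM-like trailing update of tile (i, j)
                        L[i:ie, j:je] -= L[i:ie, k:ke] @ L[j:je, k:ke].T
            return np.tril(L)

        A = np.random.rand(128, 128)
        A = A @ A.T + 128 * np.eye(128)            # make it symmetric positive definite
        L = tiled_cholesky(A, nb=32)
        print(np.allclose(L @ L.T, A))             # True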

  17. 3D ARCHITECTURAL VIDEOMAPPING

    Directory of Open Access Journals (Sweden)

    R. Catanese

    2013-07-01

    Full Text Available 3D architectural mapping is a video projection technique that requires a survey of a chosen building in order to achieve a perfect correspondence between its shapes and the projected images. As a performative kind of audiovisual artifact, the real event of 3D mapping is the combination of a registered video animation file with a real piece of architecture. This new kind of visual art is becoming very popular, and its success with large audiences testifies to new expressive possibilities in the field of urban design. My case study was carried out in Pisa for the Luminara feast in 2012.

  18. Traditional Wooden Architecture and Landscape in Karelia. Methodological considerations for the analysis and census

    Directory of Open Access Journals (Sweden)

    Sandro Parinello

    2012-11-01

    Full Text Available The survey work on the Karelian landscape and traditional architecture, embedded within the European research project entitled "Wooden Architecture. Karelian Timber Traditional Architecture and Landscape", is intended to enable understanding of how Karelian culture and history have led, over time, to unique urban landscapes. The context of transformation, involving both the behavioural habits of local populations and their traditional architecture, which became mixed with Soviet administrative models and structures, was analysed in order not to compromise the conservation and enhancement of the historic, architectural and landscape heritage of this country.

  19. Exploring performance and energy tradeoffs for irregular applications: A case study on the Tilera many-core architecture

    Energy Technology Data Exchange (ETDEWEB)

    Panyala, Ajay; Chavarría-Miranda, Daniel; Manzano, Joseph B.; Tumeo, Antonino; Halappanavar, Mahantesh

    2017-06-01

    High performance, parallel applications with irregular data accesses are becoming a critical workload class for modern systems. In particular, the execution of such workloads on emerging many-core systems is expected to be a significant component of applications in data mining, machine learning, scientific computing and graph analytics. However, power and energy constraints limit the capabilities of individual cores, memory hierarchy and on-chip interconnect of such systems, thus leading to architectural and software trade-offs that must be understood in the context of the intended application's behavior. Irregular applications are notoriously hard to optimize given their data-dependent access patterns, lack of structured locality and complex data structures and code patterns. We have ported two irregular applications, graph community detection using the Louvain method (Grappolo) and high-performance conjugate gradient (HPCCG), to the Tilera many-core system and have conducted a detailed study of platform-independent and platform-specific optimizations that improve their performance as well as reduce their overall energy consumption. To conduct this study, we employ an auto-tuning based approach that explores the optimization design space along three dimensions - memory layout schemes, GCC compiler flag choices and OpenMP loop scheduling options. We leverage MIT's OpenTuner auto-tuning framework to explore and recommend energy optimal choices for different combinations of parameters. We then conduct an in-depth architectural characterization to understand the memory behavior of the selected workloads. Finally, we perform a correlation study to demonstrate the interplay between the hardware behavior and application characteristics. Using auto-tuning, we demonstrate whole-node energy savings and performance improvements of up to 49.6% and 60% relative to a baseline instantiation, and up to 31% and 45.4% relative to manually optimized variants.
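
    A minimal stand-in for the auto-tuning loop described above (Python): a random search over the three dimensions (memory layout, compiler flags, OpenMP schedule). The measure() stub and the candidate values are hypothetical; the actual study drives this search with OpenTuner and real energy measurements:

        import random

        SEARCH_SPACE = {
            "layout":   ["CSR", "CSC", "blocked"],                    # memory layout schemes
            "cflags":   ["-O2", "-O3", "-O3 -funroll-loops"],         # GCC flag choices
            "schedule": ["static", "dynamic,64", "guided"],           # OMP_SCHEDULE options
        }

        def measure(config):
            """Hypothetical stub: build with config['cflags'], run with the chosen layout and
            OMP_SCHEDULE, and return (runtime_seconds, energy_joules) from hardware counters."""
            rng = random.Random(str(sorted(config.items())))          # deterministic fake numbers
            return rng.uniform(1.0, 2.0), rng.uniform(50.0, 120.0)

        def random_search(trials=50):
            best = None
            for _ in range(trials):
                config = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
                runtime, energy = measure(config)
                score = energy                                        # optimize whole-run energy
                if best is None or score < best[0]:
                    best = (score, runtime, config)
            return best

        energy, runtime, config = random_search()
        print(f"best config {config}: {energy:.1f} J, {runtime:.2f} s")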

  20. FPGA-based architecture for motion recovering in real-time

    Science.gov (United States)

    Arias-Estrada, Miguel; Maya-Rueda, Selene E.; Torres-Huitzil, Cesar

    2002-03-01

    A key problem in the computer vision field is the measurement of object motion in a scene. The main goal is to compute an approximation of the 3D motion from the analysis of an image sequence. Once computed, this information can be used as a basis to reach higher level goals in different applications. Motion estimation algorithms pose a significant computational load for sequential processors, limiting their use in practical applications. In this work we propose a hardware architecture for real-time motion estimation based on FPGA technology. The technique used for motion estimation is optical flow, due to its accuracy and the density of its velocity estimates; however, other techniques are being explored. The architecture is composed of parallel modules working in a pipeline scheme to reach high throughput rates near gigaflops. The modules are organized in a regular structure to provide a high degree of flexibility to cover different applications. Some results are presented and the real-time performance is discussed and analyzed. The architecture is prototyped on an FPGA board with a Virtex device interfaced to a digital imager.
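
    A minimal software reference for the optical-flow computation such an architecture accelerates (Python/NumPy/SciPy): a dense Lucas-Kanade estimate over a local window. This is a generic textbook formulation for illustration, not the pipeline implemented on the FPGA:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def lucas_kanade(frame0, frame1, window=7, eps=1e-6):
            """Dense optical flow between two grayscale frames via the Lucas-Kanade least-squares solve."""
            Iy, Ix = np.gradient(frame0.astype(float))
            It = frame1.astype(float) - frame0.astype(float)
            # Windowed sums of the structure tensor and the gradient/temporal products.
            Sxx = uniform_filter(Ix * Ix, window)
            Syy = uniform_filter(Iy * Iy, window)
            Sxy = uniform_filter(Ix * Iy, window)
            Sxt = uniform_filter(Ix * It, window)
            Syt = uniform_filter(Iy * It, window)
            det = Sxx * Syy - Sxy * Sxy
            det = np.where(np.abs(det) < eps, eps, det)
            # Solve the 2x2 system [Sxx Sxy; Sxy Syy] [u v]^T = -[Sxt Syt]^T per pixel.
            u = (-Syy * Sxt + Sxy * Syt) / det
            v = ( Sxy * Sxt - Sxx * Syt) / det
            return u, v

        # Tiny synthetic example: a bright square shifted by one pixel to the right.
        f0 = np.zeros((64, 64)); f0[20:30, 20:30] = 1.0
        f1 = np.roll(f0, 1, axis=1)
        u, v = lucas_kanade(f0, f1)
        print(round(u[20, 20], 2), round(v[20, 20], 2))   # ~ (1, 0): horizontal shift recovered near the corner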

  1. Architecture and Programming Models for High Performance Intensive Computation

    Science.gov (United States)

    2016-06-29

    ... commands from the data processing center to the sensors is needed. It has been noted that the ubiquity of mobile communication devices offers the ... commands from a Processing Facility by way of mobile Relay Stations. The activity of each component of this model other than the Merge module can be ... evaluation of the initial system implementation. Gao also was in charge of the development of the Fresh Breeze architecture backend on new many-core computers.

  2. Hierarchical architecture of active knits

    International Nuclear Information System (INIS)

    Abel, Julianna; Luntz, Jonathan; Brei, Diann

    2013-01-01

    Nature eloquently utilizes hierarchical structures to form the world around us. Applying the hierarchical architecture paradigm to smart materials can provide a basis for a new genre of actuators which produce complex actuation motions. One promising example of cellular architecture—active knits—provides complex three-dimensional distributed actuation motions with expanded operational performance through a hierarchically organized structure. The hierarchical structure arranges a single fiber of active material, such as shape memory alloys (SMAs), into a cellular network of interlacing adjacent loops according to a knitting grid. This paper defines a four-level hierarchical classification of knit structures: the basic knit loop, knit patterns, grid patterns, and restructured grids. Each level of the hierarchy provides increased architectural complexity, resulting in expanded kinematic actuation motions of active knits. The range of kinematic actuation motions are displayed through experimental examples of different SMA active knits. The results from this paper illustrate and classify the ways in which each level of the hierarchical knit architecture leverages the performance of the base smart material to generate unique actuation motions, providing necessary insight to best exploit this new actuation paradigm. (paper)

  3. Data management system performance modeling

    Science.gov (United States)

    Kiser, Larry M.

    1993-01-01

    This paper discusses analytical techniques that have been used to gain a better understanding of the Space Station Freedom's (SSF's) Data Management System (DMS). The DMS is a complex, distributed, real-time computer system that has been redesigned numerous times. The implications of these redesigns have not been fully analyzed. This paper discusses the advantages and disadvantages of static analytical techniques such as Rate Monotonic Analysis (RMA) and also provides a rationale for dynamic modeling. Factors such as system architecture, processor utilization, bus architecture, queuing, etc. are well suited for analysis with a dynamic model. The significance of performance measures for a real-time system is discussed.
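
    A minimal sketch of the classic RMA schedulability check mentioned above (Python), using the Liu and Layland utilization bound; the task set is a hypothetical example, not SSF DMS data:

        def rma_schedulable(tasks):
            """tasks: list of (execution_time, period). Returns (utilization, bound, passes_test)."""
            n = len(tasks)
            utilization = sum(c / t for c, t in tasks)
            bound = n * (2 ** (1.0 / n) - 1)          # Liu & Layland sufficient bound: U <= n(2^(1/n) - 1)
            return utilization, bound, utilization <= bound

        # Hypothetical periodic task set: (worst-case execution time, period) in milliseconds.
        tasks = [(10, 50), (15, 100), (20, 200)]
        u, bound, ok = rma_schedulable(tasks)
        print(f"U = {u:.3f}, bound = {bound:.3f}, schedulable by the sufficient test: {ok}")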

  4. From enterprise architecture to business models and back

    NARCIS (Netherlands)

    Iacob, Maria Eugenia; Meertens, Lucas Onno; Jonkers, H.; Quartel, Dick; Nieuwenhuis, Lambertus Johannes Maria; van Sinderen, Marten J.

    In this study, we argue that important IT change processes affecting an organization’s enterprise architecture are also mirrored by a change in the organization’s business model. An analysis of the business model may establish whether the architecture change has value for the business. Therefore, in

  5. A data acquisition architecture for the SSC

    International Nuclear Information System (INIS)

    Partridge, R.

    1990-01-01

    An SSC data acquisition architecture applicable to high-pT detectors is described. The architecture is based upon a small set of design principles that were chosen to simplify communication between data acquisition elements while providing the required level of flexibility and performance. The architecture features an integrated system for data collection, event building, and communication with a large processing farm. The interface to the front end electronics system is also discussed. A set of design parameters is given for a data acquisition system that should meet the needs of high-pT detectors at the SSC.

  6. Architecture for fiber-optic sensors and actuators in aircraft propulsion systems

    Science.gov (United States)

    Glomb, W. L., Jr.

    1990-01-01

    This paper describes a design for fiber-optic sensing and control in advanced aircraft Electronic Engine Control (EEC). The recommended architecture is an on-engine EEC which contains electro-optic interface circuits for fiber-optic sensors. Size and weight are reduced by multiplexing arrays of functionally similar sensors on pairs of optical fibers to common electro-optical interfaces. The architecture contains interfaces to seven sensor groups. Nine distinct fiber-optic sensor types were found to provide the sensing functions. Analysis revealed no strong discriminator (except reliability of laser diodes and remote electronics) on which to base a selection of preferred common interface type. A hardware test program is recommended to assess the relative maturity of the technologies and to determine real performance in the engine environment.

  7. Architecture related books in Auersperg’s Library

    Directory of Open Access Journals (Sweden)

    Stanislav Južnič

    2008-01-01

    Full Text Available The manuscript catalogue of the Ljubljanian Prince Auersperg's Library was used to describe his architectural book acquisitions. Special attention was paid to Count Volf Engelbert's and his brother Prince Janez Vajkard Auersperg's architectural and technical taste. The architecture class of Auersperg's book catalogue (1668) contained several important works on applied physics, technique and military matters. The importance and value of particular items are discussed. The library was used as the basis for the analysis of the Auerspergs' early modern technical interests just after they returned to the Catholic faith. They used their knowledge of architectural design for the palaces in Ljubljana and Kočevje. The books of the former Auersperg Ljubljana library are today held at different libraries in the USA and Europe.

  8. Performance analysis of general purpose and digital signal processor kernels for heterogeneous systems-on-chip

    Directory of Open Access Journals (Sweden)

    T. von Sydow

    2003-01-01

    Full Text Available Various factors, such as technological progress, flexibility demands, shortened product cycle times and shortened time to market, have brought up the possibility and necessity of integrating different architecture blocks on one heterogeneous System-on-Chip (SoC). Architecture blocks like programmable processor cores (DSP and GPP kernels), embedded FPGAs as well as dedicated macros will be integral parts of such a SoC. Especially programmable architecture blocks and associated optimization techniques are discussed in this contribution. Design space exploration, and thus the choice of which architecture blocks should be integrated in a SoC, is a challenging task. Crucial to this exploration is the evaluation of the application domain characteristics and the costs caused by the individual architecture blocks integrated on a SoC. An ATE cost function has been applied to examine the performance of the aforementioned programmable architecture blocks. For this purpose, representative discrete devices have been analyzed. Furthermore, several architecture-dependent optimization steps and their effects on the cost ratios are presented.

  9. Heuristic Analysis In Architecture Of Aqa-Bozorg Mosque-School In Qajar Dynasty

    Directory of Open Access Journals (Sweden)

    Azarafrooz Hosseini

    2016-12-01

    Full Text Available Architecture during the Qajar dynasty witnessed significant developments. A change that became particularly prevalent in its mature form was the combination of two functions, mosque and school, in one building. An important issue in the mosque-school typology is the spatial layout, so that the two functions can maintain their independence without detracting from each other. The aim of this study is to understand how religious and educational functions are combined in one building. In this research the Aqa-Bozorg mosque-school is analyzed by a heuristic analysis method in order to recognize different factors such as space and the quality of human cognition. The results show that this place with a religious function is not limited to religious ceremonies or vast assemblies with social or political motivation; rather, it can be understood as a set of overt or hidden beliefs that exist in the profound layers of the thinking and culture of a society. Thus not only formal speech or sermon, but also customs, architectural features, artistic sights and even the arrangement of the main features of a religious building can convey implications to an audience that is consciously or unconsciously affected and builds its ideology on this basis.

  10. Open architecture design and approach for the Integrated Sensor Architecture (ISA)

    Science.gov (United States)

    Moulton, Christine L.; Krzywicki, Alan T.; Hepp, Jared J.; Harrell, John; Kogut, Michael

    2015-05-01

    Integrated Sensor Architecture (ISA) is designed in response to stovepiped integration approaches. The design, based on the principles of Service Oriented Architectures (SOA) and Open Architectures, addresses the problem of integration, and is not designed for specific sensors or systems. The use of SOA and Open Architecture approaches has led to a flexible, extensible architecture. Using these approaches, and supported with common data formats, open protocol specifications, and Department of Defense Architecture Framework (DoDAF) system architecture documents, an integration-focused architecture has been developed. ISA can help move the Department of Defense (DoD) from costly stovepipe solutions to a more cost-effective plug-and-play design to support interoperability.

  11. Mapping the Intangible: On Adaptivity and Relational Prototyping in Architectural Design

    DEFF Research Database (Denmark)

    Bolbroe, Cameline

    2016-01-01

    In recent years, new computing technologies in architecture have led to the possibility of designing architecture with non-static qualities, which affords the architectural designer a whole new opportunity space to explore. At the same time, this opportunity space challenges both ... To meet the challenges of designing with adaptivity in architecture, I propose a particular method specifically tailored for adaptive architectural design. The method, relational prototyping, is founded on the idea of inhabitation as an act. Relational prototyping adapts techniques from performance...

  12. Morphology Analysis and Optimization: Crucial Factor Determining the Performance of Perovskite Solar Cells

    Directory of Open Access Journals (Sweden)

    Wenjin Zeng

    2017-03-01

    Full Text Available This review presents an overall discussion of morphology analysis and optimization for perovskite (PVSK) solar cells. Surface morphology and energy alignment have been proven to play a dominant role in determining device performance. The effects of key parameters such as solution conditions and preparation atmosphere on the crystallization of PVSK, and the characterization of surface morphology and interface distribution in the perovskite layer, are discussed in detail. Furthermore, the analysis of interface energy level alignment by X-ray photoelectron spectroscopy and ultraviolet photoelectron spectroscopy is presented to reveal the correlation between morphology, charge generation and collection within the perovskite layer, and its influence on device performance. Techniques including architecture modification, solvent annealing, etc. are reviewed as efficient approaches to improve the morphology of PVSK. It is expected that further progress will be achieved with more efforts devoted to understanding the mechanism of surface engineering in the field of PVSK solar cells.

  13. Comparison of Three Smart Camera Architectures for Real-Time Machine Vision System

    Directory of Open Access Journals (Sweden)

    Abdul Waheed Malik

    2013-12-01

    Full Text Available This paper presents a machine vision system for real-time computation of the distance and angle of a camera from a set of reference points located on a target board. Three different smart camera architectures were explored to compare performance parameters such as power consumption, frame speed and latency. Architecture 1 consists of hardware machine vision modules modeled at Register Transfer (RT) level and a soft-core processor on a single FPGA chip. Architecture 2 is a commercially available software-based smart camera, the Matrox Iris GT. Architecture 3 is a two-chip solution composed of hardware machine vision modules on an FPGA and an external microcontroller. Results from the performance comparison show that Architecture 2 has higher latency and consumes much more power than Architectures 1 and 3. However, Architecture 2 benefits from an easy programming model. The smart camera system with an FPGA and external microcontroller has lower latency and consumes less power than the single-FPGA solution with hardware modules and a soft-core processor.

  14. Ensemble Network Architecture for Deep Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Xi-liang Chen

    2018-01-01

    Full Text Available The popular deep Q-learning algorithm is known to be unstable because of oscillations in the Q-value and overestimation of action values under certain conditions. These issues tend to adversely affect performance. In this paper, we develop an ensemble network architecture for deep reinforcement learning based on value function approximation. The temporal ensemble stabilizes the training process by reducing the variance of the target approximation error, and the ensemble of target values reduces overestimation and yields better performance by estimating more accurate Q-values. Our results show that this architecture leads to statistically significantly better value evaluation and more stable, better performance on several classical control tasks in the OpenAI Gym environment.
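    As a rough illustration of the target-averaging idea described in this abstract, the following minimal sketch (Python/NumPy, with toy stand-in "networks"; the function names and shapes are illustrative assumptions, not taken from the paper) computes a TD target by averaging the bootstrapped estimates of several target networks:

```python
import numpy as np

def ensemble_td_target(reward, next_state, target_nets, gamma=0.99, done=False):
    """TD target averaged over an ensemble of target Q-networks.

    target_nets: list of callables, each mapping a state to a vector of
    Q-values (one per action). Averaging the bootstrapped estimates reduces
    the variance of the target and damps the overestimation caused by the
    max operator of a single network.
    """
    if done:
        return reward
    # Each target network proposes max_a Q_k(s', a); average the proposals.
    bootstrap = np.mean([np.max(net(next_state)) for net in target_nets])
    return reward + gamma * bootstrap

# Toy usage with random linear "networks" standing in for trained approximators.
rng = np.random.default_rng(0)
fake_nets = [lambda s, w=rng.normal(size=(4, 2)): s @ w for _ in range(5)]
state = rng.normal(size=4)
print(ensemble_td_target(reward=1.0, next_state=state, target_nets=fake_nets))
```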

  15. Validating Avionics Conceptual Architectures with Executable Specifications

    Directory of Open Access Journals (Sweden)

    Nils Fischer

    2012-08-01

    Full Text Available Current avionics system specifications, developed after conceptual design, have a high degree of uncertainty. Since specifications are not sufficiently validated early in the development process and no executable specification exists at aircraft level, system designers cannot evaluate the impact of their design decisions at aircraft or aircraft-application level. At the end of the development process of complex systems, e.g. aircraft, an average of about 65 per cent of all specifications have to be changed because they are incorrect, incomplete or too vaguely described. In this paper, a model-based design methodology together with a virtual test environment is described that makes complex high-level system specifications executable and testable during the very early stages of system design. An aircraft communication system and its system context are developed to demonstrate the proposed early validation methodology. Executable specifications for early conceptual system architectures enable system designers to couple functions, architecture elements, resources and performance parameters, often called non-functional parameters. An integrated executable specification at the Early Conceptual Architecture Level is developed and used to determine the impact of different system architecture decisions on system behavior and overall performance.

  16. Dynamic logic architecture based on piecewise-linear systems

    International Nuclear Information System (INIS)

    Peng Haipeng; Liu Fei; Li Lixiang; Yang Yixian; Wang Xue

    2010-01-01

    This Letter explores piecewise-linear systems to construct a dynamic logic architecture. The proposed schemes can discriminate the two input signals and realize 16 kinds of logic operations through different combinations of parameters and output-decision conditions. Each logic cell operates more flexibly, which makes it possible to achieve complex logic operations more simply and to construct a computing architecture with fewer logic cells. We also analyze the performance of our schemes under different conditions and the characteristics of these schemes.
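    The abstract does not give the authors' equations; purely as a toy illustration of how a single parameterized cell can realize all 16 two-input operations, the sketch below uses a piecewise-linear encoding of the inputs followed by a piecewise-constant read-out. This is a simplification of the general idea, not the proposed piecewise-linear circuit:

```python
# Toy "logic cell": the weighted sum s = 2*x + y maps the four input pairs to
# the distinct values 0..3, so a piecewise-constant read-out over s can select
# any of the 16 two-input boolean operations just by changing parameters.

def logic_cell(x, y, truth_table):
    """x, y in {0, 1}; truth_table holds the outputs for s = 0, 1, 2, 3."""
    s = 2 * x + y                 # piecewise-linear encoding of the inputs
    return truth_table[s]         # parameter-dependent read-out

XOR  = (0, 1, 1, 0)   # outputs for (x, y) = (0,0), (0,1), (1,0), (1,1)
NAND = (1, 1, 1, 0)

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "XOR:", logic_cell(x, y, XOR), "NAND:", logic_cell(x, y, NAND))
```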

  17. Scope-Based Method Cache Analysis

    DEFF Research Database (Denmark)

    Huber, Benedikt; Hepp, Stefan; Schoeberl, Martin

    2014-01-01

    The quest for time-predictable systems has led to the exploration of new hardware architectures that simplify analysis and reasoning in the temporal domain, while still providing competitive performance. For the instruction memory, the method cache is a conceptually attractive solution, as it req...
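    To convey why caching whole methods is attractive for analysis, here is a deliberately simplified, hypothetical method-cache simulation (FIFO replacement, sizes in blocks). It is not the cache organisation or the scope-based analysis from the paper, only an illustration of the hit/miss bookkeeping such an analysis reasons about:

```python
from collections import OrderedDict

class ToyMethodCache:
    """Very simplified method cache: whole functions are loaded on call or
    return and evicted FIFO when the block budget is exceeded. Real designs
    differ in detail; this only illustrates why whole-method caching gives
    analysable hit/miss behaviour per call site."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.resident = OrderedDict()          # method name -> size in blocks
        self.hits = self.misses = 0

    def enter(self, method, size_blocks):
        if method in self.resident:
            self.hits += 1
            return
        self.misses += 1
        while self.resident and sum(self.resident.values()) + size_blocks > self.capacity:
            self.resident.popitem(last=False)  # FIFO eviction of the oldest method
        self.resident[method] = size_blocks

cache = ToyMethodCache(capacity_blocks=8)
for call in ["main", "f", "g", "f", "h", "f", "main"]:
    cache.enter(call, size_blocks=3)
print("hits:", cache.hits, "misses:", cache.misses)
```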

  18. On the Architectural Engineering Competences in Architectural Design

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    2007-01-01

    In 1997 a new education in Architecture & Design at the Department of Architecture and Design, Aalborg University, was started with 50 students. In recent years this number has increased to approximately 100 new students each year, i.e. approximately 500 students are following the 3-year bachelor (BSc) and the 2-year master (MSc) programme. The first 5 semesters are common for all students, followed by 5 semesters of specialization in Architectural Design, Urban Design, Industrial Design or Digital Design. The present paper gives a short summary of the architectural engineering...

  19. Introduction on performance analysis and profiling methodologies for KVM on ARM virtualization

    Science.gov (United States)

    Motakis, Antonios; Spyridakis, Alexander; Raho, Daniel

    2013-05-01

    The introduction of hardware virtualization extensions on ARM Cortex-A15 processors has enabled the implementation of full virtualization solutions for this architecture, such as KVM on ARM. This trend motivates the need to quantify and understand the performance impact that arises from applying this technology. In this work we start looking into some interesting performance metrics of KVM on ARM processors, which can provide useful insight and may lead to potential improvements in the future. This includes measurements such as interrupt latency and guest exit cost, performed on ARM Versatile Express and Samsung Exynos 5250 hardware platforms. Furthermore, we discuss additional methodologies that can give us a deeper understanding of the performance footprint of KVM. We identify some of the most interesting approaches in this field and perform a tentative analysis of how these may be implemented in the KVM on ARM port. These take into consideration hardware- and software-based counters for profiling, and issues related to the limitations of the simulators that are often used, such as the ARM Fast Models platform.

  20. An architecture pattern for safety critical automated driving applications: Design and analysis

    NARCIS (Netherlands)

    Luo, Y.; Saberi, A.K.; Bijlsma, T.; Lukkien, J.J.; Brand, M. van den

    2017-01-01

    The introduction of automated driving increases the complexity of automotive systems. As a result, architecture design becomes a major concern for ensuring non-functional requirements such as safety and modifiability. In the ISO 26262 standard, architecture patterns are recommended for system development.

  1. An architecture pattern for safety critical automated driving applications : design and analysis

    NARCIS (Netherlands)

    Luo, Y.; Khabbaz Saberi, A.; Bijlsma, T.; Lukkien, J.J.; van den Brand, M.G.J.

    2017-01-01

    The introduction of automated driving increases the complexity of automotive systems. As a result, architecture design becomes a major concern for ensuring non-functional requirements such as safety and modifiability. In the ISO 26262 standard, architecture patterns are recommended for system development.

  2. Multi-Softcore Architecture on FPGA

    Directory of Open Access Journals (Sweden)

    Mouna Baklouti

    2014-01-01

    Full Text Available To meet the high performance demands of embedded multimedia applications, embedded systems are integrating multiple processing units. However, they are mostly based on custom-logic design methodologies. Designing parallel multicore systems from available standard intellectual properties while maintaining high performance is also a challenging issue. Softcore processors and field programmable gate arrays (FPGAs) are a cheap and fast option for developing and testing such systems. This paper describes an FPGA-based design methodology to implement a rapid prototype of parametric multicore systems. A study of the viability of building the SoC using the NIOS II soft-processor core from Altera is also presented. The NIOS II features a general-purpose RISC CPU architecture designed to address a wide range of applications. The performance of the implemented architecture is discussed, and some parallel applications are used to test the speedup and efficiency of the system. Experimental results demonstrate the performance of the proposed multicore system, which achieves a better speedup than the GPU (29.5% faster for the FIR filter and 23.6% faster for the matrix-matrix multiplication).
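    For readers unfamiliar with the metrics mentioned above, here is a short sketch of how speedup and efficiency are typically computed from measured runtimes; the FIR kernel and the timings are illustrative placeholders, not the paper's measurements:

```python
import numpy as np

def fir(signal, taps):
    """Reference FIR filter, the kind of kernel benchmarked on the multicore."""
    return np.convolve(signal, taps, mode="valid")

def speedup(t_single, t_parallel):
    return t_single / t_parallel

def efficiency(t_single, t_parallel, n_cores):
    return speedup(t_single, t_parallel) / n_cores

# Illustrative timings (seconds); not the measurements reported in the paper.
filtered = fir(np.ones(32), np.array([0.25, 0.5, 0.25]))
t_one_core, t_four_cores = 2.40, 0.75
print(f"speedup: {speedup(t_one_core, t_four_cores):.2f}x, "
      f"efficiency: {efficiency(t_one_core, t_four_cores, 4):.0%}")
```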

  3. Architecture Level Safety Analyses for Safety-Critical Systems

    Directory of Open Access Journals (Sweden)

    K. S. Kushal

    2017-01-01

    Full Text Available The dependency of complex embedded Safety-Critical Systems in the avionics and aerospace domains on their underlying software and hardware components has gradually increased over time. Such systems are developed on top of complex integrated architectures that are modular in nature. Engineering practices aligned with system safety standards are necessary to manage failures, faults, and unsafe operational conditions. System safety analyses involve the analysis of the system's complex software architecture, a major factor in fatal consequences in the behaviour of Safety-Critical Systems, and must provide high reliability and dependability during development. In this paper, we propose an architecture fault modeling and safety analysis approach that aids in identifying and eliminating design flaws. The formal foundations of the SAE Architecture Analysis & Design Language (AADL), augmented with the Error Model Annex (EMV), are discussed. The fault propagation, failure behaviour, and composite behaviour of the design flaws/failures are considered for architecture safety analysis. The proposed approach is validated by implementing the Speed Control Unit of the Power-Boat Autopilot (PBA) system. The Error Model Annex (EMV) is guided by the pattern of consideration and inclusion of probable failure scenarios and the propagation of fault conditions in the Speed Control Unit of the Power-Boat Autopilot (PBA). This helps validate the system architecture through the detection of error events in the model and their impact on the operational environment. It also provides insight into the certification impact that these exceptional conditions pose at various criticality levels and design assurance levels, and their implications for verifying and validating the designs.
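    The propagation analysis itself is expressed in AADL with the Error Model Annex in the paper; purely as a loose, language-agnostic illustration of the underlying idea, the following sketch propagates an error event through a toy component-connection graph (the component names are invented placeholders, not the PBA model):

```python
from collections import deque

# Hypothetical component connection graph: an error raised in a source
# component propagates along its outgoing connections unless a component
# is marked as detecting and containing the fault.
connections = {
    "speed_sensor": ["speed_controller"],
    "speed_controller": ["throttle_actuator", "display"],
    "throttle_actuator": [],
    "display": [],
}
masks_fault = {"display"}   # e.g. a component that detects and contains the error

def affected_components(error_source):
    """Return every component reachable by the propagating error event."""
    reached, frontier = set(), deque([error_source])
    while frontier:
        comp = frontier.popleft()
        if comp in reached:
            continue
        reached.add(comp)
        if comp in masks_fault and comp != error_source:
            continue                  # fault contained here, no further spread
        frontier.extend(connections.get(comp, []))
    return reached

print(affected_components("speed_sensor"))
```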

  4. Optimizing Engineering Tools Using Modern Ground Architectures

    Science.gov (United States)

    2017-12-01

    Optimizing Engineering Tools Using Modern Ground Architectures. Master's thesis by Ryan P. McArdle, December 2017 (Thesis Advisor: Marc Peters; Co-Advisor: I.M. Ross). ... engineering tools. First, the effectiveness of MathWorks' Parallel Computing Toolkit is assessed when performing somewhat basic computations in...

  5. Knit as bespoke material practice for architecture

    DEFF Research Database (Denmark)

    Ramsgaard Thomsen, Mette; Tamke, Martin; Karmon, Ayelet

    2016-01-01

    This paper presents an inquiry into how to inform material systems that allow for a high degree of variation and gradation of their material composition. Presenting knit as a particular system of material fabrication, we discuss how new practices that integrate material design into the architectural design chain present new opportunities and challenges for how we understand and create cycles of design, analysis, specification and fabrication. By tracing current interdisciplinary efforts to establish simulation methods for knitted textiles, our aim is to question how these efforts can be understood and extended in the context of knitted architectural textiles. The paper draws on a number of projects that prototype methods for using simulation and sensing as grounds for informing the design of complex, heterogeneous and performative materials. It asks how these methods can allow feedback...

  6. Comparison of different artificial neural network architectures in modeling of Chlorella sp. flocculation.

    Science.gov (United States)

    Zenooz, Alireza Moosavi; Ashtiani, Farzin Zokaee; Ranjbar, Reza; Nikbakht, Fatemeh; Bolouri, Oberon

    2017-07-03

    Biodiesel production from microalgae feedstock should be performed after growth and harvesting of the cells, and the most feasible method for harvesting and dewatering of microalgae is flocculation. Flocculation modeling can be used to evaluate and predict its performance under different influencing parameters. However, modeling flocculation in microalgae is not simple and has not yet been performed under all experimental conditions, mostly because microalgae cells behave differently under different flocculation conditions. In the current study, the modeling of microalgae flocculation is investigated with different neural network architectures. The microalgae species Chlorella sp. was flocculated with ferric chloride under different conditions, and the experimental data were then modeled using artificial neural networks. Multilayer perceptron (MLP) and radial basis function architectures failed to predict the targets successfully, whereas modeling was effective with an ensemble architecture of MLP networks. A comparison between the performance of the ensemble and the individual networks demonstrates the ability of the ensemble architecture to model microalgae flocculation.
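    As a hedged sketch of the ensemble idea reported here, the following example averages the predictions of several independently initialized MLP regressors, using scikit-learn and synthetic data as a stand-in for the flocculation measurements; the feature semantics and network sizes are assumptions, not the study's configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Synthetic stand-in for the flocculation data: the three inputs might
# represent e.g. flocculant dose, pH and mixing time; the target stands in
# for flocculation efficiency. Illustrative only.
X = rng.uniform(0, 1, size=(120, 3))
y = 0.6 * X[:, 0] + 0.3 * np.sin(3 * X[:, 1]) + 0.1 * X[:, 2] + rng.normal(0, 0.02, 120)

# Train several differently initialised MLPs and average their predictions.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=seed).fit(X, y)
    for seed in range(5)
]

def ensemble_predict(X_new):
    """Average the individual MLP predictions to form the ensemble output."""
    return np.mean([model.predict(X_new) for model in ensemble], axis=0)

print("ensemble prediction:", ensemble_predict(X[:3]))
print("measured targets:   ", y[:3])
```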

  7. Service Oriented Architecture for High Level Applications

    International Nuclear Information System (INIS)

    Chu, P.

    2012-01-01

    Standalone high-level applications often suffer from poor performance and reliability due to lengthy initialization, heavy computation and rapid graphical updates. Service-oriented architecture (SOA) separates the initialization and computation from the applications and distributes that work to various service providers. Heavy computation such as beam tracking will be done periodically on a dedicated server, and the data will be available to client applications at all times. An industry-standard service architecture can help to improve the performance, reliability and maintainability of the service. Robustness is also improved by reducing the complexity of individual client applications.
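    A minimal sketch of this pattern follows, assuming a hypothetical service that periodically refreshes a heavy computation in the background and serves the latest cached result over HTTP; the endpoint, refresh period and placeholder "heavy computation" are assumptions, not the actual beam-tracking service:

```python
import json, threading, time
from http.server import BaseHTTPRequestHandler, HTTPServer

latest = {"value": None, "computed_at": None}

def heavy_computation():
    time.sleep(2)                     # placeholder for e.g. periodic beam tracking
    return sum(i * i for i in range(10**6))

def refresh_loop(period_s=30):
    """Background worker: recompute periodically so clients never wait."""
    while True:
        result = heavy_computation()
        latest.update(value=result, computed_at=time.time())
        time.sleep(period_s)

class CachedResultHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Clients always get the most recent cached result immediately.
        body = json.dumps(latest).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    threading.Thread(target=refresh_loop, daemon=True).start()
    HTTPServer(("localhost", 8000), CachedResultHandler).serve_forever()
```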

  8. A modular microfluidic architecture for integrated biochemical analysis.

    Science.gov (United States)

    Shaikh, Kashan A; Ryu, Kee Suk; Goluch, Edgar D; Nam, Jwa-Min; Liu, Juewen; Thaxton, C Shad; Chiesl, Thomas N; Barron, Annelise E; Lu, Yi; Mirkin, Chad A; Liu, Chang

    2005-07-12

    Microfluidic laboratory-on-a-chip (LOC) systems based on a modular architecture are presented. The architecture is conceptualized on two levels: a single-chip level and a multiple-chip module (MCM) system level. At the individual chip level, a multilayer approach segregates components belonging to two fundamental categories: passive fluidic components (channels and reaction chambers) and active electromechanical control structures (sensors and actuators). This distinction is explicitly made to simplify the development process and minimize cost. Components belonging to these two categories are built separately on different physical layers and can communicate fluidically via cross-layer interconnects. The chip that hosts the electromechanical control structures is called the microfluidic breadboard (FBB). A single LOC module is constructed by attaching a chip comprised of a custom arrangement of fluid routing channels and reactors (passive chip) to the FBB. Many different LOC functions can be achieved by using different passive chips on an FBB with a standard resource configuration. Multiple modules can be interconnected to form a larger LOC system (MCM level). We demonstrated the utility of this architecture by developing systems for two separate biochemical applications: one for detection of protein markers of cancer and another for detection of metal ions. In the first case, free prostate-specific antigen was detected at 500 aM concentration by using a nanoparticle-based bio-bar-code protocol on a parallel MCM system. In the second case, we used a DNAzyme-based biosensor to identify the presence of Pb(2+) (lead) at a sensitivity of 500 nM in <1 nl of solution.

  9. Democratic management and architecture school

    Directory of Open Access Journals (Sweden)

    Silvana Aparecida de Souza

    2011-10-01

    Full Text Available This is a conceptual and theoretical study of school organization and its democratization, focusing on one aspect of an objective nature: its architecture. The study is based on the academic literature on democratization and on the theoretical contribution of Michel Foucault regarding the analysis of space as a resource for control, surveillance and training, and includes a historical review of the models used for school buildings in Brazil. It is therefore a sociological analysis of the school environment in relation to the democratization of basic education, understood as guaranteeing the conditions of access to, and permanence in, a universal, quality education conceived and developed from the collective interests of its users. We conclude that the architecture of public schools in Brazil does not support democratic management, whether because of the controlling format of the buildings constructed in the republican period or because of the current economic priorities for the construction of public school buildings, which include little or no space for collective activities. The character of the buildings remains one of control, no longer through their architecture alone but through technological developments that allow monitoring by video cameras, carried out with the permission and support of the community.

  10. Successful Architectural Knowledge Sharing: Beware of Emotions

    Science.gov (United States)

    Poort, Eltjo R.; Pramono, Agung; Perdeck, Michiel; Clerc, Viktor; van Vliet, Hans

    This chapter presents the analysis and key findings of a survey on architectural knowledge sharing. The responses of 97 architects working in the Dutch IT Industry were analyzed by correlating practices and challenges with project size and success. Impact mechanisms between project size, project success, and architectural knowledge sharing practices and challenges were deduced based on reasoning, experience and literature. We find that architects run into numerous and diverse challenges sharing architectural knowledge, but that the only challenges that have a significant impact are the emotional challenges related to interpersonal relationships. Thus, architects should be careful when dealing with emotions in knowledge sharing.

  11. Preindustrial versus postindustrial Architecture and Building Techniques

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2014-01-01

    How can preindustrial architecture inspire sustainable thinking in postindustrial architectural design? How can we learn from experience, and how can social, economic and environmental conditions give perspectives and guide a knowledge-based evolution of basic experience towards modern industrialized building processes? Sustainable parameters related to changes in society, building technique and comfort are identified and illustrated through two Danish building types, which are different in time but similar in function: one an evolution- and experience-based countryside fisherman's house built around the year 1700, and the other a frontrunner suburban family house built in 2008. The analysis involves architectural, technical and comfort matters and states the levels of design, social conditions, and sustainable and energy-efficient parameters. Results will show lessons learned...

  12. TEACHING CAD PROGRAMMING TO ARCHITECTURE STUDENTS

    Directory of Open Access Journals (Sweden)

    Maria Gabriela Caffarena CELANI

    2008-11-01

    Full Text Available The objective of this paper is to discuss the relevance of including the discipline of computer programming in the architectural curriculum. To do so, I start by explaining how computer programming has been applied in other educational contexts with pedagogical success, describing Seymour Papert's principles. After that, I summarize the historical development of CAD and provide three historical examples of educational applications of computer programming in architecture, followed by a contemporary case that I find of particular relevance. Next, I propose a methodology for teaching programming to architects that aims at improving the quality of designs by making their concepts more explicit. This methodology is based on my own experience teaching computer programming to architecture students at undergraduate and graduate levels at the State University of Campinas, Brazil. The paper ends with a discussion of the role of programming nowadays, when most CAD software is user-friendly and does not require any knowledge of programming to improve performance. I conclude that the introduction of programming into the CAD curriculum within a proper conceptual framework may transform the concept of architectural education. Key-words: Computer programming; computer-aided design; architectural education.

  13. Optimized Architectural Approaches in Hardware and Software Enabling Very High Performance Shared Storage Systems

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    There are issues encountered in high performance storage systems that normally lead to compromises in architecture. Compute clusters tend to have compute phases followed by an I/O phase that must move data from the entire cluster in one operation. That data may then be shared by a large number of clients creating unpredictable read and write patterns. In some cases the aggregate performance of a server cluster must exceed 100 GB/s to minimize the time required for the I/O cycle thus maximizing compute availability. Accessing the same content from multiple points in a shared file system leads to the classical problems of data "hot spots" on the disk drive side and access collisions on the data connectivity side. The traditional method for increasing apparent bandwidth usually includes data replication which is costly in both storage and management. Scaling a model that includes replicated data presents additional management challenges as capacity and bandwidth expand asymmetrically while the system is scaled. ...

  14. Genesis and Evolution of Interfaces in Product Architecture

    DEFF Research Database (Denmark)

    Donmez, Mehmet; Hsuan, Juliana

    Interfaces are elements of the product architecture that facilitate innovation and enable an organization to leverage the trade-off between cost and performance of its products. Despite the importance of interfaces for organizations, little is known about their genesis and evolution. In this st...

  15. T-CREST: Time-predictable multi-core architecture for embedded systems

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Abbaspourseyedi, Sahar; Jordan, Alexander

    2015-01-01

    ...multi-core architectures that are optimized for the WCET instead of the average-case execution time. The resulting time-predictable resources (processors, interconnect, memory arbiter, and memory controller) and tools (compiler, WCET analysis) are designed to ease WCET analysis and to optimize WCET performance. Compared... domain shows that the WCET can be reduced for computation-intensive tasks when the tasks are distributed over several cores and the network-on-chip is used for communication. With three cores the WCET is improved by a factor of 1.8, and with 15 cores by a factor of 5.7. The T-CREST project is the result...

  16. How organisation of architecture documentation affects architectural knowledge retrieval

    NARCIS (Netherlands)

    de Graaf, K.A.; Liang, P.; Tang, A.; Vliet, J.C.

    A common approach to software architecture documentation in industry projects is the use of file-based documents. This approach offers a single-dimensional arrangement of the architectural knowledge. Knowledge retrieval from file-based architecture documentation is efficient if the organisation of

  17. Cinematic collage as architectural design research

    OpenAIRE

    Carless, T.; Troiani, I.

    2018-01-01

    This chapter argues that cinematic representation can, and must, be understood as a method of developing a form of critical architectural enquiry and thinking in the same manner as text - a textual analysis and a communication means for practice-based research. The proposition is that cinematic architectural drawing and the discourse of occupied space are inseparable and that the limits of both are products of specific ideological and cultural practices. In this chapter, two different bodies ...

  18. Experimental model for architectural systematization and its basic thermal performance. Part 1. Research on architectural systematization of energy conversion devices; Kenchiku system ka model no gaiyo to kihon seino ni tsuite. 1. Energy henkan no kenchiku system ka ni kansuru kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Sunaga, N; Ito, N; Kimura, G; Fukao, S; Shimizu, T; Tsunoda, M; Muro, K [Tokyo Metropolitan University, Tokyo (Japan)

    1996-10-27

    The outline of a model for the architectural systematization of natural energy conversion and experimental results on its basic thermal performance in winter are described. The model has a floor area of about 20 m². Foam polystyrene 100 mm and 200 mm thick was used as heat insulation for the outer walls. The model has a solar battery and an air conditioner and uses red brick as a heat reservoir. Experiments were carried out on seven modes obtained by combining three elements (heating, heat storage, and a night insulation door). The results showed that the model for architectural systematization has high heat insulation and airtightness and can be used as an energy element or as an evaluation model for architectural systematization. In this model, the power consumption of the air conditioner in winter can be fully supplied by the power generated by the solar battery alone. As an architectural element, the combination of a heat reservoir and a night insulation door can remarkably reduce heating energy consumption and greatly improve the indoor thermal environment. 1 ref., 6 figs., 3 tabs.

  19. Design and construction principles in nature and architecture

    International Nuclear Information System (INIS)

    Knippers, Jan; Speck, Thomas

    2012-01-01

    This paper will focus on how the emerging scientific discipline of biomimetics can bring new insights into the field of architecture. An analysis of both architectural and biological methodologies will show important aspects connecting these two. The foundation of this paper is a case study of convertible structures based on elastic plant movements.

  20. Design and construction principles in nature and architecture.

    Science.gov (United States)

    Knippers, Jan; Speck, Thomas

    2012-03-01

    This paper will focus on how the emerging scientific discipline of biomimetics can bring new insights into the field of architecture. An analysis of both architectural and biological methodologies will show important aspects connecting these two. The foundation of this paper is a case study of convertible structures based on elastic plant movements.