WorldWideScience

Sample records for modeling technique architecture

  1. Biliary System Architecture: Experimental Models and Visualization Techniques

    Czech Academy of Sciences Publication Activity Database

    Sarnová, Lenka; Gregor, Martin

    2017-01-01

    Vol. 66, No. 3 (2017), pp. 383-390. ISSN 0862-8408. R&D Projects: GA MŠk(CZ) LQ1604; GA ČR GA15-23858S. Institutional support: RVO:68378050. Keywords: Biliary system * Mouse model * Cholestasis * Visualisation * Morphology. Subject RIV: EB - Genetics; Molecular Biology. OECD field: Cell biology. Impact factor: 1.461, year: 2016

  2. IMAGE-BASED MODELING TECHNIQUES FOR ARCHITECTURAL HERITAGE 3D DIGITALIZATION: LIMITS AND POTENTIALITIES

    Directory of Open Access Journals (Sweden)

    C. Santagati

    2013-07-01

    3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use collections of photographs to rapidly build detailed 3D models. The simultaneous application of different algorithms (multi-view stereo, the different techniques of image matching, feature extraction and mesh optimization) is an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among the available packages we can broadly distinguish desktop from web-based ones. The latter exploit the power of cloud computing to carry out semi-automatic data processing, allowing users to perform other tasks on their computers in the meantime, whereas desktop systems require long processing times and heavyweight workflows. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches that verify metric accuracy are few, and none addresses Autodesk 123D Catch applied to architectural heritage documentation. Our approach to this challenging problem is to compare 3D models produced by Autodesk 123D Catch with 3D models from terrestrial LIDAR across different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioners' purposes.

  3. Image-Based Modeling Techniques for Architectural Heritage 3d Digitalization: Limits and Potentialities

    Science.gov (United States)

    Santagati, C.; Inzerillo, L.; Di Paola, F.

    2013-07-01

    3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use collections of photographs to rapidly build detailed 3D models. The simultaneous application of different algorithms (multi-view stereo, the different techniques of image matching, feature extraction and mesh optimization) is an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among the available packages we can broadly distinguish desktop from web-based ones. The latter exploit the power of cloud computing to carry out semi-automatic data processing, allowing users to perform other tasks on their computers in the meantime, whereas desktop systems require long processing times and heavyweight workflows. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches that verify metric accuracy are few, and none addresses Autodesk 123D Catch applied to architectural heritage documentation. Our approach to this challenging problem is to compare 3D models produced by Autodesk 123D Catch with 3D models from terrestrial LIDAR across different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioners' purposes.
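
The metric-accuracy comparison this record describes ultimately reduces to measuring deviations between a photogrammetric point set and a LIDAR reference. The sketch below is a minimal, generic illustration of a brute-force cloud-to-cloud deviation measure; the data are hypothetical, and a real comparison would first need registration and scaling, which are omitted here.

```python
import math

def cloud_to_cloud(source, reference):
    # For each source point, find the distance to its nearest reference
    # point, then report the mean deviation. Assumes the two clouds are
    # already registered and scaled (real workflows align them first).
    devs = [min(math.dist(p, q) for q in reference) for p in source]
    return sum(devs) / len(devs)

# Hypothetical clouds: image-based model vs. LIDAR reference
photo = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.1)]
lidar = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(cloud_to_cloud(photo, lidar))  # mean of deviations 0.0 and 0.1
```

Production tools replace the brute-force nearest-neighbor search with spatial indexing (e.g. k-d trees), but the measured quantity is the same.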

  4. Modeling Architectural Patterns Using Architectural Primitives

    NARCIS (Netherlands)

    Zdun, Uwe; Avgeriou, Paris

    2005-01-01

    Architectural patterns are a key point in architectural documentation. Regrettably, there is poor support for modeling architectural patterns, because the pattern elements are not directly matched by elements in modeling languages, and, at the same time, patterns support an inherent variability that

  5. Models in architectural design

    OpenAIRE

    Pauwels, Pieter

    2017-01-01

    Whereas architects and construction specialists used to rely mainly on sketches and physical models as representations of their own cognitive design models, they now rely more and more on computer models. Parametric models, generative models, as-built models, building information models (BIM), and so forth are used daily by practitioners in architectural design and construction. Although processes of abstraction and the actual architectural model-based reasoning itself of course rema...

  6. Modeling Architectural Patterns’ Behavior Using Architectural Primitives

    NARCIS (Netherlands)

    Waqas Kamal, Ahmad; Avgeriou, Paris

    2008-01-01

    Architectural patterns have an impact on both the structure and the behavior of a system at the architecture design level. However, it is challenging to model patterns’ behavior in a systematic way because modeling languages do not provide the appropriate abstractions and because each pattern

  7. Consistent model driven architecture

    Science.gov (United States)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language, so verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. Verifying consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.

  8. A Systematic Review of Software Architecture Visualization Techniques

    NARCIS (Netherlands)

    Shahin, M.; Liang, P.; Ali Babar, M.

    2014-01-01

    Context Given the increased interest in using visualization techniques (VTs) to help communicate and understand software architecture (SA) of large scale complex systems, several VTs and tools have been reported to represent architectural elements (such as architecture design, architectural

  9. A goal-oriented requirements modelling language for enterprise architecture

    NARCIS (Netherlands)

    Quartel, Dick; Engelsman, W.; Jonkers, Henk; van Sinderen, Marten J.

    2009-01-01

    Methods for enterprise architecture, such as TOGAF, acknowledge the importance of requirements engineering in the development of enterprise architectures. Modelling support is needed to specify, document, communicate and reason about goals and requirements. Current modelling techniques for

  10. Preindustrial versus postindustrial Architecture and Building Techniques

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2014-01-01

    The paper will identify the sustainable parameters related to the change in society, building technique and comfort demands, illustrated through two Danish building types which are very different in time but similar in function: one an evolution- and experience-based countryside farmhouse built around 1700, the other a frontrunner suburban family house built in 2010. The aim is to show how preindustrial architecture can inspire sustainable thinking in postindustrial architectural design, how we can learn from the experience, and how present-day social and economic conditions … point out how living conditions, landscape and topology, climate and the possibility to use local materials for construction, and actual building technology influence the design, the economy, the comfort and the energy use. The analysis involves architectural, technical and comfort matters...

  11. Generative Algorithmic Techniques for Architectural Design

    DEFF Research Database (Denmark)

    Larsen, Niels Martin

    2012-01-01

    Architectural design methodology is expanded through the ability to create bespoke computational methods as integrated parts of the design process. The rapid proliferation of digital production techniques within the building industry provides new means for establishing seamless flows between digital form-generation and the realisation process. A tendency in recent practice shows an increased focus on developing unique tectonic solutions as a crucial ingredient in the design solution. These converging trajectories form the contextual basis for this thesis. In architectural design, digital tools … The principles are further developed to form new modes of articulation in architectural design. Certain methods are contributions which suggest a potential for future use and development. Thus, one method is directed towards bottom-up generation of surface topology through the use of an agent-based logic. Another...

  12. Preindustrial versus postindustrial Architecture and Building Techniques

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2014-01-01

    How can preindustrial architecture inspire sustainable thinking in postindustrial architectural design? How can we learn from experience, and how can social, economic and environmental conditions give perspectives and guide a knowledge-based evolution of basic experience towards modern industrialized building processes? Identification of sustainable parameters related to change in society, to building technique and to comfort is illustrated through two Danish building types which are different in time but similar in function: one an evolution- and experience-based countryside fisherman's house built around the year 1700; the second a frontrunner suburban family house built in 2008. The analysis involves architectural, technical and comfort matters and will state the levels of design, social conditions, and sustainable and energy-efficient parameters. Results will show lessons learned...

  13. Audit Techniques for Service Oriented Architecture Applications

    Directory of Open Access Journals (Sweden)

    Liviu Adrian COTFAS

    2010-01-01

    The Service Oriented Architecture (SOA) approach enables the development of flexible distributed applications. Auditing such applications implies several specific challenges related to interoperability, performance and security. The service oriented architecture model is described and the advantages of this approach are analyzed. We also highlight several quality attributes and potential risks in SOA applications that an architect should be aware of when designing a distributed system. Key risk factors are identified and a model for risk evaluation is introduced. The top reasons for auditing SOA applications are presented, as well as the most important standards. The steps for a successful audit process are given and discussed.

  14. An architecture for integration of multidisciplinary models

    DEFF Research Database (Denmark)

    Belete, Getachew F.; Voinov, Alexey; Holst, Niels

    2014-01-01

    Integrating multidisciplinary models requires linking models that may operate at different temporal and spatial scales; that were developed using different methodologies, tools and techniques; that have different levels of complexity; that were calibrated for different ranges of inputs and outputs; etc. On the other hand, … Enterprise Application Integration, and Integration Design Patterns. We developed an architecture for a multidisciplinary model integration framework that brings these three aspects of integration together: a service-oriented, platform-independent architecture that enables establishing loosely coupled...

  15. Digital Architecture Planning Model

    International Nuclear Information System (INIS)

    Oxstrand, Johanna Helene; Al Rashdan, Ahmad Yahya Mohammad; Bly, Aaron Douglas; Rice, Brandon Charles; Fitzgerald, Kirk; Wilson, Keith Leon

    2016-01-01

    As part of the U.S. Department of Energy's Light Water Reactor Sustainability Program, the Digital Architecture (DA) Project focuses on providing a model that nuclear utilities can refer to when planning deployment of advanced technologies. The digital architecture planning model (DAPM) is the methodology for mapping power plant operational and support activities into a DA that unifies all data sources needed by the utilities to operate their plants. The DA is defined as a collection of information technology capabilities needed to support and integrate a wide spectrum of real-time digital capabilities for performance improvements of nuclear power plants. DA can be thought of as integration of the separate instrumentation and control and information systems already in place in nuclear power plants, which are brought together for the purpose of creating new levels of automation in plant work activities. A major objective in DAPM development was to survey all key areas that needed to be reviewed in order for a utility to make knowledgeable decisions regarding needs and plans to implement a DA at the plant. The development was done in two steps. First, researchers surveyed the nuclear industry in order to learn their near-term plans for adopting new advanced capabilities and implementing a network (i.e., wireless and wired) infrastructure throughout the plant, including the power block. Second, a literature review covering regulatory documents, industry standards, and technical research reports and articles was conducted. The objective of the review was to identify key areas to be covered by the DAPM, which included the following: (1) the need for a DA and its benefits to the plant; (2) resources required to implement the DA; (3) challenges that need to be addressed and resolved to implement the DA; and (4) roles and responsibilities of the DA implementation plan. The DAPM was developed based on results from the survey and the literature review. Model development

  16. Digital Architecture Planning Model

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna Helene [Idaho National Lab. (INL), Idaho Falls, ID (United States). Light Water Reactor Sustainability Program (LWRS); Al Rashdan, Ahmad Yahya Mohammad [Idaho National Lab. (INL), Idaho Falls, ID (United States). Light Water Reactor Sustainability Program (LWRS); Bly, Aaron Douglas [Idaho National Lab. (INL), Idaho Falls, ID (United States). Light Water Reactor Sustainability Program (LWRS); Rice, Brandon Charles [Idaho National Lab. (INL), Idaho Falls, ID (United States). Light Water Reactor Sustainability Program (LWRS); Fitzgerald, Kirk [Idaho National Lab. (INL), Idaho Falls, ID (United States). Light Water Reactor Sustainability Program (LWRS); Wilson, Keith Leon [Idaho National Lab. (INL), Idaho Falls, ID (United States). Light Water Reactor Sustainability Program (LWRS)

    2016-03-01

    As part of the U.S. Department of Energy's Light Water Reactor Sustainability Program, the Digital Architecture (DA) Project focuses on providing a model that nuclear utilities can refer to when planning deployment of advanced technologies. The digital architecture planning model (DAPM) is the methodology for mapping power plant operational and support activities into a DA that unifies all data sources needed by the utilities to operate their plants. The DA is defined as a collection of information technology capabilities needed to support and integrate a wide spectrum of real-time digital capabilities for performance improvements of nuclear power plants. DA can be thought of as integration of the separate instrumentation and control and information systems already in place in nuclear power plants, which are brought together for the purpose of creating new levels of automation in plant work activities. A major objective in DAPM development was to survey all key areas that needed to be reviewed in order for a utility to make knowledgeable decisions regarding needs and plans to implement a DA at the plant. The development was done in two steps. First, researchers surveyed the nuclear industry in order to learn their near-term plans for adopting new advanced capabilities and implementing a network (i.e., wireless and wired) infrastructure throughout the plant, including the power block. Second, a literature review covering regulatory documents, industry standards, and technical research reports and articles was conducted. The objective of the review was to identify key areas to be covered by the DAPM, which included the following: (1) the need for a DA and its benefits to the plant; (2) resources required to implement the DA; (3) challenges that need to be addressed and resolved to implement the DA; and (4) roles and responsibilities of the DA implementation plan. The DAPM was developed based on results from the survey and the literature review. Model development, including

  17. 3D Digital Simulation of Minnan Temple Architecture Caisson's Craft Techniques

    Science.gov (United States)

    Lin, Y. C.; Wu, T. C.; Hsu, M. F.

    2013-07-01

    Caisson is one of the important representations of the Minnan (southern Fujian) temple architecture craft techniques and decorative aesthetics. The special component design and group building method present the architectural thinking and personal characteristics of the great carpenters of Minnan temple architecture. In the late Qing Dynasty, the appearance and style of the caissons of famous temples in Taiwan clearly presented the building techniques of the great carpenters. However, as the years went by, the caisson design and craft techniques were not fully inherited, which has been a great loss of cultural assets. Accordingly, taking the caisson of Fulong temple, a work by a well-known great carpenter in Tainan, as an example, this study obtained the thinking principles of the original design and the design method at the initial period of construction through interview records and by redrawing the "Tng-Ko" (a traditional design, stakeout and construction tool). We obtained the 3D point cloud model of the caisson of Fulong temple using 3D laser scanning technology, and established a 3D digital model of each component of the caisson. Based on the caisson assembly procedure obtained from interview records, this study conducted a digital simulation of the caisson assembly to completely record and present the caisson design, construction and completion procedure. This approach to preserving the craft techniques of the Minnan temple caisson using digital technology makes a specific contribution to the heritage of these craft techniques while providing an important reference for the digital preservation of human cultural assets.

  18. A new technique for dynamic load distribution when two manipulators mutually lift a rigid object. Part 2, Derivation of entire system model and control architecture

    Energy Technology Data Exchange (ETDEWEB)

    Unseren, M.A.

    1994-04-01

    A rigid body model for the entire system which accounts for the load distribution scheme proposed in Part 1 as well as for the dynamics of the manipulators and the kinematic constraints is derived in the joint space. A technique is presented for expressing the object dynamics in terms of the joint variables of both manipulators which leads to a positive definite and symmetric inertia matrix. The model is then transformed to obtain reduced order equations of motion and a separate set of equations which govern the behavior of the internal contact forces. The control architecture is applied to the model which results in the explicit decoupling of the position and internal contact force-controlled degrees of freedom (DOF).
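
The joint-space formulation this record summarizes follows the standard pattern for two manipulators rigidly grasping a common object. The sketch below is a generic textbook form under that assumption, not the paper's exact derivation; the symbols are illustrative.

```latex
% Generic joint-space dynamics for manipulator i = 1, 2 rigidly
% grasping a common object (illustrative, not the paper's own model):
\begin{aligned}
M_i(q_i)\,\ddot{q}_i + h_i(q_i,\dot{q}_i) &= \tau_i + J_i^{\mathsf{T}}(q_i)\, f_i,
  \qquad i = 1, 2,\\
\Phi(q_1, q_2) &= 0 .
\end{aligned}
```

Here \(M_i\) is the symmetric, positive definite joint-space inertia matrix (the property the abstract highlights), \(h_i\) collects Coriolis, centrifugal and gravity terms, \(f_i\) is the contact force at the grasp, and the kinematic constraint \(\Phi = 0\) closes the chain. Transforming such a model is what yields the reduced-order motion equations and the separate internal contact-force equations the abstract refers to.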

  19. An architectural model for network interconnection

    NARCIS (Netherlands)

    van Sinderen, Marten J.; Vissers, C.A.; Kalin, T.

    1983-01-01

    This paper presents a technique of successive decomposition of a common users' activity to illustrate the problems of network interconnection. The criteria derived from this approach offer a structuring principle which is used to develop an architectural model that embeds heterogeneous subnetworks

  20. Models for Evaluating and Improving Architecture Competence

    National Research Council Canada - National Science Library

    Bass, Len; Clements, Paul; Kazman, Rick; Klein, Mark

    2008-01-01

    ... producing high-quality architectures. This report lays out the basic concepts of software architecture competence and describes four models for explaining, measuring, and improving the architecture competence of an individual...

  1. Optimization and mathematical modeling in computer architecture

    CERN Document Server

    Sankaralingam, Karu; Nowatzki, Tony

    2013-01-01

    In this book we give an overview of modeling techniques used to describe computer systems to mathematical optimization tools. We give a brief introduction to various classes of mathematical optimization frameworks with special focus on mixed integer linear programming which provides a good balance between solver time and expressiveness. We present four detailed case studies -- instruction set customization, data center resource management, spatial architecture scheduling, and resource allocation in tiled architectures -- showing how MILP can be used and quantifying by how much it outperforms t
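
A case study like instruction set customization can be modeled as a small integer program: pick a subset of candidate custom instructions maximizing speedup under an area budget. The sketch below uses exhaustive 0/1 search in place of a MILP solver purely to make the formulation concrete; the candidate instructions, speedups and area costs are hypothetical, and real problems of this kind are handed to a MILP solver rather than enumerated.

```python
from itertools import product

def best_subset(speedup, area, budget):
    # Exhaustive 0/1 search standing in for a MILP solver:
    # maximize total speedup subject to total area <= budget.
    best, best_choice = 0.0, ()
    for choice in product([0, 1], repeat=len(speedup)):
        a = sum(c * x for c, x in zip(choice, area))
        if a <= budget:
            s = sum(c * x for c, x in zip(choice, speedup))
            if s > best:
                best, best_choice = s, choice
    return best, best_choice

# Hypothetical candidate custom instructions: speedup vs. area cost
speedup = [3.0, 2.0, 4.0, 1.5]
area    = [4,   3,   5,   1]
print(best_subset(speedup, area, budget=8))  # → (6.5, (1, 1, 0, 1))
```

The objective and constraint map one-to-one onto a MILP with binary decision variables, which is why solver-based formulations scale where enumeration cannot.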

  2. Model-centric software architecture reconstruction

    NARCIS (Netherlands)

    Stoermer, C.; Rowe, A.; O'Brien, L.; Verhoef, C.

    2006-01-01

    Much progress has been achieved in defining methods, techniques, and tools for software architecture reconstruction (SAR). However, less progress has been achieved in constructing reasoning frameworks from existing systems that support organizations in architecture analysis and design decisions.

  3. Modelling Approach In Islamic Architectural Designs

    Directory of Open Access Journals (Sweden)

    Suhaimi Salleh

    2014-06-01

    Architectural designs contribute as one of the main factors that should be considered in minimizing negative impacts in the planning and structural development of buildings such as mosques. In this paper, the ergonomics perspective is revisited, focusing on conditional factors involving the organisational, the psychological, the social and the population as a whole. The paper highlights the functional and architectural integration with aesthetic elements in the form of decorative and ornamental outlay, as well as their incorporation into building structures such as walls, domes and gates. It further focuses on the mathematical aspects of the architectural designs, such as polar equations and the golden ratio. These designs are modelled into mathematical equations of various forms, while the golden ratio in mosques is verified using two techniques, namely geometric construction and a numerical method. The exemplary designs are taken from the Sabah Bandaraya Mosque in Likas, Kota Kinabalu and the Sarawak State Mosque in Kuching, while the Universiti Malaysia Sabah Mosque is used for the golden ratio. Results show that Islamic architectural buildings and designs have long had mathematical concepts and techniques underlying their foundations; hence, a modelling approach is needed to rejuvenate these Islamic designs.
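
The numerical verification of the golden ratio mentioned in this record can be illustrated with a classic convergence: ratios of consecutive Fibonacci numbers approach φ = (1 + √5)/2 ≈ 1.618. This is a generic sketch of one such numerical method, not the paper's own procedure or measurement data.

```python
import math

def golden_ratio_fibonacci(n):
    # Numerical method: the ratio of consecutive Fibonacci numbers
    # F(n+1)/F(n) converges to the golden ratio phi = (1 + sqrt(5)) / 2.
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return b / a

phi_exact = (1 + math.sqrt(5)) / 2
print(golden_ratio_fibonacci(30), phi_exact)
```

In a building survey the analogous check is whether a measured ratio of two dimensions (e.g. façade width to height) lies close to φ, which only requires comparing the measured quotient against this constant.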

  4. Location Privacy Techniques in Client-Server Architectures

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Lu, Hua; Yiu, Man Lung

    2009-01-01

    A typical location-based service returns nearby points of interest in response to a user location. As such services are becoming increasingly available and popular, location privacy emerges as an important issue. In a system that does not offer location privacy, users must disclose their exact locations in order to receive the desired services. We view location privacy as an enabling technology that may lead to increased use of location-based services. In this chapter, we consider location privacy techniques that work in traditional client-server architectures without any trusted components other … Third, their effectiveness is independent of the distribution of other users, unlike the k-anonymity approach. The chapter characterizes the privacy models assumed by existing techniques and categorizes these according to their approach. The techniques are then covered in turn according...

  5. Architectural Design Document for Camera Models

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Architecture of camera simulator models and data interface for the Maneuvering of Inspection/Servicing Vehicle (MIV) study.......Architecture of camera simulator models and data interface for the Maneuvering of Inspection/Servicing Vehicle (MIV) study....

  6. Predicting and Modeling RNA Architecture

    Science.gov (United States)

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963
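
The RMSD metric that this record contrasts with its base-pair-aware accuracy measure is straightforward to state: the root of the mean squared distance between matched atom positions in two superposed structures. A minimal sketch with hypothetical coordinates (no superposition step, which real comparisons require first):

```python
import math

def rmsd(coords_a, coords_b):
    # Root mean square deviation between two matched 3D coordinate sets.
    # Assumes the structures are already optimally superposed; a real
    # pipeline aligns them first (e.g. with the Kabsch algorithm).
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
print(rmsd(a, b))  # sqrt((0 + 1) / 2) ≈ 0.7071
```

The abstract's point is that a single global number like this can hide whether the non-Watson-Crick pairs that organize the architecture are actually reproduced.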

  7. The PASS project architectural model

    International Nuclear Information System (INIS)

    Day, C.T.; Loken, S.; Macfarlane, J.F.

    1994-01-01

    The PASS project has as its goal the implementation of solutions to the foreseen data access problems of the next generation of scientific experiments. The architectural model results from an evaluation of the operational and technical requirements and is described in terms of an abstract reference model, an implementation model and a discussion of some design aspects. The abstract reference model describes a system that matches the requirements in terms of its components and the mechanisms by which they communicate, but does not discuss policy or design issues that would be necessary to match the model to an actual implementation. Some of these issues are discussed, but more detailed design and simulation work will be necessary before choices can be made

  8. Enhancing Architectural Drawings and Models with Photoshop

    CERN Document Server

    Onstott, Scott

    2010-01-01

    Transform your CAD drawings into powerful presentations. This one-of-a-kind book shows you how to use Photoshop to turn CAD drawings and BIM models into artistic presentations with captivating animations, videos, and dynamic 3D imagery. The techniques apply to all leading architectural design software including AutoCAD, Revit, and 3ds Max Design. Video tutorials on the DVD improve your learning curve and let you compare your work with the author's. Turn CAD drawings and BIM models into powerful presentations featuring animation, videos, and 3D imagery for enhanced client appeal. Craft interactive pa...

  9. SaaS architecture and pricing models

    OpenAIRE

    Laatikainen, Gabriella; Ojala, Arto

    2014-01-01

    In the new era of computing, SaaS software with different architectural characteristics might be priced in different ways. Even though both pricing and architectural characteristics are responsible for the success of the offering, the relationship between architectural and pricing characteristics has not been studied before. The present study fills this gap by employing multi-case research. The findings accentuate that a flexible and well-designed architecture enables different pricing models...

  10. Fast, Accurate Memory Architecture Simulation Technique Using Memory Access Characteristics

    OpenAIRE

    小野, 貴継; 井上, 弘士; 村上, 和彰

    2007-01-01

    This paper proposes a fast and accurate memory architecture simulation technique. When designing memory architectures, the first steps commonly involve trace-driven simulation. However, expanding the design space increases the evaluation time. Fast simulation can be achieved by reducing the trace size, but this reduces simulation accuracy. Our approach reduces the simulation time while maintaining the accuracy of the simulation results. In order to evaluate the validity of the proposed techniq...
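
Trace-driven simulation, the baseline technique this record builds on, replays a recorded stream of memory addresses through a model of the memory hierarchy and counts events. The sketch below is a minimal, generic illustration with a direct-mapped cache and a hypothetical synthetic trace, not the paper's simulator or its trace-reduction method.

```python
def simulate_cache(trace, num_lines=64, line_size=64):
    # Minimal trace-driven simulation: replay an address trace through
    # a direct-mapped cache model and count hits and misses.
    tags = [None] * num_lines
    hits = misses = 0
    for addr in trace:
        block = addr // line_size
        index = block % num_lines
        tag = block // num_lines
        if tags[index] == tag:
            hits += 1
        else:
            misses += 1
            tags[index] = tag
    return hits, misses

# Hypothetical trace: a sequential sweep touches each line once (all
# misses); a second identical sweep then hits, since the working set fits.
trace = list(range(0, 64 * 64, 64)) * 2
print(simulate_cache(trace))  # → (64, 64)
```

The cost the paper targets is visible even here: evaluation time grows linearly with trace length times the number of design points, which is why trace reduction matters.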

  11. Dynamic Analysis Techniques for the Reconstruction of Architectural Views

    NARCIS (Netherlands)

    Cornelissen, B.

    2007-01-01

    Gaining an understanding of software systems is an important discipline in many software engineering contexts. It is essential that software engineers are assisted as much as possible during this task, e.g., by using tools and techniques that provide architectural views on the software at hand. This

  12. Relating business modelling and enterprise architecture

    NARCIS (Netherlands)

    Meertens, Lucas Onno

    2013-01-01

    This thesis proposes a methodology for creating business models, evaluating them, and relating them to enterprise architecture. The methodology consists of several steps, leading from an organization’s current situation to a target situation, via business models and enterprise architecture.

  13. The Ragnarok Architectural Software Configuration Management Model

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    1999-01-01

    The architecture is the fundamental framework for designing and implementing large-scale software, and the ability to trace and control its evolution is essential. However, many traditional software configuration management tools view 'software' merely as a set of files, not as an architecture. This introduces an unfortunate impedance mismatch between the design domain (architecture level) and the configuration management domain (file level). This paper presents a software configuration management model that allows tight version control and configuration management of the architecture of a software system...

  14. A cognitive model for software architecture complexity

    NARCIS (Netherlands)

    Bouwers, E.; Lilienthal, C.; Visser, J.; Van Deursen, A.

    2010-01-01

    Evaluating the complexity of the architecture of a software system is a difficult task. Many aspects have to be considered to come to a balanced assessment. Several architecture evaluation methods have been proposed, but very few define a quality model to be used during the evaluation process. In

  15. A model for architectural comparison

    Science.gov (United States)

    Ho, Sam; Snyder, Larry

    1988-04-01

    Recently, architectures for sequential computers became a topic of much discussion and controversy. At the center of this storm is the Reduced Instruction Set Computer, or RISC, first described at Berkeley in 1980. While the merits of the RISC architecture cannot be ignored, its opponents have tried to do just that, while its proponents have expanded and frequently exaggerated them. This state of affairs has persisted to this day. No attempt is made to settle this controversy, since indeed there is likely no one answer. A qualitative framework is provided for a rational discussion of the issues.

  16. Extending enterprise architecture modelling with business goals and requirements

    Science.gov (United States)

    Engelsman, Wilco; Quartel, Dick; Jonkers, Henk; van Sinderen, Marten

    2011-02-01

    The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling techniques for EA focus on the products, services, processes and applications of an enterprise. In addition, techniques may be provided to describe structured requirements lists and use cases. Little support is available however for modelling the underlying motivation of EAs in terms of stakeholder concerns and the high-level goals that address these concerns. This article describes a language that supports the modelling of this motivation. The definition of the language is based on existing work on high-level goal and requirements modelling and is aligned with an existing standard for enterprise modelling: the ArchiMate language. Furthermore, the article illustrates how EA can benefit from analysis techniques from the requirements engineering domain.
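To make the kind of motivation modelling described above concrete, the sketch below links goals, requirements and core architecture elements, and traces which goals an element serves. It is only an illustration: the element names, and the reduction of the motivation concepts to three classes, are assumptions of the example, not the article's actual metamodel or the ArchiMate motivation extension.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    refines: list = field(default_factory=list)     # higher-level goals this goal refines

@dataclass
class Requirement:
    name: str
    realizes: list = field(default_factory=list)    # goals this requirement realizes

@dataclass
class CoreElement:                                  # e.g. a service, process or application
    name: str
    satisfies: list = field(default_factory=list)   # requirements this element satisfies

def goals_served_by(core):
    """Trace from an architecture element back to all high-level goals it supports."""
    found, todo = set(), [g for r in core.satisfies for g in r.realizes]
    while todo:
        g = todo.pop()
        if g.name not in found:
            found.add(g.name)
            todo.extend(g.refines)
    return found

# Hypothetical example elements:
reduce_cost = Goal("Reduce operating cost")
automate = Goal("Automate claim handling", refines=[reduce_cost])
req = Requirement("Claims processed without manual steps", realizes=[automate])
service = CoreElement("Claim handling service", satisfies=[req])
print(goals_served_by(service))   # both the direct goal and the goal it refines
```

This is the kind of traceability query that motivation modelling enables: from a stakeholder concern down to the architecture elements that address it, and back.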

  17. Relating business modelling and enterprise architecture

    OpenAIRE

    Meertens, Lucas Onno

    2013-01-01

    This thesis proposes a methodology for creating business models, evaluating them, and relating them to enterprise architecture. The methodology consists of several steps, leading from an organization’s current situation to a target situation, via business models and enterprise architecture. Currently, increasing amounts of businesses rely on IT systems to do their business. However, success rates of IT implementations projects are low. Difficulties exist in aligning existing IT systems with b...

  18. Electromagnetic Physics Models for Parallel Computing Architectures

    International Nuclear Information System (INIS)

    Amadio, G; Bianchini, C; Iope, R; Ananya, A; Apostolakis, J; Aurora, A; Bandieramonte, M; Brun, R; Carminati, F; Gheata, A; Gheata, M; Goulas, I; Nikitina, T; Bhattacharyya, A; Mohanty, A; Canal, P; Elvira, D; Jun, S Y; Lima, G; Duhem, L

    2016-01-01

The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe the implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well. (paper)

  19. Electromagnetic Physics Models for Parallel Computing Architectures

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe the implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  20. A technique system for the measurement, reconstruction and character extraction of rice plant architecture.

    Directory of Open Access Journals (Sweden)

    Xumeng Li

Full Text Available This study developed a technique system for the measurement, reconstruction, and trait extraction of rice canopy architectures, which have challenged functional-structural plant modeling for decades and have become the foundation of the design of ideo-plant architectures. The system uses the location-separation-measurement method (LSMM) for the collection of data on the canopy architecture and the analytic geometry method for the reconstruction and visualization of the three-dimensional (3D) digital architecture of the rice plant. It also uses the virtual clipping method for extracting the key traits of the canopy architecture such as the leaf area, inclination, and azimuth distribution in spatial coordinates. To establish the technique system, we developed (i) simple tools to measure the spatial position of the stem axis and azimuth of the leaf midrib and to capture images of tillers and leaves; (ii) computer software programs for extracting data on stem diameter, leaf nodes, and leaf midrib curves from the tiller images and data on leaf length, width, and shape from the leaf images; (iii) a database of digital architectures that stores the measured data and facilitates the reconstruction of the 3D visual architecture and the extraction of architectural traits; and (iv) computation algorithms for virtual clipping to stratify the rice canopy, to extend the stratified surface from the horizontal plane to a general curved surface (including a cylindrical surface), and to implement the clipping in silico. Each component of the technique system was quantitatively validated and visually compared to images, and the sensitivity of the virtual clipping algorithms was analyzed. This technique is inexpensive and accurate and provides high throughput for the measurement, reconstruction, and trait extraction of rice canopy architectures. The technique provides a more practical method of data collection to serve functional-structural plant models of rice and for the
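The virtual clipping step described above can be illustrated with a toy computation: given leaf surfaces sampled as points with associated areas, stratify the canopy into horizontal layers and accumulate the leaf area per layer. The point sampling, layer heights and per-sample areas below are invented for the sketch; the paper's algorithms also handle clipping against general curved surfaces.

```python
import numpy as np

def stratified_leaf_area(points, areas, layer_edges):
    """points: (N, 3) samples on leaf surfaces; areas: (N,) area per sample;
    layer_edges: increasing canopy heights delimiting the horizontal layers."""
    layer = np.digitize(points[:, 2], layer_edges) - 1      # layer index per sample
    n_layers = len(layer_edges) - 1
    return np.array([areas[layer == i].sum() for i in range(n_layers)])

rng = np.random.default_rng(0)
pts = rng.uniform([0.0, 0.0, 0.0], [1.0, 1.0, 0.9], size=(1000, 3))  # fake canopy samples
areas = np.full(1000, 0.001)                     # 0.001 m^2 per sampled leaf patch
profile = stratified_leaf_area(pts, areas, np.array([0.0, 0.3, 0.6, 0.9]))
print(profile)                                   # leaf area per layer, bottom to top
print(profile.sum())                             # equals the total sampled leaf area
```

The same accumulation, run per inclination or azimuth bin instead of per height layer, yields the other architectural traits the system extracts.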

  1. Synchronous Modeling of Modular Avionics Architectures using the SIGNAL Language

    OpenAIRE

    Gamatié , Abdoulaye; Gautier , Thierry

    2002-01-01

This document presents a study on the modeling of architecture components for avionics applications. We take the avionics standard ARINC 653 specifications as a basis, and use the synchronous language SIGNAL to describe the modeling. A library of APEX object models (partition, process, communication and synchronization services, etc.) has been implemented. This should allow one to describe distributed real-time applications using POLYCHRONY, so as to access formal tools and techniques for ar...

  2. Network interconnections: an architectural reference model

    NARCIS (Netherlands)

    Butscher, B.; Lenzini, L.; Morling, R.; Vissers, C.A.; Popescu-Zeletin, R.; van Sinderen, Marten J.; Heger, D.; Krueger, G.; Spaniol, O.; Zorn, W.

    1985-01-01

    One of the major problems in understanding the different approaches in interconnecting networks of different technologies is the lack of reference to a general model. The paper develops the rationales for a reference model of network interconnection and focuses on the architectural implications for

  3. Model Driven Architecture: Foundations and Applications

    NARCIS (Netherlands)

    Rensink, Arend

    The OMG's Model Driven Architecture (MDA) initiative has been the focus of much attention in both academia and industry, due to its promise of more rapid and consistent software development through the increased use of models. In order for MDA to reach its full potential, the ability to manipulate

  4. 3D DIGITAL SIMULATION OF MINNAN TEMPLE ARCHITECTURE CAISSON'S CRAFT TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Y. C. Lin

    2013-07-01

Full Text Available Caisson is one of the important representations of the Minnan (southern Fujian) temple architecture craft techniques and decorative aesthetics. The special component design and group building method present the architectural thinking and personal characteristics of the great carpenters of Minnan temple architecture. In the late Qing Dynasty, the appearance and style of the caissons of famous temples in Taiwan clearly presented the building techniques of the great carpenters. However, as the years went by, the caisson design and craft techniques were not fully inherited, which has been a great loss of cultural assets. Accordingly, taking the caisson of Fulong temple, a work by a well-known great carpenter in Tainan, as an example, this study obtained the thinking principles of the original design and the design method at the initial period of construction through interview records and the step of redrawing the "Tng-Ko" (traditional design, stakeout and construction tool). We obtained the 3D point cloud model of the caisson of Fulong temple using 3D laser scanning technology, and established the 3D digital model of each component of the caisson. Based on the caisson component procedure obtained from interview records, this study conducted a digital simulation of the caisson components to completely record and present the caisson design, construction and completion procedure. This model of preserving the craft techniques of the Minnan temple caisson by using digital technology makes a specific contribution to the heritage of the craft techniques while providing an important reference for the digital preservation of human cultural assets.

  5. Architecture Descriptions. A Contribution to Modeling of Production System Architecture

    DEFF Research Database (Denmark)

    Jepsen, Allan Dam; Hvam, Lars

    a proper understanding of the architecture phenomenon and the ability to describe it in a manner that allow the architecture to be communicated to and handled by stakeholders throughout the company. Despite the existence of several design philosophies in production system design such as Lean, that focus...... a diverse set of stakeholder domains and tools in the production system life cycle. To support such activities, a contribution is made to the identification and referencing of production system elements within architecture descriptions as part of the reference architecture framework. The contribution...

  6. The Software Architecture of Global Climate Models

    Science.gov (United States)

    Alexander, K. A.; Easterbrook, S. M.

    2011-12-01

    It has become common to compare and contrast the output of multiple global climate models (GCMs), such as in the Climate Model Intercomparison Project Phase 5 (CMIP5). However, intercomparisons of the software architecture of GCMs are almost nonexistent. In this qualitative study of seven GCMs from Canada, the United States, and Europe, we attempt to fill this gap in research. We describe the various representations of the climate system as computer programs, and account for architectural differences between models. Most GCMs now practice component-based software engineering, where Earth system components (such as the atmosphere or land surface) are present as highly encapsulated sub-models. This architecture facilitates a mix-and-match approach to climate modelling that allows for convenient sharing of model components between institutions, but it also leads to difficulty when choosing where to draw the lines between systems that are not encapsulated in the real world, such as sea ice. We also examine different styles of couplers in GCMs, which manage interaction and data flow between components. Finally, we pay particular attention to the varying levels of complexity in GCMs, both between and within models. Many GCMs have some components that are significantly more complex than others, a phenomenon which can be explained by the respective institution's research goals as well as the origin of the model components. In conclusion, although some features of software architecture have been adopted by every GCM we examined, other features show a wide range of different design choices and strategies. These architectural differences may provide new insights into variability and spread between models.
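A minimal sketch of the component-and-coupler organisation described above: two encapsulated sub-models exchange fields through a coupler that owns the data flow. The component names, exchanged fields and "physics" are invented for the illustration; real GCM couplers also perform regridding and time interpolation between component grids.

```python
class Component:
    """An encapsulated sub-model that advances one coupling interval at a time."""
    def __init__(self, name):
        self.name, self.exports = name, {}
    def step(self, imports):
        raise NotImplementedError

class Atmosphere(Component):
    def step(self, imports):
        sst = imports.get("sea_surface_temp", 288.0)          # Kelvin
        self.exports["surface_wind"] = 0.1 * (sst - 273.0)    # toy "physics"

class Ocean(Component):
    def step(self, imports):
        wind = imports.get("surface_wind", 0.0)
        self.exports["sea_surface_temp"] = 288.0 + 0.5 * wind

class Coupler:
    """Manages interaction and data flow between components."""
    def __init__(self, components):
        self.components, self.state = components, {}
    def run(self, steps):
        for _ in range(steps):
            for c in self.components:
                c.step(self.state)            # component reads its imports...
                self.state.update(c.exports)  # ...and the coupler moves its exports
        return self.state

state = Coupler([Atmosphere("atm"), Ocean("ocn")]).run(steps=3)
print(state)
```

The mix-and-match property discussed in the study follows from this interface: any object implementing `step` can be swapped in without touching the other components.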

  7. Organizational Learning Supported by Reference Architecture Models

    DEFF Research Database (Denmark)

    Nardello, Marco; Møller, Charles; Gøtze, John

    2017-01-01

of an emerging technical standard specific to the manufacturing industry. Global manufacturing experts consider the Reference Architecture Model Industry 4.0 (RAMI4.0) one of the cornerstones for the implementation of Industry 4.0. The instantiation contributed to organizational learning in the laboratory...

  8. Utilizing Rapid Prototyping for Architectural Modeling

    Science.gov (United States)

    Kirton, E. F.; Lavoie, S. D.

    2006-01-01

    This paper will discuss our approach to, success with and future direction in rapid prototyping for architectural modeling. The premise that this emerging technology has broad and exciting applications in the building design and construction industry will be supported by visual and physical evidence. This evidence will be presented in the form of…

  9. On Realism of Architectural Procedural Models

    Czech Academy of Sciences Publication Activity Database

    Beneš, J.; Kelly, T.; Děchtěrenko, Filip; Křivánek, J.; Müller, P.

    2017-01-01

    Roč. 36, č. 2 (2017), s. 225-234 ISSN 0167-7055 Grant - others:AV ČR(CZ) StrategieAV21/14 Program:StrategieAV Institutional support: RVO:68081740 Keywords : realism * procedural modeling * architecture Subject RIV: IN - Informatics, Computer Science OBOR OECD: Cognitive sciences Impact factor: 1.611, year: 2016

  10. Modeling Operations Costs for Human Exploration Architectures

    Science.gov (United States)

    Shishko, Robert

    2013-01-01

    Operations and support (O&S) costs for human spaceflight have not received the same attention in the cost estimating community as have development costs. This is unfortunate as O&S costs typically comprise a majority of life-cycle costs (LCC) in such programs as the International Space Station (ISS) and the now-cancelled Constellation Program. Recognizing this, the Constellation Program and NASA HQs supported the development of an O&S cost model specifically for human spaceflight. This model, known as the Exploration Architectures Operations Cost Model (ExAOCM), provided the operations cost estimates for a variety of alternative human missions to the moon, Mars, and Near-Earth Objects (NEOs) in architectural studies. ExAOCM is philosophically based on the DoD Architecture Framework (DoDAF) concepts of operational nodes, systems, operational functions, and milestones. This paper presents some of the historical background surrounding the development of the model, and discusses the underlying structure, its unusual user interface, and lastly, previous examples of its use in the aforementioned architectural studies.

  11. Process Models for Security Architectures

    Directory of Open Access Journals (Sweden)

    Floarea NASTASE

    2006-01-01

Full Text Available This paper presents a model for an integrated security system that can be implemented in any organization. It is based on security-specific standards and taxonomies such as ISO 7498-2 and the Common Criteria. The functionalities are derived from the classes proposed in the Common Criteria document. In the paper we present the process model for each functionality, and we also focus on the specific components.

  12. An ontology-based approach for modelling architectural styles

    OpenAIRE

    Pahl, Claus; Giesecke, Simon; Hasselbring, Wilhelm

    2007-01-01

peer-reviewed The conceptual modelling of software architectures is of central importance for the quality of a software system. A rich modelling language is required to integrate the different aspects of architecture modelling, such as architectural styles, structural and behavioural modelling, into a coherent framework. We propose an ontological approach for architectural style modelling based on description logic as an abstract, meta-level modelling instrument. Architect...

  13. An Evaluation of ADLs on Modeling Patterns for Software Architecture

    NARCIS (Netherlands)

    Waqas Kamal, Ahmad; Avgeriou, Paris

    2007-01-01

    Architecture patterns provide solutions to recurring design problems at the architecture level. In order to model patterns during software architecture design, one may use a number of existing Architecture Description Languages (ADLs), including the UML, a generic language but also a de facto

  14. Model-Driven Development of Safety Architectures

    Science.gov (United States)

    Denney, Ewen; Pai, Ganesh; Whiteside, Iain

    2017-01-01

    We describe the use of model-driven development for safety assurance of a pioneering NASA flight operation involving a fleet of small unmanned aircraft systems (sUAS) flying beyond visual line of sight. The central idea is to develop a safety architecture that provides the basis for risk assessment and visualization within a safety case, the formal justification of acceptable safety required by the aviation regulatory authority. A safety architecture is composed from a collection of bow tie diagrams (BTDs), a practical approach to manage safety risk by linking the identified hazards to the appropriate mitigation measures. The safety justification for a given unmanned aircraft system (UAS) operation can have many related BTDs. In practice, however, each BTD is independently developed, which poses challenges with respect to incremental development, maintaining consistency across different safety artifacts when changes occur, and in extracting and presenting stakeholder specific information relevant for decision making. We show how a safety architecture reconciles the various BTDs of a system, and, collectively, provide an overarching picture of system safety, by considering them as views of a unified model. We also show how it enables model-driven development of BTDs, replete with validations, transformations, and a range of views. Our approach, which we have implemented in our toolset, AdvoCATE, is illustrated with a running example drawn from a real UAS safety case. The models and some of the innovations described here were instrumental in successfully obtaining regulatory flight approval.
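The bow tie structure described above can be sketched as a small data model: each diagram links threats and consequences to a top event through barriers, and a safety architecture must reconcile diagrams that share events. All names below are hypothetical, and the sketch is far simpler than AdvoCATE's actual metamodel.

```python
from dataclasses import dataclass, field

@dataclass
class BowTie:
    top_event: str
    threats: dict = field(default_factory=dict)       # threat -> list of prevention barriers
    consequences: dict = field(default_factory=dict)  # consequence -> list of mitigation barriers

def unmitigated(btd):
    """Threats/consequences with no barrier: gaps a safety reviewer must see."""
    gaps = [t for t, barriers in btd.threats.items() if not barriers]
    gaps += [c for c, barriers in btd.consequences.items() if not barriers]
    return gaps

def shared_events(btds):
    """Events appearing in more than one BTD, i.e. where a safety architecture
    must reconcile the individual diagrams into one unified model."""
    seen, shared = {}, set()
    for b in btds:
        for e in [b.top_event, *b.threats, *b.consequences]:
            if e in seen and seen[e] is not b:
                shared.add(e)
            seen[e] = b
    return shared

loss_of_link = BowTie("Loss of C2 link",
                      threats={"RF interference": ["frequency management"]},
                      consequences={"Flyaway": []})
midair = BowTie("Airborne conflict",
                threats={"Flyaway": ["geofencing"]},
                consequences={"Midair collision": ["see-and-avoid"]})
print(unmitigated(loss_of_link), shared_events([loss_of_link, midair]))
```

Here "Flyaway" is a consequence of one diagram and a threat of another: exactly the kind of cross-diagram link that motivates treating the BTDs as views of a unified model.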

  15. 3D DIGITAL SIMULATION OF MINNAN TEMPLE ARCHITECTURE CAISSON'S CRAFT TECHNIQUES

    OpenAIRE

    Y. C. Lin; T. C. Wu; M. F. Hsu

    2013-01-01

    Caisson is one of the important representations of the Minnan (southern Fujian) temple architecture craft techniques and decorative aesthetics. The special component design and group building method present the architectural thinking and personal characteristics of great carpenters of Minnan temple architecture. In late Qing Dynasty, the appearance and style of caissons of famous temples in Taiwan apparently presented the building techniques of the great carpenters. However, as the y...

  16. Modelling and using product architectures in mechatronic product development

    DEFF Research Database (Denmark)

    Bruun, Hans Peter Lomholt; Mortensen, Niels Henrik

    , experiences by using the architecture representation in a mechatronic development project, and the scope of using the architecture model as a skeleton for a data structure in a PLM system. The fundamental idea for planning and modeling holistic architectures is that an improved understanding of the whole...

  17. Architecture oriented modeling and simulation method for combat mission profile

    Directory of Open Access Journals (Sweden)

    CHEN Xia

    2017-05-01

Full Text Available In order to effectively analyze the system behavior and system performance of a combat mission profile, an architecture-oriented modeling and simulation method is proposed. Starting from architecture modeling, this paper describes the mission profile based on the definition from the National Military Standard of China and the US Department of Defense Architecture Framework (DoDAF) model, and constructs the architecture model of the mission profile. Then the transformation relationship between the architecture model and the agent simulation model is proposed to form an executable model of the mission profile. Finally, taking the air-defense mission profile as an example, the agent simulation model is established based on the architecture model, and the input and output relations of the simulation model are analyzed. This provides methodological guidance for combat mission profile design.

  18. The Fermilab central computing facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-01-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM Computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. (orig.)

  19. The Fermilab Central Computing Facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs

  20. Models and prototypes of biomimetic devices to architectural purposes

    Directory of Open Access Journals (Sweden)

    Silvia Titotto

    2014-12-01

Full Text Available This paper presents some results of an ongoing interdisciplinary research project on models and prototypes of biomimetic devices realised as installations. The focus of this paper is to outline the role of this research in architecture, as it permeates cultural and heritage contexts: it is a way of understanding and living in the world, as well as of producing devices or environments that pass on to future generations to use, learn from and be inspired by. Both the theoretical and the experimental work done so far indicate that installations built by combining laser cutting and rapid prototyping techniques may be among the most feasible ways of developing and testing new technologies for biomimetic devices for architectural purposes that put both tectonics and nature as their central theme.

  1. Models for Evaluating and Improving Architecture Competence

    National Research Council Canada - National Science Library

    Bass, Len; Clements, Paul; Kazman, Rick; Klein, Mark

    2008-01-01

    Software architecture competence is the ability of an individual or organization to acquire, use, and sustain the skills and knowledge necessary to carry out software architecture-centric practices...

  2. Business model driven service architecture design for enterprise application integration

    OpenAIRE

    Gacitua-Decar, Veronica; Pahl, Claus

    2008-01-01

    Increasingly, organisations are using a Service-Oriented Architecture (SOA) as an approach to Enterprise Application Integration (EAI), which is required for the automation of business processes. This paper presents an architecture development process which guides the transition from business models to a service-based software architecture. The process is supported by business reference models and patterns. Firstly, the business process models are enhanced with domain model elements, applicat...

  3. Architecture and Key Techniques of Augmented Reality Maintenance Guiding System for Civil Aircrafts

    Science.gov (United States)

    hong, Zhou; Wenhua, Lu

    2017-01-01

Augmented reality technology is introduced into the maintenance field to strengthen the information available in real-world scenarios by integrating virtual maintenance-assistance information with the real scene. This can lower the difficulty of maintenance, reduce maintenance errors, and improve the maintenance efficiency and quality of civil aviation crews. An architecture for an augmented reality virtual maintenance guiding system is proposed on the basis of the definition of augmented reality and an analysis of the characteristics of augmented reality virtual maintenance. Key techniques involved, such as the standardization and organization of maintenance data, 3D registration, modeling of maintenance guidance information, and virtual-maintenance man-machine interaction, are elaborated, and solutions are given.

  4. Integrating acoustic analysis in the architectural design process using parametric modelling

    DEFF Research Database (Denmark)

    Peters, Brady

    2011-01-01

    This paper discusses how parametric modeling techniques can be used to provide architectural designers with a better understanding of the acoustic performance of their designs and provide acoustic engineers with models that can be analyzed using computational acoustic analysis software. Architects......, acoustic performance can inform the geometry and material logic of the design. In this way, the architectural design and the acoustic analysis model become linked....
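As a toy illustration of linking a parametric model to an acoustic estimate, the sketch below evaluates Sabine's reverberation formula, RT60 = 0.161·V/A, for a box room whose height is a design parameter. The room dimensions and absorption coefficients are invented; a real workflow would export the parametric geometry to dedicated computational acoustic analysis software, as the paper describes.

```python
def rt60_box(width, depth, height, alpha_walls, alpha_floor, alpha_ceiling):
    """Sabine estimate RT60 = 0.161 * V / A for a box-shaped room (metres)."""
    volume = width * depth * height
    wall_area = 2.0 * (width + depth) * height
    floor_area = width * depth                      # ceiling area equals floor area
    absorption = (wall_area * alpha_walls           # total absorption A in m^2 sabins
                  + floor_area * alpha_floor
                  + floor_area * alpha_ceiling)
    return 0.161 * volume / absorption

# Sweep one design parameter, as a parametric modeller would:
for height in (3.0, 6.0, 9.0):
    print(height, round(rt60_box(10.0, 15.0, height, 0.05, 0.3, 0.6), 2))
```

Even this crude estimate shows the feedback loop the paper argues for: raising the ceiling lengthens the reverberation time, so the acoustic target can drive the geometry.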

  5. Loop overhead reduction techniques for coarse grained reconfigurable architectures

    NARCIS (Netherlands)

    Vadivel, K.; Wijtvliet, M.; Jordans, R.; Corporaal, H.

    2017-01-01

Due to their flexibility and high performance, Coarse Grained Reconfigurable Arrays (CGRAs) are a topic of increasing research interest. However, CGRAs also have the potential to achieve very high energy efficiency in comparison to other reconfigurable architectures when hardware optimizations are

  6. A Technique For Developing Strategic Differentiation For Small Architectural Firms

    NARCIS (Netherlands)

    Heintz, John Linke; Aranda-Mena, Guillermo; Saari, Arto; Houvinen, Pekka

    2016-01-01

    Since the crash in 2007, the number of small architectural firms has risen dramatically as both recently graduated and recently laid off architects decide to go out on their own. In such a crowded market firms will need to find some way to distinguish themselves from their many competitors. Arguing

  7. Managing changes in the enterprise architecture modelling context

    Science.gov (United States)

    Khanh Dam, Hoa; Lê, Lam-Son; Ghose, Aditya

    2016-07-01

    Enterprise architecture (EA) models the whole enterprise in various aspects regarding both business processes and information technology resources. As the organisation grows, the architecture of its systems and processes must also evolve to meet the demands of the business environment. Evolving an EA model may involve making changes to various components across different levels of the EA. As a result, an important issue before making a change to an EA model is assessing the ripple effect of the change, i.e. change impact analysis. Another critical issue is change propagation: given a set of primary changes that have been made to the EA model, what additional secondary changes are needed to maintain consistency across multiple levels of the EA. There has been however limited work on supporting the maintenance and evolution of EA models. This article proposes an EA description language, namely ChangeAwareHierarchicalEA, integrated with an evolution framework to support both change impact analysis and change propagation within an EA model. The core part of our framework is a technique for computing the impact of a change and a new method for generating interactive repair plans from Alloy consistency rules that constrain the EA model.
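Change impact analysis of the kind described above can be sketched as reachability over a dependency graph: everything that transitively depends on a changed element is potentially affected. The EA elements below are invented, and the sketch omits the article's repair-plan generation from Alloy consistency rules.

```python
from collections import defaultdict

def impact(depends_on, changed):
    """depends_on maps each element to the elements it depends on; return every
    element transitively affected by changing `changed` (excluding it)."""
    dependants = defaultdict(set)                  # invert the edges
    for src, targets in depends_on.items():
        for t in targets:
            dependants[t].add(src)
    seen, todo = set(), [changed]
    while todo:
        for d in dependants[todo.pop()]:
            if d not in seen:
                seen.add(d)
                todo.append(d)
    return seen

# Hypothetical EA fragment spanning application and business levels:
ea = {
    "claims app":        ["claims service"],
    "claims service":    ["customer db"],
    "reporting process": ["claims app"],
}
print(impact(ea, "customer db"))   # ripples up through service, app and process
```

Change propagation then asks the converse question: which of these affected elements need secondary changes to restore consistency across the EA levels.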

  8. VLSI ARCHITECTURE FOR IMAGE COMPRESSION THROUGH ADDER MINIMIZATION TECHNIQUE AT DCT STRUCTURE

    Directory of Open Access Journals (Sweden)

    N.R. Divya

    2014-08-01

Full Text Available Data compression plays a vital role in multimedia devices, allowing information to be presented in a succinct form. Initially, the DCT structure was used for image compression, as it has lower complexity and is area efficient. The 2D DCT also provides reasonable data compression, but its implementation requires more multipliers and adders, and thus it occupies more area and consumes more power. Taking account of all this, this paper deals with a VLSI architecture for image compression using a ROM-free DA-based DCT (Discrete Cosine Transform) structure. This technique provides high throughput and is well suited for real-time implementation. To achieve this, the image matrix is subdivided into odd and even terms, and the multiplication functions are then replaced by a shift-and-add approach. Kogge-Stone adder techniques are proposed for obtaining bit-wise image quality, which determines new trade-off levels compared to the previous techniques. Overall, the proposed architecture yields reduced memory, low power consumption and high throughput. MATLAB is used as a supporting tool for receiving the input pixels and obtaining the output image. Verilog HDL is used for implementing the design, ModelSim for simulation, and Quartus II to synthesize and obtain details about power and area.
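The shift-and-add idea behind such ROM-free DA-based DCT hardware can be illustrated in a few lines: a multiplication by a fixed-point DCT coefficient is replaced by shifts and additions over the set bits of the quantised coefficient, so no hardware multiplier is needed. The 8-bit quantisation below is an assumption of the example, not the paper's design.

```python
def shift_add_mult(x: int, coeff: float, frac_bits: int = 8) -> int:
    """Compute round(x * coeff) using only shifts and adds on x."""
    q = round(coeff * (1 << frac_bits))       # quantised fixed-point coefficient
    acc = 0
    for bit in range(q.bit_length()):
        if (q >> bit) & 1:
            acc += x << bit                   # one adder per set coefficient bit
    return (acc + (1 << (frac_bits - 1))) >> frac_bits   # rounding shift

# cos(pi/4) ~ 0.7071, a coefficient that appears throughout the 8-point DCT:
print(shift_add_mult(100, 0.7071))   # → 71, matching round(100 * 0.7071)
```

In hardware, the chain of `acc += x << bit` additions is exactly where a fast carry structure such as a Kogge-Stone adder pays off.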

  9. Organizational Learning Supported by Reference Architecture Models

    DEFF Research Database (Denmark)

    Nardello, Marco; Møller, Charles; Gøtze, John

    2017-01-01

    The wave of the fourth industrial revolution (Industry 4.0) is bringing a new vision of the manufacturing industry. In manufacturing, one of the buzzwords of the moment is “Smart production”. Smart production involves manufacturing equipment with many sensors that can generate and transmit large...... amounts of data. These data and information from manufacturing operations are however not shared in the organization. Therefore the organization is not using them to learn and improve their operations. To address this problem, the authors implemented in an Industry 4.0 laboratory an instance...... of an emerging technical standard specific for the manufacturing industry. Global manufacturing experts consider the Reference Architecture Model Industry 4.0 (RAMI4.0) as one of the corner stones for the implementation of Industry 4.0. The instantiation contributed to organizational learning in the laboratory...

  10. Extending enterprise architecture modelling with business goals and requirements

    NARCIS (Netherlands)

    Engelsman, W.; Quartel, Dick; Jonkers, Henk; van Sinderen, Marten J.

    The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling

  11. Modelling of control system architecture for next-generation accelerators

    International Nuclear Information System (INIS)

    Liu, Shi-Yao; Kurokawa, Shin-ichi

    1990-01-01

    Functional, hardware and software system architectures define the fundamental structure of control systems. Modelling is a protocol of system architecture used in system design. This paper reviews the various modelling approaches adopted in the past ten years and suggests a new modelling approach for next-generation accelerators. (author)

  12. Proactive Modeling of Market, Product and Production Architectures

    DEFF Research Database (Denmark)

    Mortensen, Niels Henrik; Hansen, Christian Lindschou; Hvam, Lars

    2011-01-01

    This paper presents an operational model that allows description of market, product and production architectures. The main feature of this model is the ability to describe both the structural and the functional aspects of architectures. The structural aspect answers the question: What constitutes...... the architecture, e.g. standard designs, design units and interfaces? The functional aspect answers the question: What is the behaviour of the architecture, what is it able to do, i.e. which products at which performance levels can be derived from the architecture? Among the most important benefits...... of this model is the explicit ability to describe what the architecture is prepared for, and what it is not prepared for - concerning development of future derivative products. The model has been applied in a large scale global product development project. Among the most important benefits is contribution to...

  13. Software architecture 2

    CERN Document Server

    Oussalah, Mourad Chabanne

    2014-01-01

    Over the past 20 years, software architectures have significantly contributed to the development of complex and distributed systems. Nowadays, it is recognized that one of the critical problems in the design and development of any complex software system is its architecture, i.e. the organization of its architectural elements. Software Architecture presents the software architecture paradigms based on objects, components, services and models, as well as the various architectural techniques and methods, the analysis of architectural qualities, models of representation of architectural templa

  14. Software architecture 1

    CERN Document Server

    Oussalah , Mourad Chabane

    2014-01-01

    Over the past 20 years, software architectures have significantly contributed to the development of complex and distributed systems. Nowadays, it is recognized that one of the critical problems in the design and development of any complex software system is its architecture, i.e. the organization of its architectural elements. Software Architecture presents the software architecture paradigms based on objects, components, services and models, as well as the various architectural techniques and methods, the analysis of architectural qualities, models of representation of architectural template

  15. Building energy modeling for green architecture and intelligent dashboard applications

    Science.gov (United States)

    DeBlois, Justin

    Buildings are responsible for 40% of the carbon emissions in the United States. Energy efficiency in this sector is key to reducing overall greenhouse gas emissions. This work studied a passive architectural technique, the roof solar chimney, for reducing the cooling load in homes. Three models of the chimney were created: a zonal building energy model, a computational fluid dynamics model, and a numerical analytic model. The study estimated the error introduced to the building energy model (BEM) through key assumptions, and then used a sensitivity analysis to examine the impact on the model outputs. The conclusion was that the error in the building energy model is small enough to use it reliably for building simulation. Further studies simulated the roof solar chimney in a whole building, integrated into one side of the roof. Comparisons were made between high and low efficiency constructions, and three ventilation strategies. The results showed that in four US climates, the roof solar chimney results in significant cooling load energy savings of up to 90%. After developing this new method for the small scale representation of a passive architecture technique in BEM, the study expanded its scope to address a fundamental issue in modeling - the implementation of uncertainty arising from, and the improvement of, occupant behavior. This is believed to be one of the weakest links in both accurate modeling and proper, energy efficient building operation. A calibrated model of the Mascaro Center for Sustainable Innovation's LEED Gold, 3,400 m2 building was created. Then algorithms were developed for integration into the building's dashboard application that show the occupant the energy savings for a variety of behaviors in real time. An approach using neural networks to act on real-time building automation system data was found to be the most accurate and efficient way to predict the current energy savings for each scenario. A stochastic study examined the impact of the

  16. Application of Data Architecture Model in Enterprise Management

    Directory of Open Access Journals (Sweden)

    Shi Song

    2017-01-01

    Full Text Available Today, in an era of rapid information growth and high-speed expansion of data volumes, earlier systems make communication, sharing and integration difficult. In order to integrate data resources, eliminate the “information island” and build enterprise development blueprints, people have gradually realized the importance of top-level design. Many enterprises have established their own top-level enterprise architecture designs for their development, and the data architecture model at its core is likewise reflected differently across industries. This paper mainly studies the data architecture model, expounding its role and relationships.

  17. Architecture

    OpenAIRE

    Clear, Nic

    2014-01-01

    When discussing science fiction’s relationship with architecture, the usual practice is to look at the architecture “in” science fiction—in particular, the architecture in SF films (see Kuhn 75-143) since the spaces of literary SF present obvious difficulties as they have to be imagined. In this essay, that relationship will be reversed: I will instead discuss science fiction “in” architecture, mapping out a number of architectural movements and projects that can be viewed explicitly as scien...

  18. 3D-models in landscape architecture

    NARCIS (Netherlands)

    Nijhuis, S.; Stellingwerff, M.C.

    2011-01-01

    Landscape architecture consists of a basic attitude that involves four principles of study and practice. These are: anamnesis (palimpsest), process, three-dimensional space and scale-continuum (relational context). The core of landscape architecture as a design discipline is the construction and

  19. Architecture of the Product State Model Environment

    DEFF Research Database (Denmark)

    Holm Larsen, Michael; Lynggaard, Hans Jørgen B.

    2003-01-01

    on the development activities of the PSM architecture. An example discusses how to handle product related information on the shop floor in a manufacturing company and focuses on how dynamically updated product data can improve control of production activities. This prototype example of welding a joint between two steel...... plates serves as proof of concept for the PSM architecture....

  20. Lightweight and Continuous Architectural Software Quality Assurance using the aSQA Technique

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius; Lindstrøm, Bo

    2010-01-01

    In this paper, we present a novel technique for assessing and prioritizing architectural quality in large-scale software development projects. The technique can be applied with relatively little effort by software architects and thus suited for agile development in which quality attributes can...... be assessed and prioritized, e.g., within each development sprint. We outline the processes and metrics embodied in the technique, and report initial experiences on the benefits and liabilities. In conclusion, the technique is considered valuable and a viable tool, and has benefits in an architectural...

  1. Unifying approach for model transformations in the MOF metamodeling architecture

    NARCIS (Netherlands)

    Ivanov, Ivan; van den Berg, Klaas

    2004-01-01

    In the Meta Object Facility (MOF) metamodeling architecture a number of model transformation scenarios can be identified. It could be expected that a metamodeling architecture will be accompanied by a transformation technology supporting the model transformation scenarios in a uniform way. Despite

  2. Defining Generic Architecture for Cloud Infrastructure as a Service model

    NARCIS (Netherlands)

    Demchenko, Y.; de Laat, C.

    2011-01-01

    Infrastructure as a Service (IaaS) is one of the provisioning models for Clouds as defined in the NIST Clouds definition. Although widely used, current IaaS implementations and solutions do not have a common and well-defined architecture model. The paper attempts to define a generic architecture for

  3. Defining generic architecture for Cloud IaaS provisioning model

    NARCIS (Netherlands)

    Demchenko, Y.; de Laat, C.; Mavrin, A.; Leymann, F.; Ivanov, I.; van Sinderen, M.; Shishkov, B.

    2011-01-01

    Infrastructure as a Service (IaaS) is one of the provisioning models for Clouds as defined in the NIST Clouds definition. Although widely used, current IaaS implementations and solutions do not have a common and well-defined architecture model. The paper attempts to define a generic architecture for

  4. From enterprise architecture to business models and back

    NARCIS (Netherlands)

    Iacob, Maria Eugenia; Meertens, Lucas Onno; Jonkers, H.; Quartel, Dick; Nieuwenhuis, Lambertus Johannes Maria; van Sinderen, Marten J.

    In this study, we argue that important IT change processes affecting an organization’s enterprise architecture are also mirrored by a change in the organization’s business model. An analysis of the business model may establish whether the architecture change has value for the business. Therefore, in

  5. Predicting the academic success of architecture students by pre-enrolment requirement: using machine-learning techniques

    Directory of Open Access Journals (Sweden)

    Ralph Olusola Aluko

    2016-12-01

    Full Text Available In recent years, there has been an increase in the number of applicants seeking admission into architecture programmes. As expected, prior academic performance (also referred to as pre-enrolment requirement) is a major factor considered during the process of selecting applicants. In the present study, machine learning models were used to predict the academic success of architecture students based on information provided by prior academic performance. Two modeling techniques, namely K-nearest neighbour (k-NN) and linear discriminant analysis, were applied in the study. It was found that k-NN outperforms the linear discriminant analysis model in terms of accuracy. In addition, grades obtained in mathematics (at ordinary level examinations) had a significant impact on the academic success of undergraduate architecture students. This paper makes a modest contribution to the ongoing discussion on the relationship between prior academic performance and the academic success of undergraduate students by evaluating this proposition. One of the issues that emerges from these findings is that prior academic performance can be used as a predictor of academic success in undergraduate architecture programmes. Overall, the developed k-NN model can serve as a valuable tool during the process of selecting new intakes into undergraduate architecture programmes in Nigeria.
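    To illustrate the k-NN technique the study applies, a minimal pure-Python classifier is sketched below. The feature vectors (hypothetical pre-enrolment scores in three subjects) and the labels are invented for illustration and are not the study's data:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest
    training points under Euclidean distance. `train` is a list
    of (feature_vector, label) pairs."""
    neighbours = sorted(
        train, key=lambda pair: math.dist(pair[0], query)
    )[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical (maths, physics, English) scores -> outcome label.
train = [
    ((85, 78, 70), "success"),
    ((90, 82, 75), "success"),
    ((45, 50, 55), "at-risk"),
    ((40, 48, 60), "at-risk"),
]
knn_predict(train, (88, 80, 72))  # -> "success"
```

    In practice the study would also standardize features and tune k on held-out data; this sketch shows only the core voting rule.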

  6. Techniques and software architectures for medical visualisation and image processing

    NARCIS (Netherlands)

    Botha, C.P.

    2005-01-01

    This thesis presents a flexible software platform for medical visualisation and image processing, a technique for the segmentation of the shoulder skeleton from CT data and three techniques that make contributions to the field of direct volume rendering. Our primary goal was to investigate the use

  7. Architecture design in global and model-centric software development

    NARCIS (Netherlands)

    Heijstek, Werner

    2012-01-01

    This doctoral dissertation describes a series of empirical investigations into representation, dissemination and coordination of software architecture design in the context of global software development. A particular focus is placed on model-centric and model-driven software development.

  8. Automated Improvement of Software Architecture Models for Performance and Other Quality Attributes

    OpenAIRE

    Koziolek, Anne

    2013-01-01

    Quality attributes, such as performance or reliability, are crucial for the success of a software system and largely influenced by the software architecture. Their quantitative prediction supports systematic, goal-oriented software design and forms a base of an engineering approach to software design. This thesis proposes a method and tool to automatically improve component-based software architecture (CBA) models based on such quantitative quality prediction techniques.

  9. Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique

    Science.gov (United States)

    Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi

    Reducing power dissipation is a major challenge in applying LDPC code decoders to practical digital communication systems. In this paper, we propose a low power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) an intermediate message compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques enables the decoder to reduce power dissipation while maintaining decoding throughput. Simulation results show that the proposed architecture improves power efficiency by up to 52% and 18% compared to decoders based on the overlapped schedule and the rapid convergence schedule without the proposed techniques, respectively.
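    One common way to compress a check node's intermediate messages in min-sum-style LDPC decoders is to store only the two smallest magnitudes, the index of the smallest, and the per-edge signs; whether this matches the paper's exact scheme is an assumption, but it shows why compression cuts memory: n stored magnitudes collapse to two. A Python sketch:

```python
def compress(msgs):
    """Compress a check node's input messages for min-sum decoding:
    keep the smallest magnitude, the second smallest, the index of
    the smallest, and each message's sign bit."""
    mags = [abs(m) for m in msgs]
    i_min = mags.index(min(mags))
    min1 = mags[i_min]
    min2 = min(mags[:i_min] + mags[i_min + 1:])
    sign_bits = [m < 0 for m in msgs]
    return min1, min2, i_min, sign_bits

def outgoing_magnitudes(min1, min2, i_min, n):
    """Reconstruct the n outgoing message magnitudes: the message to
    edge i is the minimum over all OTHER edges, so the edge that
    supplied min1 receives min2 and every other edge receives min1."""
    return [min2 if i == i_min else min1 for i in range(n)]
```

    The sign of each outgoing message (the product of the other edges' signs in full min-sum) is omitted here to keep the sketch short; the memory saving comes from the magnitude side.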

  10. Between architecture and model: Strategies for cognitive control

    NARCIS (Netherlands)

    Taatgen, Niels

    One major limitation of current cognitive architectures is that models are typically constructed in an “empty” architecture, and that the knowledge specifications (typically production rules) are specific to the particular task. This means that general cognitive control strategies have to be

  11. Model-based safety architecture framework for complex systems

    NARCIS (Netherlands)

    Schuitemaker, Katja; Rajabali Nejad, Mohammadreza; Braakhuis, J.G.; Podofillini, Luca; Sudret, Bruno; Stojadinovic, Bozidar; Zio, Enrico; Kröger, Wolfgang

    2015-01-01

    The shift to transparency and rising need of the general public for safety, together with the increasing complexity and interdisciplinarity of modern safety-critical Systems of Systems (SoS) have resulted in a Model-Based Safety Architecture Framework (MBSAF) for capturing and sharing architectural

  12. Architecture-based Model for Preventive and Operative Crisis Management

    National Research Council Canada - National Science Library

    Jungert, Erland; Derefeldt, Gunilla; Hallberg, Jonas; Hallberg, Niklas; Hunstad, Amund; Thuren, Ronny

    2004-01-01

    .... A system that should support activities of this type must not only have a high capacity, with respect to the dataflow, but also have suitable tools for decision support. To overcome these problems, an architecture for preventive and operative crisis management is proposed. The architecture is based on models for command and control, but also for risk analysis.

  13. A Concept Transformation Learning Model for Architectural Design Learning Process

    Science.gov (United States)

    Wu, Yun-Wu; Weng, Kuo-Hua; Young, Li-Ming

    2016-01-01

    Generally, in the foundation course of architectural design, much emphasis is placed on teaching of the basic design skills without focusing on teaching students to apply the basic design concepts in their architectural designs or promoting students' own creativity. Therefore, this study aims to propose a concept transformation learning model to…

  14. Advanced quality prediction model for software architectural knowledge sharing

    NARCIS (Netherlands)

    Liang, Peng; Jansen, Anton; Avgeriou, Paris; Tang, Antony; Xu, Lai

    In the field of software architecture, a paradigm shift is occurring from describing the outcome of architecting process to describing the Architectural Knowledge (AK) created and used during architecting. Many AK models have been defined to represent domain concepts and their relationships, and

  15. Chemical Transport Models on Accelerator Architectures

    Science.gov (United States)

    Linford, J.; Sandu, A.

    2008-12-01

    Heterogeneous multicore chipsets with many layers of polymorphic parallelism are becoming increasingly common in high-performance computing systems. Homogeneous co-processors with many streaming processors also offer unprecedented peak floating-point performance. Effective use of parallelism in these new chipsets is paramount. We present optimization techniques for 3D chemical transport models to take full advantage of emerging Cell Broadband Engine and graphical processing unit (GPU) technology. Our techniques achieve 2.15x the per-node performance of an IBM BlueGene/P on the Cell Broadband Engine, and a strongly-scalable 1.75x the per-node performance of an IBM BlueGene/P on an NVIDIA GeForce 8600.

  16. an architecture-based technique to mobile contact recommendation

    African Journals Online (AJOL)

    user

    Aside being able to store the name of contacts and their phone numbers, there are ... the artificial neural network technique [21], along with ... Recommendation is part of everyday life. This concept ... However, to use RSs some level of intelligence must be ...... [3] Min J.-K. & Cho S.-B. Mobile Human Network Management.

  17. A Model of Trusted Connection Architecture

    Directory of Open Access Journals (Sweden)

    Zhang Xun

    2017-01-01

    Full Text Available Given that the traditional trusted network connection architecture (TNC) has limitations in dynamic network environments and in supporting user behaviour, we extend TCA to propose a trusted connection architecture supporting behaviour measurement (TCA-SBM), and give the structure diagram of the network architecture. By introducing user behaviour measurement elements, TCA-SBM can measure the whole network periodically in the time dimension and refine the measurement of network behaviour in the measurement dimension to achieve fine-grained dynamic trusted measurement. As a result, TCA-SBM enhances TCA’s ability to adapt to dynamic network changes and makes up for the deficiency of the trusted computing framework in network connection.

  18. Mathematical modelling techniques

    CERN Document Server

    Aris, Rutherford

    1995-01-01

    "Engaging, elegantly written." - Applied Mathematical Modelling. Mathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models. The author begins with a discussion of the term "model," followed by clearly presented examples of the different types of mode

  19. Multicore technology architecture, reconfiguration, and modeling

    CERN Document Server

    Qadri, Muhammad Yasir

    2013-01-01

    The saturation of design complexity and clock frequencies for single-core processors has resulted in the emergence of multicore architectures as an alternative design paradigm. Nowadays, multicore/multithreaded computing systems are not only a de-facto standard for high-end applications, they are also gaining popularity in the field of embedded computing. The start of the multicore era has altered the concepts relating to almost all of the areas of computer architecture design, including core design, memory management, thread scheduling, application support, inter-processor communication, debu

  20. Understanding Enterprise Architecture with Topic Modeling

    DEFF Research Database (Denmark)

    Nardello, Marco; Møller, Charles; Gøtze, John

    2018-01-01

    The next 3 years will be more important than the last 50 due to the digital transformation across industries. Enterprise Architecture (EA), the discipline that should lead enterprise responses to disruptive forces, is far from ready to drive the next wave of change. The state of the art...

  1. Control system architecture: The standard and non-standard models

    International Nuclear Information System (INIS)

    Thuot, M.E.; Dalesio, L.R.

    1993-01-01

    Control system architecture development has followed the advances in computer technology through mainframes to minicomputers to micros and workstations. This technology advance and increasingly challenging accelerator data acquisition and automation requirements have driven control system architecture development. In summarizing the progress of control system architecture at the last International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS) B. Kuiper asserted that the system architecture issue was resolved and presented a "standard model". The "standard model" consists of a local area network (Ethernet or FDDI) providing communication between front end microcomputers, connected to the accelerator, and workstations, providing the operator interface and computational support. Although this model represents many present designs, there are exceptions including reflected memory and hierarchical architectures driven by requirements for widely dispersed, large channel count or tightly coupled systems. This paper describes the performance characteristics and features of the "standard model" to determine if the requirements of "non-standard" architectures can be met. Several possible extensions to the "standard model" are suggested including software as well as the hardware architectural features

  2. Boundary representation modelling techniques

    CERN Document Server

    2006-01-01

    Provides the most complete presentation of boundary representation solid modelling yet published. Offers basic reference information for software developers, application developers and users. Includes a historical perspective as well as giving a background for modern research.

  3. Constructing Business Models around Identity : Tensions in Architectural Firms

    NARCIS (Netherlands)

    Bos-De Vos, M.; Volker, L.; Chan, Paul W; Neilson, Christopher J.

    2017-01-01

    Architectural firms experience difficulties to establish healthy and sustainable business models as they have to reconcile the often-competing value systems that they are based upon. Organizational members continuously negotiate professional values and beliefs with the firm's commercial goals,

  4. RF Sub-sampling Receiver Architecture based on Milieu Adapting Techniques

    DEFF Research Database (Denmark)

    Behjou, Nastaran; Larsen, Torben; Jensen, Ole Kiel

    2012-01-01

    A novel sub-sampling based architecture is proposed which has the ability of reducing the problem of image distortion and improving the signal to noise ratio significantly. The technique is based on sensing the environment and adapting the sampling rate of the receiver to the best possible...
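    The sub-sampling principle behind such receivers is that a bandpass signal at f_RF folds down to a low alias frequency when sampled at a rate f_s far below f_RF; choosing f_s so the wanted signal and its images land favourably is what an environment-adapting technique tunes. A minimal sketch of the standard folding formula (illustrative, not taken from the paper):

```python
def alias_frequency(f_rf: float, f_s: float) -> float:
    """Frequency at which a tone at f_rf appears after sampling
    at f_s (standard bandpass sub-sampling folding formula):
    fold f_rf into [0, f_s], then reflect into [0, f_s/2]."""
    f = f_rf % f_s
    return f if f <= f_s / 2 else f_s - f

# A hypothetical 2.44 GHz carrier sampled at 100 MS/s
# folds down to a 40 MHz intermediate frequency.
alias_frequency(2440e6, 100e6)  # -> 40e6
```

    An adaptive receiver would evaluate this folding for both the wanted band and known interferers, then pick the f_s that maximizes their separation.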

  5. Model based design introduction: modeling game controllers to microprocessor architectures

    Science.gov (United States)

    Jungwirth, Patrick; Badawy, Abdel-Hameed

    2017-04-01

    We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. It is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is: to solve a problem - a step at a time. The approach can be compared to a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can be simulated with the real world sensor data. The output from the simulated digital control system can then be compared to the old analog based control system. Model based design can be compared to Agile software development. The Agile software development goal is to develop working software in incremental steps. Progress is measured in completed and tested code units. Progress in model based design is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on the RISC-V.

  6. Control architecture of power systems: Modeling of purpose and function

    DEFF Research Database (Denmark)

    Heussen, Kai; Saleem, Arshad; Lind, Morten

    2009-01-01

    Many new technologies with novel control capabilities have been developed in the context of “smart grid” research. However, often it is not clear how these capabilities should best be integrated in the overall system operation. New operation paradigms change the traditional control architecture...... of power systems and it is necessary to identify requirements and functions. How does new control architecture fit with the old architecture? How can power system functions be specified independent of technology? What is the purpose of control in power systems? In this paper, a method suitable...... for semantically consistent modeling of control architecture is presented. The method, called Multilevel Flow Modeling (MFM), is applied to the case of system balancing. It was found that MFM is capable of capturing implicit control knowledge, which is otherwise difficult to formalize. The method has possible...

  7. Control system architecture: The standard and non-standard models

    International Nuclear Information System (INIS)

    Thuot, M.E.; Dalesio, L.R.

    1993-01-01

    Control system architecture development has followed the advances in computer technology through mainframes to minicomputers to micros and workstations. This technology advance and increasingly challenging accelerator data acquisition and automation requirements have driven control system architecture development. In summarizing the progress of control system architecture at the last International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS) B. Kuiper asserted that the system architecture issue was resolved and presented a "standard model". The "standard model" consists of a local area network (Ethernet or FDDI) providing communication between front end microcomputers, connected to the accelerator, and workstations, providing the operator interface and computational support. Although this model represents many present designs, there are exceptions including reflected memory and hierarchical architectures driven by requirements for widely dispersed, large channel count or tightly coupled systems. This paper describes the performance characteristics and features of the "standard model" to determine if the requirements of "non-standard" architectures can be met. Several possible extensions to the "standard model" are suggested including software as well as the hardware architectural features

  8. Evaluating the Effectiveness of Reference Models in Federating Enterprise Architectures

    Science.gov (United States)

    Wilson, Jeffery A.

    2012-01-01

    Agencies need to collaborate with each other to perform missions, improve mission performance, and find efficiencies. The ability of individual government agencies to collaborate with each other for mission and business success and efficiency is complicated by the different techniques used to describe their Enterprise Architectures (EAs).…

  9. The architecture and prototype implementation of the Model Environment system

    Science.gov (United States)

    Donchyts, G.; Treebushny, D.; Primachenko, A.; Shlyahtun, N.; Zheleznyak, M.

    2007-01-01

    An approach that simplifies software development of model-based decision support systems for environmental management has been introduced. The approach is based on the definition and management of metadata and data related to the computational model, without losing data semantics, and on the proposed methods for integrating new modules into the information system and managing them. An architecture of the integrated modelling system is presented. The proposed architecture has been implemented as a prototype of an integrated modelling system using .NET/Gtk# and is currently being used to re-design the European Decision Support System for Nuclear Emergency Management RODOS (http://www.rodos.fzk.de) using Java/Swing.

  10. New techniques for subdivision modelling

    OpenAIRE

    BEETS, Koen

    2006-01-01

    In this dissertation, several tools and techniques for modelling with subdivision surfaces are presented. Based on the huge amount of theoretical knowledge about subdivision surfaces, we present techniques to facilitate practical 3D modelling which make subdivision surfaces even more useful. Subdivision surfaces have reclaimed attention several years ago after their application in full-featured 3D animation movies, such as Toy Story. Since then and due to their attractive properties an ever i...

  11. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    The analysis of the semantics of programming languages has been attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state of the art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  12. Optimal artificial neural network architecture selection for performance prediction of compact heat exchanger with the EBaLM-OTR technique

    Energy Technology Data Exchange (ETDEWEB)

    Wijayasekara, Dumidu, E-mail: wija2589@vandals.uidaho.edu [Department of Computer Science, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83402 (United States); Manic, Milos [Department of Computer Science, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83402 (United States); Sabharwall, Piyush [Idaho National Laboratory, Idaho Falls, ID (United States); Utgikar, Vivek [Department of Chemical Engineering, University of Idaho, Idaho Falls, ID 83402 (United States)

    2011-07-15

    Highlights: > Performance prediction of PCHE using artificial neural networks. > Evaluating artificial neural network performance for PCHE modeling. > Selection of over-training resilient artificial neural networks. > Artificial neural network architecture selection for modeling problems with small data sets. - Abstract: Artificial Neural Networks (ANN) have been used in the past to predict the performance of printed circuit heat exchangers (PCHE) with satisfactory accuracy. Published literature has typically focused on optimizing the ANN using a training dataset to train the network and a testing dataset to evaluate it. Although this may produce outputs that agree with experimental results, there is a risk of over-training or over-learning the network rather than generalizing it, which should be the ultimate goal. An over-trained network is able to produce good results with the training dataset but fails when new datasets with subtle changes are introduced. In this paper we present the EBaLM-OTR (error back propagation and Levenberg-Marquardt algorithms for over training resilience) technique, which is based on a previously discussed method of selecting neural network architecture that uses a separate validation set to evaluate different network architectures based on mean square error (MSE) and standard deviation of MSE. The method uses k-fold cross validation. Therefore, in order to select the optimal architecture for the problem, the dataset is divided into three parts which are used to train, validate and test each network architecture. Each architecture is then evaluated according to its generalization capability and its ability to conform to the original data. The method proved to be a comprehensive tool in identifying the weaknesses and advantages of different network architectures. The method also highlighted the fact that the architecture with the lowest training error is not always the most generalized and therefore not the optimal. Using the method the testing

  13. Optimal artificial neural network architecture selection for performance prediction of compact heat exchanger with the EBaLM-OTR technique

    International Nuclear Information System (INIS)

    Wijayasekara, Dumidu; Manic, Milos; Sabharwall, Piyush; Utgikar, Vivek

    2011-01-01

    Highlights: → Performance prediction of PCHE using artificial neural networks. → Evaluating artificial neural network performance for PCHE modeling. → Selection of over-training resilient artificial neural networks. → Artificial neural network architecture selection for modeling problems with small data sets. - Abstract: Artificial Neural Networks (ANN) have been used in the past to predict the performance of printed circuit heat exchangers (PCHE) with satisfactory accuracy. Published literature has typically focused on optimizing the ANN using a training dataset to train the network and a testing dataset to evaluate it. Although this may produce outputs that agree with experimental results, there is a risk of over-training or over-learning the network rather than generalizing it, which should be the ultimate goal. An over-trained network is able to produce good results with the training dataset but fails when new datasets with subtle changes are introduced. In this paper we present the EBaLM-OTR (error back propagation and Levenberg-Marquardt algorithms for over training resilience) technique, which is based on a previously discussed method of selecting neural network architecture that uses a separate validation set to evaluate different network architectures based on mean square error (MSE) and standard deviation of MSE. The method uses k-fold cross validation. Therefore, in order to select the optimal architecture for the problem, the dataset is divided into three parts which are used to train, validate and test each network architecture. Each architecture is then evaluated according to its generalization capability and its ability to conform to the original data. The method proved to be a comprehensive tool in identifying the weaknesses and advantages of different network architectures. The method also highlighted the fact that the architecture with the lowest training error is not always the most generalized and therefore not the optimal. Using the method the
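
    The selection procedure this record describes (three-way split, k-fold cross validation, ranking candidate architectures by the mean and spread of validation MSE rather than by training error) can be sketched independently of any neural-network library. In the illustrative Python sketch below, polynomial degree stands in for network architecture; the data, seeds, and thresholds are hypothetical, and this is not the authors' EBaLM-OTR code.

```python
import random
import statistics

def solve(A, b):
    """Gaussian elimination with partial pivoting (for the small normal equations)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit(xs, ys, degree):
    """Least-squares polynomial fit; the degree plays the role of 'architecture'."""
    X = [[x ** j for j in range(degree + 1)] for x in xs]
    XtX = [[sum(r[a] * r[b] for r in X) for b in range(degree + 1)]
           for a in range(degree + 1)]
    Xty = [sum(X[i][a] * ys[i] for i in range(len(xs))) for a in range(degree + 1)]
    return solve(XtX, Xty)

def mse(w, xs, ys):
    return statistics.fmean((sum(c * x ** j for j, c in enumerate(w)) - y) ** 2
                            for x, y in zip(xs, ys))

def kfold_score(xs, ys, degree, k=5):
    """Mean and standard deviation of validation MSE over k folds."""
    idx = list(range(len(xs)))
    random.Random(0).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    errs = []
    for i in range(k):
        val = set(folds[i])
        tr = [j for j in idx if j not in val]
        w = fit([xs[j] for j in tr], [ys[j] for j in tr], degree)
        errs.append(mse(w, [xs[j] for j in folds[i]], [ys[j] for j in folds[i]]))
    return statistics.fmean(errs), statistics.stdev(errs)

rng = random.Random(42)
xs = [rng.uniform(-1, 1) for _ in range(80)]
ys = [1 + 2 * x + 3 * x * x + rng.gauss(0, 0.05) for x in xs]  # noisy quadratic

# Rank candidate "architectures" by validation-set behavior, not training error
scores = {d: kfold_score(xs, ys, d) for d in (1, 2, 3, 4)}
best = min(scores, key=lambda d: scores[d][0] + scores[d][1])
```

    The underfit degree-1 model scores clearly worse on the validation folds even though a larger model could achieve a lower training error, which is the record's central point about over-training resilience.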

  14. Reduced-Complexity Wireless Transceiver Architectures and Techniques for Space-Time Communications

    DEFF Research Database (Denmark)

    Tsakalaki, Elpiniki

    2012-01-01

    The dissertation sheds light on the performance gains of multi-antenna systems when the antenna aspects and the associated signal processing and coding aspects are integrated together in a multidisciplinary approach, addressing a variety of challenging tasks pertaining to the joint design of smart wireless transceivers and communication techniques. These tasks are at the intersection of different scientific disciplines including signal processing, communications, antennas and propagation. Specifically, the thesis deals with reduced-complexity space-time wireless transceiver architectures and associated communication techniques for multi-input multi-output (MIMO) and cognitive radio (CR) systems as well as wireless sensor networks (WSNs). The low-complexity architectures are obtained by equipping the wireless transceiver with passive control ports which require the minimum amount of RF hardware.

  15. An architectural model for software reliability quantification: sources of data

    International Nuclear Information System (INIS)

    Smidts, C.; Sova, D.

    1999-01-01

    Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling and to the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements which captures both functional and nonfunctional requirements, and on a generic classification of functions, attributes and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at a functional level as well as at a system level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term element of the architecture is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next, the mechanisms for incorporating these sources of relevant data into the FASRE model are identified.
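
    The propagation of failure-mode probabilities to the system level via fault trees, as this abstract describes, reduces in the simplest static case (independent basic events) to combining probabilities through AND/OR gates. The sketch below is a generic illustration of that step, not the FASRE model itself; the tree and its probabilities are hypothetical.

```python
def top_event_probability(node):
    """Evaluate a fault tree given as nested tuples, assuming independent events.
    Leaves: ("basic", p); gates: ("AND", *children) or ("OR", *children)."""
    kind = node[0]
    if kind == "basic":
        return node[1]
    probs = [top_event_probability(child) for child in node[1:]]
    if kind == "AND":                 # all children must fail
        out = 1.0
        for p in probs:
            out *= p
        return out
    if kind == "OR":                  # any one child failing suffices
        out = 1.0
        for p in probs:
            out *= 1.0 - p
        return 1.0 - out
    raise ValueError(f"unknown gate {kind!r}")

# Hypothetical architecture element: two redundant channels plus a
# single-point failure mode feeding an OR gate at the system level
tree = ("OR",
        ("AND", ("basic", 0.1), ("basic", 0.2)),  # both redundant channels fail
        ("basic", 0.01))                          # single-point failure mode
system_p = top_event_probability(tree)            # 1 - (1-0.02)*(1-0.01) = 0.0298
```

    In a Bayesian framework like the one the record mentions, the leaf probabilities would themselves be posterior estimates rather than fixed numbers; the gate arithmetic is unchanged.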

  16. Drawing-Based Procedural Modeling of Chinese Architectures.

    Science.gov (United States)

    Fei Hou; Yue Qi; Hong Qin

    2012-01-01

    This paper presents a novel modeling framework to build 3D models of Chinese architectures from elevation drawings. Our algorithm integrates the capability of automatic drawing recognition with powerful procedural modeling to extract production rules from an elevation drawing. First, different from previous symbol-based floor plan recognition, and based on the novel concept of repetitive pattern trees, small horizontal repetitive regions of the elevation drawing are clustered in a bottom-up manner to form architectural components with maximum repetition, which collectively serve as building blocks for 3D model generation. Second, to discover the global architectural structure and its components' interdependencies, the components are structured into a shape tree in a top-down subdivision manner and recognized hierarchically at each level of the shape tree based on Markov Random Fields (MRFs). Third, shape grammar rules can be derived to construct a 3D semantic model and its possible variations with the help of a 3D component repository. The salient contribution lies in the novel integration of procedural modeling with elevation drawings, with a unique application to Chinese architectures.

  17. MISTRAL : A Language for Model Transformations in the MOF Meta-modeling Architecture

    NARCIS (Netherlands)

    Kurtev, Ivan; van den Berg, Klaas; Aßmann, Uwe; Aksit, Mehmet; Rensink, Arend

    2005-01-01

    In the Meta Object Facility (MOF) meta-modeling architecture a number of model transformation scenarios can be identified. It could be expected that a meta-modeling architecture will be accompanied by a transformation technology supporting the model transformation scenarios in a uniform way. Despite

  18. Modeling and Verification of Dependable Electronic Power System Architecture

    Science.gov (United States)

    Yuan, Ling; Fan, Ping; Zhang, Xiao-fang

    The electronic power system can be viewed as a system composed of a set of concurrently interacting subsystems to generate, transmit, and distribute electric power. The complex interaction among sub-systems makes the design of the electronic power system complicated. Furthermore, in order to guarantee the safe generation and distribution of electronic power, fault tolerant mechanisms are incorporated in the system design to satisfy high reliability requirements. As a result, the incorporation makes the design of such a system more complicated. We propose a dependable electronic power system architecture, which can provide a generic framework to guide the development of electronic power systems and ease the development complexity. In order to provide common idioms and patterns to the system designers, we formally model the electronic power system architecture by using the PVS formal language. Based on the PVS model of this system architecture, we formally verify the fault tolerant properties of the system architecture by using the PVS theorem prover, which can guarantee that the system architecture can satisfy high reliability requirements.

  19. Evaluation of a server-client architecture for accelerator modeling and simulation

    International Nuclear Information System (INIS)

    Bowling, B.A.; Akers, W.; Shoaee, H.; Watson, W.; Zeijts, J. van; Witherspoon, S.

    1997-01-01

    Traditional approaches to computational modeling and simulation often utilize a batch method for code execution using file-formatted input/output. This method of code implementation was generally chosen for several factors, including CPU throughput and availability, complexity of the required modeling problem, and presentation of computation results. With the advent of faster computer hardware and advances in networking and software techniques, other program architectures for accelerator modeling have recently been employed. Jefferson Laboratory has implemented a client/server solution for accelerator beam transport modeling utilizing query-based I/O. The goal of this code is to provide modeling information for control system applications and to serve as a computation engine for general modeling tasks, such as machine studies. This paper performs a comparison between the batch execution and server/client architectures, focusing on design and implementation issues, performance, and general utility for accelerator modeling demands.
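
    The contrast the record draws between batch file-formatted I/O and a resident, query-based model server can be sketched schematically. This is an in-process stand-in, not the Jefferson Lab implementation (which serves queries over the network); the element names and optics values below are hypothetical.

```python
import io

# Batch style: run the whole model and dump every result to a file;
# consumers must parse the complete file to extract any single value.
def batch_run(model, out):
    for element, values in model.items():
        for name, value in values.items():
            out.write(f"{element} {name} {value}\n")

# Query style: clients ask the resident model server for exactly what they need.
class ModelServer:
    """In-process stand-in for a network model server with query-based I/O."""
    def __init__(self, model):
        self._model = model

    def query(self, request):
        element, name = request.split("/")
        return self._model[element][name]

# Hypothetical optics data for one beamline element
model = {"MQA1": {"beta_x": 12.5, "beta_y": 4.2}}

out = io.StringIO()
batch_run(model, out)                  # whole-file output, parsed later
server = ModelServer(model)
beta_x = server.query("MQA1/beta_x")   # just the value the client asked for
```

    The query interface is what lets control-system applications use the model interactively instead of re-running a batch job per request.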

  20. Human Spaceflight Architecture Model (HSFAM) Data Dictionary

    Science.gov (United States)

    Shishko, Robert

    2016-01-01

    HSFAM is a data model based on the DoDAF 2.02 data model with some fit-for-purpose extensions. These extensions are designed to permit quantitative analyses regarding stakeholder concerns about technical feasibility, configuration and interface issues, and budgetary and/or economic viability.

  1. RT 24 - Architecture, Modeling & Simulation, and Software Design

    Science.gov (United States)

    2010-11-01

    [Briefing-slide content; recoverable points: focus on tool extensions (UPDM, SysML, SoaML, BPMN); leverage "best of breed" architecture methodologies with tooling to support the methodology; mapping of the DoDAF 2.0 metamodel to the BPMN metamodel; mapping of SysML diagrams (e.g., requirement diagrams) to DoDAF 2.0 models such as OV-2.]

  2. Best of Three Worlds : Towards Sound Architectural Dependability Models

    NARCIS (Netherlands)

    Boudali, Hichem; Haverkort, Boudewijn R.; Kuntz, Matthias; Stoelinga, Mariëlle

    This paper surveys the most prominent formalisms for availability and reliability analysis and discusses the pros and cons of these approaches. Based on our findings, we outline a solution that unites the merits of the existing approaches into a sound architectural dependability model.

  3. A dynamical model for plant cell wall architecture formation.

    NARCIS (Netherlands)

    Mulder, B.M.; Emons, A.M.C.

    2001-01-01

    We discuss a dynamical mathematical model to explain cell wall architecture in plant cells. The highly regular textures observed in cell walls reflect the spatial organisation of the cellulose microfibrils (CMFs), the most important structural component of cell walls. Based on a geometrical theory

  4. Improving Project Management Using Formal Models and Architectures

    Science.gov (United States)

    Kahn, Theodore; Sturken, Ian

    2011-01-01

    This talk discusses the advantages that formal modeling and architecture bring to project management. These emerging technologies have both great potential and challenges for improving the information available for decision-making. The presentation covers standards, tools and cultural issues needing consideration, and includes lessons learned from projects the presenters have worked on.

  5. Unraveling Supply-Driven Business Models of Architectural Firms

    NARCIS (Netherlands)

    Bos-De Vos, M.; Volker, L.; Wamelink, J.W.F.; Kaminsky, Jessica; Zerjav, Vedran

    2016-01-01

    Architectural firms deliver services for various, unique projects that are all characterized by a high level of uncertainty. To successfully propose, create and capture value, they need business models that are able to deal with this variety and uncertainty. So far, little is known about the

  6. Model architecture of intelligent data mining oriented urban transportation information

    Science.gov (United States)

    Yang, Bogang; Tao, Yingchun; Sui, Jianbo; Zhang, Feizhou

    2007-06-01

    Aiming at solving practical problems in urban traffic, the paper presents a model architecture for intelligent data mining from a hierarchical view. With artificial intelligence technologies used in the framework, the intelligent data mining technology is improved and becomes better suited to changing real-time road conditions. It also provides efficient technology support for the distribution, transmission and display of urban transport information.

  7. Rapid architecture alternative modeling (RAAM): A framework for capability-based analysis of system of systems architectures

    Science.gov (United States)

    Iacobucci, Joseph V.

    The research objective for this manuscript is to develop a Rapid Architecture Alternative Modeling (RAAM) methodology to enable traceable Pre-Milestone A decision making during the conceptual phase of design of a system of systems. Rather than following current trends that place an emphasis on adding more analysis, which tends to increase the complexity of the decision making problem, RAAM improves on current methods by reducing both runtime and model creation complexity. RAAM draws upon principles from computer science, system architecting, and domain specific languages to enable the automatic generation and evaluation of architecture alternatives. For example, both mission dependent and mission independent metrics are considered. Mission dependent metrics are determined by the performance of systems accomplishing a task, such as Probability of Success. In contrast, mission independent metrics, such as acquisition cost, are solely determined and influenced by the other systems in the portfolio. RAAM also leverages advances in parallel computing to significantly reduce runtime by defining executable models that are readily amenable to parallelization. This allows the use of cloud computing infrastructures such as Amazon's Elastic Compute Cloud and the PASTEC cluster operated by the Georgia Institute of Technology Research Institute (GTRI). Also, the amount of data that can be generated when fully exploring the design space can quickly exceed the typical capacity of computational resources at the analyst's disposal. To counter this, specific algorithms and techniques are employed. Streaming algorithms and recursive architecture alternative evaluation algorithms are used that reduce computer memory requirements. Lastly, a domain specific language is created to provide a reduction in the computational time of executing the system of systems models. A domain specific language is a small, usually declarative language that offers expressive power focused on a particular

  8. Advanced Atmospheric Ensemble Modeling Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Chiswell, S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kurzeja, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Maze, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Viner, B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Werth, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-09-29

    Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research extends last year's work and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two different release cases were studied, a coastal release (SF6) and an inland release (Freon) which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce the computing resources required for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release, where the spatial and temporal differences due to interior valley heating lead to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement shown in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data is assimilated into the simulation, and enhances SRNL's capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.
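
    The ensemble Kalman filter step the record mentions can be illustrated in its simplest form: a scalar state observed directly, updated with the classic perturbed-observations analysis. This is a toy sketch, not SRNL's implementation; the ensemble size, observation, and error variance are hypothetical.

```python
import random
import statistics

def enkf_update(ensemble, y_obs, obs_var, rng):
    """One EnKF analysis step for a scalar state observed directly (H = 1),
    using the perturbed-observations form: each member is nudged toward a
    noisy copy of the observation by the ensemble-estimated Kalman gain."""
    prior_var = statistics.variance(ensemble)
    gain = prior_var / (prior_var + obs_var)          # scalar Kalman gain
    return [x + gain * ((y_obs + rng.gauss(0.0, obs_var ** 0.5)) - x)
            for x in ensemble]

rng = random.Random(1)
prior = [rng.gauss(0.0, 1.0) for _ in range(50)]      # forecast ensemble
posterior = enkf_update(prior, 2.0, 0.5, rng)         # assimilate observation y = 2
```

    After the update the ensemble mean moves toward the observation and the ensemble spread shrinks, which is exactly the mechanism by which assimilating local tracer and meteorology data tightens the transport forecast.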

  9. Modeling and design techniques for RF power amplifiers

    CERN Document Server

    Raghavan, Arvind; Laskar, Joy

    2008-01-01

    The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back the theoretical considerations. It presents the challenges in designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers, and design more accurate compact device models, with faster extraction routines, to create cost effective and reliable circuits.

  10. Preservation vs Innovation. Sustainable rehabilitation in architectural preservation contexts: knowledge, techniques, languages

    Directory of Open Access Journals (Sweden)

    Riccardo Gulli

    2012-12-01

    The theme of the preservation of the characteristics of protected architectural heritage must nowadays be correlated with new requirements for the adaptation of existing buildings to performance standards. This subject raises new questions about the theoretical assumptions and tools to be adopted to coherently answer that request. Focusing on the theme of the energy requalification of heritage buildings - the primary focus of interest for the reduction of pollution emissions, according to Horizon 2020 objectives - the preservation of the meanings of a work of architecture and of its linguistic, typological and material characteristics proves to be essential for protection interventions on buildings. However, this cannot be considered exhaustive, as the issue raised necessarily requires further consideration within the speculative domain of technique, opening out to the contribution that innovation processes and methods belonging to the field of scientific knowledge can offer.

  11. A cognitive architecture-based model of graph comprehension

    OpenAIRE

    Peebles, David

    2012-01-01

    I present a model of expert comprehension performance for 2 × 2 "interaction" graphs typically used to present data from two-way factorial research designs. Developed using the ACT-R cognitive architecture, the model simulates the cognitive and perceptual operations involved in interpreting interaction graphs and provides a detailed characterisation of the information extracted from the diagram, the prior knowledge required to interpret interaction graphs, and the knowledge generated during t...

  12. System Architecture Modeling for Technology Portfolio Management using ATLAS

    Science.gov (United States)

    Thompson, Robert W.; O'Neil, Daniel A.

    2006-01-01

    Strategic planners and technology portfolio managers have traditionally relied on consensus-based tools, such as Analytical Hierarchy Process (AHP) and Quality Function Deployment (QFD), in planning the funding of technology development. While useful to a certain extent, these tools are limited in their ability to fully quantify the impact of a technology choice on system mass, system reliability, project schedule, and lifecycle cost. The Advanced Technology Lifecycle Analysis System (ATLAS) aims to provide strategic planners a decision support tool for analyzing technology selections within a Space Exploration Architecture (SEA). Using ATLAS, strategic planners can select physics-based system models from a library, configure the systems with technologies and performance parameters, and plan the deployment of a SEA. Key parameters for current and future technologies have been collected from subject-matter experts and other documented sources in the Technology Tool Box (TTB). ATLAS can be used to compare the technical feasibility and economic viability of a set of technology choices for one SEA, and compare it against another set of technology choices or another SEA. System architecture modeling in ATLAS is a multi-step process. First, the modeler defines the system-level requirements. Second, the modeler identifies technologies of interest whose impact on the SEA is to be assessed. Third, the system modeling team creates models of architecture elements (e.g. launch vehicles, in-space transfer vehicles, crew vehicles) if they are not already in the model library. Finally, the architecture modeler develops a script for the ATLAS tool to run, and the results for comparison are generated.
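
    The kind of comparison the record describes (configure architecture elements from a model library with technology parameters, then compare technology sets on system-level figures of merit) can be sketched in miniature. Everything below is hypothetical: the element names, masses, costs, and technology multipliers are illustrative stand-ins, not ATLAS or TTB data.

```python
from dataclasses import dataclass

@dataclass
class Element:
    """An architecture element drawn from a (hypothetical) model library."""
    name: str
    dry_mass_kg: float
    cost_musd: float

# Hypothetical technology sets: per-element (mass multiplier, cost multiplier)
BASELINE = {"lander": (1.0, 1.0), "transfer_stage": (1.0, 1.0)}
ADVANCED = {"lander": (0.85, 1.3), "transfer_stage": (0.9, 1.2)}  # lighter, pricier

def totals(elements, tech):
    """Roll element-level effects of a technology set up to system level."""
    mass = sum(e.dry_mass_kg * tech[e.name][0] for e in elements)
    cost = sum(e.cost_musd * tech[e.name][1] for e in elements)
    return mass, cost

sea = [Element("lander", 12000.0, 800.0), Element("transfer_stage", 20000.0, 500.0)]
base_mass, base_cost = totals(sea, BASELINE)
adv_mass, adv_cost = totals(sea, ADVANCED)
```

    Comparing the two tuples exposes the mass/cost trade the consensus-based tools cannot quantify: the advanced set saves mass at a higher acquisition cost.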

  13. Economic assessment model architecture for AGC/AVLIS selection

    International Nuclear Information System (INIS)

    Hoglund, R.L.

    1984-01-01

    The economic assessment model architecture described provides the flexibility and completeness in economic analysis that the selection between AGC and AVLIS demands. Process models which are technology-specific will provide the first-order responses of process performance and cost to variations in process parameters. The economics models can be used to test the impacts of alternative deployment scenarios for a technology. Enterprise models provide global figures of merit for evaluating the DOE perspective on the uranium enrichment enterprise, and business analysis models compute the financial parameters from the private investor's viewpoint

  14. A modeling process to understand complex system architectures

    Science.gov (United States)

    Robinson, Santiago Balestrini

    2009-12-01

    In recent decades, several tools have been developed by the armed forces, and their contractors, to test the capability of a force. These campaign-level analysis tools, oftentimes characterized as constructive simulations, are generally expensive to create and execute, and at best they are extremely difficult to verify and validate. This central observation, that analysts are relying more and more on constructive simulations to predict the performance of future networks of systems, leads to the two central objectives of this thesis: (1) to enable the quantitative comparison of architectures in terms of their ability to satisfy a capability without resorting to constructive simulations, and (2) when constructive simulations must be created, to quantitatively determine how to spend the modeling effort amongst the different system classes. The first objective led to Hypothesis A, the first main hypothesis, which states that by studying the relationships between the entities that compose an architecture, one can infer how well it will perform a given capability. The method used to test the hypothesis is based on two assumptions: (1) the capability can be defined as a cycle of functions, and (2) it must be possible to estimate the probability that a function-based relationship occurs between any two types of entities. If these two requirements are met, then by creating random functional networks, different architectures can be compared in terms of their ability to satisfy a capability. In order to test this hypothesis, a novel process for creating representative functional networks of large-scale system architectures was developed. The process, named the Digraph Modeling for Architectures (DiMA), was tested by comparing its results to those of complex constructive simulations. Results indicate that if the inputs assigned to DiMA are correct (in the tests they were based on time-averaged data obtained from the ABM), DiMA is able to identify which of any two
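
    The two assumptions behind Hypothesis A (a capability defined as a cycle of functions, and estimable probabilities of function-based relationships between entity types) lend themselves to a small Monte Carlo sketch over random functional networks. This is an illustrative reconstruction of the idea, not the DiMA process; the entity types, counts, and probabilities are hypothetical.

```python
import random

def capability_probability(cycle, edge_prob, counts, trials=2000, seed=0):
    """Monte Carlo estimate that a random functional network supports a
    capability defined as a cycle of function steps between entity types.
    cycle: list of (src_type, dst_type) steps; edge_prob[(a, b)]: probability
    that a given src entity forms the functional link to a given dst entity;
    counts: number of entities of each type in the architecture."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        ok = True
        for a, b in cycle:
            pairs = counts[a] * counts[b]
            # the step is realized if at least one of the possible links occurs
            if not any(rng.random() < edge_prob[(a, b)] for _ in range(pairs)):
                ok = False
                break
        hits += ok
    return hits / trials

# Hypothetical sense-decide-act style cycle over three entity types
counts = {"sensor": 3, "c2": 1, "shooter": 2}
cycle = [("sensor", "c2"), ("c2", "shooter"), ("shooter", "sensor")]
dense = {(a, b): 0.9 for a, b in cycle}    # well-connected architecture
sparse = {(a, b): 0.1 for a, b in cycle}   # poorly connected architecture

p_dense = capability_probability(cycle, dense, counts)
p_sparse = capability_probability(cycle, sparse, counts)
```

    Two candidate architectures can then be compared by these probabilities alone, without running a constructive simulation.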

  15. Reduction of false positives in the detection of architectural distortion in mammograms by using a geometrically constrained phase portrait model

    International Nuclear Information System (INIS)

    Ayres, Fabio J.; Rangayyan, Rangaraj M.

    2007-01-01

    Objective: One of the commonly missed signs of breast cancer is architectural distortion. We have developed techniques for the detection of architectural distortion in mammograms, based on the analysis of oriented texture through the application of Gabor filters and a linear phase portrait model. In this paper, we propose constraining the shape of the general phase portrait model as a means to reduce the false-positive rate in the detection of architectural distortion. Material and methods: The methods were tested with one set of 19 cases of architectural distortion and 41 normal mammograms, and with another set of 37 cases of architectural distortion. Results: Sensitivity rates of 84% with 4.5 false positives per image and 81% with 10 false positives per image were obtained for the two sets of images. Conclusion: The adoption of a constrained phase portrait model with a symmetric matrix and the incorporation of its condition number in the analysis resulted in a reduction in the false-positive rate in the detection of architectural distortion. The proposed techniques, dedicated to the detection and localization of architectural distortion, should lead to efficient detection of early signs of breast cancer. (orig.)
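
    The symmetric-matrix constraint and condition number mentioned in the conclusion can be sketched for a 2x2 linear phase portrait v(x) = A x + b: symmetrize the fitted matrix, classify the fixed point from the (now real) eigenvalues, and report the condition number. This is an illustrative sketch of the linear-algebra step only, not the authors' detection pipeline; how the condition number is thresholded in their analysis is not specified here.

```python
import math

def symmetrize(A):
    """Constrain the fitted 2x2 phase-portrait matrix to be symmetric."""
    off = (A[0][1] + A[1][0]) / 2.0
    return [[A[0][0], off], [off, A[1][1]]]

def eig_sym2x2(S):
    """Eigenvalues of a symmetric 2x2 matrix (always real)."""
    a, b, c = S[0][0], S[0][1], S[1][1]
    d = math.hypot(a - c, 2.0 * b)
    return (a + c + d) / 2.0, (a + c - d) / 2.0

def analyze(A):
    """Return (pattern, condition number) of the symmetrized matrix.
    Same-sign eigenvalues -> node; opposite signs -> saddle."""
    l1, l2 = eig_sym2x2(symmetrize(A))
    pattern = "node" if l1 * l2 > 0 else "saddle"
    cond = max(abs(l1), abs(l2)) / min(abs(l1), abs(l2))
    return pattern, cond
```

    A large condition number flags a nearly degenerate orientation pattern, which is the kind of candidate site the constrained model helps reject as a false positive.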

  16. Reply to "Comments on Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes"

    Directory of Open Access Journals (Sweden)

    Rovini Massimo

    2009-01-01

    This is a reply to the comments by Gunnam et al. "Comments on 'Techniques and architectures for hazard-free semi-parallel decoding of LDPC codes'", EURASIP Journal on Embedded Systems, vol. 2009, Article ID 704174, on our recent work "Techniques and architectures for hazard-free semi-parallel decoding of LDPC codes", EURASIP Journal on Embedded Systems, vol. 2009, Article ID 723465.

  17. Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes

    Directory of Open Access Journals (Sweden)

    Rovini Massimo

    2009-01-01

    The layered decoding algorithm has recently been proposed as an efficient means for the decoding of low-density parity-check (LDPC) codes, thanks to the remarkable improvement (2x) in the convergence speed of the decoding process. However, pipelined semi-parallel decoders suffer from violations or "hazards" between consecutive updates, which not only violate the layered principle but also enforce the loops in the code, thus spoiling the error-correction performance. This paper describes three different techniques to properly reschedule the decoding updates, based on the careful insertion of "idle" cycles, to prevent the hazards of the pipeline mechanism. Also, different semi-parallel architectures of a layered LDPC decoder suitable for use with such techniques are analyzed. Then, taking the LDPC codes for the wireless local area network (IEEE 802.11n) as a case study, a detailed analysis of the performance attained with the proposed techniques and architectures is reported, and results of the logic synthesis on a 65 nm low-power CMOS technology are shown.
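
    The core idea of inserting idle cycles to break pipeline hazards between consecutive layer updates can be sketched as a small scheduling routine: stall whenever the next layer would read a variable-node column whose update has not yet retired from the pipeline. This is an illustrative reconstruction of the general principle, not the paper's exact rescheduling techniques; the layer contents and latency are hypothetical.

```python
def schedule_with_idles(layers, latency):
    """Insert idle cycles so that no layer reads a variable-node column still
    in flight in the pipeline. layers: list of sets of column indices each
    layer updates; latency: pipeline depth in cycles (>= 1).
    Returns the schedule, with None marking an idle cycle."""
    assert latency >= 1
    schedule = []
    for cols in layers:
        # hazard: one of this layer's columns was written by an update that
        # has not yet retired from the last `latency` pipeline slots
        while any(prev is not None and cols & prev
                  for prev in schedule[-latency:]):
            schedule.append(None)      # idle cycle: wait for the update
        schedule.append(cols)
    return schedule

# Hypothetical code: layers 0 and 1 share column 1, so one idle is needed
plan = schedule_with_idles([{0, 1}, {1, 2}, {3}], latency=1)
# plan == [{0, 1}, None, {1, 2}, {3}]
```

    Minimizing the number of inserted idles (for example by reordering layers so consecutive layers share few columns) is what the paper's rescheduling techniques optimize.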

  18. Subadventitial techniques for chronic total occlusion percutaneous coronary intervention: The concept of "vessel architecture".

    Science.gov (United States)

    Azzalini, Lorenzo; Carlino, Mauro; Brilakis, Emmanouil S; Vo, Minh; Rinfret, Stéphane; Uretsky, Barry F; Karmpaliotis, Dimitri; Colombo, Antonio

    2018-03-01

    Despite improvements in guidewire technologies, the traditional antegrade wire escalation approach to chronic total occlusion (CTO) recanalization is successful in only 60-80% of selected cases. In particular, long, calcified, and tortuous occlusions are less successfully approached with a true-to-true lumen approach. Frequently, the guidewire tracks into the subadventitial space, with no guarantee of distal re-entry into the true lumen. The ability to manage the subadventitial space has been a key step in the tremendous improvement in success rates of contemporary CTO percutaneous coronary intervention (PCI), whether operating antegradely or retrogradely. A modern approach to CTO PCI involves understanding the concept of "vessel architecture," which is based on the distinction between coronary structures (occlusive plaque, comprising the disrupted intima and media, and the outer adventitia) and extravascular space. The vessel architecture represents a safe work environment for guidewire and device manipulation. This review provides an anatomy-based description of the concept of vessel architecture, along with a historical perspective of subadventitial techniques for CTO PCI, and outcome data of CTO PCI utilizing the subadventitial space. © 2017 Wiley Periodicals, Inc.

  19. Analyzing dynamic fault trees derived from model-based system architectures

    International Nuclear Information System (INIS)

    Dehlinger, Josh; Dugan, Joanne Bechta

    2008-01-01

    Dependability-critical systems, such as digital instrumentation and control systems in nuclear power plants, necessitate engineering techniques and tools to provide assurances of their safety and reliability. Determining system reliability at the architectural design phase is important since it may guide design decisions and provide crucial information for trade-off analysis and estimating system cost. Despite this, reliability and system engineering remain separate disciplines, with engineering processes in which the dependability analysis results may not represent the designed system. In this article we provide an overview and application of our approach to build architecture-based, dynamic system models for dependability-critical systems and then automatically generate Dynamic Fault Trees (DFT) for comprehensive, tool-supported reliability analysis. Specifically, we use the Architectural Analysis and Design Language (AADL) to model the structural, behavioral and failure aspects of the system in a composite architecture model. From the AADL model, we seek to derive the DFT(s) and use Galileo's automated reliability analyses to estimate system reliability. This approach alleviates the gap in expertise between dependability engineering and systems engineering, integrates the dependability and system engineering design and development processes, and enables a more formal, automated and consistent DFT construction. We illustrate this work using an example based on a dynamic digital feed-water control system for a nuclear reactor.

  20. Architecture and Programming Models for High Performance Intensive Computation

    Science.gov (United States)

    2016-06-29

    commands from the data processing center to the sensors is needed. It has been noted that the ubiquity of mobile communication devices offers the...commands from a Processing Facility by way of mobile Relay Stations. The activity of each component of this model other than the Merge module can be...evaluation of the initial system implementation. Gao also was in charge of the development of Fresh Breeze architecture backend on new many-core computers

  1. A Coupled Simulation Architecture for Agent-Based/Geohydrological Modelling

    Science.gov (United States)

    Jaxa-Rozen, M.

    2016-12-01

    The quantitative modelling of social-ecological systems can provide useful insights into the interplay between social and environmental processes, and their impact on emergent system dynamics. However, such models should acknowledge the complexity and uncertainty of both of the underlying subsystems. For instance, the agent-based models which are increasingly popular for groundwater management studies can be made more useful by directly accounting for the hydrological processes which drive environmental outcomes. Conversely, conventional environmental models can benefit from an agent-based depiction of the feedbacks and heuristics which influence the decisions of groundwater users. From this perspective, this work describes a Python-based software architecture which couples the popular NetLogo agent-based platform with the MODFLOW/SEAWAT geohydrological modelling environment. This approach enables users to implement agent-based models in NetLogo's user-friendly platform, while benefiting from the full capabilities of MODFLOW/SEAWAT packages or reusing existing geohydrological models. The software architecture is based on the pyNetLogo connector, which provides an interface between the NetLogo agent-based modelling software and the Python programming language. This functionality is then extended and combined with Python's object-oriented features, to design a simulation architecture which couples NetLogo with MODFLOW/SEAWAT through the FloPy library (Bakker et al., 2016). The Python programming language also provides access to a range of external packages which can be used for testing and analysing the coupled models, which is illustrated for an application of Aquifer Thermal Energy Storage (ATES).
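    The coupling pattern described above (agent decisions feed pumping stresses to the groundwater model, and simulated heads feed back to the agents each stress period) can be sketched with stand-in classes; in the real architecture the agent side would be driven through pyNetLogo and the groundwater side through FloPy. All class names and the toy dynamics below are ours.

```python
class AgentModel:
    """Stand-in for the NetLogo side (pyNetLogo would drive the real model)."""
    def decide_pumping(self, heads):
        # Toy heuristic: agents pump less as the water table drops
        return [min(1.0, max(0.0, h / 10.0)) for h in heads]

class GroundwaterModel:
    """Stand-in for MODFLOW/SEAWAT (FloPy would drive the real model)."""
    def __init__(self, heads):
        self.heads = list(heads)
    def step(self, pumping):
        # Drawdown proportional to pumping, plus slight natural recharge
        self.heads = [h - 0.5 * p + 0.1 for h, p in zip(self.heads, pumping)]

def couple(agents, gw, n_steps):
    """One feedback loop per stress period: heads -> decisions -> drawdown."""
    for _ in range(n_steps):
        pumping = agents.decide_pumping(gw.heads)
        gw.step(pumping)
    return gw.heads
```

    In the actual architecture, `decide_pumping` would wrap NetLogo commands and reporters, and `step` would write stress-period data and re-run the MODFLOW/SEAWAT model via FloPy.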

  2. The Architectural Designs of a Nanoscale Computing Model

    Directory of Open Access Journals (Sweden)

    Mary M. Eshaghian-Wilner

    2004-08-01

    Full Text Available A generic nanoscale computing model is presented in this paper. The model consists of a collection of fully interconnected nanoscale computing modules, where each module is a cube of cells made out of quantum dots, spins, or molecules. The cells dynamically switch between two states by quantum interactions among their neighbors in all three dimensions. This paper includes a brief introduction to the field of nanotechnology from a computing point of view and presents a set of preliminary architectural designs for fabricating the nanoscale model studied.
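    The module described above (a cube of two-state cells switching by interactions with their neighbors in all three dimensions) can be caricatured as a classical 3D cellular automaton with a majority rule. This is only an illustration of the state-update structure, not of the quantum interactions in the model.

```python
import itertools

def step(cube):
    """One synchronous update of an n*n*n cube of two-state cells: each cell
    adopts the majority state of its (up to six) face neighbors, keeping its
    own state on a tie. Classical toy stand-in for the nanoscale module."""
    n = len(cube)
    new = [[[0] * n for _ in range(n)] for _ in range(n)]
    for x, y, z in itertools.product(range(n), repeat=3):
        neigh = []
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nx, ny, nz = x + dx, y + dy, z + dz
            if 0 <= nx < n and 0 <= ny < n and 0 <= nz < n:
                neigh.append(cube[nx][ny][nz])
        ones = sum(neigh)
        if ones * 2 > len(neigh):
            new[x][y][z] = 1
        elif ones * 2 < len(neigh):
            new[x][y][z] = 0
        else:
            new[x][y][z] = cube[x][y][z]  # tie: keep current state
    return new
```

    Uniform configurations are fixed points of this rule, which makes it easy to sanity-check the neighbor bookkeeping.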

  3. Effects of various event building techniques on data acquisition system architectures

    International Nuclear Information System (INIS)

    Barsotti, E.; Booth, A.; Bowden, M.

    1990-04-01

    The preliminary specifications for various new detectors throughout the world including those at the Superconducting Super Collider (SSC) already make it clear that existing event building techniques will be inadequate for the high trigger and data rates anticipated for these detectors. In the world of high-energy physics many approaches have been taken to solving the problem of reading out data from a whole detector and presenting a complete event to the physicist, while simultaneously keeping deadtime to a minimum. This paper includes a review of multiprocessor and telecommunications interconnection networks and how these networks relate to event building in general, illustrating advantages of the various approaches. It presents a more detailed study of recent research into new event building techniques which incorporate much greater parallelism to better accommodate high data rates. The future in areas such as front-end electronics architectures, high speed data links, event building and online processor arrays is also examined. Finally, details of a scalable parallel data acquisition system architecture being developed at Fermilab are given. 35 refs., 31 figs., 1 tab
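    A minimal illustration of the event-building problem discussed above: fragments of the same event arrive from many front-end sources and must be assembled into complete events. The sketch below is ours; it drains the sources one by one for clarity and ignores the switching-network parallelism that the paper surveys.

```python
def build_events(fragment_streams):
    """Assemble complete events from per-source fragment streams.

    Each stream yields (event_id, fragment) pairs from one detector source;
    an event is complete once a fragment from every source has arrived.
    """
    n_sources = len(fragment_streams)
    partial = {}   # event_id -> fragments collected so far
    complete = []  # (event_id, [fragments]) in completion order
    for source in fragment_streams:
        for event_id, fragment in source:
            parts = partial.setdefault(event_id, [])
            parts.append(fragment)
            if len(parts) == n_sources:
                complete.append((event_id, parts))
                del partial[event_id]
    return complete
```

    A real high-rate design would interleave the sources over a parallel interconnect rather than iterating them sequentially, but the assembly bookkeeping is the same.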

  4. Simple and cost-effective fabrication of size-tunable zinc oxide architectures by multiple size reduction technique

    Directory of Open Access Journals (Sweden)

    Hyeong-Ho Park, Xin Zhang, Seon-Yong Hwang, Sang Hyun Jung, Semin Kang, Hyun-Beom Shin, Ho Kwan Kang, Hyung-Ho Park, Ross H Hill and Chul Ki Ko

    2012-01-01

    Full Text Available We present a simple size reduction technique for fabricating 400 nm zinc oxide (ZnO) architectures using a silicon master containing only microscale architectures. In this approach, the overall fabrication, from the master to the molds and the final ZnO architectures, features cost-effective UV photolithography, instead of electron beam lithography or deep-UV photolithography. A photosensitive Zn-containing sol–gel precursor was used to imprint architectures by direct UV-assisted nanoimprint lithography (UV-NIL). The resulting Zn-containing architectures were then converted to ZnO architectures with reduced feature sizes by thermal annealing at 400 °C for 1 h. The imprinted and annealed ZnO architectures were also used as new masters for the size reduction technique. ZnO pillars of 400 nm diameter were obtained from a silicon master with pillars of 1000 nm diameter by simply repeating the size reduction technique. The photosensitivity and contrast of the Zn-containing precursor were measured as 6.5 J cm⁻² and 16.5, respectively. Interesting complex ZnO patterns, with both microscale pillars and nanoscale holes, were demonstrated by the combination of dose-controlled UV exposure and a two-step UV-NIL.

  5. Simple and cost-effective fabrication of size-tunable zinc oxide architectures by multiple size reduction technique

    International Nuclear Information System (INIS)

    Park, Hyeong-Ho; Hwang, Seon-Yong; Jung, Sang Hyun; Kang, Semin; Shin, Hyun-Beom; Kang, Ho Kwan; Ko, Chul Ki; Zhang Xin; Hill, Ross H; Park, Hyung-Ho

    2012-01-01

    We present a simple size reduction technique for fabricating 400 nm zinc oxide (ZnO) architectures using a silicon master containing only microscale architectures. In this approach, the overall fabrication, from the master to the molds and the final ZnO architectures, features cost-effective UV photolithography, instead of electron beam lithography or deep-UV photolithography. A photosensitive Zn-containing sol–gel precursor was used to imprint architectures by direct UV-assisted nanoimprint lithography (UV-NIL). The resulting Zn-containing architectures were then converted to ZnO architectures with reduced feature sizes by thermal annealing at 400 °C for 1 h. The imprinted and annealed ZnO architectures were also used as new masters for the size reduction technique. ZnO pillars of 400 nm diameter were obtained from a silicon master with pillars of 1000 nm diameter by simply repeating the size reduction technique. The photosensitivity and contrast of the Zn-containing precursor were measured as 6.5 J cm⁻² and 16.5, respectively. Interesting complex ZnO patterns, with both microscale pillars and nanoscale holes, were demonstrated by the combination of dose-controlled UV exposure and a two-step UV-NIL.

  6. Tactile Architectural Models as Universal ‘Urban Furniture’

    Science.gov (United States)

    Kłopotowska, Agnieszka

    2017-10-01

    Tactile architectural models and maquettes have been built in the external public spaces of Polish cities since the latter half of the 2000s. These objects are designed for the blind, but also for other people - tourists, children, and those who arrive in wheelchairs. The collection currently numbers more than 70 objects, which places Poland in the group of European leaders. Unfortunately, this "furniture" is not always "convenient" and safe for all recipients. Studies, which have been conducted together with Maciej Kłopotowski since 2016 across the country, show a number of serious design and executive mistakes or examples of misuse. The purpose of this article is to draw attention to these issues and to point out how they can be avoided. These objects may then become a fully valuable, universal tool for learning and a great way of studying architecture in an alternative way.

  7. Recreation of architectural structures using procedural modeling based on volumes

    Directory of Open Access Journals (Sweden)

    Santiago Barroso Juan

    2013-11-01

    Full Text Available While the procedural modeling of buildings and other architectural structures has evolved very significantly in recent years, there is a noticeable absence of high-level tools that would allow a designer, an artist or a historian to recreate important buildings or architectonic structures of a particular city. In this paper we present a tool for creating buildings in a simple and clear way, following rules that use the language and methodology of the buildings' own creation, and hiding from the user the algorithmic details of the creation of the model.

  8. Hypermedia Genes An Evolutionary Perspective on Concepts, Models, and Architectures

    CERN Document Server

    Guimarães, Nuno

    2009-01-01

    The design space of information services evolved from seminal works through a set of prototypical hypermedia systems and matured in open and widely accessible web-based systems. The original concepts of hypermedia systems are now expressed in different forms and shapes. The first works on hypertext invented the term itself, laid out the foundational concept of association or link, and highlighted navigation as the core paradigm for the future information systems. The first engineered systems demonstrated architectural requirements and models and fostered the emergence of the conceptual model r

  9. Building information modeling in the architectural design phases

    DEFF Research Database (Denmark)

    Hermund, Anders

    2009-01-01

    The overall economical benefits of Building Information Modeling are generally comprehensible, but are there other problems with the implementation of BIM as a formalized system in a field that ultimately is dependent on a creative input? Are optimization and economic benefit really contributing to an architectural quality? In Denmark the implementation of the digital working methods related to BIM has been introduced by government law in 2007. Will the important role of the architect as designer change in accordance with these new methods, and does the idea of one big integrated model represent a paradox in relation to designing? The BIM mindset requires changes on many levels.

  10. SpaceWire model development technology for satellite architecture.

    Energy Technology Data Exchange (ETDEWEB)

    Eldridge, John M.; Leemaster, Jacob Edward; Van Leeuwen, Brian P.

    2011-09-01

    Packet switched data communications networks that use distributed processing architectures have the potential to simplify the design and development of new, increasingly more sophisticated satellite payloads. In addition, the use of reconfigurable logic may reduce the amount of redundant hardware required in space-based applications without sacrificing reliability. These concepts were studied using software modeling and simulation, and the results are presented in this report. Models of the commercially available, packet switched data interconnect SpaceWire protocol were developed and used to create network simulations of data networks containing reconfigurable logic with traffic flows for timing system distribution.

  11. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    Many business process modelling techniques are in use today. This article reports research on the differences between them: for each technique, the definition and the structure are explained. The paper presents a comparative analysis of some popular business process modelling techniques, based on two criteria: the notation, and how the technique works when applied to Somerleyton Animal Park. The treatment of each technique ends with its advantages and disadvantages, and the final conclusion recommends the business process modelling techniques that are easy to use and that can serve as a basis for evaluating further modelling techniques.

  12. Reservoir architecture modeling: Nonstationary models for quantitative geological characterization. Final report, April 30, 1998

    Energy Technology Data Exchange (ETDEWEB)

    Kerr, D.; Epili, D.; Kelkar, M.; Redner, R.; Reynolds, A.

    1998-12-01

    The study was comprised of four investigations: facies architecture; seismic modeling and interpretation; Markov random field and Boolean models for geologic modeling of facies distribution; and estimation of geological architecture using the Bayesian/maximum entropy approach. This report discusses results from all four investigations. Investigations were performed using data from the E and F units of the Middle Frio Formation, Stratton Field, one of the major reservoir intervals in the Gulf Coast Basin.

  13. Modular Architecture for Integrated Model-Based Decision Support.

    Science.gov (United States)

    Gaebel, Jan; Schreiber, Erik; Oeser, Alexander; Oeltze-Jafra, Steffen

    2018-01-01

    Model-based decision support systems promise to be a valuable addition to oncological treatments and the implementation of personalized therapies. For the integration and sharing of decision models, the involved systems must be able to communicate with each other. In this paper, we propose a modularized architecture of dedicated systems for the integration of probabilistic decision models into existing hospital environments. These systems interconnect via web services and provide model sharing and processing capabilities for clinical information systems. Along the lines of IHE integration profiles from other disciplines and the meaningful reuse of routinely recorded patient data, our approach aims for the seamless integration of decision models into hospital infrastructure and the physicians' daily work.

  14. Architecture in motion: A model for music composition

    Science.gov (United States)

    Variego, Jorge Elias

    2011-12-01

    Speculations regarding the relationship between music and architecture go back to the very origins of these disciplines. Throughout history, these links have always reaffirmed that music and architecture are analogous art forms that only diverge in their object of study. In the 1st c. BCE Vitruvius conceived Architecture as "one of the most inclusive and universal human activities" where the architect should be educated in all the arts, having a vast knowledge in history, music and philosophy. In the 18th c., the German thinker Johann Wolfgang von Goethe, described Architecture as "frozen music". More recently, in the 20th c., Iannis Xenakis studied the similar structuring principles between Music and Architecture creating his own "models" of musical composition based on mathematical principles and geometric constructions. The goal of this document is to propose a compositional method that will function as a translator between the acoustical properties of a room and music, to facilitate the creation of musical works that will not only happen within an enclosed space but will also intentionally interact with the space. Acoustical measurements of rooms such as reverberation time, frequency response and volume will be measured and systematically organized in correspondence with orchestrational parameters. The musical compositions created after the proposed model are evocative of the spaces on which they are based. They are meant to be performed in any space, not exclusively in the one where the acoustical measurements were obtained. The visual component of architectural design is disregarded; the room is considered a musical instrument, with its particular sound qualities and resonances. Compositions using the proposed model will not result as sonified shapes, they will be musical works literally "tuned" to a specific space.
This Architecture in motion is an attempt to adopt scientific research to the service of a creative activity and to let the aural properties of

  15. Architecture for Integrated Medical Model Dynamic Probabilistic Risk Assessment

    Science.gov (United States)

    Jaworske, D. A.; Myers, J. G.; Goodenow, D.; Young, M.; Arellano, J. D.

    2016-01-01

    Probabilistic Risk Assessment (PRA) is a modeling tool used to predict potential outcomes of a complex system based on a statistical understanding of many initiating events. Utilizing a Monte Carlo method, thousands of instances of the model are considered and outcomes are collected. PRA is considered static, utilizing probabilities alone to calculate outcomes. Dynamic Probabilistic Risk Assessment (dPRA) is an advanced concept where modeling predicts the outcomes of a complex system based not only on the probabilities of many initiating events, but also on a progression of dependencies brought about by progressing down a time line. Events are placed in a single time line, adding each event to a queue, as managed by a planner. Progression down the time line is guided by rules, as managed by a scheduler. The recently developed Integrated Medical Model (IMM) summarizes astronaut health as governed by the probabilities of medical events and mitigation strategies. Managing the software architecture process provides a systematic means of creating, documenting, and communicating a software design early in the development process. The software architecture process begins with establishing requirements and the design is then derived from the requirements.
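    The planner/scheduler structure described above can be sketched as a toy dynamic PRA loop: sampled events are placed on a single time line, and a simple rule makes each event's outcome depend on the events that preceded it. The rates, the recovery-halving rule, and the function name are illustrative assumptions of ours, not the IMM's actual model.

```python
import random

def run_dpra(event_rates, mitigation, horizon, n_trials, seed=0):
    """Toy dynamic PRA: sample event times, walk one time line, and let each
    event's outcome depend on the events that preceded it.

    event_rates: {name: occurrence rate per unit time}
    mitigation:  {name: baseline probability that the event is recovered}
    An unrecovered event ends the mission; each earlier recovered event
    halves later recovery odds (the dynamic dependency).
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        # Planner: place one sampled occurrence of each event on the time line
        timeline = sorted(
            (rng.expovariate(rate), name) for name, rate in event_rates.items()
        )
        degraded = 0
        for t, name in timeline:
            if t > horizon:
                break  # event falls outside the mission
            # Scheduler rule: prior events degrade the recovery probability
            p_recover = mitigation[name] * (0.5 ** degraded)
            if rng.random() < p_recover:
                degraded += 1  # recovered, but the system is degraded
            else:
                failures += 1  # loss-of-mission outcome
                break
    return failures / n_trials
```

    The Monte Carlo estimate is the fraction of trials ending in a loss-of-mission outcome; a static PRA would instead multiply probabilities without tracking the time line or the degradation state.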

  16. INTEGRATING PHYSIOLOGY AND ARCHITECTURE IN MODELS OF FRUIT EXPANSION

    Directory of Open Access Journals (Sweden)

    Mikolaj Cieslak

    2016-11-01

    Full Text Available Architectural properties of a fruit, such as its shape, vascular patterns, and skin morphology, play a significant role in determining the distributions of water, carbohydrates, and nutrients inside the fruit. Understanding the impact of these properties on fruit quality is difficult because they develop over time and are highly dependent on both genetic and environmental controls. We present a 3D functional-structural fruit model that can be used to investigate effects of the principal architectural properties on fruit quality. We use a three-step modeling pipeline in the OpenAlea platform: (1) creating a 3D volumetric mesh representation of the internal and external fruit structure, (2) generating a complex network of vasculature that is embedded within this mesh, and (3) integrating aspects of the fruit’s function, such as water and dry matter transport, with the fruit’s structure. We restrict our approach to the phase where fruit growth is mostly due to cell expansion and the fruit has already differentiated into different tissue types. We show how fruit shape affects vascular patterns and, as a consequence, the distribution of sugar/water in tomato fruit. Furthermore, we show that strong interaction between tomato fruit shape and vessel density induces, independently of size, an important and contrasted gradient of water supply from the pedicel to the blossom end of the fruit. We also demonstrate how skin morphology related to microcracking distribution affects the distribution of water and sugars inside nectarine fruit. Our results show that such a generic model permits detailed studies of various, unexplored architectural features affecting fruit quality development.

  17. Building Structure Design as an Integral Part of Architecture: A Teaching Model for Students of Architecture

    Science.gov (United States)

    Unay, Ali Ihsan; Ozmen, Cengiz

    2006-01-01

    This paper explores the place of structural design within undergraduate architectural education. The role and format of lecture-based structure courses within an education system, organized around the architectural design studio is discussed with its most prominent problems and proposed solutions. The fundamental concept of the current teaching…

  18. Methodology of modeling and measuring computer architectures for plasma simulations

    Science.gov (United States)

    Wang, L. P. T.

    1977-01-01

    A brief introduction to plasma simulation using computers and the difficulties on currently available computers is given. Through the use of an analyzing and measuring methodology - SARA, the control flow and data flow of a particle simulation model REM2-1/2D are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential type simulation model, an array/pipeline type simulation model, and a fully parallel simulation model of a code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have implicitly parallel nature.

  19. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    CERN Document Server

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...

  20. An architecture model for multiple disease management information systems.

    Science.gov (United States)

    Chen, Lichin; Yu, Hui-Chu; Li, Hao-Chun; Wang, Yi-Van; Chen, Huang-Jen; Wang, I-Ching; Wang, Chiou-Shiang; Peng, Hui-Yu; Hsu, Yu-Ling; Chen, Chi-Huang; Chuang, Lee-Ming; Lee, Hung-Chang; Chung, Yufang; Lai, Feipei

    2013-04-01

    Disease management is a program which attempts to overcome the fragmentation of the healthcare system and improve the quality of care. Many studies have proven the effectiveness of disease management. However, case managers spend the majority of their time on documentation and on coordinating the members of the care team. They need a tool to support their daily practice and to optimize the inefficient workflow. Several discussions have indicated that information technology plays an important role in the era of disease management. Although applications have been developed, it is inefficient to develop an information system for each disease management program individually. The aim of this research is to support the work of disease management, reform the inefficient workflow, and propose an architecture model that enhances the reusability and saves time in information system development. The proposed architecture model has been successfully implemented in two disease management information systems, and the result was evaluated through reusability analysis, time-consumed analysis, pre- and post-implementation workflow analysis, and a user questionnaire survey. The reusability of the proposed model was high, less than half of the time was consumed, and the workflow was improved. The overall user response is positive: the system's supportiveness during the daily workflow is high, and it empowers the case managers with better information, leading to better decision making.

  1. Architectural slicing

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2013-01-01

    Architectural prototyping is a widely used practice, concerned with taking architectural decisions through experiments with lightweight implementations. However, many architectural decisions are only taken when systems are already (partially) implemented. This is problematic in the context of architectural prototyping since experiments with full systems are complex and expensive and thus architectural learning is hindered. In this paper, we propose a novel technique for harvesting architectural prototypes from existing systems, "architectural slicing", based on dynamic program slicing. Given a system and a slicing criterion, architectural slicing produces an architectural prototype that contains the elements in the architecture that are dependent on the elements in the slicing criterion. Furthermore, we present an initial design and implementation of an architectural slicer for Java.
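    The slicing step can be sketched as backward reachability over a component-dependency graph: the slice is the set of architectural elements on which the slicing criterion depends. This static approximation is ours for illustration; the paper's technique works on dynamic program slices of the running system.

```python
from collections import deque

def architectural_slice(dependencies, criterion):
    """Compute the architectural elements the slicing criterion depends on,
    by backward reachability over a dependency graph.

    dependencies: {element: set of elements it depends on}, e.g. observed
    calls between components; criterion: iterable of elements of interest.
    """
    sliced = set(criterion)
    work = deque(sliced)
    while work:
        elem = work.popleft()
        for dep in dependencies.get(elem, ()):
            if dep not in sliced:
                sliced.add(dep)
                work.append(dep)
    return sliced
```

    Elements not reachable from the criterion (here, a batch component) are excluded from the resulting prototype.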

  2. Modeling the Office of Science ten year facilities plan: The PERI Architecture Tiger Team

    International Nuclear Information System (INIS)

    Supinski, Bronis R de; Gamblin, Todd; Schulz, Martin

    2009-01-01

    The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort optimizing key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.

  3. The Application of Architecture Frameworks to Modelling Exploration Operations Costs

    Science.gov (United States)

    Shishko, Robert

    2006-01-01

    Developments in architectural frameworks and system-of-systems thinking have provided useful constructs for systems engineering. DoDAF concepts, language, and formalisms, in particular, provide a natural way of conceptualizing an operations cost model applicable to NASA's space exploration vision. Not all DoDAF products have meaning or apply to a DoDAF inspired operations cost model, but this paper describes how such DoDAF concepts as nodes, systems, and operational activities relate to the development of a model to estimate exploration operations costs. The paper discusses the specific implementation to the Mission Operations Directorate (MOD) operational functions/activities currently being developed and presents an overview of how this powerful representation can apply to robotic space missions as well.

  4. The caBIG® Life Science Business Architecture Model.

    Science.gov (United States)

    Boyd, Lauren Becnel; Hunicke-Smith, Scott P; Stafford, Grace A; Freund, Elaine T; Ehlman, Michele; Chandran, Uma; Dennis, Robert; Fernandez, Anna T; Goldstein, Stephen; Steffen, David; Tycko, Benjamin; Klemm, Juli D

    2011-05-15

    Business Architecture Models (BAMs) describe what a business does, who performs the activities, where and when activities are performed, how activities are accomplished and which data are present. The purpose of a BAM is to provide a common resource for understanding business functions and requirements and to guide software development. The cancer Biomedical Informatics Grid (caBIG®) Life Science BAM (LS BAM) provides a shared understanding of the vocabulary, goals and processes that are common in the business of LS research. LS BAM 1.1 includes 90 goals and 61 people and groups within Use Case and Activity Unified Modeling Language (UML) Diagrams. Here we report on the model's current release, LS BAM 1.1, its utility and usage, and plans for future use and continuing development for future releases. The LS BAM is freely available as UML, PDF and HTML (https://wiki.nci.nih.gov/x/OFNyAQ).

  5. Architectural design of experience based factory model for software ...

    African Journals Online (AJOL)

    architectural design. Automation features are incorporated in the design in which workflow system and intelligent agents are integrated, and the facilitation of cloud environment is empowered to further support the automation. Keywords: architectural design; knowledge management; experience factory; workflow;

  6. Deformable three-dimensional model architecture for interactive augmented reality in minimally invasive surgery.

    Science.gov (United States)

    Vemuri, Anant S; Wu, Jungle Chi-Hsiang; Liu, Kai-Che; Wu, Hurng-Sheng

    2012-12-01

    Surgical procedures have undergone considerable advancement during the last few decades. More recently, the availability of some imaging methods intraoperatively has added a new dimension to minimally invasive techniques. Augmented reality in surgery has been a topic of intense interest and research. Augmented reality involves usage of computer vision algorithms on video from endoscopic cameras or cameras mounted in the operating room to provide the surgeon additional information that he or she otherwise would have to recognize intuitively. One of the techniques combines a virtual preoperative model of the patient with the endoscope camera using natural or artificial landmarks to provide an augmented reality view in the operating room. The authors' approach is to provide this with the least number of changes to the operating room. Software architecture is presented to provide interactive adjustment in the registration of a three-dimensional (3D) model and endoscope video. Augmented reality including adrenalectomy, ureteropelvic junction obstruction, and retrocaval ureter and pancreas was used to perform 12 surgeries. The general feedback from the surgeons has been very positive not only in terms of deciding the positions for inserting points but also in knowing the least change in anatomy. The approach involves providing a deformable 3D model architecture and its application to the operating room. A 3D model with a deformable structure is needed to show the shape change of soft tissue during the surgery. The software architecture to provide interactive adjustment in registration of the 3D model and endoscope video with adjustability of every 3D model is presented.

  7. ARCHITECTURES AND ALGORITHMS FOR COGNITIVE NETWORKS ENABLED BY QUALITATIVE MODELS

    DEFF Research Database (Denmark)

    Balamuralidhar, P.

    2013-01-01

    traditional limitations and potentially achieving better performance. The vision is that, networks should be able to monitor themselves, reason upon changes in self and environment, act towards the achievement of specific goals and learn from experience. The concept of a Cognitive Engine (CE) supporting...... cognitive functions, as part of network elements, enabling above said autonomic capabilities is gathering attention. Awareness of the self and the world is an important aspect of the cognitive engine to be autonomic. This is achieved through embedding their models in the engine, but the complexity...... of the cognitive engine that incorporates a context space based information structure to its knowledge model. I propose a set of guiding principles behind a cognitive system to be autonomic and use them with additional requirements to build a detailed architecture for the cognitive engine. I define a context space...

  8. 3D model tools for architecture and archaeology reconstruction

    Science.gov (United States)

    Vlad, Ioan; Herban, Ioan Sorin; Stoian, Mircea; Vilceanu, Clara-Beatrice

    2016-06-01

    The main objective of architectural and patrimonial survey is to provide a precise documentation of the status quo of the surveyed objects (monuments, buildings, archaeological object and sites) for preservation and protection, for scientific studies and restoration purposes, for the presentation to the general public. Cultural heritage documentation includes an interdisciplinary approach having as purpose an overall understanding of the object itself and an integration of the information which characterize it. The accuracy and the precision of the model are directly influenced by the quality of the measurements realized on field and by the quality of the software. The software is in the process of continuous development, which brings many improvements. On the other side, compared to aerial photogrammetry, close range photogrammetry and particularly architectural photogrammetry is not limited to vertical photographs with special cameras. The methodology of terrestrial photogrammetry has changed significantly and various photographic acquisitions are widely in use. In this context, the present paper brings forward a comparative study of TLS (Terrestrial Laser Scanner) and digital photogrammetry for 3D modeling. The authors take into account the accuracy of the 3D models obtained, the overall costs involved for each technology and method and the 4th dimension - time. The paper proves its applicability as photogrammetric technologies are nowadays used at a large scale for obtaining the 3D model of cultural heritage objects, efficacious in their assessment and monitoring, thus contributing to historic conservation. Its importance also lies in highlighting the advantages and disadvantages of each method used - very important issue for both the industrial and scientific segment when facing decisions such as in which technology to invest more research and funds.

  9. Architectures drawn / digital models: the Venices (impossible) on line

    Directory of Open Access Journals (Sweden)

    Malvina Borgherini

    2011-12-01

    Full Text Available A contemporary city representation speaks not only of architecture and landscape but also of the effects that political institutions, cultural traditions and economic enterprises have on the urban community. The methods and means traditionally used to present the complexity, and at the same time the personality, of a town, or on a smaller scale of one of its monuments, are not suited to contemporary reality. A very famous panel presented at the Venice Biennale in 1976, Aldo Rossi's «Città Analoga», combining real and ideal architectures, ancient monuments and contemporary landscapes, individual and collective memories, human presences and empty spaces, can be taken as an example for the preparation of a new 'story' or a new 'map' for a city like Venice. The space of a digital model can become a place for discussion and analysis, a place where historical records and never-realized projects can be seen together, and where subjective and objective visions, daily and occasional tracks, can overlap.

  10. Control software architecture and operating modes of the Model M-2 maintenance system

    Energy Technology Data Exchange (ETDEWEB)

    Satterlee, P.E. Jr.; Martin, H.L.; Herndon, J.N.

    1984-04-01

    The Model M-2 maintenance system is the first completely digitally controlled servomanipulator. The M-2 system allows dexterous operations to be performed remotely using bilateral force-reflecting master/slave techniques, and its integrated operator interface takes advantage of touch-screen-driven menus to allow selection of all possible operating modes. The control system hardware for this system has been described previously. This paper describes the architecture of the overall control system. The system's various modes of operation are identified, the software implementation of each is described, system diagnostic routines are described, and highlights of the computer-augmented operator interface are discussed. 3 references, 5 figures.

  11. Control software architecture and operating modes of the Model M-2 maintenance system

    International Nuclear Information System (INIS)

    Satterlee, P.E. Jr.; Martin, H.L.; Herndon, J.N.

    1984-04-01

    The Model M-2 maintenance system is the first completely digitally controlled servomanipulator. The M-2 system allows dexterous operations to be performed remotely using bilateral force-reflecting master/slave techniques, and its integrated operator interface takes advantage of touch-screen-driven menus to allow selection of all possible operating modes. The control system hardware for this system has been described previously. This paper describes the architecture of the overall control system. The system's various modes of operation are identified, the software implementation of each is described, system diagnostic routines are described, and highlights of the computer-augmented operator interface are discussed. 3 references, 5 figures

  12. Java Architecture for Detect and Avoid Extensibility and Modeling

    Science.gov (United States)

    Santiago, Confesor; Mueller, Eric Richard; Johnson, Marcus A.; Abramson, Michael; Snow, James William

    2015-01-01

    Unmanned aircraft will be equipped with a detect-and-avoid (DAA) system that enables them to comply with the requirement to "see and avoid" other aircraft, an important layer in the overall set of procedural, strategic and tactical separation methods designed to prevent mid-air collisions. This paper describes a capability called Java Architecture for Detect and Avoid Extensibility and Modeling (JADEM), developed to prototype and help evaluate various DAA technological requirements by providing a flexible and extensible software platform that models all major detect-and-avoid functions. Figure 1 illustrates JADEM's architecture. The surveillance module can be actual equipment on the unmanned aircraft or simulators that model the process by which sensors on-board detect other aircraft and provide track data to the traffic display. The track evaluation function evaluates each detected aircraft and decides whether to provide an alert to the pilot and its severity. Guidance is a combination of intruder track information, alerting, and avoidance/advisory algorithms behind the tools shown on the traffic display to aid the pilot in determining a maneuver to avoid a loss of well clear. All these functions are designed with a common interface and configurable implementation, which is critical in exploring DAA requirements. To date, JADEM has been utilized in three computer simulations of the National Airspace System, three pilot-in-the-loop experiments using a total of 37 professional UAS pilots, and two flight tests using NASA's Predator-B unmanned aircraft, named Ikhana. The data collected has directly informed the quantitative separation standard for "well clear", safety case, requirements development, and the operational environment for the DAA minimum operational performance standards. This work was performed by the Separation Assurance/Sense and Avoid Interoperability team under NASA's UAS Integration in the NAS project.

  13. Implementation of Model View Controller (Mvc) Architecture on Building Web-based Information System

    OpenAIRE

    'Uyun, Shofwatul; Ma'arif, Muhammad Rifqi

    2010-01-01

    The purpose of this paper is to introduce the use of MVC architecture in web-based information systems development. MVC (Model-View-Controller) architecture is a way to decompose the application into three parts: model, view and controller. It was originally applied to the graphical user interaction model of input, processing and output. With the MVC architecture, applications can be built to be more modular, reusable, and easier to maintain and migrate. We have developed a management system of sch...
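The model-view-controller decomposition described in this abstract can be sketched in a few lines of Python. The classes and the tiny record-keeping example below are purely illustrative and are not taken from the paper's school management system:

```python
# Minimal MVC sketch: the model holds state, the view renders it,
# and the controller mediates between user input and the other two.

class Model:
    """Holds application state and business data."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)


class View:
    """Renders model data; knows nothing about how the data changes."""
    def render(self, items):
        return ", ".join(items) if items else "(empty)"


class Controller:
    """Translates user input into model updates, then asks the view to render."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def handle_input(self, item):
        self.model.add(item)
        return self.view.render(self.model.items)


controller = Controller(Model(), View())
print(controller.handle_input("first record"))   # prints "first record"
```

Because each part has a single responsibility, the view or the storage backend can be swapped without touching the other two classes, which is the modularity and migratability claim made in the abstract.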

  15. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  16. AUTOMATIC TEXTURE MAPPING OF ARCHITECTURAL AND ARCHAEOLOGICAL 3D MODELS

    Directory of Open Access Journals (Sweden)

    T. P. Kersten

    2012-07-01

    Full Text Available Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models by digital photographs with software packages, such as Maxon Cinema 4D, Autodesk 3Ds Max or Maya, still requires a complex and time-consuming workflow. So, procedures for automatic texture mapping of 3D models are in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures by web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage algorithm and developed in the programming language C++. The studies show that the visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results of automatic texture mapping. To overcome the visibility problem the Point Cloud Painter algorithm in combination with the Z-buffer-procedure will be applied in the future.

  17. Automatic Texture Mapping of Architectural and Archaeological 3d Models

    Science.gov (United States)

    Kersten, T. P.; Stallmann, D.

    2012-07-01

    Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models by digital photographs with software packages, such as Maxon Cinema 4D, Autodesk 3Ds Max or Maya, still requires a complex and time-consuming workflow. So, procedures for automatic texture mapping of 3D models are in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures by web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and developed in the programming language C++. The studies show that the visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results of automatic texture mapping. To overcome the visibility problem the Point Cloud Painter algorithm in combination with the Z-buffer-procedure will be applied in the future.

  18. The concept of a model of plastic bodily image in architecture

    Directory of Open Access Journals (Sweden)

    Malakhov Sergey

    2017-01-01

    Full Text Available One of the key problems of architectural mastery is the lack of an acute feeling for the plastic image and bodily self-determination of an architectural object at the initial (and subsequent) stages, when the design model author is trying to see and understand the context where the future object should appear, literally born out of thin air. This sensuous amnesia is caused, among other reasons, by the lack of experience in “sculptural modeling” and “hand molding” and by today’s common practice of an easy transfer to analytical computer design. The loss of intense bodily experience, of the mental connection of one’s own body with an imaginary object, has an effect as well: it deprives the object of its sensuous corporeal nature and transforms it into a mechanistic conglomerate. The article deals with the body concept and the bodily and plastic categories in relation to architectural shaping, and suggests the concept of mediator models linking reality and the designer’s imagination, integrated into a new typology of “models of plastic bodily images” (MPBI). The principal medium of these models and the procedures for their creation are based on synthesis of form, interpretation of the “bodily experience”, tactile contact with model material, sculptural forming techniques and sensory evaluations of subject-environment interaction. The proposed typology of models has been piloted by the author in numerous educational and conceptual projects. The results of the experiments and the developed theoretical principles related to models of plastic bodily images will help achieve better results in the course of basic design and special composition training of architects.

  19. Architectural capability analysis using a model-checking technique

    Directory of Open Access Journals (Sweden)

    Darío José Delgado-Quintero

    2017-01-01

    Full Text Available This paper describes a mathematical approach based on a model-checking technique for analyzing capabilities in enterprise architectures built using the DoDAF and TOGAF architectural frameworks. The basis of this approach is the validation of requirements related to enterprise capabilities by means of operational or business architectural artifacts associated with the dynamic behavior of processes. It is shown how this approach can be used to verify, quantitatively, whether the operational models in an enterprise architecture can satisfy the enterprise capabilities. To this end, a case study concerning a capability-integration problem is used.

  20. Mosque as a Model of Learning Principles of Sustainable Architecture

    Directory of Open Access Journals (Sweden)

    Swambodo Murdariatmo Adi

    2016-06-01

    Full Text Available The mosque is an integral part of the worship rituals of Islam. For Muslims in Indonesia, the mosque's role as a place of worship, religious study and other activities occupies a strategic position: it is not only a religious symbol, but a public building in which the function of the space is emphasized. The use of space in public buildings, as both ritual space and social space, acquires meaning for the community through the adaptation of the space used. Awareness of the importance of effective space utilization and of wise management of water resources in support of ritual, applying the principles of sustainable architecture, will have a positive impact on the community by showing how the Islamic principle of austerity, of not being wasteful, can be applied. This paper discusses the process of continuous learning from the essential understanding of the mosque as a model for implementing the process of life, taking into account the principles of simplicity, functionality and wisdom, especially in the efficient use of local resources. The method used in this research is qualitative and descriptive: the theory is explained on the basis of the literature and accompanied by case studies that have implemented these principles. As an outcome, the application of the principles of sustainable architecture in the planning and use of mosques, as places of relationship with God and with fellow human beings, can be a model for the faithful to deal wisely with the challenge of natural resource constraints, especially for future generations.

  1. Architectural and compiler techniques for energy reduction in high-performance microprocessors

    Science.gov (United States)

    Bellas, Nikolaos

    1999-11-01

    The microprocessor industry has started viewing power, along with area and performance, as a decisive design factor in today's microprocessors. The increasing cost of packaging and cooling systems poses stringent requirements on the maximum allowable power dissipation. Most of the research in recent years has focused on the circuit, gate, and register-transfer (RT) levels of the design. In this research, we focus on the software running on a microprocessor and we view the program as a power consumer. Our work concentrates on the role of the compiler in the construction of "power-efficient" code, and especially its interaction with the hardware so that unnecessary processor activity is saved. We propose techniques that use extra hardware features and compiler-driven code transformations that specifically target activity reduction in certain parts of the CPU which are known to be large power and energy consumers. Design for low power/energy at this level of abstraction entails larger energy gains than in the lower stages of the design hierarchy in which the design team has already made the most important design commitments. The role of the compiler in generating code which exploits the processor organization is also fundamental in energy minimization. Hence, we propose a hardware/software co-design paradigm, and we show what code transformations are necessary by the compiler so that "wasted" power in a modern microprocessor can be trimmed. More specifically, we propose a technique that uses an additional mini cache located between the instruction cache (I-Cache) and the CPU core; the mini cache buffers instructions that are nested within loops and are continuously fetched from the I-Cache. This mechanism can create very substantial energy savings, since the I-Cache unit is one of the main power consumers in most of today's high-performance microprocessors. Results are reported for the SPEC95 benchmarks in the R-4400 processor which implements the MIPS2 instruction
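As a rough illustration of why a small buffer between the I-Cache and the core saves fetch activity, the toy trace-driven sketch below counts how many instruction fetches a tiny FIFO loop buffer would absorb for a tight loop. The buffer size and the address trace are hypothetical, not the paper's SPEC95/R-4400 setup:

```python
# Toy sketch of the proposed "mini cache" (loop buffer) between the
# I-Cache and the CPU core: every hit in the buffer is a fetch that
# never activates the much larger, power-hungry I-Cache.

def icache_fetches_saved(trace, buffer_size=16):
    """Count fetches served by a tiny fully-associative FIFO loop buffer."""
    buffer, order = set(), []
    hits = 0
    for pc in trace:
        if pc in buffer:
            hits += 1                      # served locally; I-Cache stays idle
        else:
            if len(buffer) >= buffer_size:
                buffer.discard(order.pop(0))   # FIFO eviction
            buffer.add(pc)
            order.append(pc)
    return hits

# A tight 8-instruction loop executed 100 times: after the first
# iteration fills the buffer, every later fetch is a hit.
trace = [0x100 + 4 * i for i in range(8)] * 100
print(icache_fetches_saved(trace))   # 792 of the 800 fetches avoid the I-Cache
```

The point of the sketch is only the ratio: for loop-dominated code, almost all instruction fetches can be served by a structure orders of magnitude smaller than the I-Cache, which is where the claimed energy saving comes from.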

  2. Organizational information assets classification model and security architecture methodology

    Directory of Open Access Journals (Sweden)

    Mostafa Tamtaji

    2015-12-01

    Full Text Available Today, organizations are exposed to a huge diversity of information and information assets produced in different systems such as KMS, financial and accounting systems, office and industrial automation systems and so on, and protection of this information is necessary. Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released. The several benefits of this model give organizations a strong incentive to implement cloud computing. Maintaining and managing information security is the main challenge in developing and accepting this model. In this paper, first, following the "design science research methodology" and compatible with the "design process in information systems research", a complete categorization of organizational assets, comprising 355 different types of information assets in 7 groups and 3 levels, is presented so that managers are able to plan corresponding security controls according to the importance of each group. Then, to direct the organization in architecting its information security in a cloud computing environment, an appropriate methodology is presented. The presented cloud computing security architecture, the resulting proposed methodology, and the presented classification model were discussed and verified according to the Delphi method and experts' comments.

  3. Survey on efficient linear solvers for porous media flow models on recent hardware architectures

    International Nuclear Information System (INIS)

    Anciaux-Sedrakian, Ani; Gratien, Jean-Marc; Guignon, Thomas; Gottschling, Peter

    2014-01-01

    In the past few years, High Performance Computing (HPC) technologies have led to General Purpose Processing on Graphics Processing Units (GPGPU) and many-core architectures. These emerging technologies offer massive processing units and are of interest for porous media flow simulators that may be used for CO2 geological sequestration or Enhanced Oil Recovery (EOR) simulation. However, the crucial point is: 'are current algorithms and software able to use these new technologies efficiently?' The solution of large, often ill-conditioned sparse linear systems constitutes the most CPU-consuming part of such simulators. This paper proposes a survey of various solver and preconditioner algorithms and analyzes their efficiency and performance on these distinct architectures. Furthermore, it proposes a novel approach based on a hybrid programming model for both GPU and many-core clusters. The proposed optimization techniques are validated through a Krylov subspace solver, BiCGStab, and preconditioners such as ILU0 on GPU, multi-core and many-core architectures, on various large real study cases in EOR simulation. (authors)
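As a small-scale illustration of the solver/preconditioner pairing named in the abstract, the sketch below runs SciPy's BiCGStab with an incomplete-LU preconditioner on a 1-D Laplacian stand-in for a pressure system. The matrix is illustrative only, nothing like the paper's large EOR cases:

```python
# BiCGStab with an ILU-style preconditioner via SciPy, on a small
# tridiagonal (1-D Laplacian) system standing in for a porous-media
# pressure equation. Problem size and matrix are illustrative only.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=0.0, fill_factor=1)   # ~ILU(0): no extra fill
M = spla.LinearOperator(A.shape, ilu.solve)        # applies M^-1 to a vector

x, info = spla.bicgstab(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))             # info == 0 on convergence
```

For a tridiagonal matrix the incomplete factorization is essentially exact, so the preconditioned iteration converges almost immediately; on the badly conditioned systems the survey targets, the choice of preconditioner dominates both convergence and how well the solver maps onto GPU/many-core hardware.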

  4. Developing a scalable modeling architecture for studying survivability technologies

    Science.gov (United States)

    Mohammad, Syed; Bounker, Paul; Mason, James; Brister, Jason; Shady, Dan; Tucker, David

    2006-05-01

    To facilitate interoperability of models in a scalable environment, and provide a relevant virtual environment in which Survivability technologies can be evaluated, the US Army Research Development and Engineering Command (RDECOM) Modeling Architecture for Technology Research and Experimentation (MATREX) Science and Technology Objective (STO) program has initiated the Survivability Thread, which will seek to address some of the many technical and programmatic challenges associated with the effort. In coordination with different Thread customers, such as the Survivability branches of various Army labs, a collaborative group has been formed to define the requirements for the simulation environment that would in turn provide them a value-added tool for assessing models and gauging system-level performance relevant to Future Combat Systems (FCS) and the Survivability requirements of other burgeoning programs. An initial set of customer requirements has been generated in coordination with the RDECOM Survivability IPT lead, through the Survivability Technology Area at RDECOM Tank-automotive Research Development and Engineering Center (TARDEC, Warren, MI). The results of this project are aimed at a culminating experiment and demonstration scheduled for September 2006, which will include a multitude of components from within RDECOM and provide the framework for future experiments to support Survivability research. This paper details the components with which the MATREX Survivability Thread was created and executed, and provides insight into the capabilities currently demanded by the Survivability faculty within RDECOM.

  5. Simulation Architecture for Modelling Interaction Between User and Elbow-articulated Exoskeleton

    NARCIS (Netherlands)

    Kruif, B.J. de; Schmidhauser, E.; Stadler, K.S.; O'Sullivan, L.W.

    2017-01-01

    The aim of our work is to improve the existing user-exoskeleton models by introducing a simulation architecture that can simulate its dynamic interaction, thereby altering the initial motion of the user. A simulation architecture is developed that uses the musculoskeletal models from OpenSim, and

  6. Real Estate Development by Architectural Firms : Is the Business Model Future-Proof?

    NARCIS (Netherlands)

    Bos-De Vos, M.; Volker, L.; Wamelink, J.W.F.; Chan, P.W.; Neilson, C.J.

    2016-01-01

    Architectural firms need business models that are able to deal with the diversity and uncertainty of their work to run a successful business over time. Little is known about the business models that are used in architectural service delivery and how they enable or constrain firms to create and

  7. Green IT engineering concepts, models, complex systems architectures

    CERN Document Server

    Kondratenko, Yuriy; Kacprzyk, Janusz

    2017-01-01

    This volume provides a comprehensive state of the art overview of a series of advanced trends and concepts that have recently been proposed in the area of green information technologies engineering as well as of design and development methodologies for models and complex systems architectures and their intelligent components. The contributions included in the volume have their roots in the authors’ presentations, and vivid discussions that have followed the presentations, at a series of workshop and seminars held within the international TEMPUS-project GreenCo project in United Kingdom, Italy, Portugal, Sweden and the Ukraine, during 2013-2015 and at the 1st - 5th Workshops on Green and Safe Computing (GreenSCom) held in Russia, Slovakia and the Ukraine. The book presents a systematic exposition of research on principles, models, components and complex systems and a description of industry- and society-oriented aspects of the green IT engineering. A chapter-oriented structure has been adopted for this book ...

  8. Policy improvement by a model-free Dyna architecture.

    Science.gov (United States)

    Hwang, Kao-Shing; Lo, Chia-Yue

    2013-05-01

    The objective of this paper is to accelerate the process of policy improvement in reinforcement learning. The proposed Dyna-style system combines two learning schemes, one of which utilizes a temporal difference method for direct learning; the other uses relative values for indirect learning in planning between two successive direct learning cycles. Instead of establishing a complicated world model, the approach introduces a simple predictor of average rewards to the actor-critic architecture in the simulation (planning) mode. The relative value of a state, defined as the accumulated differences between immediate reward and average reward, is used to steer the improvement process in the right direction. The proposed learning scheme is applied to control a pendulum system for tracking a desired trajectory to demonstrate its adaptability and robustness. Through reinforcement signals from the environment, the system takes the appropriate action to drive an unknown dynamic system to track desired outputs in a few learning cycles. Comparisons are made between the proposed model-free method, a connectionist adaptive heuristic critic, and an advanced method of Dyna-Q learning in experiments of labyrinth exploration. The proposed method outperforms its counterparts in terms of elapsed time and convergence rate.
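For readers unfamiliar with the Dyna idea of interleaving direct learning with planning over remembered experience, here is a minimal tabular Dyna-Q sketch on a made-up five-state corridor. It illustrates the classic baseline the paper compares against, not the paper's model-free average-reward variant:

```python
# Tabular Dyna-Q on a tiny deterministic corridor: states 0..4, actions
# left/right, reward 1 only on reaching state 4. Between real steps, the
# agent "plans" by replaying transitions stored in a learned model.
# Environment, sizes and hyperparameters are made up for illustration.
import random

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)

def step(s, a):
    s2 = min(max(s + a, 0), GOAL)
    return s2, (1.0 if s2 == GOAL else 0.0)

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}                       # (s, a) -> (s', r), learned from experience

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection.
        a = random.choice(ACTIONS) if random.random() < 0.1 else \
            max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Direct RL update (one-step Q-learning).
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        model[(s, a)] = (s2, r)  # record the transition for planning
        # Planning: replay 10 remembered transitions between real steps.
        for _ in range(10):
            (ps, pa), (ps2, pr) = random.choice(list(model.items()))
            Q[(ps, pa)] += 0.5 * (pr + 0.9 * max(Q[(ps2, b)] for b in ACTIONS)
                                  - Q[(ps, pa)])
        s = s2

# After training, the greedy policy should prefer moving right everywhere.
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(GOAL)))
```

The planning loop is what distinguishes Dyna from plain Q-learning: each real transition is reused many times, which is the acceleration the abstract refers to; the paper replaces the stored world model with a simple average-reward predictor.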

  9. Intrinsic coincident full-Stokes polarimeter using stacked organic photovoltaics and architectural comparison of polarimeter techniques

    Science.gov (United States)

    Yang, Ruonan; Sen, Pratik; O'Connor, B. T.; Kudenov, M. W.

    2017-08-01

    An intrinsic coincident full-Stokes polarimeter is demonstrated by using strain-aligned polymer-based organic photovoltaics (OPVs), which can preferentially absorb certain polarization states of incident light. The photovoltaic-based polarimeter is capable of measuring the four Stokes parameters by cascading four semitransparent OPVs in series along the same optical axis. Two wave plates were incorporated into the system to modulate the S3 Stokes parameter so as to reduce the condition number of the measurement matrix. The model for the full-Stokes polarimeter was established and validated, demonstrating an average RMS error of 0.84%. The optimization, based on minimizing the condition number of the 4-cell OPV design, showed that a condition number of 2.4 is possible. Performance of this in-line polarimeter concept was compared to other polarimeter architectures, including Division of Time (DoT), Division of Amplitude (DoAm), Division of Focal Plane (DoFP), and Division of Aperture (DoA), from a signal-to-noise ratio (SNR) perspective. This in-line polarimeter concept has the potential to enable both high temporal (as compared with a DoT polarimeter) and high spatial resolution (as compared with DoFP and DoA polarimeters). We conclude that the intrinsic design has the same √2 SNR advantage as the DoAm polarimeter, but with greater compactness.
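The condition number being minimized here is that of the 4x4 measurement matrix whose rows are the instrument's analyzer vectors. As a point of reference, the sketch below computes it for the textbook optimum, four analyzers forming a regular tetrahedron on the Poincaré sphere, which gives sqrt(3) ≈ 1.73 versus the 2.4 reported for the 4-cell OPV design; the analyzer vectors are standard values, not the paper's:

```python
# Condition number of a full-Stokes measurement matrix. Each row is an
# analyzer vector 0.5 * [1, s_i], with s_i a unit vector on the Poincare
# sphere; a regular tetrahedron of s_i is the classical optimum.
import numpy as np

s = np.array([[ 1,  1,  1],
              [ 1, -1, -1],
              [-1,  1, -1],
              [-1, -1,  1]]) / np.sqrt(3.0)     # tetrahedron vertices, unit length
W = 0.5 * np.hstack([np.ones((4, 1)), s])       # 4x4 measurement matrix

print(round(float(np.linalg.cond(W)), 3))       # sqrt(3) ~ 1.732, the optimum
```

A lower condition number means noise in the four detector readings is amplified less when the Stokes vector is recovered as W^-1 times the measurements, which is why the wave plates are inserted to pull the OPV design's condition number down toward this bound.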

  10. Probabilistic evaluation of process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2016-01-01

    Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to

  11. Model Checking Markov Chains: Techniques and Tools

    NARCIS (Netherlands)

    Zapreev, I.S.

    2008-01-01

    This dissertation deals with four important aspects of model checking Markov chains: the development of efficient model-checking tools, the improvement of model-checking algorithms, the efficiency of the state-space reduction techniques, and the development of simulation-based model-checking

  12. Dynamic information architecture system (DIAS) : multiple model simulation management

    International Nuclear Information System (INIS)

    Simunich, K. L.; Sydelko, P.; Dolph, J.; Christiansen, J.

    2002-01-01

    Dynamic Information Architecture System (DIAS) is a flexible, extensible, object-based framework for developing and maintaining complex multidisciplinary simulations of a wide variety of application contexts. The modeling domain of a specific DIAS-based simulation is determined by (1) software Entity (domain-specific) objects that represent the real-world entities that comprise the problem space (atmosphere, watershed, human), and (2) simulation models and other data processing applications that express the dynamic behaviors of the domain entities. In DIAS, models communicate only with Entity objects, never with each other. Each Entity object has a number of Parameter and Aspect (of behavior) objects associated with it. The Parameter objects contain the state properties of the Entity object. The Aspect objects represent the behaviors of the Entity object and how it interacts with other objects. DIAS extends the "Object" paradigm by abstraction of the object's dynamic behaviors, separating the "WHAT" from the "HOW." DIAS object class definitions contain an abstract description of the various aspects of the object's behavior (the WHAT), but no implementation details (the HOW). Separate DIAS models/applications carry the implementation of object behaviors (the HOW). Any model deemed appropriate, including existing legacy-type models written in other languages, can drive entity object behavior. The DIAS design promotes plug-and-play of alternative models, with minimal recoding of existing applications. The DIAS Context Builder object builds a construct, or scenario, for the simulation, based on developer specification and user inputs. Because DIAS is a discrete event simulation system, there is a Simulation Manager object with which all events are processed. Any class that registers to receive events must implement an event handler (method) to process the event during execution. Event handlers can schedule other events; create or remove Entities from the

  13. Dynamic information architecture system (DIAS) : multiple model simulation management.

    Energy Technology Data Exchange (ETDEWEB)

    Simunich, K. L.; Sydelko, P.; Dolph, J.; Christiansen, J.

    2002-05-13

    Dynamic Information Architecture System (DIAS) is a flexible, extensible, object-based framework for developing and maintaining complex multidisciplinary simulations of a wide variety of application contexts. The modeling domain of a specific DIAS-based simulation is determined by (1) software Entity (domain-specific) objects that represent the real-world entities that comprise the problem space (atmosphere, watershed, human), and (2) simulation models and other data processing applications that express the dynamic behaviors of the domain entities. In DIAS, models communicate only with Entity objects, never with each other. Each Entity object has a number of Parameter and Aspect (of behavior) objects associated with it. The Parameter objects contain the state properties of the Entity object. The Aspect objects represent the behaviors of the Entity object and how it interacts with other objects. DIAS extends the "Object" paradigm by abstraction of the object's dynamic behaviors, separating the "WHAT" from the "HOW." DIAS object class definitions contain an abstract description of the various aspects of the object's behavior (the WHAT), but no implementation details (the HOW). Separate DIAS models/applications carry the implementation of object behaviors (the HOW). Any model deemed appropriate, including existing legacy-type models written in other languages, can drive entity object behavior. The DIAS design promotes plug-and-play of alternative models, with minimal recoding of existing applications. The DIAS Context Builder object builds a construct, or scenario, for the simulation, based on developer specification and user inputs. Because DIAS is a discrete event simulation system, there is a Simulation Manager object with which all events are processed. Any class that registers to receive events must implement an event handler (method) to process the event during execution. Event handlers

  14. Definition of an Object-Oriented Modeling Language for Enterprise Architecture

    OpenAIRE

    Lê, Lam Son; Wegmann, Alain

    2005-01-01

    In enterprise architecture, the goal is to integrate business resources and IT resources in order to improve an enterprise's competitiveness. In an enterprise architecture project, the development team usually constructs a model that represents the enterprise: the enterprise model. In this paper, we present a modeling language for building such enterprise models. Our enterprise models are hierarchical object-oriented representations of the enterprises. This paper presents the foundations of o...

  15. Modelling the International Climate Change Negotiations: A Non-Technical Outline of Model Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Underdal, Arild

    1997-12-31

    This report discusses in non-technical terms the overall architecture of a model that will be designed to enable the user to (1) explore systematically the political feasibility of alternative policy options and (2) to determine the set of politically feasible solutions in the global climate change negotiations. 25 refs., 2 figs., 1 tab.

  16. Visual product architecture modelling for structuring data in a PLM system

    DEFF Research Database (Denmark)

    Bruun, Hans Peter Lomholt; Mortensen, Niels Henrik

    2012-01-01

    The goal of this paper is to determine the role of a product architecture model to support communication and to form the basis for developing and maintaining information of product structures in a PLM system. This paper contains descriptions of a modelling tool to represent a product architecture. Moreover, it is discussed how the sometimes intangible elements and phenomena within an architecture model can be visually modelled in order to form the basis for a data model in a PLM system. © 2012 International Federation for Information Processing.

  17. Mathematical model validation of a thermal architecture system connecting east/west radiators by flight data

    International Nuclear Information System (INIS)

    Torres, Alejandro; Mishkinis, Donatas; Kaya, Tarik

    2014-01-01

    A novel satellite thermal architecture connecting the east and west radiators of a geostationary telecommunication satellite via loop heat pipes (LHPs) is flight tested on board the satellite Hispasat 1E. The LHP operating temperature is regulated by using pressure regulating valves (PRVs). The flight data demonstrated the successful operation of the proposed concept. A transient numerical model specifically developed for the design of this system satisfactorily simulated the flight data. The validated mathematical model can be used to design and analyze the thermal behavior of more complex architectures. - Highlights: •A novel spacecraft thermal control architecture is presented. •The east–west radiators of a GEO communications satellite are connected using LHPs. •A transient mathematical model is validated with flight data. •The space flight data proved successful in-orbit operation of the novel architecture. •The model can be used to design/analyze LHP based complex thermal architectures

  18. Comments on “Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes”

    Directory of Open Access Journals (Sweden)

    Mark B. Yeary

    2009-01-01

    Full Text Available This is a comment article on the publication “Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes” Rovini et al. (2009. We mention that there has been similar work reported in the literature before, and the previous work has not been cited correctly, for example Gunnam et al. (2006, 2007. This brief note serves to clarify these issues.

  19. Reconstruction, modeling, animation and digital fabrication of 'architectures on paper'. Two ideal houses by Carlo Mollino

    Directory of Open Access Journals (Sweden)

    Roberta Spallone

    2015-07-01

    Full Text Available This paper develops some considerations on the issues raised by the reconstruction of 'architectures on paper' of contemporary masters. Generally, archival drawings are patchy and fragmented and refer to different ideative moments and paths of inspiration that lend themselves to numerous different interpretative readings. Moreover, a careful analysis of the author's poetics and of the significance of his work is necessary. Digital methods and techniques of representation, ranging from 3D modeling and video production to digital fabrication, should be carefully selected and adapted to the characteristics identified through the interpretation of the project and what it is intended to communicate. In the case studies of Mollino's 'ideal houses', the capabilities of BIM modeling were tested for these aims.

  20. Comparing Transformation Possibilities of Topological Functioning Model and BPMN in the Context of Model Driven Architecture

    Directory of Open Access Journals (Sweden)

    Solomencevs Artūrs

    2016-05-01

    Full Text Available The approach called “Topological Functioning Model for Software Engineering” (TFM4SE applies the Topological Functioning Model (TFM for modelling the business system in the context of Model Driven Architecture. TFM is a mathematically formal computation independent model (CIM. TFM4SE is compared to an approach that uses BPMN as a CIM. The comparison focuses on CIM modelling and on transformation to UML Sequence diagram on the platform independent (PIM level. The results show the advantages and drawbacks the formalism of TFM brings into the development.

  1. In-Depth Modeling of the UNIX Operating System for Architectural Cyber Security Analysis

    OpenAIRE

    Vernotte, Alexandre; Johnson, Pontus; Ekstedt, Mathias; Lagerström, Robert

    2017-01-01

    ICT systems have become an integral part of business and life. At the same time, these systems have become extremely complex. In such systems exist numerous vulnerabilities waiting to be exploited by potential threat actors. pwnPr3d is a novel modelling approach that performs automated architectural analysis with the objective of measuring the cyber security of the modeled architecture. Its integrated modelling language allows users to model software and hardware components with great level o...

  2. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

    By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines, and assumes an understanding of graduate-level multivariate statistics, including an introduction to SEM.

  3. Selecting a High-Quality Central Model for Sharing Architectural Knowledge

    NARCIS (Netherlands)

    Liang, Peng; Jansen, Anton; Avgeriou, Paris; Zhu, H

    2008-01-01

    In the field of software architecture, there has been a paradigm shift from describing the outcome of the architecting process to documenting Architectural Knowledge (AK), such as design decisions and rationale. To this end, a series of domain models have been proposed for defining the concepts and

  4. Molecular Physiology of Root System Architecture in Model Grasses

    Science.gov (United States)

    Hixson, K.; Ahkami, A. H.; Anderton, C.; Veličković, D.; Myers, G. L.; Chrisler, W.; Lindenmaier, R.; Fang, Y.; Yabusaki, S.; Rosnow, J. J.; Farris, Y.; Khan, N. E.; Bernstein, H. C.; Jansson, C.

    2017-12-01

    Unraveling the molecular and physiological mechanisms involved in responses of Root System Architecture (RSA) to abiotic stresses and shifts in microbiome structure is critical to understand and engineer plant-microbe-soil interactions in the rhizosphere. In this study, accessions of Brachypodium distachyon Bd21 (C3 model grass) and Setaria viridis A10.1 (C4 model grass) were grown in phytotron chambers under current and elevated CO2 levels. Detailed growth stage-based phenotypic analysis revealed different above- and below-ground morphological and physiological responses in C3 and C4 grasses to enhanced CO2 levels. Based on our preliminary results and by screening values of total biomass, water use efficiency, root to shoot ratio, RSA parameters and net assimilation rates, we postulated a three-phase physiological mechanism, i.e. RootPlus, BiomassPlus and YieldPlus phases, for grass growth under elevated CO2 conditions. Moreover, this comprehensive set of morphological and process-based observations are currently in use to develop, test, and calibrate biophysical whole-plant models and in particular to simulate leaf-level photosynthesis at various developmental stages of C3 and C4 using the model BioCro. To further link the observed phenotypic traits at the organismal level to tissue and molecular levels, and to spatially resolve the origin and fate of key metabolites involved in primary carbohydrate metabolism in different root sections, we complement root phenotypic observations with spatial metabolomics data using mass spectrometry imaging (MSI) methods. Focusing on plant-microbe interactions in the rhizosphere, six bacterial strains with plant growth promoting features are currently in use in both gel-based and soil systems to screen root growth and development in Brachypodium. 
Using confocal microscopy, GFP-tagged bacterial systems are utilized to study the initiation of different root types of RSA, including primary root (PR), coleoptile node axile root (CNR

  5. Electromagnetic Vibration Energy Harvesting Devices Architectures, Design, Modeling and Optimization

    CERN Document Server

    Spreemann, Dirk

    2012-01-01

    Electromagnetic vibration transducers are seen as an effective way of harvesting ambient energy for the supply of sensor monitoring systems. Different electromagnetic coupling architectures have been employed but no comprehensive comparison with respect to their output performance has been carried out up to now. Electromagnetic Vibration Energy Harvesting Devices introduces an optimization approach which is applied to determine optimal dimensions of the components (magnet, coil and back iron). Eight different commonly applied coupling architectures are investigated. The results show that correct dimensions are of great significance for maximizing the efficiency of the energy conversion. A comparison yields the architectures with the best output performance capability which should be preferably employed in applications. A prototype development is used to demonstrate how the optimization calculations can be integrated into the design–flow. Electromagnetic Vibration Energy Harvesting Devices targets the design...

  6. Modelling and visualising modular product architectures for mass customisation

    DEFF Research Database (Denmark)

    Mortensen, Niels Henrik; Pedersen, Rasmus; Kvist, Morten

    2008-01-01

    Companies following a mass customisation strategy have to observe two prerequisites for success: they have to fulfil a wide variety of customer needs and demands, and to harvest the benefits from economies of scale within their organisation and supply chain. This leads to the situation that the companies are striving for variety from a commercial perspective and simplicity from a manufacturing one. A conscious structuring of product architectures and/or the use of product platforms can help overcome this challenge. This paper presents a new method for the synthesis and visualisation of product architecture concepts that puts emphasis on variety in markets while also treating the consequences in the manufacturing set-up. The work is based on the assumption that a graphical overview of a given solution space and relations between market demands, product architecture and manufacturing layout can support ...

  7. Verification of Orthogrid Finite Element Modeling Techniques

    Science.gov (United States)

    Steeve, B. E.

    1996-01-01

    The stress analysis of orthogrid structures, specifically with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process but still adequately capture the actual hardware behavior. The accuracy of such 'short cuts' is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam, a shell, and a mixed beam-and-shell element model. Results show that the shell element model performs the best, but that the simpler beam and mixed beam-and-shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.

  8. A Strategy Modelling Technique for Financial Services

    OpenAIRE

    Heinrich, Bernd; Winter, Robert

    2004-01-01

    Strategy planning processes often suffer from a lack of conceptual models that can be used to represent business strategies in a structured and standardized form. If natural language is replaced by an at least semi-formal model, the completeness, consistency, and clarity of strategy descriptions can be drastically improved. A strategy modelling technique is proposed that is based on an analysis of modelling requirements, a discussion of related work and a critical analysis of generic approach...

  9. When Video Games Tell Stories: A Model of Video Game Narrative Architectures

    Directory of Open Access Journals (Sweden)

    Marcello Arnaldo Picucci

    2014-11-01

    Full Text Available In the present study a model is proposed offering a comprehensive categorization of video game narrative structures, intended as the methods and techniques used by game designers, and allowed by the medium, to deliver the story content throughout the gameplay in collaboration with the players. A case is first made for the presence of narrative in video games and its growing importance as a central component in game design. An in-depth analysis ensues focusing on how games tell stories, guided by the criteria of linearity/nonlinearity, interactivity and randomness. Light is shed upon the fundamental architectures through which stories are told, as well as the essential boundaries posed by the close link between narrative and game AI.

  10. Financial architecture and industrial technology: A co-evolutionary model

    NARCIS (Netherlands)

    Negriu, A.

    2013-01-01

    Empirical evidence points to a relation between the financial architecture of an economy and industrial technology: market-based financial systems support the development of industries where innovation is typically radical whereas incremental innovation thrives in association with bank-based

  11. Spectrum emission considerations for baseband-modeled CALLUM architectures

    DEFF Research Database (Denmark)

    Strandberg, Roland; Andreani, Pietro; Sundström, Lars

    2005-01-01

    Linear transmitters based on combined analog locked loop universal modulator (CALLUM) architectures are attractive, as they promise both high efficiency and high linearity. To date, it has not been possible to analyze a CALLUM transmitter as a linear feedback network due to the nonlinear nature o...

  12. Model & scale as conceptual devices in architectural representation

    NARCIS (Netherlands)

    Stellingwerff, M.C.; Koorstra, P.A.

    2013-01-01

    This year we celebrate the tenth anniversary of our Computer Aided Manufacturing laboratory (CAMlab, http://www.camlab-bk.nl). From the start we provide laser cutting, CNC-milling and 3D-print facilities for the students and the researchers at the Faculty of Architecture in Delft. Over the past ten

  13. Enterprise Architecture-Based Risk and Security Modelling and Analysis

    NARCIS (Netherlands)

    Jonkers, Henk; Quartel, Dick; Kordy, Barbara; Ekstedt, Mathias; Seong Kim, Deng

    2016-01-01

    The growing complexity of organizations and the increasing number of sophisticated cyber attacks asks for a systematic and integral approach to Enterprise Risk and Security Management (ERSM). As enterprise architecture offers the necessary integral perspective, including the business and IT aspects

  14. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    Science.gov (United States)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assert their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
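The Roofline correlation mentioned in this abstract reduces to a one-line bound: attainable performance is the minimum of peak compute throughput and memory bandwidth times arithmetic intensity. The sketch below uses illustrative hardware numbers, not those of the SandyBridge, Haswell, or Xeon Phi systems in the paper.

```python
def roofline(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Attainable GFLOP/s for a kernel of given arithmetic intensity
    (flops per byte of DRAM traffic), per the Roofline model."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

# A flux-like kernel at 0.5 flop/byte on a hypothetical 100 GB/s,
# 500 GFLOP/s machine is bandwidth-bound:
perf = roofline(500.0, 100.0, 0.5)   # -> 50.0 GFLOP/s
```

Comparing a kernel's measured GFLOP/s against this bound is how the authors assert whether their SIMD and threading optimisations have reached the efficiency each architecture allows.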

  15. Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures.

    Science.gov (United States)

    Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M

    2016-01-01

    The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein, and its influence on biological function, can help us to discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed by different researchers in order to foresee the three-dimensional arrangement of atoms of proteins from their sequences. However, the computational complexity of this problem makes the search for new models, novel algorithmic strategies and hardware platforms that provide solutions in a reasonable time frame mandatory. In this review we present the past and latest trends regarding protein folding simulations from both perspectives: hardware and software. Of particular interest to us are both the use of inexact solutions to this computationally hard problem as well as which hardware platforms have been used for running these kinds of Soft Computing techniques.

  16. A Review of Das Architekturmodell: Werkzeug, Fetische, Kleine Utopie / The Architectural Model: Tool, Fetish, Small Utopia

    OpenAIRE

    Miller, Wallis

    2013-01-01

    In the summer of 2012 the German Architecture Museum (DAM) in Frankfurt was filled from top to bottom with models. Three hundred of them, give or take a few, clamored for visitors’ attention, asking them to shift their thinking about architecture from buildings to the artifacts of the design process. The models came from museums near – one-third were from the German Architecture Museum’s own collection – and far, including the Museum of Modern Art in New York; they came from private collectio...

  17. Fast tracking ICT infrastructure requirements and design, based on Enterprise Reference Architecture and matching Reference Models

    DEFF Research Database (Denmark)

    Bernus, Peter; Baltrusch, Rob; Vesterager, Johan

    2002-01-01

    The Globemen Consortium has developed the virtual enterprise reference architecture and methodology (VERAM), based on GERAM and developed reference models for virtual enterprise management and joint mission delivery. The planned virtual enterprise capability includes the areas of sales...

  18. A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.

    Science.gov (United States)

    Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.

    1997-03-01

    There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.

  19. Developing Materials Processing to Performance Modeling Capabilities and the Need for Exascale Computing Architectures (and Beyond)

    Energy Technology Data Exchange (ETDEWEB)

    Schraad, Mark William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Physics and Engineering Models; Luscher, Darby Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Advanced Simulation and Computing

    2016-09-06

    Additive Manufacturing techniques are presenting the Department of Energy and the NNSA Laboratories with new opportunities to consider novel component production and repair processes, and to manufacture materials with tailored response and optimized performance characteristics. Additive Manufacturing technologies already are being applied to primary NNSA mission areas, including Nuclear Weapons. These mission areas are adapting to these new manufacturing methods, because of potential advantages, such as smaller manufacturing footprints, reduced needs for specialized tooling, an ability to embed sensing, novel part repair options, an ability to accommodate complex geometries, and lighter weight materials. To realize the full potential of Additive Manufacturing as a game-changing technology for the NNSA’s national security missions; however, significant progress must be made in several key technical areas. In addition to advances in engineering design, process optimization and automation, and accelerated feedstock design and manufacture, significant progress must be made in modeling and simulation. First and foremost, a more mature understanding of the process-structure-property-performance relationships must be developed. Because Additive Manufacturing processes change the nature of a material’s structure below the engineering scale, new models are required to predict materials response across the spectrum of relevant length scales, from the atomistic to the continuum. New diagnostics will be required to characterize materials response across these scales. And not just models, but advanced algorithms, next-generation codes, and advanced computer architectures will be required to complement the associated modeling activities. Based on preliminary work in each of these areas, a strong argument for the need for Exascale computing architectures can be made, if a legitimate predictive capability is to be developed.

  20. Techniques for Modeling Human Performance in Synthetic Environments: A Supplementary Review

    National Research Council Canada - National Science Library

    Ritter, Frank E; Shadbolt, Nigel R; Elliman, David; Young, Richard M; Gobet, Fernand; Baxter, Gordon D

    2003-01-01

    ... architectures including hybrid architectures, and agent and Beliefs, Desires and Intentions (BDI) architectures. A list of projects with high payoff for modeling human performance in synthetic environments is provided as a conclusion.

  1. Model techniques for testing heated concrete structures

    International Nuclear Information System (INIS)

    Stefanou, G.D.

    1983-01-01

    Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)

  2. A self-organized internal models architecture for coding sensory-motor schemes

    Directory of Open Access Journals (Sweden)

    Esaú eEscobar Juárez

    2016-04-01

    Full Text Available Cognitive robotics research draws inspiration from theories and models on cognition, as conceived by neuroscience or cognitive psychology, to investigate biologically plausible computational models in artificial agents. In this field, the theoretical framework of Grounded Cognition provides epistemological and methodological grounds for the computational modeling of cognition. It has been stressed in the literature that simulation, prediction, and multi-modal integration are key aspects of cognition and that computational architectures capable of putting them into play in a biologically plausible way are a necessity. Research in this direction has brought extensive empirical evidence suggesting that Internal Models are suitable mechanisms for sensory-motor integration. However, current Internal Models architectures show several drawbacks, mainly due to the lack of a unified substrate allowing for a true sensory-motor integration space, enabling flexible and scalable ways to model cognition under the embodiment hypothesis constraints. We propose the Self-Organized Internal Models Architecture (SOIMA), a computational cognitive architecture coded by means of a network of self-organized maps, implementing coupled internal models that allow modeling multi-modal sensory-motor schemes. Our approach addresses integrally the issues of current implementations of Internal Models. We discuss the design and features of the architecture, and provide empirical results on a humanoid robot that demonstrate the benefits and potentialities of the SOIMA concept for studying cognition in artificial agents.
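The self-organized maps that this architecture is coded with follow the standard Kohonen update: the best-matching unit and its grid neighbours move toward each input. The minimal sketch below is illustrative of that substrate only; names and parameters are not taken from the SOIMA implementation.

```python
import numpy as np

def som_step(weights, x, lr=0.5, sigma=1.0):
    """One Kohonen update on a grid of prototype vectors.

    weights: array of shape (rows, cols, dim); modified in place.
    Returns the grid index of the best-matching unit (BMU).
    """
    rows, cols, _ = weights.shape
    # find the unit whose prototype is closest to the input
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), (rows, cols))
    # Gaussian neighbourhood function on the 2D grid
    ri, ci = np.indices((rows, cols))
    grid_d2 = (ri - bmu[0]) ** 2 + (ci - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2.0 * sigma ** 2))
    # move every unit toward the input, weighted by neighbourhood
    weights += lr * h[..., None] * (x - weights)
    return bmu

rng = np.random.default_rng(0)
w = rng.random((5, 5, 3))          # 5x5 map of 3-dimensional prototypes
bmu = som_step(w, np.array([0.2, 0.8, 0.5]))
```

Coupling several such maps, so that activity in one map drives updates in another, is the general idea behind the coupled internal models the abstract describes.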

  3. Numerical modeling techniques for flood analysis

    Science.gov (United States)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to determine the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and their estimation in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO-2D models are the most economical and accurate for flood analysis in river and floodplain modeling, respectively. Limitations of FLO-2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness in grids, were found, which can be improved through a 3D model. Therefore, a 3D model was found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open channel flows have been developed recently, but not for floodplains. Hence, it is suggested that a 3D floodplain model should be developed by considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the understanding of the causes and effects of flooding.

  4. Roofline model toolkit: A practical tool for architectural and program analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Yu Jung [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Van Straalen, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ligocki, Terry J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cordery, Matthew J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wright, Nicholas J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hall, Mary W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-04-18

We present preliminary results of the Roofline Toolkit for multicore, manycore, and accelerated architectures. This paper focuses on the processor architecture characterization engine, a collection of portable instrumented microbenchmarks implemented with the Message Passing Interface (MPI) and OpenMP, used to express thread-level parallelism. These benchmarks are specialized to quantify the behavior of different architectural features. Compared to previous work on performance characterization, these microbenchmarks focus on capturing the performance of each level of the memory hierarchy, along with thread-level parallelism, instruction-level parallelism, and explicit SIMD parallelism, measured in the context of the compilers and run-time environments. We also measure sustained PCIe throughput with four GPU memory management mechanisms. By combining results from the architecture characterization with the Roofline model based solely on architectural specifications, this work offers insights for performance prediction of current and future architectures and their software systems. To that end, we instrument three applications and plot their resultant performance on the corresponding Roofline model when run on a Blue Gene/Q architecture.
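The Roofline model itself reduces to a simple bound: attainable performance is the lesser of the compute ceiling and the memory ceiling, the latter being bandwidth times arithmetic intensity. A minimal sketch follows; the machine numbers are made-up illustrations, not measurements from the paper's Blue Gene/Q runs.

```python
def roofline(peak_gflops, peak_bw_gbs, arithmetic_intensity):
    """Attainable GFLOP/s: min(compute ceiling, bandwidth * flops-per-byte)."""
    return min(peak_gflops, peak_bw_gbs * arithmetic_intensity)

# hypothetical machine: 1000 GFLOP/s peak compute, 100 GB/s DRAM bandwidth
PEAK, BW = 1000.0, 100.0

ridge = PEAK / BW                    # intensity where the two ceilings meet
low_ai = roofline(PEAK, BW, 2.0)     # memory-bound kernel
high_ai = roofline(PEAK, BW, 50.0)   # compute-bound kernel
```

A kernel left of the ridge point is limited by memory traffic; right of it, by the functional units — which is exactly the diagnosis the toolkit's measured ceilings enable.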

  5. From digital photography to the 3D model of the historical architecture

    Directory of Open Access Journals (Sweden)

    Sandro Parrinello

    2013-10-01

Full Text Available Structure-from-motion procedures have, in recent years, been refined to the point of constituting a valuable data acquisition system, partly in response to the need for reliable documentation of different architectural contexts. Experiments on integrating this technique with alternative survey methodologies have yielded interesting results concerning methodological protocols for checking the reliability of these photography-based processes for architectural documentation, and they lay the foundations for further developments concerning the knowledge, documentation, and enhancement of historic architecture that such models can generate.

  6. Hierarchical model generation for architecture reconstruction using laser-scanned point clouds

    Science.gov (United States)

    Ning, Xiaojuan; Wang, Yinghui; Zhang, Xiaopeng

    2014-06-01

Architecture reconstruction using a terrestrial laser scanner is a prevalent and challenging research topic. We introduce an automatic, hierarchical architecture generation framework that produces the full geometry of a building based on a novel combination of facade structure detection, detailed window propagation, and hierarchical model consolidation. Our method highlights the automatic generation of geometric models that fit the design information of the architecture from sparse, incomplete, and noisy point clouds. First, the planar regions detected in the raw point clouds are interpreted as three-dimensional clusters. Then, the boundary of each region, extracted by projecting the points onto its corresponding two-dimensional plane, is classified to obtain detailed shape structure elements (e.g., windows and doors). Finally, a polyhedron model is generated by calculating the proposed local structure model, consolidated structure model, and detailed window model. Experiments on modeling scanned real-life buildings demonstrate the advantages of our method: the reconstructed models not only correspond accurately to the architectural design information, but also satisfy the requirements for visualization and analysis.
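The planar-region detection step can be illustrated with a bare-bones RANSAC plane fit. The paper's pipeline is more elaborate; the synthetic point cloud, thresholds, and iteration count below are stand-ins for illustration only.

```python
import random

def plane_from_points(p, q, r):
    """Unit normal (via cross product) and offset of the plane through three points."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:
        return None  # collinear sample
    n = [c / norm for c in n]
    return n, -sum(n[i] * p[i] for i in range(3))

def ransac_plane(points, iters=200, tol=0.05):
    """Return the candidate plane supported by the most inliers."""
    random.seed(2)
    best, best_inliers = None, -1
    for _ in range(iters):
        plane = plane_from_points(*random.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        inliers = sum(1 for p in points
                      if abs(sum(n[i] * p[i] for i in range(3)) + d) < tol)
        if inliers > best_inliers:
            best, best_inliers = plane, inliers
    return best, best_inliers

# synthetic facade: 300 points near the plane z = 0, plus 60 off-plane outliers
random.seed(3)
pts = [[random.random(), random.random(), random.gauss(0, 0.01)] for _ in range(300)]
pts += [[random.random(), random.random(), random.uniform(0.5, 2.0)] for _ in range(60)]
plane, inliers = ransac_plane(pts)
```

The winning plane's normal points (up to sign) along z, recovering the synthetic facade despite the outliers.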

  7. Maya Lime Mortars—Relationship between Archaeomagnetic Dating, Manufacturing Technique, and Architectural Function—The Dzibanché Case

    Directory of Open Access Journals (Sweden)

    Luisa Straulino Mainou

    2016-11-01

Full Text Available Researchers have related the manufacturing technique of plasters and stucco in the Maya area with their period of production, but not with their architectural function. In this paper, we establish a relationship between three features (manufacturing technique, age, and architectural function) in the plasters of the Maya site of Dzibanché in southern Quintana Roo. Dzibanché has abundant remains of stuccos and plasters, found mainly in three buildings (Plaza Pom, Pequeña Acrópolis, and Structure 2). We used thin sections, SEM and XRD, and archaeomagnetic dating. The pictorial layer of Structure 2 was the earliest (AD 274–316) and the stuccoes and plasters of the other two buildings were dated to the Middle Classic (AD 422–531), but we obtained different archaeomagnetic dates for the red pigment layers found in the buildings of the Pequeña Acrópolis and were thus able to determine their chronological order of construction. The raw materials and proportions were carefully chosen to fulfil the mechanical requirements of the architectural function: different proportions were found in the plasters of floors, on the external walls, and inside the buildings; differences between earlier and later plasters were also detected.

  8. Verification of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV Project

    Energy Technology Data Exchange (ETDEWEB)

    Amadio, G.; et al.

    2017-11-22

An intensive R&D and programming effort is required to meet the new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of existing HEP detector simulation software and the ideal achievable performance by exploiting the latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting particles in parallel through complex geometries, exploiting instruction-level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), and leveraging both multi-core and many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics models effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests to verify the vectorized models by checking their consistency with the corresponding Geant4 models, and to validate them against experimental data.

  9. Approaches Regarding Business Logic Modeling in Service Oriented Architecture

    Directory of Open Access Journals (Sweden)

    Alexandra Maria Ioana FLOREA

    2011-01-01

Full Text Available As part of Service Oriented Computing (SOC), Service Oriented Architecture (SOA) is a technology that has been developing for almost a decade, and during this time many studies, papers and surveys referring to the advantages of projects using it have been published. In this article we discuss some ways of using SOA in the business environment, as a result of the need to reengineer internal business processes with the goal of moving towards providing and using standardized services and achieving enterprise interoperability.

  10. Enterprise Architecture Modeling of Core Administrative Systems at KTH : A Modifiability Analysis

    OpenAIRE

    Rosell, Peter

    2012-01-01

This project presents a case study of modifiability analysis of the Information Systems that are central to the core business processes of the Royal Institute of Technology (KTH) in Stockholm, Sweden, carried out by creating, updating and using models. The case study was limited to the modifiability of the specified Information Systems only. The method selected was Enterprise Architecture, together with Enterprise Architecture Analysis research results and tools from Industrial Information and Control Systems ...

  11. Performance evaluation of enterprise architecture with a formal fuzzy model (FPN

    Directory of Open Access Journals (Sweden)

    Ashkan Marahel

    2012-10-01

Full Text Available Preparing an enterprise architecture is a complicated procedure that uses a framework to regularize structure and a style to direct behavior in order to control complexity. Since behavior takes precedence over structure in architecture, evaluating architecture performance is needed in order to distinguish one behavior from another. An enterprise architecture cannot be organized without the benefit of a logical structure, and a framework provides a logical structure for classifying architectural output. Among the common architectural frameworks, C4ISR is one of the most appropriate because of the methodology of its production, its aggregation capability, and its minor revisions. The C4ISR framework describes the architecture in three views, using a set of documents called products. Because there are always uncertainties in developing information systems, this paper uses a new version of UML called Fuzzy-UML, which captures both the structure and the behavior of the system. The proposed model also uses fuzzy Petri nets to analyze the developed system.
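The fuzzy Petri net (FPN) inference underlying such an evaluation can be reduced to a small rule-firing sketch. This is a generic textbook-style formulation with illustrative truth degrees and certainty factors, not the paper's actual C4ISR products or model.

```python
def fire(input_truths, threshold, certainty):
    """Fire one FPN transition: if every input place's truth degree reaches
    the threshold, the output token's truth is min(inputs) * certainty factor."""
    t = min(input_truths)
    return t * certainty if t >= threshold else 0.0

# two-stage evaluation: behavior places feed an overall performance place
docs_complete = 0.8       # truth degrees of antecedent places (illustrative)
views_consistent = 0.9

behavior_ok = fire([docs_complete, views_consistent], threshold=0.5, certainty=0.9)
performance = fire([behavior_ok], threshold=0.5, certainty=0.95)
```

Chaining transitions this way propagates uncertainty through the net, which is what makes the formalism usable for performance evaluation under vague inputs.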

  12. Microvascular Architecture of Hepatic Metastases in a Mouse Model

    Directory of Open Access Journals (Sweden)

    Darshini Kuruppu

    1997-01-01

Full Text Available Development of effective treatment for hepatic metastases can be initiated by a better understanding of tumour vasculature and blood supply. This study was designed to characterise the microvascular architecture of hepatic metastases and observe the source of contributory blood supply from the host. Metastases were induced in mice by an intrasplenic injection of colon carcinoma cells (10⁶ cells/ml). Vascularization of the tumours was studied over a three-week period by scanning electron microscopy of microvascular corrosion casts. Metastatic liver involvement was observed initially within a week post induction, as areas approximately 100 μm in diameter not perfused by the casting resin. On histology these spaces corresponded to tumour cell aggregates. The following weeks highlighted the angiogenesis phase of these tumours as they received a vascular supply from adjacent hepatic sinusoids. Direct sinusoidal supply of metastases was maintained throughout tumour growth. At the tumour periphery most sinusoids were compressed to form a sheath demarcating the tumour from the hepatic vasculature. No direct supply from the hepatic artery or the portal vein was observed. Dilated vessels termed vascular lakes dominated the complex microvascular architecture of the tumours, most tapering as they traversed towards the periphery. Four vascular branching patterns could be identified: true loops, bifurcations and trifurcations, spirals, and capillary networks. The most significant observation in this study was the direct sinusoidal supply of metastases, together with the vascular lakes and the peripheral sinusoidal sheaths of the tumour microvasculature.

  13. Modeling of a 3DTV service in the software-defined networking architecture

    Science.gov (United States)

    Wilczewski, Grzegorz

    2014-11-01

In this article a newly developed concept for modeling a multimedia service offering stereoscopic motion imagery is presented. The proposed model is based on the utilization of the Software-Defined Networking (SDN) architecture. A definition of a 3D television service spanning the SDN concept is given, exposing the basic characteristics of a 3DTV service in a modern networking layout. Furthermore, exemplary functionalities of the proposed 3DTV model are depicted. It is indicated that modeling a 3DTV service in the Software-Defined Networking architecture leads to numerous improvements, especially in the flexibility of a service supporting heterogeneous end-user devices.

  14. Structural Modeling Using "Scanning and Mapping" Technique

    Science.gov (United States)

    Amos, Courtney L.; Dash, Gerald S.; Shen, J. Y.; Ferguson, Frederick; Noga, Donald F. (Technical Monitor)

    2000-01-01

Supported by NASA Glenn Center, we are in the process of developing a structural damage diagnostic and monitoring system for rocket engines, which consists of five modules: Structural Modeling, Measurement Data Pre-Processor, Structural System Identification, Damage Detection Criterion, and Computer Visualization. The function of the system is to detect damage as it is incurred by the engine structures. The scientific principle used to identify damage is to utilize the changes in vibrational properties between the pre-damaged and post-damaged structures. The vibrational properties of the pre-damaged structure can be obtained from an analytic computer model of the structure. Thus, as the first stage of the whole research plan, we currently focus on the first module, Structural Modeling. Three computer software packages have been selected and will be integrated for this purpose: PhotoModeler Pro, AutoCAD R14, and MSC/NASTRAN. AutoCAD is the most popular PC CAD system currently available on the market. For our purpose, it serves as an interface to generate structural models of particular engine parts or assemblies, which are then passed to MSC/NASTRAN for extracting structural dynamic properties. Although AutoCAD is a powerful structural modeling tool, the complexity of engine components requires further improvement in structural modeling techniques. We are working on a relatively new, so-called "scanning and mapping" technique. The basic idea is to produce a full and accurate 3D structural model by tracing over multiple overlapping photographs taken from different angles. There is no need to input point positions, angles, distances or axes. Photographs can be taken by any type of camera with different lenses. With the integration of such a modeling technique, the capability of structural modeling will be enhanced. The prototypes of any complex structural components will be produced by PhotoModeler first based on existing similar

  15. Comparison of different artificial neural network architectures in modeling of Chlorella sp. flocculation.

    Science.gov (United States)

    Zenooz, Alireza Moosavi; Ashtiani, Farzin Zokaee; Ranjbar, Reza; Nikbakht, Fatemeh; Bolouri, Oberon

    2017-07-03

Biodiesel production from microalgae feedstock should be performed after growth and harvesting of the cells, and the most feasible method for harvesting and dewatering of microalgae is flocculation. Flocculation modeling can be used for evaluating and predicting its performance under different influencing parameters. However, the modeling of flocculation in microalgae is not simple and has not yet been performed under all experimental conditions, mostly due to the different behaviors of microalgae cells during the process under different flocculation conditions. In the current study, the modeling of microalgae flocculation is studied with different neural network architectures. The microalgae species Chlorella sp. was flocculated with ferric chloride under different conditions, and the experimental data were then modeled using artificial neural networks. Multilayer perceptron (MLP) and radial basis function architectures failed to predict the targets successfully, whereas modeling was effective with an ensemble architecture of MLP networks. A comparison between the performance of the ensemble and that of each individual network demonstrates the ability of the ensemble architecture in microalgae flocculation modeling.
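The ensemble idea that succeeded where single networks failed is, at its core, averaging the predictions of independently trained members. A minimal sketch with toy stand-in members follows; the member models, the target mapping, and all numbers are illustrative, not the paper's trained MLPs.

```python
import random

class EnsembleRegressor:
    """Average the outputs of independently trained member models."""
    def __init__(self, members):
        self.members = members

    def predict(self, x):
        return sum(m(x) for m in self.members) / len(self.members)

# toy members: each learned the true mapping y = 2x up to its own training error,
# modeled here as a fixed Gaussian bias per member
random.seed(0)
members = [lambda x, b=random.gauss(0.0, 0.5): 2.0 * x + b for _ in range(25)]
ensemble = EnsembleRegressor(members)

x = 3.0
ensemble_error = abs(ensemble.predict(x) - 2.0 * x)
```

Averaging cancels the members' independent errors (the error standard deviation shrinks roughly as 1/√N), which is one plausible reason an MLP ensemble outperformed any single network here.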

  16. Modern Methods of Measuring and Modelling Architectural Objects in the Process of their Valorisation

    Science.gov (United States)

    Zagroba, Marek

    2017-10-01

As well as being a cutting-edge technology, laser scanning is still developing rapidly. Laser scanners have an almost unlimited range of uses in many disciplines of contemporary engineering where precision and high-quality work are of the utmost importance. Among these disciplines, special attention is drawn to architecture and urban space studies, that is, the fields of science which shape the space and surroundings occupied by people and thus have a direct impact on people's lives. Taking measurements with a laser scanner is more involved than with traditional methods based on laser target markers or a measuring tape: a specific procedure must be followed, the aim being to obtain three-dimensional data about a building situated in a given space. Accuracy, low time consumption, safety and non-invasiveness are the primary advantages of this technology in civil engineering practice, when handling both historic and modern architecture. Using a laser scanner is especially important when taking measurements of vast engineering constructions, where the application of traditional techniques would be much more difficult and would require higher time and labour inputs, for example because of less accessible nooks and crannies or the geometrical complexity of individual components of a building structure. In this article, the author addresses the problem of measuring and modelling architectural objects in the process of their valorisation, i.e. the enhancement of their functional, usable, spatial and aesthetic values. Above all, the laser scanning method, by generating results as a point cloud, enables the user to obtain a very detailed, three-dimensional computer image of the measured objects, and to carry out series of analyses and expert investigations, e.g. of the technical condition (deformation of construction elements) as well as the spatial management of the surrounding

  17. The Archaeology of Architecture. New models of analysis applied to structures of Alta Andalusia in the Iberian period

    OpenAIRE

    Sánchez, Julia

    1998-01-01

New theories of architectural space, based on the philosophy of Lao-Tsé, emerged at the end of the nineteenth century; interior space was now considered the core of architecture. Developing concepts of the movement of the human body in this space, new contributions focused on the detailed study of architecture have led to the creation of a new discipline called the Archaeology of Architecture. New models of analysis, based on access and visibility, are applied to the interior space of Iberian dome...

  18. Multiscale modeling of the anisotropic electrical conductivity of architectured and nanostructured Cu-Nb composite wires and experimental comparison

    International Nuclear Information System (INIS)

    Gu, T.; Medy, J.-R.; Volpi, F.; Castelnau, O.; Forest, S.; Hervé-Luanco, E.; Lecouturier, F.; Proudhon, H.; Renault, P.-O.

    2017-01-01

Nanostructured and architectured copper-niobium composite wires are excellent candidates for the generation of intense pulsed magnetic fields (>90 T), as they combine both high electrical conductivity and high strength. Multi-scaled Cu-Nb wires can be fabricated by accumulative drawing and bundling (a severe plastic deformation technique), leading to a multiscale, architectured and nanostructured microstructure providing a unique set of properties. This work presents a comprehensive multiscale study to predict the anisotropic effective electrical conductivity based on the material's nanostructure and architecture. Two homogenization methods are applied: a mean-field theory and a full-field approach. The size effect associated with microstructure refinement is taken into account in the definition of the conductivity of each component in the composites. The multiscale character of the material is then accounted for through an iterative process. Both methods show excellent agreement with each other. The results are further compared, for the first time, with experimental data obtained by the four-point probe technique, and also show excellent agreement. Finally, the qualitative and quantitative understanding provided by these models demonstrates that the microstructure of Cu-Nb wires has a significant effect on the electrical conductivity.
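At the simplest mean-field level, the anisotropy being modeled can be seen from the classical parallel/series mixture bounds: current along the wire sees the two phases side by side, while transverse current sees them closer to in series. The sketch below uses nominal bulk room-temperature conductivities as illustrations; the paper's size-dependent component conductivities differ.

```python
def parallel_sigma(f1, s1, s2):
    """Arithmetic (Voigt) mixture: longitudinal effective conductivity,
    phases side by side -- the upper bound."""
    return f1 * s1 + (1.0 - f1) * s2

def series_sigma(f1, s1, s2):
    """Harmonic (Reuss) mixture: phases in series -- the lower bound,
    closer to the transverse response."""
    return 1.0 / (f1 / s1 + (1.0 - f1) / s2)

SIGMA_CU, SIGMA_NB = 58.0, 6.6  # MS/m, nominal bulk values (illustrative)
f_cu = 0.7                       # illustrative Cu volume fraction

longitudinal = parallel_sigma(f_cu, SIGMA_CU, SIGMA_NB)
transverse_bound = series_sigma(f_cu, SIGMA_CU, SIGMA_NB)
```

The gap between the two bounds is the anisotropy window that the paper's mean-field and full-field homogenization schemes pin down more precisely.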

  19. Modeling, analysis and optimization of network-on-chip communication architectures

    CERN Document Server

    Ogras, Umit Y

    2013-01-01

    Traditionally, design space exploration for Systems-on-Chip (SoCs) has focused on the computational aspects of the problem at hand. However, as the number of components on a single chip and their performance continue to increase, the communication architecture plays a major role in the area, performance and energy consumption of the overall system. As a result, a shift from computation-based to communication-based design becomes mandatory. Towards this end, network-on-chip (NoC) communication architectures have emerged recently as a promising alternative to classical bus and point-to-point communication architectures. This book explores outstanding research problems related to modeling, analysis and optimization of NoC communication architectures. More precisely, we present novel design methodologies, software tools and FPGA prototypes to aid the design of application-specific NoCs.

  20. Architecture Model of Bussines, Information System and Technology in BAKOSURTANAL Based on TOGAF

    Directory of Open Access Journals (Sweden)

    Iyan Supriyana

    2010-04-01

Full Text Available Information technology (IT) is a necessity at BAKOSURTANAL to support business related to data and spatial information. Users benefit through easy and fast access to data and spatial information. The importance of enterprise architecture (EA) in supporting the organization is proven, because it provides the technology and process structure which are fundamental aspects of an IT strategy. An enterprise architecture framework (EAF) accelerates and simplifies the development of an EA by ascertaining comprehensive coverage of solutions and by ensuring that the resulting EA is always in line with the growth of the enterprise. This paper explains The Open Group Architecture Framework (TOGAF), one of several EAFs. The result shows that the most suitable approach for BAKOSURTANAL blueprint development is a proposed EA model that covers business, information system, and technology architecture, relying on recommended technical foundations that can be implemented.

  1. The Art of Hardware Architecture Design Methods and Techniques for Digital Circuits

    CERN Document Server

    Arora, Mohit

    2012-01-01

This book highlights the complex issues, tasks and skills that must be mastered by an IP designer in order to design an optimized and robust digital circuit to solve a problem. The techniques and methodologies described can serve as a bridge between the specifications that are known to the designer and the RTL code that is the final outcome, significantly reducing the time it takes to convert initial ideas and concepts into right-first-time silicon. Coverage focuses on real problems rather than theoretical concepts, with an emphasis on design techniques across various aspects of chip design. The book describes techniques to help IP designers get it right the first time, creating designs optimized in terms of power, area and performance; focuses on practical aspects of chip design and minimizes theory; and covers chip design in a consistent way, starting with basics and gradually developing advanced concepts, such as electromagnetic compatibility (EMC) design techniques and low-power design techniques such as dynamic voltage...

  2. Using DNase Hi-C techniques to map global and local three-dimensional genome architecture at high resolution.

    Science.gov (United States)

    Ma, Wenxiu; Ay, Ferhat; Lee, Choli; Gulsoy, Gunhan; Deng, Xinxian; Cook, Savannah; Hesson, Jennifer; Cavanaugh, Christopher; Ware, Carol B; Krumm, Anton; Shendure, Jay; Blau, C Anthony; Disteche, Christine M; Noble, William S; Duan, ZhiJun

    2018-06-01

    The folding and three-dimensional (3D) organization of chromatin in the nucleus critically impacts genome function. The past decade has witnessed rapid advances in genomic tools for delineating 3D genome architecture. Among them, chromosome conformation capture (3C)-based methods such as Hi-C are the most widely used techniques for mapping chromatin interactions. However, traditional Hi-C protocols rely on restriction enzymes (REs) to fragment chromatin and are therefore limited in resolution. We recently developed DNase Hi-C for mapping 3D genome organization, which uses DNase I for chromatin fragmentation. DNase Hi-C overcomes RE-related limitations associated with traditional Hi-C methods, leading to improved methodological resolution. Furthermore, combining this method with DNA capture technology provides a high-throughput approach (targeted DNase Hi-C) that allows for mapping fine-scale chromatin architecture at exceptionally high resolution. Hence, targeted DNase Hi-C will be valuable for delineating the physical landscapes of cis-regulatory networks that control gene expression and for characterizing phenotype-associated chromatin 3D signatures. Here, we provide a detailed description of method design and step-by-step working protocols for these two methods. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Modeling techniques for quantum cascade lasers

    Energy Technology Data Exchange (ETDEWEB)

    Jirauschek, Christian [Institute for Nanoelectronics, Technische Universität München, D-80333 Munich (Germany); Kubis, Tillmann [Network for Computational Nanotechnology, Purdue University, 207 S Martin Jischke Drive, West Lafayette, Indiana 47907 (United States)

    2014-03-15

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.
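The transfer matrix method mentioned above can be sketched for its simplest case: plane-wave matching across piecewise-constant potentials. The barrier height, width and energies below are illustrative, and the sketch ignores the position-dependent effective mass that a real quantum cascade heterostructure would require.

```python
import cmath

HBAR2_2M = 3.81  # hbar^2 / (2 m_e) in eV*Angstrom^2 (free-electron mass)

def wavevector(E, V):
    # k may be imaginary under a barrier; cmath.sqrt handles both cases
    return cmath.sqrt((E - V) / HBAR2_2M)

def step(k1, k2):
    # match psi and psi' at a potential step: amplitudes_right = D * amplitudes_left
    r = k1 / k2
    return [[(1 + r) / 2, (1 - r) / 2],
            [(1 - r) / 2, (1 + r) / 2]]

def drift(k, d):
    # re-reference plane-wave amplitudes across a region of width d
    return [[cmath.exp(1j * k * d), 0],
            [0, cmath.exp(-1j * k * d)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transmission(E, layers):
    """Transmission through piecewise-constant layers [(V, width), ...],
    with leads at V = 0 on both sides."""
    ks = ([wavevector(E, 0.0)] + [wavevector(E, V) for V, _ in layers]
          + [wavevector(E, 0.0)])
    M = [[1, 0], [0, 1]]
    for i, (V, d) in enumerate(layers):
        M = matmul(step(ks[i], ks[i + 1]), M)
        M = matmul(drift(ks[i + 1], d), M)
    M = matmul(step(ks[-2], ks[-1]), M)
    return abs(1 / M[1][1]) ** 2  # t = 1/M22 for identical leads

# tunneling through a single 0.3 eV, 20 Angstrom barrier at two energies
T_low = transmission(0.10, [(0.3, 20.0)])
T_high = transmission(0.25, [(0.3, 20.0)])
```

Cascading one 2×2 matrix per interface and per layer is what lets the same machinery handle the many coupled wells of an actual active region.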

  5. Proposal of a congestion control technique in LAN networks using an econometric model ARIMA

    Directory of Open Access Journals (Sweden)

    Joaquín F Sánchez

    2017-01-01

    Hasty software development can produce immediate implementations with source code unnecessarily complex and hardly readable. These small kinds of software decay generate a technical debt that could be big enough to seriously affect future maintenance activities. This work presents an analysis technique for identifying architectural technical debt related to non-uniformity of naming patterns; the technique is based on term frequency over package hierarchies. The proposal has been evaluated on projects of two popular organizations, Apache and Eclipse. The results have shown that most of the projects have frequent occurrences of the proposed naming patterns, and using a graph model and aggregated data could enable the elaboration of simple queries for debt identification. The technique has features that favor its applicability on emergent architectures and agile software development.
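
The term-frequency idea sketched in this abstract can be prototyped in a few lines: split dotted package paths into terms, count them, and flag co-occurring near-synonyms as naming-debt candidates. The function names and the synonym list here are hypothetical illustrations, not the paper's actual detector:

```python
from collections import Counter

def package_terms(class_paths):
    """Count how often each term appears across dotted package paths."""
    freq = Counter()
    for path in class_paths:
        freq.update(path.lower().split('.'))
    return freq

def naming_debt_candidates(class_paths, synonym_pairs):
    """Flag pairs of near-synonymous terms (e.g. 'util' vs 'utils') that
    both occur in the hierarchy -- a sign of non-uniform naming."""
    freq = package_terms(class_paths)
    return [(a, b) for a, b in synonym_pairs if freq[a] and freq[b]]
```

Feeding it `["org.example.util.Strings", "org.example.utils.Dates"]` with the candidate pair `("util", "utils")` flags the inconsistency.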

  6. Architecture of the Neurath Basic Model View Controller

    Directory of Open Access Journals (Sweden)

    K. Yermashov

    2006-01-01

    The idea of the Neurath Basic Model View Controller (NBMVC) appeared during the discussion of the design of domain-specific modeling tools based on the Neurath Modeling Language [Yer06]. The NBMVC is the core of the modeling process within the modeling environment. It removes complexity from the design process by providing domain-specific interfaces between the developer and the model. These interfaces help to organize and manipulate the model. The organization includes, for example, a layer of visual components that can be dropped in and filtered out. The control routines include, for example, model transformations.

  7. Exploring business model innovation in professional service firms : Insights from architecture

    NARCIS (Netherlands)

    Lieftink, B.; Bos-de Vos, M.; Lauche, K.; Smits, A.

    2014-01-01

    Business model innovation may be a significant source of competitive advantage and firm performance. New ways of doing business have become increasingly important in the professional service sector. This research specifically focuses on business model innovation by architecture firms, which are

  8. Environment model creation and ADAS architecture for trucks : design and implementation of a sensor fusion algorithm

    NARCIS (Netherlands)

    Stamatopoulos, E.

    2016-01-01

    This report presents a structural approach for environment model creation and ADAS architecture for trucks. In particular, an appropriate sensor suite that is suitable for a set of ADAS functions is defined. On this basis, the development of a proof of concept for an Environment Model system, by

  9. A game-theoretic architecture for visible watermarking system of ACOCOA (adaptive content and contrast aware technique

    Directory of Open Access Journals (Sweden)

    Tsai Min-Jen

    2011-01-01

    Digital watermarking techniques have been developed to protect intellectual property. A digital watermarking system is basically judged on two characteristics: security robustness and image quality. In order to obtain a robust visible watermarking in practice, we present a novel watermarking algorithm named adaptive content and contrast aware (ACOCOA), which considers the host image content and watermark texture. In addition, we propose a powerful security architecture against attacks for visible watermarking systems which is based on a game-theoretic approach that provides an equilibrium condition solution for the decision maker by studying the effects of transmission power on intensity and perceptual efficiency. The experimental results demonstrate that the proposed approach not only provides effectiveness and robustness for the watermarked images, but also allows the watermark encoder to obtain the best adaptive watermarking strategy under attacks.
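
The equilibrium condition mentioned in the abstract can be illustrated, in a much-reduced form, by exhaustively checking a finite payoff matrix for pure-strategy Nash equilibria. The payoff values below are arbitrary stand-ins, not the paper's transmission-power/perceptual-efficiency payoffs:

```python
def pure_nash(payoff):
    """payoff[i][j] = (encoder_utility, attacker_utility) for the strategy
    pair (i, j); return the cells where neither player gains by a
    unilateral deviation."""
    rows, cols = len(payoff), len(payoff[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            u, v = payoff[i][j]
            if (all(payoff[k][j][0] <= u for k in range(rows)) and
                    all(payoff[i][l][1] <= v for l in range(cols))):
                equilibria.append((i, j))
    return equilibria

# a toy 2x2 game: the mutually stable cell is (1, 1)
game = [[(3, 3), (0, 5)],
        [(5, 0), (1, 1)]]
```

In the paper's setting the strategies would range over watermark embedding strengths and attack intensities; the exhaustive check above only scales to small discrete strategy sets.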

  10. Writing Gardens - Gardening Drawings: Fung, Brunier and Gardening as a model of Landscape Architectural Practice

    Directory of Open Access Journals (Sweden)

    Julian Raxworthy

    2004-06-01

    Landscape architecture is different from other design discourses, notably architecture, because of its utilisation of 'dynamic' construction media such as plant materials, soils and water, compared with the 'static' materials of architecture, colloquially described as bricks and mortar. This dynamism refers to the fact that landscape materials not only change, but get better over time. While this is a material difference, its implications extend to practice, which has been modelled, from architecture, to favour a static mode of representation: the drawing. While the drawing is important for the propositional nature of landscape architecture, it may be valuable to look at other disciplines, allied to landscape architecture, which might be seen as better able to engage with change. In this essay, the garden provides just such an example. In the writings of Stanislaus Fung on the Chinese garden text the Yuan ye, an argument is made about writing being a fundamental act in the endeavour of gardening that may offer a bridge across the 'ontological disparity' that exists between representation and the subject, the landscape. To speak of writing in this context suggests that writing about gardens is actually a type of gardening in itself. This argument is extended briefly in the current essay to see if it is also appropriate to consider drawings in this way. This essay also attempts to legitimate theoretically the real possibility of modifying landscape architectural practices to engage with change, by suggesting what might be learned from gardening. In further research by this author, this argument will be used as the theoretical basis for critiquing gardens in such a way that lessons learnt from garden designers can be valuably incorporated back into the discourse of landscape architecture.

  11. Rabbit tissue model (RTM) harvesting technique.

    Science.gov (United States)

    Medina, Marelyn

    2002-01-01

    A method for creating a tissue model using a female rabbit for laparoscopic simulation exercises is described. The specimen is called a Rabbit Tissue Model (RTM). Dissection techniques are described for transforming the rabbit carcass into a small, compact unit that can be used for multiple training sessions. Preservation is accomplished by using saline and refrigeration. Only the animal trunk is used, with the rest of the animal carcass being discarded. Practice exercises are provided for using the preserved organs. Basic surgical skills, such as dissection, suturing, and knot tying, can be practiced on this model. In addition, the RTM can be used with any pelvic trainer that permits placement of larger practice specimens within its confines.

  12. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

    Science.gov (United States)

    Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid

    2016-11-01

    The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
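
The RANSAC loop at the heart of the proposed hardware is easy to state in software. The sketch below applies it to a 2-parameter line model instead of the paper's 8-coefficient projective model, since the hypothesize-from-a-minimal-sample, count-inliers, keep-the-best structure is identical:

```python
import random

def fit_line(p, q):
    """Exact line y = a*x + b through two sample points."""
    (x1, y1), (x2, y2) = p, q
    a = (y2 - y1) / (x2 - x1)
    return a, y1 - a * x1

def ransac_line(points, iters=200, thresh=0.1, seed=0):
    """Generic RANSAC loop: hypothesize a model from a minimal sample,
    score it by inlier count, and keep the best hypothesis."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        p, q = rng.sample(points, 2)
        if p[0] == q[0]:
            continue  # degenerate sample, cannot fit a line
        a, b = fit_line(p, q)
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < thresh]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (a, b), inliers
    return best, best_inliers
```

With a projective model the minimal sample would be four point correspondences and the hypothesis step an 8-unknown linear solve, which is exactly the part the paper decomposes into simpler submodels for fixed-point hardware.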

  13. Real-Time Model and Simulation Architecture for Half- and Full-Bridge Modular Multilevel Converters

    Science.gov (United States)

    Ashourloo, Mojtaba

    This work presents an equivalent model and simulation architecture for real-time electromagnetic transient analysis of either half-bridge or full-bridge modular multilevel converters (MMC) with 400 sub-modules (SMs) per arm. The proposed CPU/FPGA-based architecture is optimized for the parallel implementation of the presented MMC model on the FPGA and benefits from a high-throughput floating-point computational engine. The developed real-time simulation architecture is capable of simulating MMCs with 400 SMs per arm at a time step of 825 nanoseconds. To address the difficulties of implementing the sorting process, a modified Odd-Even Bubble sort is presented in this work. The comparison of the results under various test scenarios reveals that the proposed real-time simulator reproduces the system responses of its corresponding off-line counterpart obtained from the PSCAD/EMTDC program.
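
The abstract does not detail the authors' modification, so for reference the following is the textbook odd-even transposition sort it starts from; the scheme suits FPGAs because each pass compares disjoint index pairs and can therefore execute fully in parallel:

```python
def odd_even_sort(values):
    """Odd-even transposition sort: alternate compare-exchange passes
    over (even, odd) index pairs until no pass swaps anything."""
    a = list(values)
    n = len(a)
    swapped = True
    while swapped:
        swapped = False
        for start in (0, 1):  # even-indexed pass, then odd-indexed pass
            for i in range(start, n - 1, 2):
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    swapped = True
    return a
```

In hardware each inner loop becomes one array of concurrent compare-exchange units, so a full sort of n elements finishes in at most n passes.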

  14. Rasch family models in e-learning: analyzing architectural sketching with a digital pen.

    Science.gov (United States)

    Scalise, Kathleen; Cheng, Nancy Yen-Wen; Oskui, Nargas

    2009-01-01

    Since architecture students studying design drawing are usually assessed qualitatively on the basis of their final products, the challenges and stages of their learning have remained masked. To clarify the challenges in design drawing, we have been using the BEAR Assessment System and Rasch family models to measure levels of understanding for individuals and groups, in order to correct pedagogical assumptions and tune teaching materials. This chapter discusses the analysis of 81 drawings created by architectural students to solve a space layout problem, collected and analyzed with digital pen-and-paper technology. The approach allows us to map developmental performance criteria and perceive achievement overlaps in learning domains assumed separate, and then re-conceptualize a three-part framework to represent learning in architectural drawing. Results and measurement evidence from the assessment and Rasch modeling are discussed.

  15. Deferred Action: Theoretical model of process architecture design for emergent business processes

    Directory of Open Access Journals (Sweden)

    Patel, N.V.

    2007-01-01

    E-business modelling and e-business systems development assumes fixed company resources, structures, and business processes. Empirical and theoretical evidence suggests that company resources and structures are emergent rather than fixed. Planning business activity in emergent contexts requires flexible e-business models based on better management theories and models. This paper builds and proposes a theoretical model of e-business systems capable of catering for emergent factors that affect business processes. Drawing on the development of theories of the 'action and design' class, the Theory of Deferred Action is invoked as the base theory for the theoretical model. A theoretical model of flexible process architecture is presented by identifying its core components and their relationships, and then illustrated with exemplar flexible process architectures capable of responding to emergent factors. Managerial implications of the model are considered and the model's generic applicability is discussed.

  16. A solution for future designs using techniques from vernacular architecture in southern Iran

    Science.gov (United States)

    Mirahmadi, Fatima; Altan, Hasim

    2018-02-01

    Nowadays in modern life, every technology and technique for a comfortable life is available. People with low income, in other words with low levels of economic power, can also have those facilities to stay warm in winter and cool in summer. Many years back, when there were no advanced systems for human needs, passive strategies played a big role in people's lives. This paper concentrates on a small city in Iran that used special strategies to solve people's environmental issues. The city is called Evaz, which is located in the Fars region of Iran, around 20 km from Gerash city and 370 km southeast of Shiraz. Evaz receives minimum rainfall, which is why water is limited in this area and therefore cisterns (water storage) had been used for many years; these are studied in more detail in this paper. In summer, the climate is hot and dry, with external temperatures sometimes reaching around 46 °C during the day. Although the winters are typically cold and likewise dry, a moderate climate is available in Evaz during autumn and spring. This study identifies some of the past strategies and describes them in detail with analysis for transformation and connections with modern and traditional fundamentals. Furthermore, the study develops some solutions utilizing a combination of both modern and traditional techniques in design to suggest better and more effective ways to save energy, and at the same time to remain sustainable for the future.

  17. A model of tumor architecture and spatial interactions with tumor microenvironment in breast carcinoma

    Science.gov (United States)

    Ben Cheikh, Bassem; Bor-Angelier, Catherine; Racoceanu, Daniel

    2017-03-01

    Breast carcinomas are cancers that arise from the epithelial cells of the breast, which are the cells that line the lobules and the lactiferous ducts. Breast carcinoma is the most common type of breast cancer and can be divided into different subtypes based on architectural features and growth patterns, recognized during a histopathological examination. The tumor microenvironment (TME) is the cellular environment in which tumor cells develop. Being composed of various cell types having different biological roles, the TME is recognized as playing an important role in the progression of the disease. The architectural heterogeneity in breast carcinomas and the spatial interactions with the TME are, to date, not well understood. Developing a spatial model of tumor architecture and spatial interactions with the TME can advance our understanding of tumor heterogeneity. Furthermore, generating histological synthetic datasets can contribute to validating and comparing analytical methods that are used in digital pathology. In this work, we propose a modeling method that applies to different breast carcinoma subtypes and TME spatial distributions based on mathematical morphology. The model is based on a few morphological parameters that give access to a large spectrum of breast tumor architectures and are able to differentiate ductal carcinoma in situ (DCIS) and histological subtypes of invasive carcinomas such as ductal (IDC) and lobular (ILC) carcinoma. In addition, a part of the parameters of the model controls the spatial distribution of the TME relative to the tumor. The validation of the model has been performed by comparing morphological features between real and simulated images.

  18. Unified Deep Learning Architecture for Modeling Biology Sequence.

    Science.gov (United States)

    Wu, Hongjie; Cao, Chengyuan; Xia, Xiaoyan; Lu, Qiang

    2017-10-09

    Prediction of the spatial structure or function of biological macromolecules based on their sequence remains an important challenge in bioinformatics. When modeling biological sequences using traditional sequence models, characteristics such as long-range interactions between basic units, the complicated and variable output of labeled structures, and the variable length of biological sequences usually lead to different solutions on a case-by-case basis. This study proposed the use of bidirectional recurrent neural networks based on long short-term memory or a gated recurrent unit to capture long-range interactions by designing the optional reshape operator to adapt to the diversity of the output labels and implementing a training algorithm to support the training of sequence models capable of processing variable-length sequences. Additionally, the merge and pooling operators enhanced the ability to capture short-range interactions between basic units of biological sequences. The proposed deep-learning model and its training algorithm might be capable of solving currently known biological sequence-modeling problems through the use of a unified framework. We validated our model on one of the most difficult biological sequence-modeling problems currently known, with our results indicating the ability of the model to obtain predictions of protein residue interactions that exceeded the accuracy of current popular approaches by 10% based on multiple benchmarks.
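
As a toy illustration of the gated-recurrent-unit and bidirectional mechanics named above, the sketch below runs a scalar GRU forward and backward over a sequence; the hand-picked scalar weights stand in for trained tensors and are purely hypothetical:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h, x, Wz, Wr, Wh):
    """One scalar GRU step; each W is (input weight, hidden weight, bias).
    Compute update gate z, reset gate r, a candidate state, then
    interpolate between the old state and the candidate."""
    z = sigmoid(Wz[0] * x + Wz[1] * h + Wz[2])
    r = sigmoid(Wr[0] * x + Wr[1] * h + Wr[2])
    candidate = math.tanh(Wh[0] * x + Wh[1] * (r * h) + Wh[2])
    return (1.0 - z) * h + z * candidate

def bidirectional(xs, params):
    """Run the cell left-to-right and right-to-left and pair the hidden
    states per position, as a bidirectional RNN layer does."""
    fwd, h = [], 0.0
    for x in xs:
        h = gru_step(h, x, *params)
        fwd.append(h)
    bwd, h = [], 0.0
    for x in reversed(xs):
        h = gru_step(h, x, *params)
        bwd.append(h)
    return list(zip(fwd, reversed(bwd)))
```

The paired forward/backward states are what give every position context from both ends of a variable-length sequence, which is the property the paper exploits.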

  19. Modelling kinetics of plant canopy architecture: concepts and applications

    NARCIS (Netherlands)

    Birch, C.J.; Andrieu, B.; Fournier, C.; Vos, J.; Room, P.

    2003-01-01

    Most crop models simulate the crop canopy as an homogeneous medium. This approach enables modelling of mass and energy transfer through relatively simple equations, and is useful for understanding crop production. However, schematisation of an homogeneous medium cannot address the heterogeneous

  20. Optimization of Strategies and Models Review for Optimal Technologies - Based On Fuzzy Schemes for Green Architecture

    OpenAIRE

    Ghada Elshafei; Abdelazim Negm

    2015-01-01

    Recently, the green architecture becomes a significant way to a sustainable future. Green building designs involve finding the balance between comfortable homebuilding and sustainable environment. Moreover, the utilization of the new technologies such as artificial intelligence techniques are used to complement current practices in creating greener structures to keep the built environment more sustainable. The most common objectives in green buildings should be designed t...

  1. Using UML Modeling to Facilitate Three-Tier Architecture Projects in Software Engineering Courses

    Science.gov (United States)

    Mitra, Sandeep

    2014-01-01

    This article presents the use of a model-centric approach to facilitate software development projects conforming to the three-tier architecture in undergraduate software engineering courses. Many instructors intend that such projects create software applications for use by real-world customers. While it is important that the first version of these…

  2. Federated Access Control in Heterogeneous Intercloud Environment: Basic Models and Architecture Patterns

    NARCIS (Netherlands)

    Demchenko, Y.; Ngo, C.; de Laat, C.; Lee, C.

    2014-01-01

    This paper presents on-going research to define the basic models and architecture patterns for federated access control in heterogeneous (multi-provider) multi-cloud and inter-cloud environment. The proposed research contributes to the further definition of Intercloud Federation Framework (ICFF)

  3. Can diversity in root architecture explain plant water use efficiency? A modeling study.

    Science.gov (United States)

    Tron, Stefania; Bodner, Gernot; Laio, Francesco; Ridolfi, Luca; Leitner, Daniel

    2015-09-24

    Drought stress is a dominant constraint to crop production. Breeding crops with root systems adapted for effective uptake of water represents a novel strategy to increase crop drought resistance. Due to the complex interaction between root traits and the high diversity of hydrological conditions, modeling provides important information for trait-based selection. In this work we use a root architecture model combined with a soil-hydrological model to analyze whether there is a root system ideotype of general adaptation to drought or whether the water uptake efficiency of root systems is a function of specific hydrological conditions. This was done by modeling transpiration of 48 root architectures in 16 drought scenarios with distinct soil textures, rainfall distributions, and initial soil moisture availability. We find that the efficiency in water uptake of a root architecture is strictly dependent on the hydrological scenario. Even dense and deep root systems are not superior in water uptake under all hydrological scenarios. Our results demonstrate that a mere architectural description is insufficient to find root systems of optimum functionality. We find that in environments with sufficient rainfall before the growing season, root depth represents the key trait for the exploration of stored water, especially in fine soils. Root density, instead, especially near the soil surface, becomes the most relevant trait for exploiting soil moisture when plant water supply is mainly provided by rainfall events during root system development. We therefore conclude that trait-based root breeding has to consider root systems with specific adaptation to the hydrology of the target environment.

  4. Polyvariant architectural model of the Crambe koktebelica (Junge) N. Busch shoot system

    Directory of Open Access Journals (Sweden)

    Olga F. Scherbakova

    2013-04-01

    The article is devoted to an integrated biomorphological study of Crambe koktebelica (Junge) N. Busch. Different variants of ontogenesis and the basic architectural model were ascertained for this rare species. The influence of environmental conditions on the shoot system structure of the species' individuals was shown.

  5. Elsevier special issue on foundations and applications of model driven architecture

    NARCIS (Netherlands)

    Aksit, Mehmet; Ivanov, Ivan

    2008-01-01

    Model Driven Architecture (MDA) is an approach for software development proposed by Object Management Group (OMG). The basic principle of MDA is the separation of the specification of system functionality from the specification of the implementation of that functionality on a specific platform. The

  6. An architecture model for communication of safety in public transportation

    NARCIS (Netherlands)

    Rajabalinejad, Mohammad; Horváth, Imre; Pernot, Jean-Paul; Rusák, Zoltan

    2016-01-01

    Safety in transportation is under the influence of the rising complexity, increasing demands for capacity and decreasing cost. Furthermore, the interdisciplinary environment of operation and altered safety regulations invite for a centralized (integrated) modelling/ communication approach. This

  7. Architectural freedom and industrialised architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    ...to the building physics problems a new industrialized period has started, based on lightweight elements basically made of wooden structures, faced with different suitable materials meant for individual expression for the specific housing area. It is the purpose of this article to widen up the different design...... to this systematic thinking of the building technique we get a diverse and functional architecture. Creating a new and clearer storytelling about new and smart system-based thinking behind the architectural expression....

  8. SASAgent: an agent based architecture for search, retrieval and composition of scientific models.

    Science.gov (United States)

    Felipe Mendes, Luiz; Silva, Laryssa; Matos, Ely; Braga, Regina; Campos, Fernanda

    2011-07-01

    Scientific computing is a multidisciplinary field that goes beyond the use of the computer as a machine where researchers write simple texts and presentations or store analyses and results of their experiments. Because of the huge hardware/software resources invested in experiments and simulations, this new approach to scientific computing currently adopted by research groups is well represented by e-Science. This work aims to propose a new architecture based on intelligent agents to search, recover and compose simulation models generated in the context of research projects related to the biological domain. The SASAgent architecture is described as a multi-tier design comprising three main modules, in which the CelO ontology, mainly represented by the semantic knowledge base, satisfies requirements posed by e-Science projects. Preliminary results suggest that the proposed architecture is promising for achieving the requirements found in e-Science projects, considering mainly the biological domain. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Connecting Requirements to Architecture and Analysis via Model-Based Systems Engineering

    Science.gov (United States)

    Cole, Bjorn F.; Jenkins, J. Steven

    2015-01-01

    In traditional systems engineering practice, architecture, concept development, and requirements development are related but still separate activities. Concepts for operation, key technical approaches, and related proofs of concept are developed. These inform the formulation of an architecture at multiple levels, starting with the overall system composition and functionality and progressing into more detail. As this formulation is done, a parallel activity develops a set of English statements that constrain solutions. These requirements are often called "shall statements" since they are formulated to use "shall." The separation of requirements from design is exacerbated by well-meaning tools like the Dynamic Object-Oriented Requirements System (DOORS) that remained separated from engineering design tools. With the Europa Clipper project, efforts are being taken to change the requirements development approach from a separate activity to one intimately embedded in formulation effort. This paper presents a modeling approach and related tooling to generate English requirement statements from constraints embedded in architecture definition.

  10. “Pushing the Envelope” a modeling-based approach to the development of organic, responsive architectural form

    Directory of Open Access Journals (Sweden)

    David Yearley

    2012-11-01

    This paper tests design procedures for the development of complex, organic architectural forms. It illustrates a postgraduate student design process, implementing a development sequence based on the intelligent manipulation of architectural envelopes using a variety of existing modeling tools and emerging digital techniques. These stages of development respond to imposed spatial and environmental constraints. The tests began with full-scale modeling of small segments. The major constraints at this stage were spatial requirements and the physical characteristics of materials. The forms derived from the bending properties of prestressed green timber and the dimensions of shingle cladding. This was followed by digital 3D modeling using common commercial applications. At this stage initial models were derived from a traditional space requirement brief. The envelopes for these activities were then manipulated to respond to the spatial limitations imposed by surrounding buildings. This digital modeling process metaphorically “pushed the limits” as vertices of the envelope model were stretched and shifted to achieve a perceived “fit” between the two sets of spatial dimensions. The spatially manipulated geometry was then imported into Ecotect, an environmental analysis package. As an example, the effects of the envelope’s morphology and cladding material options on the acoustic qualities of the surrounding space were tested. The improved geometry was then imported into a Virtual Reality room, in which the spatial experience was simulated in presentations to the design team and potential occupants. This room utilized six projectors to create an immersive experience for users wearing stereoscopic goggles and moving in a space surrounded by three large screens, creating a CAVE-like presentation space. Finally there was an attempt to complete the circle by returning from the simulated world to the physical world, by creating full-scale models from the digital

  11. Hybrid programming model for implicit PDE simulations on multicore architectures

    KAUST Repository

    Kaushik, Dinesh; Keyes, David E.; Balay, Satish; Smith, Barry F.

    2011-01-01

    The complexity of programming modern multicore processor based clusters is rapidly rising, with GPUs adding further demand for fine-grained parallelism. This paper analyzes the performance of the hybrid (MPI+OpenMP) programming model in the context of an implicit unstructured mesh CFD code. At the implementation level, the effects of cache locality, update management, work division, and synchronization frequency are studied. The hybrid model presents interesting algorithmic opportunities as well: the convergence of the linear system solver is quicker than in the pure MPI case, since the parallel preconditioner stays stronger when the hybrid model is used. This implies significant savings in the cost of communication and synchronization (explicit and implicit). Even though OpenMP based parallelism is easier to implement (within a subdomain assigned to one MPI process for simplicity), getting good performance needs attention to data partitioning issues similar to those in the message-passing case. © 2011 Springer-Verlag.
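
The data-partitioning issue noted above — first across MPI processes, then across OpenMP threads within each subdomain — can be sketched without any parallel runtime. The function below only computes the two-level index ranges and is an illustration, not the paper's actual code:

```python
def hybrid_partition(n_cells, n_ranks, n_threads):
    """Two-level mesh partition: cells -> MPI ranks (distributed memory),
    then each rank's span -> OpenMP threads (shared memory).
    Returns, per rank, a list of (start, end) half-open thread spans."""
    def split(lo, hi, parts):
        size, rem = divmod(hi - lo, parts)
        spans, start = [], lo
        for p in range(parts):
            end = start + size + (1 if p < rem else 0)  # spread remainder
            spans.append((start, end))
            start = end
        return spans
    return [split(lo, hi, n_threads) for lo, hi in split(0, n_cells, n_ranks)]
```

For example, `hybrid_partition(10, 2, 2)` assigns rank spans (0, 5) and (5, 10), each further split into two thread spans; balancing both levels at once is exactly the attention to partitioning the abstract calls for.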

  12. Layered architecture for quantum computing

    OpenAIRE

    Jones, N. Cody; Van Meter, Rodney; Fowler, Austin G.; McMahon, Peter L.; Kim, Jungsang; Ladd, Thaddeus D.; Yamamoto, Yoshihisa

    2010-01-01

    We develop a layered quantum-computer architecture, which is a systematic framework for tackling the individual challenges of developing a quantum computer while constructing a cohesive device design. We discuss many of the prominent techniques for implementing circuit-model quantum computing and introduce several new methods, with an emphasis on employing surface-code quantum error correction. In doing so, we propose a new quantum-computer architecture based on optical control of quantum dot...

  13. Statistical modeling of nitrogen-dependent modulation of root system architecture in Arabidopsis thaliana.

    Science.gov (United States)

    Araya, Takao; Kubo, Takuya; von Wirén, Nicolaus; Takahashi, Hideki

    2016-03-01

    Plant root development is strongly affected by nutrient availability. Despite the importance of structure and function of roots in nutrient acquisition, statistical modeling approaches to evaluate dynamic and temporal modulations of root system architecture in response to nutrient availability have remained as widely open and exploratory areas in root biology. In this study, we developed a statistical modeling approach to investigate modulations of root system architecture in response to nitrogen availability. Mathematical models were designed for quantitative assessment of root growth and root branching phenotypes and their dynamic relationships based on hierarchical configuration of primary and lateral roots formulating the fishbone-shaped root system architecture in Arabidopsis thaliana. Time-series datasets reporting dynamic changes in root developmental traits on different nitrate or ammonium concentrations were generated for statistical analyses. Regression analyses unraveled key parameters associated with: (i) inhibition of primary root growth under nitrogen limitation or on ammonium; (ii) rapid progression of lateral root emergence in response to ammonium; and (iii) inhibition of lateral root elongation in the presence of excess nitrate or ammonium. This study provides a statistical framework for interpreting dynamic modulation of root system architecture, supported by meta-analysis of datasets displaying morphological responses of roots to diverse nitrogen supplies. © 2015 Institute of Botany, Chinese Academy of Sciences.
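
At its simplest, the regression step described above is an ordinary least-squares fit of a rate parameter to a time series; the stdlib-only sketch below shows only that idea, while the study's actual hierarchical models are considerably richer and the trait name is illustrative:

```python
def linear_fit(ts, ys):
    """Ordinary least squares for y = a*t + b; the slope a plays the role
    of, e.g., a root elongation rate extracted from a time series of
    measured root lengths."""
    n = len(ts)
    mean_t, mean_y = sum(ts) / n, sum(ys) / n
    slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, ys))
             / sum((t - mean_t) ** 2 for t in ts))
    return slope, mean_y - slope * mean_t
```

Comparing such fitted parameters across nitrate or ammonium treatments is what lets the authors quantify, rather than merely describe, the nitrogen-dependent modulation.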

  14. Architecture for Direct Model-to-Part CNC Manufacturing

    Directory of Open Access Journals (Sweden)

    Gilbert Poon

    2006-02-01

    Full Text Available In the traditional paradigm for Computer Numerical Control (CNC machining, tool paths are programmed offline from the CNC machine using the Computer-Aided Design (CAD model of the workpiece. The program is downloaded to the CNC controller and the part is then machined. Since a CAD model does not exist inside the CNC controller, it is unaware of the part to be machined and cannot predict or prevent errors. Not only is this paradigm labor intensive, it can lead to catastrophic damage if there are errors during machining. This paper presents a new concept for CNC machine control whereby a CAD model of the workpiece exists inside the controller and the tool positions are generated in real-time by the controller using the computer's graphics hardware without human intervention. The new concept was implemented on an experimental lathe machine specifically designed to machine complicated ornamental wood workpieces with a personal computer. An example workpiece was machined and measured using a 3D camera. The measured data was registered to the CAD model to evaluate machining accuracy.
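
    The core idea, a controller that queries an in-memory CAD model for tool positions in real time instead of executing a pre-computed program, can be sketched as follows; the part geometry and function names are hypothetical:

```python
import math

def cad_radius(z, theta):
    """Hypothetical in-controller CAD model of an ornamental turned part:
    a base cylinder with a sinusoidal flute pattern (illustrative only)."""
    return 20.0 + 2.0 * math.sin(3 * theta) * math.sin(math.pi * z / 100.0)

def tool_position(z, theta, tool_radius=3.0):
    """Generate the tool-tip radius on the fly from the model, offset
    outward by the tool radius; no offline G-code is involved."""
    return cad_radius(z, theta) + tool_radius

# Simulated control loop: at each carriage position z and spindle angle,
# the controller samples the model rather than a stored program.
path = [tool_position(z, math.radians(a)) for z in (0, 50) for a in (0, 30)]
```

Because the geometry lives inside the controller, a check against the model (e.g. refusing positions that would gouge the part) can happen at every servo cycle, which is the error-prevention argument of the abstract.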

  15. Ontology Driven Meta-Modeling of Service Oriented Architecture

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... Department of Computer Applications, National Institute of ... Department of Computer Science, Winona State University, MN, USA ... Further, it has aided in service ... “Software: A Research Roadmap”, Workshop on the Future of ... and A. Solberg, “Model-driven service engineering with SoaML”, in ...

  16. Product Lifecycle Management Architecture: A Model Based Systems Engineering Analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Noonan, Nicholas James [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This report is an analysis of the Product Lifecycle Management (PLM) program. The analysis is centered on a need statement generated by a Nuclear Weapons (NW) customer. The need statement captured in this report creates an opportunity for the PLM to provide a robust service as a solution. Lifecycles for both the NW and PLM are analyzed using Model Based System Engineering (MBSE).

  17. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    Science.gov (United States)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.
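
    The runtime extent adjustment can be caricatured as a loop that enlarges a child domain until the altered hydrodynamics no longer reach its boundary; the disturbance metric and step size below are invented stand-ins for the actual model response:

```python
def boundary_disturbance(extent_km, modification_reach_km):
    """Hypothetical response metric: how strongly a local modification
    (e.g. a failed levee) still alters the solution at the child-domain
    boundary. Zero once the domain fully contains the altered flow."""
    return max(0.0, modification_reach_km - extent_km)

def adapt_child_extent(initial_extent, modification_reach, step=5.0, tol=1e-6):
    """Grow the child domain until the altered hydrodynamics are
    contained, mimicking the runtime extent adjustment in the abstract."""
    extent = initial_extent
    while boundary_disturbance(extent, modification_reach) > tol:
        extent += step  # enlarge the subdomain and re-run the child model
    return extent

# A modification whose influence reaches 27 km forces the child domain
# to grow from its initial 10 km extent.
final = adapt_child_extent(initial_extent=10.0, modification_reach=27.0)
```

Each child still takes its boundary conditions from the single full-scale parent run, which is where the computational savings over simulating every scenario on the full domain come from.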

  18. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B. [Pennsylvania State Univ., University Park, PA (United States); Fagan, J.R. Jr. [Allison Engine Company, Indianapolis, IN (United States)

    1995-10-01

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction between the various blade rows and by blade-to-blade variation of flow properties. The program tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor. Penn State will lead the effort to make direct measurements of the momentum and thermal mixing stress tensors in a high-speed multistage compressor flow field in the turbomachinery laboratory at Penn State. They will also process the data by both conventional and conditional spectrum analysis to derive momentum and thermal mixing stress tensors due to blade-to-blade periodic and aperiodic components, revolution periodic and aperiodic components arising from the various blade rows, and non-deterministic (which includes random components) correlations. The modeling results from this program will be publicly available and generally applicable to steady-state Navier-Stokes solvers used for turbomachinery component (compressor or turbine) flow field predictions. These models will lead to improved methodology, including loss and efficiency prediction, for the design of high-efficiency turbomachinery and drastically reduce the time required for the design and development cycle of turbomachinery.
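
    The decomposition behind these deterministic mixing stresses can be written compactly. This is a generic average-passage-style formulation, not necessarily the exact notation used in the program:

```latex
% Decompose the instantaneous velocity into a time-mean part, a
% deterministic (blade-periodic) fluctuation, and a random turbulent one:
u_i(\mathbf{x},t) = \bar{u}_i(\mathbf{x}) + \tilde{u}_i(\mathbf{x},t) + u_i'(\mathbf{x},t)

% Averaging the momentum equation then produces, alongside the usual
% Reynolds stress, a deterministic mixing stress from the periodic part:
\tau_{ij}^{\mathrm{turb}} = -\overline{\rho\, u_i' u_j'},
\qquad
\tau_{ij}^{\mathrm{det}} = -\overline{\rho\, \tilde{u}_i \tilde{u}_j}
```

A steady solver that models the deterministic term, rather than resolving rotor-stator unsteadiness directly, can then account for the loss and efficiency effects the abstract targets.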

  19. Modelling production system architectures in the early phases of product development

    DEFF Research Database (Denmark)

    Guðlaugsson, Tómas Vignir; Ravn, Poul Martin; Mortensen, Niels Henrik

    2016-01-01

    This article suggests a framework for modelling a production system architecture in the early phases of product development. The challenge in these phases is that the products to be produced are not completely defined, and yet decisions need to be made early in the process on what investments are needed and appropriate to enable determination of obtainable product quality. In order to meet this challenge, it is suggested that a visual modelling framework be adopted that clarifies which product and production features are known at a specific time of the project and which features will be worked on, leading to an improved basis for prioritizing activities in the project. Requirements for the contents of the framework are presented, and literature on production and system models is reviewed. The production system architecture modelling framework is founded on methods and approaches in the literature ...

  20. From data to the decision: A software architecture to integrate predictive modelling in clinical settings.

    Science.gov (United States)

    Martinez-Millana, A; Fernandez-Llatas, C; Sacchi, L; Segagni, D; Guillen, S; Bellazzi, R; Traver, V

    2015-08-01

    The application of statistics and mathematics over large amounts of data is providing healthcare systems with new tools for screening and managing multiple diseases. Nonetheless, these tools have many technical and clinical limitations, as they are based on datasets with specific characteristics. This proposition paper describes a novel architecture focused on providing a validation framework for discrimination and prediction models in the screening of Type 2 diabetes. To that end, the architecture has been designed to gather different data sources under a common data structure and, furthermore, to be controlled by a centralized component (Orchestrator) in charge of directing the interaction flows among data sources, models and graphical user interfaces. This innovative approach aims to overcome the data dependency of the models by providing a validation framework for the models as they are used within clinical settings.

  1. Research on mixed network architecture collaborative application model

    Science.gov (United States)

    Jing, Changfeng; Zhao, Xi'an; Liang, Song

    2009-10-01

    When facing the complex requirements of city development, ever-growing spatial data, rapid development of geographical business and increasing business complexity, collaboration between multiple users and departments is urgently needed; however, conventional GIS software (whether Client/Server or Browser/Server) does not support this well. Collaborative application is one good resolution. A collaborative application has four main problems to resolve: consistency and co-edit conflicts, real-time responsiveness, unconstrained operation, and spatial data recoverability. In this paper, an application model called AMCM is put forward based on agents and a multi-level cache. AMCM can be used in a mixed network structure and supports distributed collaboration. An agent is an autonomous, interactive, proactive and reactive computing entity in a distributed environment. Agents have been used in many fields such as computer science and automation, and bring new methods for cooperation and for access to spatial data. A multi-level cache holds a part of the full data; it reduces the network load and speeds up access to and handling of spatial data, especially when editing. With agent technology, we make full use of its intelligent characteristics for managing the cache and for cooperative editing, bringing a new method for distributed cooperation and improving efficiency.
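
    A minimal sketch of the cache idea, with a hypothetical API: an agent keeps a local subset of the spatial data, so repeated reads and edits avoid round-trips to the server:

```python
class CacheAgent:
    """Sketch of an agent managing a local cache of spatial features in
    front of a remote store (class and field names are illustrative)."""

    def __init__(self, store):
        self.store = store   # full spatial dataset (server side)
        self.cache = {}      # local cache: the subset this agent works on

    def get(self, feature_id):
        if feature_id not in self.cache:           # cache miss: one fetch
            self.cache[feature_id] = self.store[feature_id]
        return self.cache[feature_id]              # later hits skip the network

    def edit(self, feature_id, geometry):
        self.get(feature_id)
        self.cache[feature_id] = geometry          # edit locally, sync later

store = {"road_17": "LINESTRING(0 0, 1 1)"}
agent = CacheAgent(store)
agent.edit("road_17", "LINESTRING(0 0, 2 2)")      # server copy untouched
```

In the AMCM setting the agent would additionally negotiate co-edit conflicts and write changes back, but the load reduction comes from exactly this hit/miss behavior.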

  2. A Model Stitching Architecture for Continuous Full Flight-Envelope Simulation of Fixed-Wing Aircraft and Rotorcraft from Discrete Point Linear Models

    Science.gov (United States)

    2016-04-01

    A Model Stitching Architecture for Continuous Full Flight-Envelope Simulation of Fixed-Wing Aircraft and Rotorcraft from Discrete-Point Linear Models. Eric L. Tobias and Mark B. Tischler, Aviation Development Directorate, Aviation and Missile ... The simulation is assembled from a set of discrete-point linear models and trim data. The model stitching simulation architecture is applicable to any aircraft configuration readily ...
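
    The stitching idea named in the title, assembling a continuous simulation from discrete-point linear models and trim data, can be sketched as a blend of system matrices with flight condition; the matrices and anchor airspeeds below are invented, and a simple linear blend is a simplification of the published architecture:

```python
import numpy as np

# Hypothetical discrete-point linear models x_dot = A x, identified at
# two trim airspeeds (kts); values are illustrative only.
anchors = {60.0: np.array([[-0.5, 1.0], [0.0, -2.0]]),
           120.0: np.array([[-0.9, 1.4], [0.0, -3.0]])}

def stitched_A(speed):
    """Interpolate the system matrix with flight condition so the
    simulation varies continuously across the envelope."""
    (v0, A0), (v1, A1) = sorted(anchors.items())
    w = np.clip((speed - v0) / (v1 - v0), 0.0, 1.0)
    return (1 - w) * A0 + w * A1

A90 = stitched_A(90.0)   # halfway between the two anchor models
```

Trim states and controls would be interpolated the same way, so the stitched model stays anchored to the identified point models while remaining continuous in between.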

  3. MAS architecture and knowledge model for vehicles data communication

    Directory of Open Access Journals (Sweden)

    René MANDIAU

    2013-07-01

    Full Text Available Completely autonomous vehicles in traffic should greatly decrease the number of road accident victims, and should allow gains in terms of performance and economy. Modelling the interaction of vehicles, and especially their knowledge sharing, is one of the main challenges in optimizing traffic flow with autonomous vehicles. We propose in this paper a model of knowledge communication between mobile agents on a traffic network. The model of knowledge and of interaction makes it possible to propagate new knowledge without overloading the system with too large a number of communications. To that end, only new knowledge is communicated, and two agents communicate the same piece of knowledge only once. Moreover, in order to allow agents to update their knowledge (perceived or created), a notion of degradation is used. A simulator has been built to evaluate the proposal before implementing it in mobile robots. Some results from the simulator are presented in this article.
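
    The communication rules described in the abstract, share only new knowledge, share each item with a given peer only once, and degrade aging knowledge, can be sketched as follows (the data structures are hypothetical, not the authors' implementation):

```python
class VehicleAgent:
    """Sketch of the abstract's communication rules for mobile agents."""

    def __init__(self, name):
        self.name = name
        self.knowledge = {}        # item -> confidence in [0, 1]
        self.shared_with = set()   # (peer, item) pairs already sent

    def perceive(self, item):
        self.knowledge[item] = 1.0                 # fresh perception

    def degrade(self, rate=0.1):
        """Aging: confidence in each known item decays over time."""
        for item in self.knowledge:
            self.knowledge[item] = max(0.0, self.knowledge[item] - rate)

    def communicate(self, peer):
        """Send only knowledge the peer lacks, and each item only once."""
        for item, conf in self.knowledge.items():
            key = (peer.name, item)
            if item not in peer.knowledge and key not in self.shared_with:
                peer.knowledge[item] = conf
                self.shared_with.add(key)

a, b = VehicleAgent("a"), VehicleAgent("b")
a.perceive("obstacle@crossing_3")
a.communicate(b)
a.communicate(b)   # second call sends nothing: already shared once
```

The once-per-peer bookkeeping is what keeps message volume bounded as knowledge propagates across the network.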

  4. MAS architecture and knowledge model for vehicles data communication

    Directory of Open Access Journals (Sweden)

    Emmanuel ADAM

    2012-07-01

    Full Text Available Completely autonomous vehicles in traffic should greatly decrease the number of road accident victims, and should allow gains in terms of performance and economy. Modelling the interaction of vehicles, and especially their knowledge sharing, is one of the main challenges in optimizing traffic flow with autonomous vehicles. We propose in this paper a model of knowledge communication between mobile agents on a traffic network. The model of knowledge and of interaction makes it possible to propagate new knowledge without overloading the system with too large a number of communications. To that end, only new knowledge is communicated, and two agents communicate the same piece of knowledge only once. Moreover, in order to allow agents to update their knowledge (perceived or created), a notion of degradation is used. A simulator has been built to evaluate the proposal before implementing it in mobile robots. Some results from the simulator are presented in this article.

  5. Creative Practices Embodied, Embedded, and Enacted in Architectural Settings: Toward an Ecological Model of Creativity

    Science.gov (United States)

    Malinin, Laura H.

    2016-01-01

    Memoirs by eminently creative people often describe architectural spaces and qualities they believe instrumental for their creativity. However, places designed to encourage creativity have had mixed results, with some found to decrease creative productivity for users. This may be due, in part, to the lack of a suitable empirical theory or model to guide design strategies. Relationships between creative cognition and features of the physical environment remain largely uninvestigated in the scientific literature, despite general agreement among researchers that human cognition is physically and socially situated. This paper investigates what role architectural settings may play in creative processes by examining documented first-person and biographical accounts of creativity with respect to three central theories of situated cognition. First, the embodied thesis argues that cognition encompasses both the mind and the body. Second, the embedded thesis maintains that people exploit features of the physical and social environment to increase their cognitive capabilities. Third, the enaction thesis describes cognition as dependent upon a person’s interactions with the world. Common themes inform three propositions, illustrated in a new theoretical framework describing relationships between people and their architectural settings with respect to different cognitive processes of creativity. The framework is intended as a starting point toward an ecological model of creativity, which may be used to guide future creative process research and architectural design strategies to support user creative productivity. PMID:26779087

  6. Creative Practices Embodied, Embedded, and Enacted in Architectural Settings: Toward an Ecological Model of Creativity.

    Science.gov (United States)

    Malinin, Laura H

    2015-01-01

    Memoirs by eminently creative people often describe architectural spaces and qualities they believe instrumental for their creativity. However, places designed to encourage creativity have had mixed results, with some found to decrease creative productivity for users. This may be due, in part, to the lack of a suitable empirical theory or model to guide design strategies. Relationships between creative cognition and features of the physical environment remain largely uninvestigated in the scientific literature, despite general agreement among researchers that human cognition is physically and socially situated. This paper investigates what role architectural settings may play in creative processes by examining documented first-person and biographical accounts of creativity with respect to three central theories of situated cognition. First, the embodied thesis argues that cognition encompasses both the mind and the body. Second, the embedded thesis maintains that people exploit features of the physical and social environment to increase their cognitive capabilities. Third, the enaction thesis describes cognition as dependent upon a person's interactions with the world. Common themes inform three propositions, illustrated in a new theoretical framework describing relationships between people and their architectural settings with respect to different cognitive processes of creativity. The framework is intended as a starting point toward an ecological model of creativity, which may be used to guide future creative process research and architectural design strategies to support user creative productivity.

  7. Modeling the impact of scaffold architecture and mechanical loading on collagen turnover in engineered cardiovascular tissues.

    Science.gov (United States)

    Argento, G; de Jonge, N; Söntjens, S H M; Oomens, C W J; Bouten, C V C; Baaijens, F P T

    2015-06-01

    The anisotropic collagen architecture of an engineered cardiovascular tissue has a major impact on its in vivo mechanical performance. This evolving collagen architecture is determined by the initial scaffold microstructure and by mechanical loading. Here, we developed and validated a theoretical and computational microscale model to quantitatively understand the interplay between scaffold architecture and mechanical loading on collagen synthesis and degradation. Using input from experimental studies, we hypothesize that both the microstructure of the scaffold and the loading conditions influence collagen turnover. The evaluation of the mechanical and topological properties of in vitro engineered constructs reveals that the formation of extracellular matrix layers on top of the scaffold surface influences the mechanical anisotropy of the construct. Results show that the microscale model can successfully capture the collagen arrangement between the fibers of an electrospun scaffold under static and cyclic loading conditions. Contact guidance by the scaffold, and not the applied load, dominates the collagen architecture. Therefore, when the collagen grows inside the pores of the scaffold, pronounced scaffold anisotropy guarantees the development of a construct that mimics the mechanical anisotropy of the native cardiovascular tissue.
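
    A minimal turnover law consistent with the abstract's finding, with symbols and the specific form chosen for illustration rather than taken from the authors' model, would track the collagen content C(φ) in each fiber direction φ:

```latex
% Collagen turnover in direction \varphi: synthesis biased by contact
% guidance G(\varphi) from the scaffold fibers, first-order degradation.
\frac{\partial C(\varphi, t)}{\partial t} = k_s\, G(\varphi) - k_d\, C(\varphi, t)

% Steady state: the collagen architecture inherits the scaffold
% anisotropy, consistent with contact guidance dominating applied load.
C_{\infty}(\varphi) = \frac{k_s}{k_d}\, G(\varphi)
```

A load-dominated variant would replace G(φ) with a strain-dependent bias; the abstract's conclusion corresponds to the scaffold-guided form above.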

  8. A service-oriented architecture for integrating the modeling and formal verification of genetic regulatory networks

    Directory of Open Access Journals (Sweden)

    Page Michel

    2009-12-01

    Full Text Available Abstract Background The study of biological networks has led to the development of increasingly large and detailed models. Computer tools are essential for the simulation of the dynamical behavior of the networks from the model. However, as the size of the models grows, it becomes infeasible to manually verify the predictions against experimental data or identify interesting features in a large number of simulation traces. Formal verification based on temporal logic and model checking provides promising methods to automate and scale the analysis of the models. However, a framework that tightly integrates modeling and simulation tools with model checkers is currently missing, on both the conceptual and the implementational level. Results We have developed a generic and modular web service, based on a service-oriented architecture, for integrating the modeling and formal verification of genetic regulatory networks. The architecture has been implemented in the context of the qualitative modeling and simulation tool GNA and the model checkers NUSMV and CADP. GNA has been extended with a verification module for the specification and checking of biological properties. The verification module also allows the display and visual inspection of the verification results. Conclusions The practical use of the proposed web service is illustrated by means of a scenario involving the analysis of a qualitative model of the carbon starvation response in E. coli. The service-oriented architecture allows modelers to define the model and proceed with the specification and formal verification of the biological properties by means of a unified graphical user interface. This guarantees a transparent access to formal verification technology for modelers of genetic regulatory networks.

  9. Cultural heritage conservation and communication by digital modeling tools. Case studies: minor architectures of the Thirties in the Turin area

    Science.gov (United States)

    Bruno, A., Jr.; Spallone, R.

    2015-08-01

    Between the end of the twenties and the beginning of World War Two, Turin, like most Italian cities, was endowed by the fascist regime with many new buildings to guarantee its visibility and to control the territory: the main fascist party houses and the local ones. The style adopted for these constructions was inspired by the guidelines of the Modern Movement, which were being spread by a generation of architects such as Le Corbusier, Gropius and Mendelsohn. At the end of the war many buildings were converted to other functions, which led to heavy transformations that did not respect their original worth; others were demolished. Today it is possible to rebuild those lost architectures in their original form, as created by their architects on paper (and in their minds). This process can guarantee the three-dimensional perception, the authenticity of the materials and the placement within the Turin urban tissue, using static and dynamic digital representation systems. The "three-dimensional re-drawing" of the projects, conceived as a heuristic practice devoted to revealing the original idea of the project, is inserted into a digital model of the urban and natural context as we can experience it today, to simulate the perceptive effects that the building could stir up today. The modeling skills are the basis for producing videos able to explore the relationship between the environment and the "re-built architectures", describing, with synthetic movie techniques, the main formal and perceptive roots. The model represents a scientific product that can be included in a virtual archive of cultural goods to preserve the collective memory of the architectural and urban past image of Turin.

  10. Beyond Virtual Replicas: 3D Modeling and Maltese Prehistoric Architecture

    Directory of Open Access Journals (Sweden)

    Filippo Stanco

    2013-01-01

    Full Text Available In the past decade, computer graphics have become strategic for the development of projects aimed at the interpretation of archaeological evidence and the dissemination of scientific results to the public. Among all the solutions available, the use of 3D models is particularly relevant for the reconstruction of poorly preserved sites and monuments destroyed by natural causes or human actions. These digital replicas are, at the same time, a virtual environment that can be used as a tool for the interpretative hypotheses of archaeologists and as an effective medium for a visual description of the cultural heritage. In this paper, the innovative methodology and aims and outcomes of a virtual reconstruction of the Borg in-Nadur megalithic temple, carried out by Archeomatica Project of the University of Catania, are offered as a case study for a virtual archaeology of prehistoric Malta.

  11. Operations Assessment of Launch Vehicle Architectures using Activity Based Cost Models

    Science.gov (United States)

    Ruiz-Torres, Alex J.; McCleskey, Carey

    2000-01-01

    The growing emphasis on affordability for space transportation systems requires the assessment of new space vehicles for all life cycle activities, from design and development, through manufacturing and operations. This paper addresses the operational assessment of launch vehicles, focusing on modeling the ground support requirements of a vehicle architecture, and estimating the resulting costs and flight rate. This paper proposes the use of Activity Based Costing (ABC) modeling for this assessment. The model uses expert knowledge to determine the activities, the activity times and the activity costs based on vehicle design characteristics. The approach provides several advantages to current approaches to vehicle architecture assessment including easier validation and allowing vehicle designers to understand the cost and cycle time drivers.
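
    The ABC roll-up the paper proposes can be illustrated in a few lines: per-activity times and rates drive both cost per flight and the achievable flight rate. The activities, hours and rates below are invented for the sketch, not data from the study:

```python
# Illustrative ground-operations activities for a launch vehicle design:
# (activity name, hours per flight, labor rate in $/hr).
activities = [
    ("vehicle integration",  420, 95.0),
    ("propellant servicing", 160, 110.0),
    ("pad operations",       240, 90.0),
]

def ground_ops_cost(acts):
    """ABC: cost accumulates per activity, driven by design-dependent
    activity times, rather than being spread by one overhead factor."""
    return sum(hours * rate for _name, hours, rate in acts)

def flights_per_year(acts, hours_available=8000.0):
    """Summed activity hours set the turnaround time, which bounds
    the annual flight rate for one vehicle and ground crew."""
    turnaround = sum(hours for _name, hours, _rate in acts)
    return hours_available / turnaround

cost = ground_ops_cost(activities)    # $ per flight
rate = flights_per_year(activities)   # flights per year
```

Because each activity's hours are tied to vehicle design characteristics, designers can see which features drive cost and cycle time, the key advantage claimed in the abstract.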

  12. ARCHITECTURAL FORM CREATION IN THE DESIGN STUDIO: PHYSICAL MODELING AS AN EFFECTIVE DESIGN TOOL

    Directory of Open Access Journals (Sweden)

    Wael Abdelhameed

    2011-11-01

    Full Text Available This research paper attempts to shed more light on an area of the design studio, which concerns with the use of physical modeling as a design medium in architectural form creation. An experiment has been carried out during an architectural design studio in order to not only investigate physical modeling as a tool of form creation but also improve visual design thinking that students employ while using this manual tool. To achieve the research objective, a method was proposed and applied to track form creation processes, based upon three types of operation, namely: sketching transformations, divergent physical-modeling transformations, and convergent physical-modeling transformations. The method helps record the innovative transitions of form during conceptual designing in a simple way. Investigating form creation processes and activities associated with visual design thinking enables the research to conclude to general results of the role of physical modeling in the conceptual phase of designing, and to specific results of the methods used in this architectural design studio experiment.

  13. Analysis of clinical complication data for radiation hepatitis using a parallel architecture model

    International Nuclear Information System (INIS)

    Jackson, A.; Haken, R.K. ten; Robertson, J.M.; Kessler, M.L.; Kutcher, G.J.; Lawrence, T.S.

    1995-01-01

    Purpose: The detailed knowledge of dose volume distributions available from the three-dimensional (3D) conformal radiation treatment of tumors in the liver (reported elsewhere) offers new opportunities to quantify the effect of volume on the probability of producing radiation hepatitis. We aim to test a new parallel architecture model of normal tissue complication probability (NTCP) with these data. Methods and Materials: Complication data and dose volume histograms from a total of 93 patients with normal liver function, treated on a prospective protocol with 3D conformal radiation therapy and intraarterial hepatic fluorodeoxyuridine, were analyzed with a new parallel architecture model. Patient treatment fell into six categories differing in doses delivered and volumes irradiated. By modeling the radiosensitivity of liver subunits, we are able to use dose volume histograms to calculate the fraction of the liver damaged in each patient. A complication results if this fraction exceeds the patient's functional reserve. To determine the patient distribution of functional reserves and the subunit radiosensitivity, the maximum likelihood method was used to fit the observed complication data. Results: The parallel model fit the complication data well, although uncertainties on the functional reserve distribution and subunit radiosensitivity are highly correlated. Conclusion: The observed radiation hepatitis complications show a threshold effect that can be described well with a parallel architecture model. However, additional independent studies are required to better determine the parameters defining the functional reserve distribution and subunit radiosensitivity.
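
    A hedged sketch of the parallel-architecture calculation: subunit damage is computed from a differential dose volume histogram via a logistic dose response, and a complication occurs when the damaged fraction exceeds a normally distributed functional reserve. The parameter values are illustrative, not the fitted values from the study:

```python
import math

def damaged_fraction(dvh, d50=35.0, k=3.0):
    """Fraction of liver subunits damaged. dvh is a differential DVH as
    (dose_Gy, volume_fraction) bins; each subunit's local damage
    probability follows a logistic dose response (d50, k illustrative)."""
    return sum(v / (1.0 + (d50 / max(d, 1e-9)) ** k) for d, v in dvh)

def ntcp(dvh, reserve_mean=0.5, reserve_sd=0.1):
    """Parallel model: a complication occurs when the damaged fraction
    exceeds the patient's functional reserve, so NTCP is the probability
    that a normally distributed reserve lies below that fraction."""
    f = damaged_fraction(dvh)
    z = (f - reserve_mean) / reserve_sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Uniform 40 Gy to 70% of the liver, 0 Gy to the rest (illustrative).
p = ntcp([(40.0, 0.7), (0.0, 0.3)])
```

The threshold effect noted in the Conclusion falls out of this structure: NTCP stays near zero until the damaged fraction approaches the bulk of the reserve distribution, then rises steeply.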

  14. Photo-Modeling and Cloud Computing. Applications in the Survey of Late Gothic Architectural Elements

    Science.gov (United States)

    Casu, P.; Pisu, C.

    2013-02-01

    This work proposes the application of the latest methods of photo-modeling to the study of Gothic architecture in Sardinia. The aim is to assess the versatility and ease of use of such documentation tools for studying architecture and its ornamental details. The paper illustrates a procedure of integrated survey and restitution, with the purpose of obtaining an accurate 3D model of some Gothic portals. We combined the contact survey and the photographic survey oriented to photo-modelling. The software used is 123D Catch by Autodesk, an Image Based Modelling (IBM) system available for free. It is a web-based application that requires a few simple steps to produce a mesh from a set of unoriented photos. We tested the application on four portals, working at different scales of detail: first the whole portal and then the different architectural elements that compose it. We were able to model all the elements and to quickly extract simple sections, in order to compare the moldings, highlighting similarities and differences. Working on different sites at different scales of detail allowed us to test the procedure under different conditions of exposure, sunlight, accessibility, surface degradation and material type, and with different equipment and operators, showing whether the final result could be affected by these factors. We tested a procedure, articulated in a few repeatable steps, that can be applied, with the right corrections and adaptations, to similar cases and/or larger or smaller elements.

  15. Inverted nuclear architecture and its development during differentiation of mouse rod photoreceptor cells: a new model to study nuclear architecture.

    Science.gov (United States)

    Solovei, I; Joffe, B

    2010-09-01

    Interphase nuclei have a conserved architecture: heterochromatin occupies the nuclear periphery, whereas euchromatin resides in the nuclear interior. It has recently been found that rod photoreceptor cells of nocturnal mammals have an inverted architecture, which transforms these nuclei in microlenses and supposedly facilitates a reduction in photon loss in the retina. This unique deviation from the nearly universal pattern throws a new light on the nuclear organization. In the article we discuss the implications of the studies of the inverted nuclei for understanding the role of the spatial organization of the nucleus in nuclear functions.

  16. A Reference Architecture for Provisioning of Tools as a Service: Meta-Model, Ontologies and Design Elements

    DEFF Research Database (Denmark)

    Chauhan, Muhammad Aufeef; Babar, Muhammad Ali; Sheng, Quan Z.

    2016-01-01

    Software Architecture (SA) plays a critical role in designing, developing and evolving cloud-based platforms that can be used to provision different types of services to consumers on demand. In this paper, we present a Reference Architecture (RA) for designing cloud-based Tools as a service SPACE...... (TSPACE) for provisioning a bundled suite of tools by following the Software as a Service (SaaS) model. The reference architecture has been designed by leveraging information structuring approaches and by using well-known architecture design principles and patterns. The RA has been documented using view...

  17. Architectural freedom and industrialized architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    to explain that architecture can be thought of as a complex and diverse design through customization, telling exactly the revitalized story about the change to a contemporary sustainable and better performing expression in direct relation to the given context. Through the last couple of years we have...... proportions, to organize the process on site choosing either one-room wall components or several-room wall components – either horizontally or vertically. Combined with the seamless joint, the playing with these possibilities means the new industrialized architecture can deliver variations in choice of solutions...... for retrofit design. If we add the question of the installations, e.g. ventilation, to this systematic thinking of building technique we get a diverse and functional architecture, thereby creating a new and clearer storytelling about new and smart system-based thinking behind architectural expression....

  18. Architectural freedom and industrialized architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    to explain that architecture can be thought of as a complex and diverse design through customization, telling exactly the revitalized story about the change to a contemporary sustainable and better performing expression in direct relation to the given context. Through the last couple of years we have...... expression in the specific housing area. It is the aim of this article to expand the different design strategies which architects can use – to give the individual project attitudes and designs with architectural quality. Through the customized component production it is possible to choose different...... for retrofit design. If we add the question of the installations, e.g. ventilation, to this systematic thinking of building technique we get a diverse and functional architecture, thereby creating a new and clearer storytelling about new and smart system-based thinking behind architectural expression....

  19. Architectural freedom and industrialised architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    Architectural freedom and industrialized architecture. Inge Vestergaard, Associate Professor, Cand. Arch. Aarhus School of Architecture, Denmark Noerreport 20, 8000 Aarhus C Telephone +45 89 36 0000 E-mail inge.vestergaard@aarch.dk Based on the repetitive architecture from the "building boom" 1960...... customization, telling exactly the revitalized story about the change to a contemporary sustainable and better performing expression in direct relation to the given context. Through the last couple of years we have in Denmark been focusing on a more sustainable and low-energy building technique, which also includes...... to the building physics problems a new industrialized period has started based on lightweight elements basically made of wooden structures, faced with different suitable materials meant for individual expression for the specific housing area. It is the purpose of this article to widen the different design...

  20. Cloud GIS and 3d Modelling to Enhance Sardinian Late Gothic Architectural Heritage

    Science.gov (United States)

    Pisu, C.; Casu, P.

    2013-07-01

    This work proposes the documentation, virtual reconstruction and spreading of architectural heritage through the use of software packages that operate in cloud computing. Cloud computing makes available a variety of applications and tools which can be effective both for the preparation and for the publication of different kinds of data. We tested the versatility and ease of use of such documentation tools in order to study a particular architectural phenomenon. The ultimate aim is to develop a multi-scale and multi-layer information system, oriented to the divulgation of Sardinian late gothic architecture. We tested the applications on portals of late Gothic architecture in Sardinia. The actions of conservation, protection and enhancement of cultural heritage are all founded on the social function that can be reached only through the widest possible fruition by the community. The applications of digital technologies on cultural heritage can contribute to the construction of effective communication models that, relying on sensory and emotional involvement of the viewer, can attract a wider audience to cultural content.

  1. CLOUD GIS AND 3D MODELLING TO ENHANCE SARDINIAN LATE GOTHIC ARCHITECTURAL HERITAGE

    Directory of Open Access Journals (Sweden)

    C. Pisu

    2013-07-01

    Full Text Available This work proposes the documentation, virtual reconstruction and spreading of architectural heritage through the use of software packages that operate in cloud computing. Cloud computing makes available a variety of applications and tools which can be effective both for the preparation and for the publication of different kinds of data. We tested the versatility and ease of use of such documentation tools in order to study a particular architectural phenomenon. The ultimate aim is to develop a multi-scale and multi-layer information system, oriented to the divulgation of Sardinian late gothic architecture. We tested the applications on portals of late Gothic architecture in Sardinia. The actions of conservation, protection and enhancement of cultural heritage are all founded on the social function that can be reached only through the widest possible fruition by the community. The applications of digital technologies on cultural heritage can contribute to the construction of effective communication models that, relying on sensory and emotional involvement of the viewer, can attract a wider audience to cultural content.

  2. A Modular GIS-Based Software Architecture for Model Parameter Estimation using the Method of Anchored Distributions (MAD)

    Science.gov (United States)

    Ames, D. P.; Osorio-Murillo, C.; Over, M. W.; Rubin, Y.

    2012-12-01

    The Method of Anchored Distributions (MAD) is an inverse modeling technique that is well-suited for estimation of spatially varying parameter fields using limited observations and Bayesian methods. This presentation will discuss the design, development, and testing of a free software implementation of the MAD technique using the open source DotSpatial geographic information system (GIS) framework, R statistical software, and the MODFLOW groundwater model. This new tool, dubbed MAD-GIS, is built using a modular architecture that supports the integration of external analytical tools and models for key computational processes, including a forward model (e.g. MODFLOW, HYDRUS) and geostatistical analysis (e.g. R, GSLIB). The GIS-based graphical user interface provides a relatively simple way for new users of the technique to prepare the spatial domain, to identify observation and anchor points, to perform the MAD analysis using a selected forward model, and to view results. MAD-GIS uses the Managed Extensibility Framework (MEF) provided by the Microsoft .NET programming platform to support integration of different modeling and analytical tools at run-time through a custom "driver." Each driver establishes a connection with external programs through a programming interface, which provides the elements for communicating with the core MAD software. This presentation gives an example of adapting MODFLOW to serve as the external forward model in MAD-GIS for inferring the distribution functions of key MODFLOW parameters. Additional drivers for other models are being developed, and it is expected that the open source nature of the project will engender the development of additional model drivers by third-party scientists.
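    The driver mechanism described above — external tools connected to the core software through a common programming interface — can be sketched as a simple plugin registry. This is a minimal Python illustration, not the actual MEF/.NET implementation; the class names, the registry, and the toy linear forward model are invented for the example.

    ```python
    from abc import ABC, abstractmethod

    class ForwardModelDriver(ABC):
        """Hypothetical common interface every forward-model driver implements."""
        @abstractmethod
        def run(self, parameter_field):
            """Run the forward model and return simulated observations."""

    class MockModflowDriver(ForwardModelDriver):
        """Stand-in for a MODFLOW driver: here, a toy linear head response."""
        def run(self, parameter_field):
            return [2.0 * k + 1.0 for k in parameter_field]

    # Registry through which the core software discovers drivers at run-time.
    DRIVERS = {}

    def register_driver(name, driver_cls):
        DRIVERS[name] = driver_cls

    register_driver("modflow", MockModflowDriver)

    driver = DRIVERS["modflow"]()
    print(driver.run([0.5, 1.5]))  # simulated observations for two cells
    ```

    The core analysis code only ever sees the `ForwardModelDriver` interface, so swapping MODFLOW for HYDRUS would amount to registering a different driver class.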

  3. Dependencies among Architectural Views Got from Software Requirements Based on a Formal Model

    Directory of Open Access Journals (Sweden)

    Osis Janis

    2014-12-01

    Full Text Available A system architect has software requirements and some unspecified knowledge about a problem domain (e.g., an enterprise) as source information for the assessment and evaluation of possible solutions and for reaching the target point, a preliminary software design. The deciding factor is the architect’s experience and expertise in the problem domain (“AS-IS”). The proposed approach is dedicated to assisting a system architect in making an appropriate decision on the solution (“TO-BE”). It is based on a formal mathematical model, the Topological Functioning Model (TFM). Compliant TFMs can be transformed into software architectural views. The paper demonstrates and discusses tracing dependency links from the requirements to and between the architectural views.

  4. Application of the Life Cycle Analysis and the Building Information Modelling Software in the Architectural Climate Change-Oriented Design Process

    Science.gov (United States)

    Gradziński, Piotr

    2017-10-01

    Whereas the world’s climate is changing (inter alia, under the influence of architectural activity), the author attempts to reorient design practice primarily toward the use of, and adaptation to, climatic conditions. Applying, in the early stages of the architectural design process, Life Cycle Analysis (LCA) and digital analytical BIM (Building Information Modelling) tools defines the overriding requirements which the designer/architect should meet. The first part of the text characterizes the influences of architectural activity (consumption, pollution, waste, etc.) and of the use of building materials (embodied energy, embodied carbon, Global Warming Potential, etc.) in the sense of direct negative environmental impact. The second part presents a review of the methods and analytical techniques that prevent negative influences: firstly, study of the building using Life Cycle Analysis of the structure (e.g. materials) and functioning (e.g. energy consumption) of the architectural object (stages: before use, use, after use); secondly, the use of digital analytical tools for determining the benefits of running multi-faceted simulations of environmental factors (exposure to light, shade, wind) that directly affect the shaping of the building’s form. In conclusion, the author’s research results highlight that designing buildings with the above-mentioned elements (LCA, BIM) allows early design decisions in the process of shaping architectural form to be corrected, minimizing the impact on nature and the environment. The work refers directly to the architectural-environmental dimension, orienting the design process of buildings with respect to widely understood climatic change.

  5. Modeling and numerical techniques for high-speed digital simulation of nuclear power plants

    International Nuclear Information System (INIS)

    Wulff, W.; Cheng, H.S.; Mallen, A.N.

    1987-01-01

    Conventional computing methods are contrasted with newly developed high-speed, low-cost computing techniques for simulating normal and accidental transients in nuclear power plants. Six principles are formulated for cost-effective high-fidelity simulation, with emphasis on modeling of transient two-phase flow coolant dynamics in nuclear reactors. Available computing architectures are characterized. It is shown that combining the newly developed modeling and computing principles with existing special-purpose peripheral processors achieves low-cost, high-speed simulation with high fidelity and outstanding user convenience, suitable for detailed reactor plant response analyses.

  6. Freight data architecture business process, logical data model, and physical data model.

    Science.gov (United States)

    2014-09-01

    This document summarizes the study team's efforts to establish data-sharing partnerships and relay the lessons learned. In addition, it provides information on a prototype freight data architecture and supporting description and specifications ...

  7. The research of contamination regularities of historical buildings and architectural monuments by methods of computer modeling

    Directory of Open Access Journals (Sweden)

    Kuzmichev Andrey A.

    2017-01-01

    Full Text Available Due to the active pace of urbanization and the rapid development of industry, the external appearance of buildings and architectural monuments in the urban environment requires special attention from the standpoint of visual ecology. Dust deposition from polluted atmospheric air is one of the key aspects of the degradation of building facades. With the help of modern computer modeling methods it is possible to evaluate the impact of polluted atmospheric air on the external facades of buildings in order to preserve them.

  8. A functional-structural kiwifruit vine model integrating architecture, carbon dynamics and effects of the environment.

    Science.gov (United States)

    Cieslak, Mikolaj; Seleznyova, Alla N; Hanan, Jim

    2011-04-01

    Functional-structural modelling can be used to increase our understanding of how different aspects of plant structure and function interact, identify knowledge gaps and guide priorities for future experimentation. By integrating existing knowledge of the different aspects of the kiwifruit (Actinidia deliciosa) vine's architecture and physiology, our aim is to develop conceptual and mathematical hypotheses on several of the vine's features: (a) plasticity of the vine's architecture; (b) effects of organ position within the canopy on its size; (c) effects of environment and horticultural management on shoot growth, light distribution and organ size; and (d) role of carbon reserves in early shoot growth. Using the L-system modelling platform, a functional-structural plant model of a kiwifruit vine was created that integrates architectural development, mechanistic modelling of carbon transport and allocation, and environmental and management effects on vine and fruit growth. The branching pattern was captured at the individual shoot level by modelling axillary shoot development using a discrete-time Markov chain. An existing carbon transport resistance model was extended to account for several source/sink components of individual plant elements. A quasi-Monte Carlo path-tracing algorithm was used to estimate the absorbed irradiance of each leaf. Several simulations were performed to illustrate the model's potential to reproduce the major features of the vine's behaviour. The model simulated vine growth responses that were qualitatively similar to those observed in experiments, including the plastic response of shoot growth to local carbon supply, the branching patterns of two Actinidia species, the effect of carbon limitation and topological distance on fruit size and the complex behaviour of sink competition for carbon. The model is able to reproduce differences in vine and fruit growth arising from various experimental treatments. This implies it will be a valuable ...
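    The discrete-time Markov chain used to model axillary shoot development can be illustrated with a minimal sketch. The states and transition probabilities below are hypothetical placeholders, not the fitted kiwifruit values, and `simulate_axis` is an invented helper name.

    ```python
    import random

    # Hypothetical shoot-fate states and transition matrix (rows sum to 1).
    STATES = ["dormant", "short", "long"]
    P = {
        "dormant": {"dormant": 0.6, "short": 0.3, "long": 0.1},
        "short":   {"dormant": 0.2, "short": 0.5, "long": 0.3},
        "long":    {"dormant": 0.1, "short": 0.3, "long": 0.6},
    }

    def next_state(state, rng):
        """Sample the next state from the row of P for the current state."""
        r, cum = rng.random(), 0.0
        for s in STATES:
            cum += P[state][s]
            if r < cum:
                return s
        return STATES[-1]

    def simulate_axis(n_nodes, rng=None):
        """Assign a fate to each successive axillary bud along one axis."""
        rng = rng or random.Random(0)  # seeded for reproducibility
        fates, state = [], "dormant"
        for _ in range(n_nodes):
            state = next_state(state, rng)
            fates.append(state)
        return fates

    print(simulate_axis(8))
    ```

    In the actual model the chain would be embedded in the L-system rules, so that each simulated shoot's branching pattern is drawn from the fitted transition probabilities.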

  9. A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture

    Science.gov (United States)

    Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.

    2005-01-01

    Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.
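    The CSP structure described above — a blackboard process communicating with each plugin over its own channel — can be approximated in ordinary code with one queue per plugin. This is an illustrative Python sketch, not Cougaar or a CSP verification tool; the plugin names and the task string are invented.

    ```python
    import queue
    import threading

    # One channel (queue) per plugin, plus a shared channel back to the blackboard.
    def plugin(name, inbox, outbox):
        task = inbox.get()                 # receive on the plugin's channel
        outbox.put((name, task.upper()))   # "process" the task and reply

    blackboard_in = queue.Queue()
    channels = {name: queue.Queue() for name in ("planner", "executor")}

    threads = [
        threading.Thread(target=plugin, args=(name, ch, blackboard_in))
        for name, ch in channels.items()
    ]
    for t in threads:
        t.start()

    # Blackboard publishes a task on every plugin's channel ...
    for ch in channels.values():
        ch.put("allocate-resources")

    # ... and collects each plugin's result.
    results = dict(blackboard_in.get() for _ in channels)
    for t in threads:
        t.join()
    print(results)
    ```

    The point of the CSP formalisation is that exactly this channel topology (blackboard-to-plugin and back) can be stated as processes and checked for properties such as deadlock freedom.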

  10. How do architecture patterns and tactics interact? A model and annotation

    NARCIS (Netherlands)

    Harrison, Neil B.; Avgeriou, Paris

    2010-01-01

    Software architecture designers inevitably work with both architecture patterns and tactics. Architecture patterns describe the high-level structure and behavior of software systems as the solution to multiple system requirements, whereas tactics are design decisions that improve individual quality

  11. Structural modeling techniques by finite element method

    International Nuclear Information System (INIS)

    Kang, Yeong Jin; Kim, Geung Hwan; Ju, Gwan Jeong

    1991-01-01

    This book includes: an introduction and table of contents; Chapter 1, finite element idealization (introduction; summary of the finite element method; equilibrium and compatibility in the finite element solution; degrees of freedom; symmetry and anti-symmetry; modeling guidelines; local analysis; example; references); Chapter 2, static analysis (structural geometry; finite element models; analysis procedure; modeling guidelines; references); and Chapter 3, dynamic analysis (models for dynamic analysis; dynamic analysis procedures; modeling guidelines).

  12. Model-based security analysis of the German health card architecture.

    Science.gov (United States)

    Jürjens, J; Rumm, R

    2008-01-01

    Health-care information systems are particularly security-critical. In order to make these applications secure, the security analysis has to be an integral part of the system design and IT management process for such systems. This work presents the experiences and results from the security analysis of the system architecture of the German Health Card, by making use of an approach to model-based security engineering that is based on the UML extension UMLsec. The focus lies on the security mechanisms and security policies of the smart-card-based architecture which were analyzed using the UMLsec method and tools. Main results of the paper include a report on the employment of the UMLsec method in an industrial health information systems context as well as indications of its benefits and limitations. In particular, two potential security weaknesses were detected and countermeasures discussed. The results indicate that it can be feasible to apply a model-based security analysis using UMLsec to an industrial health information system like the German Health Card architecture, and that doing so can have concrete benefits (such as discovering potential weaknesses, and an increased confidence that no further vulnerabilities of the kind that were considered are present).

  13. Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali; Vishwanath, Venkatram; Kumaran, Kalyan

    2017-01-01

    Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel’s second-generation Xeon Phi architecture code-named Knights Landing (KNL) for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.
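    Although the SKOPE/KNL model itself is not reproduced here, the flavor of an analytical hardware model can be shown with a minimal roofline-style projection: a kernel's time is bounded by either its compute or its memory traffic. The peak numbers are rough, KNL-like assumptions, and `projected_time` is an invented helper.

    ```python
    def projected_time(flops, bytes_moved, peak_flops, peak_bw):
        """A kernel is bound by compute or memory, whichever is slower."""
        return max(flops / peak_flops, bytes_moved / peak_bw)

    # Hypothetical KNL-like peaks: 3 TF/s double precision, 400 GB/s MCDRAM.
    t = projected_time(flops=1e12, bytes_moved=8e11,
                       peak_flops=3e12, peak_bw=4e11)
    print(f"projected kernel time: {t:.3f} s")  # memory-bound here: 2.000 s
    ```

    A real analytical model adds terms for caches, vectorization efficiency and latency, which is how prediction errors in the 10% to 20% range become achievable.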

  14. Constructing service-oriented architecture adoption maturity matrix using Kano model

    Science.gov (United States)

    Hamzah, Mohd Hamdi Irwan; Baharom, Fauziah; Mohd, Haslina

    2017-10-01

    Commonly, organizations adopt Service-Oriented Architecture (SOA) because it provides flexible reconfiguration and can reduce development time and cost. To guide SOA adoption, industry and academia have constructed SOA maturity models. However, there are only a limited number of works on how to construct the matrix in previous SOA maturity models. Therefore, this study provides a method that can be used to construct the matrix in an SOA maturity model. The study adapts the Kano Model to construct a cross-evaluation matrix focused on the IT and business benefits of SOA adoption. The study found that the Kano Model provides a suitable and appropriate method for constructing the cross-evaluation matrix in an SOA maturity model. The Kano Model can also be used to plot, organize and better represent the evaluation dimensions for evaluating SOA adoption.
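    The Kano-based cross-evaluation described above can be sketched as a lookup from (functional, dysfunctional) answer pairs to Kano categories. The table below is a simplified subset of the standard Kano evaluation table, and the SOA-benefit ratings are invented for illustration.

    ```python
    # Simplified subset of the Kano evaluation table: each benefit is rated
    # with a "functional" (feature present) and "dysfunctional" (feature
    # absent) answer, and the pair maps to a Kano category.
    KANO = {
        ("like", "dislike"): "one-dimensional",
        ("like", "neutral"): "attractive",
        ("neutral", "dislike"): "must-be",
        ("neutral", "neutral"): "indifferent",
    }

    def classify(functional, dysfunctional):
        return KANO.get((functional, dysfunctional), "questionable")

    # Hypothetical SOA-adoption benefits rated by stakeholders:
    benefits = {
        "reduced development cost": ("like", "dislike"),
        "service reusability": ("like", "neutral"),
        "vendor independence": ("neutral", "neutral"),
    }
    for name, answers in benefits.items():
        print(f"{name}: {classify(*answers)}")
    ```

    Aggregating such categories per benefit and per maturity level is one way a cross-evaluation matrix of the kind the study describes could be populated.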

  15. An Evaluation of the High Level Architecture (HLA) as a Framework for NASA Modeling and Simulation

    Science.gov (United States)

    Reid, Michael R.; Powers, Edward I. (Technical Monitor)

    2000-01-01

    The High Level Architecture (HLA) is a current US Department of Defense and an industry (IEEE-1516) standard architecture for modeling and simulations. It provides a framework and set of functional rules and common interfaces for integrating separate and disparate simulators into a larger simulation. The goal of the HLA is to reduce software costs by facilitating the reuse of simulation components and by providing a runtime infrastructure to manage the simulations. In order to evaluate the applicability of the HLA as a technology for NASA space mission simulations, a Simulations Group at Goddard Space Flight Center (GSFC) conducted a study of the HLA and developed a simple prototype HLA-compliant space mission simulator. This paper summarizes the prototyping effort and discusses the potential usefulness of the HLA in the design and planning of future NASA space missions with a focus on risk mitigation and cost reduction.

  16. Agent-Based Model of Information Security System: Architecture and Formal Framework for Coordinated Intelligent Agents Behavior Specification

    National Research Council Canada - National Science Library

    Gorodetski, Vladimir

    2001-01-01

    The contractor will research and further develop the technology supporting an agent-based architecture for an information security system and a formal framework to specify a model of distributed knowledge...

  17. 3D Textured Modelling of both Exterior and Interior of Korean Styled Architectures

    Science.gov (United States)

    Lee, J.-D.; Bhang, K.-J.; Schuhr, W.

    2017-08-01

    This paper describes the 3D modelling procedure for two Korean-styled architectural structures, performed through a series of processing steps on data acquired with a terrestrial laser scanner. These two case projects illustrate the use of the terrestrial laser scanner as a digital documentation tool for the management, conservation and restoration of cultural assets. We show an approach to automating the reconstruction of both the outside and inside models of a building from laser scanning data. Laser scanning technology is much more efficient than existing photogrammetry in measuring shape and constructing spatial databases for the preservation and restoration of cultural assets, as well as for deformation monitoring and safety diagnosis of structures.

  18. 3D TEXTURED MODELLING OF BOTH EXTERIOR AND INTERIOR OF KOREAN STYLED ARCHITECTURES

    Directory of Open Access Journals (Sweden)

    J.-D. Lee

    2017-08-01

    Full Text Available This paper describes the 3D modelling procedure for two Korean-styled architectural structures, performed through a series of processing steps on data acquired with a terrestrial laser scanner. These two case projects illustrate the use of the terrestrial laser scanner as a digital documentation tool for the management, conservation and restoration of cultural assets. We show an approach to automating the reconstruction of both the outside and inside models of a building from laser scanning data. Laser scanning technology is much more efficient than existing photogrammetry in measuring shape and constructing spatial databases for the preservation and restoration of cultural assets, as well as for deformation monitoring and safety diagnosis of structures.

  19. BIOMECHANICAL MODEL OF THE GOLF SWING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Milan Čoh

    2011-08-01

    Full Text Available Golf is an extremely complex game which depends on a number of interconnected factors. One of the most important elements is undoubtedly the golf swing technique. High performance of the golf swing technique is generated by: the level of motor abilities, a high degree of movement control, the level of movement structure stabilisation, morphological characteristics, inter- and intra-muscular coordination, motivation, and concentration. The golf swing technique was investigated using the biomechanical analysis method. Kinematic parameters were registered using two synchronised high-speed cameras at a frequency of 2,000 Hz. The sample of subjects consisted of three professional golf players. The study results showed a relatively high variability of the swing technique. The maximum velocity of the ball after a wood swing ranged from 227 to 233 km/h. The velocity of the ball after an iron swing was lower by 10 km/h on average. The elevation angle of the ball ranged from 11.7 to 15.3 degrees. In the final phase of the golf swing, i.e. the downswing, the trunk rotators play the key role.

  20. Respirometry techniques and activated sludge models

    NARCIS (Netherlands)

    Benes, O.; Spanjers, H.; Holba, M.

    2002-01-01

    This paper aims to explain results of respirometry experiments using Activated Sludge Model No. 1. In cases of insufficient fit of ASM No. 1, further modifications to the model were carried out and the so-called "Enzymatic model" was developed. The best-fit method was used to determine the effect of

  1. STATE OF THE ART OF THE LANDSCAPE ARCHITECTURE SPATIAL DATA MODEL FROM A GEOSPATIAL PERSPECTIVE

    Directory of Open Access Journals (Sweden)

    A. Kastuari

    2016-10-01

    Full Text Available Spatial data and information have been used for some time in planning and landscape design. For a long time, architects used spatial data in the form of topographic maps for their designs. This method is not efficient, and it is also less accurate than spatial analysis using GIS. Architects also sometimes accentuate only the aesthetic aspect of their design, without taking landscape processes into account, which could make the design unsuitable for its use and purpose. Nowadays, the role of GIS in landscape architecture has been formalized by the emergence of the Geodesign terminology, which starts with a Representation Model and ends with a Decision Model. The development of GIS can be seen in several fields of science that now have an urgent need for 3-dimensional GIS, such as 3D urban planning, flood modeling, or landscape planning. In these fields, 3-dimensional GIS is able to support the steps of modeling, analysis, management, and integration of related data that describe human activities and geophysical phenomena in a more realistic way. Also, by applying 3D GIS and geodesign in landscape design, geomorphological information can be better presented and assessed. Some research mentions that the development of 3D GIS is not yet established, either in its 3D data structures or in its spatial analysis functions. This literature study addresses those problems by providing information on the existing development of 3D GIS for landscape architecture: data modeling, data accuracy, and the representation of data needed for landscape architecture purposes, specifically in river areas.

  2. State of the Art of the Landscape Architecture Spatial Data Model from a Geospatial Perspective

    Science.gov (United States)

    Kastuari, A.; Suwardhi, D.; Hanan, H.; Wikantika, K.

    2016-10-01

    Spatial data and information have been used for some time in planning and landscape design. For a long time, architects used spatial data in the form of topographic maps for their designs. This method is not efficient, and it is also less accurate than spatial analysis using GIS. Architects also sometimes accentuate only the aesthetic aspect of their design, without taking landscape processes into account, which could make the design unsuitable for its use and purpose. Nowadays, the role of GIS in landscape architecture has been formalized by the emergence of the Geodesign terminology, which starts with a Representation Model and ends with a Decision Model. The development of GIS can be seen in several fields of science that now have an urgent need for 3-dimensional GIS, such as 3D urban planning, flood modeling, or landscape planning. In these fields, 3-dimensional GIS is able to support the steps of modeling, analysis, management, and integration of related data that describe human activities and geophysical phenomena in a more realistic way. Also, by applying 3D GIS and geodesign in landscape design, geomorphological information can be better presented and assessed. Some research mentions that the development of 3D GIS is not yet established, either in its 3D data structures or in its spatial analysis functions. This literature study addresses those problems by providing information on the existing development of 3D GIS for landscape architecture: data modeling, data accuracy, and the representation of data needed for landscape architecture purposes, specifically in river areas.

  3. Guiding Principles for Data Architecture to Support the Pathways Community HUB Model.

    Science.gov (United States)

    Zeigler, Bernard P; Redding, Sarah; Leath, Brenda A; Carter, Ernest L; Russell, Cynthia

    2016-01-01

    The Pathways Community HUB Model provides a unique strategy to effectively supplement health care services with the social services needed to overcome barriers for those most at risk of poor health outcomes. Pathways are standardized measurement tools used to define and track health and social issues from identification through to a measurable completion point. The HUB uses Pathways to coordinate agencies and service providers in the community and to eliminate the inefficiencies and duplication that exist among them. Experience with the Model has brought out the need for better information technology solutions to support implementation of the Pathways themselves, through decision-support tools for care coordinators and other users to track activities and outcomes and to facilitate reporting. Here we provide a basis for discussing recommendations for such a data infrastructure by developing a conceptual model that formalizes the Pathway concept underlying current implementations. The main contribution is a set of core recommendations as a framework for developing and implementing a data architecture to support implementation of the Pathways Community HUB Model. The objective is to present a tool that communities interested in adopting the Model can learn from and adapt in their own development and implementation efforts. Experience with the Community Health Access Project (CHAP) database system (the core implementation of the Model) has identified several issues and the remedies that have been developed to address them. Based on an analysis of these issues and remedies, we present several key features of a data architecture meeting the just-mentioned recommendations. The presentation of features is followed by a practical guide to their implementation, allowing an organization to consider either tailoring off-the-shelf generic systems to meet the requirements or offerings that are specialized for community-based care coordination. Looking to future extensions, we discuss the

  4. Architectural Modeling for the Future Internet-enabled Enterprise (AMFInE) Workshop

    NARCIS (Netherlands)

    van Sinderen, Marten J.; Zelm, Martin; Sanchis, Raquel; Poler, Raul; Doumeingts, Guy

    2012-01-01

    In order for future enterprises to make effective use of the Future Internet, it is necessary that their Enterprise Architecture aligns with the Future Internet Architecture. An Enterprise Architecture is a comprehensive architecture description that spans enterprise and technology aspects, to allow

  5. Automatic generation of virtual worlds from architectural and mechanical CAD models

    International Nuclear Information System (INIS)

    Szepielak, D.

    2003-12-01

    Accelerator projects like the XFEL or the planned linear collider TESLA involve extensive architectural and mechanical design work, resulting in a variety of CAD models. The CAD models show different parts of the project, e.g. the various accelerator components or parts of the building complexes, and they are created and stored by different groups in different formats. A complete CAD model of the accelerator and its buildings is thus difficult to obtain; it would also be extremely large and difficult to handle. This thesis describes the design and prototype development of a tool which automatically creates virtual worlds from different CAD models. The tool enables the user to select an area of interest for visualization on a map and then creates a 3D model of the selected area which can be displayed in a web browser. The thesis first discusses the system requirements and provides some background on data visualization. It then introduces the system architecture, the algorithms and the technologies used, and finally demonstrates the capabilities of the system using two case studies. (orig.)

  6. Mathematical modeling of a new satellite thermal architecture system connecting the east and west radiator panels and flight performance prediction

    International Nuclear Information System (INIS)

    Torres, Alejandro; Mishkinis, Donatas; Kaya, Tarik

    2014-01-01

    An entirely novel satellite thermal architecture, connecting the east and west radiators of a geostationary telecommunications satellite via loop heat pipes (LHPs), is proposed. The LHP operating temperature is regulated by using pressure regulating valves (PRVs). A transient numerical model is developed to simulate the thermal dynamic behavior of the proposed system. The details of the proposed architecture and mathematical model are presented. The model is used to analyze a set of critical design cases to identify potential failure modes prior to the qualification and in-orbit tests. The mathematical model results for critical cases are presented and discussed. The model results demonstrated the robustness and versatility of the proposed architecture under the predicted worst-case conditions. - Highlights: •We developed a mathematical model of a novel satellite thermal architecture. •We provided the dimensioning cases to design the thermal architecture. •We provided the failure mode cases to verify the thermal architecture. •We provided the results of the corresponding dimensioning and failure cases
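The transient behavior described above can be illustrated with a lumped-parameter sketch: two radiator nodes exchanging heat through a fixed-conductance link standing in for the LHP. The paper's model, with pressure regulating valves and flight-representative properties, is far more detailed; every number below is illustrative.

```python
# Minimal lumped-parameter sketch of a transient thermal model coupling two
# radiator panels through a conductive link (standing in for the LHP).
# All values are illustrative, not taken from the paper.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def step(T_east, T_west, dt, Q_east=400.0, Q_west=100.0,
         C=5.0e4, G_lhp=25.0, area=2.0, eps=0.85):
    """Advance panel temperatures [K] one explicit-Euler step of dt seconds."""
    q_link = G_lhp * (T_east - T_west)          # heat moved east -> west [W]
    q_rad_e = eps * SIGMA * area * T_east**4    # radiation to space [W]
    q_rad_w = eps * SIGMA * area * T_west**4
    T_east += dt * (Q_east - q_link - q_rad_e) / C
    T_west += dt * (Q_west + q_link - q_rad_w) / C
    return T_east, T_west

# Relax toward a quasi-steady state: the link evens out the asymmetric loads.
Te, Tw = 320.0, 260.0
for _ in range(20000):
    Te, Tw = step(Te, Tw, dt=1.0)
```

At steady state the east panel, carrying the larger load, sits only a few kelvin above the west panel because the link exports the excess heat; with the link conductance set to zero the two panels would settle roughly 75 K apart.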

  7. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    An autonomous unmanned aerial vehicle (UAV) system depends on state-estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and allows the flight mission to be completed safely. One sensor configuration used for UAV state estimation is the Attitude Heading and Reference System (AHRS), combined with either an Extended Kalman Filter (EKF) or a feedback controller. The results of these two techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
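As a hedged illustration of AHRS-style sensor fusion, here is a classic complementary filter blending the integrated gyro rate with the accelerometer's gravity reference. This is a stand-in technique, not the paper's EKF or feedback controller, and all rates and gains are made up.

```python
import math

def complementary_filter(pitch, gyro_rate, ax, az, dt, alpha=0.98):
    """One update of a complementary filter for pitch [rad].

    gyro_rate: pitch rate from the rate gyro [rad/s]
    ax, az:    body-frame accelerometer components [m/s^2]
    alpha:     blend factor (trust placed in the integrated gyro)
    """
    pitch_gyro = pitch + gyro_rate * dt    # high-frequency path (gyro)
    pitch_acc = math.atan2(ax, az)         # low-frequency gravity reference
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_acc

# Stationary vehicle tilted 0.1 rad: a biased gyro alone would drift without
# bound, but the accelerometer reference pulls the estimate back.
est = 0.0
true_pitch = 0.1
ax, az = 9.81 * math.sin(true_pitch), 9.81 * math.cos(true_pitch)
for _ in range(2000):
    est = complementary_filter(est, gyro_rate=0.002, ax=ax, az=az, dt=0.01)
```

The gyro bias of 0.002 rad/s would accumulate to 0.04 rad over these 20 s of dead reckoning; the filter instead settles within about 0.001 rad of the true pitch.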

  8. Layered Architecture for Quantum Computing

    Directory of Open Access Journals (Sweden)

    N. Cody Jones

    2012-07-01

    We develop a layered quantum-computer architecture, which is a systematic framework for tackling the individual challenges of developing a quantum computer while constructing a cohesive device design. We discuss many of the prominent techniques for implementing circuit-model quantum computing and introduce several new methods, with an emphasis on employing surface-code quantum error correction. In doing so, we propose a new quantum-computer architecture based on optical control of quantum dots. The time scales of physical-hardware operations and logical, error-corrected quantum gates differ by several orders of magnitude. By dividing functionality into layers, we can design and analyze subsystems independently, demonstrating the value of our layered architectural approach. Using this concrete hardware platform, we provide resource analysis for executing fault-tolerant quantum algorithms for integer factoring and quantum simulation, finding that the quantum-dot architecture we study could solve such problems on the time scale of days.

  9. The dynamic information architecture system : a simulation framework to provide interoperability for process models

    International Nuclear Information System (INIS)

    Hummel, J. R.; Christiansen, J. H.

    2002-01-01

    As modeling and simulation becomes a more important part of the day-to-day activities in industry and government, organizations are being faced with the vexing problem of how to integrate a growing suite of heterogeneous models both within their own organizations and between organizations. The Argonne National Laboratory, which is operated by the University of Chicago for the United States Department of Energy, has developed the Dynamic Information Architecture System (DIAS) to address such problems. DIAS is an object-oriented, subject domain independent framework that is used to integrate legacy or custom-built models and applications. In this paper we will give an overview of the features of DIAS and give examples of how it has been used to integrate models in a number of applications. We shall also describe some of the key supporting DIAS tools that provide seamless interoperability between models and applications

  10. Using three-dimensional plant root architecture in models of shallow-slope stability.

    Science.gov (United States)

    Danjon, Frédéric; Barker, David H; Drexhage, Michael; Stokes, Alexia

    2008-05-01

    The contribution of vegetation to shallow-slope stability is of major importance in landslide-prone regions. However, existing slope stability models use only limited plant root architectural parameters. This study aims to provide a chain of tools useful for determining the contribution of tree roots to soil reinforcement. Three-dimensional digitizing in situ was used to obtain accurate root system architecture data for mature Quercus alba in two forest stands. These data were used as input to newly developed tools that analyse root spatial position, topology and geometry. The contribution of roots to soil reinforcement was determined by calculating additional soil cohesion using the limit equilibrium model, and the factor of safety (FOS) using an existing slope stability model, Slip4Ex. Existing models may incorrectly estimate the additional soil cohesion provided by roots, as the spatial position of roots crossing the potential slip surface is usually not taken into account. However, most soil reinforcement by roots occurs close to the tree stem and is negligible at a distance >1.0 m from the tree, and therefore global values of FOS for a slope do not take into account local slippage along the slope. Within a forest stand on a landslide-prone slope, soil fixation by roots can be minimal between uniform rows of trees, leading to local soil slippage. Therefore, staggered rows of trees would improve overall slope stability, as trees would arrest the downward movement of soil. The chain of tools, consisting of both software (free for non-commercial use) and functions available from the first author, will enable a more accurate description and use of root architectural parameters in standard slope stability analyses.
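The limit-equilibrium FOS calculation with added root cohesion can be sketched with the standard infinite-slope formula. This is a generic illustration, not Slip4Ex itself, and all soil parameters are invented for the example.

```python
import math

def factor_of_safety(c_soil, c_root, phi_deg, gamma=18.0e3, depth=1.0,
                     beta_deg=30.0, u=0.0):
    """Infinite-slope limit-equilibrium FOS with added root cohesion c_root [Pa].

    gamma: soil unit weight [N/m^3], depth: slip-surface depth [m],
    beta:  slope angle [deg], phi: friction angle [deg], u: pore pressure [Pa].
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal = gamma * depth * math.cos(beta) ** 2 - u   # effective normal stress
    resisting = c_soil + c_root + normal * math.tan(phi)
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    return resisting / driving

fos_bare = factor_of_safety(c_soil=2.0e3, c_root=0.0, phi_deg=30.0)
fos_rooted = factor_of_safety(c_soil=2.0e3, c_root=5.0e3, phi_deg=30.0)
```

Adding 5 kPa of root cohesion raises the FOS from about 1.26 to about 1.90 in this configuration, which is why overestimating root cohesion away from the stem (where reinforcement is in fact negligible) matters.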

  11. A simple three-dimensional macroscopic root water uptake model based on the hydraulic architecture approach

    Directory of Open Access Journals (Sweden)

    V. Couvreur

    2012-08-01

    Many hydrological models including root water uptake (RWU) do not consider the dimension of root system hydraulic architecture (HA) because explicitly solving water flow in such a complex system is too time consuming. However, they may lack process understanding when basing RWU and plant water stress predictions on functions of variables such as the root length density distribution. On the basis of analytical solutions of water flow in a simple HA, we developed an "implicit" model of the root system HA for simulation of RWU distribution (the sink term of Richards' equation) and plant water stress in three-dimensional soil water flow models. The new model has three macroscopic parameters defined at the soil element scale, or at the plant scale, rather than for each segment of the root system architecture: the standard sink fraction distribution SSF, the root system equivalent conductance Krs and the compensatory RWU conductance Kcomp. It clearly decouples the process of water stress from compensatory RWU, and its structure is appropriate for hydraulic lift simulation. As compared to a model explicitly solving water flow in a realistic maize root system HA, the implicit model proved accurate for predicting RWU distribution and plant collar water potential, with a single set of parameters, in dissimilar water dynamics scenarios. For these scenarios, the computing time of the implicit model was a factor of 28 to 214 shorter than that of the explicit one. We also provide a new expression for the effective soil water potential sensed by plants in soils with a heterogeneous water potential distribution, which emerged from the implicit model equations. The proposed implicit model of the root system HA introduces new concepts that open avenues towards simple, mechanistic RWU models and water stress functions operational for field-scale water dynamics simulation.
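A sketch of how the three macroscopic parameters might combine into a sink term, following the structure described above: whole-plant uptake driven by Krs, distributed by SSF, plus a compensatory term scaled by Kcomp. The exact formulation in the paper may differ, and all values are illustrative.

```python
# Sketch of an implicit RWU sink term using the three macroscopic
# parameters (SSF, Krs, Kcomp). Units and numbers are illustrative.

def rwu_distribution(ssf, psi_soil, psi_collar, krs, kcomp):
    """Return per-element root water uptake.

    ssf:        standard sink fraction per soil element (sums to 1)
    psi_soil:   soil water potential per element
    psi_collar: plant collar water potential
    """
    # Effective soil water potential sensed by the plant: SSF-weighted mean.
    psi_eff = sum(f * p for f, p in zip(ssf, psi_soil))
    total = krs * (psi_eff - psi_collar)   # whole-plant uptake rate
    # Standard uptake follows SSF; compensatory uptake shifts extraction
    # toward wetter elements without changing the total.
    return [f * total + kcomp * (p - psi_eff) * f
            for f, p in zip(ssf, psi_soil)]

ssf = [0.5, 0.3, 0.2]
psi_soil = [-300.0, -500.0, -900.0]    # wetter near the surface (illustrative)
uptake = rwu_distribution(ssf, psi_soil, psi_collar=-4000.0,
                          krs=1.0e-3, kcomp=1.0e-3)
```

Because the compensatory terms are SSF-weighted deviations from the effective potential, they sum to zero: compensation redistributes uptake between elements while whole-plant uptake stays Krs times the potential difference.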

  12. A canopy architectural model to study the competitive ability of chickpea with sowthistle.

    Science.gov (United States)

    Cici, S-Zahra-Hosseini; Adkins, Steve; Hanan, Jim

    2008-06-01

    Improving the competitive ability of crops is a sustainable method of weed management. This paper shows how a virtual plant model of competition between chickpea (Cicer arietinum) and sowthistle (Sonchus oleraceus) can be used as a framework for discovering and/or developing more competitive chickpea cultivars. The virtual plant models were developed using the L-systems formalism, parameterized according to measurements taken on plants at intervals during their development. A quasi-Monte Carlo light-environment model was used to model the effect of chickpea canopy on the development of sowthistle. The chickpea-light environment-sowthistle model (CLES model) captured the hypothesis that the architecture of chickpea plants modifies the light environment inside the canopy and determines sowthistle growth and development pattern. The resulting CLES model was parameterized for different chickpea cultivars (viz. 'Macarena', 'Bumper', 'Jimbour' and '99071-1001') to compare their competitive ability with sowthistle. To validate the CLES model, an experiment was conducted using the same four chickpea cultivars as different treatments with a sowthistle growing under their canopy. The growth of sowthistle, both in silico and in glasshouse experiments, was reduced most by '99071-1001', a cultivar with a short phyllochron. The second rank of competitive ability belonged to 'Macarena' and 'Bumper', while 'Jimbour' was the least competitive cultivar. The architecture of virtual chickpea plants modified the light inside the canopy, which influenced the growth and development of the sowthistle plants in response to different cultivars. This is the first time that a virtual plant model of a crop-weed interaction has been developed. This virtual plant model can serve as a platform for a broad range of applications in the study of chickpea-weed interactions and their environment.
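Virtual plant models like these are built on the L-systems formalism; a minimal parallel-rewriting expander shows the mechanism. The axiom and rule below are a textbook branching example, not the chickpea model's parameterization.

```python
# Minimal L-system expansion of the kind underlying virtual plant models.
# The rule set is a classic illustrative branching grammar, not the CLES model.

def expand(axiom, rules, iterations):
    """Apply the rewriting rules to every symbol in parallel, `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# F: grow an internode; [ and ]: push/pop a branch; + and -: turn.
rules = {"F": "F[+F]F[-F]F"}
derivation = expand("F", rules, 2)
```

A turtle-graphics interpreter would then walk the derived string to produce geometry; parametric variants (as used for the chickpea cultivars) attach measured lengths and phyllochron timing to each module.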

  13. Data accuracy assessment using enterprise architecture

    Science.gov (United States)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.
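A drastically simplified version of such an analysis treats each business-process step as an independent error source. This independence assumption is mine for illustration; the article's ArchiMate/PRM models capture dependencies that this sketch ignores.

```python
# Toy data-accuracy estimate: probability a record survives every process
# step uncorrupted, assuming independent per-step error rates (a strong
# simplification of the PRM-style analysis described in the abstract).

def data_accuracy(error_rates):
    """Return the probability that no step corrupts the record."""
    acc = 1.0
    for p in error_rates:
        acc *= (1.0 - p)
    return acc

# Interview-derived error-rate estimates for three process steps (made up).
acc = data_accuracy([0.02, 0.05, 0.01])
```

Even with interviews as the only data source, this kind of back-of-the-envelope product shows why a single error-prone step dominates overall accuracy.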

  14. Evaluation of End-Products in Architecture Design Process: A Fuzzy Decision-Making Model

    Directory of Open Access Journals (Sweden)

    Serkan PALABIYIK

    2012-06-01

    Full Text Available This paper presents a study on the development of a fuzzy multi-criteria decision-making model for the evaluation of end products of the architectural design process. Potentials of the developed model were investigated within the scope of architectural design education, specifically an international design studio titled “Design for Disassembly and Reuse: Design & Building Multipurpose Transformable Pavilions.” The studio work followed a design process that integrated systematic and heuristic thinking. The design objectives and assessment criteria were clearly set out at the beginning of the process by the studio coordinator with the aim of narrowing the design space and increasing awareness of the consequences of design decisions. At the end of the design process, designs produced in the studio were evaluated using the developed model to support decision making. The model facilitated the identification of positive and negative aspects of the designs and selection of the design alternative that best met the studio objectives set at the beginning.

  15. Model-Based Engine Control Architecture with an Extended Kalman Filter

    Science.gov (United States)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The nonlinear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.
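The predict/update cycle of an EKF with on-the-fly linearization can be sketched in scalar form. This is a didactic toy with a made-up nonlinear sensor, not the C-MAPSS40k engine model or its OTKF.

```python
# Bare-bones scalar extended Kalman filter: the nonlinear models are
# linearized at each step with finite-difference derivatives, as an
# on-board nonlinear model would permit. Purely illustrative.

def ekf_step(x, P, z, f, h, Q, R, eps=1e-6):
    """One predict/update cycle for a scalar state.

    f: state transition x -> x'; h: measurement model x -> z_pred.
    """
    # Predict: propagate the state and its variance through f.
    F = (f(x + eps) - f(x)) / eps
    x_pred = f(x)
    P_pred = F * P * F + Q
    # Update: linearize h at the prediction and blend in the measurement.
    H = (h(x_pred + eps) - h(x_pred)) / eps
    S = H * P_pred * H + R
    K = P_pred * H / S
    x_new = x_pred + K * (z - h(x_pred))
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Track a constant state observed through a nonlinear sensor z = x**2.
x_est, P = 1.5, 1.0
true_x = 2.0
for _ in range(50):
    x_est, P = ekf_step(x_est, P, z=true_x**2,
                        f=lambda x: x, h=lambda x: x * x,
                        Q=1e-4, R=1e-2)
```

Because the Jacobians are recomputed at the current estimate each step, the filter avoids the fixed-linearization error that the piece-wise linear approach incurs away from its trim points.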

  16. BIM-based Modeling and Data Enrichment of Classical Architectural Buildings

    Directory of Open Access Journals (Sweden)

    Fabrizio Ivan Apollonio

    2012-12-01

    Full Text Available EnIn this paper we presented a BIM-based approach for the documentation of Architectural Heritage. Knowledge of classical architecture is first extracted from the treatises for parametric modeling in object level. Then we established a profile library based on semantic studies to sweep out different objects. Variants grow out from the parametric models by editing or regrouping parameters based on grammars. Multiple data including material, structure and real-life state are enriched with respect to different research motivations. The BIM models are expected to ease the modeling process and provide comprehensive data shared among different platforms for further simulations.ItIn questo articolo è presentata una procedura definita nell'ambito dei sistemi BIM con l'obiettivo di documentare il Patrimonio Architettonico. I dati conoscitivi relativi all'architettura classica sono, in una prima fase, ottenuti dai trattati al fine di modellare in maniera parametrica a livello di oggetti. Successivamente è stata definita una libreria di profili, basata su principi semantici, dalla quale è possibile ottenere oggetti differenti. Dati di natura differente, relativi ad esempio ai materiali, alle strutture, allo stato di fatto, sono implementati in funzione delle differenti esigenze. I modelli BIM hanno la potenzialità di facilitare le procedure di modellazione e di fornire informazioni e dati completi che possono essere condivisi tra piattaforme differenti per ulteriori simulazioni ed analisi.

  17. New Developments in Modeling MHD Systems on High Performance Computing Architectures

    Science.gov (United States)

    Germaschewski, K.; Raeder, J.; Larson, D. J.; Bhattacharjee, A.

    2009-04-01

    Modeling the wide range of time and length scales present even in fluid models of plasmas like MHD and X-MHD (Extended MHD including two-fluid effects like the Hall term, electron inertia, and the electron pressure gradient) is challenging even on state-of-the-art supercomputers. In recent years, HPC capacity has continued to grow exponentially, but at the expense of making computer systems more and more difficult to program for maximum performance. In this paper, we present a new approach to managing the complexity caused by the need to write efficient codes: separating the numerical description of the problem, in our case a discretized right-hand side (r.h.s.), from the implementation that evaluates it efficiently. An automatic code generator describes the r.h.s. in quasi-symbolic form while leaving the translation into efficient, parallelized code to a computer program itself. We implemented this approach for OpenGGCM (Open General Geospace Circulation Model), a model of the Earth's magnetosphere, which was accelerated by a factor of three on regular x86 architecture and a factor of 25 on the Cell BE architecture (commonly known for its deployment in Sony's PlayStation 3).
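A toy version of this separation: the r.h.s. is stated once in quasi-symbolic form and a generator emits the inner loop. A production system like the one described would emit optimized, parallelized C or Cell code; here we simply exec generated Python to show the idea.

```python
# Minimal code generator: turn a symbolic stencil expression into a
# runnable r.h.s. function. Illustrative only - real generators emit
# optimized native code, not interpreted Python.

def make_rhs(expr):
    """Generate rhs(u, dx) evaluating `expr` at every interior grid point.

    The expression may reference u[i-1], u[i], u[i+1] and dx.
    """
    src = (
        "def rhs(u, dx):\n"
        "    out = [0.0] * len(u)\n"
        "    for i in range(1, len(u) - 1):\n"
        f"        out[i] = {expr}\n"
        "    return out\n"
    )
    ns = {}
    exec(compile(src, "<generated>", "exec"), ns)
    return ns["rhs"]

# A 1-D diffusion right-hand side written once, symbolically.
diffuse = make_rhs("(u[i-1] - 2.0*u[i] + u[i+1]) / (dx*dx)")
vals = diffuse([0.0, 1.0, 4.0, 9.0, 16.0], dx=1.0)  # second difference of i**2
```

The payoff of the pattern is that the same symbolic description can be re-targeted (x86, Cell, GPU) by swapping the backend without touching the physics.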

  18. Performance Analysis of GFDL's GCM Line-By-Line Radiative Transfer Model on GPU and MIC Architectures

    Science.gov (United States)

    Menzel, R.; Paynter, D.; Jones, A. L.

    2017-12-01

    Due to their relatively low computational cost, radiative transfer models in global climate models (GCMs) run on traditional CPU architectures generally consist of shortwave and longwave parameterizations over a small number of wavelength bands. With the rise of newer GPU and MIC architectures, however, the performance of high resolution line-by-line radiative transfer models may soon approach those of the physical parameterizations currently employed in GCMs. Here we present an analysis of the current performance of a new line-by-line radiative transfer model currently under development at GFDL. Although originally designed to specifically exploit GPU architectures through the use of CUDA, the radiative transfer model has recently been extended to include OpenMP in an effort to also effectively target MIC architectures such as Intel's Xeon Phi. Using input data provided by the upcoming Radiative Forcing Model Intercomparison Project (RFMIP, as part of CMIP 6), we compare model results and performance data for various model configurations and spectral resolutions run on both GPU and Intel Knights Landing architectures to analogous runs of the standard Oxford Reference Forward Model on traditional CPUs.

  19. A State-Based Modeling Approach for Efficient Performance Evaluation of Embedded System Architectures at Transaction Level

    Directory of Open Access Journals (Sweden)

    Anthony Barreteau

    2012-01-01

    Abstract models are necessary to assist system architects in the evaluation of hardware/software architectures and to cope with the still increasing complexity of embedded systems. Efficient methods are required to create reliable models of system architectures and to allow early performance evaluation and fast exploration of the design space. In this paper, we present a specific transaction-level modeling approach for performance evaluation of hardware/software architectures. This approach relies on a generic execution model that requires light modeling effort. The models created are used to evaluate, by simulation, the expected processing and memory resources for various architectures. The proposed execution model relies on a specific computation method defined to improve the simulation speed of transaction-level models. The benefits of the proposed approach are highlighted through two case studies. The first is a didactic example illustrating the modeling approach, in which a simulation speed-up by a factor of 7.62 is achieved using the proposed computation method. The second concerns the analysis of a communication receiver supporting part of the physical layer of the LTE protocol; in this case study, architecture exploration is carried out to improve the allocation of processing functions.

  20. Fractionated Spacecraft Architectures Seeding Study

    National Research Council Canada - National Science Library

    Mathieu, Charlotte; Weigel, Annalisa

    2006-01-01

    Models were developed from a customer-centric perspective to assess different fractionated spacecraft architectures relative to traditional spacecraft architectures using multi-attribute analysis...

  1. Proceedings of the International Conference on Parallel Architectures and Compilation Techniques Held 24-26 August 1994 in Montreal, Canada

    Science.gov (United States)

    1994-08-26


  2. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  3. Materials and techniques for model construction

    Science.gov (United States)

    Wigley, D. A.

    1985-01-01

    The problems confronting the designer of models for cryogenic wind tunnels are discussed, with particular reference to the difficulties in obtaining appropriate data on the mechanical and physical properties of candidate materials and their fabrication technologies. The relationship between the strength and toughness of alloys is discussed in the context of maximizing both and avoiding the problem of dimensional and microstructural instability. All major classes of materials used in model construction are considered in some detail, and selected numerical data are given in the Appendix for the most relevant materials. The stepped-specimen program to investigate stress-induced dimensional changes in alloys is discussed in detail, together with interpretation of the initial results. The methods used to bond model components are considered with particular reference to the selection of filler alloys and temperature cycles that avoid microstructural degradation and loss of mechanical properties.

  4. The Reactive-Causal Architecture: Introducing an Emotion Model along with Theories of Needs

    Science.gov (United States)

    Aydin, Ali Orhan; Orgun, Mehmet Ali

    In the entertainment application area, one of the major aims is to develop believable agents. To achieve this aim, agents should be highly autonomous, situated, flexible, and display affect. The Reactive-Causal Architecture (ReCau) is proposed to simulate these core attributes. In its current form, ReCau cannot explain the effects of emotions on intelligent behaviour. This study aims to further improve the emotion model of ReCau to explain these effects. The improvement makes ReCau emotional, supporting the development of believable agents.

  5. Interactive Modeling of Architectural Freeform Structures - Combining Geometry with Fabrication and Statics

    KAUST Repository

    Jiang, Caigui

    2014-09-01

    This paper builds on recent progress in computing with geometric constraints, which is particularly relevant to architectural geometry. Not only do various kinds of meshes with additional properties (like planar faces, or with equilibrium forces in their edges) become available for interactive geometric modeling, but so do other arrangements of geometric primitives, like honeycomb structures. The latter constitute an important class of geometric objects, with relations to “Lobel” meshes, and to freeform polyhedral patterns. Such patterns are particularly interesting and pose research problems which go beyond what is known for meshes, e.g. with regard to their computing, their flexibility, and the assessment of their fairness.
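Constraint-based modeling of such meshes repeatedly needs a per-face planarity measure; one common choice, shown here as an illustration rather than the paper's specific formulation, is the distance between a quad's two diagonals, which is zero exactly when the four vertices are coplanar.

```python
# Planarity measure for a quad face: the distance between its diagonals.
# Pure-Python sketch; a constraint solver would drive this toward zero.

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0]]
def dot(a, b): return sum(x*y for x, y in zip(a, b))

def quad_planarity(p0, p1, p2, p3):
    """Distance between diagonals p0-p2 and p1-p3 (0 => planar quad)."""
    d1, d2 = sub(p2, p0), sub(p3, p1)
    n = cross(d1, d2)                  # common normal of the two diagonals
    norm = dot(n, n) ** 0.5
    if norm == 0.0:                    # parallel diagonals: degenerate quad,
        return 0.0                     # treated as planar in this sketch
    return abs(dot(sub(p1, p0), n)) / norm

flat = quad_planarity([0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0])
bent = quad_planarity([0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0.5])
```

In an interactive modeler this scalar becomes one residual per face, minimized jointly with fairness and statics terms over the whole mesh.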

  6. Modeling in architectural-planning solutions of agrarian technoparks as elements of the infrastructure

    Science.gov (United States)

    Abdrassilova, Gulnara S.

    2017-09-01

    In the context of developing agriculture as a driver of Kazakhstan's economy, it is imperative to study new types of agrarian constructions (agroparks, agrotourist complexes, "vertical" farms, conservatories, greenhouses) that can be combined into complexes - agrarian technoparks. Creating agrarian technoparks as elements of the agglomeration's infrastructure should ensure a breakthrough in the production, storage and processing of agrarian goods. Modeling the architectural-planning solutions of agrarian technoparks supports the development of the theory and practice of designing such objects on the basis of innovative approaches.

  7. Adaptive Neuron Model: An architecture for the rapid learning of nonlinear topological transformations

    Science.gov (United States)

    Tawel, Raoul (Inventor)

    1994-01-01

    A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully-recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance evaluation is benchmarked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware and the parameters capturing the functional transformation downloaded onto the system. This neuroprocessor, capable of 10^9 ops/sec, was interfaced directly to a three-degree-of-freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microseconds.

  8. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers.This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  9. Comments on three-dimensional modeling in ancient greek and roman architecture: Herodotus, Aristotle and Vitruvius

    Directory of Open Access Journals (Sweden)

    Artur Rozestraten

    2007-12-01

    This article reviews extracts from Herodotus' and Aristotle's texts that are cited in the specialized bibliography as evidence that ancient Greek architects used architectural scale models in their design process. Review of the original texts reveals mistaken translations upon which unsustainable historical perspectives have been built. The review of historical documents extends to the Roman world and analyses extracts from Vitruvius' texts. By relating textual documents to objects gathered by archaeology, this study aims to build new interpretations of representation and the design process in Antiquity.

  10. Model-Based Systems Engineering for Capturing Mission Architecture System Processes with an Application Case Study - Orion Flight Test 1

    Science.gov (United States)

    Bonanne, Kevin H.

    2011-01-01

    Model-based Systems Engineering (MBSE) is an emerging methodology that can be leveraged to enhance many system development processes. MBSE allows for the centralization of an architecture description that would otherwise be stored in various locations and formats, thus simplifying communication among the project stakeholders, inducing commonality in representation, and expediting report generation. This paper outlines the MBSE approach taken to capture the processes of two different, but related, architectures by employing the Systems Modeling Language (SysML) as a standard for architecture description and the modeling tool MagicDraw. The overarching goal of this study was to demonstrate the effectiveness of MBSE as a means of capturing and designing a mission systems architecture. The first portion of the project focused on capturing the necessary system engineering activities that occur when designing, developing, and deploying a mission systems architecture for a space mission. The second part applies activities from the first to an application problem - the system engineering of the Orion Flight Test 1 (OFT-1) End-to-End Information System (EEIS). By modeling the activities required to create a space mission architecture and then implementing those activities in an application problem, the utility of MBSE as an approach to systems engineering can be demonstrated.

  11. Developing Historic Building Information Modelling Guidelines and Procedures for Architectural Heritage in Ireland

    Science.gov (United States)

    Murphy, M.; Corns, A.; Cahill, J.; Eliashvili, K.; Chenau, A.; Pybus, C.; Shaw, R.; Devlin, G.; Deevy, A.; Truong-Hong, L.

    2017-08-01

    Cultural heritage researchers have recently begun applying Building Information Modelling (BIM) to historic buildings. The model is comprised of intelligent objects with semantic attributes which represent the elements of a building structure and are organised within a 3D virtual environment. Case studies in Ireland are used to test and develop suitable systems for (a) data capture/digital surveying/processing, (b) developing a library of architectural components and (c) mapping these architectural components onto the laser scan or digital survey to create the intelligent virtual representation of a historic structure (HBIM). While BIM platforms have the potential to create a virtual and intelligent representation of a building, their full exploitation and use is restricted to a narrow set of expert users with access to costly hardware, software and skills. Testing open BIM approaches, in particular IFC, and the use of game-engine platforms are fundamental components for developing much wider dissemination. The semantically enriched model can be transferred into a web-based game-engine platform.

  12. Implementation of an Agent-Based Parallel Tissue Modelling Framework for the Intel MIC Architecture

    Directory of Open Access Journals (Sweden)

    Maciej Cytowski

    2017-01-01

    Full Text Available Timothy is a novel large-scale modelling framework that allows simulation of biological processes involving different cellular colonies growing and interacting with a variable environment. Timothy was designed for execution on massively parallel High Performance Computing (HPC) systems. The high parallel scalability of the implementation allows for simulations of up to 10^9 individual cells (i.e., simulations at tissue spatial scales of up to 1 cm^3 in size). With the recent advancements of the Timothy model, it has become critical to ensure an appropriate performance level on emerging HPC architectures. For instance, the introduction of blood vessels supplying nutrients to the tissue is a very important step towards realistic simulations of complex biological processes, but it greatly increased the computational complexity of the model. In this paper, we describe the process of modernization of the application in order to achieve high computational performance on HPC hybrid systems based on the modern Intel® MIC architecture. Experimental results on the Intel Xeon Phi™ coprocessor x100 and the Intel Xeon Phi processor x200 are presented.

  13. Infra-Free® (IF) Architecture System as the Method for Post-Disaster Shelter Model

    Science.gov (United States)

    Chang, Huai-Chien; Anilir, Serkan

    Currently, the International Space Station (ISS) is capable of supporting 3 to 4 astronauts onboard for at least 6 months, using an integrated life support system to meet the needs of the crew. Waste from the crew members' daily life is collected by waste recycling systems, electricity is obtained from solar energy, and so on. Although it resembles the infrastructure we use on Earth, the ISS can be regarded as a nearly self-reliant, integrated architecture. This offers an important hint for current architecture, which depends on centralized urban infrastructure to support our daily lives but can be vulnerable to natural disasters. More and more economic activities and communications rely on this enormous central urban infrastructure, and a natural disaster may cut off the infrastructure temporarily or permanently. To solve this problem, we propose to design a temporary shelter that can work without depending on any existing infrastructure. We propose to bring closed-life-cycle and integrated technologies, inspired by the possibilities of space and other emerging technologies, into everyday architecture through the Infra-free® design framework, which integrates various life-supporting infrastructural elements into one closed system. We work on a scenario for post-disaster management housing that addresses lifeline problems such as solid and liquid waste, energy, and water and hygiene within one system, and try to establish an Infra-free® model of shelter for disaster areas. The ultimate objective is to design a Temp Infra-free® model dealing with the sanitation and environmental preservation concerns of a disaster area.

  14. Model measurements for new accelerating techniques

    International Nuclear Information System (INIS)

    Aronson, S.; Haseroth, H.; Knott, J.; Willis, W.

    1988-06-01

    We summarize the work carried out over the past two years on different ways of achieving high field gradients, particularly in view of future linear lepton colliders. These studies and measurements on low-power models concern the switched-power principle and multifrequency excitation of resonant cavities. 15 refs., 12 figs.

  15. A new paradigm for continuous alignment of business and IT: combining enterprise architecture modeling and enterprise ontology

    CSIR Research Space (South Africa)

    Hinkelmann, K

    2015-08-01

    Full Text Available Alignment of Business and IT: Combining Enterprise Architecture Modeling and Enterprise Ontology Knut Hinkelmann, School of Business, FHNW University of Applied Sciences and Arts Northwestern Switzerland, 4600 Olten, Switzerland and Department... initiatives, the architecture at the start of a development might not be appropriate anymore when the new business processes and information systems are rolled out. The grand challenge for today's enterprises, which we deal with in this research...

  16. Selection of an optimal neural network architecture for computer-aided detection of microcalcifications - Comparison of automated optimization techniques

    International Nuclear Information System (INIS)

    Gurcan, Metin N.; Sahiner, Berkman; Chan, Heang-Ping; Hadjiiski, Lubomir; Petrick, Nicholas

    2001-01-01

    Many computer-aided diagnosis (CAD) systems use neural networks (NNs) for either detection or classification of abnormalities. Currently, most NNs are 'optimized' by manual search in a very limited parameter space. In this work, we evaluated the use of automated optimization methods for selecting an optimal convolutional neural network (CNN) architecture. Three automated methods, the steepest descent (SD), the simulated annealing (SA), and the genetic algorithm (GA), were compared. We used as an example the CNN that classifies true and false microcalcifications detected on digitized mammograms by a prescreening algorithm. Four parameters of the CNN architecture were considered for optimization: the numbers of node groups and the filter kernel sizes in the first and second hidden layers, resulting in a search space of 432 possible architectures. The area A_z under the receiver operating characteristic (ROC) curve was used to design a cost function. The SA experiments were conducted with four different annealing schedules. Three different parent selection methods were compared for the GA experiments. An available data set was split into two groups with approximately equal numbers of samples. By using the two groups alternately for training and testing, two different cost surfaces were evaluated. For the first cost surface, the SD method was trapped in a local minimum 91% (392/432) of the time. The SA using the Boltzmann schedule selected the best architecture after evaluating, on average, 167 architectures. The GA achieved its best performance with linearly scaled roulette-wheel parent selection; however, it evaluated 391 different architectures, on average, to find the best one. The second cost surface contained no local minimum. For this surface, a simple SD algorithm could quickly find the global minimum, but the SA with the very fast reannealing schedule was still the most efficient. The same SA scheme, however, was trapped in a local minimum on the first cost
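    The search procedure described above can be illustrated with a minimal discrete simulated-annealing loop. The parameter grid and cost surface below are hypothetical stand-ins (the study's cost was derived from the ROC area A_z of a trained CNN), so this is a sketch of the technique, not the paper's implementation:

```python
import math
import random

# Hypothetical discrete search space: numbers of node groups and filter
# kernel sizes for two hidden layers (values are illustrative only).
SPACE = {
    "groups1": [2, 4, 6, 8],
    "groups2": [2, 4, 6],
    "kernel1": [3, 5, 7],
    "kernel2": [3, 5, 7],
}

def cost(arch):
    # Stand-in for 1 - A_z: a real study would train the CNN and score it
    # on ROC area here. This toy surface is minimal when every parameter
    # sits at the smallest value in its list.
    return sum((v - min(SPACE[k])) ** 2 for k, v in arch.items())

def neighbour(arch, rng):
    # Move one randomly chosen parameter to an adjacent grid value.
    new = dict(arch)
    key = rng.choice(sorted(SPACE))
    opts = SPACE[key]
    i = opts.index(new[key]) + rng.choice([-1, 1])
    new[key] = opts[min(max(i, 0), len(opts) - 1)]
    return new

def anneal(t0=1.0, cooling=0.95, steps=500, seed=0):
    rng = random.Random(seed)
    arch = {k: rng.choice(v) for k, v in SPACE.items()}
    best, best_cost, t = arch, cost(arch), t0
    for _ in range(steps):
        cand = neighbour(arch, rng)
        delta = cost(cand) - cost(arch)
        # Accept downhill moves always; uphill with Boltzmann probability.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            arch = cand
        if cost(arch) < best_cost:
            best, best_cost = arch, cost(arch)
        t *= cooling
    return best, best_cost
```

    The cooling schedule here is geometric; the paper's comparison of Boltzmann versus very fast reannealing schedules amounts to swapping the `t *= cooling` update for a different temperature law.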

  17. Architectural prototyping

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2004-01-01

    A major part of software architecture design is learning how specific architectural designs balance the concerns of stakeholders. We explore the notion of "architectural prototypes", correspondingly architectural prototyping, as a means of using executable prototypes to investigate stakeholders...

  18. Using the ACT-R architecture to specify 39 quantitative process models of decision making

    Directory of Open Access Journals (Sweden)

    Julian N. Marewski

    2011-08-01

    Full Text Available Hypotheses about decision processes are often formulated qualitatively and remain silent about the interplay of decision, memorial, and other cognitive processes. At the same time, existing decision models are specified at varying levels of detail, making it difficult to compare them. We provide a methodological primer on how detailed cognitive architectures such as ACT-R allow remedying these problems. To make our point, we address a controversy, namely, whether noncompensatory or compensatory processes better describe how people make decisions from the accessibility of memories. We specify 39 models of accessibility-based decision processes in ACT-R, including the noncompensatory recognition heuristic and various other popular noncompensatory and compensatory decision models. Additionally, to illustrate how such models can be tested, we conduct a model comparison, fitting the models to one experiment and letting them generalize to another. Behavioral data are best accounted for by race models. These race models embody the noncompensatory recognition heuristic and compensatory models as a race between competing processes, dissolving the dichotomy between existing decision models.
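    The race-model idea above — competing memory processes, with the faster retrieval determining the response — can be sketched outside ACT-R. The latency equation is the standard ACT-R form (retrieval time falls exponentially with activation); the lognormal noise and parameter values are illustrative assumptions, not the paper's fitted model:

```python
import math
import random

def retrieval_time(activation, latency_factor=1.0, rng=random):
    # ACT-R-style retrieval latency: T = F * exp(-A). Lognormal jitter
    # is an illustrative noise assumption, not an ACT-R default.
    return latency_factor * math.exp(-activation) * rng.lognormvariate(0.0, 0.25)

def race_decision(recognition_activation, knowledge_activation, rng=random):
    # Two memory processes race; the faster retrieval drives the response.
    t_rec = retrieval_time(recognition_activation, rng=rng)
    t_kno = retrieval_time(knowledge_activation, rng=rng)
    return ("recognition", t_rec) if t_rec < t_kno else ("knowledge", t_kno)
```

    With a strongly activated recognition trace, recognition wins most races, reproducing noncompensatory-looking behavior without a hard-wired stopping rule.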

  19. Lamb wave propagation modelling and simulation using parallel processing architecture and graphical cards

    International Nuclear Information System (INIS)

    Paćko, P; Bielak, T; Staszewski, W J; Uhl, T; Spencer, A B; Worden, K

    2012-01-01

    This paper demonstrates new parallel computation technology and an implementation for Lamb wave propagation modelling in complex structures. A graphical processing unit (GPU) and compute unified device architecture (CUDA), available in low-cost graphical cards in standard PCs, are used for Lamb wave propagation numerical simulations. The local interaction simulation approach (LISA) wave propagation algorithm has been implemented as an example. Other algorithms suitable for parallel discretization can also be used in practice. The method is illustrated using examples related to damage detection. The results demonstrate good accuracy and effective computational performance for very large models. The wave propagation modelling presented in the paper can be used in many practical applications of science and engineering. (paper)
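    The local-interaction property that makes such schemes GPU-friendly can be sketched with a plain NumPy stencil: each cell of a 1-D wave field is updated from its immediate neighbours only, so every cell can be computed independently (one thread per cell on a GPU). This is a generic explicit finite-difference sketch, not the paper's LISA/CUDA implementation:

```python
import numpy as np

def step(u_prev, u_curr, c=1.0, dx=1.0, dt=0.5):
    # One explicit time step of the 1-D wave equation. Each cell reads
    # only its two neighbours (a local interaction), so all cells can be
    # updated in parallel -- the property LISA-style schemes exploit.
    lap = np.zeros_like(u_curr)
    lap[1:-1] = u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]
    u_next = 2.0 * u_curr - u_prev + (c * dt / dx) ** 2 * lap
    u_next[0] = u_next[-1] = 0.0  # clamped boundaries
    return u_next

# Propagate an initial displacement pulse from the centre of the domain.
n = 101
u0 = np.zeros(n)
u0[n // 2] = 1.0
u1 = u0.copy()
for _ in range(40):
    u0, u1 = u1, step(u0, u1)
```

    The chosen time step keeps the Courant number c*dt/dx at 0.5, below the stability limit of the explicit scheme; a CUDA version would map the interior-update line onto a per-cell kernel.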

  20. Geostatistical simulation of geological architecture and uncertainty propagation in groundwater modeling

    DEFF Research Database (Denmark)

    He, Xiulan

    parameters and model structures, which are the primary focuses of this PhD research. Parameter uncertainty was analyzed using an optimization tool (PEST: Parameter ESTimation) in combination with a random sampling method (LHS: Latin Hypercube Sampling). Model structure, namely geological architecture...... be compensated by model parameters, e.g. when hydraulic heads are considered. However, geological structure is the primary source of uncertainty with respect to simulations of groundwater age and capture zone. Operational MPS based software has been on stage for just around ten years; yet, issues regarding...... geological structures of these three sites provided appropriate conditions for testing the methods. Our study documented that MPS is an efficient approach for simulating geological heterogeneity, especially for non-stationary system. The high resolution of geophysical data such as SkyTEM is valuable both...

  1. Architecture of a Process Broker for Interoperable Geospatial Modeling on the Web

    Directory of Open Access Journals (Sweden)

    Lorenzo Bigagli

    2015-04-01

    Full Text Available The identification of appropriate mechanisms for process sharing and reuse by means of composition is considered a key enabler for the effective uptake of a global Earth Observation infrastructure, currently pursued by the international geospatial research community. Modelers in need of running complex workflows may benefit from outsourcing process composition to a dedicated external service, according to the brokering approach. This work introduces our architecture of a process broker, as a distributed information system for creating, validating, editing, storing, publishing and executing geospatial-modeling workflows. The broker provides a service framework for adaptation, reuse and complementation of existing processing resources (including models and geospatial services in general in the form of interoperable, executable workflows. The described solution has been experimentally applied in several use scenarios in the context of EU-funded projects and the Global Earth Observation System of Systems.

  2. Relational Architecture

    DEFF Research Database (Denmark)

    Reeh, Henrik

    2018-01-01

    in a scholarly institution (element #3), as well as the certified PhD scholar (element #4) and the architectural profession, notably its labour market (element #5). This first layer outlines the contemporary context which allows architectural research to take place in a dynamic relationship to doctoral education...... a human and institutional development going on since around 1990 when the present PhD institution was first implemented in Denmark. To be sure, the model is centred around the PhD dissertation (element #1). But it involves four more components: the PhD candidate (element #2), his or her supervisor...... and interrelated fields in which history, place, and sound come to emphasize architecture’s relational qualities rather than the apparent three-dimensional solidity of constructed space. A third layer of relational architecture is at stake in the professional experiences after the defence of the authors...

  3. Reference architecture and interoperability model for data mining and fusion in scientific cross-domain infrastructures

    Science.gov (United States)

    Haener, Rainer; Waechter, Joachim; Grellet, Sylvain; Robida, Francois

    2017-04-01

    Interoperability is the key factor in establishing scientific research environments and infrastructures, as well as in bringing together heterogeneous, geographically distributed risk management, monitoring, and early warning systems. Based on developments within the European Plate Observing System (EPOS), a reference architecture has been devised that comprises architectural blueprints and interoperability models regarding the specification of business processes and logic as well as the encoding of data, metadata, and semantics. The architectural blueprint is developed on the basis of the so-called service-oriented architecture (SOA) 2.0 paradigm, which combines the intelligence and proactiveness of event-driven architectures with service-oriented ones. SOA 2.0 supports analysing (Data Mining) both static and real-time data in order to find correlations of disparate information that do not at first appear intuitively obvious: analysed data (e.g., seismological monitoring) can be enhanced with relationships discovered by associating them (Data Fusion) with other data (e.g., creepmeter monitoring), with digital models of geological structures, or with the simulation of geological processes. The interoperability model describes the information, communication (conversations) and the interactions (choreographies) of all participants involved as well as the processes for registering, providing, and retrieving information. It is based on the principles of functional integration, implemented via dedicated services communicating via service-oriented and message-driven infrastructures. The services provide their functionality via standardised interfaces: instead of requesting data directly, users share data via services that are built upon specific adapters. This approach replaces tight coupling at the data level with a flexible dependency on loosely coupled services.
The main component of the interoperability model is the comprehensive semantic description of the information

  4. Model of a DNA-protein complex of the architectural monomeric protein MC1 from Euryarchaea.

    Directory of Open Access Journals (Sweden)

    Françoise Paquet

    Full Text Available In Archaea the two major modes of DNA packaging are wrapping by histone proteins or bending by architectural non-histone proteins. To supplement our knowledge about the binding mode of the different DNA-bending proteins observed across the three domains of life, we present here the first model of a complex in which the monomeric Methanogen Chromosomal protein 1 (MC1 from Euryarchaea binds to the concave side of a strongly bent DNA. In laboratory growth conditions MC1 is the most abundant architectural protein present in Methanosarcina thermophila CHTI55. Like most proteins that strongly bend DNA, MC1 is known to bind in the minor groove. Interaction areas for MC1 and DNA were mapped by Nuclear Magnetic Resonance (NMR data. The polarity of protein binding was determined using paramagnetic probes attached to the DNA. The first structural model of the DNA-MC1 complex we propose here was obtained by two complementary docking approaches and is in good agreement with the experimental data previously provided by electron microscopy and biochemistry. Residues essential to DNA-binding and -bending were highlighted and confirmed by site-directed mutagenesis. It was found that the Arg25 side-chain was essential to neutralize the negative charge of two phosphates that come very close in response to a dramatic curvature of the DNA.

  5. Strategies for memory-based decision making: Modeling behavioral and neural signatures within a cognitive architecture.

    Science.gov (United States)

    Fechner, Hanna B; Pachur, Thorsten; Schooler, Lael J; Mehlhorn, Katja; Battal, Ceren; Volz, Kirsten G; Borst, Jelmer P

    2016-12-01

    How do people use memories to make inferences about real-world objects? We tested three strategies based on predicted patterns of response times and blood-oxygen-level-dependent (BOLD) responses: one strategy that relies solely on recognition memory, a second that retrieves additional knowledge, and a third, lexicographic (i.e., sequential) strategy, that considers knowledge conditionally on the evidence obtained from recognition memory. We implemented the strategies as computational models within the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture, which allowed us to derive behavioral and neural predictions that we then compared to the results of a functional magnetic resonance imaging (fMRI) study in which participants inferred which of two cities is larger. Overall, versions of the lexicographic strategy, according to which knowledge about many but not all alternatives is searched, provided the best account of the joint patterns of response times and BOLD responses. These results provide insights into the interplay between recognition and additional knowledge in memory, hinting at an adaptive use of these two sources of information in decision making. The results highlight the usefulness of implementing models of decision making within a cognitive architecture to derive predictions on the behavioral and neural level. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. A probabilistic evaluation procedure for process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2018-01-01

    Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to

  7. Modeling the impact of scaffold architecture and mechanical loading on collagen turnover in engineered cardiovascular tissues

    NARCIS (Netherlands)

    Argento, G.; de Jonge, N.; Söntjens, S.H.M.; Oomens, C.W.J.; Bouten, C.V.C.; Baaijens, F.P.T.

    2015-01-01

    The anisotropic collagen architecture of an engineered cardiovascular tissue has a major impact on its in vivo mechanical performance. This evolving collagen architecture is determined by initial scaffold microstructure and mechanical loading. Here, we developed and validated a theoretical and

  8. Laboratory infrastructure driven key performance indicator development using the smart grid architecture model

    DEFF Research Database (Denmark)

    Syed, Mazheruddin H.; Guillo-Sansano, Efren; Blair, Steven M.

    2017-01-01

    This study presents a methodology for collaboratively designing laboratory experiments and developing key performance indicators for the testing and validation of novel power system control architectures in multiple laboratory environments. The contribution makes use of the smart grid architecture...

  9. Panel 5 -- Open Architecture, Open Business Models and Collaboration for Acquisition

    National Research Council Canada - National Science Library

    Johnson, Bill; Guertin, Nick

    2007-01-01

    What do we mean by Naval Open Architecture? Naval Open Architecture is the confluence of business and technical practices yielding modular, interoperable systems that adhere to open standards with published interfaces...

  10. Architecture on Architecture

    DEFF Research Database (Denmark)

    Olesen, Karen

    2016-01-01

    that is not scientific or academic but is more like a latent body of data that we find embedded in existing works of architecture. This information, it is argued, is not limited by the historical context of the work. It can be thought of as a virtual capacity – a reservoir of spatial configurations that can...... correlation between the study of existing architectures and the training of competences to design for present-day realities.......This paper will discuss the challenges faced by architectural education today. It takes as its starting point the double commitment of any school of architecture: on the one hand the task of preserving the particular knowledge that belongs to the discipline of architecture, and on the other hand...

  11. OmniPHR: A distributed architecture model to integrate personal health records.

    Science.gov (United States)

    Roehrs, Alex; da Costa, Cristiano André; da Rosa Righi, Rodrigo

    2017-07-01

    The advances in Information and Communications Technology (ICT) have brought many benefits to the healthcare area, especially the digital storage of patients' health records. However, it is still a challenge to obtain a unified view of a patient's health history, because health data is typically scattered among different health organizations. Furthermore, there are several standards for these records, some open and others proprietary. Usually health records are stored in databases within health organizations and rarely have external access. This situation applies mainly to cases where patients' data are maintained by healthcare providers, known as EHRs (Electronic Health Records). In the case of PHRs (Personal Health Records), in which patients by definition can manage their health records, they usually have no control over their data stored in healthcare providers' databases. We therefore see two main challenges in the PHR context: first, how patients can obtain a unified view of their scattered health records, and second, how healthcare providers can access up-to-date data regarding their patients, even though changes occurred elsewhere. To address these issues, this work proposes OmniPHR, a distributed model to integrate PHRs for use by patients and healthcare providers. The scientific contribution is an architecture model to support a distributed PHR, in which patients can maintain their health history in a unified view, from any device, anywhere; likewise, healthcare providers gain the possibility of having their patients' data interconnected among health organizations. The evaluation demonstrates the feasibility of the model in maintaining distributed health records in an architecture that promotes a unified view of the PHR with elasticity and scalability. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory, the increase in the complexity of the models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods, focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...
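    As a concrete instance of the condensation family this record surveys, static (Guyan) condensation eliminates "slave" degrees of freedom by assuming they carry no applied load, leaving a reduced stiffness matrix on the "master" set: K_red = K_mm - K_ms K_ss^{-1} K_sm. A minimal sketch (the 3-DOF spring chain is an illustrative example, not taken from the book):

```python
import numpy as np

def guyan_reduce(k, masters):
    # Static (Guyan) condensation: retain the "master" DOFs and eliminate
    # the remaining "slave" DOFs by assuming they carry no applied load:
    #   K_red = K_mm - K_ms K_ss^{-1} K_sm
    n = k.shape[0]
    slaves = [i for i in range(n) if i not in masters]
    kmm = k[np.ix_(masters, masters)]
    kms = k[np.ix_(masters, slaves)]
    ksm = k[np.ix_(slaves, masters)]
    kss = k[np.ix_(slaves, slaves)]
    return kmm - kms @ np.linalg.solve(kss, ksm)

# Stiffness matrix of a 3-DOF spring chain; condense out the middle DOF.
k = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
k_red = guyan_reduce(k, [0, 2])
```

    The reduction is exact for static loads applied at the masters; dynamic, SEREP and iterative variants improve on it when inertia at the slave DOFs matters.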

  13. Modeling safety instrumented systems with MooN voting architectures addressing system reconfiguration for testing

    International Nuclear Information System (INIS)

    Torres-Echeverria, A.C.; Martorell, S.; Thompson, H.A.

    2011-01-01

    This paper addresses the modeling of the probability of dangerous failure on demand and the spurious trip rate of safety instrumented systems that include MooN voting redundancies in their architecture. MooN systems are a special case of k-out-of-n systems. The first part of the article is devoted to the development of a time-dependent probability of dangerous failure on demand model capable of handling MooN systems. The model can represent common cause failure and diagnostic coverage explicitly, as well as different test frequencies and strategies. It includes quantification of both detected and undetected failures, and puts emphasis on quantifying the contribution of common cause failure to the system probability of dangerous failure on demand as an additional component. In order to accommodate changes in testing strategies, special treatment is devoted to the analysis of system reconfiguration (including common cause failure) during the test of one of its components, which is then included in the model. Another model, for the spurious trip rate, is also analyzed and extended under the same methodology in order to give it similar capabilities. These two models are powerful yet simple enough to be suitable for handling dependability measures in multi-objective optimization of both system design and test strategies for safety instrumented systems. The level of modeling detail considered permits compliance with the requirements of the standard IEC 61508. The two models are applied to brief case studies to demonstrate their effectiveness. The results obtained demonstrated that the first model is adequate to quantify the time-dependent PFD of MooN systems during different system states (i.e. full operation, test and repair) and different MooN configurations, whose values are averaged to obtain the PFD_avg.
Also, it was demonstrated that the second model is adequate to quantify STR including spurious trips induced by internal component failure and
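    The flavour of such MooN dependability models can be conveyed with the common textbook approximation: a binomial term for independent dangerous undetected failures over the proof-test interval plus a beta-factor common-cause term. This simple-formula sketch (IEC 61508-style) deliberately omits the time dependence, diagnostic coverage, and test-time reconfiguration that the paper's model handles:

```python
from math import comb

def pfd_avg_moon(m, n, lam_du, ti, beta=0.0):
    # Average probability of dangerous failure on demand for a MooN
    # voted group (m of n channels must work). Approximation: a binomial
    # term for independent dangerous undetected failures over the proof-
    # test interval ti, plus a beta-factor common-cause term that
    # defeats all channels at once.
    k = n - m + 1                    # channel failures that fail the group
    lam_ind = (1.0 - beta) * lam_du  # independent share of the failure rate
    pfd_ind = comb(n, k) * (lam_ind * ti) ** k / (k + 1)
    pfd_ccf = beta * lam_du * ti / 2.0
    return pfd_ind + pfd_ccf

# Example: 1oo2 voting, lambda_DU = 1e-6 per hour, annual proof test.
pfd_1oo2 = pfd_avg_moon(1, 2, 1e-6, 8760.0, beta=0.1)
```

    For 1oo1 with beta = 0 this collapses to the familiar lambda_DU * TI / 2; adding redundancy shrinks the independent term sharply, which is why the common-cause term usually dominates redundant architectures.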

  14. Architectural fragments

    DEFF Research Database (Denmark)

    Bang, Jacob Sebastian

    2018-01-01

    I have created a large collection of plaster models: a collection of obstructions, errors and opportunities that may develop into architecture. The models are fragments of different complex shapes as well as simpler circular models with different profiling and diameters. In this context I have....... I try to invent ways of drawing the models - that decode and unfold them into architectural fragments - into future buildings or constructions in the landscape. [1] Luigi Moretti: Italian architect, 1907 - 1973 [2] Man Ray: American artist, 1890 - 1976. In 2015, I saw the wonderful exhibition...... "Man Ray - Human Equations" at the Glyptotek in Copenhagen, organized by the Phillips Collection in Washington D.C. and the Israel Museum in Jerusalem (in 2013). See also: "Man Ray - Human Equations" catalogue published by Hatje Cantz Verlag, Germany, 2014....

  15. Analysis of optical near-field energy transfer by stochastic model unifying architectural dependencies

    Energy Technology Data Exchange (ETDEWEB)

    Naruse, Makoto, E-mail: naruse@nict.go.jp [Photonic Network Research Institute, National Institute of Information and Communications Technology, 4-2-1 Nukui-kita, Koganei, Tokyo 184-8795 (Japan); Nanophotonics Research Center, Graduate School of Engineering, The University of Tokyo, 2-11-16 Yayoi, Bunkyo-ku, Tokyo 113-8656 (Japan); Akahane, Kouichi; Yamamoto, Naokatsu [Photonic Network Research Institute, National Institute of Information and Communications Technology, 4-2-1 Nukui-kita, Koganei, Tokyo 184-8795 (Japan); Holmström, Petter [Laboratory of Photonics and Microwave Engineering, Royal Institute of Technology (KTH), SE-164 40 Kista (Sweden); Thylén, Lars [Laboratory of Photonics and Microwave Engineering, Royal Institute of Technology (KTH), SE-164 40 Kista (Sweden); Hewlett-Packard Laboratories, Palo Alto, California 94304 (United States); Huant, Serge [Institut Néel, CNRS and Université Joseph Fourier, 25 rue des Martyrs BP 166, 38042 Grenoble Cedex 9 (France); Ohtsu, Motoichi [Nanophotonics Research Center, Graduate School of Engineering, The University of Tokyo, 2-11-16 Yayoi, Bunkyo-ku, Tokyo 113-8656 (Japan); Department of Electrical Engineering and Information Systems, Graduate School of Engineering, The University of Tokyo, 2-11-16 Yayoi, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2014-04-21

    We theoretically and experimentally demonstrate energy transfer mediated by optical near-field interactions in a multi-layer InAs quantum dot (QD) structure composed of a single layer of larger dots and N layers of smaller ones. We construct a stochastic model in which optical near-field interactions that follow a Yukawa potential, QD size fluctuations, and temperature-dependent energy level broadening are unified, enabling us to examine device-architecture-dependent energy transfer efficiencies. The model results are consistent with the experiments. This study provides an insight into optical energy transfer involving inherent disorders in materials and paves the way to systematic design principles of nanophotonic devices that will allow optimized performance and the realization of designated functions.

  16. Analysis of optical near-field energy transfer by stochastic model unifying architectural dependencies

    International Nuclear Information System (INIS)

    Naruse, Makoto; Akahane, Kouichi; Yamamoto, Naokatsu; Holmström, Petter; Thylén, Lars; Huant, Serge; Ohtsu, Motoichi

    2014-01-01

    We theoretically and experimentally demonstrate energy transfer mediated by optical near-field interactions in a multi-layer InAs quantum dot (QD) structure composed of a single layer of larger dots and N layers of smaller ones. We construct a stochastic model in which optical near-field interactions that follow a Yukawa potential, QD size fluctuations, and temperature-dependent energy level broadening are unified, enabling us to examine device-architecture-dependent energy transfer efficiencies. The model results are consistent with the experiments. This study provides an insight into optical energy transfer involving inherent disorders in materials and paves the way to systematic design principles of nanophotonic devices that will allow optimized performance and the realization of designated functions

  17. Estimating Fallout Building Attributes from Architectural Features and Global Earthquake Model (GEM) Building Descriptions

    Energy Technology Data Exchange (ETDEWEB)

    Dillon, Michael B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kane, Staci R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-03-01

    A nuclear explosion has the potential to injure or kill tens to hundreds of thousands (or more) of people through exposure to fallout (external gamma) radiation. Existing buildings can protect their occupants (reducing fallout radiation exposures) by placing material and distance between fallout particles and individuals indoors. Prior efforts have determined an initial set of building attributes suitable to reasonably assess a given building’s protection against fallout radiation. The current work provides methods to determine the quantitative values for these attributes from (a) common architectural features and data and (b) buildings described using the Global Earthquake Model (GEM) taxonomy. These methods will be used to improve estimates of fallout protection for operational US Department of Defense (DoD) and US Department of Energy (DOE) consequence assessment models.

  18. Modeling, analysis and feasibility study of new drivetrain architectures for off-highway vehicles

    International Nuclear Information System (INIS)

    Hegazy, Omar; Barrero, Ricardo; Van den Bossche, Peter; El Baghdadi, Mohamed; Smekens, Jelle; Van Mierlo, Joeri; Vriens, Wouter; Bogaerts, Bruno

    2016-01-01

    Electrified powertrains/propulsion systems have gained significant interest in the transport industry as a route to energy-efficient vehicular systems. In these powertrains, rechargeable energy storage systems (such as high-energy batteries, electrochemical double-layer capacitors or high-power batteries), electric motors, energy management strategies and advanced power electronic converters play an important role in the development of new generations of clean vehicles. This paper therefore proposes new drivetrain architectures for Straddle Truck Carriers, a class of off-highway vehicles used in harbors to move containers, in order to improve Straddle Truck Carrier drivetrain efficiency and to reduce greenhouse gas emissions as well as energy bills. The proposed drivetrains are: (1) a series hybrid Straddle Truck Carrier based on a small rechargeable energy storage system (option A), (2) a series hybrid based on a reduced-size internal combustion engine (option B), (3) a full electric drivetrain, and (4) a new full electric drivetrain based on a dynamic wireless power transfer system. An accurate quasi-static model of the conventional Straddle Truck Carrier drivetrain is developed and described in detail, and the proposed drivetrain architectures are then designed and modeled in Matlab/Simulink. The article also presents the optimal design of the rechargeable energy storage systems that can be utilized in those drivetrains and, based on the storage system type, a thorough comparative study of the new Straddle Truck Carrier drivetrains. Finally, the developed model and simulation results are validated against real measurements of the drivetrain. - Highlights: • Development and validation of an accurate electrical model for a Straddle Truck Carrier drivetrain. • New series hybrid and electric Straddle Truck Carrier drivetrains are proposed. • Optimal design and powertrain sizing are presented. • Thorough

  19. EXCHANGE-RATES FORECASTING: EXPONENTIAL SMOOTHING TECHNIQUES AND ARIMA MODELS

    Directory of Open Access Journals (Sweden)

    Dezsi Eva

    2011-07-01

    Full Text Available Exchange rate forecasting is, and has long been, a challenging task in finance. Statistical and econometric models are widely used in the analysis and forecasting of foreign exchange rates. This paper investigates the behavior of daily exchange rates of the Romanian Leu against the Euro, United States Dollar, British Pound, Japanese Yen, Chinese Renminbi and the Russian Ruble. Smoothing techniques are generated and compared with each other. These models include the Simple Exponential Smoothing technique, the Double Exponential Smoothing technique, the Simple Holt-Winters and the Additive Holt-Winters, as well as the Autoregressive Integrated Moving Average model.
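    As a concrete illustration of two of the smoothing techniques the paper compares, here is a minimal sketch of simple exponential smoothing and double (Holt's) exponential smoothing. The series values and smoothing constants are hypothetical, not the Leu exchange rates from the study:

```python
def simple_exponential_smoothing(series, alpha):
    """One-step-ahead forecasts: level_t = alpha*y_t + (1-alpha)*level_{t-1}."""
    level = series[0]
    forecasts = [level]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        forecasts.append(level)
    return forecasts

def double_exponential_smoothing(series, alpha, beta):
    """Holt's linear method: separate smoothing of level and trend."""
    level, trend = series[0], series[1] - series[0]
    forecasts = [level + trend]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        forecasts.append(level + trend)
    return forecasts

rates = [4.20, 4.25, 4.23, 4.30, 4.28, 4.35]  # hypothetical daily series
print(simple_exponential_smoothing(rates, alpha=0.5)[-1])
print(double_exponential_smoothing(rates, alpha=0.5, beta=0.3)[-1])
```

    In practice alpha and beta are chosen by minimizing in-sample forecast error, which is how such variants are compared against ARIMA models.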

  20. Analytical method of CIM to PIM transformation in Model Driven Architecture (MDA)

    Directory of Open Access Journals (Sweden)

    Martin Kardos

    2010-06-01

    Full Text Available Information system models at higher levels of abstraction have become a daily routine in many software companies. The concept of Model Driven Architecture (MDA), published by the standardization body OMG in 2001, has become a framework for the creation of software applications and information systems. MDA specifies four levels of abstraction: the top three levels are created as graphical models and the last one as an implementation code model. Much MDA research focuses on the lower levels and the transformations between them. The top level of abstraction, called the Computation Independent Model (CIM), and its transformation to the lower level, called the Platform Independent Model (PIM), is not such an extensive research topic. Considering the great importance and usability of this level in the practice of information system development, our research activity is focused on this highest level of abstraction, the CIM, and its possible transformation to the lower PIM level. In this article we present a possible solution for CIM modeling and an analytic method for its transformation to PIM. Keywords: transformation, MDA, CIM, PIM, UML, DFD.

  1. Multi-Sensor As-Built Models of Complex Industrial Architectures

    Directory of Open Access Journals (Sweden)

    Jean-François Hullo

    2015-12-01

    Full Text Available In the context of increased maintenance operations and generational renewal work, a nuclear owner and operator, like Electricité de France (EDF), is invested in the scaling-up of tools and methods of “as-built virtual reality” for whole buildings and large audiences. In this paper, we first present the state of the art of scanning tools and methods used to represent a very complex architecture. Then, we propose a methodology and assess it in a large experiment carried out on the most complex building of a 1300-megawatt power plant, an 11-floor reactor building. We also present several developments that made possible the acquisition, processing and georeferencing of multiple data sources (1000+ 3D laser scans and RGB panoramics, total-station surveying, 2D floor plans and the 3D reconstruction of CAD as-built models. In addition, we introduce new concepts for user interaction with complex architecture, elaborated during the development of an application that allows a painless exploration of the whole dataset by professionals unfamiliar with such data types. Finally, we discuss the main feedback items from this large experiment, the remaining issues for the generalization of such large-scale surveys and the future technical and scientific challenges in the field of industrial “virtual reality”.

  2. EASEE: an open architecture approach for modeling battlespace signal and sensor phenomenology

    Science.gov (United States)

    Waldrop, Lauren E.; Wilson, D. Keith; Ekegren, Michael T.; Borden, Christian T.

    2017-04-01

    Open architecture in the context of defense applications encourages collaboration across government agencies and academia. This paper describes a success story in the implementation of an open architecture framework that fosters transparency and modularity in the context of Environmental Awareness for Sensor and Emitter Employment (EASEE), a complex physics-based software package for modeling the effects of terrain and atmospheric conditions on signal propagation and sensor performance. Among the highlighted features in this paper are: (1) a code refactorization to separate sensitive parts of EASEE, thus allowing collaborators the opportunity to view and interact with non-sensitive parts of the EASEE framework with the end goal of supporting collaborative innovation, (2) a data exchange and validation effort to enable the dynamic addition of signatures within EASEE, thus supporting a modular notion that components can be easily added to or removed from the software without requiring recompilation by developers, and (3) a flexible and extensible XML interface, which aids in decoupling graphical user interfaces from EASEE's calculation engine, and thus encourages adaptability to many different defense applications. In addition to the outlined points above, this paper also addresses EASEE's ability to interface with proprietary systems such as ArcGIS. A specific use case regarding the implementation of an ArcGIS toolbar that leverages EASEE's XML interface and enables users to set up an EASEE-compliant configuration for probability of detection or optimal sensor placement calculations in various modalities is discussed as well.

  3. Readout Architecture for Hybrid Pixel Readout Chips

    CERN Document Server

    AUTHOR|(SzGeCERN)694170; Westerlund, Tomi; Wyllie, Ken

    The original contribution of this thesis to knowledge is a set of novel digital readout architectures for hybrid pixel readout chips. The thesis presents an asynchronous bus-based architecture, a data-node-based column architecture and a network-based pixel matrix architecture for data transportation. It is shown that the data-node architecture achieves a readout efficiency of 99% with half the output rate of a bus-based system. The network-based solution avoids "broken" columns due to manufacturing errors, and it distributes internal data traffic more evenly across the pixel matrix than column-based architectures. An improvement of >10% in efficiency is achieved with both uniform and non-uniform hit occupancies. Architectural design has been done using transaction-level modeling (TLM) and sequential high-level design techniques to reduce design and simulation time. It has been possible to simulate tens of column and full-chip architectures using these high-level techniques. A decrease of >10 in run-time...

  4. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    Campbell and Shiller (1987) proposed a graphical technique for the present value model, which consists of plotting estimates of the spread and the theoretical spread as calculated from the cointegrated vector autoregressive model without imposing the restrictions implied by the present value model. In addition to giving a visual impression of the fit of the model, the purpose is to see whether the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectations models and give a general definition of spread...
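    The correlation and variance-ratio comparison described above can be computed directly. This is a generic sketch with our own function name; the noise ratio has a model-specific definition in the paper, so only the first two measures are shown:

```python
import statistics

def spread_fit_statistics(spread, theoretical_spread):
    """Correlation and variance ratio between the actual spread and the
    theoretical spread implied by a present value model."""
    n = len(spread)
    ms = statistics.fmean(spread)
    mt = statistics.fmean(theoretical_spread)
    cov = sum((s - ms) * (t - mt)
              for s, t in zip(spread, theoretical_spread)) / (n - 1)
    var_s = statistics.variance(spread)
    var_t = statistics.variance(theoretical_spread)
    correlation = cov / (var_s * var_t) ** 0.5
    variance_ratio = var_t / var_s
    return correlation, variance_ratio
```

    A correlation near one and a variance ratio near one indicate that the unrestricted model reproduces the spread the theory predicts.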

  5. Modelization of three-dimensional bone micro-architecture using Markov random fields with a multi-level clique system

    International Nuclear Information System (INIS)

    Lamotte, T.; Dinten, J.M.; Peyrin, F.

    2004-01-01

    Imaging trabecular bone micro-architecture in vivo and non-invasively is still a challenging issue due to the complexity and small size of the structure. A realistic 3D model of bone micro-architecture could therefore be useful in image segmentation or image reconstruction. The goal of this work was to develop a 3D model of trabecular bone micro-architecture, which can be seen as a problem of texture synthesis. We investigated a statistical model based on 3D Markov random fields (MRFs). By the Hammersley-Clifford theorem, MRFs may equivalently be defined by an energy function on some set of cliques. In order to model 3D binary bone texture images (bone/background), we first used a particular well-known subclass of MRFs: the Ising model. The local energy function at a voxel depends on the voxel's closest neighbors and on parameters which control the shape and the proportion of bone. However, simulations yielded textures organized as connected clusters which, even when the parameters were varied, did not approach the complexity of bone micro-architecture. We then introduced a second level of cliques, taking into account neighbors located at some distance d from the site s, together with a new set of cliques allowing control of plate thickness and spacing. The 3D bone texture images generated with the proposed model were analyzed using the usual bone-architecture quantification tools in order to relate the parameters of the MRF model to the characteristic parameters of bone micro-architecture (trabecular spacing, trabecular thickness, number of trabeculae...). (authors)
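    A minimal sketch of the first modeling stage described above: sampling a binary 3D texture from a nearest-neighbour Ising MRF with a Gibbs sampler. The lattice size, coupling beta and sweep count are illustrative, and the paper's second level of cliques at distance d is omitted here:

```python
import math
import random

def gibbs_ising_3d(size=8, beta=0.8, sweeps=20, seed=0):
    """Sample a binary 3D texture (bone = +1, background = -1) from a
    nearest-neighbour Ising MRF with a single-site Gibbs sampler."""
    rng = random.Random(seed)
    grid = [[[rng.choice([-1, 1]) for _ in range(size)]
             for _ in range(size)] for _ in range(size)]
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for _ in range(sweeps):
        for x in range(size):
            for y in range(size):
                for z in range(size):
                    # Sum the 6 nearest neighbours (periodic boundaries).
                    s = sum(grid[(x + dx) % size][(y + dy) % size][(z + dz) % size]
                            for dx, dy, dz in offsets)
                    # Conditional P(voxel = +1 | neighbours) for the Ising energy.
                    p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * s))
                    grid[x][y][z] = 1 if rng.random() < p_up else -1
    return grid
```

    As the abstract notes, this first-order model produces blob-like clusters; reproducing plate-like trabeculae is what motivates the longer-range clique system.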

  6. Using a cognitive architecture in educational and recreational games : How to incorporate a model in your App

    NARCIS (Netherlands)

    Taatgen, Niels A.; de Weerd, Harmen; Reitter, David; Ritter, Frank

    2016-01-01

    We present a Swift re-implementation of the ACT-R cognitive architecture, which can be used to quickly build iOS Apps that incorporate an ACT-R model as a core feature. We discuss how this implementation can be used in an example model, and explore the breadth of possibilities by presenting six Apps.

  7. Editorial - Special Issue on Model-driven Service-oriented architectures

    NARCIS (Netherlands)

    Andrade Almeida, João; Ferreira Pires, Luis; van Sinderen, Marten J.; Steen, M.W.A.

    2009-01-01

    Model-driven approaches to software development have proliferated in recent years owing to the availability of techniques based on metamodelling and model transformations, such as the Meta-Object Facility (MOF) and Query/View/Transformation (QVT) standards. During the same period,

  8. Extraction of fibre network architecture by X-ray tomography and prediction of elastic properties using an affine analytical model

    International Nuclear Information System (INIS)

    Tsarouchas, D.; Markaki, A.E.

    2011-01-01

    This paper proposes a method for extracting reliable architectural characteristics from complex porous structures using micro-computed tomography (μCT) images. The work focuses on a highly porous material composed of a network of fibres bonded together. The segmentation process, allowing separation of the fibres from the remainder of the image, is the most critical step in constructing an accurate representation of the network architecture. Segmentation methods, based on local and global thresholding, were investigated and evaluated by a quantitative comparison of the architectural parameters they yielded, such as the fibre orientation and segment length (sections between joints) distributions and the number of inter-fibre crossings. To improve segmentation accuracy, a deconvolution algorithm was proposed to restore the original images. The efficacy of the proposed method was verified by comparing μCT network architectural characteristics with those obtained using high resolution CT scans (nanoCT). The results indicate that this approach resolves the architecture of these complex networks and produces results approaching the quality of nanoCT scans. The extracted architectural parameters were used in conjunction with an affine analytical model to predict the axial and transverse stiffnesses of the fibre network. Transverse stiffness predictions were compared with experimentally measured values obtained by vibration testing.
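    The segmentation comparison above contrasts local and global thresholding of the μCT images. As a concrete example of a global method, here is a sketch of Otsu's threshold selection over a grey-level histogram; this is a standard technique, and the paper does not state that this specific method was the one evaluated:

```python
def otsu_threshold(histogram):
    """Otsu's global threshold: pick the grey level that maximizes the
    between-class variance of the two classes it induces.
    `histogram` is a list of voxel counts per intensity bin."""
    total = sum(histogram)
    total_sum = sum(i * c for i, c in enumerate(histogram))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t, count in enumerate(histogram):
        w0 += count          # class-0 weight (bins <= t)
        sum0 += t * count    # class-0 intensity sum
        if w0 == 0 or w0 == total:
            continue
        mean0 = sum0 / w0
        mean1 = (total_sum - sum0) / (total - w0)
        between = w0 * (total - w0) * (mean0 - mean1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# A bimodal histogram (fibre peak vs. background peak) splits between the modes:
print(otsu_threshold([0, 5, 20, 5, 0, 0, 0, 0, 0, 0, 5, 20, 5, 0, 0, 0]))
```

    Voxels at or below the returned bin go to one class (e.g. background) and the rest to the other, separating the fibres for the subsequent architectural analysis.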

  9. Advanced customization in architectural design and construction

    CERN Document Server

    Naboni, Roberto

    2015-01-01

    This book presents the state of the art in advanced customization within the sector of architectural design and construction, explaining important new technologies that are boosting design, product and process innovation and identifying the challenges to be confronted as we move toward a mass customization construction industry. Advanced machinery and software integration are discussed, as well as an overview of the manufacturing techniques offered through digital methods that are acquiring particular significance within the field of digital architecture. CNC machining, Robotic Fabrication, and Additive Manufacturing processes are all clearly explained, highlighting their ability to produce personalized architectural forms and unique construction components. Cutting-edge case studies in digitally fabricated architectural realizations are described and, looking towards the future, a new model of 100% customized architecture for design and construction is presented. The book is an excellent guide to the profoun...

  10. A Model-Based Systems Engineering Methodology for Employing Architecture In System Analysis: Developing Simulation Models Using Systems Modeling Language Products to Link Architecture and Analysis

    Science.gov (United States)

    2016-06-01

    The spiral model of system development was first introduced by Boehm. Because incorporating system capabilities into the waterfall model would prove quite difficult, the spiral model assumes that available technologies will change over the

  11. Insights into Working Memory from The Perspective of The EPIC Architecture for Modeling Skilled Perceptual-Motor and Cognitive Human Performance

    National Research Council Canada - National Science Library

    Kieras, David

    1998-01-01

    Computational modeling of human perceptual-motor and cognitive performance based on a comprehensive detailed information- processing architecture leads to new insights about the components of working memory...

  12. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing the graphic representation of buildings and other objects in 2.5D or 3D. Generally, three main geomatics approaches are used for generating virtual 3D city models: in the first, researchers use conventional techniques such as vector map data, DEMs and aerial images; the second is based on high-resolution satellite images with laser scanning; and in the third, terrestrial images are used through close-range photogrammetry with DSM and texture mapping. This paper starts with an introduction to the various geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the degree of automation (automatic, semi-automatic and manual methods), and another based on data input techniques (photogrammetry and laser techniques). The paper gives an overview of the techniques for generating virtual 3D city models using geomatics techniques, together with the applications of such models, and closes with conclusions, an analysis and the present trend in 3D city modeling. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern geomatics techniques play a major role in creating a virtual 3D city model. Each technique and method has some advantages and some drawbacks. The point cloud model is a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3...

  13. Mathematical modelling of Bit-Level Architecture using Reciprocal Quantum Logic

    Science.gov (United States)

    Narendran, S.; Selvakumar, J.

    2018-04-01

    High-performance computing is in high demand for both speed and energy efficiency. Reciprocal Quantum Logic (RQL) is one technology that offers high speed and zero static power dissipation. RQL uses an AC power supply as input rather than a DC input, and it has three sets of basic gates. Series of reciprocal transmission lines are placed between the gates to avoid loss of power and to achieve high speed. An analytical model of a bit-level architecture is developed using RQL. The major drawback of Reciprocal Quantum Logic is area: achieving a proper power supply requires splitters, which occupy a large area. Distributed arithmetic performs vector-vector multiplication in which one vector is constant and the other is a signed variable; each word acts as a binary number, and the words are rearranged and mixed to form the distributed system. Distributed arithmetic is widely used in convolution and high-performance computational devices.
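    The bit-serial, look-up-table formulation of distributed arithmetic mentioned above can be sketched in software: the constant coefficient vector is folded into a table of partial sums, and the variable inputs are consumed one bit position at a time, with the two's-complement sign bit weighted negatively. The function name and word width are illustrative:

```python
def distributed_arithmetic_dot(coeffs, xs, word_bits=8):
    """Bit-serial dot product sum(c_i * x_i) over two's-complement inputs,
    using a precomputed look-up table (LUT) of coefficient partial sums."""
    n = len(coeffs)
    # LUT entry for an address: sum of coefficients whose input bit is set.
    lut = [sum(c for i, c in enumerate(coeffs) if (addr >> i) & 1)
           for addr in range(1 << n)]
    acc = 0
    for b in range(word_bits):
        # One bit-slice across all inputs forms the LUT address.
        addr = sum(((x >> b) & 1) << i for i, x in enumerate(xs))
        weight = -(1 << b) if b == word_bits - 1 else (1 << b)  # sign bit is negative
        acc += lut[addr] * weight
    return acc

# 3*2 + 5*(-1) + (-2)*3, with -1 encoded as 255 in 8-bit two's complement:
print(distributed_arithmetic_dot([3, 5, -2], [2, 255, 3]))  # -5
```

    In hardware the LUT replaces the multipliers entirely, which is why the technique suits convolution-style kernels with fixed coefficients.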

  14. Enhanced Engine Performance During Emergency Operation Using a Model-Based Engine Control Architecture

    Science.gov (United States)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of an aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40k (CMAPSS40k) and features an optimal tuner Kalman filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally conservative engine operating limits may be relaxed to increase the performance of the engine and the overall survivability of the aircraft; this comes at the cost of additional risk of engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measurable. Estimating the unknown parameters allows for tighter control over these parameters, and over the level of risk at which the engine operates. This allows the engine to achieve better performance than is possible when operating to more conservative limits on a related, measurable parameter.
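    The optimal tuner Kalman filter above is specific to CMAPSS40k; as a generic illustration of the underlying estimation idea, here is a minimal scalar Kalman filter tracking a quantity through a noisy related measurement. All noise variances and the random-walk state model are hypothetical:

```python
def scalar_kalman(measurements, process_var=1e-4, meas_var=0.04, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter with a random-walk state model,
    illustrating how an unmeasured quantity is tracked from a noisy sensor."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += process_var              # predict: state modeled as a random walk
        k = p / (p + meas_var)        # Kalman gain
        x += k * (z - x)              # correct with the measurement innovation
        p *= (1.0 - k)                # posterior variance
        estimates.append(x)
    return estimates

# Converges toward a steady sensor reading of 1.0 from an initial guess of 0.0:
print(scalar_kalman([1.0] * 50)[-1])
```

    A tight, low-variance estimate of an unmeasured parameter is what lets a controller run closer to a true limit instead of a conservative surrogate limit.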

  15. A mental architecture modeling of inference of sensory stimuli to the teaching of the deaf

    Directory of Open Access Journals (Sweden)

    Rubens Santos Guimaraes

    2016-10-01

    Full Text Available The transmission and retention of knowledge rest on the cognitive grasp of the concepts linked to it; the repeatability of their application builds a solid foundation for education, according to the cultural and behavioral standards set by society. This cognitive ability to draw inferences from what we observe and perceive is regarded as intrinsic to human beings, independent of their physical capacities. This article presents a conceptual model, Mental Architecture Digitized (AMD), able to reproduce inferences about the sensory stimuli of deaf people, focused on the implementation of a web system that aims to improve teaching and learning for students with hearing disabilities. In this proposal, we evaluate the contextual aspects experienced by learners during the interactions between the constituent elements of a study session, based on experiments with two deaf students enrolled in a regular high school. The results allow us to infer the potential of a computer communications environment to expand the possibilities of social inclusion for these students.

  16. CFD modelling and wind tunnel validation of airflow through plant canopies using 3D canopy architecture

    International Nuclear Information System (INIS)

    Endalew, A. Melese; Hertog, M.; Delele, M.A.; Baetens, K.; Persoons, T.; Baelmans, M.; Ramon, H.; Nicolai, B.M.; Verboven, P.

    2009-01-01

    The efficiency of pesticide application to agricultural fields and the resulting environmental contamination highly depend on atmospheric airflow. A computational fluid dynamics (CFD) model of airflow within plant canopies using 3D canopy architecture was developed to understand the effect of the canopy on airflow. The model's average air velocity was validated against experimental results in a wind tunnel with two artificial model trees of 24 cm height. Mean air velocities and their root mean square (RMS) values were measured on a vertical plane on the upstream and downstream sides of the trees in the tunnel using a 2D hotwire anemometer, after imposing a uniform air velocity of 10 m s-1 at the inlet. 3D virtual canopy geometries of the artificial trees were modelled and introduced into a computational fluid domain, whereby airflow through the trees was simulated using the Reynolds-Averaged Navier-Stokes (RANS) equations and the k-ε turbulence model. There was good agreement of the average longitudinal velocity, U, between the measurements and the simulation results, with relative errors of less than 2% for the upstream and 8% for the downstream sides of the trees. The accuracy of the model prediction for the turbulence kinetic energy k and turbulence intensity I was acceptable within the tree height when using a roughness length (y0 = 0.02 mm) for the surface roughness of the tree branches and applying a source model in a porous sub-domain created around the trees. The approach was applied to full-scale orchard trees in the atmospheric boundary layer (ABL) and compared with previous approaches and works. The simulation in the ABL used two groups of full-scale orchard trees: short (h = 3 m) with wider branching and tall (h = 4 m) with narrow branching. This comparison showed good qualitative agreement in the vertical profiles of U, with small local differences as expected due to the spatial disparities in tree architecture. This work was able to show airflow within and above the

  17. Fiber-wireless integrated mobile backhaul network based on a hybrid millimeter-wave and free-space-optics architecture with an adaptive diversity combining technique.

    Science.gov (United States)

    Zhang, Junwen; Wang, Jing; Xu, Yuming; Xu, Mu; Lu, Feng; Cheng, Lin; Yu, Jianjun; Chang, Gee-Kung

    2016-05-01

    We propose and experimentally demonstrate a novel fiber-wireless integrated mobile backhaul network based on a hybrid millimeter-wave (MMW) and free-space-optics (FSO) architecture using an adaptive combining technique. Both 60 GHz MMW and FSO links are demonstrated and fully integrated with optical fibers in a scalable and cost-effective backhaul system setup. Joint signal processing with an adaptive diversity combining technique (ADCT) is utilized at the receiver side based on a maximum ratio combining algorithm. Mobile backhaul transportation of 4-Gb/s 16 quadrature amplitude modulation frequency-division multiplexing (QAM-OFDM) data is experimentally demonstrated and tested under various weather conditions synthesized in the lab. Performance improvement in terms of reduced error vector magnitude (EVM) and enhanced link reliability are validated under fog, rain, and turbulence conditions.
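    The maximum-ratio-combining step at the heart of the ADCT can be sketched for real-valued branches: each diversity branch is weighted by its gain divided by its noise variance, which maximizes the post-combining SNR (complex channels would use conjugated gains). All values below are illustrative, not the paper's measured link parameters:

```python
def maximum_ratio_combine(branch_signals, branch_gains, noise_vars):
    """Weight each diversity branch by gain / noise variance (MRC) and
    normalize, maximizing post-combining SNR for real-valued channels."""
    weights = [h / nv for h, nv in zip(branch_gains, noise_vars)]
    norm = sum(w * h for w, h in zip(weights, branch_gains))
    return [sum(w * s for w, s in zip(weights, samples)) / norm
            for samples in zip(*branch_signals)]

# Hypothetical noiseless example: an MMW branch and a fog-faded FSO branch.
symbols = [1.0, -1.0, 1.0, 1.0]            # transmitted symbols
mmw = [0.9 * s for s in symbols]           # MMW branch, gain 0.9
fso = [0.4 * s for s in symbols]           # FSO branch, gain 0.4
print(maximum_ratio_combine([mmw, fso], [0.9, 0.4], [0.1, 0.3]))
```

    Because the weights track each branch's instantaneous quality, a branch degraded by fog or turbulence is automatically de-emphasized rather than switched off, which is what yields the reported EVM and reliability gains.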

  18. High Resolution Genomic Scans Reveal Genetic Architecture Controlling Alcohol Preference in Bidirectionally Selected Rat Model.

    Directory of Open Access Journals (Sweden)

    Chiao-Ling Lo

    2016-08-01

    Full Text Available Investigations on the influence of nature vs. nurture on Alcoholism (Alcohol Use Disorder) in humans have yet to provide a clear view of potential genomic etiologies. To address this issue, we sequenced a replicated animal model system bidirectionally selected for alcohol preference (AP). This model is uniquely suited to map genetic effects with high reproducibility and resolution. The origin of the rat lines (an 8-way cross) resulted in small haplotype blocks (HB) with a corresponding high level of resolution. We sequenced DNAs from 40 samples (10 per line of each replicate) to determine allele frequencies and HB. We achieved ~46X coverage per line and replicate. Regions of excessive differentiation in the genomic architecture between lines, across replicates, termed signatures of selection (SS), were classified according to gene and region. We identified SS in 930 genes associated with AP. The majority (50%) of the SS were confined to single-gene regions, the greatest numbers of which were in promoters (284) and intronic regions (169), with the least in exons (4), suggesting that differences in AP were primarily due to alterations in regulatory regions. We confirmed previously identified genes and found many new genes associated with AP. Of those newly identified genes, several demonstrated neuronal function involved in synaptic memory and reward behavior, e.g. ion channels (Kcnf1, Kcnn3, Scn5a), excitatory receptors (Grin2a, Gria3, Grip1), neurotransmitters (Pomc), and synapses (Snap29). This study not only reveals the polygenic architecture of AP, but also emphasizes the importance of regulatory elements, consistent with other complex traits.

  19. High Resolution Genomic Scans Reveal Genetic Architecture Controlling Alcohol Preference in Bidirectionally Selected Rat Model.

    Science.gov (United States)

    Lo, Chiao-Ling; Lossie, Amy C; Liang, Tiebing; Liu, Yunlong; Xuei, Xiaoling; Lumeng, Lawrence; Zhou, Feng C; Muir, William M

    2016-08-01

    Investigations on the influence of nature vs. nurture on Alcoholism (Alcohol Use Disorder) in humans have yet to provide a clear view of potential genomic etiologies. To address this issue, we sequenced a replicated animal model system bidirectionally selected for alcohol preference (AP). This model is uniquely suited to map genetic effects with high reproducibility and resolution. The origin of the rat lines (an 8-way cross) resulted in small haplotype blocks (HB) with a corresponding high level of resolution. We sequenced DNAs from 40 samples (10 per line of each replicate) to determine allele frequencies and HB. We achieved ~46X coverage per line and replicate. Regions of excessive differentiation in the genomic architecture between lines, across replicates, termed signatures of selection (SS), were classified according to gene and region. We identified SS in 930 genes associated with AP. The majority (50%) of the SS were confined to single-gene regions, the greatest numbers of which were in promoters (284) and intronic regions (169), with the least in exons (4), suggesting that differences in AP were primarily due to alterations in regulatory regions. We confirmed previously identified genes and found many new genes associated with AP. Of those newly identified genes, several demonstrated neuronal function involved in synaptic memory and reward behavior, e.g. ion channels (Kcnf1, Kcnn3, Scn5a), excitatory receptors (Grin2a, Gria3, Grip1), neurotransmitters (Pomc), and synapses (Snap29). This study not only reveals the polygenic architecture of AP, but also emphasizes the importance of regulatory elements, consistent with other complex traits.
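    The signatures-of-selection scan rests on excessive allele-frequency differentiation between the high- and low-preference lines. A toy version of that filter follows; the cutoff value and function name are ours, and the study's actual statistic also accounts for the replicate lines:

```python
def candidate_signatures(freq_high, freq_low, cutoff=0.9):
    """Indices of loci whose allele-frequency difference between the
    high- and low-preference lines meets the cutoff."""
    return [i for i, (p, q) in enumerate(zip(freq_high, freq_low))
            if abs(p - q) >= cutoff]

# Loci 0 and 2 are nearly fixed for opposite alleles in the two lines:
print(candidate_signatures([0.95, 0.50, 1.00], [0.02, 0.50, 0.05]))  # [0, 2]
```

    Requiring the same differentiation in both replicates is what separates selection from drift in the actual analysis.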

  20. Model-Based Systems Engineering With the Architecture Analysis and Design Language (AADL) Applied to NASA Mission Operations

    Science.gov (United States)

    Munoz Fernandez, Michela Miche

    2014-01-01

    The potential of Model-Based Systems Engineering (MBSE) using the Architecture Analysis and Design Language (AADL) applied to space systems will be described. AADL modeling is applicable to real-time embedded systems, the types of systems NASA builds. A case study with the Juno mission to Jupiter showcases how this work would enable future missions to benefit from using these models throughout their life cycle, from design to flight operations.

  1. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems using the Partial Element Equivalent Circuit (PEEC) method. It begins with an introduction to circuit analysis techniques, laws, and frequency- and time-domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  2. [Intestinal lengthening techniques: an experimental model in dogs].

    Science.gov (United States)

    Garibay González, Francisco; Díaz Martínez, Daniel Alberto; Valencia Flores, Alejandro; González Hernández, Miguel Angel

    2005-01-01

    To compare two intestinal lengthening procedures in an experimental dog model. Intestinal lengthening is one of the methods of gastrointestinal reconstruction used for the treatment of short bowel syndrome. The modification of Bianchi's technique is an alternative that reduces the number of anastomoses to a single one, thus lowering the risk of leaks and strictures. To our knowledge, no clinical or experimental report has compared both techniques, which motivated the present study. Twelve creole dogs were operated on using the Bianchi technique for intestinal lengthening (group A), and another 12 creole dogs of the same breed and weight were operated on using the modified technique (group B). Both groups were compared with respect to operating time, technical difficulty, cost, intestinal lengthening and anastomosis diameter. There was no statistically significant difference in anastomosis diameter (A = 9.0 mm vs. B = 8.5 mm, p = 0.3846). Operating time (142 min vs. 63 min), cost and technical difficulty were lower in group B. The anastomoses of group B and the intestinal segments had good blood supply and were patent along their full length. The Bianchi technique and the modified technique are two reliable alternatives for the treatment of short bowel syndrome; the modified technique improved operating time, cost and technical issues.

  3. OpenSimRoot: widening the scope and application of root architectural models.

    Science.gov (United States)

    Postma, Johannes A; Kuppe, Christian; Owen, Markus R; Mellor, Nathan; Griffiths, Marcus; Bennett, Malcolm J; Lynch, Jonathan P; Watt, Michelle

    2017-08-01

    OpenSimRoot is an open-source, functional-structural plant model and mathematical description of root growth and function. We describe OpenSimRoot and its functionality to broaden the benefits of root modeling to the plant science community. OpenSimRoot is an extended version of SimRoot, established to simulate root system architecture, nutrient acquisition and plant growth. OpenSimRoot has a plugin, modular infrastructure, coupling single plant and crop stands to soil nutrient and water transport models. It estimates the value of root traits for water and nutrient acquisition in environments and plant species. The flexible OpenSimRoot design allows upscaling from root anatomy to plant community to estimate the following: resource costs of developmental and anatomical traits; trait synergisms; and (interspecies) root competition. OpenSimRoot can model three-dimensional images from magnetic resonance imaging (MRI) and X-ray computed tomography (CT) of roots in soil. New modules include: soil water-dependent water uptake and xylem flow; tiller formation; evapotranspiration; simultaneous simulation of mobile solutes; mesh refinement; and root growth plasticity. OpenSimRoot integrates plant phenotypic data with environmental metadata to support experimental designs and to gain a mechanistic understanding at system scales. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  4. ANALYSIS OF TERRESTRIAL PLANET FORMATION BY THE GRAND TACK MODEL: SYSTEM ARCHITECTURE AND TACK LOCATION

    Energy Technology Data Exchange (ETDEWEB)

    Brasser, R.; Ida, S. [Earth-Life Science Institute, Tokyo Institute of Technology, Meguro-ku, Tokyo 152-8550 (Japan); Matsumura, S. [School of Science and Engineering, Division of Physics, Fulton Building, University of Dundee, Dundee DD1 4HN (United Kingdom); Mojzsis, S. J. [Collaborative for Research in Origins (CRiO), Department of Geological Sciences, University of Colorado, UCB 399, 2200 Colorado Avenue, Boulder, Colorado 80309-0399 (United States); Werner, S. C. [The Centre for Earth Evolution and Dynamics, University of Oslo, Sem Saelandsvei 24, NO-0371 Oslo (Norway)

    2016-04-20

    The Grand Tack model of terrestrial planet formation has emerged in recent years as the premier scenario used to account for several observed features of the inner solar system. It relies on the early migration of the giant planets to gravitationally sculpt and mix the planetesimal disk down to ∼1 au, after which the terrestrial planets accrete from material remaining in a narrow circumsolar annulus. Here, we investigate how the model fares under a range of initial conditions and migration course-change (“tack”) locations. We run a large number of N-body simulations with tack locations of 1.5 and 2 au and test initial conditions using equal-mass planetary embryos and a semi-analytical approach to oligarchic growth. We make use of a recent model of the protosolar disk that takes into account viscous heating, includes the full effect of type 1 migration, and employs a realistic mass–radius relation for the growing terrestrial planets. Our results show that the canonical tack location of Jupiter at 1.5 au is inconsistent with the most massive planet residing at 1 au at greater than 95% confidence. This favors a tack farther out at 2 au for the disk model and parameters employed. Of the different initial conditions, we find that the oligarchic case is capable of statistically reproducing the orbital architecture and mass distribution of the terrestrial planets, while the equal-mass embryo case is not.

  5. Macro-economic factors influencing the architectural business model shift in the pharmaceutical industry.

    Science.gov (United States)

    Dierks, Raphaela Marie Louisa; Bruyère, Olivier; Reginster, Jean-Yves; Richy, Florent-Frederic

    2016-10-01

    Technological innovations, new regulations, increasing costs of drug production and new demands are only a few key drivers of a projected transformation of the pharmaceutical industry. The purpose of this review is to understand the macro-economic factors responsible for the business model revolution needed to possess a competitive advantage over market players. Areas covered: Existing literature on macro-economic factors changing the pharmaceutical landscape has been reviewed to present a clear image of the current market environment. Expert commentary: The literature shows that pharmaceutical companies are facing an architectural alteration; however, evidence on the rationale driving the transformation is still outstanding. Mergers & Acquisitions (M&A) deals and collaborations are headlining the papers. Q1 2016 did show a major slowdown in M&A deals by volume since 2013 (with deal cancellations of Pfizer and Allergan, or the downfall of Valeant), but pharmaceutical analysts remain confident that this shortfall was a consequence of equity market volatility. It seems likely that the shift to an M&A model will become apparent during the remainder of 2016, with deal announcements of Abbott Laboratories, AbbVie and Sanofi worth USD 45 billion showing the appetite of big pharma companies to shift from the fully vertically integrated business model to more horizontal business models.

  6. Accuracy assessment of modeling architectural structures and details using terrestrial laser scanning

    Directory of Open Access Journals (Sweden)

    M. Kedzierski

    2015-08-01

    Full Text Available One of the most important aspects when performing architectural documentation of cultural heritage structures is the accuracy of both the data and the products generated from these data: documentation in the form of 3D models or vector drawings. The paper describes an assessment of the accuracy of modelling data acquired using a terrestrial phase scanner in relation to the density of a point cloud representing the surface of different types of construction materials typical for cultural heritage structures. This analysis includes the impact of the scanning geometry: the incidence angle of the laser beam and the scanning distance. For the purposes of this research, a test field consisting of samples of different types of construction materials (brick, wood, plastic, plaster, a ceramic tile, sheet metal) was built. The study involved conducting measurements at different angles and from a range of distances for chosen scanning densities. Data, acquired in the form of point clouds, were then filtered and modelled. An accuracy assessment of the 3D model was conducted by fitting it to the point cloud. The reflection intensity of each type of material was also analyzed, with the aim of determining which construction materials have the highest and which the lowest reflectance coefficients, and how this variable changes for different scanning parameters. Additionally, measurements were taken of a fragment of a building in order to compare the results obtained in laboratory conditions with those taken in field conditions.
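The scanning-geometry effect studied above can be illustrated with a first-order geometric model: the laser footprint on the surface is roughly the range times the beam divergence, stretched by 1/cos(incidence angle), so oblique hits smear the return over a larger area and degrade point accuracy. This is a textbook sketch, not the authors' accuracy model, and the divergence value is illustrative.

```python
import math

def footprint_major_axis(distance_m, divergence_rad, incidence_deg):
    """Approximate major axis of the laser spot on a flat surface:
    the nominal spot (range * divergence) elongated by 1/cos(incidence).
    A first-order geometric sketch only."""
    spot = distance_m * divergence_rad
    return spot / math.cos(math.radians(incidence_deg))

# Same 10 m range, growing incidence angle -> rapidly elongating footprint
for angle in (0, 30, 60, 75):
    mm = footprint_major_axis(10.0, 0.0003, angle) * 1000
    print(angle, "deg ->", round(mm, 2), "mm")
```

The sharp growth beyond ~60 degrees is one reason oblique scans of facades yield noisier surface models than near-normal scans at the same range.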

  7. Exploring a model-driven architecture (MDA) approach to health care information systems development.

    Science.gov (United States)

    Raghupathi, Wullianallur; Umar, Amjad

    2008-05-01

    To explore the potential of the model-driven architecture (MDA) in health care information systems development. An MDA is conceptualized and developed for a health clinic system to track patient information. A prototype of the MDA is implemented using an advanced MDA tool. The UML provides the underlying modeling support in the form of the class diagram. The PIM-to-PSM transformation rules are applied to generate the prototype application from the model. The result of the research is a complete MDA methodology for developing health care information systems. Additional insights gained include development of transformation rules and documentation of the challenges in the application of MDA to health care. Design guidelines for future MDA applications are described. The model has the potential for generalizability. The overall approach supports limited interoperability and portability. The research demonstrates the applicability of the MDA approach to health care information systems development. When properly implemented, it has the potential to overcome the challenges of platform (vendor) dependency, lack of open standards, interoperability, portability, scalability, and the high cost of implementation.
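The PIM-to-PSM step can be illustrated with a toy transformation: a platform-independent entity description is mechanically turned into platform-specific code. Real MDA tools operate on UML/XMI models and richer rule languages; the dict-based "PIM", the entity name `Patient`, and the attribute names here are all hypothetical.

```python
# Toy PIM: entity name -> list of (attribute, type) pairs.
# A real PIM would be a UML class diagram; names are illustrative.
PIM = {"Patient": [("name", "str"), ("dob", "str"), ("mrn", "int")]}

def pim_to_python(pim):
    """Apply a trivial PIM-to-PSM rule: each entity becomes a
    Python dataclass (the 'platform' here is plain Python)."""
    lines = ["from dataclasses import dataclass", ""]
    for entity, attrs in pim.items():
        lines.append("@dataclass")
        lines.append(f"class {entity}:")
        for attr, typ in attrs:
            lines.append(f"    {attr}: {typ}")
    return "\n".join(lines)

code = pim_to_python(PIM)
print(code)
namespace = {}
exec(code, namespace)                         # "generate" the application
p = namespace["Patient"]("Ada", "1815-12-10", 42)
print(p.mrn)                                  # 42
```

Swapping the transformation function (e.g. emitting SQL DDL or Java instead) while keeping the same PIM is exactly the platform-independence the abstract argues for.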

  8. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
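The meta-model idea above, combining a whole-genome predictor with a GWAMA risk score, can be sketched as choosing a convex combination weight on held-out data. This is a deliberately tiny illustration of the ensembling principle, not the authors' estimator, and the toy predictions are invented.

```python
# Pick the convex weight w in [0, 1] that minimises squared error of
# w * pred_a + (1 - w) * pred_b against held-out phenotypes, by grid search.
def best_convex_weight(pred_a, pred_b, truth, grid=101):
    def sse(w):
        return sum((w * a + (1 - w) * b - t) ** 2
                   for a, b, t in zip(pred_a, pred_b, truth))
    return min((i / (grid - 1) for i in range(grid)), key=sse)

truth  = [1.0, 2.0, 3.0, 4.0]
pred_a = [1.1, 2.1, 3.1, 4.1]   # e.g. whole-genome predictor, biased up
pred_b = [0.9, 1.9, 2.9, 3.9]   # e.g. GWAMA risk score, biased down
w = best_convex_weight(pred_a, pred_b, truth)
print(w)   # 0.5 -> averaging cancels the opposite biases
```

When the component predictors make partly independent errors, the combined predictor achieves lower error than either alone, which is the effect the paper reports for its meta-model.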

  9. Architectural Theory and Graphical Criteria for Modelling Certain Late Gothic Projects by Hernan Ruiz "the Elder"

    Directory of Open Access Journals (Sweden)

    Antonio Luis Ampliato Briones

    2014-10-01

    This paper primarily reflects on the need to create graphical codes for producing images intended to communicate architecture. Each step of the drawing needs to be a deliberate process in which the proposed code highlights the relationship between architectural theory and graphic action. Our aim is not to draw the result of the architectural process but the design structure of the actual process; to draw as we design; to draw as we build. This analysis of the work of the Late Gothic architect Hernan Ruiz the Elder, from Cordoba, addresses two aspects: the historical and architectural investigation, and the graphical project for communication purposes.

  10. Internet of Things: a possible change in the distributed modeling and simulation architecture paradigm

    Science.gov (United States)

    Riecken, Mark; Lessmann, Kurt; Schillero, David

    2016-05-01

    The Data Distribution Service (DDS) was started by the Object Management Group (OMG) in 2004. Currently, DDS is one of the contenders to support the Internet of Things (IoT) and the Industrial IoT (IIoT). DDS has also been used as a distributed simulation architecture. Given the anticipated proliferation of IoT and IIoT devices, along with the explosive growth of sensor technology, can we expect this to have an impact on the broader community of distributed simulation? If it does, what is the impact, and which distributed simulation domains will be most affected? DDS shares many of the goals and characteristics of distributed simulation, such as the need to support scale and an emphasis on Quality of Service (QoS) that can be tailored to meet the end user's needs. In addition, DDS has some built-in features, such as security, that are not present in traditional distributed simulation protocols. If the IoT and IIoT realize their potential application, we predict a large base of technology to be built around this distributed data paradigm, much of which could be directly beneficial to the distributed M&S community. In this paper we compare some of the perceived gaps and shortfalls of current distributed M&S technology to the emerging capabilities of DDS built around the IoT. Although some trial work has been conducted in this area, we propose a more focused examination of the potential of these new technologies and their applicability to current and future problems in distributed M&S. The Internet of Things (IoT) and its data communications mechanisms such as the Data Distribution Service (DDS) share properties in common with distributed modeling and simulation (M&S) and its protocols such as the High Level Architecture (HLA) and the Test and Training Enabling Architecture (TENA).
This paper proposes a framework based on the sensor use case for how the two communities of practice (CoP) can benefit from one another and achieve greater capability in practical distributed

  11. The Three-Dimensional Architecture of the Internal Capsule of the Human Brain Demonstrated by Fiber Dissection Technique

    Directory of Open Access Journals (Sweden)

    Cristina Goga

    2015-01-01

    The fiber dissection technique involves peeling away white matter fiber tracts of the brain to display its three-dimensional anatomic arrangement. The intricate three-dimensional configuration and structure of the internal capsule (IC) are not well defined. Using the fiber dissection technique, our aim was to expose and study the IC to achieve a clearer conception of its configuration and its relationships with neighboring white matter fibers and central nuclei. The lateral and medial aspects of the temporal lobes of twenty previously frozen, formalin-fixed human brains were dissected under the operating microscope using the fiber dissection technique.

  12. VERNACULAR ARCHITECTURE: AN INTRODUCTORY COURSE TO LEARN ARCHITECTURE IN INDIA

    Directory of Open Access Journals (Sweden)

    Miki Desai

    2010-07-01

    -climatic forces, human and material resources and techniques that satisfy the socio-cultural needs and desires of a given people. Research analysis, large-scale model making, simulation, actual-size mockups and the like engage the students in the make-believe world of architectural learning in this course.

  13. Modelling Technique for Demonstrating Gravity Collapse Structures in Jointed Rock.

    Science.gov (United States)

    Stimpson, B.

    1979-01-01

    Described is a base-friction modeling technique for studying the development of collapse structures in jointed rocks. A moving belt beneath weak material is designed to simulate gravity. A description is given of the model frame construction. (Author/SA)

  14. Real-time stereo matching architecture based on 2D MRF model: a memory-efficient systolic array

    Directory of Open Access Journals (Sweden)

    Park Sungchan

    2011-01-01

    There is a growing need in computer vision applications for stereopsis, requiring not only accurate distances but also fast and compact physical implementations. Global energy minimization techniques provide remarkably precise results, but they suffer from huge computational complexity. One of the main challenges is to parallelize the iterative computation, solving the memory access problem between the big external memory and the massive processors. Remarkable memory saving can be obtained with our memory reduction scheme, and our new architecture is a systolic array. If we expand it into multiple chips in a cascaded manner, we can cope with various ranges of image resolutions. We have realized it using FPGA technology. Our architecture requires 19 times less memory than the global minimization technique, which is a principal step toward real-time chip implementation of various iterative image processing algorithms with tiny and distributed memory resources, such as optical flow, image restoration, etc.
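For contrast with the global MRF formulation the paper accelerates, the simplest local alternative is window-based block matching: for each pixel, pick the disparity that minimises the sum of absolute differences (SAD) over a small window. A one-scanline sketch with toy data; the window size and disparity range are illustrative.

```python
# Local block matching by sum of absolute differences (SAD) -- the
# non-iterative baseline that MRF-based global energy minimization
# improves on in accuracy, at much higher computational cost.
def disparity_sad(left, right, x, half_win, max_disp):
    """Disparity at pixel x of the left scanline: the shift d that
    minimises SAD over a (2*half_win + 1)-pixel window."""
    def sad(d):
        return sum(abs(left[x + k] - right[x + k - d])
                   for k in range(-half_win, half_win + 1))
    return min(range(max_disp + 1), key=sad)

left  = [0, 0, 9, 9, 9, 0, 0, 0]
right = [9, 9, 9, 0, 0, 0, 0, 0]   # left pattern shifted 2 px leftwards
print(disparity_sad(left, right, x=3, half_win=1, max_disp=2))   # 2
```

Because every pixel is decided independently, this local scheme maps trivially onto parallel hardware; the difficulty the paper addresses is doing the same for the iterative, globally coupled MRF computation.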

  15. A pilot modeling technique for handling-qualities research

    Science.gov (United States)

    Hess, R. A.

    1980-01-01

    A brief survey of the more dominant analysis techniques used in closed-loop handling-qualities research is presented. These techniques are shown to rely on so-called classical and modern analytical models of the human pilot which have their foundation in the analysis and design principles of feedback control. The optimal control model of the human pilot is discussed in some detail and a novel approach to the a priori selection of pertinent model parameters is discussed. Frequency domain and tracking performance data from 10 pilot-in-the-loop simulation experiments involving 3 different tasks are used to demonstrate the parameter selection technique. Finally, the utility of this modeling approach in handling-qualities research is discussed.

  16. Summary on several key techniques in 3D geological modeling.

    Science.gov (United States)

    Mei, Gang

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
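Of the key techniques listed, spatial interpolation is the easiest to show concretely. Below is a minimal sketch of inverse-distance weighting (IDW), one standard interpolation method in this family: the surface elevation at an unsampled point is a distance-weighted mean of nearby samples. The borehole data are invented for illustration.

```python
# Inverse-distance weighting (IDW) interpolation of a geological
# interface elevation z at (x, y) from scattered samples.
def idw(samples, x, y, power=2.0):
    num = den = 0.0
    for sx, sy, z in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return z                    # exactly on a sample point
        w = d2 ** (-power / 2)          # weight = 1 / distance**power
        num += w * z
        den += w
    return num / den

# Hypothetical borehole picks of a stratum top: (x, y, elevation)
boreholes = [(0, 0, 10.0), (1, 0, 20.0), (0, 1, 20.0), (1, 1, 30.0)]
print(idw(boreholes, 0.5, 0.5))   # 20.0 by symmetry
print(idw(boreholes, 0, 0))       # 10.0, exact hit on a sample
```

Evaluating such an interpolant over a planar mesh produces the discrete geometric surfaces that the paper then intersects to build volumetric geological bodies.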

  17. 3D CT modeling of hepatic vessel architecture and volume calculation in living donated liver transplantation

    International Nuclear Information System (INIS)

    Frericks, Bernd B.; Caldarone, Franco C.; Savellano, Dagmar Hoegemann; Stamm, Georg; Kirchhoff, Timm D.; Shin, Hoen-Oh; Galanski, Michael; Nashan, Bjoern; Klempnauer, Juergen; Schenk, Andrea; Selle, Dirk; Spindler, Wolf; Peitgen, Heinz-Otto

    2004-01-01

    The aim of this study was to evaluate a software tool for non-invasive preoperative volumetric assessment of potential donors in living donated liver transplantation (LDLT). Biphasic helical CT was performed in 56 potential donors. Data sets were post-processed using a non-commercial software tool for segmentation, volumetric analysis and visualisation of liver segments. Semi-automatic definition of liver margins allowed the segmentation of parenchyma. Hepatic vessels were delineated using a region-growing algorithm with automatically determined thresholds. Volumes and shapes of liver segments were calculated automatically based on individual portal-venous branches. Results were visualised three-dimensionally and statistically compared with conventional volumetry and the intraoperative findings in 27 transplanted cases. Image processing was easy to perform within 23 min. Of the 56 potential donors, 27 were excluded from LDLT because of inappropriate liver parenchyma or vascular architecture. Two recipients were not transplanted due to poor clinical conditions. In the 27 transplanted cases, preoperatively visualised vessels were confirmed, and only one undetected accessory hepatic vein was revealed. Calculated graft volumes were 1110±180 ml for right lobes, 820 ml for the left lobe and 270±30 ml for segments II+III. The calculated volumes and intraoperatively measured graft volumes correlated significantly. No significant differences between the presented automatic volumetry and the conventional volumetry were observed. A novel image processing technique was evaluated which allows a semi-automatic volume calculation and 3D visualisation of the different liver segments. (orig.)
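The vessel delineation step above relies on region growing. A minimal 2D sketch of that class of algorithm is shown below: flood-fill from a seed voxel, accepting neighbours whose intensity exceeds a threshold. The real tool works on 3D CT volumes with automatically determined thresholds; the image and threshold here are invented.

```python
from collections import deque

def region_grow(image, seed, threshold):
    """Grow a region from seed, accepting 4-connected pixels with
    intensity >= threshold. 2D stand-in for the 3D CT algorithm."""
    rows, cols = len(image), len(image[0])
    region, queue = set(), deque([seed])
    while queue:
        r, c = queue.popleft()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if image[r][c] < threshold:
            continue
        region.add((r, c))
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

# Bright "vessel" running through a dark background
img = [[0, 0, 9, 0],
       [0, 0, 9, 0],
       [0, 9, 9, 0],
       [0, 9, 0, 0]]
vessel = region_grow(img, seed=(0, 2), threshold=5)
print(len(vessel))   # 5 connected bright pixels
```

Counting the accepted voxels and multiplying by voxel volume is, in essence, how segment volumes such as the graft volumes reported above are computed.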

  18. Tile Low Rank Cholesky Factorization for Climate/Weather Modeling Applications on Manycore Architectures

    KAUST Repository

    Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Keyes, David E.

    2017-01-01

    Covariance matrices are ubiquitous in computational science and engineering. In particular, large covariance matrices arise from multivariate spatial data sets, for instance, in climate/weather modeling applications to improve prediction using statistical methods and spatial data. One of the most time-consuming computational steps consists in calculating the Cholesky factorization of the symmetric, positive-definite covariance matrix problem. The structure of such covariance matrices is also often data-sparse, in other words, effectively of low rank, though formally dense. While not typically globally of low rank, covariance matrices in which correlation decays with distance are nearly always hierarchically of low rank. While symmetry and positive definiteness should be, and nearly always are, exploited for performance purposes, exploiting low-rank character in this context is very recent, and will be a key to solving these challenging problems at large-scale dimensions. The authors design a new and flexible tile low-rank Cholesky factorization and propose a high-performance implementation using an OpenMP task-based programming model on various leading-edge manycore architectures. Performance comparisons and memory footprint savings on up to 200K×200K covariance matrix sizes show a gain of more than an order of magnitude for both metrics, against state-of-the-art open-source and vendor-optimized numerical libraries, while preserving the numerical accuracy fidelity of the original model. This research represents an important milestone in enabling large-scale simulations for covariance-based scientific applications.
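The computational core being accelerated is the Cholesky factorization A = L·Lᵀ of a symmetric positive-definite matrix. A plain dense sketch for a tiny matrix is below; the paper's contribution is to organise this computation into tiles and compress the off-diagonal tiles in low-rank form, which this pure-Python toy does not attempt.

```python
import math

def cholesky(a):
    """Dense Cholesky factorization of a small SPD matrix a,
    returning lower-triangular L with a = L @ L.T."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

A = [[4.0, 2.0], [2.0, 5.0]]
L = cholesky(A)
print(L)   # [[2.0, 0.0], [1.0, 2.0]]
```

The O(n³) cost of this loop nest on a 200K×200K covariance matrix is what motivates exploiting the distance-decay structure: off-diagonal tiles are stored and multiplied in compressed low-rank form, cutting both flops and memory by the order of magnitude reported.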

  20. The adaptive nature of the human neurocognitive architecture: an alternative model.

    Science.gov (United States)

    La Cerra, P; Bingham, R

    1998-09-15

    The model of the human neurocognitive architecture proposed by evolutionary psychologists is based on the presumption that the demands of hunter-gatherer life generated a vast array of cognitive adaptations. Here we present an alternative model. We argue that the problems inherent in the biological markets of ancestral hominids and their mammalian predecessors would have required an adaptively flexible, on-line information-processing system, and would have driven the evolution of a functionally plastic neural substrate, the neocortex, rather than a confederation of evolutionarily prespecified social cognitive adaptations. In alignment with recent neuroscientific evidence, we suggest that human cognitive processes result from the activation of constructed cortical representational networks, which reflect probabilistic relationships between sensory inputs, behavioral responses, and adaptive outcomes. The developmental construction and experiential modification of these networks are mediated by subcortical circuitries that are responsive to the life history regulatory system. As a consequence, these networks are intrinsically adaptively constrained. The theoretical and research implications of this alternative evolutionary model are discussed.

  1. Communities-business models and system architectures: the blueprint of MP3.com, Napster and Gnutella revisited

    OpenAIRE

    Lechner, U.; Schmid, Beat

    2001-01-01

    Information and communication technology opens up an unprecedented space of design options for the creation of economic value. The business model "community" and the role "community organizer" are destined to become pivotal in the digital economy. We argue that any online business model needs to take communities and community organizing into account in the design of communication and the system architecture. Our discussion is guided by the media model (Schmid, 1997). We characterize the rel...

  2. 3D Modeling Techniques for Print and Digital Media

    Science.gov (United States)

    Stephens, Megan Ashley

    In developing my thesis, I sought to gain skills in using ZBrush to create 3D models, in 3D scanning, and in 3D printing. The models created compared the hearts of several vertebrates and were intended for students attending Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models of other vertebrates. I successfully learned ZBrush and 3D scanning, and successfully printed 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for this project, but may be of use for later projects. I was also able to 3D print using two different techniques.

  3. Blackboard architecture and qualitative model in a computer aided assistant designed to define computers for HEP computing

    International Nuclear Information System (INIS)

    Nodarse, F.F.; Ivanov, V.G.

    1991-01-01

    Using a BLACKBOARD architecture and a qualitative model, an expert system was developed to assist the user in defining computers for High Energy Physics computing. The COMEX system requires an IBM AT personal computer or compatible with more than 640 Kb RAM and a hard disk. 5 refs.; 9 figs

  4. Change in the Pathologic Supraspinatus: A Three-Dimensional Model of Fiber Bundle Architecture within Anterior and Posterior Regions

    Directory of Open Access Journals (Sweden)

    Soo Y. Kim

    2015-01-01

    Supraspinatus tendon tears are common and lead to changes in the muscle architecture. To date, these changes have not been investigated for the distinct regions and parts of the pathologic supraspinatus. The purpose of this study was to create a novel three-dimensional (3D) model of the muscle architecture throughout the supraspinatus and to compare the architecture between muscle regions and parts in relation to tear severity. Twelve cadaveric specimens with varying degrees of tendon tears were used. Three-dimensional coordinates of fiber bundles were collected in situ using serial dissection and digitization. Data were reconstructed and modeled in 3D using Maya. Fiber bundle length (FBL) and pennation angle (PA) were computed and analyzed. FBL was significantly shorter in specimens with large retracted tears compared to smaller tears, with the deeper fibers being significantly shorter than other parts in the anterior region. PA was significantly greater in specimens with large retracted tears, with the superficial fibers often demonstrating the largest PA. The posterior region was absent in two specimens with extensive tears. Architectural changes associated with tendon tears affect the regions and varying depths of supraspinatus differently. The results provide important insights on residual function of the pathologic muscle, and the 3D model includes detailed data that can be used in future modeling studies.

  5. A Knowledge Conversion Model Based on the Cognitive Load Theory for Architectural Design Education

    Science.gov (United States)

    Wu, Yun-Wu; Liao, Shin; Wen, Ming-Hui; Weng, Kuo-Hua

    2017-01-01

    The education of architectural design requires balanced curricular arrangements of theoretical knowledge and practical skills to really help students build their knowledge structures, particularly helping them solve the problems of cognitive load. The purpose of this study is to establish an architectural design knowledge…

  6. Semantic Web-Driven LMS Architecture towards a Holistic Learning Process Model Focused on Personalization

    Science.gov (United States)

    Kerkiri, Tania

    2010-01-01

    A comprehensive presentation is here made on the modular architecture of an e-learning platform with a distinctive emphasis on content personalization, combining advantages from semantic web technology, collaborative filtering and recommendation systems. Modules of this architecture handle information about both the domain-specific didactic…

  7. Quantifying branch architecture of tropical trees using terrestrial LiDAR and 3D modelling

    NARCIS (Netherlands)

    Lau, Alvaro; Bentley, Lisa Patrick; Martius, Christopher; Shenkin, Alexander; Bartholomeus, Harm; Raumonen, Pasi; Malhi, Yadvinder; Jackson, Tobias; Herold, Martin

    2018-01-01

    Tree architecture is the three-dimensional arrangement of the above-ground parts of a tree. Ecologists hypothesize that the topology of tree branches represents optimized adaptations to a tree's environment. Thus, an accurate description of tree architecture leads to a better understanding of how form is

  8. Racionalizam u arhitekturi: nekoliko modela instrumentalizacije / Rationalism in Architecture: Several Models of Instrumentalization

    OpenAIRE

    Vladimir Stevanović

    2014-01-01

    Rationalism in architecture is a European concept which, from Enlightment to postmodern era, advocates values of order, clarity and logic, represented through primary geometrism, functionalism, profitability and absence of ornament. The text connects and analyzes (1) formal-stilistic manifestations of rationalism in architecture: French neoclassicism; Soviet constructivism, German new objectivity, Italian rationalism; postmodern Italian neorationalism in the context of (2) dominant paradigms ...

  9. A New ABCD Technique to Analyze Business Models & Concepts

    OpenAIRE

    Aithal P. S.; Shailasri V. T.; Suresh Kumar P. M.

    2015-01-01

    Various techniques are used to analyze individual characteristics or organizational effectiveness, such as SWOT analysis, SWOC analysis, PEST analysis, etc. These techniques provide an easy and systematic way of identifying the various issues affecting a system and provide an opportunity for further development. While they offer a broad-based assessment of individual institutions and systems, they suffer limitations when applied in a business context. The success of any business model depends on ...

  10. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
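
The surrogate-data idea can be made concrete with a toy audit model. This sketch is not the paper's method or the BPI-2400/ASHRAE-14 procedures: the two-parameter heating model, the degree-day series and the parameter grid are all invented to show how a "perfect fit" (figure of merit 3) can coexist with wrong inputs (figure 2) and a badly wrong savings prediction (figure 1):

```python
import itertools

def simulate(ua, eff, hdd_months):
    """Toy stand-in for the audit simulation program: monthly heating
    energy from an overall loss coefficient UA and furnace efficiency."""
    return [ua * hdd * 24 / eff for hdd in hdd_months]

hdd = [600, 500, 300, 100, 50, 20, 10, 20, 80, 250, 450, 580]
true_ua, true_eff = 250.0, 0.80
bills = simulate(true_ua, true_eff, hdd)      # surrogate "utility data"

# Calibrate by grid search on the surrogate bills.  UA and efficiency are
# confounded (only the ratio ua/eff is identified by the bills), so a
# wrong pair can fit perfectly.
grid = itertools.product([200.0, 250.0, 300.0], [0.64, 0.75, 0.88])
sse = lambda p: sum((m - b) ** 2 for m, b in zip(simulate(*p, hdd), bills))
best = min(grid, key=sse)                     # lands on (200.0, 0.64)

# Retrofit: replace the furnace with a 90%-efficient one, then compare the
# calibrated model's predicted savings against the known "true" savings.
def savings(ua, eff):
    return sum(simulate(ua, eff, hdd)) - sum(simulate(ua, 0.90, hdd))

true_savings = savings(true_ua, true_eff)
pred_savings = savings(*best)
print(best, round(pred_savings / true_savings, 2))   # perfect fit, ~2.6x savings error
```

The calibrated pair reproduces every bill exactly, yet overpredicts the retrofit savings by a factor of 2.6 because the fit cannot separate envelope losses from furnace efficiency.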

  11. Modeling Techniques for a Computational Efficient Dynamic Turbofan Engine Model

    Directory of Open Access Journals (Sweden)

    Rory A. Roberts

    2014-01-01

    Full Text Available A transient two-stream engine model has been developed. Individual component models developed exclusively in MATLAB/Simulink, including the fan, high pressure compressor, combustor, high pressure turbine, low pressure turbine, plenum volumes, and exit nozzle, have been combined to investigate the behavior of a two-stream turbofan engine. Special attention has been paid to developing transient capabilities throughout the model, expanding the physics represented, eliminating algebraic constraints, and reducing simulation time by enabling the use of advanced numerical solvers. Reducing computation time is paramount for conducting future aircraft system-level design trade studies and optimization. The new engine model is simulated for a fuel perturbation and a specified mission while tracking critical parameters. These results, as well as the simulation times, are presented. The new approach significantly reduces the simulation time.

  12. Microprobing the Molecular Spatial Distribution and Structural Architecture of Feed-type Sorghum Seed Tissue (Sorghum Bicolor L.) using the Synchrotron Radiation Infrared Microspectroscopy Technique

    International Nuclear Information System (INIS)

    Yu, P.

    2011-01-01

    Sorghum seed (Sorghum bicolor L.) has unique degradation and fermentation behaviours compared with other cereal grains such as wheat, barley and corn. This may be related to its cell and cell-wall architecture. The advanced synchrotron radiation infrared microspectroscopy (SR-IMS) technique enables the study of cell or living-cell biochemistry within cellular dimensions. The objective of this study was to use the SR-IMS imaging technique to microprobe the molecular spatial distribution and cell architecture of sorghum seed tissue comprehensively. High-density mapping was carried out using SR-IMS on beamline U2B at the National Synchrotron Light Source (Brookhaven National Laboratory, NY, USA). Molecular images were systematically recorded from the outside to the inside of the seed tissue for various chemical functional groups and their ratios [peaks at ∼1725 (carbonyl C=O ester), 1650 (amide I), 1657 (protein secondary structure α-helix), 1628 (protein secondary structure β-sheet), 1550 (amide II), 1515 (aromatic compounds of lignin), 1428, 1371, 1245 (cellulosic compounds in plant seed tissue), 1025 (non-structural CHO, starch granules), 1246 (cellulosic material), 1160 (CHO), 1150 (CHO), 1080 (CHO), 930 (CHO), 860 (CHO), 3350 (OH and NH stretching), 2960 (CH3 anti-symmetric), 2929 (CH2 anti-symmetric), 2877 (CH3 symmetric) and 2848 cm-1 (CH2 asymmetric)]. The relative protein secondary structure α-helix to β-sheet ratio image, protein amide I to starch granule ratio image, and anti-symmetric CH3 to CH2 ratio image were also investigated within the intact sorghum seed tissue. The results showed unique cell architecture, and images of the molecular spatial distribution and intensity in the sorghum seed tissue were generated using SR-IMS. This imaging technique and methodology has high potential and could be used by scientists to develop specific cereal grain varieties with targeted food and feed

  13. Numerical model updating technique for structures using firefly algorithm

    Science.gov (United States)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique for updating numerical models of structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is adjusting the numerical model so that it closely matches experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as an optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the numerical model with the experimental results. The variables for the updating can be either material or geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show a close match between the experimental and numerical models.
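
The cantilever case in this record can be sketched in a few lines: treat Young's modulus as the updating variable, the tip deflection as the response parameter, and let a minimal firefly algorithm drive the mismatch to zero. Everything below (beam dimensions, the "measured" deflection, the firefly coefficients) is an invented illustration of the technique, not the paper's setup:

```python
import math, random

random.seed(1)

# "Experimental" tip deflection of a cantilever (point load P at the free
# end, length L, second moment I) whose true Young's modulus is 210 GPa.
P, L, I = 1e3, 2.0, 8e-6
def tip_deflection(E):
    return P * L**3 / (3 * E * I)
measured = tip_deflection(210e9)

def objective(E):                     # mismatch to be minimised
    return abs(tip_deflection(E) - measured)

# Minimal firefly algorithm over the single updating variable E.
beta0, gamma, alpha = 1.0, 1e-22, 2e9   # gamma scaled to E's magnitude
flies = [random.uniform(150e9, 300e9) for _ in range(15)]
for _ in range(60):
    for i in range(len(flies)):
        for j in range(len(flies)):
            if objective(flies[j]) < objective(flies[i]):  # j is brighter
                r2 = (flies[i] - flies[j]) ** 2
                beta = beta0 * math.exp(-gamma * r2)       # attractiveness
                flies[i] += beta * (flies[j] - flies[i]) \
                            + alpha * (random.random() - 0.5)
    alpha *= 0.95                                          # cool the random walk
best = min(flies, key=objective)
print(round(best / 1e9, 1))           # updated E in GPa (true value: 210)
```

In a real updating problem the closed-form deflection would be replaced by a call to the finite element model, and several material or geometric parameters would be updated at once.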

  14. Plasticity models of material variability based on uncertainty quantification techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-11-01

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments, using traditional plasticity models of the mean response together with recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and show how these UQ techniques can be used in model selection and in assessing the quality of calibrated physical parameters.

  15. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: the first Ω(lg n) query time lower bound for linear space static data structures … a bound of tu·tq = Ω(lg^(d-1) n). For ball range searching, we get a lower bound of tu·tq = Ω(n^(1-1/d)). The highest previous lower bound proved in the group model does not exceed Ω((lg n / lg lg n)^2) on the maximum of tu and tq. Finally, we present a new technique for proving lower bounds…

  16. The ATLAS Analysis Architecture

    International Nuclear Information System (INIS)

    Cranmer, K.S.

    2008-01-01

    We present an overview of the ATLAS analysis architecture including the relevant aspects of the computing model and the major architectural aspects of the Athena framework. Emphasis will be given to the interplay between the analysis use cases and the technical aspects of the architecture including the design of the event data model, transient-persistent separation, data reduction strategies, analysis tools, and ROOT interoperability

  17. Model-driven methodology for rapid deployment of smart spaces based on resource-oriented architectures.

    Science.gov (United States)

    Corredor, Iván; Bernardos, Ana M; Iglesias, Josué; Casar, José R

    2012-01-01

    Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as Internet of Things (IoT) and Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have solid understanding about specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym.

  18. Model-Driven Methodology for Rapid Deployment of Smart Spaces Based on Resource-Oriented Architectures

    Directory of Open Access Journals (Sweden)

    José R. Casar

    2012-07-01

    Full Text Available Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as Internet of Things (IoT) and Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have solid understanding about specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym.

  19. Classifier models and architectures for EEG-based neonatal seizure detection

    International Nuclear Information System (INIS)

    Greene, B R; Marnane, W P; Lightbody, G; Reilly, R B; Boylan, G B

    2008-01-01

    Neonatal seizures are the most common neurological emergency in the neonatal period and are associated with a poor long-term outcome. Early detection and treatment may improve prognosis. This paper aims to develop an optimal set of parameters and a comprehensive scheme for patient-independent multi-channel EEG-based neonatal seizure detection. We employed a dataset containing 411 neonatal seizures. The dataset consists of multi-channel EEG recordings with a mean duration of 14.8 h from 17 neonatal patients. Early-integration and late-integration classifier architectures were considered for the combination of information across EEG channels. Three classifier models based on linear discriminants, quadratic discriminants and regularized discriminants were employed. Furthermore, the effect of electrode montage was considered. The best performing seizure detection system was found to be an early integration configuration employing a regularized discriminant classifier model. A referential EEG montage was found to outperform the more standard bipolar electrode montage for automated neonatal seizure detection. A cross-fold validation estimate of the classifier performance for the best performing system yielded 81.03% of seizures correctly detected with a false detection rate of 3.82%. With post-processing, the false detection rate was reduced to 1.30% with 59.49% of seizures correctly detected. These results represent a comprehensive illustration that robust reliable patient-independent neonatal seizure detection is possible using multi-channel EEG
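
The early- vs late-integration distinction in this record is purely architectural: fuse channel information before classification, or classify per channel and fuse the decisions. A toy sketch (the features, weights and biases below are invented stand-ins for trained discriminant parameters, not the paper's classifiers):

```python
def early_integration(channel_features, weights, bias):
    """Fuse information before classification: concatenate all channels'
    features into one vector and apply a single linear classifier."""
    x = [f for ch in channel_features for f in ch]
    return sum(w * xi for w, xi in zip(weights, x)) + bias > 0

def late_integration(channel_features, weights, bias):
    """Classify each channel independently, then fuse the per-channel
    decisions (here: flag the epoch if any channel fires)."""
    return any(sum(w * xi for w, xi in zip(weights, ch)) + bias > 0
               for ch in channel_features)

# Two toy features per channel (e.g. line length, band power), 4 channels;
# seizure-like activity is visible only on channel 2.
epoch = [[0.1, 0.2], [0.9, 1.4], [0.0, 0.1], [0.2, 0.1]]
print(early_integration(epoch, [0.5] * 8, bias=-2.0))    # False: diluted across channels
print(late_integration(epoch, [0.5, 0.5], bias=-1.0))    # True: channel 2 fires
```

The example shows why the choice matters for focal events; which architecture performs better overall is an empirical question, answered in the paper in favour of early integration.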

  20. Microcomputed tomography and microfinite element modeling for evaluating polymer scaffolds architecture and their mechanical properties.

    Science.gov (United States)

    Alberich-Bayarri, Angel; Moratal, David; Ivirico, Jorge L Escobar; Rodríguez Hernández, José C; Vallés-Lluch, Ana; Martí-Bonmatí, Luis; Estellés, Jorge Más; Mano, Joao F; Pradas, Manuel Monleón; Ribelles, José L Gómez; Salmerón-Sánchez, Manuel

    2009-10-01

    Detailed knowledge of the porous architecture of synthetic scaffolds for tissue engineering, their mechanical properties, and their interrelationship was obtained in a nondestructive manner. Image analysis of microcomputed tomography (microCT) sections of different scaffolds was done. The three-dimensional (3D) reconstruction of the scaffold allows one to quantify scaffold porosity, including pore size, pore distribution, and struts' thickness. The porous morphology and porosity as calculated from microCT by image analysis agrees with that obtained experimentally by scanning electron microscopy and physically measured porosity, respectively. Furthermore, the mechanical properties of the scaffold were evaluated by making use of finite element modeling (FEM) in which the compression stress-strain test is simulated on the 3D structure reconstructed from the microCT sections. Elastic modulus as calculated from FEM is in agreement with those obtained from the stress-strain experimental test. The method was applied on qualitatively different porous structures (interconnected channels and spheres) with different chemical compositions (that lead to different elastic modulus of the base material) suitable for tissue regeneration. The elastic properties of the constructs are explained on the basis of the FEM model that supports the main mechanical conclusion of the experimental results: the elastic modulus does not depend on the geometric characteristics of the pore (pore size, interconnection throat size) but only on the total porosity of the scaffold. (c) 2009 Wiley Periodicals, Inc.
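
The porosity measurement described here reduces to counting voxels in a binary stack. A minimal sketch with a hypothetical strut lattice in place of a real scan, followed by a classical open-cell foam scaling (an assumed stand-in for the paper's voxel-based FEM, used only to echo its conclusion that modulus tracks total porosity):

```python
# Binary voxel stack of the kind a microCT reconstruction yields
# (True = material, False = pore); here a synthetic strut lattice.
nx = ny = nz = 20
solid = [[[x % 5 == 0 or y % 5 == 0 or z % 5 == 0
           for z in range(nz)] for y in range(ny)] for x in range(nx)]

material = sum(v for plane in solid for row in plane for v in row)
porosity = 1 - material / (nx * ny * nz)
print(round(porosity, 3))                      # 0.512

# Gibson-Ashby open-cell scaling: modulus depends on relative density
# (i.e. total porosity) alone -- assumed base modulus of 3 GPa.
E_solid = 3.0e9
E_scaffold = E_solid * (1 - porosity) ** 2
print(round(E_scaffold / 1e6, 1))              # 714.4 (MPa)
```

A real pipeline would additionally extract pore size and strut thickness from the same stack and mesh the solid phase for the FEM compression test.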

  1. Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling

    International Nuclear Information System (INIS)

    Pera, H.; Kleijn, J. M.; Leermakers, F. A. M.

    2014-01-01

    To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses the self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich's mean and Gaussian bending moduli kc and k̄ and the preferred monolayer curvature J0m, and also delivers structural membrane properties like the core thickness, and head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and those parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic, phosphatidylcholine (PC), and -ethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of kc and the area compression modulus kA are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for k̄ and J0m can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both k̄ and J0m change sign with relevant parameter changes. Although typically k̄ < 0, J0m ≫ 0, especially at low ionic strengths. We anticipate that these changes lead to unstable membranes as these become vulnerable to pore formation or disintegration into lipid disks

  2. SNAVA-A real-time multi-FPGA multi-model spiking neural network simulation architecture.

    Science.gov (United States)

    Sripad, Athul; Sanchez, Giovanny; Zapata, Mireya; Pirrone, Vito; Dorta, Taho; Cambria, Salvatore; Marti, Albert; Krishnamourthy, Karthikeyan; Madrenas, Jordi

    2018-01-01

    The Spiking Neural Networks for Versatile Applications (SNAVA) simulation platform is a scalable and programmable parallel architecture that supports real-time, large-scale, multi-model SNN computation. This parallel architecture is implemented in modern Field-Programmable Gate Array (FPGA) devices to provide high-performance execution and the flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs, and by analyzing and customizing the instruction set according to the processing needs to achieve maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, to compile the neuron-synapse model and to monitor the SNN's activity. Our contribution intends to provide a tool that allows SNNs to be prototyped faster than on CPU/GPU architectures but significantly more cheaply than fabricating a customized neuromorphic chip. This could be potentially valuable to the computational neuroscience and neuromorphic engineering communities. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Modeling with data tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods

  4. Coupling process-based models and plant architectural models: A key issue for simulating crop production

    NARCIS (Netherlands)

    Reffye, de P.; Heuvelink, E.; Guo, Y.; Hu, B.G.; Zhang, B.G.

    2009-01-01

    Process-Based Models (PBMs) can successfully predict the impact of environmental factors (temperature, light, CO2, water and nutrients) on crop growth and yield. These models are used widely for yield prediction and optimization of water and nutrient supplies. Nevertheless, PBMs do not consider

  5. Architecture's models: Integral thermal evaluation; Modelos en arquitectura: evaluacion termica integral

    Energy Technology Data Exchange (ETDEWEB)

    Roset, Jaume [Universidad Politecnica de Cataluna, Barcelona (Spain); Marincic, Irene; Ochoa, J. Manuel [Universidad de Sonora, Hermosillo, Sonora (Mexico)

    2000-07-01

    During the first stages of a building design, considering energy-conscious design, inhabitants' needs and functionality, it is necessary to estimate and evaluate the building's behavior and to know whether it achieves the initial design objectives. Models are used to represent a description of objects and architectural concepts, as well as hypotheses of their temporal and spatial behavior. The quantity and type of information needed as input for a model to be operative has been, and still is, a controversial topic. The question is: how much more useful are models that need a great quantity of inputs than simpler ones? We consider as simple models those that habitually contain a certain quantity of empirical coefficients, which permit reducing the number of inputs, solving the calculus as an approximation. On the other hand, in the architectural domain, information is usually presented on different types of support (tables of numerical values, plans, physical models, ...). The information supplied in each type of support must necessarily be combined, in order to maximize the information contained in the global system. In this paper, we present examples of thermal studies dealing with different ways to evaluate models, which involve variables of the thermal behavior of buildings, the interaction between them and the environment, and its influence on indoor thermal comfort. As a general conclusion, we can say that a model should minimize the quantity of inputs required while still capturing the most relevant effects that represent the real behavior. The hypotheses and evaluation conditions of these effects must be understood and assumed by the user.

  6. Plants status monitor: Modelling techniques and inherent benefits

    International Nuclear Information System (INIS)

    Breeding, R.J.; Lainoff, S.M.; Rees, D.C.; Prather, W.A.; Fickiessen, K.O.E.

    1987-01-01

    The Plant Status Monitor (PSM) is designed to provide plant personnel with information on the operational status of the plant and its compliance with the plant technical specifications. The PSM software evaluates system models using a 'distributed processing' technique, in which detailed models of individual systems are processed rather than a single, plant-level model. In addition, the development of the system models for PSM provides inherent benefits to the plant by forcing detailed reviews of the technical specifications, system design and operating procedures, and plant documentation. (orig.)

  7. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize an RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method
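
The core idea, estimating sensitivities from the original code's own Monte Carlo runs instead of fitting a response surface, can be sketched generically. This is not the SPARC application; the "code" below is the classic Ishigami benchmark function and the binning estimator is a deliberately simple stand-in for the paper's method:

```python
import math, random

random.seed(42)

def code(x1, x2, x3):
    """Stand-in for the original computer code (the classic Ishigami
    benchmark, widely used to exercise sensitivity methods)."""
    return math.sin(x1) + 7 * math.sin(x2) ** 2 + 0.1 * x3 ** 4 * math.sin(x1)

# Monte Carlo runs of the code itself -- no response surface is fitted.
N, BINS = 20000, 25
runs = [[random.uniform(-math.pi, math.pi) for _ in range(3)] for _ in range(N)]
outs = [code(*r) for r in runs]
mean = sum(outs) / N
var = sum((y - mean) ** 2 for y in outs) / N

sens = []
for i in range(3):
    # First-order sensitivity: variance of the conditional mean E[Y|x_i],
    # estimated by binning the raw samples on x_i.
    bins = [[] for _ in range(BINS)]
    for r, y in zip(runs, outs):
        b = min(int((r[i] + math.pi) / (2 * math.pi) * BINS), BINS - 1)
        bins[b].append(y)
    cond = [sum(b) / len(b) for b in bins if b]
    sens.append(sum((m - mean) ** 2 for m in cond) / len(cond) / var)

print([round(s, 2) for s in sens])   # x2 dominates; x3 has no first-order effect
```

Because only stored (input, output) pairs are used, the same post-processing applies to any expensive deterministic code without additional runs.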

  8. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through changes to equipment, and the model can easily be applied to both manufacturing and service industries.
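
With productivity linear in the chosen techniques, the selection problem is a 0/1 program. A tiny sketch with five hypothetical techniques and a budget constraint, solved by exhaustive enumeration (the paper's fifty-four-technique instance would go to a MIP solver instead):

```python
from itertools import combinations

# Hypothetical candidate techniques: (name, productivity gain, cost).
techniques = [
    ("5S housekeeping",        0.04, 10),
    ("preventive maintenance", 0.07, 25),
    ("operator training",      0.06, 15),
    ("line balancing",         0.05, 20),
    ("equipment upgrade",      0.11, 40),
]
budget = 60

# Productivity assumed linear in the selected techniques, so this is a
# 0/1 knapsack: maximise total gain subject to the budget.
best_gain, best_set = 0.0, ()
for k in range(len(techniques) + 1):
    for combo in combinations(techniques, k):
        cost = sum(t[2] for t in combo)
        gain = sum(t[1] for t in combo)
        if cost <= budget and gain > best_gain:
            best_gain, best_set = gain, combo

print(round(best_gain, 2), [t[0] for t in best_set])
```

Enumeration is exact but exponential; the same objective and constraint translate directly into binary decision variables for a solver once the technique list grows.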

  9. Constructing canine carotid artery stenosis model by endovascular technique

    International Nuclear Information System (INIS)

    Cheng Guangsen; Liu Yizhi

    2005-01-01

    Objective: To establish a carotid artery stenosis model by an endovascular technique suitable for neuro-interventional therapy. Methods: Twelve dogs were anesthetized, and the tunica media and intima of segments of the carotid arteries were damaged with a home-made corneous guide wire. Twenty-four carotid artery stenosis models were thus created. DSA examination was performed on post-procedural weeks 2, 4, 8 and 10 to assess the changes in the stenotic carotid arteries. Results: Twenty-four carotid artery stenosis models were successfully created in twelve dogs. Conclusions: Canine carotid artery stenosis models can be created with this endovascular method, with pathologic features and hemodynamic changes similar to those in humans. It is useful for further research on new techniques and materials for interventional treatment. (authors)

  10. A conceptual approach to approximate tree root architecture in infinite slope models

    Science.gov (United States)

    Schmaltz, Elmar; Glade, Thomas

    2016-04-01

    Vegetation-related properties - particularly tree root distribution and the coherent hydrologic and mechanical effects on the underlying soil mantle - are commonly not considered in infinite slope models. Indeed, from a geotechnical point of view, these effects appear difficult to reproduce reliably in a physically-based modelling approach. The growth of a tree and the expansion of its root architecture are directly connected with both intrinsic properties such as species and age, and extrinsic factors like topography, availability of nutrients, climate and soil type. These parameters control four main aspects of the tree root architecture: 1) type of rooting; 2) maximum growing distance from the tree stem (radius r); 3) maximum growing depth (height h); and 4) potential deformation of the root system. Geometric solids are able to approximate the distribution of a tree root system. The objective of this paper is to investigate whether it is possible to implement root systems and the connected hydrological and mechanical attributes sufficiently in a 3-dimensional slope stability model. Hereby, a spatio-dynamic vegetation module should cope with the demands of performance, computation time and significance. However, in this presentation, we focus only on the distribution of roots. The assumption is that the horizontal root distribution around a tree stem on a 2-dimensional plane can be described by a circle with the stem located at the centroid and a distinct radius r that is dependent on age and species. We classified three main types of tree root systems and reproduced the species- and age-related root distribution with three respective mathematical solids in a synthetic 3-dimensional hillslope environment. Thus, two solids in a Euclidean space were distinguished to represent the three root systems: i) cylinders with radius r and height h, whilst the dimensions of the latter define the shape of a taproot system or a shallow-root system respectively; ii) elliptic
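
The solid-based approximation lends itself to simple point-membership tests, which is what a gridded slope stability model ultimately needs per soil cell. A sketch with two of the shapes described above (the coordinates and the r, h values are invented; in the approach outlined here they would derive from species and age):

```python
def in_cylinder(p, stem, r, h):
    """Point inside a vertical cylinder of radius r and depth h below the
    stem (taproot system: small r, large h; shallow system: large r, small h)."""
    dx, dy = p[0] - stem[0], p[1] - stem[1]
    depth = stem[2] - p[2]
    return dx * dx + dy * dy <= r * r and 0 <= depth <= h

def in_half_ellipsoid(p, stem, r, h):
    """Point inside a downward half-ellipsoid with horizontal semi-axis r
    and vertical semi-axis h (elliptic, heart-root-like system)."""
    dx, dy = p[0] - stem[0], p[1] - stem[1]
    depth = stem[2] - p[2]
    return depth >= 0 and (dx * dx + dy * dy) / r**2 + depth**2 / h**2 <= 1

stem = (0.0, 0.0, 0.0)    # stem position on the slope surface (m)
print(in_cylinder((0.2, 0.0, -1.5), stem, r=0.3, h=2.0))         # True: inside taproot
print(in_half_ellipsoid((1.0, 0.0, -1.0), stem, r=1.2, h=1.2))   # False: outside
```

Evaluating such tests over the model grid marks which cells receive root cohesion and modified hydrology, at negligible computational cost.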

  11. Modeling and Simulation Techniques for Large-Scale Communications Modeling

    National Research Council Canada - National Science Library

    Webb, Steve

    1997-01-01

    .... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.
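The synchronization idea can be sketched with a toy model: two system variants are driven by identically seeded random number streams, so their outputs are paired and the estimated difference has low variance. The single-server queue below is an illustrative stand-in, not one of the CECOM models.

```python
import random

# Synchronized random number strings for a comparative study: each
# variant consumes the same stream of draws, so the comparison is paired.
def mean_wait(service_rate, stream, jobs=1000):
    # toy single-server queue: track backlog seen by each arriving job
    backlog = 0.0
    total = 0.0
    for _ in range(jobs):
        gap = stream.expovariate(1.0)              # inter-arrival time
        service = stream.expovariate(service_rate) # service time
        backlog = max(0.0, backlog - gap) + service
        total += backlog
    return total / jobs

seed = 42
base = mean_wait(1.2, random.Random(seed))   # baseline system
alt = mean_wait(1.5, random.Random(seed))    # faster server, same stream
improvement = base - alt                     # low-variance paired estimate
```

With unsynchronized streams the same comparison would require many more replications to reach the same confidence.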

  12. Molecular rheology of branched polymers: decoding and exploring the role of architectural dispersity through a synergy of anionic synthesis, interaction chromatography, rheometry and modeling.

    Science.gov (United States)

    van Ruymbeke, E; Lee, H; Chang, T; Nikopoulou, A; Hadjichristidis, N; Snijkers, F; Vlassopoulos, D

    2014-07-21

    An emerging challenge in polymer physics is the quantitative understanding of the influence of macromolecular architecture (i.e., branching) on the rheological response of entangled complex polymers. Recent investigations of the rheology of well-defined architecturally complex polymers have determined the composition of the molecular structure and identified the role of side-products in the measured samples. The combination of different characterization techniques, experimental and/or theoretical, represents the current state of the art. Here we review this interdisciplinary approach to the molecular rheology of complex polymers, and show the importance of confronting these different tools for ensuring an accurate characterization of a given polymeric sample. We use statistical tools to relate the information available from the synthesis protocol of a sample and its experimental molar mass distribution (typically obtained from size exclusion chromatography), and hence obtain precise information about its structural composition, i.e. enhance the existing sensitivity limit. We critically discuss the use of linear rheology as a reliable quantitative characterization tool, along with the recently developed temperature gradient interaction chromatography. The latter, which has emerged as an indispensable characterization tool for branched architectures, offers unprecedented sensitivity in detecting the presence of different molecular structures in a sample. Combining these techniques is imperative for quantifying the molecular composition of a polymer and its consequences on the macroscopic properties. We validate this approach by means of a new model asymmetric comb polymer which was synthesized anionically. It was thoroughly characterized and its rheology was carefully analyzed. The main result is that the rheological signal reveals fine molecular details, which must be taken into account to fully elucidate the viscoelastic response of entangled branched

  13. Molecular rheology of branched polymers: Decoding and exploring the role of architectural dispersity through a synergy of anionic synthesis, interaction chromatography, rheometry and modeling

    KAUST Repository

    Van Ruymbeke, Evelyne

    2014-01-01

    An emerging challenge in polymer physics is the quantitative understanding of the influence of macromolecular architecture (i.e., branching) on the rheological response of entangled complex polymers. Recent investigations of the rheology of well-defined architecturally complex polymers have determined the composition of the molecular structure and identified the role of side-products in the measured samples. The combination of different characterization techniques, experimental and/or theoretical, represents the current state of the art. Here we review this interdisciplinary approach to the molecular rheology of complex polymers, and show the importance of confronting these different tools for ensuring an accurate characterization of a given polymeric sample. We use statistical tools to relate the information available from the synthesis protocol of a sample and its experimental molar mass distribution (typically obtained from size exclusion chromatography), and hence obtain precise information about its structural composition, i.e. enhance the existing sensitivity limit. We critically discuss the use of linear rheology as a reliable quantitative characterization tool, along with the recently developed temperature gradient interaction chromatography. The latter, which has emerged as an indispensable characterization tool for branched architectures, offers unprecedented sensitivity in detecting the presence of different molecular structures in a sample. Combining these techniques is imperative for quantifying the molecular composition of a polymer and its consequences on the macroscopic properties. We validate this approach by means of a new model asymmetric comb polymer which was synthesized anionically. It was thoroughly characterized and its rheology was carefully analyzed. The main result is that the rheological signal reveals fine molecular details, which must be taken into account to fully elucidate the viscoelastic response of entangled branched

  14. Techniques for discrimination-free predictive models (Chapter 12)

    NARCIS (Netherlands)

    Kamiran, F.; Calders, T.G.K.; Pechenizkiy, M.; Custers, B.H.M.; Calders, T.G.K.; Schermer, B.W.; Zarsky, T.Z.

    2013-01-01

    In this chapter, we give an overview of the techniques we ourselves developed for constructing discrimination-free classifiers. In discrimination-free classification the goal is to learn a predictive model that classifies future data objects as accurately as possible, yet the predicted labels should be

  15. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    Full Text Available When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom's taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Some other techniques that take into account nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive level validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, and so improvements are needed in describing the cognitive skills measured by items.

  16. NMR and modelling techniques in structural and conformation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Abraham, R J [Liverpool Univ. (United Kingdom)]

    1994-12-31

    The use of Lanthanide Induced Shifts (L.I.S.) and modelling techniques in conformational analysis is presented. The use of Co(III) porphyrins as shift reagents is discussed, with examples of their use in the conformational analysis of some heterocyclic amines. (author) 13 refs., 9 figs.

  17. Air quality modelling using chemometric techniques | Azid | Journal ...

    African Journals Online (AJOL)

    This study shows that chemometric techniques and modelling are an excellent tool for API assessment, air pollution source identification and apportionment, and can support the design of an API monitoring network for effective air pollution resource management. Keywords: air pollutant index; chemometric; ANN; ...

  18. Software architecture and design of the web services facilitating climate model diagnostic analysis

    Science.gov (United States)

    Pan, L.; Lee, S.; Zhang, J.; Tang, B.; Zhai, C.; Jiang, J. H.; Wang, W.; Bao, Q.; Qi, M.; Kubar, T. L.; Teixeira, J.

    2015-12-01

    Climate model diagnostic analysis is a computationally- and data-intensive task because it involves multiple numerical model outputs and satellite observation data that can both be high resolution. We have built an online tool that facilitates this process. The tool is called Climate Model Diagnostic Analyzer (CMDA). It employs web service technology and provides a web-based user interface. The benefits of these choices include: (1) no installation of any software other than a browser, hence platform compatibility; (2) co-location of computation and big data on the server side, with only small results and plots downloaded to the client side, hence high data efficiency; (3) a multi-threaded implementation to achieve parallel performance on multi-core servers; and (4) cloud deployment so each user has a dedicated virtual machine. In this presentation, we will focus on the computer science aspects of this tool, namely the architectural design, the infrastructure of the web services, the implementation of the web-based user interface, the mechanism of provenance collection, the approach to virtualization, and the Amazon Cloud deployment. As an example, we will describe our methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). Another example is the use of Docker, a light-weight virtualization container, to distribute and deploy CMDA onto an Amazon EC2 instance. Our CMDA tool has been successfully used in the 2014 Summer School hosted by the JPL Center for Climate Science. Students gave positive feedback in general, and we will report their comments. An enhanced version of CMDA with several new features, some requested by the 2014 students, will be used in the upcoming 2015 Summer School.
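The wrapping pattern described here, an existing science routine exposed behind an HTTP endpoint, can be sketched with the standard library alone. CMDA itself uses Flask behind Gunicorn; the `analyze` routine and parameter names below are hypothetical stand-ins.

```python
import json
from urllib.parse import parse_qs

# Hypothetical stand-in for an existing science application routine.
def analyze(values):
    return {"n": len(values), "mean": sum(values) / len(values)}

# Minimal WSGI wrapper turning the routine into a web service endpoint:
# query-string parameters in, JSON out. A framework such as Flask adds
# routing and error handling on top of this same idea.
def application(environ, start_response):
    params = parse_qs(environ.get("QUERY_STRING", ""))
    values = [float(v) for v in params.get("v", [])]
    body = json.dumps(analyze(values)).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

The app could then be served locally with `wsgiref.simple_server.make_server("", 8000, application).serve_forever()`.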

  19. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale one (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the computational cost of a RISMC analysis by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
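The surrogate idea can be illustrated in a few lines: fit a cheap approximation to a handful of runs of an "expensive" code, then query the surrogate instead. The response function below is a hypothetical stand-in for a simulation run, and the polynomial surrogate is one of the simplest possible choices.

```python
import numpy as np

# Hypothetical stand-in for an expensive simulation run (e.g., a
# thermal-hydraulics code that takes hours per evaluation).
def expensive_sim(x):
    return np.sin(x) + 0.1 * x ** 2

# A few costly training runs ...
x_train = np.linspace(0.0, 3.0, 12)
y_train = expensive_sim(x_train)

# ... fit a cheap polynomial surrogate to them ...
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=4))

# ... then query the surrogate at negligible cost.
x_new = np.linspace(0.0, 3.0, 101)
max_err = float(np.max(np.abs(surrogate(x_new) - expensive_sim(x_new))))
```

In practice RISMC-style analyses use richer surrogates (e.g., Gaussian processes), but the workflow, train on few runs, query many times, is the same.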

  20. Organizational Learning Supported by Reference Architecture Models: Industry 4.0 Laboratory Study

    Directory of Open Access Journals (Sweden)

    Marco Nardello

    2017-10-01

    Full Text Available The wave of the fourth industrial revolution (Industry 4.0) is bringing a new vision of the manufacturing industry. In manufacturing, one of the buzzwords of the moment is "Smart production". Smart production involves manufacturing equipment with many sensors that can generate and transmit large amounts of data. However, these data and the information from manufacturing operations are not shared in the organization, so the organization is not using them to learn and improve its operations. To address this problem, the authors implemented in an Industry 4.0 laboratory an instance of an emerging technical standard specific to the manufacturing industry. Global manufacturing experts consider the Reference Architecture Model Industry 4.0 (RAMI4.0) one of the cornerstones for the implementation of Industry 4.0. The instantiation contributed to organizational learning in the laboratory by collecting and sharing up-to-date information concerning manufacturing equipment. This article discusses and generalizes the experience and outlines future research directions.

  1. When machine vision meets histology: A comparative evaluation of model architecture for classification of histology sections.

    Science.gov (United States)

    Zhong, Cheng; Han, Ju; Borowsky, Alexander; Parvin, Bahram; Wang, Yunfu; Chang, Hang

    2017-01-01

    Classification of histology sections in large cohorts, in terms of distinct regions of microanatomy (e.g., stromal) and histopathology (e.g., tumor, necrosis), enables the quantification of tumor composition and the construction of predictive models of genomics and clinical outcome. To tackle the large technical variations and biological heterogeneities intrinsic to large cohorts, emerging systems utilize either prior knowledge from pathologists or unsupervised feature learning for invariant representation of the underlying properties in the data. However, to a large degree, the architecture for tissue histology classification remains unexplored and requires systematic investigation. This paper is the first attempt to provide insights into three fundamental questions in tissue histology classification: I. Is unsupervised feature learning preferable to human-engineered features? II. Does cellular saliency help? III. Does the sparse feature encoder contribute to recognition? We show that (a) in I, both the Cellular Morphometric Feature and features from unsupervised feature learning lead to superior performance when compared to SIFT and [Color, Texture]; (b) in II, incorporating cellular saliency impairs the performance of systems built upon pixel-/patch-level features; and (c) in III, the effect of the sparse feature encoder is correlated with the robustness of the features, and performance can be consistently improved by the multi-stage extension of systems built upon both the Cellular Morphometric Feature and features from unsupervised feature learning. These insights are validated with two cohorts of Glioblastoma Multiforme (GBM) and Kidney Clear Cell Carcinoma (KIRC). Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Parallel eigenanalysis of finite element models in a completely connected architecture

    Science.gov (United States)

    Akl, F. A.; Morel, M. R.

    1989-01-01

    A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis, [K][Φ] = [M][Φ][Ω], where [K] and [M] are of order N and [Ω] is of order q. The concurrent solution of the eigenproblem is based on the multifrontal/modified subspace method and is achieved in a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm was successfully implemented on a tightly coupled multiple-instruction multiple-data parallel processing machine, the Cray X-MP. A finite element model is divided into m domains, each of which is assumed to process n elements. Each domain is then assigned to a processor, or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macrotasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effects of the number of domains, the number of degrees of freedom located along the global fronts, and the dimension of the subspace on the performance of the algorithm are investigated. A parallel finite element dynamic analysis program, p-feda, is documented and the performance of its subroutines in a parallel environment is analyzed.
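The generalized eigenproblem itself can be shown on a toy system: reduce K Φ = M Φ Ω to a standard symmetric problem via a Cholesky factorization of M and solve it serially. This is only a sketch of the underlying algebra; the record's contribution is solving the same problem concurrently with a multifrontal/modified subspace method.

```python
import numpy as np

# Toy 3-DOF stiffness and (lumped) mass matrices; values are illustrative.
K = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
M = np.diag([2.0, 2.0, 2.0])

# Reduce K phi = omega^2 M phi to a standard problem:
# M = L L^T, A = L^-1 K L^-T, eigenvalues of A are omega^2.
L = np.linalg.cholesky(M)
Linv = np.linalg.inv(L)
A = Linv @ K @ Linv.T
w2, Y = np.linalg.eigh(A)    # eigenvalues omega^2, ascending
Phi = Linv.T @ Y             # back-transformed mode shapes
```

Each column of `Phi` satisfies the generalized relation K Φ = M Φ diag(ω²), which is the check a subspace iteration would converge to.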

  3. Evaluation of physics-based numerical modelling for diverse design architecture of perovskite solar cells

    Science.gov (United States)

    Mishra, A. K.; Catalan, Jorge; Camacho, Diana; Martinez, Miguel; Hodges, D.

    2017-08-01

    Solution-processed organic-inorganic metal halide perovskite based solar cells are emerging as a new cost-effective photovoltaic technology. In the context of increasing the power conversion efficiency (PCE) and sustainability of perovskite solar cell (PSC) devices, we comprehensively analyzed physics-based numerical models for doped and un-doped PSC devices. Our analysis emphasized the role of the different charge carrier layers from the viewpoint of interfacial adhesion and its influence on the charge extraction rate and the charge recombination mechanism. Morphological and charge transport properties of the perovskite thin film as a function of device architecture are also considered to investigate the photovoltaic properties of PSCs. We observed that the photocurrent is dominantly influenced by the interfacial recombination process and that the photovoltage has a functional relationship with the defect density of the perovskite absorption layer. A novel contour mapping method for understanding the characteristics of the current density-voltage (J-V) curves of each device as a function of perovskite layer thickness provides important insight into the distribution spectrum of photovoltaic properties. Functional relationships of device efficiency and fill factor with absorption layer thickness are also discussed.

  4. An Architecturally Constrained Model of Random Number Generation and its Application to Modelling the Effect of Generation Rate

    Directory of Open Access Journals (Sweden)

    Nicholas J. Sexton

    2014-07-01

    Full Text Available Random number generation (RNG is a complex cognitive task for human subjects, requiring deliberative control to avoid production of habitual, stereotyped sequences. Under various manipulations (e.g., speeded responding, transcranial magnetic stimulation, or neurological damage the performance of human subjects deteriorates, as reflected in a number of qualitatively distinct, dissociable biases. For example, the intrusion of stereotyped behaviour (e.g., counting increases at faster rates of generation. Theoretical accounts of the task postulate that it requires the integrated operation of multiple, computationally heterogeneous cognitive control ('executive' processes. We present a computational model of RNG, within the framework of a novel, neuropsychologically-inspired cognitive architecture, ESPro. Manipulating the rate of sequence generation in the model reproduced a number of key effects observed in empirical studies, including increasing sequence stereotypy at faster rates. Within the model, this was due to time limitations on the interaction of supervisory control processes, namely, task setting, proposal of responses, monitoring, and response inhibition. The model thus supports the fractionation of executive function into multiple, computationally heterogeneous processes.
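The counting bias mentioned here can be quantified with a simple index: the fraction of adjacent pairs in a digit sequence that continue a count upward or downward (mod 10). This is an illustrative measure in the spirit of the stereotypy effects discussed, not the exact metric used with the ESPro model.

```python
# Stereotypy index for "random" digit sequences: fraction of adjacent
# pairs that continue a count (n -> n+1 or n -> n-1, modulo 10).
# A fully counted sequence scores 1.0; chance level for uniform,
# independent digits is 0.2.
def counting_bias(seq):
    pairs = list(zip(seq, seq[1:]))
    counted = sum(1 for a, b in pairs if (b - a) % 10 in (1, 9))
    return counted / len(pairs)
```

Under a rate manipulation, the model's prediction would be that this index rises as generation speeds up.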

  5. Technology Reference Model (TRM) Reports: Federal Enterprise Architecture (FEA) Mapping Report

    Data.gov (United States)

    Department of Veterans Affairs — The One VA Enterprise Architecture (OneVA EA) is a comprehensive picture of the Department of Veterans Affairs' (VA) operations, capabilities and services and the...

  6. Technology Reference Model (TRM) Reports: Federal Enterprise Architecture (FEA) Category Count Report

    Data.gov (United States)

    Department of Veterans Affairs — The One VA Enterprise Architecture (OneVA EA) is a comprehensive picture of the Department of Veterans Affairs' (VA) operations, capabilities and services and the...

  7. Evaluating radiative transfer schemes treatment of vegetation canopy architecture in land surface models

    Science.gov (United States)

    Braghiere, Renato; Quaife, Tristan; Black, Emily

    2016-04-01

    Incoming shortwave radiation is the primary source of energy driving the majority of the Earth's climate system. The partitioning of shortwave radiation by vegetation into absorbed, reflected, and transmitted terms is important for most biogeophysical processes, including leaf temperature changes and photosynthesis, and it is currently calculated by most land surface schemes (LSS) of climate and/or numerical weather prediction models. The most commonly used radiative transfer scheme in LSS is the two-stream approximation; however, it does not explicitly account for vegetation architectural effects on shortwave radiation partitioning. Detailed three-dimensional (3D) canopy radiative transfer schemes have been developed, but they are too computationally expensive for large-scale studies over long time periods. Using a straightforward one-dimensional (1D) parameterisation proposed by Pinty et al. (2006), we modified a two-stream radiative transfer scheme by including a simple function of Sun zenith angle, the so-called "structure factor", which does not require an explicit description and understanding of the complex phenomena arising from the presence of heterogeneous vegetation architecture, and which guarantees simulations of the radiative balance consistent with 3D representations. In order to evaluate the ability of the proposed parameterisation to accurately represent the radiative balance of more complex 3D schemes, a comparison between the modified two-stream approximation with the "structure factor" parameterisation and state-of-the-art 3D radiative transfer schemes was conducted, following a set of virtual scenarios described in the RAMI4PILPS experiment. These experiments have been evaluating the radiative balance of several models under perfectly controlled conditions in order to eliminate uncertainties arising from incomplete or erroneous knowledge of the structural, spectral and illumination-related canopy characteristics typical
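The core of the structure-factor idea is that a 1D two-stream scheme sees an effective leaf area index, zeta * LAI, rather than the true LAI. The Beer-Lambert transmission law and the zeta value below are illustrative placeholders, not the full Pinty et al. (2006) parameterisation.

```python
import math

# Direct-beam canopy transmission with a structure factor zeta scaling
# the true LAI to an effective LAI. zeta and leaf_proj are assumed values.
def direct_transmission(lai, sun_zenith_deg, zeta=0.7, leaf_proj=0.5):
    mu = math.cos(math.radians(sun_zenith_deg))  # cosine of Sun zenith angle
    lai_eff = zeta * lai                         # structure factor scales LAI
    return math.exp(-leaf_proj * lai_eff / mu)   # Beer-Lambert attenuation

# A structurally clumped canopy (zeta < 1) transmits more direct light
# than a homogeneous canopy with the same true LAI.
t_clumped = direct_transmission(4.0, 30.0, zeta=0.7)
t_homog = direct_transmission(4.0, 30.0, zeta=1.0)
```

In the full parameterisation zeta depends on the Sun zenith angle, which is what lets a 1D scheme track the angular signature of 3D canopies.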

  8. Analysis of central enterprise architecture elements in models of six eHealth projects.

    Science.gov (United States)

    Virkanen, Hannu; Mykkänen, Juha

    2014-01-01

    Large-scale initiatives for eHealth services have been established in many countries on regional or national level. The use of Enterprise Architecture has been suggested as a methodology to govern and support the initiation, specification and implementation of large-scale initiatives including the governance of business changes as well as information technology. This study reports an analysis of six health IT projects in relation to Enterprise Architecture elements, focusing on central EA elements and viewpoints in different projects.

  9. Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Palmer, Kevin [Teck Resources Limited (Canada); Deutsch, Clayton V.; Szymanski, Jozef [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Etsell, Thomas H. [University of Alberta, Department of Chemical and Materials Engineering (Canada)

    2016-06-15

    High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scale, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and are rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.
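One of the issues the record names, complex nonlinear averaging of metallurgical parameters, is commonly handled with a power-law average when upscaling. The sketch below is a generic illustration; the exponent omega is property-specific and the values shown are assumed.

```python
# Power-law averaging for nonlinearly averaging metallurgical properties
# (e.g., comminution indices) when upscaling block values. omega = 1
# recovers the arithmetic mean, omega = -1 the harmonic mean; real
# properties are calibrated to an exponent in between.
def power_average(values, omega):
    n = len(values)
    return (sum(v ** omega for v in values) / n) ** (1.0 / omega)
```

For flow-like quantities the harmonic mean (omega = -1) is often closer to the effective upscaled value than the arithmetic mean.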

  10. A fermionic molecular dynamics technique to model nuclear matter

    International Nuclear Information System (INIS)

    Vantournhout, K.; Jachowicz, N.; Ryckebusch, J.

    2009-01-01

    Full text: At sub-nuclear densities of about 10^14 g/cm^3, nuclear matter arranges itself in a variety of complex shapes. This can be the case in the crust of neutron stars and in core-collapse supernovae. These slab-like and rod-like structures, designated as nuclear pasta, have been modelled with classical molecular dynamics techniques. We present a technique, based on fermionic molecular dynamics, to model nuclear matter at sub-nuclear densities in a semiclassical framework. The dynamical evolution of an antisymmetric ground state is described under the assumption of periodic boundary conditions. Adding the concepts of antisymmetry, spin and probability distributions to classical molecular dynamics brings the dynamical description of nuclear matter to a quantum mechanical level. Applications of this model range from the investigation of macroscopic observables and the equation of state to the study of fundamental interactions on the microscopic structure of the matter. (author)

  11. Synapse-centric mapping of cortical models to the SpiNNaker neuromorphic architecture

    Directory of Open Access Journals (Sweden)

    James Courtney Knight

    2016-09-01

    Full Text Available While the adult human brain has approximately 8.8x10^10 neurons, this number is dwarfed by its 1x10^15 synapses. From the point of view of neuromorphic engineering and neural simulation in general this makes the simulation of these synapses a particularly complex problem. SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Current solutions for simulating spiking neural networks on SpiNNaker are heavily inspired by work on distributed high-performance computing. However, while SpiNNaker shares many characteristics with such distributed systems, its component nodes have much more limited resources and, as the system lacks global synchronization, the computation performed on each node must complete within a fixed time step. We first analyze the performance of the current SpiNNaker neural simulation software and identify several problems that occur when it is used to simulate networks of the type often used to model the cortex which contain large numbers of sparsely connected synapses. We then present a new, more flexible approach for mapping the simulation of such networks to SpiNNaker which solves many of these problems. Finally we analyze the performance of our new approach using both benchmarks, designed to represent cortical connectivity, and larger, functional cortical models. In a benchmark network where neurons receive input from 8000 STDP synapses, our new approach allows more neurons to be simulated on each SpiNNaker core than has been previously possible. We also demonstrate that the largest plastic neural network previously simulated on neuromorphic hardware can be run in real time using our new approach: double the speed that was previously achieved. Additionally this network contains two types of plastic synapse which previously had to be trained separately but, using our new approach, can be trained simultaneously.

  12. Changes in gene expression and cellular architecture in an ovarian cancer progression model.

    Directory of Open Access Journals (Sweden)

    Amy L Creekmore

    Full Text Available BACKGROUND: Ovarian cancer is the fifth leading cause of cancer deaths among women. Early stage disease often remains undetected due to the lack of symptoms and reliable biomarkers. The identification of early genetic changes could provide insights into novel signaling pathways that may be exploited for early detection and treatment. METHODOLOGY/PRINCIPAL FINDINGS: Mouse ovarian surface epithelial (MOSE) cells were used to identify stage-dependent changes in gene expression levels and signal transduction pathways by mouse whole genome microarray analyses and gene ontology. These cells have undergone spontaneous transformation in cell culture and transitioned from non-tumorigenic to intermediate and aggressive, malignant phenotypes. Significantly changed genes were overrepresented in a number of pathways, most notably the cytoskeleton functional category. Concurrent with the gene expression changes, the cytoskeletal architecture became progressively disorganized, resulting in aberrant expression or subcellular distribution of key cytoskeletal regulatory proteins (focal adhesion kinase, α-actinin, and vinculin). The cytoskeletal disorganization was accompanied by altered patterns of serine and tyrosine phosphorylation as well as changed expression and subcellular localization of the integral signaling intermediates APC and PKCβII. CONCLUSIONS/SIGNIFICANCE: Our studies have identified genes that are aberrantly expressed during MOSE cell neoplastic progression. We show that early stage dysregulation of actin microfilaments is followed by progressive disorganization of microtubules and intermediate filaments at later stages. These stage-specific, step-wise changes provide further insights into the time and spatial sequence of events that lead to the fully transformed state, since these changes are also observed in aggressive human ovarian cancer cell lines independent of their histological type. Moreover, our studies support a link between aberrant cytoskeleton

  13. Overelaborated synaptic architecture and reduced synaptomatrix glycosylation in a Drosophila classic galactosemia disease model

    Directory of Open Access Journals (Sweden)

    Patricia Jumbo-Lucioni

    2014-12-01

    Full Text Available Classic galactosemia (CG is an autosomal recessive disorder resulting from loss of galactose-1-phosphate uridyltransferase (GALT, which catalyzes conversion of galactose-1-phosphate and uridine diphosphate (UDP-glucose to glucose-1-phosphate and UDP-galactose, immediately upstream of UDP–N-acetylgalactosamine and UDP–N-acetylglucosamine synthesis. These four UDP-sugars are essential donors for driving the synthesis of glycoproteins and glycolipids, which heavily decorate cell surfaces and extracellular spaces. In addition to acute, potentially lethal neonatal symptoms, maturing individuals with CG develop striking neurodevelopmental, motor and cognitive impairments. Previous studies suggest that neurological symptoms are associated with glycosylation defects, with CG recently being described as a congenital disorder of glycosylation (CDG, showing defects in both N- and O-linked glycans. Here, we characterize behavioral traits, synaptic development and glycosylated synaptomatrix formation in a GALT-deficient Drosophila disease model. Loss of Drosophila GALT (dGALT greatly impairs coordinated movement and results in structural overelaboration and architectural abnormalities at the neuromuscular junction (NMJ. Dietary galactose and mutation of galactokinase (dGALK or UDP-glucose dehydrogenase (sugarless genes are identified, respectively, as critical environmental and genetic modifiers of behavioral and cellular defects. Assaying the NMJ extracellular synaptomatrix with a broad panel of lectin probes reveals profound alterations in dGALT mutants, including depletion of galactosyl, N-acetylgalactosamine and fucosylated horseradish peroxidase (HRP moieties, which are differentially corrected by dGALK co-removal and sugarless overexpression. Synaptogenesis relies on trans-synaptic signals modulated by this synaptomatrix carbohydrate environment, and dGALT-null NMJs display striking changes in heparan sulfate proteoglycan (HSPG co-receptor and Wnt

  14. Model technique for aerodynamic study of boiler furnace

    Energy Technology Data Exchange (ETDEWEB)

    1966-02-01

    The help of the Division was recently sought to improve the heat transfer and reduce the exit gas temperature in a pulverized-fuel-fired boiler at an Australian power station. One approach adopted was to construct from Perspex a 1:20 scale cold-air model of the boiler furnace and to use a flow-visualization technique to study the aerodynamic patterns established when air was introduced through the p.f. burners of the model. The work established good correlations between the behaviour of the model and of the boiler furnace.

  15. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions...... into the operation of the grid-connected power converters. This paper describes a quasi passive method for estimating the line impedance of the distribution electricity network. The method uses the model based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi...
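The resistive and inductive parts mentioned above can be recovered by ordinary least squares once voltage and current samples are available. The sketch below is a generic illustration of model-based identification against the model v(t) = R·i(t) + L·di/dt, not the authors' quasi-passive method; all signal values and sample rates are invented.

```python
import numpy as np

# Generic least-squares sketch (invented signals, not the paper's quasi
# passive method): recover R and L from v(t) = R*i(t) + L*di/dt.
rng = np.random.default_rng(0)
fs = 10_000.0                              # sampling rate [Hz]
t = np.arange(0, 0.1, 1 / fs)
R_true, L_true = 0.5, 2e-3                 # "unknown" line parameters
i = 10 * np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 250 * t)
di = np.gradient(i, 1 / fs)                # numerical derivative of current
v = R_true * i + L_true * di + 0.01 * rng.standard_normal(t.size)

# Stack the regressors [i, di/dt] and solve for [R, L] in one shot.
A = np.column_stack([i, di])
(R_est, L_est), *_ = np.linalg.lstsq(A, v, rcond=None)
print(R_est, L_est)
```

With low measurement noise the estimates land very close to the true R and L; in practice the derivative term makes the fit sensitive to noise, which is one reason dedicated identification methods are used.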

  16. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
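A toy version of the cumulative-residual idea can be sketched as follows. The weight-multiplier construction below only loosely approximates the zero-mean Gaussian process described in the abstract (it ignores the correction for estimated coefficients), and all data are synthetic.

```python
import numpy as np

# Toy cumulative-residual check (synthetic data; a simplified stand-in
# for the paper's Gaussian-process approximation).
rng = np.random.default_rng(1)
n = 200
x = np.sort(rng.uniform(0, 1, n))                    # ordering covariate
y = 1.0 + 2.0 * x + 0.3 * rng.standard_normal(n)     # correctly specified model

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

obs_path = np.cumsum(resid) / np.sqrt(n)             # observed cumulative process
obs_sup = np.abs(obs_path).max()                     # sup-norm test statistic

# Simulated realizations of an approximating zero-mean process.
sims = np.cumsum(resid * rng.standard_normal((500, n)), axis=1) / np.sqrt(n)
p_value = float(np.mean(np.abs(sims).max(axis=1) >= obs_sup))
print(round(p_value, 3))
```

A small p-value would indicate that the observed cumulative path wanders further than natural variation allows, i.e. model misspecification.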

  17. Monte Carlo simulations on SIMD computer architectures

    International Nuclear Information System (INIS)

    Burmester, C.P.; Gronsky, R.; Wille, L.T.

    1992-01-01

    In this paper algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.
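The lattice-partitioning idea that maps Ising updates onto a processor array can be illustrated with a data-parallel checkerboard sweep: sites of one color have no neighbors of the same color, so they can all be updated simultaneously. This NumPy sketch is illustrative only, not the MasPar implementation.

```python
import numpy as np

# Data-parallel checkerboard Metropolis sweep for the nearest-neighbour
# Ising model -- the partitioning idea behind SIMD lattice mappings.
rng = np.random.default_rng(2)
L, beta = 64, 0.3
spins = rng.choice([-1, 1], size=(L, L))
black = (np.add.outer(np.arange(L), np.arange(L)) % 2).astype(bool)

def sweep(spins):
    for sub in (black, ~black):              # update one sublattice at a time
        # Sum of the four neighbours with periodic boundaries.
        nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
               np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nbr               # energy cost of flipping each spin
        accept = sub & (rng.random((L, L)) < np.exp(-beta * dE))
        spins = np.where(accept, -spins, spins)
    return spins

for _ in range(10):
    spins = sweep(spins)
print(spins.shape)
```

On a real SIMD machine each processor would own a tile of the lattice and exchange only boundary spins with its neighbors, which is exactly the communication the paper sets out to minimize.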

  18. Mars Colony in situ resource utilization: An integrated architecture and economics model

    Science.gov (United States)

    Shishko, Robert; Fradet, René; Do, Sydney; Saydam, Serkan; Tapia-Cortez, Carlos; Dempster, Andrew G.; Coulton, Jeff

    2017-09-01

    This paper reports on our effort to develop an ensemble of specialized models to explore the commercial potential of mining water/ice on Mars in support of a Mars Colony. This ensemble starts with a formal systems architecting framework to describe a Mars Colony and capture its artifacts' parameters and technical attributes. The resulting database is then linked to a variety of "downstream" analytic models. In particular, we integrated an extraction process (i.e., "mining") model, a simulation of the colony's environmental control and life support infrastructure known as HabNet, and a risk-based economics model. The mining model focuses on the technologies associated with in situ resource extraction, processing, storage and handling, and delivery. This model computes the production rate as a function of the systems' technical parameters and the local Mars environment. HabNet simulates the fundamental sustainability relationships associated with establishing and maintaining the colony's population. The economics model brings together market information, investment and operating costs, along with measures of market uncertainty and Monte Carlo techniques, with the objective of determining the profitability of commercial water/ice in situ mining operations. All told, over 50 market and technical parameters can be varied in order to address "what-if" questions, including colony location.
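The risk-based economics component can be caricatured with a small Monte Carlo net-present-value calculation. Every figure below is invented for illustration; none comes from the paper's integrated model.

```python
import numpy as np

# Invented-figure Monte Carlo profitability sketch (not the paper's model):
# NPV of a water-mining operation under market price uncertainty.
rng = np.random.default_rng(3)
n_trials, years, rate = 10_000, 10, 0.08
capex = 500.0                                # up-front investment, arbitrary units
production = 40.0                            # units of water delivered per year
opex = 30.0                                  # yearly operating cost

# Uncertain market price per unit of water, lognormally distributed.
price = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=(n_trials, years))
cash = production * price - opex             # yearly net cash flow per trial
disc = (1.0 + rate) ** -np.arange(1, years + 1)
npv = cash @ disc - capex                    # one NPV per Monte Carlo trial
print(f"P(profitable) = {float(np.mean(npv > 0)):.2f}")
```

The fraction of trials with positive NPV is the kind of profitability measure such a model reports as the 50-plus market and technical parameters are varied.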

  19. Sustainable Spaces with Psychological Values: Historical Architecture as Reference Book for Biomimetic Models with Biophilic Qualities

    Directory of Open Access Journals (Sweden)

    Nely Ramzy

    2015-07-01

    Full Text Available Biomimicry is a growing area of interest in architecture due to the potential it offers for innovative architectural solutions and for a more sustainable, regenerative built environment. Yet, a growing body of research has identified various deficiencies in the employment of this approach in architecture. Of particular note are that: first, some biomimetic technologies are not inherently more sustainable or Nature-friendly than conventional equivalents; second, they lack any spatial expression of Nature and are visually ill-integrated into it. In an attempt to redeem these deficiencies, this paper suggests a framework for a more sustainable strategy that combines this approach with the related approach of "Biophilia", with reference to examples from historical architecture. Using pioneering strategies and applications from different historical styles, the paper shows that the combination of these two approaches may lead to enhanced outcomes in terms of sustainability as well as human psychology and well-being. In doing so, architects may go beyond simply mimicking Nature to synthesizing architecture in tune with it and bringing in bio-inspired solutions that are more responsive to human needs and well-being.

  20. Space and Architecture's Current Line of Research? A Lunar Architecture Workshop With An Architectural Agenda.

    Science.gov (United States)

    Solomon, D.; van Dijk, A.

    space context that will be useful on Earth on a conceptual and practical level? * In what ways could architecture's field of reference offer building on the Moon (and other celestial bodies) a paradigm shift? In addition to their models and designs, workshop participants will begin authoring a design recommendation for the building of (infra-)structures and habitats on celestial bodies, in particular the Moon and Mars. The design recommendation, a substantiated aesthetic code of conduct (not legally binding), will address long-term planning and incorporate issues of sustainability, durability, bio-diversity, infrastructure, CHANGE, and techniques that lend themselves to Earth-bound applications. It will also address the cultural implications architectural design might have within the context of space exploration. The design recommendation will ultimately be presented for peer review to both the space and architecture communities. What would the endorsement of such a document by the architectural community mean to the space community? The Lunar Architecture Workshop is conceptualised, produced and organised by (in alphabetical order): Alexander van Dijk, Art Race in Space; Barbara Imhof, ESCAPE*spHERE, Vienna University of Technology, Institute for Design and Building Construction; Bernard Foing, ESA SMART1 Project Scientist; Susmita Mohanty, MoonFront, LLC; Hans Schartner, Vienna University of Technology, Institute for Design and Building Construction; Debra Solomon, Art Race in Space, Dutch Art Institute; Paul van Susante, Lunar Explorers Society. Workshop locations: ESTEC, Noordwijk, NL and V2_Lab, Rotterdam, NL. Workshop dates: June 3-16, 2002 (a Call for Participation will be made in March-April 2002.)

  1. Use of hydrological modelling and isotope techniques in Guvenc basin

    International Nuclear Information System (INIS)

    Altinbilek, D.

    1991-07-01

    The study covers the work performed under Project No. 335-RC-TUR-5145 entitled ''Use of Hydrologic Modelling and Isotope Techniques in Guvenc Basin'' and is an initial part of a program for estimating runoff from Central Anatolia Watersheds. The study presented herein consists mainly of three parts: 1) the acquisition of a library of rainfall excess, direct runoff and isotope data for Guvenc basin; 2) the modification of the SCS model to be applied to Guvenc basin first and then to other basins of Central Anatolia for predicting the surface runoff from gaged and ungaged watersheds; and 3) the use of the environmental isotope technique in order to define the basin components of streamflow of Guvenc basin. 31 refs, figs and tabs
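The SCS method referred to above rests on the standard curve-number runoff relation Q = (P − Ia)² / (P − Ia + S). The sketch below implements the textbook form only, not the basin-specific modification developed in the study.

```python
# Textbook SCS curve-number relation Q = (P - Ia)^2 / (P - Ia + S),
# with S = 25400/CN - 254 [mm] and Ia = 0.2*S (standard form only).
def scs_runoff(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Direct runoff depth [mm] for rainfall p_mm and curve number cn."""
    s = 25400.0 / cn - 254.0        # potential maximum retention [mm]
    ia = ia_ratio * s               # initial abstraction [mm]
    if p_mm <= ia:
        return 0.0                  # all rainfall abstracted, no direct runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(round(scs_runoff(50.0, 75.0), 1))   # → 9.3
```

Calibrating the curve number (and possibly the Ia/S ratio) against the measured rainfall-runoff library is the kind of modification the project applied for Central Anatolian basins.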

  2. Construct canine intracranial aneurysm model by endovascular technique

    International Nuclear Information System (INIS)

    Liang Xiaodong; Liu Yizhi; Ni Caifang; Ding Yi

    2004-01-01

    Objective: To construct canine bifurcation aneurysms suitable for evaluating endovascular devices for interventional therapy by endovascular technique. Methods: The right common carotid artery of six dogs was expanded with a pliable balloon by means of an endovascular technique, then embolization with a detachable balloon was performed at its origination. DSA examinations were performed 1, 2 and 3 days after the procedure. Results: 6 aneurysm models were created successfully in six dogs, with the mean width and height of the aneurysms decreasing over 3 days. Conclusions: This canine aneurysm model reproduces the size and shape of human cerebral bifurcation saccular aneurysms on DSA images, making it suitable for evaluating endovascular devices for aneurysm therapy. The procedure is quick, reliable and reproducible. (authors)

  3. Resource-aware system architecture model for implementation of quantum aided Byzantine agreement on quantum repeater networks

    Science.gov (United States)

    Taherkhani, Mohammand Amin; Navi, Keivan; Van Meter, Rodney

    2018-01-01

    Quantum aided Byzantine agreement is an important distributed quantum algorithm with unique features in comparison to classical deterministic and randomized algorithms, requiring only a constant expected number of rounds in addition to giving a higher level of security. In this paper, we analyze details of the high level multi-party algorithm, and propose elements of the design for the quantum architecture and circuits required at each node to run the algorithm on a quantum repeater network (QRN). Our optimization techniques have reduced the quantum circuit depth by 44% and the number of qubits in each node by 20% for a minimum five-node setup compared to the design based on the standard arithmetic circuits. These improvements lead to a quantum system architecture with 160 qubits per node, space-time product (an estimate of the required fidelity) KQ ≈ 1.3 × 10^5 per node and error threshold 1.1 × 10^-6 for the total nodes in the network. The evaluation of the designed architecture shows that to execute the algorithm once on the minimum setup, we need to successfully distribute a total of 648 Bell pairs across the network, spread evenly between all pairs of nodes. This framework can be considered a starting point for establishing a road-map for light-weight demonstration of a distributed quantum application on QRNs.

  4. Towards Model Validation and Verification with SAT Techniques

    OpenAIRE

    Gogolla, Martin

    2010-01-01

    After sketching how system development and the UML (Unified Modeling Language) and the OCL (Object Constraint Language) are related, validation and verification with the tool USE (UML-based Specification Environment) is demonstrated. As a more efficient alternative for verification tasks, two approaches using SAT-based techniques are put forward: First, a direct encoding of UML and OCL with Boolean variables and propositional formulas, and second, an encoding employing an...
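The idea of reducing a model-finding question to Boolean satisfiability can be illustrated with a toy enumeration, a stand-in for a real SAT solver and not the USE tool's encoding. The classes and invariant below are hypothetical.

```python
from itertools import product

# Toy "SAT-style" model search: two A objects, two B objects, one Boolean
# variable per possible A-B link, and the OCL-like invariant
# "every A is linked to exactly one B" (plain enumeration, not a SAT solver).
n = 2
link_vars = list(product(range(n), range(n)))   # (i, j) = "A_i linked to B_j"

def satisfies(assignment):
    link = dict(zip(link_vars, assignment))
    return all(sum(link[(i, j)] for j in range(n)) == 1 for i in range(n))

models = [bits for bits in product([False, True], repeat=len(link_vars))
          if satisfies(bits)]
print(len(models))   # each A independently picks one of two B objects → 4
```

A SAT-based tool does the same search symbolically: the invariant becomes a propositional formula over the link variables, and any satisfying assignment is a valid object diagram.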

  5. Study on Information Management for the Conservation of Traditional Chinese Architectural Heritage - 3d Modelling and Metadata Representation

    Science.gov (United States)

    Yen, Y. N.; Weng, K. H.; Huang, H. Y.

    2013-07-01

    After over 30 years of practice and development, Taiwan's architectural conservation field is moving rapidly into digitalization and its applications. Compared to modern buildings, traditional Chinese architecture has considerably more complex elements and forms. To document and digitize these unique heritages in their conservation lifecycle is a new and important issue. This article takes the caisson ceiling of the Taipei Confucius Temple, octagonal with 333 elements in 8 types, as a case study for digitization practice. The application of metadata representation and 3D modelling are the two key issues discussed. Both Revit and SketchUp were applied in this research to compare their effectiveness for metadata representation. Due to limitations of the Revit database, the final 3D models were built with SketchUp. The research found that, firstly, cultural heritage databases must convey that while many elements are similar in appearance, they are unique in value; although 3D simulations help the general understanding of architectural heritage, software such as Revit and SketchUp, at this stage, could only be used to model basic visual representations, and is ineffective in documenting additional critical data of individually unique elements. Secondly, when establishing conservation lifecycle information for application in management systems, a full and detailed presentation of the metadata must also be implemented; the existing applications of BIM in managing conservation lifecycles are still insufficient. The research recommends SketchUp as a tool for present modelling needs, and BIM for sharing data between users, but the implementation of metadata representation is of the utmost importance.

  6. [Preparation of simulated craniocerebral models via three dimensional printing technique].

    Science.gov (United States)

    Lan, Q; Chen, A L; Zhang, T; Zhu, Q; Xu, T

    2016-08-09

    Three dimensional (3D) printing technique was used to prepare simulated craniocerebral models, which were applied to preoperative planning and surgical simulation. The image data were collected from a PACS system. Image data of skull bone, brain tissue and tumors, cerebral arteries and aneurysms, and functional regions and related neural tracts of the brain were extracted from thin-slice scans (slice thickness 0.5 mm) of computed tomography (CT), magnetic resonance imaging (MRI, slice thickness 1 mm), computed tomography angiography (CTA), and functional magnetic resonance imaging (fMRI) data, respectively. MIMICS software was applied to reconstruct colored virtual models by identifying and differentiating tissues according to their gray scales. The colored virtual models were then submitted to a 3D printer which produced life-sized craniocerebral models for surgical planning and surgical simulation. 3D printed craniocerebral models allowed neurosurgeons to perform complex procedures in specific clinical cases through detailed surgical planning. They offered great convenience for evaluating the size of the spatial fissure of the sellar region before surgery, which helped to optimize surgical approach planning. These 3D models also provided detailed information about the location of aneurysms and their parent arteries, which helped surgeons to choose appropriate aneurysm clips, as well as perform surgical simulation. The models further gave clear indications of the depth and extent of tumors and their relationship to eloquent cortical areas and adjacent neural tracts, which helped to avoid surgical damage to important neural structures. As a novel and promising technique, the application of 3D printed craniocerebral models could improve surgical planning by converting virtual visualization into real life-sized models. It also contributes to the study of functional anatomy.

  7. Robotic architectures

    CSIR Research Space (South Africa)

    Mtshali, M

    2010-01-01

    Full Text Available In the development of mobile robotic systems, a robotic architecture plays a crucial role in interconnecting all the sub-systems and controlling the system. The design of robotic architectures for mobile autonomous robots is a challenging...

  8. Skin fluorescence model based on the Monte Carlo technique

    Science.gov (United States)

    Churmakov, Dmitry Y.; Meglinski, Igor V.; Piletsky, Sergey A.; Greenhalgh, Douglas A.

    2003-10-01

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores, which follows the packing of collagen fibers, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. The results of simulation suggest that the distribution of auto-fluorescence is significantly suppressed in the NIR spectral region, while the fluorescence of a sensor layer embedded in the epidermis is localized at the adjusted depth. The model is also able to simulate the skin fluorescence spectra.
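The underlying photon-transport idea can be sketched with a heavily simplified Monte Carlo random walk. This sketch ignores the layered skin model, anisotropic scattering and fluorescence re-emission treated in the paper; the optical coefficients are invented.

```python
import numpy as np

# Simplified Monte Carlo photon walk (no layers, isotropic scattering):
# photons take exponential steps, scatter, and are absorbed with
# probability mu_a / (mu_a + mu_s) at each interaction site.
rng = np.random.default_rng(4)
mu_a, mu_s = 0.1, 10.0                     # absorption / scattering [1/mm]
mu_t = mu_a + mu_s
n_photons = 20_000

depth = np.zeros(n_photons)                # depth below the surface [mm]
cos_th = np.ones(n_photons)                # direction cosine along depth axis
alive = np.ones(n_photons, dtype=bool)
absorbed_depth = []                        # absorption (fluorescence) sites

for _ in range(200):
    if not alive.any():
        break
    step = rng.exponential(1.0 / mu_t, n_photons)
    depth[alive] += step[alive] * cos_th[alive]
    alive &= ~(depth < 0)                  # photons leaving the surface escape
    hit = alive & (rng.random(n_photons) < mu_a / mu_t)
    absorbed_depth.extend(depth[hit])      # record where energy is deposited
    alive &= ~hit
    cos_th[alive] = rng.uniform(-1.0, 1.0, alive.sum())  # isotropic rescatter

print(len(absorbed_depth))
```

The histogram of `absorbed_depth` is the kind of spatial deposition profile from which fluorescence excitation maps are built in a full layered model.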

  9. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary.
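The Latin Hypercube Sampling step can be sketched in a few lines. This is a generic LHS implementation, not GRAPE's resampling logic: one point per stratum in each dimension, with the sample count fixed as dimensions grow.

```python
import numpy as np

# Generic Latin Hypercube Sampling sketch (not GRAPE's resampling logic).
rng = np.random.default_rng(5)

def latin_hypercube(n_samples: int, n_dims: int) -> np.ndarray:
    """Return n_samples points in [0, 1)^n_dims, one per stratum per dim."""
    strata = np.arange(n_samples)[:, None]
    u = (strata + rng.random((n_samples, n_dims))) / n_samples
    for d in range(n_dims):
        rng.shuffle(u[:, d])               # decorrelate the dimensions
    return u

samples = latin_hypercube(8, 5)            # 8 filter models span 5 parameters
print(samples.shape)                       # a full grid would need 8**5 points
```

This is why LHS scales where grid stratification does not: eight parallel filter models cover a 5-dimensional parameter space, whereas an equally spaced grid with eight strata per axis would need 8^5 = 32768 models.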

  10. Application of object modeling technique to medical image retrieval system

    International Nuclear Information System (INIS)

    Teshima, Fumiaki; Abe, Takeshi

    1993-01-01

    This report describes the results of discussions on the object-oriented analysis methodology, which is one of the object-oriented paradigms. In particular, we considered application of the object modeling technique (OMT) to the analysis of a medical image retrieval system. The object-oriented methodology places emphasis on the construction of an abstract model from real-world entities. The effectiveness of and future improvements to OMT are discussed from the standpoint of the system's expandability. These discussions have elucidated that the methodology is sufficiently well-organized and practical to be applied to commercial products, provided that it is applied to the appropriate problem domain. (author)

  11. Examining the volume efficiency of the cortical architecture in a multi-processor network model.

    Science.gov (United States)

    Ruppin, E; Schwartz, E L; Yeshurun, Y

    1993-01-01

    The convoluted form of the sheet-like mammalian cortex naturally raises the question whether there is a simple geometrical reason for the prevalence of cortical architecture in the brains of higher vertebrates. Addressing this question, we present a formal analysis of the volume occupied by a massively connected network of processors (neurons) and then consider the pertaining cortical data. Three gross macroscopic features of cortical organization are examined: the segregation of white and gray matter, the circumferential organization of the gray matter around the white matter, and the folded cortical structure. Our results testify to the efficiency of cortical architecture.

  12. A model from the First National Architecture Period in Ankara: Hotel Erzurum

    Directory of Open Access Journals (Sweden)

    Hasan Fevzi Çügen

    2013-01-01

    Full Text Available As Ankara became the capital, the emerging problem of accommodation required a change in the function of some dwellings in the city. In the 1930s, the Hotel Erzurum was one of the buildings involved in this change. Hotel Erzurum was located in the Ulus district, right next to the Hotel Europe which was next door to the city’s wholesale produce market. In this study information is given about the construction and architectural features of Hotel Erzurum, which was built in the neo-classical style and was one of the examples with the salient features of the First National Architecture Period structures.

  13. Presenting Enterprise Architecture Strategies Using Business Model Canvas (Case Study of E-commerce PT Xyz)

    OpenAIRE

    Christini, Chintamy; Rahmad, Basuki

    2015-01-01

    Information Technology (IT) is known as an enabler of business. One field of study that aligns business and IT is enterprise architecture. In this era, one business trend worldwide is electronic commerce (e-commerce). With suitable enterprise architecture strategies, e-commerce can be improved according to the business and IT condition. This paper presents a case study of an Indonesian e-commerce website which is still unknown to internet users and the SWOT strategy a...

  14. Formalizing correspondence rules for automotive architectural views

    NARCIS (Netherlands)

    Dajsuren, Y.; Gerpheide, C.M.; Serebrenik, A.; Wijs, A.J.; Vasilescu, B.N.; Brand, van den M.G.J.; Seinturier, L.; Bures, T.; McGregor, J.D.

    2014-01-01

    Architecture views have long been used in software industry to systematically model complex systems by representing them from the perspective of related stakeholder concerns. However, consensus has not been reached for the architecture views between automotive architecture description languages and

  15. The performance of a new Geant4 Bertini intra-nuclear cascade model in high throughput computing (HTC) cluster architecture

    Energy Technology Data Exchange (ETDEWEB)

    Aatos, Heikkinen; Andi, Hektor; Veikko, Karimaki; Tomas, Linden [Helsinki Univ., Institute of Physics (Finland)

    2003-07-01

    We study the performance of a new Bertini intra-nuclear cascade model implemented in the general detector simulation tool-kit Geant4 with a High Throughput Computing (HTC) cluster architecture. A 60-node Pentium III openMosix cluster is used, with the Mosix kernel performing automatic process load-balancing across several CPUs. The Mosix cluster consists of several computer classes equipped with Windows NT workstations that automatically boot daily and become nodes of the Mosix cluster. The models included in our study are a Bertini intra-nuclear cascade model with excitons, consisting of a pre-equilibrium model, a nucleus explosion model, a fission model and an evaporation model. The speed and accuracy obtained for these models are presented. (authors)

  16. Architecture & Environment

    Science.gov (United States)

    Erickson, Mary; Delahunt, Michael

    2010-01-01

    Most art teachers would agree that architecture is an important form of visual art, but they do not always include it in their curriculums. In this article, the authors share core ideas from "Architecture and Environment," a teaching resource that they developed out of a long-term interest in teaching architecture and their fascination with the…

  17. Level-set techniques for facies identification in reservoir modeling

    Science.gov (United States)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
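The level-set evolution driving the shape update can be illustrated on a toy geometry. The constant velocity below stands in for the misfit-derived velocity of the paper, and the scheme is a bare forward-Euler step of phi_t + v|∇phi| = 0 with central differences and no upwinding, for illustration only.

```python
import numpy as np

# Schematic level-set step (toy geometry; the facies is the region phi < 0,
# and a constant normal speed v stands in for the shape-derivative velocity).
n = 64
dx = 1.0 / (n - 1)
dt = 0.2 * dx
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt((X - 0.5) ** 2 + (Y - 0.5) ** 2) - 0.3  # signed distance to a circle

v = 1.0                                    # normal speed shrinking the region
for _ in range(20):
    gx, gy = np.gradient(phi, dx)
    phi = phi + dt * v * np.sqrt(gx ** 2 + gy ** 2)    # forward-Euler update

area_before = np.pi * 0.3 ** 2             # initial facies area
area_after = float((phi < 0).mean())       # grid-cell fraction inside the facies
print(area_before, area_after)             # the region has shrunk
```

In the actual inversion, v varies over the interface and is chosen from the shape derivative of the misfit so that each step decreases the cost functional; here it merely shrinks the circle.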

  18. Level-set techniques for facies identification in reservoir modeling

    International Nuclear Information System (INIS)

    Iglesias, Marco A; McLaughlin, Dennis

    2011-01-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil–water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301–29; 2004 Inverse Problems 20 259–82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg–Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush–Kuhn–Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies

  19. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
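The core loop, tuning an additional effective diffusivity until the predicted profile matches the experimental one, can be caricatured with an invented profile model and a brute-force search standing in for the FACETS/DAKOTA machinery. Nothing below comes from the paper's transport solver.

```python
import numpy as np

# Toy additive-diffusivity fit (invented profile model, not FACETS/DAKOTA):
# an extra diffusivity d_add is tuned until the predicted profile best
# matches a synthetic "experimental" profile.
x = np.linspace(0.0, 1.0, 50)

def profile(d_total):
    # Stand-in for a transport solver: decay length grows with diffusivity.
    return np.exp(-x / np.sqrt(d_total))

d_model = 0.04                              # diffusivity from the transport model
experiment = profile(0.09)                  # data implies more total transport

# Brute-force search over the additional diffusivity (a stand-in for the
# DAKOTA-style optimization described above).
candidates = np.linspace(0.0, 0.2, 2001)
errors = [float(np.sum((profile(d_model + d) - experiment) ** 2))
          for d in candidates]
d_add = float(candidates[int(np.argmin(errors))])
print(round(d_add, 3))                      # → 0.05
```

The recovered d_add is the "additional diffusivity" of the method: where it is large (here, everywhere, by construction), the candidate transport model alone cannot explain the measured profile.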

  20. Validation of transport models using additive flux minimization technique

    International Nuclear Information System (INIS)

    Pankin, A. Y.; Kruger, S. E.; Groebner, R. J.; Hakim, A.; Kritz, A. H.; Rafiq, T.

    2013-01-01

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile