WorldWideScience

Sample records for optimized architectural approaches

  1. Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    Energy Technology Data Exchange (ETDEWEB)

Arumugam, Kamesh [Old Dominion Univ., Norfolk, VA (United States)]

    2017-05-01

the parallel implementation challenges of such irregular applications on different HPC architectures. In particular, we use supervised learning to predict the computation structure and use it to address the control-flow and memory access irregularities in the parallel implementation of such applications on GPUs, Xeon Phis, and heterogeneous architectures composed of multi-core CPUs with GPUs or Xeon Phis. We use numerical simulation of charged particle beam dynamics as a motivating example throughout the dissertation to present our new approach, though the techniques should be equally applicable to a wide range of irregular applications. The machine learning approach presented here uses predictive analytics and forecasting techniques to adaptively model and track the irregular memory access pattern at each time step of the simulation and anticipate the future memory access pattern. Access pattern forecasts can then be used to formulate optimization decisions during application execution, improving the performance of the application at a future time step based on observations from earlier time steps. In heterogeneous architectures, forecasts can also be used to improve the memory performance and resource utilization of all the processing units to deliver good aggregate performance. We used these optimization techniques and the anticipation strategy to design a cache-aware, memory-efficient parallel algorithm to address the irregularities in the parallel implementation of charged particle beam dynamics simulation on different HPC architectures. Experimental results using a diverse mix of HPC architectures show that our anticipation strategy is effective in maximizing data reuse, ensuring workload balance, minimizing branch and memory divergence, and improving resource utilization.
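As a minimal illustration of the anticipation idea described in this abstract, the sketch below forecasts per-cell access counts from earlier time steps and derives a locality-friendly processing order from the forecast. The weighting scheme, function names, and data are illustrative assumptions, not the dissertation's actual algorithm.

```python
# Hypothetical sketch: forecast the next time step's memory access pattern
# from earlier steps, then order work so predicted hot cells are handled
# together for reuse. All names here are invented for illustration.
import numpy as np

def forecast_access_counts(history, decay=0.5):
    """Exponentially weighted forecast of per-cell access counts
    (most recent time step gets the largest weight)."""
    weights = np.array([decay ** k for k in range(len(history))])[::-1]
    weights /= weights.sum()
    return np.average(np.stack(history), axis=0, weights=weights)

def locality_order(access_forecast):
    """Process cells in decreasing predicted-traffic order."""
    return np.argsort(-access_forecast)

# Usage: counts observed at steps t-2, t-1, t drive the schedule for t+1.
history = [np.array([3, 0, 8, 1]),
           np.array([4, 1, 9, 0]),
           np.array([5, 1, 12, 0])]
print(locality_order(forecast_access_counts(history)))
```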

  2. Architecture Approach in System Development

    Directory of Open Access Journals (Sweden)

    Ladislav Burita

    2017-01-01

The purpose of this paper is to describe a practical solution using an architecture approach in system development. The software application is a system that optimizes the transport service. The first part of the paper defines enterprise architecture, its parts, and frameworks. The paper then explains the NATO Architecture Framework (NAF), a tool for command and control systems development in the military environment. The NAF is used for the architecture design of the system for optimization of the transport service.

  3. Discrete optimization in architecture: architectural & urban layout

    CERN Document Server

    Zawidzki, Machi

    2016-01-01

    This book presents three projects that demonstrate the fundamental problems of architectural design and urban composition – the layout design, evaluation and optimization. Part I describes the functional layout design of a residential building, and an evaluation of the quality of a town square (plaza). The algorithm for the functional layout design is based on backtracking using a constraint satisfaction approach combined with coarse grid discretization. The algorithm for the town square evaluation is based on geometrical properties derived directly from its plan. Part II introduces a crowd-simulation application for the analysis of escape routes on floor plans, and optimization of a floor plan for smooth crowd flow. The algorithms presented employ agent-based modeling and cellular automata.
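The layout algorithm described above pairs backtracking with constraint satisfaction on a coarse grid. The toy sketch below shows that combination for an invented three-room adjacency problem; the rooms, grid size, and constraints are assumptions for illustration, not the book's examples.

```python
# Minimal backtracking/constraint-satisfaction sketch: place rooms on a
# coarse grid so that required adjacencies hold. Illustrative only.
def consistent(assignment, room, cell, must_touch):
    """A placement is consistent if every already-placed room this room
    must touch occupies a 4-neighbour cell (Manhattan distance 1)."""
    for other in must_touch.get(room, []):
        if other in assignment:
            (r1, c1), (r2, c2) = cell, assignment[other]
            if abs(r1 - r2) + abs(c1 - c2) != 1:
                return False
    return True

def backtrack(rooms, cells, must_touch, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(rooms):
        return assignment
    room = rooms[len(assignment)]
    for cell in cells:
        if cell not in assignment.values() and consistent(assignment, room, cell, must_touch):
            result = backtrack(rooms, cells, must_touch, {**assignment, room: cell})
            if result:
                return result
    return None  # dead end: caller tries the next cell

grid = [(r, c) for r in range(2) for c in range(3)]  # coarse 2x3 grid
print(backtrack(["hall", "kitchen", "bath"], grid,
                {"kitchen": ["hall"], "bath": ["hall"]}))
```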

  4. Future city architecture for optimal living

    CERN Document Server

    Pardalos, Panos

    2015-01-01

This book offers a wealth of interdisciplinary approaches to urbanization strategies in architecture centered on growing concerns about the future of cities and their impacts on essential elements of architectural optimization, livability, energy consumption and sustainability. It portrays the urban condition in architectural terms, as well as the living condition in human terms, both of which can be optimized by mathematical modeling as well as mathematical calculation and assessment. Special features include: new research on the construction of future cities and smart cities; and discussions of sustainability and new technologies designed to advance ideas to future city developments. Graduate students and researchers in architecture, engineering, mathematical modeling, and building physics will be engaged by the contributions written by eminent international experts from a variety of disciplines including architecture, engineering, modeling, optimization, and relat...

  5. Optimized Architectural Approaches in Hardware and Software Enabling Very High Performance Shared Storage Systems

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    There are issues encountered in high performance storage systems that normally lead to compromises in architecture. Compute clusters tend to have compute phases followed by an I/O phase that must move data from the entire cluster in one operation. That data may then be shared by a large number of clients creating unpredictable read and write patterns. In some cases the aggregate performance of a server cluster must exceed 100 GB/s to minimize the time required for the I/O cycle thus maximizing compute availability. Accessing the same content from multiple points in a shared file system leads to the classical problems of data "hot spots" on the disk drive side and access collisions on the data connectivity side. The traditional method for increasing apparent bandwidth usually includes data replication which is costly in both storage and management. Scaling a model that includes replicated data presents additional management challenges as capacity and bandwidth expand asymmetrically while the system is scaled. ...

  6. An Architecture for Performance Optimization in a Collaborative Knowledge-Based Approach for  Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Juan Ramon Velasco

    2011-09-01

Over the past few years, Intelligent Spaces (ISs) have received the attention of many Wireless Sensor Network researchers. Recently, several studies have been devoted to identifying their common capacities and to setting up ISs over these networks. However, little attention has been paid to integrating Fuzzy Rule-Based Systems into collaborative Wireless Sensor Networks for the purpose of implementing ISs. This work presents a distributed architecture proposal for collaborative Fuzzy Rule-Based Systems embedded in Wireless Sensor Networks, which has been designed to optimize the implementation of ISs. This architecture includes the following: (a) an optimized design for the inference engine; (b) a visual interface; (c) a module to reduce the redundancy and complexity of the knowledge bases; (d) a module to evaluate the accuracy of the new knowledge base; (e) a module to adapt the format of the rules to the structure used by the inference engine; and (f) a communications protocol. As a real-world application of this architecture and the proposed methodologies, we show an application to the problem of modeling two plagues of the olive tree: prays (olive moth, Prays oleae Bern.) and repilo (caused by the fungus Spilocaea oleagina). The results show that the architecture presented in this paper significantly decreases the consumption of resources (memory, CPU and battery) without a substantial decrease in the accuracy of the inferred values.

  7. Multilayer Perceptron: Architecture Optimization and Training

    Directory of Open Access Journals (Sweden)

    Hassan Ramchoun

    2016-09-01

The multilayer perceptron has a wide range of classification and regression applications in many fields, such as pattern recognition and voice classification problems. However, the choice of architecture has a great impact on the convergence of these networks. In the present paper we introduce a new approach to optimizing the network architecture; to solve the resulting model we use a genetic algorithm, and we train the network with a back-propagation algorithm. The numerical results assess the effectiveness of the theoretical results shown in this paper and the advantages of the new modeling compared to previous models in the literature.
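A toy genetic-algorithm loop over hidden-layer sizes conveys the architecture-search idea; note that the fitness function below is a stand-in (the paper's authors evaluate networks trained with back-propagation), and all parameters are illustrative assumptions.

```python
# Toy GA over MLP hidden-layer sizes. The fitness is a placeholder for
# "train with back-propagation and measure validation accuracy".
import random

def fitness(arch):
    # Stand-in: favour small networks with a middle layer near 16 units.
    return -sum(arch) - 10 * abs(arch[len(arch) // 2] - 16)

def mutate(arch):
    i = random.randrange(len(arch))
    child = list(arch)
    child[i] = max(1, child[i] + random.choice([-4, -2, 2, 4]))
    return child

def crossover(a, b):
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def evolve(pop, generations=50):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]          # elitist selection
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(len(pop) - len(parents))]
    return max(pop, key=fitness)

population = [[random.randrange(4, 64) for _ in range(3)] for _ in range(20)]
print(evolve(population))  # small, middle-heavy architectures win here
```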

  8. Compressed optimization of device architectures

    Energy Technology Data Exchange (ETDEWEB)

Frees, Adam [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics]; Gamble, John King [Microsoft Research, Redmond, WA (United States). Quantum Architectures and Computation Group]; Ward, Daniel Robert [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Center for Computing Research]; Blume-Kohout, Robin J [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Center for Computing Research]; Eriksson, M. A. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics]; Friesen, Mark [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics]; Coppersmith, Susan N. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics]

    2014-09-01

Recent advances in nanotechnology have enabled researchers to control individual quantum mechanical objects with unprecedented accuracy, opening the door for both quantum and extreme-scale conventional computation applications. As these devices become more complex, designing for facility of control becomes a daunting and computationally infeasible task. Here, motivated by ideas from compressed sensing, we introduce a protocol for the Compressed Optimization of Device Architectures (CODA). It leads naturally to a metric for benchmarking and optimizing device designs, as well as an automatic device control protocol that reduces the operational complexity required to achieve a particular output. Because this protocol is both experimentally and computationally efficient, it is readily extensible to large systems. For this paper, we demonstrate both the benchmarking and device control protocol components of CODA through examples of realistic simulations of electrostatic quantum dot devices, which are currently being developed experimentally for quantum computation.

  9. Approaching Environmental Issues in Architecture

    DEFF Research Database (Denmark)

    Petersen, Mads Dines; Knudstrup, Mary-Ann

    2013-01-01

    The research presented here takes its point of departure in the design process with a specific focus on how it is approached when designing energy efficient architecture. This is done through a case-study of a design process in a Danish architectural office. This study shows the importance...

  10. A Topology Optimisation Approach to Learning in Architectural Design

    DEFF Research Database (Denmark)

    Mullins, Michael; Kirkegaard, Poul Henning; Jessen, Rasmus Zederkof

    2005-01-01

    describes an attempt to unify analytic and analogical approaches in an architectural education setting, using topology optimization software. It uses as examples recent student projects where the architectural design process based on a topology optimization approach has been investigated. The paper...

  11. Computer architecture a quantitative approach

    CERN Document Server

    Hennessy, John L

    2019-01-01

    Computer Architecture: A Quantitative Approach, Sixth Edition has been considered essential reading by instructors, students and practitioners of computer design for over 20 years. The sixth edition of this classic textbook is fully revised with the latest developments in processor and system architecture. It now features examples from the RISC-V (RISC Five) instruction set architecture, a modern RISC instruction set developed and designed to be a free and openly adoptable standard. It also includes a new chapter on domain-specific architectures and an updated chapter on warehouse-scale computing that features the first public information on Google's newest WSC. True to its original mission of demystifying computer architecture, this edition continues the longstanding tradition of focusing on areas where the most exciting computing innovation is happening, while always keeping an emphasis on good engineering design.

  12. Topology Optimization - Engineering Contribution to Architectural Design

    Science.gov (United States)

    Tajs-Zielińska, Katarzyna; Bochenek, Bogdan

    2017-10-01

The idea of topology optimization is to find, within a considered design domain, the distribution of material that is optimal in some sense. During the optimization process, material is redistributed and parts that are not necessary from the objective point of view are removed. The result is a solid/void structure for which an objective function is minimized. This paper presents an application of topology optimization to multi-material structures. The design domain defined by the shape of a structure is divided into sub-regions, to which different materials are assigned. During the design process material is relocated, but only within its selected region. The proposed idea has been inspired by architectural designs such as multi-material facades of buildings. The effectiveness of topology optimization is determined by the proper choice of numerical optimization algorithm. This paper utilises a very efficient heuristic method called Cellular Automata. Cellular Automata are a mathematical, discrete idealization of physical systems. Engineering implementation of Cellular Automata requires decomposition of the design domain into a uniform lattice of cells. It is assumed that interaction between cells takes place only within neighbouring cells, governed by simple, local update rules based on heuristics or physical laws. The numerical studies show that this method can be an attractive alternative to traditional gradient-based algorithms. The proposed approach is evaluated on selected numerical examples of multi-material bridge structures, for which various material configurations are examined. The numerical studies demonstrate a significant influence of the location of material sub-regions on the final topologies. The influence of the assumed volume fraction on the final topologies of multi-material structures is also observed and discussed. The results of numerical calculations show that this approach produces different results compared with the classical one.
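To make the local-update idea concrete, here is a minimal Cellular Automata step for a density-based topology optimization, assuming a von Neumann neighbourhood and a sign-based material move; the rule, parameters, and the random stand-in for the finite-element strain field are all illustrative, not the paper's formulation.

```python
# Minimal CA update sketch: each cell adjusts its material density from the
# strain-energy state of its neighbourhood. Illustrative rule only.
import numpy as np

def ca_step(density, strain, move=0.1):
    """Local rule: add material where neighbourhood strain is above the
    field average, remove it where below, clipping densities to [0, 1]."""
    padded = np.pad(strain, 1, mode="edge")
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:] + strain) / 5.0
    new = density + move * np.sign(neigh - neigh.mean())
    return np.clip(new, 0.0, 1.0)

density = np.full((8, 8), 0.5)       # uniform initial material
strain = np.random.rand(8, 8)        # stand-in for an FE strain-energy field
for _ in range(10):
    density = ca_step(density, strain)
print(density.round(1))              # converges toward a solid/void pattern
```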

  13. A novel approach using flexible scheduling and aggregation to optimize demand response in the developing interactive grid market architecture

    International Nuclear Information System (INIS)

    Reihani, Ehsan; Motalleb, Mahdi; Thornton, Matsu; Ghorbani, Reza

    2016-01-01

Highlights: • Designing a DR market to increase renewable resources and decrease air pollution. • Explaining two economic models for a DR market for selling available DR quantities. • Optimally allocating DR quantity to houses under each DR aggregator's control. • Proposing a discomfort cost function for residential DR resources. • Performing a sensitivity analysis on discomfort cost function coefficients. - Abstract: With the increasing presence of intermittent renewable energy generation sources, variable control over loads and energy storage devices on the grid becomes even more important to maintain balance. Increasing renewable energy penetration depends on both technical and economic factors. Distribution system consumers can contribute to grid stability by controlling the power consumed by residential electrical devices such as water heaters and battery storage systems. Coupled with dynamic supply pricing strategies, this yields a comprehensive system for demand response (DR). Proper DR management will allow greater integration of renewable energy sources, partially replacing energy demand currently met by the combustion of fossil fuels. An enticing economic framework providing increased value to consumers compensates them for ceding control of devices placed under a DR aggregator. Much work has already been done to develop more effective ways to implement DR control systems. Utilizing an integrated approach that combines consumer requirements into aggregate pools and provides a dynamic response to market and grid conditions, we have developed a mathematical model that can quantify control parameters for optimum demand response and decide which resources to switch and when. In this model, optimization is achieved as a function of cost savings vs. customer comfort using mathematical market analysis. Two market modeling approaches, the Cournot and SFE, are presented and compared. A quadratic function is used for presenting the cost function of each DRA (Demand

  14. Optimizing Engineering Tools Using Modern Ground Architectures

    Science.gov (United States)

    2017-12-01

Master's thesis: Optimizing Engineering Tools Using Modern Ground Architectures, by Ryan P. McArdle, December 2017. Thesis Advisor: Marc Peters; Co-Advisor: I.M. Ross. ... engineering tools. First, the effectiveness of MathWorks' Parallel Computing Toolbox is assessed when performing somewhat basic computations in

  15. Optimization and mathematical modeling in computer architecture

    CERN Document Server

    Sankaralingam, Karu; Nowatzki, Tony

    2013-01-01

    In this book we give an overview of modeling techniques used to describe computer systems to mathematical optimization tools. We give a brief introduction to various classes of mathematical optimization frameworks with special focus on mixed integer linear programming which provides a good balance between solver time and expressiveness. We present four detailed case studies -- instruction set customization, data center resource management, spatial architecture scheduling, and resource allocation in tiled architectures -- showing how MILP can be used and quantifying by how much it outperforms t
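For readers unfamiliar with the MILP framing the book uses, here is a tiny resource-allocation model in the flavour of its case studies, written with the PuLP library as an illustrative choice (the tasks, tiles, costs, and capacities are invented; the book's formulations are considerably larger).

```python
# Small mixed-integer linear program: assign tasks to tiles at minimum cost,
# one tile per task, respecting tile capacities. Illustrative data only.
import pulp

tasks, tiles = ["t1", "t2", "t3"], ["A", "B"]
cost = {("t1", "A"): 3, ("t1", "B"): 5, ("t2", "A"): 4,
        ("t2", "B"): 2, ("t3", "A"): 6, ("t3", "B"): 4}
cap = {"A": 2, "B": 1}

prob = pulp.LpProblem("tile_allocation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (tasks, tiles), cat="Binary")
prob += pulp.lpSum(cost[t, s] * x[t][s] for t in tasks for s in tiles)
for t in tasks:                       # every task runs on exactly one tile
    prob += pulp.lpSum(x[t][s] for s in tiles) == 1
for s in tiles:                       # tile capacity limits
    prob += pulp.lpSum(x[t][s] for t in tasks) <= cap[s]

prob.solve()
print({t: next(s for s in tiles if x[t][s].value() == 1) for t in tasks})
```

The expressiveness the abstract mentions comes cheaply here: each architectural constraint is one line, and the solver handles the combinatorial search.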

  16. Architectural Optimization of Digital Libraries

    Science.gov (United States)

    Biser, Aileen O.

    1998-01-01

    This work investigates performance and scaling issues relevant to large scale distributed digital libraries. Presently, performance and scaling studies focus on specific implementations of production or prototype digital libraries. Although useful information is gained to aid these designers and other researchers with insights to performance and scaling issues, the broader issues relevant to very large scale distributed libraries are not addressed. Specifically, no current studies look at the extreme or worst case possibilities in digital library implementations. A survey of digital library research issues is presented. Scaling and performance issues are mentioned frequently in the digital library literature but are generally not the focus of much of the current research. In this thesis a model for a Generic Distributed Digital Library (GDDL) and nine cases of typical user activities are defined. This model is used to facilitate some basic analysis of scaling issues. Specifically, the calculation of Internet traffic generated for different configurations of the study parameters and an estimate of the future bandwidth needed for a large scale distributed digital library implementation. This analysis demonstrates the potential impact a future distributed digital library implementation would have on the Internet traffic load and raises questions concerning the architecture decisions being made for future distributed digital library designs.

  17. Electromagnetic Vibration Energy Harvesting Devices Architectures, Design, Modeling and Optimization

    CERN Document Server

    Spreemann, Dirk

    2012-01-01

    Electromagnetic vibration transducers are seen as an effective way of harvesting ambient energy for the supply of sensor monitoring systems. Different electromagnetic coupling architectures have been employed but no comprehensive comparison with respect to their output performance has been carried out up to now. Electromagnetic Vibration Energy Harvesting Devices introduces an optimization approach which is applied to determine optimal dimensions of the components (magnet, coil and back iron). Eight different commonly applied coupling architectures are investigated. The results show that correct dimensions are of great significance for maximizing the efficiency of the energy conversion. A comparison yields the architectures with the best output performance capability which should be preferably employed in applications. A prototype development is used to demonstrate how the optimization calculations can be integrated into the design–flow. Electromagnetic Vibration Energy Harvesting Devices targets the design...

  18. An architectural approach to level design

    CERN Document Server

    Totten, Christopher W

    2014-01-01

Explore Level Design through the Lens of Architectural and Spatial Experience Theory. Written by a game developer and professor trained in architecture, An Architectural Approach to Level Design is one of the first books to integrate architectural and spatial design theory with the field of level design. It explores the principles of level design through the context and history of architecture, providing information useful to both academics and game development professionals. Understand Spatial Design Principles for Game Levels in 2D, 3D, and Multiplayer Applications. The book presents architectura

  19. Ancient Climatic Architectural Design Approach

    Directory of Open Access Journals (Sweden)

    Nasibeh Faghih

    2013-01-01

Ancient climatic architecture worked out a series of appropriate responses for the best compatibility with critical climate conditions, for instance 'earth sheltered houses' and 'courtyard houses'. These could provide human climatic comfort without excessive use of fossil fuel resources. Owing to the stable thermal conditions at depth in the ground, earth sheltered houses are only slightly affected by thermal fluctuations. At depths beyond 6.1 meters, temperature variation over the year is minute, equaling the average annual outside temperature. Moreover, courtyard buildings, another traditional design approach, provided a controlled climatic space based on creating maximum shade in the summer and maximum solar heat absorption in the winter. The courtyard houses served the multiple functions of providing light to the rooms, acting as a heat absorber in the summer and a radiator in the winter, and offering an open space inside for community activities. They were divided into summer and winter zones located to the south and north of the central courtyard, between which residents moved as the seasons changed. Ancient climatic buildings therefore provided better human thermal comfort than the contemporary buildings of recent years, except those with air conditioning

  20. An Evolutionary Optimization Framework for Neural Networks and Neuromorphic Architectures

    Energy Technology Data Exchange (ETDEWEB)

Schuman, Catherine D [ORNL]; Plank, James [University of Tennessee (UT)]; Disney, Adam [University of Tennessee (UT)]; Reynolds, John [University of Tennessee (UT)]

    2016-01-01

    As new neural network and neuromorphic architectures are being developed, new training methods that operate within the constraints of the new architectures are required. Evolutionary optimization (EO) is a convenient training method for new architectures. In this work, we review a spiking neural network architecture and a neuromorphic architecture, and we describe an EO training framework for these architectures. We present the results of this training framework on four classification data sets and compare those results to other neural network and neuromorphic implementations. We also discuss how this EO framework may be extended to other architectures.

  1. Computer Architecture A Quantitative Approach

    CERN Document Server

    Hennessy, John L

    2007-01-01

    The era of seemingly unlimited growth in processor performance is over: single chip architectures can no longer overcome the performance limitations imposed by the power they consume and the heat they generate. Today, Intel and other semiconductor firms are abandoning the single fast processor model in favor of multi-core microprocessors--chips that combine two or more processors in a single package. In the fourth edition of Computer Architecture, the authors focus on this historic shift, increasing their coverage of multiprocessors and exploring the most effective ways of achieving parallelis

  2. Optimal causal inference: estimating stored information and approximating causal architecture.

    Science.gov (United States)

    Still, Susanne; Crutchfield, James P; Ellison, Christopher J

    2010-09-01

    We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding--a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.
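For readers unfamiliar with the rate-distortion framing, a plausible information-bottleneck form of the filtering objective is sketched below; the notation is an assumption for exposition, not quoted from the paper.

```latex
% Sketch of a rate-distortion-style objective for optimal causal filtering:
% R is the model's internal state, X^- the observed past, X^+ the future,
% and \beta trades model complexity against predictive power.
\min_{P(R \mid X^-)} \; \Big[ \, I[X^-; R] \;-\; \beta \, I[R; X^+] \, \Big]
```

In the limit of large beta, where the complexity constraint is relaxed, the optimal representation retains all predictive information in the past, which is consistent with the abstract's claim that filtering then recovers the exact causal-state partition.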

  3. Computer Architecture A Quantitative Approach

    CERN Document Server

    Hennessy, John L

    2011-01-01

The computing world today is in the middle of a revolution: mobile clients and cloud computing have emerged as the dominant paradigms driving programming and hardware innovation today. The Fifth Edition of Computer Architecture focuses on this dramatic shift, exploring the ways in which software and technology in the cloud are accessed by cell phones, tablets, laptops, and other mobile computing devices. Each chapter includes two real-world examples, one mobile and one datacenter, to illustrate this revolutionary change. Updated to cover the mobile computing revolution. Emphasizes the two most im

  4. A Declarative Approach to Architectural Reflection

    DEFF Research Database (Denmark)

    Ingstrup, Mads; Hansen, Klaus Marius

    2005-01-01

Recent research shows runtime architectural reflection is instrumental in, for instance, building adaptive and flexible systems or checking correspondence between design and implementation. Moreover, experience with computational reflection in various branches of computer science shows that the interface through which the meta-information of the running system is accessed, and possibly modified, lies at the heart of designing reflective systems. This paper proposes that such an interface should be like a database: accessed through queries expressed using the concepts with which architecture... which both creates runtime models of specific distributed architectures and allows for evaluation of AQL queries on these models. We illustrate the viability of the approach in two particular applications of such a model: constraint checking relative to an architectural style, and reasoning about certain...

  5. Optimizations of Unstructured Aerodynamics Computations for Many-core Architectures

    KAUST Repository

    Al Farhan, Mohammed Ahmed

    2018-04-13

We investigate several state-of-the-practice shared-memory optimization techniques applied to key routines of an unstructured computational aerodynamics application with irregular memory accesses. We illustrate these techniques on the Intel KNL processor, a representative of the processors in contemporary leading supercomputers, identifying and addressing performance challenges without compromising the floating point numerics of the original code. We employ low- and high-level architecture-specific code optimizations involving thread- and data-level parallelism. Our approach is based upon a multi-level hierarchical distribution of work and data across both the threads and the SIMD units within every hardware core. On a 64-core KNL chip, we achieve nearly a 2.9x speedup of the dominant routines relative to the baseline. These exhibit almost linear strong scalability up to 64 threads, and thereafter some improvement with hyperthreading. At substantially fewer Watts, we achieve up to a 1.7x speedup relative to the performance of 72 threads of a 36-core Haswell CPU and roughly equivalent performance to 112 threads of a 56-core Skylake scalable processor. These optimizations are expected to be of value for many other unstructured mesh PDE-based scientific applications as multi- and many-core architectures evolve.

  6. Enterprise architecture approach to mining companies engineering

    Directory of Open Access Journals (Sweden)

    Ilin’ Igor

    2017-01-01

As the Russian economy is still largely oriented toward commodity production, there are many cities where mining and commodity-oriented enterprises are the backbone of the city economy. These enterprises largely define the quality of life of citizens in such cities, so there are high requirements for the engineering of city-forming enterprises. The paper describes the enterprise architecture approach for management system engineering of mining enterprises. It contains a model of the mining enterprise architecture, an approach to the development and implementation of an integrated management system based on the concept of enterprise architecture, and the structure of the information systems and information technology infrastructure of the mining enterprise.

  7. Advanced Topology Optimization Methods for Conceptual Architectural Design

    DEFF Research Database (Denmark)

    Aage, Niels; Amir, Oded; Clausen, Anders

    2015-01-01

    This paper presents a series of new, advanced topology optimization methods, developed specifically for conceptual architectural design of structures. The proposed computational procedures are implemented as components in the framework of a Grasshopper plugin, providing novel capacities...

  8. Advanced Topology Optimization Methods for Conceptual Architectural Design

    DEFF Research Database (Denmark)

    Aage, Niels; Amir, Oded; Clausen, Anders

    2014-01-01

    This paper presents a series of new, advanced topology optimization methods, developed specifically for conceptual architectural design of structures. The proposed computational procedures are implemented as components in the framework of a Grasshopper plugin, providing novel capacities...

  9. Systemic Approach to Architectural Performance

    Directory of Open Access Journals (Sweden)

    Marie Davidova

    2017-04-01

First-hand experiences in several design projects that were based on media richness and collaboration are described in this article. Although complex design processes are usually considered socio-technical systems, they are deeply involved with natural systems. My collaborative research in the field of performance-oriented design combines digital and physical conceptual sketches, simulations and prototyping. GIGA-mapping is applied to organise the data. The design process uses the most suitable tools for the subtasks at hand, and the use of media is mixed according to particular requirements. These tools include digital and physical GIGA-mapping, parametric computer aided design (CAD), digital simulation of analyses, as well as sampling and 1:1 prototyping. Also discussed in this article are the methodologies used in several design projects to strategize these tools, and the developments and trends in the tools employed. The paper argues that digital tools tend to produce similar results through given pre-sets that often do not correspond to real needs. Thus, there is a significant need for mixed methods, including prototyping, in the creative design process. Media mixing and cooperation across disciplines are unavoidable in a holistic approach to contemporary design, which includes the consideration of diverse biotic and abiotic agents. I argue that physical and digital GIGA-mapping is a crucial tool for coping with this complexity. Furthermore, I propose the integration of physical and digital outputs in one GIGA-map, and the participation and co-design of biotic and abiotic agents in one rich design research space, resulting in an ever-evolving, time-based research-design process.

  10. Architecture and Landscape. Approaches from archaeology

    Directory of Open Access Journals (Sweden)

    Rebeca Blanco-Rotea

    2017-11-01

This work proposes a theoretical and conceptual basis for the study of the fortified landscapes of the Galician-Portuguese border in the Modern Age. From this theoretical framework, a research program was designed to study these landscapes. It proposes an approach to this type of archaeological record from Landscape Archeology and the Archeology of Architecture, introducing the concepts of built space and the Archeology of Built Space.

  11. Modelling Approach In Islamic Architectural Designs

    Directory of Open Access Journals (Sweden)

    Suhaimi Salleh

    2014-06-01

Architectural designs contribute as one of the main factors that should be considered in minimizing negative impacts in the planning and structural development of buildings such as mosques. In this paper, the ergonomics perspective is revisited, focusing on conditional factors involving organisational, psychological, social and population aspects as a whole. The paper highlights the functional and architectural integration with aesthetic elements in the form of decorative and ornamental outlay, as well as their incorporation into building structures such as walls, domes and gates. It further focuses on the mathematical aspects of the architectural designs, such as polar equations and the golden ratio. These designs are modelled into mathematical equations of various forms, while the golden ratio in mosques is verified using two techniques, namely geometric construction and the numerical method. The exemplary designs are taken from the Sabah Bandaraya Mosque in Likas, Kota Kinabalu and the Sarawak State Mosque in Kuching, while the Universiti Malaysia Sabah Mosque is used for the golden ratio. Results show that Islamic architectural buildings and designs have long had mathematical concepts and techniques underlying their foundations; hence, a modelling approach is needed to rejuvenate these Islamic designs.

  12. Battery-Less Electroencephalogram System Architecture Optimization

    Science.gov (United States)

    2016-12-01

Keywords: self-powered, adaptive data acquisition, subthreshold, internet of things. ... desirable, such as for Internet of Things systems. The presented architecture is capable of low-power operation while maintaining a similar signal... the system will need to be harvested from the environment. There are several methods to harvest power from RF, solar, motion, and thermal sources. In this case

  13. Optimizing engineering tools using modern ground architectures

    OpenAIRE

    McArdle, Ryan P.

    2017-01-01

    Approved for public release; distribution is unlimited Over the past decade, a deluge of large and complex datasets (aka big data) has overwhelmed the scientific community. Traditional computing architectures were not capable of processing the data efficiently, or in some cases, could not process the data at all. Industry was forced to reexamine the existing data processing paradigm and develop innovative solutions to address the challenges. This thesis investigates how these modern comput...

  14. HEURISTIC APPROACHES FOR PORTFOLIO OPTIMIZATION

    OpenAIRE

    Manfred Gilli, Evis Kellezi

    2000-01-01

    The paper first compares the use of optimization heuristics to the classical optimization techniques for the selection of optimal portfolios. Second, the heuristic approach is applied to problems other than those in the standard mean-variance framework where the classical optimization fails.

  15. Advanced and secure architectural EHR approaches.

    Science.gov (United States)

    Blobel, Bernd

    2006-01-01

Electronic Health Records (EHRs) provided as a lifelong patient record are advancing towards core applications of distributed and co-operating health information systems and health networks. To meet the challenge of scalable, flexible, portable, secure EHR systems, the underlying EHR architecture must be based on the component paradigm and be model driven, separating platform-independent and platform-specific models. To allow manageable models, real systems must be decomposed and simplified. The resulting modelling approach has to follow the ISO Reference Model of Open Distributed Processing (RM-ODP). The ISO RM-ODP describes any system component from different perspectives. Platform-independent perspectives comprise the enterprise view (business process, policies, scenarios, use cases), the information view (classes and associations) and the computational view (composition and decomposition), whereas platform-specific perspectives concern the engineering view (physical distribution and realisation) and the technology view (implementation details from protocols up to education and training) on system components. These views have to be established for components reflecting aspects of all domains involved in healthcare environments, including administrative, legal, medical, technical, etc. Thus, security-related component models reflecting all the views mentioned have to be established to enable both application and communication security services as an integral part of the system's architecture. Besides decomposition and simplification of systems regarding the different viewpoints on their components, different levels of system granularity can be defined, hiding internals or focusing on properties of basic components to form a more complex structure. The resulting models describe both the structure and behaviour of component-based systems. The described approach has been deployed in different projects defining EHR systems and their underlying architectural principles. In that context

  16. A Formal Approach to Software Architecture

    National Research Council Canada - National Science Library

    Allen, Robert

    1997-01-01

    .... While architectural concepts are often embodied in infrastructure to support specific architectural styles and in the initial conceptualization of a system configuration, the lack of an explicit...

  17. Heterogeneous architecture to process swarm optimization algorithms

    Directory of Open Access Journals (Sweden)

    Maria A. Dávila-Guzmán

    2014-01-01

In recent years, parallel processing has been embedded in personal computers through co-processing units such as graphics processing units, resulting in a heterogeneous platform. This paper presents the implementation of swarm algorithms on this platform to solve several functions from optimization problems, highlighting their inherent parallel processing and distributed control features. In the swarm algorithms, each individual and each problem dimension are parallelized at the granularity of the processing system, which also offers low communication latency between individuals through the embedded processing. To evaluate the potential of swarm algorithms on graphics processing units we have implemented two of them: the particle swarm optimization algorithm and the bacterial foraging optimization algorithm. The algorithms' performance is measured as the acceleration of the NVIDIA GeForce GTX480 heterogeneous platform over a typical sequential processing platform; the results show that the particle swarm algorithm obtained up to a 36.82x speedup and the bacterial foraging algorithm up to 9.26x. Finally, the effect of increasing the population size is evaluated; we show that both the dispersion and the quality of the solutions decrease despite the high acceleration, since the initial distribution of the individuals can converge to a locally optimal solution.
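The per-particle, per-dimension updates that make PSO map so naturally onto GPU threads are easy to see in vectorized form. The sketch below is a generic PSO with conventional parameter choices, not the authors' GPU implementation or settings.

```python
# Vectorized particle swarm optimization: every row of x is a particle and
# every column a dimension, so the update is embarrassingly parallel.
import numpy as np

def pso(f, dim=2, n=64, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))          # positions
    v = np.zeros((n, dim))                    # velocities
    pbest, pbest_val = x.copy(), f(x)         # personal bests
    for _ in range(iters):
        g = pbest[np.argmin(pbest_val)]       # global best
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = f(x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
    return pbest[np.argmin(pbest_val)]

sphere = lambda x: (x ** 2).sum(axis=1)       # classic test function
print(pso(sphere))                            # converges near the origin
```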

  18. Nonlinear Shaping Architecture Designed with Using Evolutionary Structural Optimization Tools

    Science.gov (United States)

    Januszkiewicz, Krystyna; Banachowicz, Marta

    2017-10-01

The paper explores the possibilities of using Evolutionary Structural Optimization (ESO) digital tools in integrated structural and architectural design, in response to current needs geared towards sustainability, combining ecological and economic efficiency. The first part of the paper defines the Evolutionary Structural Optimization tools, which were developed specifically for engineering purposes using finite element analysis as a framework. The development of ESO has led to several incarnations, which are all briefly discussed (Additive ESO, Bi-directional ESO, Extended ESO). The second part presents results of using these tools in structural and architectural design. Actual building projects which involved optimization as a part of the original design process are presented (the Crematorium in Kakamigahara, Gifu, Japan, 2006; SANAA's Learning Centre, EPFL, Lausanne, Switzerland, 2008; among others). The conclusion emphasizes that structural engineering and architectural design mean directing attention to solutions used by Nature, designing works that are optimally shaped and form their own environments. Architectural forms never constitute the optimum shape derived through a form-finding process driven only by structural optimization, but rather embody and integrate a multitude of parameters. It might be assumed that there is a similarity between these processes in nature and the presented design methods. Contemporary digital methods make the simulation of such processes possible, and thus enable us to refer back to the empirical methods of previous generations.

  19. Open architecture design and approach for the Integrated Sensor Architecture (ISA)

    Science.gov (United States)

    Moulton, Christine L.; Krzywicki, Alan T.; Hepp, Jared J.; Harrell, John; Kogut, Michael

    2015-05-01

    Integrated Sensor Architecture (ISA) is designed in response to stovepiped integration approaches. The design, based on the principles of Service Oriented Architectures (SOA) and Open Architectures, addresses the problem of integration, and is not designed for specific sensors or systems. The use of SOA and Open Architecture approaches has led to a flexible, extensible architecture. Using these approaches, and supported with common data formats, open protocol specifications, and Department of Defense Architecture Framework (DoDAF) system architecture documents, an integration-focused architecture has been developed. ISA can help move the Department of Defense (DoD) from costly stovepipe solutions to a more cost-effective plug-and-play design to support interoperability.

  20. Discrete optimization in architecture: extremely modular systems

    CERN Document Server

    Zawidzki, Machi

    2017-01-01

    This book is comprised of two parts, both of which explore modular systems: Pipe-Z (PZ) and Truss-Z (TZ), respectively. It presents several methods of creating PZ and TZ structures subjected to discrete optimization. The algorithms presented employ graph-theoretic and heuristic methods. The underlying idea of both systems is to create free-form structures using the minimal number of types of modular elements. PZ is more conceptual, as it forms single-branch mathematical knots with a single type of module. Conversely, TZ is a skeletal system for creating free-form pedestrian ramps and ramp networks among any number of terminals in space. In physical space, TZ uses two types of modules that are mirror reflections of each other. The optimization criteria discussed include: the minimal number of units, maximal adherence to the given guide paths, etc.

  1. Proposing an Optimal Learning Architecture for the Digital Enterprise.

    Science.gov (United States)

    O'Driscoll, Tony

    2003-01-01

    Discusses the strategic role of learning in information age organizations; analyzes parallels between the application of technology to business and the application of technology to learning; and proposes a learning architecture that aligns with the knowledge-based view of the firm and optimizes the application of technology to achieve proficiency…

  2. Topology optimization approaches

    DEFF Research Database (Denmark)

    Sigmund, Ole; Maute, Kurt

    2013-01-01

    Topology optimization has undergone a tremendous development since its introduction in the seminal paper by Bendsøe and Kikuchi in 1988. By now, the concept is developing in many different directions, including “density”, “level set”, “topological derivative”, “phase field”, “evolutionary...

  3. A Systems Engineering Approach to Architecture Development

    Science.gov (United States)

    Di Pietro, David A.

    2015-01-01

    Architecture development is often conducted prior to system concept design when there is a need to determine the best-value mix of systems that works collectively in specific scenarios and time frames to accomplish a set of mission area objectives. While multiple architecture frameworks exist, they often require use of unique taxonomies and data structures. In contrast, this paper characterizes architecture development using terminology widely understood within the systems engineering community. Using a notional civil space architecture example, it employs a multi-tier framework to describe the enterprise level architecture and illustrates how results of lower tier, mission area architectures integrate into the enterprise architecture. It also presents practices for conducting effective mission area architecture studies, including establishing the trade space, developing functions and metrics, evaluating the ability of potential design solutions to meet the required functions, and expediting study execution through the use of iterative design cycles

  4. Sustainable architecture approach in designing residential ...

    African Journals Online (AJOL)

    Sustainable architecture has been shaped with vernacular materials based on the vernacular architecture according to climatic conditions, saving energy and responding to needs and social and cultural conditions. In cold region architecture, the buildings are constructed as steps on the hills in the direction of sun and ...

  5. Systems approaches to study root architecture dynamics

    Directory of Open Access Journals (Sweden)

Candela Cuesta

    2013-12-01

The plant root system is essential for providing anchorage to the soil, supplying minerals and water, and synthesizing metabolites. It is a dynamic organ modulated by external cues such as environmental signals, water and nutrient availability, salinity and others. Lateral roots are initiated from the primary root post-embryonically, after which they progress through discrete developmental stages which can be independently controlled, providing a high level of plasticity during root system formation. Within this review, the main contributions are presented, from classical forward genetic screens to more recent high-throughput approaches combined with computer model predictions, dissecting how lateral roots, and thereby root system architecture, are established and developed.

  6. Genetic optimization of neural network architecture

    International Nuclear Information System (INIS)

    Harp, S.A.; Samad, T.

    1994-03-01

Neural networks are now a popular technology for a broad variety of application domains, including the electric utility industry. Yet, as the technology continues to gain increasing acceptance, it is also increasingly apparent that the power that neural networks provide is not an unconditional blessing. Considerable care must be exercised during application development if the full benefit of the technology is to be realized. At present, no fully general theory or methodology for neural network design is available, and application development is a trial-and-error process that is time-consuming and expertise-intensive. Each application demands appropriate selections of the network input space, the network structure, and values of learning algorithm parameters, design choices that are closely coupled in ways that largely remain a mystery. This EPRI-funded exploratory research project was initiated to take the key next step in this research program: the validation of the approach on a realistic problem. We focused on the problem of modeling the thermal performance of the TVA Sequoyah nuclear power plant (units 1 and 2)

  7. An ontology-based approach for modelling architectural styles

    OpenAIRE

    Pahl, Claus; Giesecke, Simon; Hasselbring, Wilhelm

    2007-01-01

The conceptual modelling of software architectures is of central importance for the quality of a software system. A rich modelling language is required to integrate the different aspects of architecture modelling, such as architectural styles, structural and behavioural modelling, into a coherent framework. We propose an ontological approach for architectural style modelling based on description logic as an abstract, meta-level modelling instrument. Architect...

  8. What is the optimal architecture for visual information routing?

    Science.gov (United States)

    Wolfrum, Philipp; von der Malsburg, Christoph

    2007-12-01

Analyzing the design of networks for visual information routing is an underconstrained problem due to insufficient anatomical and physiological data. We propose here optimality criteria for the design of routing networks. For a very general architecture, we derive the number of routing layers and the fanout that minimize the required neural circuitry. The optimal fanout l is independent of network size, while the number k of layers scales logarithmically (with a prefactor below 1) with the number n of visual resolution units to be routed independently. The results are found to agree with data from the primate visual system.
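A back-of-envelope reconstruction shows why a size-independent optimal fanout is plausible; the cost model below is an assumption for illustration, not taken from the paper.

```latex
% Assume routing n units through k layers of fanout-l switches, so that
% l^k \ge n gives k = \ln n / \ln l, and total circuitry scales as n l k.
C(l) = n \, l \, \frac{\ln n}{\ln l},
\qquad
\frac{dC}{dl} = n \ln n \cdot \frac{\ln l - 1}{(\ln l)^2} = 0
\;\Rightarrow\; \ln l = 1 \;\Rightarrow\; l = e \approx 2.72 .
```

Under this toy model the minimizing fanout is a constant independent of n, and the layer count k = ln n grows logarithmically, matching the qualitative scaling the abstract reports.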

  9. Optimization of the Brillouin operator on the KNL architecture

    Science.gov (United States)

    Dürr, Stephan

    2018-03-01

Experiences with optimizing the matrix-times-vector application of the Brillouin operator on the Intel KNL processor are reported. Without adjustments to the memory layout, performance figures of 360 Gflop/s in single and 270 Gflop/s in double precision are observed. This is with Nc = 3 colors, Nv = 12 right-hand-sides, Nthr = 256 threads, on lattices of size 32^3 × 64, using exclusively OMP pragmas. Interestingly, the same routine performs quite well on Intel Core i7 architectures, too. Some observations on the much harder Wilson fermion matrix-times-vector optimization problem are added.

  10. A cognitive decision agent architecture for optimal energy management of microgrids

    International Nuclear Information System (INIS)

    Velik, Rosemarie; Nicolay, Pascal

    2014-01-01

Highlights: • We propose an optimization approach for energy management in microgrids. • The optimizer emulates processes involved in human decision making. • Optimization objectives are energy self-consumption and financial gain maximization. • We gain improved optimization results in significantly reduced computation time. - Abstract: Via the integration of renewable energy and storage technologies, buildings have started to change from passive (electricity) consumers to active prosumer microgrids. Along with this development comes a shift from centralized to distributed production and consumption models, as well as discussions about the introduction of variable demand-supply-driven grid electricity prices. Together with upcoming ICT and automation technologies, these developments open up space for a wide range of novel energy management and energy trading possibilities to optimally use available energy resources. However, what is considered an optimal energy management and trading strategy heavily depends on the individual objectives and needs of a microgrid operator. Accordingly, elaborating the most suitable strategy for each particular system configuration and operator need can become quite a complex and time-consuming task, which can massively benefit from computational support. In this article, we introduce a bio-inspired cognitive decision agent architecture for optimized, goal-specific energy management in (interconnected) microgrids, which are additionally connected to the main electricity grid. For evaluating the performance of the architecture, a number of test cases are specified, targeting objectives like local photovoltaic energy consumption maximization and financial gain maximization. Obtained outcomes are compared against a modified simulated annealing optimization approach in terms of objective achievement and computational effort. Results demonstrate that the cognitive decision agent architecture yields improved optimization results in

  11. Modular production line optimization: The exPLORE architecture

    Directory of Open Access Journals (Sweden)

    Spinellis Diomidis D.

    2000-01-01

The general design problem in serial production lines concerns the allocation of resources such as the number of servers, their service rates, and buffers, given production-specific constraints, associated costs, and revenue projections. We describe the design of exPLORE: a modular, object-oriented production line optimization software architecture. An abstract optimization module can be instantiated using a variety of stochastic optimization methods such as simulated annealing and genetic algorithms. Its search space is constrained by a constraint checker, while its search direction is guided by a cost analyser which combines the output of a throughput evaluator with the business model. The throughput evaluator can be instantiated using Markovian or generalised queueing network methods, a decomposition method, or an expansion method algorithm.
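The pluggable-optimizer structure described above can be conveyed with a small simulated-annealing search over buffer allocations; the throughput model below is a stub standing in for a queueing-network evaluation, and all names and numbers are illustrative, not from the exPLORE system.

```python
# Simulated annealing over per-station buffer counts, maximizing a toy
# profit = revenue(throughput) - buffer cost. Illustrative sketch only.
import math, random

def throughput(buffers):
    # Stub evaluator with diminishing returns per buffer slot.
    return sum(math.log1p(b) for b in buffers)

def profit(buffers, revenue_per_unit=10.0, cost_per_buffer=1.5):
    return revenue_per_unit * throughput(buffers) - cost_per_buffer * sum(buffers)

def anneal(n_stations=4, steps=5000, temp=5.0, cooling=0.999):
    current = [1] * n_stations
    best = list(current)
    for _ in range(steps):
        cand = list(current)
        i = random.randrange(n_stations)
        cand[i] = max(0, cand[i] + random.choice([-1, 1]))
        delta = profit(cand) - profit(current)
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = cand               # accept improving or lucky moves
            if profit(current) > profit(best):
                best = list(current)
        temp *= cooling                  # geometric cooling schedule
    return best, profit(best)

print(anneal())
```

Swapping `throughput` for a Markovian or decomposition evaluator, or swapping `anneal` for a genetic algorithm, mirrors the module boundaries the abstract describes.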

  12. One approach to architectural acoustics in education

    Science.gov (United States)

    Jaffe, J. Christopher

    2003-04-01

In the fall of 1997, Dean Alan Balfour of the School of Architecture at the Rensselaer Polytechnic Institute asked me to introduce an undergraduate 14-credit certificate course entitled "Sonics in Architecture." Subsequently, the program was expanded to include a Master's Degree in Building Science. This paper discusses the trials and tribulations of building a scientific program in a liberal arts school. In addition, the problem of acquiring the research funds needed to provide tuition assistance for graduate students in Architectural Acoustics is reviewed. Information on the curriculum developed for both the lecture and laboratory courses is provided. I will also share my concerns regarding the teaching methods currently prevalent in many schools of architecture today, and how building science professionals might assist in addressing these issues.

  13. A Bandwidth-Optimized Multi-Core Architecture for Irregular Applications

    Energy Technology Data Exchange (ETDEWEB)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    2012-05-31

    This paper presents an architecture template for next-generation high performance computing systems specifically targeted to irregular applications. We start our work by considering that future generation interconnection and memory bandwidth full-system numbers are expected to grow by a factor of 10. In order to keep up with such a communication capacity, while still resorting to fine-grained multithreading as the main way to tolerate unpredictable memory access latencies of irregular applications, we show how overall performance scaling can benefit from the multi-core paradigm. At the same time, we also show how such an architecture template must be coupled with specific techniques in order to optimize bandwidth utilization and achieve the maximum scalability. We propose a technique based on memory references aggregation, together with the related hardware implementation, as one of such optimization techniques. We explore the proposed architecture template by focusing on the Cray XMT architecture and, using a dedicated simulation infrastructure, validate the performance of our template with two typical irregular applications. Our experimental results prove the benefits provided by both the multi-core approach and the bandwidth optimization reference aggregation technique.
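The reference-aggregation idea can be illustrated with a few lines: coalesce outstanding fine-grained loads that fall in the same memory block into one network request. The block size and request format below are assumptions for illustration, not the paper's hardware design.

```python
# Illustrative sketch of memory-reference aggregation: group pending word
# addresses by block so each block needs only one network request.
from collections import defaultdict

BLOCK = 64  # bytes per aggregated request (assumed)

def aggregate(addresses):
    """Group pending addresses by block; emit one request per block."""
    pending = defaultdict(list)
    for addr in addresses:
        pending[addr // BLOCK].append(addr)
    return [(block * BLOCK, sorted(offs)) for block, offs in sorted(pending.items())]

addrs = [0x1008, 0x1000, 0x2040, 0x1038, 0x2050]
for base, members in aggregate(addrs):
    print(f"request block 0x{base:x} covering {len(members)} references")
# Here five irregular references collapse into two block requests,
# which is the bandwidth saving the technique targets.
```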

  14. Approaching Technical Issues in Architectural Education

    DEFF Research Database (Denmark)

    Pugnale, Alberto; Parigi, Dario

    2012-01-01

    This paper discusses teaching of technical subjects in architecture, presenting two experimental activities, recently organized at Aalborg University - a two week long workshop and a one day long lecture. From the pedagogical point of view, the activities are strategically placed between conventi...

  15. Space Based Radar-System Architecture Design and Optimization for a Space Based Replacement to AWACS

    National Research Council Canada - National Science Library

    Wickert, Douglas

    1997-01-01

    Through a process of system architecture design, system cost modeling, and system architecture optimization, we assess the feasibility of performing the next generation Airborne Warning and Control System (AWACS...

  16. Design and optimizing factors of PACS network architecture

    International Nuclear Information System (INIS)

    Tao Yonghao; Miao Jingtao

    2001-01-01

    Objective: To explore the design and optimizing factors of picture archiving and communication system (PACS) network architecture. Methods: Measurements and tests of the network bandwidth and transmission rate requirements of different PACS functions and procedures were performed on the PACS of Shanghai First Hospital, in both static and dynamic network traffic situations, utilizing the network monitoring tools built into workstations and provided by Windows NT. Results: There was no obvious difference between switch equipment and hubs when the measurements and tests were implemented in the static situation, except for routing, which slowed the rate markedly. In the dynamic environment, the switch provided higher bandwidth utilization than the hub, and local-scope communication achieved a faster transmission rate than global-scope communication. Conclusion: The primary optimizing factors in PACS network architecture design include a concise network topology and decomposing heavy global traffic into multiple distributed local-scope communications so as to reduce the traffic on the network backbone. The most important issue is to guarantee the essential bandwidth for the medical imaging diagnosis procedure.

  17. Fabrication of microfluidic architectures for optimal flow rate and concentration measurement for lab on chip application

    Science.gov (United States)

    Adam, Tijjani; Hashim, U.

    2017-03-01

    Achieving optimum flow in microchannels for sensing purposes is challenging. In this study, the fluid sample flows are optimized through the design and characterization of novel microfluidic architectures to achieve the optimal flow rate in the microchannels. The biocompatibility of the polydimethylsiloxane (Sylgard 184 silicone elastomer) polymer used to fabricate the device allows it to be implemented as a universal fluidic delivery system for bio-molecule sensing in various biomedical applications. The study uses the following methodological approaches: designing novel microfluidic architectures by integrating the devices on a single 4-inch silicon substrate; fabricating the designed microfluidic devices using a low-cost soft lithography technique; and characterizing and validating the flow throughput of urine samples in the microchannels by generating pressure gradients through the devices' inlets. The characterization of the urine sample flows in the microchannels showed constant flow throughout the devices.

  18. Parametric Approach in Designing Large-Scale Urban Architectural Objects

    Directory of Open Access Journals (Sweden)

    Arne Riekstiņš

    2011-04-01

    Full Text Available When all the disciplines of various science fields converge and develop, new approaches to contemporary architecture arise. The author looks towards approaching digital architecture from a parametric viewpoint, revealing its generative capacity, originating from the fields of aeronautical, naval, automobile and product-design industries. The author also goes explicitly through his design cycle workflow for testing the latest methodologies in architectural design. The design process steps involved: extrapolating valuable statistical data about the site into three-dimensional diagrams, defining certain materiality of what is being produced, ways of presenting structural skin and structure simultaneously, contacting the object with the ground, interior program definition of the building with floors and possible spaces, logic of fabrication, CNC milling of the prototype. The author's developed tool that is reviewed in this article features enormous performative capacity and is applicable to various architectural design scales. Article in English

  19. A relational approach to support software architecture analysis

    NARCIS (Netherlands)

    Feijs, L.M.G.; Krikhaar, R.L.; van Ommering, R.C.

    1998-01-01

    This paper reports on our experience with a relational approach to support the analysis of existing software architectures. The analysis options provide for visualization and view calculation. The approach has been applied for reverse engineering. It is also possible to check concrete designs

  20. Secure ASIC Architecture for Optimized Utilization of a Trusted Supply Chain for Common Architecture A and D Applications

    Science.gov (United States)

    2017-03-01

    Secure ASIC Architecture for Optimized Utilization of a Trusted Supply Chain for Common Architecture A&D Applications Ezra Hall, Ray Eberhard...use applications. Furthermore, a product roadmap must be comprehended as part of this platform, offering A&D programs a solution to their...existing solutions for adoption to occur. Additionally, a well-developed roadmap to future secure SoCs, leveraging the value add of future advanced

  1. Design Optimization of Mixed-Criticality Real-Time Applications on Cost-Constrained Partitioned Architectures

    DEFF Research Database (Denmark)

    Tamas-Selicean, Domitian; Pop, Paul

    2011-01-01

    In this paper we are interested in implementing mixed-criticality hard real-time applications on a given heterogeneous distributed architecture. Applications have different criticality levels, captured by their Safety-Integrity Level (SIL), and are scheduled using static-cyclic scheduling. Mixed-criticality tasks can be integrated onto the same architecture only if there is enough spatial and temporal separation among them. We consider that the separation is provided by partitioning, such that applications run in separate partitions, and each partition is allocated several time slots on a processor. Tasks ... slots on each processor and (iv) the schedule tables, such that all the applications are schedulable and the development costs are minimized. We have proposed a Tabu Search-based approach to solve this optimization problem. The proposed algorithm has been evaluated using several synthetic and real
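
    A sketch of the Tabu Search mechanics the abstract refers to, reduced to one of the decisions (assigning tasks to partitions). The task set, the cost model (mixing criticality levels in a partition forces everything in it to the highest SIL), and the tenure are invented; the paper's actual cost function and move set are richer, and no aspiration criterion is modeled here.

        import random

        random.seed(1)
        # Hypothetical task set: (name, SIL, WCET); not from the paper.
        TASKS = [("t%d" % i, random.choice([1, 2, 3]), random.randint(1, 5))
                 for i in range(12)]
        N_PART = 4

        def cost(assign):
            # Toy development cost: every task in a partition is developed
            # to the highest SIL present there (mixing criticalities is costly).
            total = 0
            for p in range(N_PART):
                members = [TASKS[i] for i, a in enumerate(assign) if a == p]
                if members:
                    top = max(sil for _, sil, _ in members)
                    total += sum(top * wcet for _, _, wcet in members)
            return total

        def tabu_search(iters=300, tenure=7):
            assign = [random.randrange(N_PART) for _ in TASKS]
            best, best_cost = assign[:], cost(assign)
            tabu = {}   # (task, partition) -> iteration until which move is banned
            for it in range(iters):
                moves = [(cost(assign[:i] + [p] + assign[i+1:]), i, p)
                         for i in range(len(TASKS)) for p in range(N_PART)
                         if p != assign[i] and tabu.get((i, p), -1) < it]
                if not moves:
                    break
                c, i, p = min(moves)          # best non-tabu neighbor, even if worse
                tabu[(i, assign[i])] = it + tenure   # forbid the reverse move
                assign[i] = p
                if c < best_cost:
                    best, best_cost = assign[:], c
            return best, best_cost

        print(tabu_search())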

  2. Optimized batteries for cars with dual electrical architecture

    Science.gov (United States)

    Douady, J. P.; Pascon, C.; Dugast, A.; Fossati, G.

    During recent years, the increase in car electrical equipment has led to many problems with traditional starter batteries (such as cranking failure due to flat batteries, battery cycling etc.). The main causes of these problems are the double function of the automotive battery (starter and service functions) and the difficulties in designing batteries well adapted to these two functions. In order to solve these problems a new concept — the dual-concept — has been developed with two separate batteries: one battery is dedicated to the starter function and the other is dedicated to the service function. Only one alternator charges the two batteries with a separation device between the two electrical circuits. The starter battery is located in the engine compartment while the service battery is located at the rear of the car. From the analysis of new requirements, battery designs have been optimized regarding the two types of functions: (i) a small battery with high specific power for the starting function; for this function a flooded battery with lead-calcium alloy grids and thin plates is proposed; (ii) for the service function, modified sealed gas-recombinant batteries with cycling and deep-discharge ability have been developed. The various advantages of the dual-concept are studied in terms of starting reliability, battery weight, and voltage supply. The operating conditions of the system and several dual electrical architectures have also been studied in the laboratory and the car. The feasibility of the concept is proved.

  3. Deep learning architecture for iris recognition based on optimal Gabor filters and deep belief network

    Science.gov (United States)

    He, Fei; Han, Ye; Wang, Han; Ji, Jinchao; Liu, Yuanning; Ma, Zhiqiang

    2017-03-01

    Gabor filters are widely utilized to detect iris texture information in several state-of-the-art iris recognition systems. However, the proper Gabor kernels and the generative pattern of iris Gabor features need to be predetermined in application. Traditional empirical Gabor filters and shallow iris encoding schemes are incapable of dealing with the complex variations in iris imaging, including illumination, aging, deformation, and device variations. Thereby, an adaptive Gabor filter selection strategy and a deep learning architecture are presented. We first employ the particle swarm optimization approach and its binary version to define a set of data-driven Gabor kernels fitting the most informative filtering bands, and then capture complex patterns from the optimal Gabor filtered coefficients with a trained deep belief network. A succession of comparative experiments validates that our optimal Gabor filters may produce more distinctive Gabor coefficients and that our iris deep representations are more robust and stable than traditional iris Gabor codes. Furthermore, the depth and scales of the deep learning architecture are also discussed.
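
    As an illustration of the filter-selection idea, a standard real-valued particle swarm searching the (frequency, bandwidth) plane of a 1-D Gabor kernel to maximize filtered energy on a reference signal. The paper uses a binary PSO and a fitness derived from iris training data; the swarm constants, signal, and bounds below are all invented.

        import math
        import random

        def gabor_energy(freq, sigma, signal):
            """Energy of the signal filtered with a 1-D Gabor (cosine) kernel."""
            half = 8
            kern = [math.exp(-n * n / (2 * sigma * sigma))
                    * math.cos(2 * math.pi * freq * n)
                    for n in range(-half, half + 1)]
            out = 0.0
            for i in range(half, len(signal) - half):
                v = sum(kern[j + half] * signal[i + j]
                        for j in range(-half, half + 1))
                out += v * v
            return out

        random.seed(3)
        signal = [math.sin(2 * math.pi * 0.15 * i) + 0.3 * random.random()
                  for i in range(200)]

        pts = [[random.uniform(0.01, 0.5), random.uniform(0.5, 5.0)]
               for _ in range(15)]
        vel = [[0.0, 0.0] for _ in pts]
        pbest = [p[:] for p in pts]
        pval = [gabor_energy(p[0], p[1], signal) for p in pts]
        gbest = max(zip(pval, pbest))[1][:]
        for _ in range(40):
            for i, p in enumerate(pts):
                for d in range(2):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (0.7 * vel[i][d]                 # inertia
                                 + 1.5 * r1 * (pbest[i][d] - p[d])
                                 + 1.5 * r2 * (gbest[d] - p[d]))
                    p[d] += vel[i][d]
                p[0] = min(max(p[0], 0.01), 0.5)   # keep within bounds
                p[1] = min(max(p[1], 0.5), 5.0)
                val = gabor_energy(p[0], p[1], signal)
                if val > pval[i]:
                    pbest[i], pval[i] = p[:], val
            gbest = max(zip(pval, pbest))[1][:]
        print([round(u, 3) for u in gbest])  # frequency should land near 0.15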

  4. $H_2$ optimal controllers with observer based architecture for continuous-time systems : separation principle

    NARCIS (Netherlands)

    Saberi, A.; Sannuti, P.; Stoorvogel, A.A.

    1994-01-01

    For a general H2 optimal control problem, at first all H2 optimal measurement feedback controllers are characterized and parameterized, and then attention is focused on controllers with observer based architecture. Both full order as well as reduced order observer based H2 optimal controllers are

  5. Low-Level Space Optimization of an AES Implementation for a Bit-Serial Fully Pipelined Architecture

    Science.gov (United States)

    Weber, Raphael; Rettberg, Achim

    A previously developed AES (Advanced Encryption Standard) implementation is optimized and described in this paper. The special architecture for which this implementation is targeted comprises synchronous and systematic bit-serial processing without a central controlling instance. In order to shrink the design in terms of logic utilization we deeply analyzed the architecture and the AES implementation to identify the most costly logic elements. We propose to merge certain parts of the logic to achieve better area efficiency. The approach was integrated into an existing synthesis tool which we used to produce synthesizable VHDL code. For testing purposes, we simulated the generated VHDL code and ran tests on an FPGA board.

  6. Infill architecture: Design approaches for in-between buildings and 'bond' as integrative element

    Directory of Open Access Journals (Sweden)

    Alfirević Đorđe

    2015-01-01

    Full Text Available The aim of the paper is to draw attention to the view that the two key elements in achieving good quality of architectural infill in the immediate, current surroundings are the selection of an optimal creative method of infill architecture and the adequate application of 'the bond' as an integrative element. The success of the achievement and the quality of the architectural infill mainly depend on the assessment of various circumstances, but also on the professionalism, creativity, sensibility and, finally, innovativeness of the architect. In order for the infill procedure to be carried out adequately, it is necessary to assess the quality of the current surroundings into which the object will be integrated, and then to choose the creative approach that will allow the object to establish an optimal dialogue with its surroundings. On a wider scale, both theory and practice differentiate three main creative approaches to infill objects: (a) the mimetic approach (mimesis), (b) the associative approach and (c) the contrasting approach. Which of the stated approaches will be chosen depends primarily on whether the existing physical structure into which the object is being infilled is 'distinct', 'specific' or 'indistinct', but it also depends on the inclination of the designer. 'The bond' is a term which in architecture denotes an element or zone of one object, but in some instances it can refer to the whole object which has been articulated in a specific way, with the aim of resolving the visual conflict that often arises when there is a clash between the existing objects and the newly designed or reconstructed object. This paper provides an in-depth analysis of different types of bonds, such as 'direction as bond', 'cornice as bond', 'structure as bond', 'texture as bond' and 'material as bond', which indicate the complexity and multiple layers of the design process of object interpolation.

  7. Study on Optimization of I and C Architecture for Research Reactors Using Bayesian Networks

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Khaili Ur; Shin, Jinsoo; Heo, Gyunyoung [Kyung Hee Univ., Yongin (Korea, Republic of)

    2013-07-01

    The optimization in terms of redundancy of modules and components in Instrumentation and Control (I and C) architecture is based on cost and availability, assuming regulatory requirements are satisfied. The motive of this study is to find an optimized I and C architecture, either in hybrid formation, fully digital or analog, with respect to system availability and relative cost of architecture. The cost of research reactor I and C systems is prone to affect marketing competitiveness. As a demonstrative example, the reactor protection system of research reactors is selected. Four cases with different architecture formations were developed, with single and double redundancy of bi-stable modules, coincidence processor modules, and safety or protection circuit actuation logic. The architecture configurations are transformed to reliability block diagrams (RBD) based on the logical operation and function of the modules. A Bayesian Network (BN) model is constructed from the RBD to assess availability. A cost estimation was proposed and a reliability cost index RI was suggested.

  8. Study on Optimization of I and C Architecture for Research Reactors Using Bayesian Networks

    International Nuclear Information System (INIS)

    Rahman, Khaili Ur; Shin, Jinsoo; Heo, Gyunyoung

    2013-01-01

    The optimization in terms of redundancy of modules and components in Instrumentation and Control (I and C) architecture is based on cost and availability, assuming regulatory requirements are satisfied. The motive of this study is to find an optimized I and C architecture, either in hybrid formation, fully digital or analog, with respect to system availability and relative cost of architecture. The cost of research reactor I and C systems is prone to affect marketing competitiveness. As a demonstrative example, the reactor protection system of research reactors is selected. Four cases with different architecture formations were developed, with single and double redundancy of bi-stable modules, coincidence processor modules, and safety or protection circuit actuation logic. The architecture configurations are transformed to reliability block diagrams (RBD) based on the logical operation and function of the modules. A Bayesian Network (BN) model is constructed from the RBD to assess availability. A cost estimation was proposed and a reliability cost index RI was suggested.
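
    As a sketch of the kind of computation involved, a small series-parallel reliability block diagram can be evaluated exactly by enumerating component states, which is what exact inference over the corresponding Bayesian network reduces to at this scale. The module availabilities and the single-redundancy topology below are invented for illustration, not taken from the study.

        from itertools import product

        # Invented availabilities: redundant bi-stable modules (1-of-2), one
        # coincidence processor, redundant actuation-logic trains (1-of-2),
        # connected in series.
        A = {"bm1": 0.99, "bm2": 0.99, "cp": 0.995, "al1": 0.98, "al2": 0.98}

        def system_up(s):
            return (s["bm1"] or s["bm2"]) and s["cp"] and (s["al1"] or s["al2"])

        def availability():
            names = list(A)
            total = 0.0
            for bits in product([True, False], repeat=len(names)):
                state = dict(zip(names, bits))
                p = 1.0
                for n in names:
                    p *= A[n] if state[n] else 1.0 - A[n]
                if system_up(state):
                    total += p
            return total

        print(round(availability(), 6))  # exact availability of this diagram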

  9. A Disciplined Architectural Approach to Scaling Data Analysis for Massive, Scientific Data

    Science.gov (United States)

    Crichton, D. J.; Braverman, A. J.; Cinquini, L.; Turmon, M.; Lee, H.; Law, E.

    2014-12-01

    Data collections across remote sensing and ground-based instruments in astronomy, Earth science, and planetary science are outpacing scientists' ability to analyze them. Furthermore, the distribution, structure, and heterogeneity of the measurements themselves pose challenges that limit the scalability of data analysis using traditional approaches. Methods for developing science data processing pipelines, distribution of scientific datasets, and performing analysis will require innovative approaches that integrate cyber-infrastructure, algorithms, and data into more systematic approaches that can more efficiently compute and reduce data, particularly distributed data. This requires the integration of computer science, machine learning, statistics and domain expertise to identify scalable architectures for data analysis. The size of data returned from Earth Science observing satellites and the magnitude of data from climate model output is predicted to grow into the tens of petabytes, challenging current data analysis paradigms. This same kind of growth is present in astronomy and planetary science data. One of the major challenges in data science and related disciplines is defining new approaches to scaling systems and analysis in order to increase scientific productivity and yield. Specific needs include: 1) identification of optimized system architectures for analyzing massive, distributed data sets; 2) algorithms for systematic analysis of massive data sets in distributed environments; and 3) the development of software infrastructures that are capable of performing massive, distributed data analysis across a comprehensive data science framework. NASA/JPL has begun an initiative in data science to address these challenges. Our goal is to evaluate how scientific productivity can be improved through optimized architectural topologies that identify how to deploy and manage the access, distribution, computation, and reduction of massive, distributed data, while

  10. Efficient high-precision matrix algebra on parallel architectures for nonlinear combinatorial optimization

    KAUST Repository

    Gunnels, John; Lee, Jon; Margulies, Susan

    2010-01-01

    We provide a first demonstration of the idea that matrix-based algorithms for nonlinear combinatorial optimization problems can be efficiently implemented. Such algorithms were mainly conceived by theoretical computer scientists for proving efficiency. We are able to demonstrate the practicality of our approach by developing an implementation on a massively parallel architecture, and exploiting scalable and efficient parallel implementations of algorithms for ultra high-precision linear algebra. Additionally, we have delineated and implemented the necessary algorithmic and coding changes required in order to address problems several orders of magnitude larger, dealing with the limits of scalability from memory footprint, computational efficiency, reliability, and interconnect perspectives. © Springer and Mathematical Programming Society 2010.

  11. Efficient high-precision matrix algebra on parallel architectures for nonlinear combinatorial optimization

    KAUST Repository

    Gunnels, John

    2010-06-01

    We provide a first demonstration of the idea that matrix-based algorithms for nonlinear combinatorial optimization problems can be efficiently implemented. Such algorithms were mainly conceived by theoretical computer scientists for proving efficiency. We are able to demonstrate the practicality of our approach by developing an implementation on a massively parallel architecture, and exploiting scalable and efficient parallel implementations of algorithms for ultra high-precision linear algebra. Additionally, we have delineated and implemented the necessary algorithmic and coding changes required in order to address problems several orders of magnitude larger, dealing with the limits of scalability from memory footprint, computational efficiency, reliability, and interconnect perspectives. © Springer and Mathematical Programming Society 2010.
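
    The practicality claim rests on exact, ultra high-precision arithmetic implemented at scale. As a toy illustration of what exact linear algebra means (nothing here reflects the authors' massively parallel implementation), here is Gaussian elimination over Python's rational type, where no rounding error can occur but coefficients grow, which is why the precision work dominates in practice.

        from fractions import Fraction

        def exact_solve(A, b):
            """Gauss-Jordan elimination over the rationals.
            Assumes A is square and nonsingular."""
            n = len(A)
            M = [[Fraction(x) for x in row] + [Fraction(v)]
                 for row, v in zip(A, b)]
            for col in range(n):
                piv = next(r for r in range(col, n) if M[r][col] != 0)
                M[col], M[piv] = M[piv], M[col]        # partial pivot swap
                for r in range(n):
                    if r != col and M[r][col] != 0:
                        f = M[r][col] / M[col][col]
                        M[r] = [a - f * c for a, c in zip(M[r], M[col])]
            return [M[i][n] / M[i][i] for i in range(n)]

        # 2x + y = 3, x + 3y = 5 has the exact solution x = 4/5, y = 7/5
        print(exact_solve([[2, 1], [1, 3]], [3, 5]))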

  12. Comparison of Human Exploration Architecture and Campaign Approaches

    Science.gov (United States)

    Goodliff, Kandyce; Cirillo, William; Mattfeld, Bryan; Stromgren, Chel; Shyface, Hilary

    2015-01-01

    As part of an overall focus on space exploration, National Aeronautics and Space Administration (NASA) continues to evaluate potential approaches for sending humans beyond low Earth orbit (LEO). In addition, various external organizations are studying options for beyond LEO exploration. Recent studies include NASA's Evolvable Mars Campaign and Design Reference Architecture (DRA) 5.0, JPL's Minimal Mars Architecture; the Inspiration Mars mission; the Mars One campaign; and the Global Exploration Roadmap (GER). Each of these potential exploration constructs applies unique methods, architectures, and philosophies for human exploration. It is beneficial to compare potential approaches in order to better understand the range of options available for exploration. Since most of these studies were conducted independently, the approaches, ground rules, and assumptions used to conduct the analysis differ. In addition, the outputs and metrics presented for each construct differ substantially. This paper will describe the results of an effort to compare and contrast the results of these different studies under a common set of metrics. The paper will first present a summary of each of the proposed constructs, including a description of the overall approach and philosophy for exploration. Utilizing a common set of metrics for comparison, the paper will present the results of an evaluation of the potential benefits, critical challenges, and uncertainties associated with each construct. The analysis framework will include a detailed evaluation of key characteristics of each construct. These will include but are not limited to: a description of the technology and capability developments required to enable the construct and the uncertainties associated with these developments; an analysis of significant operational and programmatic risks associated with that construct; and an evaluation of the extent to which exploration is enabled by the construct, including the destinations

  13. Dynamical System Approaches to Combinatorial Optimization

    DEFF Research Database (Denmark)

    Starke, Jens

    2013-01-01

    Several dynamical system approaches to combinatorial optimization problems are described and compared. These include dynamical systems derived from penalty methods; the approach of Hopfield and Tank; self-organizing maps, that is, Kohonen networks; coupled selection equations; and hybrid methods thereof. These can be used as models for many industrial problems like manufacturing planning and optimization of flexible manufacturing systems, which is illustrated for an example in distributed robotic systems. Solutions are obtained in the limit of large times as an asymptotically stable point of the dynamics; the obtained solutions are often not globally optimal but good approximations of it. Dynamical system and neural network approaches are appropriate methods for distributed and parallel processing because of the parallelization.

  14. Information security architecture an integrated approach to security in the organization

    CERN Document Server

    Killmeyer, Jan

    2000-01-01

    An information security architecture is made up of several components. Each component in the architecture focuses on establishing acceptable levels of control. These controls are then applied to the operating environment of an organization. Functionally, information security architecture combines technical, practical, and cost-effective solutions to provide an adequate and appropriate level of security.Information Security Architecture: An Integrated Approach to Security in the Organization details the five key components of an information security architecture. It provides C-level executives

  15. Architecture

    OpenAIRE

    Clear, Nic

    2014-01-01

    When discussing science fiction’s relationship with architecture, the usual practice is to look at the architecture “in” science fiction—in particular, the architecture in SF films (see Kuhn 75-143) since the spaces of literary SF present obvious difficulties as they have to be imagined. In this essay, that relationship will be reversed: I will instead discuss science fiction “in” architecture, mapping out a number of architectural movements and projects that can be viewed explicitly as scien...

  16. A single network adaptive critic (SNAC) architecture for optimal control synthesis for a class of nonlinear systems.

    Science.gov (United States)

    Padhi, Radhakant; Unnikrishnan, Nishant; Wang, Xiaohua; Balakrishnan, S N

    2006-12-01

    Even though dynamic programming offers an optimal control solution in a state feedback form, the method is overwhelmed by computational and storage requirements. Approximate dynamic programming implemented with an Adaptive Critic (AC) neural network structure has evolved as a powerful alternative technique that obviates the need for excessive computations and storage requirements in solving optimal control problems. In this paper, an improvement to the AC architecture, called the "Single Network Adaptive Critic" (SNAC), is presented. This approach is applicable to a wide class of nonlinear systems where the optimal control (stationary) equation can be explicitly expressed in terms of the state and costate variables. The selection of this terminology is guided by the fact that it eliminates the use of one neural network (namely the action network) that is part of a typical dual-network AC setup. As a consequence, the SNAC architecture offers three potential advantages: a simpler architecture, a lower computational load, and elimination of the approximation error associated with the eliminated network. In order to demonstrate these benefits and the control synthesis technique using SNAC, two problems have been solved with the AC and SNAC approaches and their computational performances are compared. One of these problems is a real-life Micro-Electro-Mechanical Systems (MEMS) problem, which demonstrates that the SNAC technique is applicable to complex engineering systems.
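
    As a sketch of the stationarity condition the abstract refers to, in standard discrete-time optimal control notation (assumed here for illustration, not copied from the paper):

        % Control-affine dynamics and quadratic cost (notation assumed)
        x_{k+1} = f(x_k) + g(x_k)\,u_k, \qquad
        J = \sum_{k} \tfrac{1}{2}\left(x_k^\top Q x_k + u_k^\top R u_k\right)
        % Costate recursion and the explicit optimal-control (stationarity) equation
        \lambda_k = Q x_k
          + \left(\frac{\partial x_{k+1}}{\partial x_k}\right)^{\!\top} \lambda_{k+1},
        \qquad
        u_k = -R^{-1}\, g(x_k)^\top \lambda_{k+1}

    Because the optimal control is explicit once the costate is known, a single critic network trained to map the state x_k to the costate \lambda_{k+1} suffices, which is why the separate action network of the dual setup can be eliminated.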

  17. Design of complex architectures using a three dimension approach : the crosswork case

    NARCIS (Netherlands)

    Seguel Pérez, R.E.; Grefen, P.W.P.J.; Eshuis, H.

    2010-01-01

    In this paper, we present a three-dimensional design approach for complex information systems architectures. The key element of this approach is the model transformation cube, which consists of three dimensions along which architecture models can be positioned. Industry architecture frameworks to guide

  18. New approaches to optimization in aerospace conceptual design

    Science.gov (United States)

    Gage, Peter J.

    1995-01-01

    Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
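
    A minimal genetic algorithm of the kind referred to in point (2): binary encoding, truncation selection, one-point crossover, bit-flip mutation. The objective, encoding width, and all rates below are invented stand-ins for a real conceptual-design metric, not the dissertation's setup.

        import random

        def fitness(x):
            # Toy multimodal stand-in for a design objective (maximize)
            return -(x - 3.7) ** 2 + 2.0 * abs(x) ** 0.5

        def decode(bits):
            # 16-bit chromosome mapped to a real design variable in [-10, 10]
            return -10 + 20 * int("".join(map(str, bits)), 2) / (2 ** 16 - 1)

        def ga(pop_size=40, gens=60, pmut=0.02):
            pop = [[random.randint(0, 1) for _ in range(16)]
                   for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=lambda c: fitness(decode(c)), reverse=True)
                parents = pop[: pop_size // 2]             # truncation selection
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, 16)          # one-point crossover
                    children.append([1 - g if random.random() < pmut else g
                                     for g in a[:cut] + b[cut:]])  # mutation
                pop = parents + children
            best = max(pop, key=lambda c: fitness(decode(c)))
            return decode(best), fitness(decode(best))

        random.seed(0)
        print(ga())   # best design variable and its fitness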

  19. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    Science.gov (United States)

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
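
    A compact sketch of what training a linear decoder under the MEE criterion (versus the analytic Wiener solution) can look like: gradient ascent on the quadratic information potential with a Gaussian Parzen kernel. The data, kernel-size rule, and step size are invented and hand-tuned for this toy; this is not the paper's FPGA formulation.

        import numpy as np

        rng = np.random.default_rng(0)
        N, P = 400, 4
        X = rng.standard_normal((N, P))        # inputs (e.g., binned spike counts)
        w_true = np.array([1.0, -2.0, 0.5, 3.0])
        d = X @ w_true + 0.5 * rng.standard_t(df=2, size=N)  # impulsive noise

        w_wiener = np.linalg.solve(X.T @ X, X.T @ d)  # analytic Wiener solution

        def mee_train(X, d, lr=0.5, iters=300):
            """Gradient ascent on the quadratic information potential
            V(e) = (1/N^2) sum_ij exp(-(e_i - e_j)^2 / (2 s^2)),
            equivalent to minimizing Renyi's quadratic error entropy.
            Note MEE is blind to the error mean; a bias correction is
            usually added afterwards."""
            N, P = X.shape
            w = np.zeros(P)
            for _ in range(iters):
                e = d - X @ w
                s = 1.06 * e.std() * N ** -0.2   # Silverman's rule of thumb
                D = e[:, None] - e[None, :]
                K = np.exp(-D ** 2 / (2 * s ** 2))
                dV_de = (-2.0 / (N ** 2 * s ** 2)) * (K * D).sum(axis=1)
                w += lr * (-X.T @ dV_de)         # chain rule: de/dw = -X
            return w

        print("Wiener:", np.round(w_wiener, 2))
        print("MEE   :", np.round(mee_train(X, d), 2))

    The pairwise kernel evaluation over all error samples is the O(N^2) structure that makes MEE expensive, and also what decomposes into the independent blocks the FPGA architecture parallelizes.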

  20. Mars Scenario-Based Visioning: Logistical Optimization of Transportation Architectures

    Science.gov (United States)

    1999-01-01

    The purpose of this conceptual design investigation is to examine transportation forecasts for future human missions to Mars. Scenario-Based Visioning is used to generate possible future demand projections. These scenarios are then coupled with availability, cost, and capacity parameters for indigenously designed Mars Transfer Vehicles (solar electric, nuclear thermal, and chemical propulsion types) and Earth-to-Orbit launch vehicles (current, future, and indigenous) to provide a cost-conscious dual-phase launch manifest to meet such future demand. A simulator named M-SAT (Mars Scenario Analysis Tool) is developed using this method. This simulation is used to examine three specific transportation scenarios to Mars: a limited "flags and footprints" mission, a more ambitious scientific expedition similar to an expanded version of the Design Reference Mission from NASA, and a long-term colonization scenario. Initial results from the simulation indicate that chemical propulsion systems might be the architecture of choice for all three scenarios. With this in mind, "what if" analyses were performed, which indicated that if nuclear production costs were reduced by 30% for the colonization scenario, then the nuclear architecture would have a lower life cycle cost than the chemical one. Results indicate that the most cost-effective solution to the Mars transportation problem is to plan for segmented development; this involves the development of one vehicle at one opportunity and derivatives of that vehicle at subsequent opportunities.

  1. Design health village with the approach of sustainable architecture ...

    African Journals Online (AJOL)

    ... a natural environment and away from the pollution of urban life, traditional medical care, hydrotherapy, sports and ... Keywords: Health; city health; smart; sustainability in architecture; architectural design ...

  2. Optimizing a High Energy Physics (HEP) Toolkit on Heterogeneous Architectures

    CERN Document Server

    Lindal, Yngve Sneen; Jarp, Sverre

    2011-01-01

    A desired trend within high energy physics is to increase particle accelerator luminosities, leading to production of more collision data and higher probabilities of finding interesting physics results. A central data analysis technique used to determine whether results are interesting or not is the maximum likelihood method, and the corresponding evaluation of the negative log-likelihood, which can be computationally expensive. As the amount of data grows, it is important to benefit from the parallelism in modern computers. This, in essence, means exploiting vector registers and all available cores on CPUs, as well as utilizing co-processors such as GPUs. This thesis describes the work done to optimize and parallelize a prototype of a central data analysis tool within the high energy physics community. The work consists of optimizations for multicore processors and GPUs, as well as a mechanism to balance the load between both CPUs and GPUs with the aim to fully exploit the power of modern commodity computers. W...

  3. An Overlay Architecture for Throughput Optimal Multipath Routing

    Science.gov (United States)

    2017-01-14

    maximum throughput. Finally, we propose a threshold-based policy (BP-T) and a heuristic policy (OBP), which dynamically control traffic bifurcations...network stability region is available. Second, given any subset of nodes that are controllable, we also wish to develop an optimal routing policy that...case when tunnels do not overlap. We also develop a heuristic overlay control policy for use on general topologies, and show through simulation that

  4. Modeling, analysis and optimization of network-on-chip communication architectures

    CERN Document Server

    Ogras, Umit Y

    2013-01-01

    Traditionally, design space exploration for Systems-on-Chip (SoCs) has focused on the computational aspects of the problem at hand. However, as the number of components on a single chip and their performance continue to increase, the communication architecture plays a major role in the area, performance and energy consumption of the overall system. As a result, a shift from computation-based to communication-based design becomes mandatory. Towards this end, network-on-chip (NoC) communication architectures have emerged recently as a promising alternative to classical bus and point-to-point communication architectures. This book explores outstanding research problems related to modeling, analysis and optimization of NoC communication architectures. More precisely, we present novel design methodologies, software tools and FPGA prototypes to aid the design of application-specific NoCs.

  5. A Systems Approach to Developing an Affordable Space Ground Transportation Architecture using a Commonality Approach

    Science.gov (United States)

    Garcia, Jerry L.; McCleskey, Carey M.; Bollo, Timothy R.; Rhodes, Russel E.; Robinson, John W.

    2012-01-01

    This paper presents a structured approach for achieving a compatible Ground System (GS) and Flight System (FS) architecture that is affordable, productive and sustainable. This paper is an extension of the paper titled "Approach to an Affordable and Productive Space Transportation System" by McCleskey et al. This paper integrates systems engineering concepts and operationally efficient propulsion system concepts into a structured framework for achieving GS and FS compatibility in the mid-term and long-term time frames. It also presents a functional and quantitative relationship for assessing system compatibility called the Architecture Complexity Index (ACI). This paper: (1) focuses on systems engineering fundamentals as it applies to improving GS and FS compatibility; (2) establishes mid-term and long-term spaceport goals; (3) presents an overview of transitioning a spaceport to an airport model; (4) establishes a framework for defining a ground system architecture; (5) presents the ACI concept; (6) demonstrates the approach by presenting a comparison of different GS architectures; and (7) presents a discussion on the benefits of using this approach with a focus on commonality.

  6. Optimize-and-Dispatch Architecture for Expressive Ad Auctions

    OpenAIRE

    Parkes, David C.; Sandholm, Tuomas

    2005-01-01

    Ad auctions are generating massive amounts of revenue for online search engines such as Google. Yet, the level of expressiveness provided to participants in ad auctions could be significantly enhanced. An advantage of this could be improved competition and thus improved revenue to a seller of the right to advertise to a stream of search queries. In this paper, we outline the kinds of expressiveness that one might expect to be useful for ad auctions and introduce a high-level “optimize-and-...

  7. Optimizations of Unstructured Aerodynamics Computations for Many-core Architectures

    KAUST Repository

    Al Farhan, Mohammed Ahmed; Keyes, David E.

    2018-01-01

    involving thread and data-level parallelism. Our approach is based upon a multi-level hierarchical distribution of work and data across both the threads and the SIMD units within every hardware core. On a 64-core KNL chip, we achieve nearly 2.9x speedup

  8. Selection of an optimal neural network architecture for computer-aided detection of microcalcifications - Comparison of automated optimization techniques

    International Nuclear Information System (INIS)

    Gurcan, Metin N.; Sahiner, Berkman; Chan Heangping; Hadjiiski, Lubomir; Petrick, Nicholas

    2001-01-01

    Many computer-aided diagnosis (CAD) systems use neural networks (NNs) for either detection or classification of abnormalities. Currently, most NNs are 'optimized' by manual search in a very limited parameter space. In this work, we evaluated the use of automated optimization methods for selecting an optimal convolution neural network (CNN) architecture. Three automated methods, the steepest descent (SD), the simulated annealing (SA), and the genetic algorithm (GA), were compared. We used as an example the CNN that classifies true and false microcalcifications detected on digitized mammograms by a prescreening algorithm. Four parameters of the CNN architecture were considered for optimization, the numbers of node groups and the filter kernel sizes in the first and second hidden layers, resulting in a search space of 432 possible architectures. The area A_z under the receiver operating characteristic (ROC) curve was used to design a cost function. The SA experiments were conducted with four different annealing schedules. Three different parent selection methods were compared for the GA experiments. An available data set was split into two groups with approximately equal numbers of samples. By using the two groups alternately for training and testing, two different cost surfaces were evaluated. For the first cost surface, the SD method was trapped in a local minimum 91% (392/432) of the time. The SA using the Boltzmann schedule selected the best architecture after evaluating, on average, 167 architectures. The GA achieved its best performance with linearly scaled roulette-wheel parent selection; however, it evaluated 391 different architectures, on average, to find the best one. The second cost surface contained no local minimum. For this surface, a simple SD algorithm could quickly find the global minimum, but the SA with the very fast reannealing schedule was still the most efficient. The same SA scheme, however, was trapped in a local minimum on the first cost surface.
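
    The search mechanics can be sketched compactly. Below, simulated annealing walks a discrete 4-parameter architecture grid; the grid is a smaller stand-in for the study's 432-point space and the cost function is a synthetic surrogate for 1 - A_z (the real cost requires training a CNN on mammogram data).

        import math
        import random

        GROUPS  = [2, 3, 4, 6]   # node groups in hidden layers 1 and 2
        KERNELS = [3, 5, 7]      # filter kernel sizes in hidden layers 1 and 2

        def cost(arch):
            # Invented smooth-ish surface with mild local structure
            g1, g2, k1, k2 = arch
            return (0.02 * (g1 - 4) ** 2 + 0.03 * (g2 - 3) ** 2
                    + 0.04 * (k1 - 5) ** 2 + 0.02 * (k2 - 3) ** 2
                    + 0.01 * math.sin(g1 * k2))

        def neighbor(arch):
            arch = list(arch)
            i = random.randrange(4)
            arch[i] = random.choice(GROUPS if i < 2 else KERNELS)
            return tuple(arch)

        def simulated_annealing(T=1.0, alpha=0.95, steps=200):
            cur = (random.choice(GROUPS), random.choice(GROUPS),
                   random.choice(KERNELS), random.choice(KERNELS))
            cur_c = cost(cur)
            best, best_c = cur, cur_c
            for _ in range(steps):
                cand = neighbor(cur)
                cand_c = cost(cand)
                # Boltzmann-style acceptance of occasional uphill moves
                if (cand_c < cur_c
                        or random.random() < math.exp((cur_c - cand_c) / T)):
                    cur, cur_c = cand, cand_c
                    if cur_c < best_c:
                        best, best_c = cur, cur_c
                T *= alpha   # geometric cooling schedule
            return best, best_c

        random.seed(0)
        print(simulated_annealing())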

  9. Quantum Resonance Approach to Combinatorial Optimization

    Science.gov (United States)

    Zak, Michail

    1997-01-01

    It is shown that quantum resonance can be used for combinatorial optimization. The advantage of the approach is the independence of the computing time from the dimensionality of the problem. As an example, the solution to a constraint satisfaction problem of exponential complexity is demonstrated.

  10. Optimizing Instruction Scheduling and Register Allocation for Register-File-Connected Clustered VLIW Architectures

    Science.gov (United States)

    Tang, Haijing; Wang, Siye; Zhang, Yanjun

    2013-01-01

    Clustering has become a common trend in very long instruction word (VLIW) architectures to solve the problem of area, energy consumption, and design complexity. Register-file-connected clustered (RFCC) VLIW architecture uses the mechanism of a global register file to accomplish inter-cluster data communications, thus eliminating the performance and energy consumption penalty caused by explicit inter-cluster data move operations in traditional bus-connected clustered (BCC) VLIW architecture. However, the limited number of access ports to the global register file has become an issue which must be well addressed; otherwise the performance and energy consumption would be harmed. In this paper, we present compiler optimization techniques for an RFCC VLIW architecture called Lily, which is designed for encryption systems. These techniques aim at optimizing performance and energy consumption for the Lily architecture through appropriate manipulation of the code generation process to maintain better management of the accesses to the global register file. All the techniques have been implemented and evaluated. The result shows that our techniques can significantly reduce the penalty of performance and energy consumption due to the access port limitation of the global register file. PMID:23970841

  11. Optimizing Instruction Scheduling and Register Allocation for Register-File-Connected Clustered VLIW Architectures

    Directory of Open Access Journals (Sweden)

    Haijing Tang

    2013-01-01

    Full Text Available Clustering has become a common trend in very long instruction word (VLIW) architectures to solve the problem of area, energy consumption, and design complexity. Register-file-connected clustered (RFCC) VLIW architecture uses the mechanism of a global register file to accomplish inter-cluster data communications, thus eliminating the performance and energy consumption penalty caused by explicit inter-cluster data move operations in traditional bus-connected clustered (BCC) VLIW architecture. However, the limited number of access ports to the global register file has become an issue which must be well addressed; otherwise the performance and energy consumption would be harmed. In this paper, we present compiler optimization techniques for an RFCC VLIW architecture called Lily, which is designed for encryption systems. These techniques aim at optimizing performance and energy consumption for the Lily architecture through appropriate manipulation of the code generation process to maintain better management of the accesses to the global register file. All the techniques have been implemented and evaluated. The result shows that our techniques can significantly reduce the penalty of performance and energy consumption due to the access port limitation of the global register file.

  12. The hybrid thermography approach applied to architectural structures

    Science.gov (United States)

    Sfarra, S.; Ambrosini, D.; Paoletti, D.; Nardi, I.; Pasqualoni, G.

    2017-07-01

    This work contains an overview of the infrared thermography (IRT) method and its applications relating to the investigation of architectural structures. In this method, the passive approach is usually used in civil engineering, since it provides a panoramic view of the thermal anomalies to be interpreted, also thanks to the use of photographs focused on the region of interest (ROI). The active approach is more suitable for laboratory or indoor inspections, as well as for objects having a small size. The external stress to be applied is thermal, coming from non-natural apparatus such as lamps or hot/cold air jets. In addition, the latter permits obtaining quantitative information related to defects not detectable by the naked eye. Very recently, the hybrid thermography (HIRT) approach has been introduced to the attention of the scientific panorama. It can be applied when the radiation coming from the sun directly arrives (i.e., possibly without the shadow cast effect) on a surface exposed to the air. A large number of thermograms must be collected and a post-processing analysis is subsequently applied via advanced algorithms. Therefore, an appraisal of the defect depth can be obtained through the calculation of the combined thermal diffusivity of the materials above the defect. The approach is validated herein by working, in a first stage, on a mosaic sample having known defects and, in a second stage, on a church built in L'Aquila (Italy) and covered with a particular masonry structure called apparecchio aquilano. The results obtained appear promising.

  13. Space and place concepts analysis based on semiology approach in residential architecture

    Directory of Open Access Journals (Sweden)

    Mojtaba Parsaee

    2015-12-01

    Full Text Available Space and place are among the fundamental concepts in architecture about which many discussions have been held, focusing on their complexity and importance. This research introduces an approach to better cognition of architectural concepts based on the theory and method of semiology in linguistics. Hence, the research first investigates the concepts of space and place and explains their characteristics in architecture. Then, it reviews semiology theory and explores its concepts and ideas. After obtaining the principles and method of semiology, these are redefined in an architectural system based on an adaptive method. Finally, the research offers a conceptual model, called the semiology approach, which considers the architectural system as a system of signs. The approach can be used to decode the content of meanings and forms and to analyse the mechanism of architecture in order to obtain its meanings and concepts. On this basis, the residential architecture of the traditional city of Bushehr, Iran, was analyzed as a case study and its concepts were extracted. The results of this research demonstrate the effectiveness of this approach in the structure detection and identification of an architectural system. Besides, this approach has the capability to be used in processes of sustainable development and can also be a basis for the deconstruction of architectural texts. The research methods of this study are qualitative, based on comparative and descriptive analyses.

  14. Unifying approach for model transformations in the MOF metamodeling architecture

    NARCIS (Netherlands)

    Ivanov, Ivan; van den Berg, Klaas

    2004-01-01

    In the Meta Object Facility (MOF) metamodeling architecture a number of model transformation scenarios can be identified. It could be expected that a metamodeling architecture will be accompanied by a transformation technology supporting the model transformation scenarios in a uniform way. Despite

  15. Unit 1A: General Approach to the Teaching of Architecture

    DEFF Research Database (Denmark)

    Gammelgaard Nielsen, Anders

    2011-01-01

    An ideal course. Ever since the founding of the Aarhus School of Architecture in 1965 there has been a tradition of lively discussion surrounding the content of the architecture program. The discussion has often been conducted from ideological or normative positions, with the tendency to st...

  16. Service-Oriented Architecture Approach to MAGTF Logistics Support Systems

    Science.gov (United States)

    2013-09-01

    [Front-matter acronym list residue: ... Support System-Marine Corps; IT: Information Technology; KPI: Key Performance Indicators; LCE: Logistics Command Element; ITV: In-transit Visibility; LCM ...] ... building blocks, options, KPIs (key performance indicators), design decisions and the corresponding ... the physical attributes, which is the second attribute ... KPIs that they impact ... Layer 8 (Information Architecture): the business intelligence layer and information architecture safeguards the inclusion

  17. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, which is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve portfolio optimization. This approach successfully caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each return earned compared to the mean-variance approach.
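
    To illustrate the difference between the two criteria, a toy comparison on simulated heavy-tailed returns: the same random long-only portfolios are scored with dispersion measured about the mean versus about the median. All data and the risk-aversion weight are invented; the paper works with actual Bursa Malaysia stock data and a proper optimizer.

        import numpy as np

        rng = np.random.default_rng(1)
        # Simulated heavy-tailed (non-normal) monthly returns for 5 assets
        returns = 0.01 + 0.05 * rng.standard_t(df=3, size=(240, 5))

        def stats(w, center):
            port = returns @ w
            loc = np.mean(port) if center == "mean" else np.median(port)
            risk = np.mean((port - loc) ** 2)  # dispersion about chosen center
            return loc, risk

        for center in ("mean", "median"):
            top = None
            for _ in range(2000):
                w = rng.dirichlet(np.ones(5))  # random long-only portfolio
                loc, risk = stats(w, center)
                score = loc - 2.0 * risk       # toy risk-aversion trade-off
                if top is None or score > top[0]:
                    top = (score, loc, risk)
            print(center, "-> return %.4f, risk %.4f" % (top[1], top[2]))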

  18. Optimizing Vector-Quantization Processor Architecture for Intelligent Query-Search Applications

    Science.gov (United States)

    Xu, Huaiyu; Mita, Yoshio; Shibata, Tadashi

    2002-04-01

    The architecture of a very large scale integration (VLSI) vector-quantization processor (VQP) has been optimized to develop a general-purpose intelligent query-search agent. The agent performs a similarity-based search in a large-volume database. Although similarity-based search processing is computationally very expensive, latency-free searches have become possible due to the highly parallel maximum-likelihood search architecture of the VQP chip. Three architectures of the VQP chip have been studied and their performances are compared. In order to give reasonable searching results according to the different policies, the concept of penalty function has been introduced into the VQP. An E-commerce real-estate agency system has been developed using the VQP chip implemented in a field-programmable gate array (FPGA) and the effectiveness of such an agency system has been demonstrated.

  19. Optimization of neural network architecture for classification of radar jamming FM signals

    Science.gov (United States)

    Soto, Alberto; Mendoza, Ariadna; Flores, Benjamin C.

    2017-05-01

    The purpose of this study is to investigate several artificial Neural Network (NN) architectures in order to design a cognitive radar system capable of optimally distinguishing linear Frequency-Modulated (FM) signals from bandlimited Additive White Gaussian Noise (AWGN). The goal is to create a theoretical framework to determine an optimal NN architecture achieving a Probability of Detection (PD) of 95% or higher and a Probability of False Alarm (PFA) of 1.5% or lower at 5 dB Signal-to-Noise Ratio (SNR). Literature research reveals that frequency-domain power spectral densities characterize a signal more efficiently than their time-domain counterparts. Therefore, the input data is preprocessed by calculating the magnitude square of the Discrete Fourier Transform of the digitally sampled bandlimited AWGN and linear FM signals to populate a matrix containing N samples and M spectra. This matrix is used as input for the NN, and the spectra are divided as follows: 70% for training, 15% for validation, and 15% for testing. The study begins by experimentally deducing the optimal number of hidden neurons (1-40 neurons), then the optimal number of hidden layers (1-5 layers), and lastly, the most efficient learning algorithm. The training algorithms examined are: Resilient Backpropagation, Scaled Conjugate Gradient, Conjugate Gradient with Powell/Beale Restarts, Polak-Ribière Conjugate Gradient, and Variable Learning Rate Backpropagation. We determine that an architecture with ten hidden neurons (or more), one hidden layer, and the Scaled Conjugate Gradient training algorithm constitutes an optimal architecture for our application.
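
    A sketch of the described preprocessing stage: build labeled NN input vectors from the normalized magnitude-squared DFT of noisy snapshots. The sample rate, chirp band, SNR, and snapshot length are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        FS, N = 1000.0, 512        # sample rate (Hz), samples per snapshot

        def chirp(f0, f1):
            # Linear FM sweep from f0 to f1 over the snapshot
            t = np.arange(N) / FS
            return np.cos(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / t[-1] * t ** 2))

        def snapshot(is_fm, snr_db=5.0):
            noise = rng.standard_normal(N)
            if not is_fm:
                return noise
            s = chirp(50.0, 200.0)
            s *= 10 ** (snr_db / 20) * np.std(noise) / np.std(s)
            return s + noise

        def features(x):
            # Magnitude-squared DFT (periodogram), normalized: the NN input
            p = np.abs(np.fft.rfft(x)) ** 2
            return p / p.sum()

        X = np.stack([features(snapshot(i % 2 == 1)) for i in range(200)])
        y = np.array([i % 2 for i in range(200)])   # 0 = AWGN, 1 = linear FM
        print(X.shape, y.mean())                    # e.g. (200, 257) 0.5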

  20. Robust Portfolio Optimization using CAPM Approach

    Directory of Open Access Journals (Sweden)

    mohsen gharakhani

    2013-08-01

    Full Text Available In this paper, a new robust model of the multi-period portfolio problem is developed. One of the key concerns in any asset allocation problem is how to cope with uncertainty about future returns. There are some approaches in the literature for this purpose, including stochastic programming and robust optimization. Applying these techniques to the multi-period portfolio problem may increase the problem size in a way that makes the resulting model intractable. In this paper, a novel approach is proposed to formulate the multi-period portfolio problem as an uncertain linear program, assuming that asset returns follow the single-index factor model. Robust optimization is then used to solve the problem. In order to evaluate the performance of the proposed model, a numerical example is presented using simulated data.

  1. A portable approach for PIC on emerging architectures

    Science.gov (United States)

    Decyk, Viktor

    2016-03-01

    A portable approach for designing Particle-in-Cell (PIC) algorithms on emerging exascale computers is based on the recognition that 3 distinct programming paradigms are needed. They are: low level vector (SIMD) processing, middle level shared memory parallel programming, and high level distributed memory programming. In addition, there is a memory hierarchy associated with each level. Such algorithms can be initially developed using vectorizing compilers, OpenMP, and MPI. This is the approach recommended by Intel for the Phi processor. These algorithms can then be translated and possibly specialized to other programming models and languages, as needed. For example, the vector processing and shared memory programming might be done with CUDA instead of vectorizing compilers and OpenMP, but generally the algorithm itself is not greatly changed. The UCLA PICKSC web site at http://www.idre.ucla.edu/ contains example open source skeleton codes (mini-apps) illustrating each of these three programming models, individually and in combination. Fortran2003 now supports abstract data types, and design patterns can be used to support a variety of implementations within the same code base. Fortran2003 also supports interoperability with C so that implementations in C languages are also easy to use. Finally, main codes can be translated into dynamic environments such as Python, while still taking advantage of high performing compiled languages. Parallel languages are still evolving with interesting developments in co-Array Fortran, UPC, and OpenACC, among others, and these can also be supported within the same software architecture. Work supported by NSF and DOE Grants.
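
    For readers unfamiliar with the PIC loop itself, here is a 1-D electrostatic step (deposit, field solve, push) written with numpy whole-array operations, which correspond to the low-level vector tier; the PICKSC skeleton codes are in Fortran and C, so this Python rendering, its array names, and its normalizations are purely illustrative. A full code would additionally split particles across shared-memory threads and grid domains across MPI ranks.

        import numpy as np

        NX, NP, QM, DT = 64, 10_000, -1.0, 0.1  # cells, particles, q/m, time step
        rng = np.random.default_rng(0)
        x = rng.uniform(0, NX, NP)              # positions (grid units)
        v = rng.standard_normal(NP)             # velocities

        def deposit(x):
            """Linear-weighting charge deposition onto a periodic grid."""
            i = np.floor(x).astype(int)
            w = x - i
            rho = (np.bincount(i % NX, 1 - w, NX)
                   + np.bincount((i + 1) % NX, w, NX))
            return rho / (NP / NX) - 1.0        # neutralizing background

        def push(x, v, E):
            """Field gather (linear interpolation) and leapfrog push."""
            i = np.floor(x).astype(int)
            w = x - i
            Ep = (1 - w) * E[i % NX] + w * E[(i + 1) % NX]
            v = v + QM * Ep * DT
            return (x + v * DT) % NX, v

        rho = deposit(x)
        k = 2 * np.pi * np.fft.rfftfreq(NX)     # spectral Poisson solve:
        kk = k.copy(); kk[0] = 1.0              # k^2 phi_k = rho_k, E = -dphi/dx
        E_k = -1j * np.fft.rfft(rho) / kk
        E_k[0] = 0.0                            # no mean field
        E = np.fft.irfft(E_k, NX)
        x, v = push(x, v, E)
        print(round(float(rho.sum()), 6), round(float(v.std()), 3))

    The gather/scatter indexing in deposit() and push() is exactly the irregular memory access that makes the three-tier decomposition (and its per-tier memory hierarchy) necessary at scale.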

  2. Methodological approach to strategic performance optimization

    OpenAIRE

    Hell, Marko; Vidačić, Stjepan; Garača, Željko

    2009-01-01

    This paper presents a matrix approach to the measurement and optimization of organizational strategic performance. The proposed model is based on the matrix presentation of strategic performance, which follows the theoretical notions of the balanced scorecard (BSC) and strategy map methodologies, initially developed by Kaplan and Norton. Development of a quantitative record of strategic objectives provides an arena for the application of linear programming (LP), which is a mathematical tech...

  3. Time and Power Optimizations in FPGA-Based Architectures for Polyphase Channelizers

    DEFF Research Database (Denmark)

    Awan, Mehmood-Ur-Rehman; Harris, Fred; Koch, Peter

    2012-01-01

    This paper presents the time and power optimization considerations for Field Programmable Gate Array (FPGA) based architectures for a polyphase filter bank channelizer with an embedded square root shaping filter in its polyphase engine. This configuration performs two different re-sampling tasks ... % slice register resources of a Xilinx Virtex-5 FPGA, operating at 400 and 480 MHz, and consuming 1.9 and 2.6 Watts of dynamic power, respectively.
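
    For orientation, a channelizer's job can be stated compactly in a direct (inefficient) reference form: mix each channel to baseband, low-pass filter, and decimate. A polyphase engine computes the same outputs with one pass through the prototype filter plus an M-point transform per output sample. A minimal sketch, with an assumed prototype filter length:

    ```python
    import numpy as np
    from scipy.signal import firwin, lfilter

    def channelize_reference(x, M, taps=96):
        """Direct M-channel channelizer: what a polyphase filter bank computes
        efficiently. Channel k is x mixed down by exp(-j*2*pi*k*n/M), low-pass
        filtered by the prototype, and decimated by M."""
        h = firwin(taps, 1.0 / M)            # prototype low-pass (cutoff ~ fs/2M)
        n = np.arange(len(x))
        channels = []
        for k in range(M):
            mixed = x * np.exp(-2j * np.pi * k * n / M)
            channels.append(lfilter(h, 1.0, mixed)[::M])
        return np.array(channels)
    ```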

  4. Optimization of Strategies and Models Review for Optimal Technologies - Based On Fuzzy Schemes for Green Architecture

    OpenAIRE

    Ghada Elshafei; Abdelazim Negm

    2015-01-01

    Recently, green architecture has become a significant path to a sustainable future. Green building design involves finding the balance between comfortable homebuilding and a sustainable environment. Moreover, new technologies such as artificial intelligence techniques are used to complement current practices in creating greener structures to keep the built environment more sustainable. The most common objectives in green buildings should be designed t...

  5. Design Considerations. An Interior Architectural Design Approach to Interiors

    Science.gov (United States)

    Sawyer, William C.

    1971-01-01

    The State University Construction Fund, utilizing the nation's top professional talents, must design by contract, within fixed budgets and strict time schedules, quality architecture for 32 campuses in New York State. (Author)

  6. A dynamic optimization-based architecture for polygeneration microgrids with tri-generation, renewables, storage systems and electrical vehicles

    International Nuclear Information System (INIS)

    Bracco, Stefano; Delfino, Federico; Pampararo, Fabio; Robba, Michela; Rossi, Mansueto

    2015-01-01

    Highlights: • We describe two national special projects on smart grids. • We developed a dynamic decision model based on an MPC architecture. • We developed an optimization model for microgrids, for a specific case study. - Abstract: An overall architecture, or Energy Management System (EMS), based on a dynamic optimization model to minimize operating costs and CO2 emissions is formalized and applied to the University of Genova Savona Campus test-bed facilities, consisting of a Smart Polygeneration Microgrid (SPM) and a Sustainable Energy Building (SEB) connected to such microgrid. The electric grid is a three-phase low-voltage distribution system connecting many different technologies: three cogeneration micro gas turbines fed by natural gas, a photovoltaic field, three cogeneration Concentrating Solar Power (CSP) systems (equipped with Stirling engines), an absorption chiller equipped with a storage tank, two types of electrical storage based on battery technology (long-term Na-Ni and short-term Li-ion), two electric vehicle charging stations, other electrical devices (inverters and smart metering systems), etc. The EMS can be used both for microgrids approximated as a single bus bar (or one node) and for microgrids in which all buses are taken into account. The optimal operation of the microgrid is based on a central controller that receives forecasts and data from a SCADA system and that can schedule all dispatchable plants in the day ahead or in real time through an approach based on Model Predictive Control (MPC). The architecture is tested and applied to the case study of the Savona Campus.
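
    A drastically simplified, hypothetical version of the day-ahead scheduling step can be written as a linear program: buy grid energy and operate one battery to serve a known load at minimum cost. Horizon, tariff, load profile, and battery data are all invented for illustration; the paper's EMS handles many more devices plus CO2 terms.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    T, cap, pmax, eta, soc0 = 24, 100.0, 25.0, 0.95, 50.0    # hours, kWh, kW, -, kWh
    load = 50 + 20 * np.sin(np.linspace(0, 2 * np.pi, T, endpoint=False))      # kW
    price = np.where((np.arange(T) >= 8) & (np.arange(T) <= 20), 0.15, 0.08)   # EUR/kWh

    # Decision vector z = [grid g(t), charge c(t), discharge d(t)], each of length T.
    cost = np.concatenate([price, np.zeros(T), np.zeros(T)])

    # Hourly power balance: g + d - c = load.
    A_eq = np.hstack([np.eye(T), -np.eye(T), np.eye(T)])

    # State of charge stays in [0, cap]: soc0 + cumsum(eta*c - d/eta) within bounds.
    L = np.tril(np.ones((T, T)))
    A_soc = np.hstack([np.zeros((T, T)), eta * L, -L / eta])
    A_ub = np.vstack([A_soc, -A_soc])
    b_ub = np.concatenate([(cap - soc0) * np.ones(T), soc0 * np.ones(T)])

    bounds = [(0, None)] * T + [(0, pmax)] * (2 * T)
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=load, bounds=bounds)
    g, c, d = np.split(res.x, 3)
    print("daily energy cost:", res.fun)
    ```

    In a receding-horizon MPC loop, this LP would simply be re-solved at each step as updated forecasts arrive.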

  7. A lightweight approach for designing enterprise architectures using BPMN : an application in hospitals

    NARCIS (Netherlands)

    Barros, O.; Seguel Pérez, R.E.; Quezada, A.; Dijkman, R.; Hofstetter, J.; Koehler, J.

    2011-01-01

    An Enterprise Architecture (EA) comprises different models at different levels of abstraction. Since existing EA design approaches, e.g. MDA, use UML for modeling, the design of the architecture becomes complex and time consuming. In this paper, we present an integrated and lightweight design

  8. Global optimization driven by genetic algorithms for disruption predictors based on APODIS architecture

    Energy Technology Data Exchange (ETDEWEB)

    Rattá, G.A., E-mail: giuseppe.ratta@ciemat.es [Laboratorio Nacional de Fusión, CIEMAT, Madrid (Spain); Vega, J. [Laboratorio Nacional de Fusión, CIEMAT, Madrid (Spain); Murari, A. [Consorzio RFX, Associazione EURATOM/ENEA per la Fusione, Padua (Italy); Dormido-Canto, S. [Dpto. de Informática y Automática, Universidad Nacional de Educación a Distancia, Madrid (Spain); Moreno, R. [Laboratorio Nacional de Fusión, CIEMAT, Madrid (Spain)

    2016-11-15

    Highlights: • A global optimization method based on genetic algorithms was developed. • It allowed improving the prediction of disruptions using the APODIS architecture. • It also provides the potential opportunity to develop a spectrum of future predictors using different training datasets. • Future analysis of how their structures reassemble and evolve in each test may help to improve the development of disruption predictors for ITER. - Abstract: Since 2010, the APODIS architecture has proven its accuracy in predicting disruptions in the JET tokamak. Nevertheless, it has shown margin for improvement, a fact made indisputable by the enhanced performance achieved in later upgrades. In this article, a complete optimization driven by Genetic Algorithms (GA) is applied to it, aiming to consider all possible combinations of signals, signal features, number of models, their characteristics, and internal parameters. This global optimization targets the creation of the best possible system with a reduced amount of required training data. The results leave no doubt about the reliability of the global optimization method, whose figures outperform those of previous versions: 91.77% of predictions (89.24% with an anticipation higher than 10 ms) with 3.55% false alarms. Beyond its effectiveness, it also provides the potential opportunity to develop a spectrum of future predictors using different training datasets.
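
    The GA machinery itself is generic; a minimal real-coded sketch (tournament selection, uniform crossover, mutation, elitism) against a black-box fitness is shown below. Population size, rates, and encoding are illustrative assumptions, not the APODIS optimizer's actual settings.

    ```python
    import numpy as np

    def ga(fitness, dim, pop=40, gens=100, pmut=0.1, lo=0.0, hi=1.0, seed=0):
        rng = np.random.default_rng(seed)
        P = rng.uniform(lo, hi, (pop, dim))
        for _ in range(gens):
            f = np.array([fitness(p) for p in P])
            a, b = rng.integers(0, pop, (2, pop))
            parents = P[np.where(f[a] > f[b], a, b)]          # binary tournament
            mask = rng.random((pop, dim)) < 0.5
            children = np.where(mask, parents,
                                np.roll(parents, 1, axis=0))  # uniform crossover
            m = rng.random((pop, dim)) < pmut
            children[m] = rng.uniform(lo, hi, m.sum())        # random-reset mutation
            children[0] = P[f.argmax()]                       # elitism
            P = children
        f = np.array([fitness(p) for p in P])
        return P[f.argmax()], f.max()

    # e.g. best, score = ga(lambda p: -np.sum((p - 0.3) ** 2), dim=8)
    ```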

  10. Software representation methodology for agile application development: An architectural approach

    Directory of Open Access Journals (Sweden)

    Alejandro Paolo Daza Corredor

    2016-06-01

    Full Text Available The generation of Web applications involves the execution of repetitive tasks: determining information structures, generating different types of components, and finally carrying out deployment and tuning. In many applications of this type, the components generated coincide from application to application. Current trends in software engineering such as MDE, MDA, and MDD aim to automate the generation of applications by structuring a model and applying transformations to obtain the application. This paper proposes an architectural foundation that facilitates the generation of these applications, relying on model-driven architecture without ignoring the existence and relevance of the existing trends mentioned above.

  11. Backpropagation architecture optimization and an application in nuclear power plant diagnostics

    International Nuclear Information System (INIS)

    Basu, A.; Bartlett, E.B.

    1993-01-01

    This paper presents a Dynamic Node Architecture (DNA) scheme to optimize the architecture of backpropagation Artificial Neural Networks (ANNs). This network scheme is used to develop an ANN-based diagnostic adviser capable of identifying the operating status of a nuclear power plant. Specifically, a "root" network is trained to diagnose whether the plant is in a normal operating condition or not. In the event of an abnormal condition, another "classifier" network is trained to recognize the particular transient taking place. These networks are trained using plant instrumentation data gathered during simulations of the various transients and normal operating conditions at the Iowa Electric Light and Power Company's Duane Arnold Energy Center (DAEC) operator training simulator.
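
    The DNA idea of letting the architecture grow until it fits the data can be caricatured with a simple outer loop: add hidden nodes while validation error keeps dropping. The real scheme adjusts nodes during training; this sklearn-based sketch with synthetic stand-in data only conveys the selection logic.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 12))                 # stand-in for plant sensor channels
    y = (X[:, :3].sum(axis=1) > 0).astype(int)     # stand-in normal/abnormal label
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

    best_err, best_n = np.inf, None
    for n in range(1, 21):                         # grow the hidden layer node by node
        net = MLPClassifier(hidden_layer_sizes=(n,), max_iter=2000, random_state=0)
        net.fit(X_tr, y_tr)
        err = 1.0 - net.score(X_val, y_val)
        if err < best_err - 1e-3:                  # keep growing only while it helps
            best_err, best_n = err, n

    print("selected hidden nodes:", best_n, "validation error:", best_err)
    ```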

  13. Architecture Design Approaches and Issues in Cross Layer Systems

    DEFF Research Database (Denmark)

    Cattoni, Andrea Fabio; Sørensen, Troels Bundgaard; Mogensen, Preben

    2012-01-01

    the traditional protocol stack design methodology. However, Cross Layer also carries a risk due to possibly unexpected and undesired effects. In this chapter we want to provide architecture designers with a set of tools and recommendations synthesized from an analysis of the state of the art, but enriched...

  14. Lifelong Learning in Architectural Design Studio: The Learning Contract Approach

    Science.gov (United States)

    Hassanpour, B.; Che-Ani, A. I.; Usman, I. M. S.; Johar, S.; Tawil, N. M.

    2015-01-01

    Avant-garde educational systems are striving to find lifelong learning methods. Different fields and majors have tested a variety of proposed models and found varying difficulties and strengths. Architecture is one of the most critical areas of education because of its special characteristics, such as learning by doing and complicated evaluation…

  15. The Integration of Interior Architecture Education with Digital Design Approaches

    Science.gov (United States)

    Yazicioglu, Deniz Ayse

    2011-01-01

    It is inevitable that as a result of progress in technology and the changes in the ways with which design is conceived, interior architecture schools should be updated according to these requirements and that new educational processes should be tried out. It is for this reason that the scope and aim of this study have been determined as being the…

  16. Quantitative Architectural Analysis: A New Approach to Cortical Mapping

    Science.gov (United States)

    Schleicher, Axel; Morosan, Patricia; Amunts, Katrin; Zilles, Karl

    2009-01-01

    Results from functional imaging studies are often still interpreted using the classical architectonic brain maps of Brodmann and his successors. One obvious weakness in traditional, architectural mapping is the subjective nature of localizing borders between cortical areas by means of a purely visual, microscopical examination of histological…

  17. An Architectural Style for Optimizing System Qualities in Adaptive Embedded Systems using Multi-Objective Optimization

    NARCIS (Netherlands)

    de Roo, Arjan; Sözer, Hasan; Aksit, Mehmet

    Customers of today's complex embedded systems demand the optimization of multiple system qualities under varying operational conditions. To be able to influence the system qualities, the system must have parameters that can be adapted. Constraints may be defined on the value of these parameters.

  18. Integrating Environmental and Information Systems Management: An Enterprise Architecture Approach

    Science.gov (United States)

    Noran, Ovidiu

    Environmental responsibility is fast becoming an important aspect of strategic management as the reality of climate change settles in and relevant regulations are expected to tighten significantly in the near future. Many businesses react to this challenge by implementing environmental reporting and management systems. However, the environmental initiative is often not properly integrated in the overall business strategy and its information system (IS) and as a result the management does not have timely access to (appropriately aggregated) environmental information. This chapter argues for the benefit of integrating the environmental management (EM) project into the ongoing enterprise architecture (EA) initiative present in all successful companies. This is done by demonstrating how a reference architecture framework and a meta-methodology using EA artefacts can be used to co-design the EM system, the organisation and its IS in order to achieve a much needed synergy.

  19. Hybrid Cloud Computing Architecture Optimization by Total Cost of Ownership Criterion

    Directory of Open Access Journals (Sweden)

    Elena Valeryevna Makarenko

    2014-12-01

    Full Text Available Achieving the goals of information security is a key factor in the decision to outsource information technology and, in particular, in the decision to migrate organizational data, applications, and other resources to an infrastructure based on cloud computing. And the key issue in selecting an optimal architecture, and in the subsequent migration of business applications and data to the organization's cloud information environment, is the total cost of ownership of the IT infrastructure. This paper focuses on solving the problem of minimizing the total cost of ownership of a hybrid cloud.
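
    As a back-of-the-envelope illustration of the TCO criterion (not the paper's model), one can compare discounted lifetime costs of candidate architectures and keep the cheapest; every figure below is hypothetical.

    ```python
    def tco_on_prem(capex, opex_per_year, years, discount=0.10):
        """Up-front hardware plus discounted yearly operating cost."""
        return capex + sum(opex_per_year / (1 + discount) ** t
                           for t in range(1, years + 1))

    def tco_cloud(monthly_fee, egress_per_year, years, discount=0.10):
        """Discounted subscription and data-transfer cost, no capex."""
        yearly = 12 * monthly_fee + egress_per_year
        return sum(yearly / (1 + discount) ** t for t in range(1, years + 1))

    options = {
        "on-premises": tco_on_prem(250_000, 60_000, years=5),
        "public cloud": tco_cloud(9_000, 12_000, years=5),
    }
    print(min(options, key=options.get), options)
    ```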

  20. Optimizing root system architecture in biofuel crops for sustainable energy production and soil carbon sequestration.

    Science.gov (United States)

    To, Jennifer Pc; Zhu, Jinming; Benfey, Philip N; Elich, Tedd

    2010-09-08

    Root system architecture (RSA) describes the dynamic spatial configuration of different types and ages of roots in a plant, which allows adaptation to different environments. Modifications in RSA enhance agronomic traits in crops and have been implicated in soil organic carbon content. Together, these fundamental properties of RSA contribute to the net carbon balance and overall sustainability of biofuels. In this article, we will review recent data supporting carbon sequestration by biofuel crops, highlight current progress in studying RSA, and discuss future opportunities for optimizing RSA for biofuel production and soil carbon sequestration.

  1. A perturbed martingale approach to global optimization

    Energy Technology Data Exchange (ETDEWEB)

    Sarkar, Saikat [Computational Mechanics Lab, Department of Civil Engineering, Indian Institute of Science, Bangalore 560012 (India); Roy, Debasish, E-mail: royd@civil.iisc.ernet.in [Computational Mechanics Lab, Department of Civil Engineering, Indian Institute of Science, Bangalore 560012 (India); Vasu, Ram Mohan [Department of Instrumentation and Applied Physics, Indian Institute of Science, Bangalore 560012 (India)

    2014-08-01

    A new global stochastic search, guided mainly through derivative-free directional information computable from the sample statistical moments of the design variables within a Monte Carlo setup, is proposed. The search is aided by imparting to the directional update term additional layers of random perturbation referred to as 'coalescence' and 'scrambling'. A selection step, constituting yet another avenue of random perturbation, completes the global search. The direction-driven nature of the search is manifest in the local extremization and coalescence components, which are posed as martingale problems that yield gain-like update terms upon discretization. As anticipated, and numerically demonstrated to a limited extent on the problem of parameter recovery given the chaotic response histories of a couple of nonlinear oscillators, the proposed method appears to offer a more rational, more accurate and faster alternative to most available evolutionary schemes, most prominently particle swarm optimization. - Highlights: • Evolutionary global optimization is posed as a perturbed martingale problem. • The resulting search via additive updates is a generalization over Gateaux derivatives. • Additional layers of random perturbation help avoid trapping at local extrema. • The approach ensures efficient design space exploration and high accuracy. • The method is numerically assessed via parameter recovery of chaotic oscillators.

  2. Outage optimization - the US experience and approach

    International Nuclear Information System (INIS)

    LaPlatney, J.

    2007-01-01

    Sustainable development of nuclear energy depends heavily on excellent performance of the existing fleet, which in turn depends heavily on the performance of planned outages. Some reactor fleets, for example in Finland and Germany, have demonstrated sustained good outage performance from the start of commercial operation. Others, such as the US fleet, have improved performance over time. The principles behind a successful outage optimization process are: duration is not the sole measure of outage success; outage work must be performed safely; scope selection must focus on improving plant material condition to improve reliability; all approved outage work must be completed; work must be done cost-effectively; post-outage plant reliability is a key measure of outage success; and outage lessons learned must be effectively implemented to achieve continuous improvement. This approach has proven its superiority over simple outage shortening and has yielded good results in the US fleet over the past 15 years.

  3. Planning intensive care unit design using computer simulation modeling: optimizing integration of clinical, operational, and architectural requirements.

    Science.gov (United States)

    O'Hara, Susan

    2014-01-01

    Nurses have increasingly been regarded as critical members of the planning team as architects recognize their knowledge and value. But the nurses' role as knowledge experts can be expanded to leading efforts to integrate clinical, operational, and architectural expertise through simulation modeling. Simulation modeling allows for the optimal merging of multifactorial data to understand the current state of the intensive care unit and predict future states. Nurses can champion the simulation modeling process and reap the benefits of a cost-effective way to test new designs, processes, staffing models, and future programming trends prior to implementation. Simulation modeling is an evidence-based planning approach, a standard for integrating the sciences with real client data, offering solutions for improving patient care.

  4. From Smart-Eco Building to High-Performance Architecture: Optimization of Energy Consumption in Architecture of Developing Countries

    Science.gov (United States)

    Mahdavinejad, M.; Bitaab, N.

    2017-08-01

    The search for high-performance architecture and visions of future architecture have resulted in attempts to achieve energy-efficient architecture and planning in various respects. Recent trends aimed at shaping the future legacy of architecture are based on the idea of innovative technologies for resource-efficient buildings, performative design, bio-inspired technologies, etc., while there are meaningful differences between the architecture of developed and developing countries. The significance of the issue becomes clear when emerging cities are found to pursue Dubaization and other related booming development doctrines. This paper analyzes how successful developing countries have been in achieving the goals and objectives of smart-eco buildings. Emerging cities of West Asia are selected as the case studies of the paper. The results show that the concepts of high-performance architecture and smart-eco buildings differ in developing countries in comparison with developed countries. The paper identifies five essential issues for improving the future architecture of developing countries: 1- Integrated Strategies for Energy Efficiency; 2- Contextual Solutions; 3- Embedded and Initial Energy Assessment; 4- Staff and Occupancy Wellbeing; 5- Life-Cycle Monitoring.

  5. An Enhanced System Architecture for Optimized Demand Side Management in Smart Grid

    Directory of Open Access Journals (Sweden)

    Anzar Mahmood

    2016-04-01

    Full Text Available Demand Side Management (DSM) through optimization of home energy consumption in the smart grid environment is now one of the well-known research areas. Appliance scheduling has been done through many different algorithms to reduce peak load and, consequently, the Peak to Average Ratio (PAR). This paper presents a Comprehensive Home Energy Management Architecture (CHEMA) with integration of multiple appliance scheduling options and enhanced load categorization in a smart grid environment. The CHEMA model consists of six layers and has been modeled in Simulink with embedded MATLAB code. A single knapsack optimization technique is used for scheduling, and four different cases of cost reduction are modeled at the second layer of CHEMA. Fault identification and electricity theft control have also been added in CHEMA. Furthermore, carbon footprint calculations have been incorporated in order to make users aware of environmental concerns. Simulation results prove the effectiveness of the proposed model.
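
    The scheduling core is a knapsack decision: which appliances may run inside a peak window without exceeding a power cap, maximizing user benefit. A small 0/1 knapsack sketch with invented power ratings and comfort values follows (the paper's actual formulation and data may differ).

    ```python
    def knapsack(power, value, cap):
        """0/1 knapsack by dynamic programming; returns best value and chosen items."""
        n = len(power)
        best = [[0] * (cap + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for c in range(cap + 1):
                best[i][c] = best[i - 1][c]
                if power[i - 1] <= c:
                    best[i][c] = max(best[i][c],
                                     best[i - 1][c - power[i - 1]] + value[i - 1])
        chosen, c = [], cap
        for i in range(n, 0, -1):              # backtrack to recover the schedule
            if best[i][c] != best[i - 1][c]:
                chosen.append(i - 1)
                c -= power[i - 1]
        return best[n][cap], chosen[::-1]

    # appliances' peak-window power draw (x100 W) and comfort value, both invented
    print(knapsack([12, 7, 11, 8, 9], [24, 13, 23, 15, 16], cap=26))
    ```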

  6. Approaches Regarding Business Logic Modeling in Service Oriented Architecture

    Directory of Open Access Journals (Sweden)

    Alexandra Maria Ioana FLOREA

    2011-01-01

    Full Text Available As part of Service Oriented Computing (SOC), Service Oriented Architecture (SOA) is a technology that has been developing for almost a decade, and during this time many studies, papers and surveys referring to the advantages of projects using it have been published. In this article we discuss some ways of using SOA in the business environment, as a result of the need to reengineer internal business processes with the scope of moving towards providing and using standardized services and achieving enterprise interoperability.

  7. High-resolution microwave diagnostics of architectural components by particle swarm optimization

    Science.gov (United States)

    Genovesi, Simone; Salerno, Emanuele; Monorchio, Agostino; Manara, Giuliano

    2010-05-01

    We present a very simple monostatic setup for coherent multifrequency microwave measurements, and an optimization procedure to reconstruct high-resolution permittivity profiles of layered objects from complex reflection coefficients. This system is capable of precisely locating internal inhomogeneities in dielectric bodies, and can be applied to on-site diagnosis of architectural components. While limiting the imaging possibilities to 1D permittivity profiles, the monostatic geometry has an important advantage over multistatic tomographic systems, since these are normally confined to laboratories, and on-site applications are difficult to devise. The sensor is a transmitting-receiving microwave antenna, and the complex reflection coefficients are measured at a number of discrete frequencies over the system passband by using a general-purpose vector network analyzer. A dedicated instrument could also be designed, thus realizing an inexpensive, easy-to-handle system. The profile reconstruction algorithm is based on the optimization of an objective functional that includes a data-fit term and a regularization term. The first consists of the norm of the complex vector difference between the measured data and the data computed by a forward solver from the current estimate of the profile function. The regularization term enforces a piecewise smooth model for the solution, based on two 1D interacting Markov random fields: the intensity field, which models the continuous permittivity values, and the binary line field, which accounts for the possible presence of discontinuities in the profile. The data-fit and the regularization terms are balanced through a tunable regularization coefficient. By virtue of this prior model, the final result is robust against noise, and overcomes the usual limitations in spatial resolution induced by the wavelengths of the probing radiations. Indeed, the accuracy in the location of the discontinuities is only limited by the system noise and
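
    The optimizer driving the reconstruction is standard particle swarm optimization; a compact sketch is below. The paper's objective couples a forward electromagnetic solver with the two-field MRF regularizer, so a simple stand-in objective is used here purely for illustration.

    ```python
    import numpy as np

    def pso(objective, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-1.0, hi=1.0):
        rng = np.random.default_rng(0)
        x = rng.uniform(lo, hi, (n, dim))
        v = np.zeros((n, dim))
        pbest = x.copy()
        pval = np.array([objective(p) for p in x])
        g = pbest[pval.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # inertia + pulls
            x = np.clip(x + v, lo, hi)
            f = np.array([objective(p) for p in x])
            better = f < pval
            pbest[better], pval[better] = x[better], f[better]
            g = pbest[pval.argmin()].copy()
        return g, pval.min()

    # stand-in objective: fit a layered-profile vector to fake "measurements"
    target = np.array([0.3, -0.2, 0.7, 0.1])
    best, err = pso(lambda p: np.sum((p - target) ** 2), dim=4)
    ```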

  8. A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.

    Science.gov (United States)

    Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.

    1997-03-01

    There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.

  9. Integrated Nationwide Electronic Health Records system: Semi-distributed architecture approach.

    Science.gov (United States)

    Fragidis, Leonidas L; Chatzoglou, Prodromos D; Aggelidis, Vassilios P

    2016-11-14

    The integration of heterogeneous electronic health record systems by building an interoperable nationwide electronic health record system provides indisputable benefits in health care, like superior health information quality, medical error prevention and cost saving. This paper proposes a semi-distributed system architecture approach for an integrated national electronic health record system, incorporating the advantages of the two dominant approaches, the centralized architecture and the distributed architecture. The high-level design of the main elements of the proposed architecture is provided, along with diagrams of execution and operation and the data synchronization architecture of the proposed solution. The proposed approach effectively handles issues related to redundancy, consistency, security, privacy, availability, load balancing, maintainability, complexity and interoperability of citizens' health data. The proposed semi-distributed architecture offers a robust interoperability framework without requiring healthcare providers to change their local EHR systems. It is a pragmatic approach that takes into account the characteristics of the Greek national healthcare system along with the national public administration data communication network infrastructure, in order to achieve EHR integration at acceptable implementation cost.

  10. The Enactive Approach to Architectural Experience: A Neurophysiological Perspective on Embodiment, Motivation, and Affordances.

    Science.gov (United States)

    Jelić, Andrea; Tieri, Gaetano; De Matteis, Federico; Babiloni, Fabio; Vecchiato, Giovanni

    2016-01-01

    Over the last few years, the efforts to reveal through neuroscientific lens the relations between the mind, body, and built environment have set a promising direction of using neuroscience for architecture. However, little has been achieved thus far in developing a systematic account that could be employed for interpreting current results and providing a consistent framework for subsequent scientific experimentation. In this context, the enactive perspective is proposed as a guide to studying architectural experience for two key reasons. Firstly, the enactive approach is specifically selected for its capacity to account for the profound connectedness of the organism and the world in an active and dynamic relationship, which is primarily shaped by the features of the body. Thus, particular emphasis is placed on the issues of embodiment and motivational factors as underlying constituents of the body-architecture interactions. Moreover, enactive understanding of the relational coupling between body schema and affordances of architectural spaces singles out the two-way bodily communication between architecture and its inhabitants, which can be also explored in immersive virtual reality settings. Secondly, enactivism has a strong foothold in phenomenological thinking that corresponds to the existing phenomenological discourse in architectural theory and qualitative design approaches. In this way, the enactive approach acknowledges the available common ground between neuroscience and architecture and thus allows a more accurate definition of investigative goals. Accordingly, the outlined model of the architectural subject in enactive terms (that is, a model of a human being as an embodied, enactive, and situated agent) is proposed as a basis of neuroscientific and phenomenological interpretation of architectural experience.

  12. How to ensure sustainable interoperability in heterogeneous distributed systems through architectural approach.

    Science.gov (United States)

    Pape-Haugaard, Louise; Frank, Lars

    2011-01-01

    A major obstacle to ensuring ubiquitous information is the utilization of heterogeneous systems in eHealth. The objective of this paper is to illustrate how an architecture for distributed eHealth databases can be designed without losing the characteristic features of traditional sustainable databases. The approach is firstly to explain traditional architecture in central and homogeneous distributed database computing, followed by a possible approach to using an architectural framework to obtain sustainability across disparate systems, i.e. heterogeneous databases, concluded with a discussion. It is seen that, through a method of using relaxed ACID properties on a service-oriented architecture, it is possible to achieve data consistency, which is essential when ensuring sustainable interoperability.

  13. LPI Optimization Framework for Target Tracking in Radar Network Architectures Using Information-Theoretic Criteria

    Directory of Open Access Journals (Sweden)

    Chenguang Shi

    2014-01-01

    Full Text Available Widely distributed radar network architectures can provide significant performance improvement for target detection and localization. For a fixed radar network, the achievable target detection performance may go beyond a predetermined threshold with full transmitted power allocation, which is extremely vulnerable in modern electronic warfare. In this paper, we study the problem of low probability of intercept (LPI) design for radar networks and propose two novel LPI optimization schemes based on information-theoretic criteria. For a predefined threshold of target detection, the Schleher intercept factor is minimized by optimizing transmission power allocation among netted radars in the network. Due to the lack of an analytical closed-form expression for receiver operating characteristics (ROC), we employ two information-theoretic criteria, namely Bhattacharyya distance and J-divergence, as the metrics for target detection performance. The resulting nonconvex and nonlinear LPI optimization problems associated with different information-theoretic criteria are cast under a unified framework, and a nonlinear-programming-based genetic algorithm (NPGA) is used to tackle the optimization problems in the framework. Numerical simulations demonstrate that our proposed LPI strategies are effective in enhancing the LPI performance for the radar network.
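
    Both detection metrics have standard closed forms for Gaussian hypotheses; a small sketch (with assumed means and covariances) computes each.

    ```python
    import numpy as np

    def bhattacharyya(m0, S0, m1, S1):
        """Bhattacharyya distance between N(m0, S0) and N(m1, S1)."""
        S = 0.5 * (S0 + S1)
        dm = (m1 - m0).reshape(-1, 1)
        term1 = 0.125 * float(dm.T @ np.linalg.solve(S, dm))
        term2 = 0.5 * np.log(np.linalg.det(S) /
                             np.sqrt(np.linalg.det(S0) * np.linalg.det(S1)))
        return term1 + term2

    def j_divergence(m0, S0, m1, S1):
        """Symmetrized Kullback-Leibler (J) divergence between two Gaussians."""
        S0i, S1i = np.linalg.inv(S0), np.linalg.inv(S1)
        dm = (m1 - m0).reshape(-1, 1)
        return 0.5 * float(np.trace(S1i @ S0 + S0i @ S1) - 2 * len(m0)
                           + dm.T @ (S0i + S1i) @ dm)

    m0, m1 = np.zeros(2), np.array([1.0, 0.5])     # assumed hypothesis means
    S0, S1 = np.eye(2), np.diag([1.5, 0.8])        # assumed covariances
    print(bhattacharyya(m0, S0, m1, S1), j_divergence(m0, S0, m1, S1))
    ```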

  14. Contingent self-definition and amorphous regions: a dynamic approach to place brand architecture

    OpenAIRE

    Dinnie, Keith

    2017-01-01

    This article explores the concept of contingent self-definition, whereby place brands employ flexible self-definitional approaches in constructing their place brand architecture. Adopting a view of regions as social constructs, the article builds on and extends previous work on place brand architecture by identifying the underlying factors that drive contingent self-definition decisions. Based on an empirical study of professionals tasked with managing region brands in the Netherlands, eleven...

  15. Managing the Evolution of an Enterprise Architecture using a MAS-Product-Line Approach

    Science.gov (United States)

    Pena, Joaquin; Hinchey, Michael G.; Resinas, manuel; Sterritt, Roy; Rash, James L.

    2006-01-01

    We view an evolutionary system as being a software product line. The core architecture is the unchanging part of the system, and each version of the system may be viewed as a product from the product line. Each "product" may be described as the core architecture with some agent-based additions. The result is a multiagent system software product line. We describe such a software product line-based approach using the MaCMAS agent-oriented methodology. The approach scales to enterprise architectures, as a multiagent system is an appropriate means of representing a changing enterprise architecture and the interaction between components in it.

  16. A Cooperative Coevolution Approach to Automate Pattern-based Software Architectural Synthesis

    NARCIS (Netherlands)

    Xu, Y.R.; Liang, P.

    2014-01-01

    To reuse successful experience in software architecture design, architects use architectural patterns as reusable architectural knowledge for architectural synthesis. However, it has been observed that the resulting architecture does not always conform to the initial architectural patterns employed.

  17. An Integrated Modeling Approach to Evaluate and Optimize Data Center Sustainability, Dependability and Cost

    Directory of Open Access Journals (Sweden)

    Gustavo Callou

    2014-01-01

    Full Text Available Data centers have evolved dramatically in recent years, due to the advent of social networking services, e-commerce and cloud computing. The conflicting requirements are the high availability levels demanded versus the low sustainability impact and cost values. Approaches that evaluate and optimize these requirements are essential to support designers of data center architectures. Our work proposes an integrated approach to estimate and optimize these issues with the support of the developed environment, Mercury. Mercury is a tool for dependability, performance and energy flow evaluation. The tool supports reliability block diagrams (RBD), stochastic Petri nets (SPNs), continuous-time Markov chains (CTMCs) and energy flow (EFM) models. The EFM verifies the energy flow on data center architectures, taking into account the energy efficiency and power capacity that each device can provide (for power systems) or extract (for cooling components). The EFM also estimates the sustainability impact and cost issues of data center architectures. Additionally, a methodology is considered to support the modeling, evaluation and optimization processes. Two case studies are presented to illustrate the adopted methodology on data center power systems.
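
    The RBD part of such an evaluation reduces to composing component availabilities; a tiny sketch with hypothetical availability figures:

    ```python
    def series(*avail):
        """All blocks are required: availabilities multiply."""
        p = 1.0
        for a in avail:
            p *= a
        return p

    def parallel(*avail):
        """Any one block suffices: complement of all blocks failing together."""
        q = 1.0
        for a in avail:
            q *= (1.0 - a)
        return 1.0 - q

    # e.g. two redundant UPS feeds in parallel, in series with a PDU (made-up numbers)
    A = series(parallel(0.999, 0.999), 0.9995)
    print(f"power-path availability: {A:.6f}")
    ```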

  18. Optimizing the Betts-Miller-Janjic cumulus parameterization with Intel Many Integrated Core (MIC) architecture

    Science.gov (United States)

    Huang, Melin; Huang, Bormin; Huang, Allen H.-L.

    2015-10-01

    The schemes of cumulus parameterization are responsible for the sub-grid-scale effects of convective and/or shallow clouds, and are intended to represent vertical fluxes due to unresolved updrafts and downdrafts and compensating motion outside the clouds. Some schemes additionally provide cloud and precipitation field tendencies in the convective column, and momentum tendencies due to convective transport of momentum. The schemes all provide the convective component of surface rainfall. Betts-Miller-Janjic (BMJ) is one scheme that fulfills such purposes in the Weather Research and Forecasting (WRF) model. The National Centers for Environmental Prediction (NCEP) has tried to optimize the BMJ scheme for operational application. As there are no interactions among horizontal grid points, this scheme is very suitable for parallel computation. The Intel Xeon Phi Many Integrated Core (MIC) architecture, with its efficient parallelization and vectorization essentials, allows us to optimize the BMJ scheme. Compared to the original code running on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, the MIC-based optimization of this scheme running on a Xeon Phi 7120P coprocessor improves performance by 2.4x and 17.0x, respectively.

  19. Hardware Genetic Algorithm Optimization by Critical Path Analysis using a Custom VLSI Architecture

    Directory of Open Access Journals (Sweden)

    Farouk Smith

    2015-07-01

    Full Text Available This paper proposes a Virtual Field-Programmable Gate Array (V-FPGA) architecture that allows direct access to its configuration bits to facilitate hardware evolution, thereby allowing any combinational or sequential digital circuit to be realized. Using the V-FPGA, this paper investigates two possible ways of making evolutionary hardware systems more scalable: by optimizing the system's genetic algorithm (GA), and by decomposing the solution circuit into smaller, evolvable sub-circuits. GA optimization is done by omitting a canonical GA's crossover operator (i.e., by using a 1+λ algorithm), applying evolution constraints, and optimizing the fitness function. A noteworthy contribution of this research is the in-depth analysis of the phenotypes' critical paths (CPs). Through analyzing the CPs, it has been shown that a great amount of insight can be gained into a phenotype's fitness. We found that as the number of columns in the Cartesian Genetic Programming array increases, the likelihood of an external output being placed in the column decreases. Furthermore, the number of used LEs per column also decreases substantially with each added column. Finally, we demonstrated the evolution of a state-decomposed control circuit. It was shown that the evolution of each state's sub-circuit was possible, suggesting that modular evolution can be a successful tool when dealing with scalability.

  20. Tai Chi Chuan Optimizes the Functional Organization of the Intrinsic Human Brain Architecture in Older Adults

    Directory of Open Access Journals (Sweden)

    Gao-Xia Wei

    2014-04-01

    Full Text Available Whether Tai Chi Chuan (TCC) can influence the intrinsic functional architecture of the human brain remains unclear. To examine TCC-associated changes in functional connectomes, resting-state functional magnetic resonance images were acquired from 40 older individuals, including 22 experienced TCC practitioners (experts) and 18 demographically matched TCC-naïve healthy controls, and their local functional homogeneities across the cortical mantle were compared. Compared to the controls, the TCC experts had significantly greater and more experience-dependent functional homogeneity in the right postcentral gyrus (PosCG) and less functional homogeneity in the left anterior cingulate cortex (ACC) and the right dorsal lateral prefrontal cortex (DLPFC). Increased functional homogeneity in the PosCG was correlated with TCC experience. Intriguingly, decreases in functional homogeneity (improved functional specialization) in the left ACC and increases in functional homogeneity (improved functional integration) in the right PosCG both predicted performance gains on attention network behavior tests. These findings provide evidence for the functional plasticity of the brain's intrinsic architecture toward optimizing locally functional organization, with great implications for understanding the effects of TCC on cognition, behavior and health in the aging population.

  1. Two-Channel Transparency-Optimized Control Architectures in Bilateral Teleoperation With Time Delay.

    Science.gov (United States)

    Kim, Jonghyun; Chang, Pyung Hun; Park, Hyung-Soon

    2013-01-01

    This paper introduces transparency-optimized control architectures (TOCAs) using two communication channels. Two classes of two-channel TOCAs are found, thereby showing that two channels are sufficient to achieve transparency. These TOCAs achieve a greater level of transparency but poorer stability than three-channel TOCAs and four-channel TOCAs. Stability of the two-channel TOCAs has been enhanced while minimizing transparency degradation by adding a filter; and a combined use of the two classes of two-channel TOCAs is proposed for both free space and constrained motion, which involve switching between two TOCAs for transition between free space and constrained motions. The stability condition of the switched teleoperation system is derived for practical applications. Through the one degree-of-freedom (DOF) experiment, the proposed two-channel TOCAs were shown to operate stably, while achieving better transparency under time delay than the other TOCAs.

  2. Optimized readout configuration for PIXE spectrometers based on Silicon Drift Detectors: Architecture and performance

    International Nuclear Information System (INIS)

    Alberti, R.; Grassi, N.; Guazzoni, C.; Klatka, T.

    2009-01-01

    An optimized readout configuration based on a charge preamplifier with pulsed-reset has been designed for Silicon Drift Detectors (SDDs) to be used in Particle Induced X-ray Emission (PIXE) measurements. The customized readout electronics is able to manage the large pulses originated by the protons backscattered from the target material that would otherwise cause significant degradation of X-ray spectra and marked increase in dead time. In this way, the excellent performance of SDDs can be exploited in high-quality proton-induced spectroscopy of low- and medium-energy X-rays. This paper describes the designed readout architecture and the performance characterization carried out in a PIXE setup with MeV proton beams.

  4. Game-theoretic approaches to optimal risk sharing

    NARCIS (Netherlands)

    Boonen, T.J.

    2014-01-01

    This Ph.D. thesis studies optimal risk capital allocation and optimal risk sharing. The first chapter deals with the problem of optimally allocating risk capital across divisions within a financial institution. To do so, an asymptotic approach is used to generalize the well-studied Aumann-Shapley

  5. IT Confidentiality Risk Assessment for an Architecture-Based Approach

    NARCIS (Netherlands)

    Morali, A.; Zambon, Emmanuele; Etalle, Sandro; Overbeek, Paul

    2008-01-01

    Information systems require awareness of risks and a good understanding of vulnerabilities and their exploitations. In this paper, we propose a novel approach for the systematic assessment and analysis of confidentiality risks caused by disclosure of operational and functional information. The

  6. Group Counseling Optimization: A Novel Approach

    Science.gov (United States)

    Eita, M. A.; Fahmy, M. M.

    A new population-based search algorithm, which we call Group Counseling Optimizer (GCO), is presented. It mimics the group counseling behavior of humans in solving their problems. The algorithm is tested using seven known benchmark functions: Sphere, Rosenbrock, Griewank, Rastrigin, Ackley, Weierstrass, and Schwefel functions. A comparison is made with the recently published comprehensive learning particle swarm optimizer (CLPSO). The results demonstrate the efficiency and robustness of the proposed algorithm.

  7. Kantian Optimization: An Approach to Cooperative Behavior

    OpenAIRE

    John E. Roemer

    2014-01-01

    Although evidence accrues in biology, anthropology and experimental economics that homo sapiens is a cooperative species, the reigning assumption in economic theory is that individuals optimize in an autarkic manner (as in Nash and Walrasian equilibrium). I here postulate a cooperative kind of optimizing behavior, called Kantian. It is shown that in simple economic models, when there are negative externalities (such as congestion effects from use of a commonly owned resource) or positive exte...

  8. Non-technical approach to the challenges of ecological architecture: Learning from Van der Laan

    Directory of Open Access Journals (Sweden)

    María-Jesús González-Díaz

    2016-06-01

    Full Text Available Up to now, ecology has had a strong influence on the development of the technical and instrumental aspects of architecture, such as the renewable and efficient use of resources and energy, CO2 emissions, air quality, water reuse, and some social and economic aspects. These concepts define the physical keys and codes of current 'sustainable' architecture, which is normally instrumental but rarely and insufficiently theorised. But is there not another way of bringing us closer to nature? We need a theoretical referent. This is where we place Van der Laan's thought: he considers that art completes nature, and he builds his theoretical discourse on this idea, trying to better understand many aspects of architecture. From a conceptual point of view, we find in his works a sense of timelessness, universality, special attention to the 'locus', and a strict sense of proportion and of the use of materials according to nature. Could these concepts complement our current sustainable architecture? How did Laan apply the current codes of ecology in his architecture? His work may help us reach a theoretical, and not merely physical, interpretation of nature. This paper develops this idea through a comparison of the thoughts and works of Laan with the current technical approach to 'sustainable' architecture.

  9. Research on Heat Dissipation of Electric Vehicle Based on Safety Architecture Optimization

    Science.gov (United States)

    Zhou, Chao; Guo, Yajuan; Huang, Wei; Jiang, Haitao; Wu, Liwei

    2017-10-01

    In order to solve the problem of excessive temperature during the discharge of lithium-ion batteries and the temperature difference between batteries, a heat dissipation scheme for electric vehicles based on safety architecture optimization is designed. Simulation is used to optimize the temperature field of the battery's heat dissipation, and a reasonable heat dissipation control scheme is formulated to achieve the heat dissipation requirements. The results show that the ideal working temperature range of the lithium-ion battery is 20 °C to 45 °C, and the temperature difference between batteries should be controlled within 5 °C. A cooling fan is arranged at the original air outlet of the battery model, and the two cooling fans work in turn to realize a reciprocating flow. The temperature difference is controlled within 5 °C to ensure good temperature uniformity between the batteries of the electric vehicle. Based on the above findings, it is concluded that the heat dissipation design for electric vehicle batteries is safe and effective, and is among the most effective methods of ensuring battery life and vehicle safety.

  10. Optimization approaches for robot trajectory planning

    Directory of Open Access Journals (Sweden)

    Carlos Llopis-Albert

    2018-03-01

    Full Text Available The development of optimal trajectory planning algorithms for autonomous robots is a key issue in order to efficiently perform the robot tasks. This problem is hampered by the complex environment regarding the kinematics and dynamics of robots with several arms and/or degrees of freedom (dof), the design of collision-free trajectories and the physical limitations of the robots. This paper presents a review of the existing robot motion planning techniques and discusses their pros and cons regarding completeness, optimality, efficiency, accuracy, smoothness, stability, safety and scalability.

  11. Franz Kafka in the Design Studio: A Hermeneutic-Phenomenological Approach to Architectural Design Education

    Science.gov (United States)

    Hisarligil, Beyhan Bolak

    2012-01-01

    This article demonstrates the outcomes of taking a hermeneutic phenomenological approach to architectural design and discusses the potentials for imaginative reasoning in design education. This study tests the use of literature as a verbal form of art and design and the contribution it can make to imaginative design processes--which are all too…

  12. A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.

    Science.gov (United States)

    Yang, Shaofu; Liu, Qingshan; Wang, Jun

    2018-04-01

    This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for discretized approximation of Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.

  13. Ducted wind turbine optimization : A numerical approach

    NARCIS (Netherlands)

    Dighe, V.V.; De Oliveira Andrade, G.L.; van Bussel, G.J.W.

    2017-01-01

    The practice of ducting wind turbines has shown a beneficial effect on the overall performance, when compared to an open turbine of the same rotor diameter [1]. However, an optimization study specifically for ducted wind turbines (DWTs) is missing or incomplete. This work focuses on a numerical

  14. Russian Loanword Adaptation in Persian; Optimal Approach

    Science.gov (United States)

    Kambuziya, Aliye Kord Zafaranlu; Hashemi, Eftekhar Sadat

    2011-01-01

    In this paper we analyze some of the phonological rules of Russian loanword adaptation in Persian, within the framework of Optimality Theory (OT) (Prince & Smolensky, 1993/2004). It is the first study of the phonological processes involved in Russian loanword adaptation in Persian. Having gathered about 50 current Russian loanwords, we selected some of them to analyze. We…

  15. A Process Algebraic Approach to Software Architecture Design

    Science.gov (United States)

    Aldini, Alessandro; Bernardo, Marco; Corradini, Flavio

    Process algebra is a formal tool for the specification and the verification of concurrent and distributed systems. It supports compositional modeling through a set of operators able to express concepts like sequential composition, alternative composition, and parallel composition of action-based descriptions. It also supports mathematical reasoning via a two-level semantics, which formalizes the behavior of a description by means of an abstract machine obtained from the application of structural operational rules and then introduces behavioral equivalences able to relate descriptions that are syntactically different. In this chapter, we present the typical behavioral operators and operational semantic rules for a process calculus in which no notion of time, probability, or priority is associated with actions. Then, we discuss the three most studied approaches to the definition of behavioral equivalences - bisimulation, testing, and trace - and we illustrate their congruence properties, sound and complete axiomatizations, modal logic characterizations, and verification algorithms. Finally, we show how these behavioral equivalences and some of their variants are related to each other on the basis of their discriminating power.

  16. Optimization of nonlinear controller with an enhanced biogeography approach

    Directory of Open Access Journals (Sweden)

    Mohammed Salem

    2014-07-01

    Full Text Available This paper is dedicated to the optimization of nonlinear controllers based on an enhanced Biogeography Based Optimization (BBO) approach. The BBO is combined with a predator-prey model in which several predators are used, and a modified migration operator is introduced to increase diversification along the optimization process, so as to avoid local optima and reach the optimal solution quickly. The proposed approach is used to tune the gains of a PID controller for nonlinear systems. Simulations carried out on a mass-spring-damper system and an inverted pendulum have given remarkable results when compared to a genetic algorithm and standard BBO.
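
    For orientation, here is a stripped-down, hypothetical sketch of biogeography-style tuning of PID gains on a mass-spring-damper plant; the paper's enhanced predator-prey variant and its modified migration operator are not reproduced, only a basic migrate-and-mutate loop.

      import numpy as np

      def step_response_cost(gains, m=1.0, c=0.5, k=2.0, dt=0.01, t_end=10.0):
          """Integral of absolute error for a unit step setpoint."""
          kp, ki, kd = gains
          x = v = integral = 0.0
          prev_err = 1.0
          cost = 0.0
          for _ in range(int(t_end / dt)):
              err = 1.0 - x                        # setpoint is 1.0
              integral += err * dt
              deriv = (err - prev_err) / dt
              u = kp * err + ki * integral + kd * deriv
              a = (u - c * v - k * x) / m          # plant acceleration
              v += a * dt
              x += v * dt
              prev_err = err
              cost += abs(err) * dt
          return cost

      rng = np.random.default_rng(0)
      pop = rng.uniform(0.0, 20.0, size=(20, 3))   # candidate (Kp, Ki, Kd) sets
      for _ in range(50):
          fitness = np.array([step_response_cost(g) for g in pop])
          pop = pop[np.argsort(fitness)]           # best habitats first
          for i in range(5, len(pop)):             # worse habitats immigrate
              donor = pop[rng.integers(0, 5)]      # ...features from the elite
              mask = rng.random(3) < 0.5
              pop[i, mask] = donor[mask]
              pop[i] = np.clip(pop[i] + rng.normal(0.0, 0.5, 3), 0.0, 20.0)
      best_gains = pop[0]                          # elite of the final sort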

  17. CLUSTER ENERGY OPTIMIZATION: A THEORETICAL APPROACH

    OpenAIRE

    Vikram Yadav; G. Sahoo

    2013-01-01

    The optimization of energy consumption in the cloud computing environment is the question of how to use various energy conservation strategies to allocate resources efficiently. The need for different resources in a cloud environment is unpredictable. It is observed that load management in the cloud is essential in order to provide QoS. Jobs at over-loaded physical machines are shifted to under-loaded physical machines, and idle machines are turned off, in order to provide a green cloud. For energy opt...

  18. Application of ant colony Algorithm and particle swarm optimization in architectural design

    Science.gov (United States)

    Song, Ziyi; Wu, Yunfa; Song, Jianhua

    2018-02-01

    By studying the development of the ant colony algorithm and the particle swarm algorithm, this paper expounds the core ideas of the algorithms, explores the combination of algorithms and architectural design, and sums up the application rules of intelligent algorithms in architectural design; combining the characteristics of the two algorithms, it obtains a research route and a means of realization for intelligent algorithms in architectural design, establishing algorithmic rules to assist the design process. Taking intelligent algorithms as a starting point for architectural design research, the authors provide a theoretical foundation for the ant colony algorithm and the particle swarm algorithm in architectural design, broaden the range of application of intelligent algorithms in architectural design, and offer a new approach for architects.

  19. Design Buildings Optimally: A Lifecycle Assessment Approach

    KAUST Repository

    Hosny, Ossama

    2013-01-01

    This paper structures a generic framework to support optimum design for multiple buildings in a desert environment. The framework targets an environmentally friendly design with minimum lifecycle cost, using Genetic Algorithms (GAs). The GAs function through a set of success measures which evaluate the design, formulate a proper objective, and reflect possible tangible/intangible constraints. The framework optimizes the design and categorizes it under a certain environmental category at minimum Life Cycle Cost (LCC). It consists of three main modules: (1) a custom Building Information Model (BIM) for desert buildings with a compatibility checker as a central interactive database; (2) a system evaluator module to evaluate the proposed success measures for the design; and (3) a GA optimization module to ensure optimum design. The framework functions on three levels: building components, the integrated building, and multiple buildings. At the component level, the design team should be able to select components in a designed sequence to ensure compatibility among the various components, while at the building level, the team can relatively locate and orient each individual building. Finally, at the multi-building (compound) level, the whole design can be evaluated using success measures of natural light, site capacity, shading impact on natural lighting, thermal change, visual access and energy saving. Through genetic algorithms, the framework optimizes the design by determining proper types of building components and relative building locations and orientations which ensure categorizing the design under a specific category, or meeting certain preferences, at minimum lifecycle cost.

  20. Convenience of Statistical Approach in Studies of Architectural Ornament and Other Decorative Elements Specific Application

    Science.gov (United States)

    Priemetz, O.; Samoilov, K.; Mukasheva, M.

    2017-11-01

    Ornament is a living phenomenon in modern architectural theory and a common element in design and construction practice; it has been an important aspect of shaping for millennia, and descriptions of the methods of its application occupy a large place in studies on the theory and practice of architecture. However, the saturation of compositions with ornamentation and the specificity of its themes and forms have not yet been sufficiently studied; this aspect requires the accumulation of additional knowledge. Applying quantitative methods to the types of plastic solutions and the thematic diversity of the facade compositions of buildings constructed in different periods creates another tool for an objective analysis of ornament development. The paper demonstrates this approach in a study of the features of architectural development in Kazakhstan from the end of the XIX century to the XXI century.

  1. The Critical Approach of ‘Plug’ in Re-Conceptualisation of Architectural Program

    Directory of Open Access Journals (Sweden)

    Bahar Beslioglu

    2014-03-01

    Full Text Available This paper explores the issue of the ‘plug’ in designing program within particular experimental studies in architecture. There was what could be called a critical ‘elaboration’ of program in Archigram’s 1964 ‘Plug-In City’ project, while, intriguingly, the critical approach taken in the 2001 ‘Un-Plug’ project of Francois Roche and Stephanie Lavaux hinted at a ‘re-evaluation’ of the ‘plug’ in relation to program in architecture. The embedded criticism and creative programmatic suggestions in both projects will be discussed from the point of view of using the accumulated urbanscape as a potential for contemplation, a theme that has also been elaborated, both theoretically and experimentally, by the artist/architect Gordon Matta-Clark in his 1978 ‘Balloon Housing’ project. These experimentations about the ‘plug’ need to be discussed in order to understand their contributions as traceable sources for the issue of program in contemporary architecture.

  2. Blended Design Approach of Long Span Structure and Malay Traditional Architecture

    Science.gov (United States)

    Sundari, Titin

    2017-12-01

    The world's population is growing rapidly, and with it the need to accommodate new, large-scale activities. Architects face the problem of how to provide buildings for various activities such as large meetings, conferences, indoor gymnasiums and sports, and many others. Long-span building structures are one solution to this problem. Generally, large buildings that implement such structures look technological, modern and futuristic, or even neo-futuristic. On the other hand, many people still want to enjoy the specific and unique sense of local traditional architecture. So it is with the Malay people, who want pleasant, easy-to-use large facilities, which can be provided by implementing modern long-span structural technology while the unique sense of Malay traditional architecture is maintained. Overcoming this double design problem calls for a blended design approach combining long-span structure and Malay traditional architecture.

  3. A hybrid approach for biobjective optimization

    DEFF Research Database (Denmark)

    Stidsen, Thomas Jacob Riis; Andersen, Kim Allan

    2018-01-01

    to single-objective problems is that no standard multiobjective solvers exist and specialized algorithms need to be programmed from scratch. In this article we will present a hybrid approach, which operates both in decision space and in objective space. The approach enables massive efficient parallelization and can...... be applied to a wide variety of biobjective Mixed Integer Programming models. We test the approach on the biobjective extension of the classic traveling salesman problem, on the standard datasets, and determine the full set of nondominated points. This has only been done once before (Florios and Mavrotas

  4. Progress on the design of a data push architecture for an array of optimized time tagging pixels

    International Nuclear Information System (INIS)

    Shapiro, S.; Cords, D.; Mani, S.; Holbrook, B.; Atlas, E.

    1993-06-01

    A pixel array has been proposed which features a completely data-driven architecture. A pixel cell has been designed that is optimized for this readout. It retains the features of preceding designs which allow low-noise operation, time stamping, analog signal processing, XY address recording, ghost elimination and sparse data transmission. The pixel design eliminates a number of problems inherent in previous designs by the use of sampled data techniques, destructive readout, and current-mode output drivers. This architecture and pixel design are directed at applications such as a forward spectrometer at the SSC, an e⁺e⁻ B factory at SLAC, and fixed-target experiments at FNAL

  5. Scaling Watershed Models: Modern Approaches to Science Computation with MapReduce, Parallelization, and Cloud Optimization

    Science.gov (United States)

    Environmental models are products of the computer architecture and software tools available at the time of development. Scientifically sound algorithms may persist in their original state even as system architectures and software development approaches evolve and progress. Dating...

  6. An Optimizing Compiler for Petascale I/O on Leadership-Class Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Kandemir, Mahmut Taylan [PSU; Choudary, Alok [Northwestern; Thakur, Rajeev [ANL

    2014-03-01

    In high-performance computing (HPC), parallel I/O architectures usually have very complex hierarchies, with multiple layers that collectively constitute an I/O stack, including high-level I/O libraries such as PnetCDF and HDF5, I/O middleware such as MPI-IO, and parallel file systems such as PVFS and Lustre. Our DOE project explored automated instrumentation and compiler support for I/O-intensive applications. The project made significant progress towards understanding the complex I/O hierarchies of high-performance storage systems (including storage caches, HDDs, and SSDs), and designing and implementing state-of-the-art compiler/runtime system technology for I/O-intensive HPC applications targeting leadership-class machines. This final report summarizes the major achievements of the project and also points out promising future directions. Two new sections in this report, compared to the previous report, are IOGenie and SSD/NVM-specific optimizations.

  7. Multiobjective Optimization Methodology A Jumping Gene Approach

    CERN Document Server

    Tang, KS

    2012-01-01

    Complex design problems are often governed by a number of performance merits. These markers gauge how good the design is going to be, but can conflict with the performance requirements that must be met. The challenge is reconciling these two requirements. This book introduces a newly developed jumping gene algorithm, designed to address multi-objective problems and to supply viably adequate solutions quickly. The text presents various multi-objective optimization techniques and provides the technical know-how for obtaining trade-off solutions between solution spread and convergence.

  8. The constraints satisfaction problem approach in the design of an architectural functional layout

    Science.gov (United States)

    Zawidzki, Machi; Tateyama, Kazuyoshi; Nishikawa, Ikuko

    2011-09-01

    A design support system with a new strategy for finding the optimal functional configurations of rooms for architectural layouts is presented. A set of configurations satisfying given constraints is generated and ranked according to multiple objectives. The method can be applied to problems in architectural practice, urban design, or graphic design, wherever the allocation of related geometrical elements of known shape is optimized. Although the methodology is shown using simplified examples (a single-story residential building with two apartments, each having two rooms), the results resemble realistic functional layouts. One example of a practical-size problem, a layout of three apartments with a total of 20 rooms, is demonstrated, where the generated solution can be used as a basis for a realistic architectural blueprint. The discretization of the design space is discussed, followed by the application of a backtrack search algorithm used for generating a set of potentially 'good' room configurations, as sketched below. Next the solutions are classified by a machine learning method (a feed-forward network, FFN) as 'proper' or 'improper' according to internal communication criteria. Examples of interactive ranking of the 'proper' configurations according to multiple criteria and choosing 'the best' ones are presented. The proposed framework is general and universal: the criteria, parameters and weights can be individually defined by the user, and the search algorithm can be adjusted to a specific problem.
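
    A minimal sketch of the backtrack search stage (assumed shapes and grid, not the authors' code): rectangular rooms are allocated on a discretized floor grid under a no-overlap constraint, and the search backtracks whenever a room cannot be placed.

      GRID_W, GRID_H = 8, 6
      rooms = [(4, 3), (4, 3), (3, 3), (2, 3)]     # (width, height) per room

      def fits(occupied, x, y, w, h):
          if x + w > GRID_W or y + h > GRID_H:
              return False
          return all((i, j) not in occupied
                     for i in range(x, x + w) for j in range(y, y + h))

      def place(i, occupied, layout):
          if i == len(rooms):
              return layout                        # all rooms allocated
          w, h = rooms[i]
          for x in range(GRID_W):
              for y in range(GRID_H):
                  if fits(occupied, x, y, w, h):
                      cells = {(a, b) for a in range(x, x + w)
                                      for b in range(y, y + h)}
                      found = place(i + 1, occupied | cells,
                                    layout + [(x, y, w, h)])
                      if found:
                          return found             # otherwise backtrack
          return None

      print(place(0, set(), []))                   # one feasible configuration

    A real run would enumerate many such configurations and pass them to the FFN classifier and the interactive ranking described above.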

  9. A Principled Approach to the Specification of System Architectures for Space Missions

    Science.gov (United States)

    McKelvin, Mark L. Jr.; Castillo, Robert; Bonanne, Kevin; Bonnici, Michael; Cox, Brian; Gibson, Corrina; Leon, Juan P.; Gomez-Mustafa, Jose; Jimenez, Alejandro; Madni, Azad

    2015-01-01

    Modern space systems are increasing in complexity and scale at an unprecedented pace. Consequently, innovative methods, processes, and tools are needed to cope with the increasing complexity of architecting these systems. A key systems challenge in practice is the ability to scale processes, methods, and tools used to architect complex space systems. Traditionally, the process for specifying space system architectures has largely relied on capturing the system architecture in informal descriptions that are often embedded within loosely coupled design documents and domain expertise. Such informal descriptions often lead to misunderstandings between design teams, ambiguous specifications, difficulty in maintaining consistency as the architecture evolves throughout the system development life cycle, and costly design iterations. Therefore, traditional methods are becoming increasingly inefficient to cope with ever-increasing system complexity. We apply the principles of component-based design and platform-based design to the development of the system architecture for a practical space system to demonstrate feasibility of our approach using SysML. Our results show that we are able to apply a systematic design method to manage system complexity, thus enabling effective data management, semantic coherence and traceability across different levels of abstraction in the design chain. Just as important, our approach enables interoperability among heterogeneous tools in a concurrent engineering model based design environment.

  10. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This paper is devoted to the study of an extension of dynamic programming approach which allows sequential optimization of approximate decision rules relative to the length and coverage. We introduce an uncertainty measure R(T) which is the number

  11. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This chapter is devoted to the study of an extension of dynamic programming approach that allows sequential optimization of exact decision rules relative to the length and coverage. It contains also results of experiments with decision tables from

  12. Optimization approaches to volumetric modulated arc therapy planning

    Energy Technology Data Exchange (ETDEWEB)

    Unkelbach, Jan, E-mail: junkelbach@mgh.harvard.edu; Bortfeld, Thomas; Craft, David [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Alber, Markus [Department of Medical Physics and Department of Radiation Oncology, Aarhus University Hospital, Aarhus C DK-8000 (Denmark); Bangert, Mark [Department of Medical Physics in Radiation Oncology, German Cancer Research Center, Heidelberg D-69120 (Germany); Bokrantz, Rasmus [RaySearch Laboratories, Stockholm SE-111 34 (Sweden); Chen, Danny [Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, Indiana 46556 (United States); Li, Ruijiang; Xing, Lei [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Men, Chunhua [Department of Research, Elekta, Maryland Heights, Missouri 63043 (United States); Nill, Simeon [Joint Department of Physics at The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London SM2 5NG (United Kingdom); Papp, Dávid [Department of Mathematics, North Carolina State University, Raleigh, North Carolina 27695 (United States); Romeijn, Edwin [H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Salari, Ehsan [Department of Industrial and Manufacturing Engineering, Wichita State University, Wichita, Kansas 67260 (United States)

    2015-03-15

    Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.

  13. Distributed Cooperative Optimal Control for Multiagent Systems on Directed Graphs: An Inverse Optimal Approach.

    Science.gov (United States)

    Zhang, Huaguang; Feng, Tao; Yang, Guang-Hong; Liang, Hongjing

    2015-07-01

    In this paper, the inverse optimal approach is employed to design distributed consensus protocols that guarantee consensus and global optimality with respect to some quadratic performance indexes for identical linear systems on a directed graph. The inverse optimal theory is developed by introducing the notion of partial stability. As a result, the necessary and sufficient conditions for inverse optimality are proposed. By means of the developed inverse optimal theory, the necessary and sufficient conditions are established for globally optimal cooperative control problems on directed graphs. Basic optimal cooperative design procedures are given based on asymptotic properties of the resulting optimal distributed consensus protocols, and the multiagent systems can reach desired consensus performance (convergence rate and damping rate) asymptotically. Finally, two examples are given to illustrate the effectiveness of the proposed methods.
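
    For orientation (this is the standard form from the consensus literature, not the paper's own derivation), a distributed consensus protocol for N identical linear agents with dynamics \dot{x}_i = A x_i + B u_i on a directed graph typically takes the form

      u_i = c K \sum_{j=1}^{N} a_{ij} (x_j - x_i), \qquad i = 1, \dots, N,

    where a_{ij} are the entries of the graph's adjacency matrix, c > 0 is a coupling gain and K is a feedback gain matrix; inverse optimal design asks which choices of c and K make such a protocol the minimizer of a quadratic performance index.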

  14. Optimal artificial neural network architecture selection for performance prediction of compact heat exchanger with the EBaLM-OTR technique

    Energy Technology Data Exchange (ETDEWEB)

    Wijayasekara, Dumidu, E-mail: wija2589@vandals.uidaho.edu [Department of Computer Science, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83402 (United States); Manic, Milos [Department of Computer Science, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83402 (United States); Sabharwall, Piyush [Idaho National Laboratory, Idaho Falls, ID (United States); Utgikar, Vivek [Department of Chemical Engineering, University of Idaho, Idaho Falls, ID 83402 (United States)

    2011-07-15

    Highlights: > Performance prediction of PCHE using artificial neural networks. > Evaluating artificial neural network performance for PCHE modeling. > Selection of over-training resilient artificial neural networks. > Artificial neural network architecture selection for modeling problems with small data sets. - Abstract: Artificial Neural Networks (ANN) have been used in the past to predict the performance of printed circuit heat exchangers (PCHE) with satisfactory accuracy. Typically published literature has focused on optimizing ANN using a training dataset to train the network and a testing dataset to evaluate it. Although this may produce outputs that agree with experimental results, there is a risk of over-training or over-learning the network rather than generalizing it, which should be the ultimate goal. An over-trained network is able to produce good results with the training dataset but fails when new datasets with subtle changes are introduced. In this paper we present EBaLM-OTR (error back propagation and Levenberg-Marquardt algorithms for over training resilience) technique, which is based on a previously discussed method of selecting neural network architecture that uses a separate validation set to evaluate different network architectures based on mean square error (MSE), and standard deviation of MSE. The method uses k-fold cross validation. Therefore in order to select the optimal architecture for the problem, the dataset is divided into three parts which are used to train, validate and test each network architecture. Then each architecture is evaluated according to their generalization capability and capability to conform to original data. The method proved to be a comprehensive tool in identifying the weaknesses and advantages of different network architectures. The method also highlighted the fact that the architecture with the lowest training error is not always the most generalized and therefore not the optimal. Using the method the testing
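
    A minimal sketch of architecture selection by k-fold cross validation in the spirit of the paper (scikit-learn stand-ins and a synthetic dataset, not the EBaLM-OTR implementation): each candidate architecture is scored by the mean and spread of its validation MSE rather than by its training error alone.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import KFold, cross_val_score

      rng = np.random.default_rng(0)
      X = rng.uniform(-1.0, 1.0, size=(120, 3))        # deliberately small dataset
      y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]          # synthetic stand-in target

      for arch in [(4,), (8,), (8, 4), (16, 8)]:       # candidate hidden layers
          net = MLPRegressor(hidden_layer_sizes=arch, max_iter=2000,
                             random_state=0)
          cv = KFold(n_splits=5, shuffle=True, random_state=0)
          mse = -cross_val_score(net, X, y, cv=cv,
                                 scoring="neg_mean_squared_error")
          print(arch, "MSE mean %.4f, std %.4f" % (mse.mean(), mse.std()))
      # prefer low mean *and* low spread: the lowest training error is not
      # necessarily the most generalized architecture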

  15. Optimal artificial neural network architecture selection for performance prediction of compact heat exchanger with the EBaLM-OTR technique

    International Nuclear Information System (INIS)

    Wijayasekara, Dumidu; Manic, Milos; Sabharwall, Piyush; Utgikar, Vivek

    2011-01-01

    Highlights: → Performance prediction of PCHE using artificial neural networks. → Evaluating artificial neural network performance for PCHE modeling. → Selection of over-training resilient artificial neural networks. → Artificial neural network architecture selection for modeling problems with small data sets. - Abstract: Artificial Neural Networks (ANN) have been used in the past to predict the performance of printed circuit heat exchangers (PCHE) with satisfactory accuracy. Typically published literature has focused on optimizing ANN using a training dataset to train the network and a testing dataset to evaluate it. Although this may produce outputs that agree with experimental results, there is a risk of over-training or over-learning the network rather than generalizing it, which should be the ultimate goal. An over-trained network is able to produce good results with the training dataset but fails when new datasets with subtle changes are introduced. In this paper we present EBaLM-OTR (error back propagation and Levenberg-Marquardt algorithms for over training resilience) technique, which is based on a previously discussed method of selecting neural network architecture that uses a separate validation set to evaluate different network architectures based on mean square error (MSE), and standard deviation of MSE. The method uses k-fold cross validation. Therefore in order to select the optimal architecture for the problem, the dataset is divided into three parts which are used to train, validate and test each network architecture. Then each architecture is evaluated according to their generalization capability and capability to conform to original data. The method proved to be a comprehensive tool in identifying the weaknesses and advantages of different network architectures. The method also highlighted the fact that the architecture with the lowest training error is not always the most generalized and therefore not the optimal. Using the method the

  16. An approach for optimizing arc welding applications

    International Nuclear Information System (INIS)

    Chapuis, Julien

    2011-01-01

    The dynamic and transport mechanisms involved in the arc plasma and the weld pool of arc welding operations are numerous and strongly coupled. They produce a medium whose magnitudes exhibit rapid time variations and very marked gradients, which make any experimental analysis complex in this disrupted environment. In this work, we study the TIG and MIG processes. An experimental platform was developed to allow synchronized measurement of various physical quantities associated with welding (process parameters, temperatures, clamping forces, metal transfer, etc.). Numerical libraries dedicated to applied studies in arc welding are developed. They enable the treatment of a large flow of data (signals, images) with a systematic and global method. The advantages of this approach for the enrichment of numerical simulation and arc process control are shown in different situations. Finally, this experimental approach is used, in the context of the chosen application, to obtain rich measurements describing the dynamic behavior of the weld pool in P-GMAW. Dimensional analysis of these experimental measurements allows the predominant mechanisms involved to be identified and the associated characteristic times to be determined experimentally. This type of approach provides a better description of the behavior of a macro-drop of molten metal and of the phenomena occurring in humping instabilities. (author)

  17. Parametric Approach to Assessing Performance of High-Lift Device Active Flow Control Architectures

    Directory of Open Access Journals (Sweden)

    Yu Cai

    2017-02-01

    Full Text Available Active Flow Control is at present an area of considerable research, with multiple potential aircraft applications. While the majority of research has focused on the performance of the actuators themselves, a system-level perspective is necessary to assess the viability of proposed solutions. This paper demonstrates such an approach, in which major system components are sized based on system flow and redundancy considerations, with the impacts linked directly to the mission performance of the aircraft. Considering the case of a large twin-aisle aircraft, four distinct active flow control architectures that facilitate the simplification of the high-lift mechanism are investigated using the demonstrated approach. The analysis indicates a very strong influence of system total mass flow requirement on architecture performance, both for a typical mission and also over the entire payload-range envelope of the aircraft.

  18. Biased Monte Carlo optimization: the basic approach

    International Nuclear Information System (INIS)

    Campioni, Luca; Scardovelli, Ruben; Vestrucci, Paolo

    2005-01-01

    It is well known that the Monte Carlo method is very successful in tackling several kinds of system simulations. It often happens that one has to deal with rare events, and the use of a variance reduction technique is almost mandatory in order to obtain efficient Monte Carlo applications. The main issue associated with variance reduction techniques is the choice of the value of the biasing parameter. In practice, this task is typically left to the experience of the Monte Carlo user, who has to make many attempts before achieving an advantageous biasing. A valuable result is provided: a methodology and a practical rule intended to establish a priori guidance for the choice of the optimal value of the biasing parameter. This result, which has been obtained for a single-component system, has the notable property of being valid for any multicomponent system. In particular, in this paper, the exponential and uniform biasing of exponentially distributed phenomena are investigated thoroughly
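
    A minimal sketch of the exponential biasing in question: estimating the rare-event probability P(X > t) for X ~ Exp(1) by sampling from a flatter biased density Exp(lam_b) and reweighting each sample with the likelihood ratio. The value lam_b = 0.1 is an arbitrary illustration; choosing it well is exactly the problem the paper's a priori rule addresses.

      import numpy as np

      def biased_estimate(t=10.0, lam_b=0.1, n=100_000, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.exponential(1.0 / lam_b, size=n)       # biased samples
          # likelihood ratio: true density exp(-x) over biased density
          w = np.exp(-x) / (lam_b * np.exp(-lam_b * x))
          vals = (x > t) * w
          return vals.mean(), vals.std() / np.sqrt(n)

      est, stderr = biased_estimate()
      print(est, stderr, np.exp(-10.0))                  # exact answer is exp(-t)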

  19. Proceedings International Workshop on Formal Engineering approaches to Software Components and Architectures

    OpenAIRE

    Kofroň, Jan; Tumova, Jana

    2017-01-01

    These are the proceedings of the 14th International Workshop on Formal Engineering approaches to Software Components and Architectures (FESCA). The workshop was held on April 22, 2017 in Uppsala (Sweden) as a satellite event to the European Joint Conference on Theory and Practice of Software (ETAPS'17). The aim of the FESCA workshop is to bring together junior researchers from formal methods, software engineering, and industry interested in the development and application of formal modelling ...

  20. Proceedings 10th International Workshop on Formal Engineering Approaches to Software Components and Architectures

    OpenAIRE

    Buhnova, Barbora; Happe, Lucia; Kofroň, Jan

    2013-01-01

    These are the proceedings of the 10th International Workshop on Formal Engineering approaches to Software Components and Architectures (FESCA). The workshop was held on March 23, 2013 in Rome (Italy) as a satellite event to the European Joint Conference on Theory and Practice of Software (ETAPS'13). The aim of the FESCA workshop is to bring together both young and senior researchers from formal methods, software engineering, and industry interested in the development and application of formal...

  1. Reliability-based optimal structural design by the decoupling approach

    International Nuclear Information System (INIS)

    Royset, J.O.; Der Kiureghian, A.; Polak, E.

    2001-01-01

    A decoupling approach for solving optimal structural design problems involving reliability terms in the objective function, the constraint set or both is discussed and extended. The approach employs a reformulation of each problem, in which reliability terms are replaced by deterministic functions. The reformulated problems can be solved by existing semi-infinite optimization algorithms and computational reliability methods. It is shown that the reformulated problems produce solutions that are identical to those of the original problems when the limit-state functions defining the reliability problem are affine. For nonaffine limit-state functions, approximate solutions are obtained by solving series of reformulated problems. An important advantage of the approach is that the required reliability and optimization calculations are completely decoupled, thus allowing flexibility in the choice of the optimization algorithm and the reliability computation method

  2. An Update on Design Tools for Optimization of CMC 3D Fiber Architectures

    Science.gov (United States)

    Lang, J.; DiCarlo, J.

    2012-01-01

    Objective: describe and update progress on NASA's efforts to develop 3D architectural design tools for CMCs in general and for SiC/SiC composites in particular. Past and current sequential work efforts have aimed at: understanding key fiber and tow physical characteristics in conventional 2D and 3D woven architectures, as revealed by microstructures in the literature; developing an Excel program for down-selecting and predicting key geometric properties and the resulting key fiber-controlled properties for various conventional 3D architectures; developing a software tool for accurately visualizing all the key geometric details of conventional 3D architectures; validating the tools by visualizing and predicting the internal geometry and key mechanical properties of a NASA SiC/SiC panel with a 3D orthogonal architecture; and applying the predictive and visualization tools to advanced 3D orthogonal SiC/SiC composites and combining them into a user-friendly software program.

  3. Performance Optimization in Sport: A Psychophysiological Approach

    Directory of Open Access Journals (Sweden)

    Selenia di Fronso

    2017-11-01

    Full Text Available ABSTRACT In the last 20 years, there has been growing interest in the study of the theoretical and applied issues surrounding the psychophysiological processes underlying performance. Psychophysiological monitoring, which enables the study of these processes, consists of assessing the activation and functioning level of the organism using a multidimensional approach. In sport, it can be used to attain a better understanding of the processes underlying athletic performance and to improve it. The most frequently used ecological techniques include electromyography (EMG), electrocardiography (ECG), electroencephalography (EEG), and the assessment of electrodermal activity and breathing rhythm. The purpose of this paper is to offer an overview of the use of these techniques in applied interventions in sport and physical exercise and to give athletes, coaches and sport psychology experts new insights for performance improvement.

  4. A "Hybrid" Approach for Synthesizing Optimal Controllers of Hybrid Systems

    DEFF Research Database (Denmark)

    Zhao, Hengjun; Zhan, Naijun; Kapur, Deepak

    2012-01-01

    to discretization manageable and within bounds. A major advantage of our approach is not only that it avoids errors due to numerical computation, but it also gives a better optimal controller. In order to illustrate our approach, we use the real industrial example of an oil pump provided by the German company HYDAC...

  5. An Optimization Approach to the Dynamic Allocation of Economic Capital

    NARCIS (Netherlands)

    Laeven, R.J.A.; Goovaerts, M.J.

    2004-01-01

    We propose an optimization approach to allocating economic capital, distinguishing between an allocation or raising principle and a measure for the risk residual. The approach is applied both at the aggregate (conglomerate) level and at the individual (subsidiary) level and yields an integrated

  6. A practical multiscale approach for optimization of structural damping

    DEFF Research Database (Denmark)

    Andreassen, Erik; Jensen, Jakob Søndergaard

    2016-01-01

    A simple and practical multiscale approach suitable for topology optimization of structural damping in a component ready for additive manufacturing is presented.The approach consists of two steps: First, the homogenized loss factor of a two-phase material is maximized. This is done in order...

  7. Memory transfer optimization for a lattice Boltzmann solver on Kepler architecture nVidia GPUs

    Science.gov (United States)

    Mawson, Mark J.; Revell, Alistair J.

    2014-10-01

    The Lattice Boltzmann method (LBM) for solving fluid flow is naturally well suited to an efficient implementation for massively parallel computing, due to the prevalence of local operations in the algorithm. This paper presents and analyses the performance of a 3D lattice Boltzmann solver, optimized for third-generation nVidia GPU hardware, also known as 'Kepler'. We provide a review of previous optimization strategies and analyse data read/write times for different memory types. In LBM, the time propagation step (known as streaming) involves shifting data to adjacent locations and is central to parallel performance; here we examine three approaches which make use of different hardware options. Two of these make use of 'performance enhancing' features of the GPU: shared memory and the new shuffle instruction found in Kepler-based GPUs. These are compared to a standard transfer of data which relies instead on optimized storage to increase coalesced access. It is shown that the simpler approach is most efficient; since the need for large numbers of registers per thread in LBM limits the block size, the efficiency of these special features is reduced. Detailed results are obtained for a D3Q19 LBM solver, which is benchmarked on nVidia K5000M and K20C GPUs. In the latter case the use of a read-only data cache is explored, and peak performance of over 1036 Million Lattice Updates Per Second (MLUPS) is achieved. The appearance of a periodic bottleneck in the solver performance is also reported, believed to be hardware related; spikes in iteration time occur with a frequency of around 11 Hz for both GPUs, independent of the size of the problem.
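
    As a point of reference (a CPU/NumPy illustration, not the GPU kernels the paper benchmarks), the streaming step is just a shift of each distribution function along its discrete velocity; the paper's three approaches are different hardware realizations of this data movement.

      import numpy as np

      nx, ny, nz, q = 32, 32, 32, 19                 # D3Q19 lattice
      f = np.random.rand(q, nx, ny, nz)              # distribution functions
      e = np.array([[0,0,0], [1,0,0], [-1,0,0], [0,1,0], [0,-1,0], [0,0,1],
                    [0,0,-1], [1,1,0], [-1,-1,0], [1,-1,0], [-1,1,0],
                    [1,0,1], [-1,0,-1], [1,0,-1], [-1,0,1], [0,1,1],
                    [0,-1,-1], [0,1,-1], [0,-1,1]])  # D3Q19 velocity set

      for i in range(q):                             # periodic streaming
          f[i] = np.roll(f[i], shift=tuple(e[i]), axis=(0, 1, 2))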

  8. An Efficient PageRank Approach for Urban Traffic Optimization

    Directory of Open Access Journals (Sweden)

    Florin Pop

    2012-01-01

    to determine optimal decisions for each traffic light, based on the solution given by Larry Page for page ranking in the Web environment (Page et al., 1999). Our approach is similar to the work presented by Sheng-Chung et al. (2009) and Yousef et al. (2010). We consider that the traffic lights are controlled by servers; a score for each road is computed with an efficient PageRank approach and used in a cost function to determine optimal decisions. We demonstrate that the cumulative contribution of each car in the traffic respects the main constraint of the PageRank approach, preserving all the properties of the matrix considered in our model.
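
    A minimal sketch of the underlying computation (a hypothetical four-road graph, not the authors' traffic model): the PageRank score vector is obtained by power iteration with the usual damping factor.

      import numpy as np

      links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}   # road i feeds roads j
      n, d = 4, 0.85                                   # damping as in Page et al.

      M = np.zeros((n, n))                             # column-stochastic matrix
      for src, dests in links.items():
          for dst in dests:
              M[dst, src] = 1.0 / len(dests)

      rank = np.full(n, 1.0 / n)
      for _ in range(100):                             # power iteration
          rank = (1.0 - d) / n + d * M @ rank
      print(rank)                                      # score per road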

  9. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    Science.gov (United States)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.

  10. Optimizing Groundwater Monitoring Networks Using Integrated Statistical and Geostatistical Approaches

    Directory of Open Access Journals (Sweden)

    Jay Krishna Thakur

    2015-08-01

    Full Text Available The aim of this work is to investigate new approaches, using methods based on statistics and geostatistics, for the spatio-temporal optimization of groundwater monitoring networks. The formulated and integrated methods were tested with the groundwater quality data set of Bitterfeld/Wolfen, Germany. Spatially, the monitoring network was optimized using geostatistical methods. Temporal optimization of the monitoring network was carried out using Sen's method (1968). For geostatistical network optimization, a geostatistical spatio-temporal algorithm was used to identify redundant wells in 2- and 2.5-D Quaternary and Tertiary aquifers. The influences of interpolation block width, dimension, contaminant association, groundwater flow direction and aquifer homogeneity on the statistical and geostatistical methods for monitoring network optimization were analysed. The integrated approach shows 37% and 28% redundancy in the monitoring network in the Quaternary and Tertiary aquifers, respectively. The geostatistical method also recommends 41 and 22 new monitoring wells in the Quaternary and Tertiary aquifers, respectively. In the temporal optimization, an overall optimized sampling interval was recommended in terms of the lower quartile (238 days), median (317 days) and upper quartile (401 days) in the research area of Bitterfeld/Wolfen. The demonstrated methods for improving a groundwater monitoring network can be used in real monitoring network optimization, with due consideration given to the influencing factors.
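
    A minimal sketch of the temporal side, assuming Sen's (1968) slope estimator on an illustrative concentration series: the trend slope is the median of all pairwise slopes, and its magnitude informs how far apart samples can be spaced.

      import numpy as np

      def sens_slope(t, y):
          """Median of all pairwise slopes (Sen's estimator)."""
          slopes = [(y[j] - y[i]) / (t[j] - t[i])
                    for i in range(len(t)) for j in range(i + 1, len(t))]
          return np.median(slopes)

      t = np.array([0.0, 90.0, 180.0, 270.0, 360.0])   # sampling days
      y = np.array([5.1, 4.9, 4.5, 4.6, 4.2])          # e.g. mg/L
      print(sens_slope(t, y))                          # negative: declining trend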

  11. Pushouts in software architecture design

    OpenAIRE

    Riché, T. L.; Gonçalves, Rui; Marker, B.; Batory, D.

    2012-01-01

    A classical approach to program derivation is to progressively extend a simple specification and then incrementally refine it to an implementation. We claim this approach is hard or impractical when reverse engineering legacy software architectures. We present a case study that shows optimizations and pushouts--in addition to refinements and extensions--are essential for practical stepwise development of complex software architectures. NSF CCF 0724979 NSF CNS 0509338 NSF CCF 0917167 ...

  12. Beyond Information Architecture: A Systems Integration Approach to Web-site Design

    Directory of Open Access Journals (Sweden)

    Krisellen Maloney

    2017-09-01

    Full Text Available Users' needs and expectations regarding access to information have fundamentally changed, creating a disconnect between how users expect to use a library Web site and how the site was designed. At the same time, library technical infrastructures include legacy systems that were not designed for the Web environment. The authors propose a framework that combines elements of information architecture with approaches to incremental system design and implementation. The framework allows for the development of a Web site that is responsive to changing user needs, while recognizing the need for libraries to adopt a cost-effective approach to implementation and maintenance.

  13. A novel approach for optimal chiller loading using particle swarm optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ardakani, A. Jahanbani; Ardakani, F. Fattahi; Hosseinian, S.H. [Department of Electrical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Hafez Avenue, Tehran 15875-4413 (Iran, Islamic Republic of)

    2008-07-01

    This study employs two new methods to solve the optimal chiller loading (OCL) problem: the continuous genetic algorithm (GA) and particle swarm optimization (PSO). Because of the continuous nature of the variables in the OCL problem, continuous GA and PSO easily overcome deficiencies of other conventional optimization methods. The partial load ratio (PLR) of each chiller is chosen as the variable to be optimized, and the consumption power of the chillers is taken as the fitness function. Both methods find the optimal solution while the equality constraint is exactly satisfied. Major advantages of the proposed approaches over other conventional methods include fast convergence, escape from local optima, simple implementation, and independence of the solution from the problem. The abilities of the proposed methods are examined with reference to an example system and, to demonstrate these abilities, the results are compared with a binary genetic algorithm method. The proposed approaches can be readily applied to air-conditioning systems. (author)
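
    A minimal PSO sketch for the OCL problem, with hypothetical quadratic chiller power curves P(PLR) = a + b*PLR + c*PLR^2 and the cooling-load equality constraint enforced by rescaling the PLR vector (the paper's formulation and constraint handling may differ):

      import numpy as np

      coef = np.array([[100.0, 50.0, 120.0],     # (a, b, c) per chiller
                       [ 80.0, 80.0, 100.0],
                       [ 90.0, 60.0, 110.0]])
      cap = np.array([500.0, 450.0, 550.0])      # chiller capacities (kW)
      demand = 1000.0                            # required cooling load (kW)

      def project(plr):
          """Rescale so sum(cap * plr) = demand (exact when no bound binds)."""
          plr = np.clip(plr, 0.05, 1.0)
          return np.clip(plr * demand / np.dot(cap, plr), 0.05, 1.0)

      def power(plr):
          return sum(a + b * p + c * p * p for (a, b, c), p in zip(coef, plr))

      rng = np.random.default_rng(1)
      n = 30                                     # swarm size
      x = np.array([project(p) for p in rng.uniform(0.05, 1.0, (n, 3))])
      v = np.zeros((n, 3))
      pbest, pcost = x.copy(), np.array([power(p) for p in x])
      gbest = pbest[pcost.argmin()]
      for _ in range(200):
          r1, r2 = rng.random((n, 3)), rng.random((n, 3))
          v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
          x = np.array([project(p) for p in x + v])
          cost = np.array([power(p) for p in x])
          better = cost < pcost
          pbest[better], pcost[better] = x[better], cost[better]
          gbest = pbest[pcost.argmin()]
      print(gbest, power(gbest))                 # optimal PLRs and total power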

  14. Horsetail matching: a flexible approach to optimization under uncertainty

    Science.gov (United States)

    Cook, L. W.; Jarrett, J. P.

    2018-04-01

    It is important to design engineering systems to be robust with respect to uncertainties in the design process. Often, this is done by considering statistical moments, but over-reliance on statistical moments when formulating a robust optimization can produce designs that are stochastically dominated by other feasible designs. This article instead proposes a formulation for optimization under uncertainty that minimizes the difference between a design's cumulative distribution function and a target. A standard target is proposed that produces stochastically non-dominated designs, but the formulation also offers enough flexibility to recover existing approaches for robust optimization. A numerical implementation is developed that employs kernels to give a differentiable objective function. The method is applied to algebraic test problems and a robust transonic airfoil design problem where it is compared to multi-objective, weighted-sum and density matching approaches to robust optimization; several advantages over these existing methods are demonstrated.
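
    A minimal sketch of the core objective (illustrative names and target, not the paper's exact formulation): the design's quantity of interest is sampled under uncertainty, and the objective is a distance between its empirical quantiles and a target inverse CDF.

      import numpy as np

      def cdf_mismatch(q_samples, target_inverse_cdf):
          """L2 distance between empirical quantiles and a target."""
          q = np.sort(q_samples)                      # empirical quantiles
          u = (np.arange(len(q)) + 0.5) / len(q)      # plotting positions
          return np.sqrt(np.mean((q - target_inverse_cdf(u)) ** 2))

      rng = np.random.default_rng(0)
      qoi = rng.normal(1.0, 0.2, size=1000)           # Monte Carlo samples of a design
      target = lambda u: np.full_like(u, 0.9)         # ideal: all mass at 0.9
      print(cdf_mismatch(qoi, target))                # value to minimize over designs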

  15. A novel system architecture for the national integration of electronic health records: a semi-centralized approach.

    Science.gov (United States)

    AlJarullah, Asma; El-Masri, Samir

    2013-08-01

    The goal of a national electronic health records integration system is to aggregate electronic health records concerning a particular patient at different healthcare providers' systems to provide a complete medical history of the patient. It holds the promise to address the two most crucial challenges to the healthcare systems: improving healthcare quality and controlling costs. Typical approaches for the national integration of electronic health records are a centralized architecture and a distributed architecture. This paper proposes a new approach for the national integration of electronic health records, the semi-centralized approach, an intermediate solution between the centralized architecture and the distributed architecture that has the benefits of both approaches. The semi-centralized approach is provided with a clearly defined architecture. The main data elements needed by the system are defined and the main system modules that are necessary to achieve an effective and efficient functionality of the system are designed. Best practices and essential requirements are central to the evolution of the proposed architecture. The proposed architecture will provide the basis for designing the simplest and the most effective systems to integrate electronic health records on a nation-wide basis that maintain integrity and consistency across locations, time and systems, and that meet the challenges of interoperability, security, privacy, maintainability, mobility, availability, scalability, and load balancing.

  16. System Approach of Logistic Costs Optimization Solution in Supply Chain

    OpenAIRE

    Majerčák, Peter; Masárová, Gabriela; Buc, Daniel; Majerčáková, Eva

    2013-01-01

    This paper is focused on the possibility of using cost simulation in the supply chain, where costs are at a relatively high level. Our goal is to determine the costs using logistic cost optimization, which must necessarily be applied to business activities in supply chain management. The paper emphasizes the need to perform optimization across the whole supply chain rather than in isolation. Our goal is to compare the classic approach, in which every part tracks its costs in isolation and tries to minimize them, with the system (l...

  17. Solving Unconstrained Global Optimization Problems via Hybrid Swarm Intelligence Approaches

    Directory of Open Access Journals (Sweden)

    Jui-Yu Wu

    2013-01-01

    Full Text Available Stochastic global optimization (SGO) algorithms such as the particle swarm optimization (PSO) approach have become popular for solving unconstrained global optimization (UGO) problems. The PSO approach, which belongs to the swarm intelligence domain, does not require gradient information, enabling it to overcome this limitation of traditional nonlinear programming methods. Unfortunately, PSO algorithm implementation and performance depend on several parameters, such as the cognitive parameter, social parameter, and constriction coefficient. These parameters are tuned by trial and error. To reduce the parametrization of a PSO method, this work presents two efficient hybrid SGO approaches, namely, a real-coded genetic algorithm-based PSO (RGA-PSO) method and an artificial immune algorithm-based PSO (AIA-PSO) method. The specific parameters of the internal PSO algorithm are optimized using the external RGA and AIA approaches, and then the internal PSO algorithm is applied to solve UGO problems. The performances of the proposed RGA-PSO and AIA-PSO algorithms are then evaluated using a set of benchmark UGO problems. Numerical results indicate that, besides their ability to converge to a global minimum for each test UGO problem, the proposed RGA-PSO and AIA-PSO algorithms outperform many hybrid SGO algorithms. Thus, the RGA-PSO and AIA-PSO approaches can be considered alternative SGO approaches for solving standard-dimensional UGO problems.

  18. An Architecture, System Engineering, and Acquisition Approach for Space System Software Resiliency

    Science.gov (United States)

    Phillips, Dewanne Marie

    Software-intensive space systems can harbor defects and vulnerabilities that may enable external adversaries or malicious insiders to disrupt or disable system functions, risking mission compromise or loss. Mitigating this risk demands a sustained focus on the security and resiliency of the system architecture, including software, hardware, and other components. Robust software engineering practices contribute to the foundation of a resilient system, so that the system "can take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time". Software resiliency must be a priority and addressed early in life cycle development to contribute to a secure and dependable space system. Those who develop, implement, and operate software-intensive space systems must determine the factors and systems engineering practices to address when investing in software resiliency. This dissertation offers methodical approaches for improving space system resiliency through software architecture design, system engineering, and increased software security, thereby reducing the risk of latent software defects and vulnerabilities. By giving greater attention to the early life cycle phases of development, we can alter the engineering process to help detect, eliminate, and avoid vulnerabilities before space systems are delivered. To achieve this objective, this dissertation identifies knowledge, techniques, and tools that engineers and managers can utilize to help them recognize how vulnerabilities are produced and discovered, so that they can learn to circumvent them in future efforts. We conducted a systematic review of existing architectural practices, standards, security and coding practices, and the various threats, defects, and vulnerabilities that impact space systems, drawing on hundreds of relevant publications and interviews with subject matter experts. We expanded on the system-level body of knowledge for resiliency and identified a new software

  19. Optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme for Intel Many Integrated Core (MIC) architecture

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.

    2015-05-01

    Intel Many Integrated Core (MIC) architecture ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme on Intel Many Integrated Core (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is familiar to a vast number of CPU developers. However, getting maximum performance out of the Xeon Phi requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved the performance of the original code on a Xeon Phi 7120P by a factor of 1.3x.

  20. Design of pressure vessels using shape optimization: An integrated approach

    Energy Technology Data Exchange (ETDEWEB)

    Carbonari, R.C., E-mail: ronny@usp.br [Department of Mechatronic Engineering, Escola Politecnica da Universidade de Sao Paulo, Av. Prof. Mello Moraes, 2231 Sao Paulo, SP 05508-900 (Brazil); Munoz-Rojas, P.A., E-mail: pablo@joinville.udesc.br [Department of Mechanical Engineering, Universidade do Estado de Santa Catarina, Bom Retiro, Joinville, SC 89223-100 (Brazil); Andrade, E.Q., E-mail: edmundoq@petrobras.com.br [CENPES, PDP/Metodos Cientificos, Petrobras (Brazil); Paulino, G.H., E-mail: paulino@uiuc.edu [Newmark Laboratory, Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, 205 North Mathews Av., Urbana, IL 61801 (United States); Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, 158 Mechanical Engineering Building, 1206 West Green Street, Urbana, IL 61801-2906 (United States); Nishimoto, K., E-mail: knishimo@usp.br [Department of Naval Architecture and Ocean Engineering, Escola Politecnica da Universidade de Sao Paulo, Av. Prof. Mello Moraes, 2231 Sao Paulo, SP 05508-900 (Brazil); Silva, E.C.N., E-mail: ecnsilva@usp.br [Department of Mechatronic Engineering, Escola Politecnica da Universidade de Sao Paulo, Av. Prof. Mello Moraes, 2231 Sao Paulo, SP 05508-900 (Brazil)

    2011-05-15

    Previous papers related to the optimization of pressure vessels have considered the optimization of the nozzle independently from the dished end. This approach generates problems such as thickness variation from nozzle to dished end (coupling cylindrical region) and, as a consequence, reduces the optimality of the final result, which may also be influenced by the boundary conditions. Thus, this work discusses shape optimization of axisymmetric pressure vessels considering an integrated approach in which the entire pressure vessel model is used in conjunction with a multi-objective function that aims to minimize the von Mises mechanical stress from nozzle to head. Representative examples are examined and solutions obtained for the entire vessel considering temperature and pressure loading. It is noteworthy that shapes different from the usual ones are obtained. Even though such different shapes may not be profitable considering present manufacturing processes, they may be competitive for future manufacturing technologies, and they contribute to a better understanding of the actual influence of shape on the behavior of pressure vessels. - Highlights: > Shape optimization of the entire pressure vessel considering an integrated approach. > By increasing the number of spline knots, the convergence stability is improved. > The null angle condition gives lower stress values, resulting in a better design. > The cylinder stresses are very sensitive to the cylinder length. > The shape optimization of the entire vessel must be considered for cylinder length.
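
    For reference, the von Mises equivalent stress minimized here is the standard expression in terms of the principal stresses:

      \sigma_{vM} = \sqrt{ \tfrac{1}{2} \left[ (\sigma_1 - \sigma_2)^2 + (\sigma_2 - \sigma_3)^2 + (\sigma_3 - \sigma_1)^2 \right] }

    so the multi-objective function penalizes this scalar measure along the vessel wall from nozzle to head.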

  1. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    Science.gov (United States)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

    This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); these are compared to a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization, using various numbers of CPUs ranging from 80 to 10,240 units, are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential computational enhancement through parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications, and integrates it with FACET to facilitate the use of the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing computational efficiencies, based on the potential application of optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.

  2. Thread-level parallelization and optimization of NWChem for the Intel MIC architecture

    Energy Technology Data Exchange (ETDEWEB)

    Shan, Hongzhang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); de Jong, Wibe [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-01-01

    In the multicore era it was possible to exploit the increase in on-chip parallelism by simply running multiple MPI processes per chip. Unfortunately, manycore processors' greatly increased thread- and data-level parallelism coupled with a reduced memory capacity demand an altogether different approach. In this paper we explore augmenting two NWChem modules, triples correction of the CCSD(T) and Fock matrix construction, with OpenMP in order that they might run efficiently on future manycore architectures. As the next NERSC machine will be a self-hosted Intel MIC (Xeon Phi) based supercomputer, we leverage an existing MIC testbed at NERSC to evaluate our experiments. In order to proxy the fact that future MIC machines will not have a host processor, we run all of our experiments in native mode. We found that while straightforward application of OpenMP to the deep loop nests associated with the tensor contractions of CCSD(T) was sufficient in attaining high performance, significant effort was required to safely and efficiently thread the TEXAS integral package when constructing the Fock matrix. Ultimately, our new MPI+OpenMP hybrid implementations attain up to 65× better performance for the triples part of the CCSD(T) due in large part to the fact that the limited on-card memory limits the existing MPI implementation to a single process per card. Additionally, we obtain up to 1.6× better performance on Fock matrix constructions when compared with the best MPI implementations running multiple processes per card.

  3. Thread-Level Parallelization and Optimization of NWChem for the Intel MIC Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Shan, Hongzhang; Williams, Samuel; Jong, Wibe de; Oliker, Leonid

    2014-10-10

    In the multicore era it was possible to exploit the increase in on-chip parallelism by simply running multiple MPI processes per chip. Unfortunately, manycore processors' greatly increased thread- and data-level parallelism coupled with a reduced memory capacity demand an altogether different approach. In this paper we explore augmenting two NWChem modules, triples correction of the CCSD(T) and Fock matrix construction, with OpenMP in order that they might run efficiently on future manycore architectures. As the next NERSC machine will be a self-hosted Intel MIC (Xeon Phi) based supercomputer, we leverage an existing MIC testbed at NERSC to evaluate our experiments. In order to proxy the fact that future MIC machines will not have a host processor, we run all of our experiments in native mode. We found that while straightforward application of OpenMP to the deep loop nests associated with the tensor contractions of CCSD(T) was sufficient in attaining high performance, significant effort was required to safely and efficiently thread the TEXAS integral package when constructing the Fock matrix. Ultimately, our new MPI+OpenMP hybrid implementations attain up to 65× better performance for the triples part of the CCSD(T) due in large part to the fact that the limited on-card memory limits the existing MPI implementation to a single process per card. Additionally, we obtain up to 1.6× better performance on Fock matrix constructions when compared with the best MPI implementations running multiple processes per card.

  4. A simplified lumped model for the optimization of post-buckled beam architecture wideband generator

    Science.gov (United States)

    Liu, Weiqun; Formosa, Fabien; Badel, Adrien; Hu, Guangdi

    2017-11-01

    Buckled beam structures are a classical kind of bistable energy harvester that attract increasing interest because of their capability to scavenge energy over a large frequency band in comparison with linear generators. The usual modeling approach uses the Galerkin mode discretization method, which is relatively complex, while the simplification to a single-mode solution lacks accuracy. Design rests on optimizing the features of the energy potential to finally define the physical and geometrical parameters. Therefore, in this paper, a simple lumped model with an explicit relationship between the potential shape and the parameters is proposed to allow efficient design of bistable beam-based generators. The accuracy of the approximate model is studied and its range of applicability analyzed. Moreover, an important fact is found: for low buckling levels and a fixed cross-sectional area, the bending stiffness has little influence on the potential shape. This feature extends the applicable range of the model to designs with a high moment of inertia. Numerical investigations demonstrate that the proposed model is a simple and reliable design tool, and an optimization example using the proposed model shows satisfactory performance.
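
    A lumped bistable harvester of this kind is commonly reduced to a Duffing-type oscillator with a double-well potential. The sketch below (illustrative parameter values, not the paper's model) integrates such a model and counts snap-through events, the inter-well motions that give bistable harvesters their wide bandwidth:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lumped double-well potential U(x) = -0.5*k1*x^2 + 0.25*k3*x^4,
# giving stable equilibria at x = +/- sqrt(k1/k3).
m, c, k1, k3 = 1.0, 0.1, 1.0, 1.0  # illustrative values
F, w = 0.3, 0.8                     # harmonic base excitation

def rhs(t, y):
    x, v = y
    return [v, (-c * v + k1 * x - k3 * x**3 + F * np.cos(w * t)) / m]

sol = solve_ivp(rhs, (0, 200), [0.9, 0.0], max_step=0.05)
x = sol.y[0]
# Inter-well (snap-through) motion shows up as sign changes of x;
# wideband harvesting relies on sustaining these large-amplitude orbits.
print("well crossings:", int(np.sum(np.diff(np.sign(x)) != 0)))
```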

  5. Connecting Architecture and Implementation

    Science.gov (United States)

    Buchgeher, Georg; Weinreich, Rainer

    Software architectures are still typically defined and described independently from implementation. To avoid architectural erosion and drift, the architectural representation needs to be continuously updated and synchronized with the system implementation. Existing approaches to architecture representation, like informal architecture documentation, UML diagrams, and Architecture Description Languages (ADLs), provide only limited support for connecting architecture descriptions and implementations. Architecture management tools like Lattix, SonarJ, and Sotoarc, as well as UML tools, tackle this problem by extracting architecture information directly from code. This approach works for low-level architectural abstractions like classes and interfaces in object-oriented systems but fails to support architectural abstractions not found in programming languages. In this paper we present an approach for linking and continuously synchronizing a formalized architecture representation to an implementation. The approach is a synthesis of functionality provided by code-centric architecture management and UML tools and higher-level architecture analysis approaches like ADLs.

  6. Vector-model-supported approach in prostate plan optimization

    International Nuclear Information System (INIS)

    Liu, Eva Sau Fan; Wu, Vincent Wing Cheung; Harris, Benjamin; Lehman, Margot; Pryor, David; Chan, Lawrence Wing Chi

    2017-01-01

    The lengthy time consumed by traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base for retrieving similar radiotherapy cases was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, consisting of 30 S&S IMRT, 30 1-arc VMAT, and 30 2-arc VMAT cases, each with first and final optimization and with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction, by almost 50%, in planning time and iterations with vector-model-supported optimization. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and fewer iterations.
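
    The retrieval step of such a vector-model approach amounts to a nearest-neighbour search over case feature vectors. A minimal sketch follows, with hypothetical z-scored features standing in for the structural and physiologic features extracted from DICOM:

```python
import numpy as np

# Hypothetical structural/physiologic features per past case
# (e.g., PTV volume, rectum overlap fraction, patient width), z-scored.
reference_cases = np.random.default_rng(0).normal(size=(100, 3))

def most_similar_case(test_features, references):
    """Return the index of the reference case with the highest
    cosine similarity to the test case's feature vector."""
    refs = references / np.linalg.norm(references, axis=1, keepdims=True)
    test = test_features / np.linalg.norm(test_features)
    return int(np.argmax(refs @ test))

test = np.array([0.2, -1.0, 0.5])
idx = most_similar_case(test, reference_cases)
print(f"reuse planning parameters of reference case {idx}")
```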

  7. Vector-model-supported approach in prostate plan optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Eva Sau Fan [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Wu, Vincent Wing Cheung [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Harris, Benjamin [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Lehman, Margot; Pryor, David [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); School of Medicine, University of Queensland (Australia); Chan, Lawrence Wing Chi, E-mail: wing.chi.chan@polyu.edu.hk [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong)

    2017-07-01

    The lengthy time consumed by traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base for retrieving similar radiotherapy cases was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, consisting of 30 S&S IMRT, 30 1-arc VMAT, and 30 2-arc VMAT cases, each with first and final optimization and with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction, by almost 50%, in planning time and iterations with vector-model-supported optimization. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and fewer iterations.

  8. Construction of a Hierarchical Architecture of Covalent Organic Frameworks via a Postsynthetic Approach.

    Science.gov (United States)

    Zhang, Gen; Tsujimoto, Masahiko; Packwood, Daniel; Duong, Nghia Tuan; Nishiyama, Yusuke; Kadota, Kentaro; Kitagawa, Susumu; Horike, Satoshi

    2018-02-21

    Covalent organic frameworks (COFs) represent an emerging class of crystalline porous materials that are constructed by the assembly of organic building blocks linked via covalent bonds. Several strategies have been developed for the construction of new COF structures; however, a facile approach to fabricate hierarchical COF architectures with controlled domain structures remains a significant challenge, and has not yet been achieved. In this study, a dynamic covalent chemistry (DCC)-based postsynthetic approach was employed at the solid-liquid interface to construct such structures. Two-dimensional imine-bonded COFs having different aromatic groups were prepared, and a homogeneously mixed-linker structure and a heterogeneously core-shell hollow structure were fabricated by controlling the reactivity of the postsynthetic reactions. Solid-state nuclear magnetic resonance (NMR) spectroscopy and transmission electron microscopy (TEM) confirmed the structures. COFs prepared by a postsynthetic approach exhibit several functional advantages compared with their parent phases. Their Brunauer-Emmett-Teller (BET) surface areas are 2-fold greater than those of their parent phases because of the higher crystallinity. In addition, the hydrophilicity of the material and the stepwise adsorption isotherms of H2O vapor in the hierarchical frameworks were precisely controlled, which was feasible because of the distribution of various domains of the two COFs by controlling the postsynthetic reaction. The approach opens new routes for constructing COF architectures with functionalities that are not possible in a single phase.

  9. An integrated approach of topology optimized design and selective laser melting process for titanium implants materials.

    Science.gov (United States)

    Xiao, Dongming; Yang, Yongqiang; Su, Xubin; Wang, Di; Sun, Jianfeng

    2013-01-01

    Load-bearing bone implant materials should have sufficient stiffness and large porosity; the two interact, since larger porosity lowers mechanical properties. This paper seeks the maximum-stiffness architecture under a constraint on the volume fraction using a topology optimization approach; that is, maximum porosity can be achieved for predefined stiffness properties. The effective elastic moduli of conventional cubic and topology-optimized scaffolds were calculated using the finite element analysis (FEA) method; in addition, specimens with porosities of 41.1%, 50.3%, 60.2% and 70.7% were fabricated by the Selective Laser Melting (SLM) process and evaluated in compression tests. Results showed that the computational effective elastic modulus of the optimized scaffolds was approximately 13% higher than that of the cubic scaffolds, while the experimental stiffness values were about 76% lower than the computational ones. The combination of the topology optimization approach and the SLM process should be useful for the development of titanium implant materials in consideration of both porosity and mechanical stiffness.
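
    The porosity-stiffness trade-off the authors describe is often approximated with a Gibson-Ashby power law for cellular solids. A minimal sketch, assuming an exponent of 2 and a Ti-6Al-4V solid modulus (illustrative values, not the paper's FEA results):

```python
# Gibson-Ashby scaling: E_eff ~ E_solid * (relative density)^n,
# with n ~ 2 for open-cell lattices (illustrative, not the paper's FEA).
E_solid = 110.0  # GPa, roughly Ti-6Al-4V
n = 2.0

for porosity in (0.411, 0.503, 0.602, 0.707):  # porosities from the paper
    rel_density = 1.0 - porosity
    E_eff = E_solid * rel_density ** n
    print(f"porosity {porosity:.1%}: E_eff ~ {E_eff:5.1f} GPa")
```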

  10. Parameterization of Fuel-Optimal Synchronous Approach Trajectories to Tumbling Targets

    Directory of Open Access Journals (Sweden)

    David Charles Sternberg

    2018-04-01

    Docking with potentially tumbling Targets is a common element of many mission architectures, including on-orbit servicing and active debris removal. This paper studies synchronized docking trajectories as a way to ensure the Chaser satellite remains on the docking axis of the tumbling Target, thereby reducing collision risks and enabling persistent onboard sensing of the docking location. Chaser satellites have limited computational power available to them and the time allowed for the determination of a fuel optimal trajectory may be limited. Consequently, parameterized trajectories that approximate the fuel optimal trajectory while following synchronous approaches may be used to provide a computationally efficient means of determining near optimal trajectories to a tumbling Target. This paper presents a method of balancing the computation cost with the added fuel expenditure required for parameterization, including the selection of a parameterization scheme, the number of parameters in the parameterization, and a means of incorporating the dynamics of a tumbling satellite into the parameterization process. Comparisons of the parameterized trajectories are made with the fuel optimal trajectory, which is computed through the numerical propagation of Euler’s equations. Additionally, various tumble types are considered to demonstrate the efficacy of the presented computation scheme. With this parameterized trajectory determination method, Chaser satellites may perform terminal approach and docking maneuvers with both fuel and computational efficiency.

  11. AN ARCHITECTURAL APPROACH FOR QUALITY IMPROVING OF ANDROID APPLICATIONS DEVELOPMENT WHICH IMPLEMENTED TO COMMUNICATION APPLICATION FOR MECHATRONICS ROBOT LABORATORY ONAFT

    Directory of Open Access Journals (Sweden)

    V. Makarenko

    2017-11-01

    Developing a proper system architecture is a critical factor in the success of a project. After the analysis phase is complete, system design begins. For an effective solution, it is very important that the design be flexible and scalable. During system design, the component composition and development tools are determined. The system design phase is an opportunity to maximize the speed and effectiveness of subsequent development. There are quite a lot of architectural approaches to building systems. Despite their small differences, they have much in common: they all define ways of splitting an application into separate layers, and in each system there is, at a minimum, a layer containing the business logic of the application, a layer for data interaction, and a layer for displaying data. The "Clean Architecture" approach has been analyzed and adapted to the development of the communication application for the mechatronics robot laboratory. This approach solves the main problems of building the application architecture: it makes the code modular, testable, and easily readable, and it positively affects the quality of development. The new architectural components introduced by Google in 2017 were also considered. The analysis showed that the Architecture Components fit well into the concept and interact well with the "Clean Architecture" approach. The Dagger 2 framework was applied for complete abstraction and to simplify testing. It is also planned to integrate the RxJava library.

  12. Optimized GF(2k) ONB type I multiplier architecture based on the Massey-Omura multiplication pattern

    International Nuclear Information System (INIS)

    Fournaris, A P; Koufopavlou, O

    2005-01-01

    Multiplication in GF(2^k) finite fields is rapidly becoming a very promising solution for fast, small, efficient binary algorithms designed for hardware applications. GF(2^k) finite fields defined over optimal normal bases (ONB) can be very advantageous in terms of gate count and multiplication time delay. Many ONB multiplier designs based on the Massey-Omura multiplication pattern have been proposed. In this paper, a method for designing type I optimal normal basis multipliers and an optimal normal basis (ONB) type I multiplier hardware architecture are proposed that, through parallelism and pairing categorization of the ONB multiplication table matrix, achieve very interesting results in terms of gate count and multiplication time delay.
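
    For intuition about GF(2^k) arithmetic in software, the sketch below multiplies field elements in the more familiar polynomial basis (GF(2^8) with the AES reduction polynomial); this is not the paper's normal-basis Massey-Omura architecture, whose appeal is precisely that hardware can avoid this shift-and-reduce loop:

```python
def gf2k_mul(a: int, b: int, poly: int = 0x11B, k: int = 8) -> int:
    """Multiply two elements of GF(2^k) in a polynomial basis.
    poly is the irreducible reduction polynomial (0x11B = x^8+x^4+x^3+x+1,
    the AES choice). Hardware ONB multipliers avoid this sequential
    shift-and-reduce loop, which is the point of the Massey-Omura scheme."""
    result = 0
    for _ in range(k):
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a >> k:          # degree overflow: reduce modulo poly
            a ^= poly
    return result

assert gf2k_mul(0x53, 0xCA) == 0x01  # known multiplicative inverse pair
```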

  13. Deciphering the genomic architecture of the stickleback brain with a novel multilocus gene-mapping approach.

    Science.gov (United States)

    Li, Zitong; Guo, Baocheng; Yang, Jing; Herczeg, Gábor; Gonda, Abigél; Balázs, Gergely; Shikano, Takahito; Calboli, Federico C F; Merilä, Juha

    2017-03-01

    Quantitative traits important to organismal function and fitness, such as brain size, are presumably controlled by many small-effect loci. Deciphering the genetic architecture of such traits with traditional quantitative trait locus (QTL) mapping methods is challenging. Here, we investigated the genetic architecture of brain size (and the size of five different brain parts) in nine-spined sticklebacks (Pungitius pungitius) with the aid of novel multilocus QTL-mapping approaches based on a de-biased LASSO method. Apart from having more statistical power to detect QTL and a lower rate of false positives than conventional QTL-mapping approaches, the developed methods can handle large marker panels and provide estimates of genomic heritability. Single-locus analyses of an F2 interpopulation cross with 239 individuals and 15,198 fully informative single-nucleotide polymorphisms (SNPs) uncovered 79 QTL associated with variation in stickleback brain size traits. Many of these loci were in strong linkage disequilibrium (LD) with each other, and consequently, a multilocus mapping of individual SNPs, accounting for the LD structure in the data, recovered only four significant QTL. However, a multilocus mapping of SNPs grouped by linkage group (LG) identified 14 LGs (1-6 depending on the trait) that influence variation in brain traits. For instance, 17.6% of the variation in relative brain size was explainable by the cumulative effects of SNPs distributed over six LGs, whereas 42% of the variation was accounted for by all 21 LGs. Hence, the results suggest that variation in stickleback brain traits is influenced by many small-effect loci. Apart from suggesting a moderately heritable (h^2 ≈ 0.15-0.42) multifactorial genetic architecture of brain traits, the results highlight the challenges in identifying the loci contributing to variation in quantitative traits. Nevertheless, the results demonstrate that the novel QTL-mapping approach developed here has distinctive advantages.
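
    The multilocus idea can be sketched with an ordinary LASSO on simulated genotypes; the de-biased LASSO used in the paper adds a bias-correction step for valid inference that is omitted here. All sizes and effect values below are toy assumptions:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n, p = 239, 500  # individuals x SNP markers (toy scale)
X = rng.binomial(2, 0.5, size=(n, p)).astype(float)  # genotypes 0/1/2

beta = np.zeros(p)
beta[[10, 42, 300]] = [0.4, -0.3, 0.25]        # three small-effect loci
y = X @ beta + rng.normal(scale=1.0, size=n)   # simulated phenotype

model = LassoCV(cv=5).fit(X, y)
hits = np.flatnonzero(model.coef_ != 0)
print("selected markers:", hits[:10])
# A de-biased LASSO (as in the paper) would additionally correct the
# shrinkage bias of these estimates to obtain valid significance tests.
```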

  14. A Statistical Approach to Optimizing Concrete Mixture Design

    OpenAIRE

    Ahmad, Shamsad; Alghamdi, Saeid A.

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicate...
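
    A full factorial design with three factors at three levels each is straightforward to enumerate; the factor names and levels below are illustrative, not the mixtures of the study:

```python
from itertools import product

# Three mixture factors at three levels each (labels illustrative)
factors = {
    "w/c ratio":       [0.35, 0.45, 0.55],
    "cementitious":    [350, 400, 450],   # kg/m^3
    "fine/total agg.": [0.35, 0.40, 0.45],
}

runs = list(product(*factors.values()))
print(f"{len(runs)} trial mixtures")  # 3^3 = 27
for run in runs[:3]:
    print(dict(zip(factors, run)))
```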

  15. On the EU approach for DEMO architecture exploration and dealing with uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Coleman, M., E-mail: matti.coleman@euro-fusion.org [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); CCFE Fusion Association, Culham Science Centre, Abingdon, Oxfordshire OX14 3DB (United Kingdom); Maviglia, F.; Bachmann, C. [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); Anthony, J. [CCFE Fusion Association, Culham Science Centre, Abingdon, Oxfordshire OX14 3DB (United Kingdom); Federici, G. [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); Shannon, M. [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); CCFE Fusion Association, Culham Science Centre, Abingdon, Oxfordshire OX14 3DB (United Kingdom); Wenninger, R. [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); Max-Planck-Institut für Plasmaphysik, 85748 Garching (Germany)

    2016-11-01

    Highlights: • The issue of epistemic uncertainties in the DEMO design basis is described. • An approach to tackle uncertainty by investigating plant architectures is proposed. • The first wall heat load uncertainty is addressed following the proposed approach. - Abstract: One of the difficulties inherent in designing a future fusion reactor is dealing with uncertainty. As the major step between ITER and the commercial exploitation of nuclear fusion energy, DEMO will have to address many challenges – the natures of which are still not fully known. Unlike fission reactors, fusion reactors suffer from the intrinsic complexity of the tokamak (numerous interdependent system parameters) and from the dependence of plasma physics on scale – prohibiting design exploration founded on incremental progression and small-scale experimentation. For DEMO, this means that significant technical uncertainties will exist for some time to come, and a systems engineering design exploration approach must be developed to explore the reactor architecture when faced with these uncertainties. Important uncertainties in the context of fusion reactor design are discussed and a strategy for dealing with these is presented, treating the uncertainty in the first wall loads as an example.

  16. Terminal Control Area Aircraft Scheduling and Trajectory Optimization Approaches

    Directory of Open Access Journals (Sweden)

    Samà Marcella

    2017-01-01

    Aviation authorities are seeking optimization methods to better use the available infrastructure and better manage aircraft movements. This paper deals with the real-time scheduling of take-off and landing aircraft at a busy terminal control area and with the optimization of aircraft trajectories during landing procedures. The first problem aims to reduce the propagation of delays, while the second aims to either minimize travel time or reduce fuel consumption. Both problems are particularly complex: the first is NP-hard, the second is nonlinear, and a combined solution must be computed in a short time during operations. This paper proposes a framework for the lexicographic optimization of the two problems. Computational experiments are performed for the Milano Malpensa airport and show the gaps that exist between the performance indicators of the two problems when different lexicographic optimization approaches are considered.
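
    The lexicographic idea, optimizing the primary objective first and then the secondary objective subject to near-optimality of the first, can be sketched with two linear programs; the toy constraints below stand in for the scheduling and trajectory models:

```python
import numpy as np
from scipy.optimize import linprog

# Toy stand-in: x = (delay-related, fuel-related) decision variables
# subject to shared capacity constraints A x <= b, x >= 0.
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([10.0, 15.0])
c_primary = np.array([-1.0, 0.0])    # stage 1: maximize x0 (min -x0)
c_secondary = np.array([0.0, -1.0])  # stage 2: maximize x1

s1 = linprog(c_primary, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Stage 2: keep the stage-1 objective within a tolerance of its optimum.
tol = 1e-6
A2 = np.vstack([A, c_primary])
b2 = np.append(b, s1.fun + tol)
s2 = linprog(c_secondary, A_ub=A2, b_ub=b2, bounds=[(0, None)] * 2)
print("lexicographic solution:", s2.x)
```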

  17. An intuitionistic fuzzy optimization approach to vendor selection problem

    Directory of Open Access Journals (Sweden)

    Prabjot Kaur

    2016-09-01

    Selecting the right vendor is an important business decision made by any organization. The decision involves multiple criteria, and if the objectives vary in preference and scope, the decision becomes multiobjective in nature. In this paper, a vendor selection problem is formulated as an intuitionistic fuzzy multiobjective optimization in which an appropriate number of vendors is to be selected and orders allocated to them. The problem includes three objectives: minimizing the net price, maximizing the quality, and maximizing on-time deliveries, subject to the suppliers' constraints. The objective functions and the demand are treated as intuitionistic fuzzy sets, which can handle uncertainty with additional degrees of freedom. The intuitionistic fuzzy optimization (IFO) problem is converted into a crisp linear form and solved using the optimization software TORA. The advantage of IFO is that it gives better results than fuzzy or crisp optimization. The proposed approach is illustrated with a numerical example.

  18. A Novel Measurement Matrix Optimization Approach for Hyperspectral Unmixing

    Directory of Open Access Journals (Sweden)

    Su Xu

    2017-01-01

    Each pixel in the hyperspectral unmixing process is modeled as a linear combination of endmembers, that is, of a number of pure spectral signatures that are known in advance. However, the limitations of Gaussian random measurement matrices in computational complexity and sparsity affect efficiency and accuracy. This paper proposes a novel approach for the optimization of the measurement matrix in compressive sensing (CS) theory for hyperspectral unmixing. Firstly, a new Toeplitz-structured chaotic measurement matrix (TSCMM) is formed from pseudo-random chaotic elements, which can be implemented by simple hardware; secondly, rank-revealing QR factorization with eigenvalue decomposition is presented to speed up the measurement time; finally, an orthogonal gradient descent method for measurement matrix optimization is used to achieve optimal incoherence. Experimental results demonstrate that the proposed approach leads to better CS reconstruction performance with low extra computational cost in hyperspectral unmixing.
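
    The two building blocks, a Toeplitz matrix driven by a chaotic sequence and a mutual-coherence measure of the result, can be sketched as follows (logistic-map parameters and sizes are illustrative; the paper's TSCMM construction and optimization details may differ):

```python
import numpy as np
from scipy.linalg import toeplitz

def logistic_sequence(n, x0=0.3141, r=4.0):
    """Chaotic logistic-map sequence, mapped to +/-1 symbols."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return np.sign(xs - 0.5)

m, n = 32, 128  # measurements x signal length
full = logistic_sequence(m + n - 1)
col, row = full[n - 1:], full[n - 1::-1]   # share the corner element
Phi = toeplitz(col, row)                   # m x n Toeplitz matrix

def mutual_coherence(A):
    """Largest absolute inner product between distinct unit columns."""
    A = A / np.linalg.norm(A, axis=0)
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    return G.max()

print(f"coherence: {mutual_coherence(Phi):.3f}")
```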

  19. Designing area optimized application-specific network-on-chip architectures while providing hard QoS guarantees.

    Directory of Open Access Journals (Sweden)

    Sajid Gul Khawaja

    With the increase in transistor density, the popularity of the System on Chip (SoC) has grown exponentially, and the Network on Chip (NoC) framework has been adopted as its communication backbone. In this paper, we propose a methodology for designing area-optimized application-specific NoCs that provide hard Quality of Service (QoS) guarantees for real-time flows. The novelty of the proposed system lies in the derivation of a Mixed Integer Linear Programming model, which is then used to generate a resource-optimal NoC topology and architecture while considering traffic and QoS requirements. We also present the micro-architectural design features used for enabling traffic and latency guarantees and discuss how the solution adapts to dynamic variations in the application traffic. The paper highlights the effectiveness of the proposed method by generating resource-efficient NoC solutions for both industrial and benchmark applications. The area-optimized results are generated in a few seconds by the proposed technique, without resorting to heuristics, even for an application with 48 traffic flows.

  20. Optimal Charging of Electric Drive Vehicles: A Dynamic Programming Approach

    DEFF Research Database (Denmark)

    Delikaraoglou, Stefanos; Capion, Karsten Emil; Juul, Nina

    2013-01-01

    , therefore, we propose an ex ante vehicle aggregation approach. We illustrate the results in a Danish case study and find that, although optimal management of the vehicles does not allow for storage and day-to-day flexibility in the electricity system, the market provides incentive for intra-day flexibility....
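
    The dynamic programming formulation for a single vehicle can be sketched as a shortest-path recursion over a discretized battery state; prices, capacity, and charge limits below are illustrative, not the Danish case data:

```python
import numpy as np

prices = [0.30, 0.25, 0.12, 0.10, 0.15, 0.28]  # EUR/kWh per hour (toy)
levels = np.arange(0, 25, 1.0)   # battery state grid, 0..24 kWh
max_rate = 6.0                   # kWh charged per hour at most
target = 20.0                    # required energy at departure

INF = float("inf")
T = len(prices)
# cost[t][i]: minimal cost to reach hour t with levels[i] in the battery
cost = np.full((T + 1, len(levels)), INF)
cost[0, 0] = 0.0

for t in range(T):
    for i, e in enumerate(levels):
        if cost[t, i] == INF:
            continue
        for j, e_next in enumerate(levels):
            charged = e_next - e
            if 0 <= charged <= max_rate:
                c = cost[t, i] + charged * prices[t]
                cost[t + 1, j] = min(cost[t + 1, j], c)

feasible = levels >= target
print(f"minimal charging cost: {cost[T, feasible].min():.2f} EUR")
```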

  1. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha

    2013-01-01

    This chapter is devoted to the study of an extension of the dynamic programming approach that allows sequential optimization of exact decision rules relative to length and coverage. It also contains results of experiments with decision tables from the UCI Machine Learning Repository. © Springer-Verlag Berlin Heidelberg 2013.

  2. Approaches to the Optimal Nonlinear Analysis of Microcalorimeter Pulses

    Science.gov (United States)

    Fowler, J. W.; Pappas, C. G.; Alpert, B. K.; Doriese, W. B.; O'Neil, G. C.; Ullom, J. N.; Swetz, D. S.

    2018-03-01

    We consider how to analyze microcalorimeter pulses for quantities that are nonlinear in the data, while preserving the signal-to-noise advantages of linear optimal filtering. We successfully apply our chosen approach to compute the electrothermal feedback energy deficit (the "Joule energy") of a pulse, which has been proposed as a linear estimator of the deposited photon energy.
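
    The linear optimal filtering baseline that such nonlinear estimators build on reduces, for white noise, to a least-squares amplitude fit against a pulse template. A minimal sketch with a simulated pulse:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(512)
template = np.exp(-t / 80.0) - np.exp(-t / 8.0)  # toy pulse shape s(t)

true_amp = 3.0
data = true_amp * template + rng.normal(scale=0.2, size=t.size)

# Optimal linear estimate under white noise: A = <s, d> / <s, s>.
# (With correlated noise, replace dot products by s^T R^{-1} d, etc.)
amp_hat = template @ data / (template @ template)
print(f"estimated amplitude: {amp_hat:.3f}")
```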

  3. A design approach for integrating thermoelectric devices using topology optimization

    International Nuclear Information System (INIS)

    Soprani, S.; Haertel, J.H.K.; Lazarov, B.S.; Sigmund, O.; Engelbrecht, K.

    2016-01-01

    Highlights: • The integration of a thermoelectric (TE) cooler into a robotic tool is optimized. • Topology optimization is suggested as a design tool for TE integrated systems. • A 3D optimization technique using temperature-dependent TE properties is presented. • The sensitivity of the optimization process to the boundary conditions is studied. • A working prototype is constructed and compared to the model results. - Abstract: Efficient operation of thermoelectric devices strongly relies on their thermal integration into the energy conversion system in which they operate. Effective thermal integration reduces the temperature differences between the thermoelectric module and its thermal reservoirs, allowing the system to operate more efficiently. This work proposes and experimentally demonstrates a topology optimization approach as a design tool for efficient integration of thermoelectric modules into systems with specific design constraints. The approach allows thermal layout optimization of thermoelectric systems for different operating conditions and objective functions, such as temperature span, efficiency, and power recovery rate. As a specific application, the integration of a thermoelectric cooler into the electronics section of a downhole oil well intervention tool is investigated, with the objective of minimizing the temperature of the cooled electronics. Several challenges are addressed: ensuring effective heat transfer from the load, minimizing the thermal resistances within the integrated system, maximizing the thermal protection of the cooled zone, and enhancing the conduction of the rejected heat to the oil well. The design method incorporates temperature-dependent properties of the thermoelectric device and other materials. The 3D topology optimization model developed in this work was used to design a thermoelectric system, complete with insulation and heat sink, that was produced and tested. Good agreement between experimental results and model predictions was observed.

  4. Minimizing transient influence in WHPA delineation: An optimization approach for optimal pumping rate schemes

    Science.gov (United States)

    Rodriguez-Pretelin, A.; Nowak, W.

    2017-12-01

    For most groundwater protection management programs, Wellhead Protection Areas (WHPAs) have served as the primary protection measure. In their delineation, the influence of time-varying groundwater flow conditions is often underestimated because steady-state assumptions are commonly made. However, it has been demonstrated that temporal variations lead to significant changes in the required size and shape of WHPAs. Apart from natural transient groundwater drivers (e.g., changes in the regional angle of flow direction and seasonal natural groundwater recharge), anthropogenic causes such as transient pumping rates are among the most influential factors requiring larger WHPAs. We hypothesize that WHPA programs that integrate adaptive, optimized pumping-injection management schemes can counter transient effects and thus reduce the additional areal demand of well protection under transient conditions. The main goal of this study is to present a novel management framework that optimizes pumping schemes dynamically in order to minimize the impact triggered by transient conditions on WHPA delineation. For optimizing pumping schemes, we consider three objectives: (1) to minimize the risk of pumping water from outside a given WHPA, (2) to maximize the groundwater supply, and (3) to minimize the operating costs involved. We solve transient groundwater flow with an available transient groundwater model and Lagrangian particle tracking. The optimization problem is formulated as a dynamic programming problem. Two different optimization approaches are explored: (I) the first aims for single-objective optimization under objective (1) only; (II) the second performs multiobjective optimization under all three objectives, where compromise pumping rates are selected from the current Pareto front. Finally, we look for WHPA outlines that are as small as possible, yet allow the optimization problem to find the most suitable solutions.

  5. EASEE: an open architecture approach for modeling battlespace signal and sensor phenomenology

    Science.gov (United States)

    Waldrop, Lauren E.; Wilson, D. Keith; Ekegren, Michael T.; Borden, Christian T.

    2017-04-01

    Open architecture in the context of defense applications encourages collaboration across government agencies and academia. This paper describes a success story in the implementation of an open architecture framework that fosters transparency and modularity in the context of Environmental Awareness for Sensor and Emitter Employment (EASEE), a complex physics-based software package for modeling the effects of terrain and atmospheric conditions on signal propagation and sensor performance. Among the highlighted features in this paper are: (1) a code refactoring to separate sensitive parts of EASEE, thus allowing collaborators the opportunity to view and interact with non-sensitive parts of the EASEE framework with the end goal of supporting collaborative innovation, (2) a data exchange and validation effort to enable the dynamic addition of signatures within EASEE, thus supporting the modular notion that components can be easily added to or removed from the software without requiring recompilation by developers, and (3) a flexible and extensible XML interface, which aids in decoupling graphical user interfaces from EASEE's calculation engine and thus encourages adaptability to many different defense applications. In addition to the points outlined above, this paper also addresses EASEE's ability to interface with proprietary systems such as ArcGIS. A specific use case is discussed regarding the implementation of an ArcGIS toolbar that leverages EASEE's XML interface and enables users to set up an EASEE-compliant configuration for probability-of-detection or optimal sensor placement calculations in various modalities.

  6. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning

    International Nuclear Information System (INIS)

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-01-01

    Purpose: In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (pareto), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. Methods: pareto is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. Results: pareto results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that pareto shows

  7. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning.

    Science.gov (United States)

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-09-01

    In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (pareto), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. pareto is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. pareto results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that pareto shows promise in optimizing the number
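
    The core operation of any Pareto-aware optimizer like pareto is identifying the non-dominated set among candidate plans. A minimal sketch for two minimization objectives (the GA machinery and dose calculation are omitted):

```python
import numpy as np

def pareto_front(costs):
    """Return a boolean mask of non-dominated rows, assuming every
    objective is to be minimized (e.g., OAR dose vs. PTV nonuniformity)."""
    n = costs.shape[0]
    nondominated = np.ones(n, dtype=bool)
    for i in range(n):
        if not nondominated[i]:
            continue
        # j dominates i if j is <= everywhere and < somewhere
        dominates = np.all(costs <= costs[i], axis=1) & \
                    np.any(costs < costs[i], axis=1)
        if dominates.any():
            nondominated[i] = False
    return nondominated

rng = np.random.default_rng(2)
plans = rng.random((200, 2))   # (OAR penalty, PTV penalty) per plan
front = plans[pareto_front(plans)]
print(f"{len(front)} non-dominated plans out of {len(plans)}")
```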

  8. Runtime QoS control and revenue optimization within service oriented architecture

    NARCIS (Netherlands)

    Zivkovic, Miroslav

    2014-01-01

    The paradigms of service-oriented computing (SOC) and its underlying service-oriented architecture (SOA) have changed the way software applications are designed, developed, deployed, and consumed. Software engineers can therefore realize applications by service composition, using services offered by

  9. RFID-WSN integrated architecture for energy and delay- aware routing a simulation approach

    CERN Document Server

    Ahmed, Jameel; Tayyab, Muhammad; Nawaz, Menaa

    2015-01-01

    The book identifies the performance challenges concerning Wireless Sensor Networks (WSN) and Radio Frequency Identification (RFID) and analyzes their impact on the performance of routing protocols. It presents a thorough literature survey to identify the issues affecting routing protocol performance, as well as a mathematical model for calculating the end-to-end delays of the routing protocol ACQUIRE; a comparison of two routing protocols (ACQUIRE and DIRECTED DIFFUSION) is also provided for evaluation purposes. On the basis of the results and the literature review, recommendations are made for better selection of protocols regarding the nature of the respective application and its related challenges. In addition, this book covers a proposed simulator that integrates both RFID and WSN technologies. The manuscript is therefore divided into two major parts: an integrated architecture of smart nodes, and a power-optimized protocol for query and information interchange.

  10. Application of probabilistic risk based optimization approaches in environmental restoration

    International Nuclear Information System (INIS)

    Goldammer, W.

    1995-01-01

    The paper presents a general approach to site-specific risk assessments and optimization procedures. In order to account for uncertainties in the assessment of the current situation and of future developments, optimization parameters are treated as probabilistic distributions. The assessments are performed within the framework of a cost-benefit analysis. Radiation hazards and conventional risks are treated within an integrated approach. Special consideration is given to the consequences of low-probability events such as earthquakes or major floods. Risks and financial costs are combined into an overall figure of detriment, allowing one to distinguish between the benefits of the available reclamation options. The probabilistic analysis uses a Monte Carlo simulation technique. The paper demonstrates the applicability of this approach in aiding reclamation planning using an example from the German reclamation program for uranium mining and milling sites
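
    The combination of probabilistic parameters into an overall figure of detriment can be sketched with a small Monte Carlo comparison of reclamation options; all distributions and the dose-weighting factor below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000  # Monte Carlo samples

# Two hypothetical reclamation options; distributions are illustrative.
options = {
    # (cost mean/sd in M EUR, annual dose mean/sd in person-Sv)
    "cover in place": ((40, 5), (2.0, 0.8)),
    "relocate tailings": ((90, 15), (0.5, 0.2)),
}
VALUE_OF_DOSE = 10.0  # M EUR per person-Sv, alpha-value style weighting

for name, ((c_mu, c_sd), (d_mu, d_sd)) in options.items():
    cost = rng.normal(c_mu, c_sd, N)
    dose = np.clip(rng.normal(d_mu, d_sd, N), 0, None)
    detriment = cost + VALUE_OF_DOSE * dose   # overall figure of merit
    print(f"{name}: mean detriment {detriment.mean():.1f}, "
          f"95th pct {np.percentile(detriment, 95):.1f} M EUR")
```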

  11. Stochastic optimization in insurance a dynamic programming approach

    CERN Document Server

    Azcue, Pablo

    2014-01-01

    The main purpose of the book is to show how a viscosity approach can be used to tackle control problems in insurance. The problems covered are the maximization of survival probability as well as the maximization of dividends in the classical collective risk model. The authors consider the possibility of controlling the risk process by reinsurance as well as by investments. They show that optimal value functions are characterized as either the unique or the smallest viscosity solution of the associated Hamilton-Jacobi-Bellman equation; they also study the structure of the optimal strategies and show how to find them. The viscosity approach has been widely used in control problems related to mathematical finance, but until quite recently it was not used to solve control problems in actuarial mathematics. This book is designed to familiarize the reader with this approach. The intended audience is graduate students as well as researchers in this area.

  12. Impact of contour on aesthetic judgments and approach-avoidance decisions in architecture

    Science.gov (United States)

    Vartanian, Oshin; Navarrete, Gorka; Chatterjee, Anjan; Fich, Lars Brorson; Leder, Helmut; Modroño, Cristián; Nadal, Marcos; Rostrup, Nicolai; Skov, Martin

    2013-01-01

    On average, we urban dwellers spend about 90% of our time indoors, and share the intuition that the physical features of the places we live and work in influence how we feel and act. However, there is surprisingly little research on how architecture impacts behavior, much less on how it influences brain function. To begin closing this gap, we conducted a functional magnetic resonance imaging study to examine how systematic variation in contour impacts aesthetic judgments and approach-avoidance decisions, outcome measures of interest to both architects and users of spaces alike. As predicted, participants were more likely to judge spaces as beautiful if they were curvilinear than rectilinear. Neuroanatomically, when contemplating beauty, curvilinear contour activated the anterior cingulate cortex exclusively, a region strongly responsive to the reward properties and emotional salience of objects. Complementing this finding, pleasantness—the valence dimension of the affect circumplex—accounted for nearly 60% of the variance in beauty ratings. Furthermore, activation in a distributed brain network known to underlie the aesthetic evaluation of different types of visual stimuli covaried with beauty ratings. In contrast, contour did not affect approach-avoidance decisions, although curvilinear spaces activated the visual cortex. The results suggest that the well-established effect of contour on aesthetic preference can be extended to architecture. Furthermore, the combination of our behavioral and neural evidence underscores the role of emotion in our preference for curvilinear objects in this domain. PMID:23754408

  13. A Hybrid Heuristic Optimization Approach for Leak Detection in Pipe Networks Using Ordinal Optimization Approach and the Symbiotic Organism Search

    Directory of Open Access Journals (Sweden)

    Chao-Chih Lin

    2017-10-01

    A new transient-based hybrid heuristic approach is developed to optimize the transient generation process and to detect leaks in pipe networks. The approach couples the ordinal optimization approach (OOA) and the symbiotic organism search (SOS) to solve the optimization problem iteratively. A pipe network analysis model (PNSOS) is first used to determine the steady-state head distribution and pipe flow rates. The best transient generation point and its relevant valve operation parameters are optimized by maximizing the objective function of transient energy. The transient event is created at the chosen point, and the method of characteristics (MOC) is used to analyze the transient flow. The OOA is applied to sift through the candidate pipes and the initial organisms with leak information. The SOS is employed to determine the leaks by minimizing the sum of differences between simulated and computed heads at the observation points. Two synthetic leak scenarios, a simple pipe network and a water distribution network (WDN), are chosen to test the performance of the leak detection ordinal symbiotic organism search (LDOSOS). Leak information can be accurately identified by the proposed approach in both scenarios. The presented technique makes a remarkable contribution to the success of leak detection in pipe networks.

  14. Hybrid Swarm Intelligence Optimization Approach for Optimal Data Storage Position Identification in Wireless Sensor Networks

    Science.gov (United States)

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The rapid growth of data storage has made it a strategic task in the world of networking. Storage mainly depends on the sensor nodes called producers, on base stations, and on the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find optimal data storage positions in wireless sensor networks. Earlier work did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find suitable positions for storage nodes while minimizing the total energy cost of data transmission. Clustering-based distributed data storage is utilized, with the clustering problem solved using the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator, and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than earlier approaches. PMID:25734182
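
    A bare-bones global-best particle swarm optimizer illustrates the search mechanism; the cost function below is a toy stand-in for the paper's energy cost of data transmission, and the hybridization and clustering steps are omitted:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5, 5), seed=0):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)]                 # global best
    w, c1, c2 = 0.7, 1.5, 1.5                     # common constants
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# Toy stand-in for "total energy cost of data transmission": summed
# distance from a storage node position to three fixed network nodes.
nodes = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0]])
cost = lambda p: np.sum(np.linalg.norm(nodes - p, axis=1))
best, best_cost = pso(cost, dim=2)
print(f"storage position {best.round(2)}, cost {best_cost:.3f}")
```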

  15. Simulation-based evaluation and optimization of a new CdZnTe gamma-camera architecture (HiSens)

    International Nuclear Information System (INIS)

    Robert, Charlotte; Montemont, Guillaume; Rebuffel, Veronique; Guerin, Lucie; Verger, Loick; Buvat, Irene

    2010-01-01

    A new gamma-camera architecture named HiSens is presented and evaluated. It consists of a parallel-hole collimator, a pixelated CdZnTe (CZT) detector associated with specific electronics for 3D localization, and dedicated reconstruction algorithms. To gain efficiency, a high-aperture collimator is used. The spatial resolution is preserved thanks to accurate 3D localization of the interactions inside the detector, based on fine sampling of the CZT detector and on depth-of-interaction information. The performance of this architecture is characterized using Monte Carlo simulations in both planar and tomographic modes. Detective quantum efficiency (DQE) computations are then used to optimize the collimator aperture. In planar mode, the simulations show that the fine CZT detector pixelization increases the system sensitivity by a factor of 2 compared to a standard Anger camera, without loss of spatial resolution. These results are then validated against experimental data. In SPECT, Monte Carlo simulations confirm the merits of the HiSens architecture observed in planar imaging.

  16. A Risk-Constrained Multi-Stage Decision Making Approach to the Architectural Analysis of Mars Missions

    Science.gov (United States)

    Kuwata, Yoshiaki; Pavone, Marco; Balaram, J. (Bob)

    2012-01-01

    This paper presents a novel risk-constrained multi-stage decision-making approach to the architectural analysis of planetary rover missions. In particular, focusing on a 2018 Mars rover concept, which was considered as part of a potential Mars Sample Return campaign, we model the entry, descent, and landing (EDL) phase and the rover traverse phase as four sequential decision-making stages. The problem is to find a sequence of divert and driving maneuvers such that the rover drive is minimized and the probability of a mission failure (e.g., due to a failed landing) is below a user-specified bound. By solving this problem for several different values of the model parameters (e.g., divert authority), this approach enables rigorous, accurate, and systematic trade-offs for the EDL system vs. the mobility system and, more generally, cross-domain trade-offs for the different phases of a space mission. The overall optimization problem can be seen as a chance-constrained dynamic programming problem, with the additional complexity that 1) in some stages the disturbances do not have any probabilistic characterization, and 2) the state space is extremely large (i.e., hundreds of millions of states for trade-offs with high-resolution Martian maps). To this end, we solve the problem by performing an unconventional combination of average and minimax cost analysis and by leveraging highly efficient computational tools from the image processing community. Preliminary trade-off results are presented.

  17. Robust and optimal control a two-port framework approach

    CERN Document Server

    Tsai, Mi-Ching

    2014-01-01

    A Two-port Framework for Robust and Optimal Control introduces an alternative approach to robust and optimal controller synthesis procedures for linear, time-invariant systems, based on the two-port system widespread in electrical engineering. The novel use of the two-port system in this context allows straightforward engineering-oriented solution-finding procedures to be developed, requiring no mathematics beyond linear algebra. A chain-scattering description provides a unified framework for constructing the stabilizing controller set and for synthesizing H2 optimal and H∞ sub-optimal controllers. Simple yet illustrative examples explain each step. A Two-port Framework for Robust and Optimal Control features: • a hands-on, tutorial-style presentation giving the reader the opportunity to repeat the designs presented and easily modify them for their own programs; • an abundance of examples illustrating the most important steps in robust and optimal design; and • ...

  18. Efficient and Robust Data Collection Using Compact Micro Hardware, Distributed Bus Architectures and Optimizing Software

    Science.gov (United States)

    Chau, Savio; Vatan, Farrokh; Randolph, Vincent; Baroth, Edmund C.

    2006-01-01

    Future in-space propulsion systems for exploration programs will invariably require data collection from a large number of sensors. Consider the sensors needed for monitoring the states of health of several vehicle systems, including the collection of structural health data over a large area. This would include the fuel tanks, habitat structure, and science containment of systems required for Lunar, Mars, or deep space exploration. Such a system would consist of several hundred or even thousands of sensors. Conventional avionics system design would require these sensors to be connected to a few Remote Health Units (RHUs), which are connected to robust micro flight computers through a serial bus. This results in a large mass of cabling and unacceptable weight. This paper first gives a survey of several techniques that may reduce the cabling mass for sensors. These techniques can be categorized into four classes: power line communication, serial sensor buses, compound serial buses, and wireless networks. The power line communication approach uses the power line to carry both power and data, so that the conventional data lines can be eliminated. The serial sensor bus approach removes most of the cabling by connecting all the sensors with a single (or redundant) serial bus. Many standard buses for industrial control and sensing can support several hundred nodes but have not been space qualified. Conventional avionics serial buses such as the Mil-Std-1553B bus and IEEE 1394a are space qualified but can support only a limited number of nodes. The third approach is to combine avionics buses to increase their addressability. The reliability, EMI/EMC, and flight qualification issues of wireless networks have to be addressed. Several wireless networks such as IEEE 802.11 and Ultra Wide Band are surveyed in this paper. The placement of sensors can also affect cable mass. Excessive sensors increase the number of cables unnecessarily. Insufficient number of sensors

  19. Two-Layer Linear MPC Approach Aimed at Walking Beam Billets Reheating Furnace Optimization

    Directory of Open Access Journals (Sweden)

    Silvia Maria Zanoli

    2017-01-01

    Full Text Available In this paper, the problem of the control and optimization of a walking beam billets reheating furnace located in an Italian steel plant is analyzed. An ad hoc Advanced Process Control framework has been developed, based on a two-layer linear Model Predictive Control architecture. This control block optimizes the steady and transient states of the considered process. Two main problems have been addressed. First, in order to manage all process conditions, a tailored module defines the set of process variables to be included in the control problem. In particular, a unified approach for the selection of the control inputs to be used for control objectives related to the process outputs is guaranteed. The impact of the proposed method on the controller formulation is also detailed. Second, an innovative mathematical approach for handling stoichiometric ratio constraints has been proposed, together with their introduction into the controller optimization problems. The designed control system has been installed on a real plant, replacing the operators' manual adjustment of local PID controllers. Two years after the first startup, a strong improvement in energy efficiency has been observed.
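
    A single receding-horizon step of the kind of linear MPC described above can be sketched in a few lines. The snippet below uses cvxpy to track a temperature target under toy first-order zone dynamics and input bounds; the model, horizon, and limits are illustrative assumptions, not the furnace model from the paper.

```python
# Minimal sketch of one linear MPC step: track a billet temperature target
# subject to toy first-order furnace-zone dynamics and input bounds.
import cvxpy as cp

N = 10            # prediction horizon
a, b = 0.9, 0.1   # x[k+1] = a*x[k] + b*u[k] (zone temperature dynamics, toy)
x0, target = 900.0, 1200.0

x = cp.Variable(N + 1)
u = cp.Variable(N)
constraints = [x[0] == x0]
for k in range(N):
    constraints += [x[k + 1] == a * x[k] + b * u[k],
                    u[k] >= 0, u[k] <= 3000]   # fuel-flow limits (toy)
cost = cp.sum_squares(x[1:] - target) + 1e-4 * cp.sum_squares(u)
cp.Problem(cp.Minimize(cost), constraints).solve()
print(u.value[0])  # first move applied to the plant, receding-horizon style
```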

  20. Optimized Structure of the Traffic Flow Forecasting Model With a Deep Learning Approach.

    Science.gov (United States)

    Yang, Hao-Fan; Dillon, Tharam S; Chen, Yi-Ping Phoebe

    2017-10-01

    Forecasting accuracy is an important issue for successful intelligent traffic management, especially in the domain of traffic efficiency and congestion reduction. The dawning of the big data era brings opportunities to greatly improve prediction accuracy. In this paper, we propose a novel model, the stacked autoencoder Levenberg-Marquardt model, a deep neural-network architecture aimed at improving forecasting accuracy. The proposed model is designed using the Taguchi method to develop an optimized structure and to learn traffic flow features through layer-by-layer feature granulation with a greedy layerwise unsupervised learning algorithm. It is applied to real-world data collected from the M6 freeway in the U.K. and is compared with three existing traffic predictors. To the best of our knowledge, this is the first time that an optimized structure of a traffic flow forecasting model with a deep learning approach has been presented. The evaluation results demonstrate that the proposed model with an optimized structure has superior performance in traffic flow forecasting.
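
    The greedy layer-wise pretraining idea can be illustrated with a minimal numpy sketch: each autoencoder is trained to reconstruct its input, and its hidden activations become the input to the next layer. Layer sizes, the learning rate, and the synthetic data are placeholders, and the Levenberg-Marquardt fine-tuning stage of the actual model is omitted.

```python
# Greedy layer-wise autoencoder pretraining: each layer learns to reconstruct
# the previous layer's output; its encoding feeds the next layer. Plain numpy.
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, hidden, epochs=200, lr=0.1):
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # encode
        R = H @ W2 + b2                   # decode (linear output)
        err = R - X                       # reconstruction error
        # Backpropagation for squared reconstruction loss
        gW2 = H.T @ err / n; gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H ** 2)
        gW1 = X.T @ dH / n; gb1 = dH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1

X = rng.random((256, 16))                  # stand-in for traffic-flow features
inp = X
for hidden in (12, 8):                     # two stacked layers (toy sizes)
    W, b = train_autoencoder(inp, hidden)
    inp = np.tanh(inp @ W + b)             # features fed to the next layer
print(inp.shape)                           # (256, 8) learned representation
```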

  1. A Study on Technology Architecture and Serving Approaches of Electronic Government System

    Science.gov (United States)

    Liu, Chunnian; Huang, Yiyun; Pan, Qin

    As E-government becomes a very active research area, many solutions addressing citizens' needs are being deployed. This paper presents a technology architecture for E-government systems and approaches to service delivery in public administrations. The proposed electronic system addresses the basic E-government requirements of user friendliness, security, interoperability, transparency, and effectiveness in the communication between small and medium-sized public organizations and their citizens, businesses, and other public organizations. The paper describes several serving approaches for E-government, including SOA, web services, mobile E-government, and public libraries; each has its own characteristics and application scenarios. Still, there are a number of E-government issues for further research on organizational structure change, including research methodology, data collection and analysis, etc.

  2. Implementing the competences-based students-centered learning approach in Architectural Design Education. The case of the T MEDA Pilot Architectural Program at the Hashemite University (Jordan)

    Directory of Open Access Journals (Sweden)

    Ahmad A. S. Al Husban

    2016-11-01

    Full Text Available Higher educational systems have become increasingly oriented towards the competences-based, student-centered learning and outcome approach. Worldwide, these systems focus on students as a whole: on their intellectual, professional, psychological, moral, and spiritual dimensions. This research was conducted in an attempt to answer the main research question: how can architectural design courses be designed based on the required competences, and how can teaching, learning activities, and assessment methods be structured and aligned in order to allow students to achieve the intended learning outcomes? This research used a case-study-driven, best-practice research method to answer the research questions, based on the T MEDA pilot architectural program implemented at the Hashemite University, Jordan. This research found that it is important for architectural education to adopt the student-centered learning method. Such an approach increases the effectiveness of teaching and learning methods, enhances the design studio environment, and focuses on students' engagement to develop their design process and product. Moreover, this research found that using different assessment methods in architectural design courses helps students develop their learning outcomes and informs teachers about the effectiveness of their teaching process. Furthermore, the involvement of students in assessment produces effective learning and enhances their design motivation. However, applying the competences-based, student-centered learning and outcome approach needs more time and staff. Another problem is that some instructors resist changing to the new methods or approaches because they prefer their old and traditional systems. Applying this method for the first time requires intensive resources, more time, and good cooperation between the different instructors and the course coordinator. However, within time this method …

  3. Development of a Multi-Event Trajectory Optimization Tool for Noise-Optimized Approach Route Design

    NARCIS (Netherlands)

    Braakenburg, M.L.; Hartjes, S.; Visser, H.G.; Hebly, S.J.

    2011-01-01

    This paper presents preliminary results from an ongoing research effort towards the development of a multi-event trajectory optimization methodology that allows the synthesis of RNAV approach routes that minimize a cumulative measure of noise, taking into account the total noise effect aggregated for …

  4. Site specific optimization of wind turbines energy cost: Iterative approach

    International Nuclear Information System (INIS)

    Rezaei Mirghaed, Mohammad; Roshandel, Ramin

    2013-01-01

    Highlights: • An optimization model of wind turbine parameters plus rectangular farm layout is developed. • Results show that the levelized cost for a single turbine fluctuates between 46.6 and 54.5 $/MWh. • Modeling results for two specific farms give optimal sizing and farm layout. • Results show that the levelized cost of the wind farms fluctuates between 45.8 and 67.2 $/MWh. - Abstract: The present study was aimed at developing a model to optimize the sizing parameters and farm layout of wind turbines according to the wind resource and economic aspects. The proposed model, including aerodynamic, economic and optimization sub-models, is used to achieve the minimum levelized cost of electricity. Blade element momentum theory is utilized for aerodynamic modeling of pitch-regulated horizontal-axis wind turbines. Also, a comprehensive cost model including the capital costs of all turbine components is considered. An iterative approach is used to develop the optimization model. The modeling results are presented for three potential regions in Iran: Khaf, Ahar and Manjil. The optimum configurations and sizing for a single turbine with minimum levelized cost of electricity are presented. The optimal cost of energy for one turbine is calculated to be about 46.7, 54.5 and 46.6 dollars per MWh at the studied sites, respectively. In addition, the optimal size of turbines, annual electricity production, capital cost, and wind farm layout for two different rectangular- and square-shaped farms in the proposed areas have been determined. According to the results, the optimal system configuration corresponds to a minimum levelized cost of electricity of about 45.8 to 67.2 dollars per MWh in the studied wind farms.
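
    A stripped-down version of the sizing loop might look as follows: sweep candidate rotor radii and rated powers, evaluate a levelized cost of energy, and keep the minimum. The energy and cost expressions here are crude placeholders for the paper's blade-element-momentum and component cost sub-models.

```python
# Toy LCOE sizing sweep: grid-search rotor radius and rated power, compute a
# levelized cost of energy, keep the minimum. Cost/energy models are placeholders.
import numpy as np

def lcoe(radius_m, rated_kw, mean_wind=7.5, discount=0.08, years=20):
    area = np.pi * radius_m ** 2
    # Crude capped capacity factor in the mean wind (toy relation)
    cf = min(0.5, 0.087 * mean_wind - rated_kw / (area * 10.0))
    aep_mwh = rated_kw * 8760 * max(cf, 0.05) / 1000.0   # annual energy, MWh
    capital = 1100.0 * rated_kw + 250.0 * area           # $ (placeholder costs)
    crf = discount / (1 - (1 + discount) ** -years)      # capital recovery factor
    return (capital * crf + 0.02 * capital) / aep_mwh    # $/MWh incl. 2% O&M

best = min(((lcoe(r, p), r, p)
            for r in np.arange(20, 61, 5)
            for p in np.arange(500, 3001, 250)))
print(f"min LCOE {best[0]:.1f} $/MWh at radius {best[1]:.0f} m, rated {best[2]:.0f} kW")
```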

  5. Design and optimization of different P-channel LUDMOS architectures on a 0.18 µm SOI-CMOS technology

    International Nuclear Information System (INIS)

    Cortés, I; Toulon, G; Morancho, F; Hugonnard-Bruyere, E; Villard, B; Toren, W J

    2011-01-01

    This paper focuses on the design and optimization of different power P-channel LDMOS transistors (V_BR > 120 V) to be integrated in a new generation of smart-power technology based upon a 0.18 µm SOI-CMOS technology. Different drift architectures have been envisaged in this work with the purpose of optimizing the transistor static (R_on-sp/V_BR trade-off) and dynamic (R_on × Q_g) characteristics to improve their switching performance. Conventional single-RESURF P-channel LUDMOS architectures on thin-SOI substrates show a very poor R_on-sp/V_BR trade-off due to their low RESURF effectiveness. Alternative drift configurations, such as the addition of an N-type buried layer deep inside the SOI layer or the application of the superjunction concept by alternately placing stacked P- and N-type pillars, could greatly improve the RESURF effectiveness and the P-channel device switching performance.

  6. Multi-agent based distributed control architecture for microgrid energy management and optimization

    International Nuclear Information System (INIS)

    Basir Khan, M. Reyasudin; Jidin, Razali; Pasupuleti, Jagadeesh

    2016-01-01

    Highlights: • A new multi-agent based distributed control architecture for energy management. • Multi-agent coordination based on non-cooperative game theory. • A microgrid model comprised of renewable energy generation systems. • Performance comparison of distributed with conventional centralized control. - Abstract: Most energy management systems are based on a centralized controller, which makes it difficult to satisfy criteria such as fault tolerance and adaptability. Therefore, a new multi-agent based distributed energy management system architecture is proposed in this paper. The distributed generation system is composed of several distributed energy resources and a group of loads. A multi-agent based decentralized control architecture was developed in order to provide control for the complex energy management of the distributed generation system. Non-cooperative game theory was then used for multi-agent coordination in the system. The distributed generation system was assessed by simulation under renewable resource fluctuations, seasonal load demand and grid disturbances. The simulation results show that the new energy management system provides more robust and higher-performance control than conventional centralized energy management systems.
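
    The non-cooperative coordination idea can be sketched with best-response dynamics on a small bimatrix game: each agent repeatedly plays its best reply to the other's last action until neither wants to deviate. The two agents and payoffs below are illustrative, not the microgrid agents or utilities from the paper.

```python
# Best-response dynamics for two agents (e.g., a PV agent and a storage agent)
# on a toy bimatrix game; iteration stops at a pure-strategy Nash equilibrium.
import numpy as np

# payoff_a[i, j], payoff_b[i, j]: utilities when agent A plays i and B plays j
payoff_a = np.array([[3.0, 1.0], [4.0, 2.0]])
payoff_b = np.array([[2.0, 4.0], [1.0, 3.0]])

i = j = 0
for _ in range(20):                          # best-response dynamics
    i_new = int(np.argmax(payoff_a[:, j]))   # A's best reply to B's action
    j_new = int(np.argmax(payoff_b[i_new]))  # B's best reply to A's action
    if (i_new, j_new) == (i, j):
        break
    i, j = i_new, j_new

print("equilibrium actions:", i, j)          # converges for this toy game
```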

  7. APPROACH ON INTELLIGENT OPTIMIZATION DESIGN BASED ON COMPOUND KNOWLEDGE

    Institute of Scientific and Technical Information of China (English)

    Yao Jianchu; Zhou Ji; Yu Jun

    2003-01-01

    An intelligent optimal design approach organized around a compound knowledge model is proposed. The compound knowledge consists of modularized quantitative knowledge, inclusive experience knowledge and case-based sample knowledge. By using this compound knowledge model, the rich quantitative information of mathematical programming and the symbolic knowledge of artificial intelligence can be united in one model. The intelligent optimal design model based on such compound knowledge, and the decomposition principles automatically generated from it, are also presented. In practice, the approach has been applied to production planning, process scheduling and optimization of the production process of a refining and chemical works, with substantial profit achieved. Notably, the methods and principles are applicable not only to continuous process industries but also to discrete manufacturing.

  8. Data driven approaches for diagnostics and optimization of NPP operation

    International Nuclear Information System (INIS)

    Pliska, J.; Machat, Z.

    2014-01-01

    The efficiency and heat rate is an important indicator of both the health of power plant equipment and the quality of power plant operation. A powerful tool for addressing this challenge is the statistical processing of the large data sets stored in data historians, which contain useful information about process quality and equipment and sensor health. The paper discusses data-driven approaches to model building for the main power plant equipment, such as the condenser, the cooling tower and the overall thermal cycle, using multivariate regression techniques based on the so-called regression triplet: data, model and method. Regression models form the basis for diagnostics and optimization tasks. These tasks are demonstrated on practical cases: diagnostics of main power plant equipment to identify equipment faults early, and optimization of the cooling circuit via cooling water flow control to achieve the highest power output for given boundary conditions. (authors)

  9. A Hybrid Harmony Search Algorithm Approach for Optimal Power Flow

    Directory of Open Access Journals (Sweden)

    Mimoun YOUNES

    2012-08-01

    Full Text Available Optimal Power Flow (OPF) is one of the main functions of power system operation. It determines the optimal settings of generating units, bus voltages, transformer taps and shunt elements in the power system, with the objective of minimizing total production costs or losses while the system is operating within its security limits. The aim of this paper is to propose a novel methodology (BCGAs-HSA) that solves OPF including both active and reactive power dispatch. It is based on combining the binary-coded genetic algorithm (BCGAs) and the harmony search algorithm (HSA) to determine the optimal global solution. This method was tested on the modified IEEE 30-bus test system. The results obtained by this method are compared with those obtained with BCGAs or HSA separately. The results show that the BCGAs-HSA approach converges to the optimal solution with better accuracy than those reported recently in the literature.
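
    For readers unfamiliar with HSA, a minimal continuous harmony search is sketched below on a stand-in quadratic objective rather than the OPF cost; the harmony memory size, HMCR, PAR, and bandwidth values are conventional defaults, not the paper's settings.

```python
# Minimal continuous harmony search: sample new harmonies from memory (with
# pitch adjustment) or at random, and replace the worst stored harmony.
import numpy as np

rng = np.random.default_rng(2)
def cost(x): return float(np.sum((x - 3.0) ** 2))   # placeholder objective

dim, HMS, HMCR, PAR, bw, iters = 5, 10, 0.9, 0.3, 0.1, 2000
lo, hi = -10.0, 10.0
memory = rng.uniform(lo, hi, (HMS, dim))            # harmony memory
fitness = np.array([cost(h) for h in memory])

for _ in range(iters):
    new = np.empty(dim)
    for d in range(dim):
        if rng.random() < HMCR:                     # pick from memory...
            new[d] = memory[rng.integers(HMS), d]
            if rng.random() < PAR:                  # ...with pitch adjustment
                new[d] += bw * rng.uniform(-1, 1)
        else:                                       # or improvise randomly
            new[d] = rng.uniform(lo, hi)
    f = cost(new)
    worst = fitness.argmax()
    if f < fitness[worst]:                          # replace worst harmony
        memory[worst], fitness[worst] = new, f

print(memory[fitness.argmin()])                     # approaches [3, 3, 3, 3, 3]
```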

  10. A PSO approach for preventive maintenance scheduling optimization

    International Nuclear Information System (INIS)

    Pereira, C.M.N.A.; Lapa, C.M.F.; Mol, A.C.A.; Luz, A.F. da

    2009-01-01

    This work presents a Particle Swarm Optimization (PSO) approach for preventive maintenance policy optimization, focused on reliability and cost. The probabilistic model for reliability and cost evaluation is developed in such a way that flexible intervals between maintenance interventions are allowed. As PSO is suited to real-coded continuous spaces, a non-conventional codification has been developed in order to allow PSO to solve scheduling problems (which are discrete) with a variable number of maintenance interventions. In order to evaluate the proposed methodology, the High Pressure Injection System (HPIS) of a typical 4-loop PWR has been considered. Results demonstrate the ability to find optimal solutions for which expert knowledge had to be automatically discovered by PSO. (author)
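
    A bare-bones PSO of the kind described above is sketched below, minimizing a placeholder cost over continuous maintenance intervals; the reliability/cost model, bounds, and PSO constants are illustrative assumptions, not the paper's.

```python
# Bare-bones PSO over continuous maintenance intervals (days). The cost
# function is a stand-in for the reliability/cost model of the paper.
import numpy as np

rng = np.random.default_rng(1)

def cost(x):                      # placeholder: penalize too-short/too-long intervals
    return np.sum((x - 90.0) ** 2 + 50.0 * np.abs(np.sin(x / 30.0)))

n_particles, dim, iters = 30, 4, 200
lo, hi = 30.0, 365.0              # allowed interval between interventions
x = rng.uniform(lo, hi, (n_particles, dim))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([cost(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([cost(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print(gbest, cost(gbest))         # near-optimal maintenance intervals (toy)
```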

  11. A Hybrid Genetic Algorithm Approach for Optimal Power Flow

    Directory of Open Access Journals (Sweden)

    Sydulu Maheswarapu

    2011-08-01

    Full Text Available This paper puts forward a reformed hybrid genetic algorithm (GA) based approach to optimal power flow. In the approach followed here, continuous variables are designed using a real-coded GA and discrete variables are processed as binary strings. The outcomes are compared with many other methods, such as the simple genetic algorithm (GA), adaptive genetic algorithm (AGA), differential evolution (DE), particle swarm optimization (PSO) and music-based harmony search (MBHS), on an IEEE 30-bus test bed with a total load of 283.4 MW. It is found that the proposed algorithm offers the lowest fuel cost. The proposed method is also found to be computationally faster, more robust, and promising in its convergence characteristics.

  12. A design approach for integrating thermoelectric devices using topology optimization

    DEFF Research Database (Denmark)

    Soprani, Stefano; Haertel, Jan Hendrik Klaas; Lazarov, Boyan Stefanov

    2016-01-01

    Efficient operation of thermoelectric devices strongly relies on their thermal integration into the energy conversion system in which they operate. Effective thermal integration reduces the temperature differences between the thermoelectric module and its thermal reservoirs, allowing the system to operate more efficiently. This work proposes and experimentally demonstrates a topology optimization approach as a design tool for efficient integration of thermoelectric modules into systems with specific design constraints. The approach allows thermal layout optimization of thermoelectric systems for different operating conditions and objective functions, such as temperature span, efficiency, and power recovery rate. As a specific application, the integration of a thermoelectric cooler into the electronics section of a downhole oil well intervention tool is investigated, with the objective of minimizing …

  13. An Innovative Approach for online Meta Search Engine Optimization

    OpenAIRE

    Manral, Jai; Hossain, Mohammed Alamgir

    2015-01-01

    This paper presents an approach to identify efficient techniques used in Web Search Engine Optimization (SEO). Understanding the SEO factors which can influence page ranking in a search engine is significant for webmasters who wish to attract a large number of users to their website. Different from previous relevant research, in this study we developed an intelligent meta search engine which aggregates results from various search engines and ranks them based on several important SEO parameters. The r...

  14. A measure theoretic approach to traffic flow optimization on networks

    OpenAIRE

    Cacace, Simone; Camilli, Fabio; De Maio, Raul; Tosin, Andrea

    2018-01-01

    We consider a class of optimal control problems for measure-valued nonlinear transport equations describing traffic flow problems on networks. The objective is to minimise/maximise macroscopic quantities, such as traffic volume or average speed, by controlling a few agents, for example smart traffic lights and automated cars. The measure-theoretic approach allows local and nonlocal driver interactions to be studied in the same setting, and the control variables to be considered as additional measures interacting ...

  15. Log-Optimal Portfolio Selection Using the Blackwell Approachability Theorem

    OpenAIRE

    V'yugin, Vladimir

    2014-01-01

    We present a method for constructing the log-optimal portfolio using the well-calibrated forecasts of market values. Dawid's notion of calibration and the Blackwell approachability theorem are used for computing well-calibrated forecasts. We select a portfolio using this "artificial" probability distribution of market values. Our portfolio performs asymptotically at least as well as any stationary portfolio that redistributes the investment at each round using a continuous function of side in...

  16. BUBBLE UP: ALTERNATIVE APPROACHES TO RESEARCH IN THE ACADEMIC ARCHITECTURE STUDIO

    Directory of Open Access Journals (Sweden)

    Gregory Marinic

    2010-07-01

    Full Text Available Increased connectivity among the design disciplines has radically transformed the nature of building today. Architectural education must accordingly adapt to the emerging needs of our changing built environment by providing vital, flexible, and open learning environments. Pedagogies in the academy have typically been rooted in practices that are both reluctant to change and slow to address transformative forces in an honest and open manner. Regrettably, the resilience of such top-down methods continues to bias the lens of learning toward natural performers and the notion of singular genius. Authentic attempts to react to new demands and to introduce change are all too often met with both strong resistance and profound contempt by conservative critics. Mainline architectural academia continues to project a deep ambivalence to new methodologies, alternative approaches to context, broadened conceptual practices, and advanced visualization techniques. Yet such means provide a responsive and resilient structure to re-frame content, expedite delivery, and update pedagogical objectives for the next generation.

  17. Art as behaviour--an ethological approach to visual and verbal art, music and architecture.

    Science.gov (United States)

    Sütterlin, Christa; Schiefenhövel, Wulf; Lehmann, Christian; Forster, Johanna; Apfelauer, Gerhard

    2014-01-01

    In recent years, the fine arts, architecture, music and literature have increasingly been examined from the vantage point of human ethology and evolutionary psychology. In 2011 the authors formed the research group 'Ethology of the Arts', concentrating on the evolution and biology of perception and behaviour. These novel approaches aim at a better understanding of the various facets represented by the arts by focusing on possible phylogenetic adaptations which have shaped the artistic capacities of our ancestors. Rather than the culture specificity stressed, e.g., by cultural anthropology and numerous other disciplines, universal human tendencies to perceive, feel, think and behave are postulated. Artistic expressive behaviour is understood as an integral part of the human condition, whether expressed in ritual, visual, verbal or musical art. The Ethology of the Arts group's research focuses on visual and verbal art, music and the built environment/architecture, and is designed to contribute to the incipient interdisciplinarity in the field of evolutionary art research.

  18. Adjoint current-based approaches to prostate brachytherapy optimization

    International Nuclear Information System (INIS)

    Roberts, J. A.; Henderson, D. L.

    2009-01-01

    This paper builds on previous work done at the University of Wisconsin-Madison to employ the adjoint concept of nuclear reactor physics in the so-called greedy heuristic of brachytherapy optimization. Whereas that previous work focused on the adjoint flux, i.e., the importance, this work includes use of the adjoint current to increase the amount of information available for optimization. Two current-based approaches were developed for 2-D problems, and each was compared to the most recent form of the flux-based methodology. The first method aimed to take a treatment plan from the flux-based greedy heuristic and adjust it via application of the current displacement, i.e., a vector displacement based on a combination of tissue (adjoint) and seed (forward) currents acting as forces on a seed. This method showed promise in improving key urethral and rectal dosimetric quantities. The second method uses the normed current displacement as the greedy criterion, such that seeds are placed in regions of least force. This method, coupled with the dose-update scheme, generated treatment plans with better target irradiation and sparing of the urethra and normal tissues than the flux-based approach. Tables of these parameters are given for both approaches. In summary, these preliminary results indicate that adjoint current methods are useful in optimization and that further work in 3-D should be performed. (authors)

  19. Neural Architectures for Control

    Science.gov (United States)

    Peterson, James K.

    1991-01-01

    The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance are developed for three hardware platforms: the Macintosh, the IBM PC, and the SUN workstation. All algorithm development was done using the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real time on an MS-DOS PC 386. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog-valued obstacle fields. The method constructs a coarse-resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires, and are trained in real time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine-scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back propagation through time, and adaptive critic designs.
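
    The essence of a CMAC is tile coding: several coarse, mutually offset tilings each activate one cell for a given input, the prediction is the sum of the active cells' weights, and learning is a delta-rule update spread across the active cells. The following minimal numpy sketch shows this for a 2-D input; all sizes and rates are illustrative, not from the paper's C implementation.

```python
# Minimal CMAC-style tile coding for a 2-D input in plain numpy.
import numpy as np

class TinyCMAC:
    def __init__(self, n_tilings=8, tiles=16, lo=0.0, hi=1.0, lr=0.1):
        self.n, self.t, self.lo, self.hi, self.lr = n_tilings, tiles, lo, hi, lr
        self.w = np.zeros((n_tilings, tiles, tiles))
        self.offsets = np.linspace(0.0, 1.0 / tiles, n_tilings, endpoint=False)

    def _cells(self, x, y):
        s = (self.t - 1) / (self.hi - self.lo)
        for k, off in enumerate(self.offsets):      # one active cell per tiling
            i = int(np.clip((x - self.lo) * s + off * self.t, 0, self.t - 1))
            j = int(np.clip((y - self.lo) * s + off * self.t, 0, self.t - 1))
            yield k, i, j

    def predict(self, x, y):
        return sum(self.w[k, i, j] for k, i, j in self._cells(x, y))

    def update(self, x, y, target):                 # delta rule, shared credit
        err = target - self.predict(x, y)
        for k, i, j in self._cells(x, y):
            self.w[k, i, j] += self.lr * err / self.n

cmac = TinyCMAC()
for _ in range(5000):                               # learn a smooth toy function
    x, y = np.random.rand(2)
    cmac.update(x, y, np.sin(3 * x) * np.cos(3 * y))
print(cmac.predict(0.5, 0.5), np.sin(1.5) * np.cos(1.5))  # close after training
```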

  20. OPTIMAL TRAFFIC MANAGEMENT FOR AIRCRAFT APPROACHING THE AERODROME LANDING AREA

    Directory of Open Access Journals (Sweden)

    Igor B. Ivenin

    2018-01-01

    Full Text Available The research proposes a mathematical optimization approach to the traffic of arriving aircraft in the aerodrome zone. An airfield with two parallel runways capable of operating independently of each other is modeled. The incoming traffic of aircraft is described by a Poisson flow of random events. The arriving aircraft are distributed by the air traffic controller between the two runways. There is one approach flight path for each runway. Both approach paths have a common starting point, each has a different length, and the trajectories do not overlap. For each of the two approach procedures, the air traffic controller sets the average speed of the aircraft. The given model of the airfield and aerodrome zone is treated as a two-channel queueing (mass service) system with refusals in service. Each of the two servicing units includes an approach trajectory, a glide path and a runway, and can be in one of two states: free or busy. The probabilities of the states of the servicing units are described by a Kolmogorov system of differential equations. The number of refusals in service over the simulated time interval is used as the criterion for assessing the quality of functioning of the queueing system; this criterion is described by an integral functional. The functions describing the distribution of aircraft flows between the runways, as well as the functions describing the average speed of the aircraft, are the control parameters. The optimization problem consists in finding the values of the control parameters for which the value of the criterion functional is minimal. To solve the formulated optimization problem, the L.S. Pontryagin maximum principle is applied; the form of the Hamiltonian function and the conjugate system of differential equations is given. The structure of the optimal control has been studied for two different cases of restrictions on the control of the distribution of incoming aircraft.
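
    Treating each runway (with its approach path and glide path) as a service channel, the Kolmogorov equations for the number of busy channels can be integrated directly, as in the sketch below. The arrival and service rates are illustrative; the expected number of refusals over an interval is the integral of the arrival rate times the probability that both channels are busy, matching the integral criterion described above.

```python
# Kolmogorov equations for a two-channel loss system: states = number of busy
# channels (0, 1, 2); an aircraft arriving in state 2 is refused service.
import numpy as np
from scipy.integrate import solve_ivp

lam = 0.5   # Poisson arrival rate (aircraft per minute), toy value
mu = 0.2    # service rate per channel = 1 / mean approach-plus-landing time

def kolmogorov(t, p):
    p0, p1, p2 = p
    return [-lam * p0 + mu * p1,
             lam * p0 - (lam + mu) * p1 + 2 * mu * p2,
             lam * p1 - 2 * mu * p2]

sol = solve_ivp(kolmogorov, (0.0, 120.0), [1.0, 0.0, 0.0], dense_output=True)
print("state probabilities at t=120:", sol.y[:, -1])
# Expected refusals over [0, T] is the integral of lam * p2(t), i.e. the
# integral criterion described in the abstract.
```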

  1. A State-Based Modeling Approach for Efficient Performance Evaluation of Embedded System Architectures at Transaction Level

    Directory of Open Access Journals (Sweden)

    Anthony Barreteau

    2012-01-01

    Full Text Available Abstract models are necessary to assist system architects in the evaluation process of hardware/software architectures and to cope with the still increasing complexity of embedded systems. Efficient methods are required to create reliable models of system architectures and to allow early performance evaluation and fast exploration of the design space. In this paper, we present a specific transaction-level modeling approach for performance evaluation of hardware/software architectures. This approach relies on a generic execution model that requires light modeling effort. The created models are used to evaluate, by simulation, the expected processing and memory resources for various architectures. The proposed execution model relies on a specific computation method defined to improve the simulation speed of transaction-level models. The benefits of the proposed approach are highlighted through two case studies. The first case study is a didactic example illustrating the modeling approach, in which a simulation speed-up by a factor of 7.62 is achieved by using the proposed computation method. The second case study concerns the analysis of a communication receiver supporting part of the physical layer of the LTE protocol. In this case study, architecture exploration is carried out in order to improve the allocation of processing functions.

  2. An urban informatics approach to smart city learning in architecture and urban design education

    Directory of Open Access Journals (Sweden)

    Mirko Guaralda

    2013-08-01

    Full Text Available This study aims to redefine spaces of learning as places of learning through the direct engagement of local communities as a way to examine and learn from real-world issues in the city. This paper exemplifies Smart City Learning, where the key goal is to promote the generation and exchange of urban design ideas for the future development of South Bank in Brisbane, Australia, informing the creation of new design policies responding to the needs of local citizens. Specific to this project was the implementation of urban informatics techniques and approaches to promote innovative engagement strategies. Architecture and Urban Design students were encouraged to review and appropriate the real-time, ubiquitous technology, social media, and mobile devices that urban residents use to augment and mediate the physical and digital layers of urban infrastructures. Our experience in this study found that urban informatics provide an innovative opportunity to enrich students' place of learning within the city.

  3. Building constructions: architecture and nature

    Directory of Open Access Journals (Sweden)

    Mayatskaya Irina

    2017-01-01

    Full Text Available The problem of optimizing building structures is considered in architectural bionic modeling on the basis of bionic principles. It is possible to obtain reliable and durable constructions by studying the structure and the laws of organization of natural objects. Modern architects have created unique buildings using the bionic approach. Properties such as symmetry, asymmetry, self-similarity and fractality are used in modern architecture. Using the methods of fractal geometry in the design of architectural forms allows a variety of constructive solutions to be found.

  4. Solution of optimization problems using hybrid architecture; Solucao de problemas de otimizacao utilizando arquitetura hibrida

    Energy Technology Data Exchange (ETDEWEB)

    Murakami, Lelis Tetsuo

    2008-07-01

    … kind of problem. Because of the importance and magnitude of this issue, every effort that contributes to the improvement of power planning is welcome, and this motivates this thesis, whose objective is to propose technically and economically viable solutions to such optimization problems using a new approach, one with the potential to be applied to many other similar kinds of problems. (author)

  5. Optimization of Investment Planning Based on Game-Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Elena Vladimirovna Butsenko

    2018-03-01

    Full Text Available The game-theoretic approach has vast potential for solving economic problems. On the other hand, the theory of games itself can be enriched by studies of real decision-making problems. Hence, this study is aimed at developing and testing a game-theoretic technique to optimize the management of investment planning. This technique enables forecasting of the results and management of the investment planning process. The proposed method of optimizing the management of investment planning allows the best development strategy of an enterprise to be chosen. The technique uses the "game with nature" model, with the Wald criterion, the maximax criterion and the Hurwitz criterion as decision criteria. The article presents a new algorithm for constructing the proposed econometric method to optimize investment project management; the algorithm combines the methods of matrix games, and its implementation is shown in a block diagram. The algorithm includes the formation of the initial data and the elements of the payoff matrix, as well as the determination of the maximin, maximax, compromise and optimal management strategies. The methodology is tested on the example of the passenger transportation enterprise of the Sverdlovsk Railway in Ekaterinburg. The application of the proposed methodology and the corresponding algorithm yielded an optimal price strategy for transporting passengers in one direction of traffic; this price strategy contributes to an increase in the company's income with minimal risk from the launch of this direction. The obtained results and conclusions show the effectiveness of using the developed methodology for optimizing the management of investment processes in an enterprise. The results of the research can be used as a basis for the development of an appropriate tool and applied by any economic entity in its investment activities.
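
    The decision criteria named above are easy to state on a "game with nature" payoff matrix, as the following sketch shows; the matrix entries are illustrative, not the railway case-study data.

```python
# Wald (maximin), maximax, and Hurwitz criteria on a payoff matrix:
# rows = enterprise strategies, columns = states of nature. Toy numbers.
import numpy as np

payoff = np.array([[30.0, 10.0, -5.0],
                   [20.0, 15.0,  5.0],
                   [40.0,  0.0, -10.0]])

wald = payoff.min(axis=1).argmax()               # maximin: best worst case
maximax = payoff.max(axis=1).argmax()            # optimistic criterion
alpha = 0.6                                      # Hurwitz pessimism weight
hurwitz = (alpha * payoff.min(axis=1)
           + (1 - alpha) * payoff.max(axis=1)).argmax()

print("Wald:", wald, "maximax:", maximax, "Hurwitz:", hurwitz)  # 1, 2, 1
```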

  6. Self-optimizing approach for automated laser resonator alignment

    Science.gov (United States)

    Brecher, C.; Schmitt, R.; Loosen, P.; Guerrero, V.; Pyschny, N.; Pavim, A.; Gatej, A.

    2012-02-01

    Nowadays, the assembly of laser systems is dominated by manual operations, involving elaborate alignment by means of adjustable mountings. From a competition perspective, the most challenging problem in laser source manufacturing is price pressure, a result of cost competition exerted mainly from Asia. From an economic point of view, automated assembly of laser systems is a better approach to producing more reliable units at lower cost. However, the step from today's manual solutions towards automated assembly requires parallel developments in product design, automation equipment and assembly processes. This paper briefly introduces the idea of self-optimizing technical systems as a new approach towards highly flexible automation. Technically, the work focuses on the precision assembly of laser resonators, which is one of the final and most crucial assembly steps in terms of beam quality and laser power. The paper presents a new design approach for miniaturized laser systems and new automation concepts for robot-based precision assembly, as well as passive and active alignment methods based on a self-optimizing approach. Very promising results have already been achieved, considerably reducing the duration and complexity of laser resonator assembly. These results as well as future development perspectives are discussed.

  7. Bifurcation-based approach reveals synergism and optimal combinatorial perturbation.

    Science.gov (United States)

    Liu, Yanwei; Li, Shanshan; Liu, Zengrong; Wang, Ruiqi

    2016-06-01

    Cells accomplish the process of fate decision and form terminal lineages through a series of binary choices in which cells switch stable states from one branch to another as the interacting strengths of regulatory factors continuously vary. Various combinatorial effects may occur because almost all regulatory processes are managed in a combinatorial fashion. Combinatorial regulation is crucial for cell fate decisions because it may effectively integrate many different signaling pathways to meet the higher regulation demand during cell development. However, whether the contribution of combinatorial regulation to the state transition is better than that of a single regulation and, if so, what the optimal combination strategy is, seem to be significant issues from the point of view of both biology and mathematics. Using the approaches of combinatorial perturbations and bifurcation analysis, we provide a general framework for the quantitative analysis of synergism in molecular networks. Different from known methods, the bifurcation-based approach depends only on stable state responses to stimuli, because the state transition induced by combinatorial perturbations occurs between stable states. More importantly, an optimal combinatorial perturbation strategy can be determined by investigating the relationship between the bifurcation curve of a synergistic perturbation pair and the level set of a specific objective function. The approach is applied to two models, i.e., a theoretical multistable decision model and a biologically realistic CREB model, to show its validity, although the approach holds for a general class of biological systems.

  8. Novel in situ multiharmonic EQCM-D approach to characterize complex carbon pore architectures for capacitive deionization of brackish water

    International Nuclear Information System (INIS)

    Shpigel, Netanel; Levi, Mikhael D; Sigalov, Sergey; Aurbach, Doron; Daikhin, Leonid; Presser, Volker

    2016-01-01

    Multiharmonic analysis by electrochemical quartz-crystal microbalance with dissipation monitoring (EQCM-D) is introduced as an excellent tool for quantitatively studying the electrosorption of ions from aqueous solution in mesoporous (BP-880) or mixed micro-mesoporous (BP-2000) carbon electrodes. Having found the optimal conditions for gravimetric analysis of the ionic content in the charged carbon electrodes, we propose a novel approach to modeling the charge-dependent gravimetric characteristics by incorporating the Gouy-Chapman-Stern electric double layer model for ion electrosorption into meso- and micro-mesoporous carbon electrodes. All three parameters of the gravimetric equation, evaluated by fitting it to the experimental mass-change curves, were validated using supplementary nitrogen gas sorption analysis and complementary atomic force microscopy. An important overlap between gravimetric EQCM-D analysis of the ionic content of porous carbon electrodes and classical capacitive deionization models has been established. The necessity and usefulness of non-gravimetric EQCM-D characterization of complex carbon architectures, providing insight into their unique viscoelastic behavior and porous structure changes, are discussed in detail. (paper)

  9. Toward an Agile Approach to Managing the Effect of Requirements on Software Architecture during Global Software Development

    Directory of Open Access Journals (Sweden)

    Abdulaziz Alsahli

    2016-01-01

    Full Text Available Requirement change management (RCM) is a critical activity during software development because poor RCM results in the occurrence of defects, thereby resulting in software failure. To achieve RCM, efficient impact analysis is mandatory. A common repository is a good approach to maintain changed requirements, reusing and reducing effort. Thus, a better approach is needed to tailor knowledge for better change management of requirements and architecture during global software development (GSD). The objective of this research is to introduce an innovative approach for handling requirements and architecture changes simultaneously during global software development. The approach makes use of Case-Based Reasoning (CBR) and agile practices. Agile practices make our approach iterative, whereas CBR stores requirements and makes them reusable. Twin Peaks is our base model, meaning that requirements and architecture are handled simultaneously. For this research, grounded theory has been applied; in addition, interviews with domain experts were conducted. Interview and literature transcripts formed the basis of data collection in grounded theory. Physical saturation of the theory has been achieved through a published case study and a developed tool. Expert reviews and statistical analysis have been used for evaluation. The proposed approach resulted in effective change management of requirements and architecture simultaneously during global software development.

  10. Optimizing communication satellites payload configuration with exact approaches

    Science.gov (United States)

    Stathakis, Apostolos; Danoy, Grégoire; Bouvry, Pascal; Talbi, El-Ghazali; Morelli, Gianluigi

    2015-12-01

    The satellite communications market is competitive and rapidly evolving. The payload, which is in charge of applying frequency conversion and amplification to the signals received from Earth before their retransmission, is made of various components. These include reconfigurable switches that permit the re-routing of signals based on market demand or because of some hardware failure. In order to meet modern requirements, the size and the complexity of current communication payloads are increasing significantly. Consequently, optimal payload configuration, which was previously done manually by engineers with the use of computerized schematics, is becoming a difficult and time-consuming task. Efficient optimization techniques are therefore required to find the optimal set(s) of switch positions to optimize some operational objective(s). In order to tackle this challenging problem for the satellite industry, this work proposes two Integer Linear Programming (ILP) models. The first one is single-objective and focuses on the minimization of the length of the longest channel path, while the second one is bi-objective and additionally aims at minimizing the number of switch changes in the payload switch matrix. Experiments are conducted on a large set of instances of realistic payload sizes using the CPLEX® solver and two well-known exact multi-objective algorithms. Numerical results demonstrate the efficiency and limitations of the ILP approach on this real-world problem.
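
    The single-objective model's min-max structure can be conveyed with a toy ILP: binary variables choose one candidate path per channel, and an auxiliary variable bounds the longest chosen path. The sketch below uses the PuLP modeling library (the paper uses CPLEX) and a fictitious three-channel payload, far simpler than a real switch matrix.

```python
# Toy min-max ILP: pick one routing per channel to minimize the longest
# channel path, linearized with an auxiliary variable t >= each chosen length.
import pulp

# Candidate path lengths per channel (fictitious data)
paths = {"ch1": [4, 6, 9], "ch2": [5, 5, 8], "ch3": [3, 7, 7]}

prob = pulp.LpProblem("payload_config", pulp.LpMinimize)
x = {(c, i): pulp.LpVariable(f"x_{c}_{i}", cat="Binary")
     for c, lens in paths.items() for i in range(len(lens))}
t = pulp.LpVariable("longest_path", lowBound=0)

prob += t                                                       # objective
for c, lens in paths.items():
    prob += pulp.lpSum(x[c, i] for i in range(len(lens))) == 1  # one path each
    prob += pulp.lpSum(lens[i] * x[c, i] for i in range(len(lens))) <= t

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = {c: next(i for i in range(len(l)) if x[c, i].value() == 1)
          for c, l in paths.items()}
print("longest path:", t.value(), "choices:", chosen)
```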

  11. Portfolio optimization in enhanced index tracking with goal programming approach

    Science.gov (United States)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of passive fund management in the stock market. It aims to generate excess return over the return achieved by the market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio that maximizes the mean return and minimizes the risk. The objective of this paper is to determine the portfolio composition and performance using a goal programming approach in enhanced index tracking, and to compare it to the market index. Goal programming is a branch of multi-objective optimization which can handle decision problems that involve two different goals in enhanced index tracking: a trade-off between maximizing the mean return and minimizing the risk. The results of this study show that the optimal portfolio obtained with the goal programming approach is able to outperform the Malaysian market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, by achieving a higher mean return and a lower risk without purchasing all the stocks in the market index.
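
    A toy goal-programming formulation of this trade-off can be written as a linear program with deviation variables: penalize the shortfall below a return goal and the excess above a risk goal. The sketch below uses scipy's linprog with three fictitious stocks; all data and goals are illustrative, not the Malaysian market data.

```python
# Goal programming as an LP: minimize unwanted deviations from two goals.
import numpy as np
from scipy.optimize import linprog

mu = np.array([0.010, 0.007, 0.012])    # expected returns of 3 toy stocks
risk = np.array([0.020, 0.012, 0.030])  # per-stock risk proxy (toy)
return_goal, risk_goal = 0.009, 0.015

# Variables: [w1, w2, w3, d_ret_minus, d_ret_plus, d_risk_plus]
c = np.array([0, 0, 0, 1.0, 0, 1.0])    # penalize return shortfall, risk excess
A_eq = [[1, 1, 1, 0, 0, 0],             # fully invested
        [*mu, 1, -1, 0]]                # mu.w + d_ret_minus - d_ret_plus = goal
b_eq = [1.0, return_goal]
A_ub = [[*risk, 0, 0, -1]]              # risk.w - d_risk_plus <= risk goal
b_ub = [risk_goal]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 3 + [(0, None)] * 3, method="highs")
print("weights:", res.x[:3].round(3), "deviations:", res.x[3:].round(4))
```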

  12. Spatiotemporal radiotherapy planning using a global optimization approach

    Science.gov (United States)

    Adibi, Ali; Salari, Ehsan

    2018-02-01

    This paper aims at quantifying the extent of potential therapeutic gain, measured using biologically effective dose (BED), that can be achieved by altering the radiation dose distribution over treatment sessions in fractionated radiotherapy. To that end, a spatiotemporally integrated planning approach is developed, where the spatial and temporal dose modulations are optimized simultaneously. The concept of equivalent uniform BED (EUBED) is used to quantify and compare the clinical quality of spatiotemporally heterogeneous dose distributions in target and critical structures. This gives rise to a large-scale non-convex treatment-plan optimization problem, which is solved using global optimization techniques. The proposed spatiotemporal planning approach is tested on two stylized cancer cases resembling two different tumor sites and sensitivity analysis is performed for radio-biological and EUBED parameters. Numerical results validate that spatiotemporal plans are capable of delivering a larger BED to the target volume without increasing the BED in critical structures compared to conventional time-invariant plans. In particular, this additional gain is attributed to the irradiation of different regions of the target volume at different treatment sessions. Additionally, the trade-off between the potential therapeutic gain and the number of distinct dose distributions is quantified, which suggests a diminishing marginal gain as the number of dose distributions increases.
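
    For context, the BED referred to above is conventionally computed from the linear-quadratic model; this is the standard textbook expression, quoted here for orientation rather than taken from the paper:

```latex
% Linear-quadratic biologically effective dose for n fractions of dose d each,
% with tissue-specific alpha/beta ratio:
\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right)
```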

  13. System, methods and apparatus for program optimization for multi-threaded processor architectures

    Science.gov (United States)

    Bastoul, Cedric; Lethin, Richard A; Leung, Allen K; Meister, Benoit J; Szilagyi, Peter; Vasilache, Nicolas T; Wohlford, David E

    2015-01-06

    Methods, apparatus and computer software product for source code optimization are provided. In an exemplary embodiment, a first custom computing apparatus is used to optimize the execution of source code on a second computing apparatus. In this embodiment, the first custom computing apparatus contains a memory, a storage medium and at least one processor with at least one multi-stage execution unit. The second computing apparatus contains at least two multi-stage execution units that allow for parallel execution of tasks. The first custom computing apparatus optimizes the code for parallelism, locality of operations and contiguity of memory accesses on the second computing apparatus.

  14. Numerical Optimization Design of Dynamic Quantizer via Matrix Uncertainty Approach

    Directory of Open Access Journals (Sweden)

    Kenji Sawada

    2013-01-01

    Full Text Available In networked control systems, continuous-valued signals are compressed to discrete-valued signals via quantizers and then transmitted/received through communication channels. Such quantization often degrades the control performance; a quantizer must be designed that minimizes the difference between the outputs before and after the quantizer is inserted. In terms of the broadbandization and the robustness of networked control systems, we consider the continuous-time quantizer design problem. In particular, this paper describes a numerical optimization method for a continuous-time dynamic quantizer considering the switching speed. Using a matrix uncertainty approach from sampled-data control, we clarify that both the temporal and spatial resolution constraints can be considered in analysis and synthesis simultaneously. Finally, for slow switching, we compare the proposed and existing methods through numerical examples. From the examples, a new insight is presented for the two-step design of the existing continuous-time optimal quantizer.

  15. Optimal trading strategies—a time series approach

    Science.gov (United States)

    Bebbington, Peter A.; Kühn, Reimer

    2016-05-01

    Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz' mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which, for a given return, is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second-order stationary, or to exhibit second-order stationary increments. Attention is paid to the consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate these are investigated. Finally we apply our framework to real-world data.
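
    In its simplest form, the time-domain reformulation reduces to a quadratic program with a closed-form solution: minimize w'Cw over weights w subject to w'm equal to a target return, where C is the auto-covariance matrix of the asset's increments and m their means. A minimal sketch on synthetic data (all figures illustrative):

```python
# Closed-form minimum-variance trading weights: w = r * C^{-1} m / (m' C^{-1} m)
import numpy as np

rng = np.random.default_rng(3)
increments = rng.normal(0.001, 0.02, size=(500, 20))  # 500 windows of 20 steps
m = increments.mean(axis=0)
C = np.cov(increments, rowvar=False)                  # sample auto-covariance

target = 0.02
Cinv_m = np.linalg.solve(C, m)
w = target * Cinv_m / (m @ Cinv_m)    # minimizer of w'Cw s.t. w'm = target

print("weights:", w.round(3))
print("achieved return:", w @ m, "variance:", w @ C @ w)
```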

  16. System-on-chip architecture and validation for real-time transceiver optimization: APC implementation on FPGA

    Science.gov (United States)

    Suarez, Hernan; Zhang, Yan R.

    2015-05-01

    New radar applications need to perform complex algorithms and process large quantities of data to generate useful information for the users. This situation has motivated the search for better processing solutions that include low-power high-performance processors, efficient algorithms, and high-speed interfaces. In this work, hardware implementations of adaptive pulse compression for real-time transceiver optimization are presented; they are based on a System-on-Chip architecture for Xilinx devices. This study also evaluates the performance of dedicated coprocessors as hardware accelerator units to speed up and improve the computation of computing-intensive tasks such as matrix multiplication and matrix inversion, which are essential units in solving the covariance matrix. The trade-offs between latency and hardware utilization are also presented. Moreover, the system architecture takes advantage of the embedded processor, which is interconnected with the logic resources through high-performance AXI buses, to perform floating-point operations, control the processing blocks, and communicate with an external PC through a customized software interface. The overall system functionality is demonstrated and tested for real-time operations using a Ku-band testbed together with a low-cost channel emulator for different types of waveforms.

  17. Optimizing Libraries’ Content Findability Using Simple Object Access Protocol (SOAP) With Multi-Tier Architecture

    Science.gov (United States)

    Lahinta, A.; Haris, I.; Abdillah, T.

    2017-03-01

    The aim of this paper is to describe a developed application of the Simple Object Access Protocol (SOAP) as a model for improving the findability of libraries' digital content on the library web. The study applies XML text-based protocol tools to collect data about libraries' visibility performance in book search results. Models from the integrated Web Services Description Language (WSDL) and Universal Description, Discovery and Integration (UDDI) are applied to analyse SOAP as an element within the system. The results show that the developed application of SOAP with a multi-tier architecture can help users readily access the library-server website of Gorontalo Province and supports access to digital collections, subscription databases, and library catalogs in each Regency or City library in Gorontalo Province.

  18. An Optimizing Compiler for Petascale I/O on Leadership Class Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Choudhary, Alok [Northwestern Univ., Evanston, IL (United States); Kandemir, Mahmut [Pennsylvania State Univ., State College, PA (United States)

    2015-03-18

    In high-performance computing systems, parallel I/O architectures usually have very complex hierarchies, with multiple layers that collectively constitute an I/O stack: high-level I/O libraries such as PnetCDF and HDF5, I/O middleware such as MPI-IO, and parallel file systems such as PVFS and Lustre. Our project explored automated instrumentation and compiler support for I/O-intensive applications. The project made significant progress towards understanding the complex I/O hierarchies of high-performance storage systems (including storage caches, HDDs, and SSDs), and towards designing and implementing state-of-the-art compiler/runtime technology for I/O-intensive HPC applications targeting leadership-class machines. This final report summarizes the major achievements of the project and points out promising future directions.

  19. Deterministic network interdiction optimization via an evolutionary approach

    International Nuclear Information System (INIS)

    Rocco S, Claudio M.; Ramirez-Marquez, Jose Emmanuel

    2009-01-01

    This paper introduces an evolutionary optimization approach that can be readily applied to solve deterministic network interdiction problems. The network interdiction problem solved considers the minimization of the maximum flow that can be transmitted between a source node and a sink node for a fixed network design, when there is a limited amount of resources available to interdict network links. Furthermore, the model assumes that the nominal capacity of each network link and the cost associated with its interdiction can change from link to link. For this problem, the solution approach developed is based on three steps that use: (1) Monte Carlo simulation, to generate potential network interdiction strategies; (2) the Ford-Fulkerson algorithm for maximum s-t flow, to analyze each strategy's maximum source-sink flow; and (3) an evolutionary optimization technique to define, in probabilistic terms, how likely a link is to appear in the final interdiction strategy. Examples for different network sizes and behaviors are used throughout the paper to illustrate the approach. In terms of computational effort, the results illustrate that solutions are obtained from a significantly restricted solution search space. Finally, the authors discuss the need for a reliability perspective on network interdiction, so that the solutions developed address more realistic scenarios of such problems.
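
    The three-step loop is easy to prototype: generate interdiction strategies, score each by the residual s-t max flow, and keep the best. The sketch below uses networkx for the max-flow step on a fictitious five-edge network and, for brevity, enumerates strategies exhaustively where the paper samples them by Monte Carlo and refines them evolutionarily.

```python
# Score interdiction strategies by residual s-t max flow (Ford-Fulkerson family).
import itertools
import networkx as nx

G = nx.DiGraph()
edges = [("s", "a", 8), ("s", "b", 6), ("a", "t", 5),
         ("a", "b", 3), ("b", "t", 7)]
for u, v, cap in edges:
    G.add_edge(u, v, capacity=cap)

BUDGET = 2  # number of links we can interdict (unit cost per link, toy)

best = None
for combo in itertools.combinations(edges, BUDGET):   # exhaustive here; the
    H = G.copy()                                      # paper samples instead
    for u, v, _ in combo:
        H[u][v]["capacity"] = 0                       # interdicted link
    flow = nx.maximum_flow_value(H, "s", "t")
    if best is None or flow < best[0]:
        best = (flow, [(u, v) for u, v, _ in combo])

print("min residual max-flow:", best[0], "by cutting", best[1])
```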

  20. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures

    Energy Technology Data Exchange (ETDEWEB)

    Neylon, J., E-mail: jneylon@mednet.ucla.edu; Sheng, K.; Yu, V.; Low, D. A.; Kupelian, P.; Santhanam, A. [Department of Radiation Oncology, University of California Los Angeles, 200 Medical Plaza, #B265, Los Angeles, California 90095 (United States); Chen, Q. [Department of Radiation Oncology, University of Virginia, 1300 Jefferson Park Avenue, Charlottesville, Virginia 22908 (United States)

    2014-10-15

    Purpose: Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. Methods: The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria

  1. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures

    International Nuclear Information System (INIS)

    Neylon, J.; Sheng, K.; Yu, V.; Low, D. A.; Kupelian, P.; Santhanam, A.; Chen, Q.

    2014-01-01

    Purpose: Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. Methods: The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters, making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria.

  2. A Swarm Optimization approach for clinical knowledge mining.

    Science.gov (United States)

    Christopher, J Jabez; Nehemiah, H Khanna; Kannan, A

    2015-10-01

    Rule-based classification is a typical data mining task that is being used in several medical diagnosis and decision support systems. The rules stored in the rule base have an impact on classification efficiency. Rule sets that are extracted with data mining tools and techniques are optimized using heuristic or meta-heuristic approaches in order to improve the quality of the rule base. In this work, a meta-heuristic approach called Wind-driven Swarm Optimization (WSO) is used. The uniqueness of this work lies in the biological inspiration that underlies the algorithm. WSO uses Jval, a new metric, to evaluate the efficiency of a rule-based classifier. Rules are extracted from decision trees. WSO is used to obtain different permutations and combinations of rules, whereby the optimal rule set that satisfies the requirements of the developer is used for predicting the test data. The performance of various extensions of decision trees, namely RIPPER, PART, FURIA and Decision Tables, is analyzed. The efficiency of WSO is also compared with that of traditional Particle Swarm Optimization. Experiments were carried out with six benchmark medical datasets. The traditional C4.5 algorithm yields 62.89% accuracy with 43 rules for the liver disorders dataset, whereas WSO yields 64.60% with 19 rules. For the heart disease dataset, C4.5 is 68.64% accurate with 98 rules, whereas WSO is 77.8% accurate with 34 rules. The normalized standard deviations for the accuracy of PSO and WSO are 0.5921 and 0.5846, respectively. WSO provides accurate and concise rule sets. PSO yields results similar to those of WSO, but the novelty of WSO lies in its biological motivation and its customization for rule-base optimization. The trade-off between prediction accuracy and the size of the rule base is optimized during the design and development of a rule-based clinical decision support system. The efficiency of a decision support system relies on the content of the rule base and on classification accuracy.
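
    The Jval metric itself is specific to the paper, but the accuracy-versus-size trade-off it encodes is easy to sketch. Everything below, including the first-match rule semantics, the dictionary rule encoding and the λ penalty, is an assumption for illustration, not the paper's definition:

```python
def ruleset_score(rules, records, labels, lam=0.05):
    """Hypothetical fitness: classification accuracy penalized by rule count.
    A rule is (conditions, predicted_class); conditions is a dict
    {attribute: value} that must all hold for the rule to fire."""
    correct = 0
    for rec, truth in zip(records, labels):
        pred = None
        for conds, cls in rules:                  # first matching rule wins
            if all(rec.get(a) == v for a, v in conds.items()):
                pred = cls
                break
        correct += (pred == truth)
    accuracy = correct / len(records)
    return accuracy - lam * len(rules)            # trade accuracy against size
```

    A swarm optimizer such as WSO or PSO then searches over permutations and subsets of the extracted rules for the set that maximizes such a score.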

  3. Primal and dual approaches to adjustable robust optimization

    NARCIS (Netherlands)

    de Ruiter, Frans

    2018-01-01

    Robust optimization has become an important paradigm to deal with optimization under uncertainty. Adjustable robust optimization is an extension that deals with multistage problems. This thesis starts with a short but comprehensive introduction to adjustable robust optimization. Then the two

  4. ADAPTIVE REUSE FOR NEW SOCIAL AND MUNICIPAL FUNCTIONS AS AN ACCEPTABLE APPROACH FOR CONSERVATION OF INDUSTRIAL HERITAGE ARCHITECTURE IN THE CZECH REPUBLIC

    Directory of Open Access Journals (Sweden)

    Oleg Fetisov

    2016-04-01

    Full Text Available The present paper deals with the problem of conservation and adaptive reuse of industrial heritage architecture. The relevance and topicality of the problem of adaptive reuse of industrial heritage architecture for new social and municipal functions as a conservation concept are defined. New insights on the typology of industrial architecture are reviewed (e.g. global changes in all European industry, new concepts and technologies in manufacturing, new features of industrial architecture and their construction and typology, first results of industrialization, and changes in the typology of industrial architecture in the post-industrial period). General goals and tasks of conservation in the context of adaptive reuse of industrial heritage architecture are defined (e.g. historical, architectural and artistic, technical). Adaptive reuse as an acceptable approach for conservation and new use is proposed and reviewed. Moreover, a logical model of adaptive reuse of industrial heritage architecture as an acceptable approach for new use has been developed. Consequently, three general methods for the conservation of industrial heritage architecture by the adaptive reuse approach are developed: historical, architectural and artistic, and technical. Relevant functional concepts of these methods (social concepts) are defined and classified. The general beneficial effect of the adaptive reuse approach is given. On the basis of an analysis of experience in the adaptive reuse of industrial architecture with new social functions, general conclusions are developed.

  5. An approach to maintenance optimization where safety issues are important

    Energy Technology Data Exchange (ETDEWEB)

    Vatn, Jorn, E-mail: jorn.vatn@ntnu.n [NTNU, Production and Quality Engineering, 7491 Trondheim (Norway); Aven, Terje [University of Stavanger (Norway)

    2010-01-15

    The starting point for this paper is a traditional approach to maintenance optimization where an object function is used for optimizing maintenance intervals. The object function reflects maintenance cost, cost of loss of production/services, as well as safety costs, and is based on a classical cost-benefit analysis approach where a value of prevented fatality (VPF) is used to weight the importance of safety. However, the rationale for such an approach could be questioned. What is the meaning of such a VPF figure, and is it sufficient to reflect the importance of safety by calculating the expected fatality loss from VPF and potential loss of lives (PLL), as is done in cost-benefit analyses? Should the VPF be the same number for all types of accidents, or should it be increased in the case of multiple-fatality accidents to reflect gross accident aversion? In this paper, these issues are discussed. We conclude that we have to see beyond the expected values in situations with high safety impacts. A framework is presented which opens up a broader decision basis, covering considerations on the potential for gross accidents, the type of uncertainties, and lack of knowledge of important risk influencing factors. Decisions with a high safety impact are moved from the maintenance department to the 'Safety Board' for a broader discussion. In this way, we avoid the object function being used in a mechanical way to optimize the maintenance, with important safety-related decisions made implicitly and outside the normal arena for safety decisions, e.g. outside the traditional 'Safety Board'. A case study from the Norwegian railways is used to illustrate the discussions.
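
    A minimal numerical illustration of such an object function, assuming a power-law (Weibull-type) failure intensity and invented cost figures; the VPF-weighted term is exactly the safety contribution the authors argue should not be left to mechanical optimization:

```python
import numpy as np

def cost_rate(T, c_pm=1e4, c_cm=5e4, c_prod=2e5,
              vpf=2e7, pll_per_failure=1e-4, beta=2.5, eta=8.0):
    """Expected cost per year with preventive maintenance every T years.
    Expected failures per interval follow a power-law process, (T/eta)**beta
    (minimal-repair assumption); all figures here are illustrative only."""
    n_fail = (T / eta) ** beta
    safety = vpf * pll_per_failure * n_fail       # expected fatality cost (VPF x PLL)
    return (c_pm + (c_cm + c_prod) * n_fail + safety) / T

Ts = np.linspace(0.5, 10.0, 200)
best = Ts[np.argmin([cost_rate(T) for T in Ts])]
print(f"cost-minimizing maintenance interval: {best:.2f} years")
```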

  6. An approach to maintenance optimization where safety issues are important

    International Nuclear Information System (INIS)

    Vatn, Jorn; Aven, Terje

    2010-01-01

    The starting point for this paper is a traditional approach to maintenance optimization where an object function is used for optimizing maintenance intervals. The object function reflects maintenance cost, cost of loss of production/services, as well as safety costs, and is based on a classical cost-benefit analysis approach where a value of prevented fatality (VPF) is used to weight the importance of safety. However, the rationale for such an approach could be questioned. What is the meaning of such a VPF figure, and is it sufficient to reflect the importance of safety by calculating the expected fatality loss from VPF and potential loss of lives (PLL), as is done in cost-benefit analyses? Should the VPF be the same number for all types of accidents, or should it be increased in the case of multiple-fatality accidents to reflect gross accident aversion? In this paper, these issues are discussed. We conclude that we have to see beyond the expected values in situations with high safety impacts. A framework is presented which opens up a broader decision basis, covering considerations on the potential for gross accidents, the type of uncertainties, and lack of knowledge of important risk influencing factors. Decisions with a high safety impact are moved from the maintenance department to the 'Safety Board' for a broader discussion. In this way, we avoid the object function being used in a mechanical way to optimize the maintenance, with important safety-related decisions made implicitly and outside the normal arena for safety decisions, e.g. outside the traditional 'Safety Board'. A case study from the Norwegian railways is used to illustrate the discussions.

  7. An Optimal Path Computation Architecture for the Cloud-Network on Software-Defined Networking

    Directory of Open Access Journals (Sweden)

    Hyunhun Cho

    2015-05-01

    Full Text Available Legacy networks do not expose precise information about the network domain because of scalability, management and commercial reasons, which makes it very hard to compute an optimal path to a destination. To meet the new network requirements arising from today's changing ICT environment, the concept of software-defined networking (SDN) has been developed as a technological alternative that overcomes the limitations of the legacy network structure and introduces innovative concepts. The purpose of this paper is to propose an application that calculates the optimal paths for general data transmission and real-time audio/video transmission, which constitute the major services of the National Research & Education Network (NREN), in the SDN environment. The proposed SDN routing computation (SRC) application is designed and applied in a multi-domain network for the efficient use of resources, selection of the optimal path between multiple domains and optimal establishment of end-to-end connections.
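
    At its core, such a path-computation application runs a shortest-path search over the topology that SDN makes globally visible. A minimal sketch, with the link metric (delay, hop count, or a composite) left as an assumption:

```python
import heapq

def optimal_path(graph, src, dst):
    """Dijkstra over an adjacency map {node: [(neighbor, metric), ...]}.
    The metric could be delay for audio/video flows or hops for bulk data."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                              # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]                         # KeyError if dst is unreachable
    return [src] + path[::-1]
```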

  8. Approaching direct optimization of as-built lens performance

    Science.gov (United States)

    McGuire, James P.; Kuper, Thomas G.

    2012-10-01

    We describe a method approaching direct optimization of the rms wavefront error of a lens, including tolerances. By including the effect of tolerances in the error function, the designer can choose to improve the as-built performance with a fixed set of tolerances and/or reduce the cost of production lenses with looser tolerances. The method relies on the speed of differential tolerance analysis and has recently become practical due to the combination of continuing increases in computer hardware speed and multi-core processing. We illustrate the method's use on a Cooke triplet, a double Gauss, and two plastic mobile phone camera lenses.
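
    One standard way to fold tolerances into the error function, consistent with the differential tolerance analysis mentioned above (the notation here is generic, not the authors'), is a first-order approximation of the expected as-built wavefront error:

```latex
\mathbb{E}\bigl[W_{\text{as-built}}^{2}\bigr]\;\approx\;
W_{\text{nominal}}^{2}\;+\;\sum_{i}\Bigl(\frac{\partial W}{\partial t_{i}}\Bigr)^{2}\sigma_{i}^{2},
```

    where the t_i are toleranced parameters (tilts, decenters, thicknesses) with assumed standard deviations σ_i; the optimizer then minimizes this expectation rather than the nominal error alone.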

  9. A Hybrid Approach to the Optimization of Multiechelon Systems

    Directory of Open Access Journals (Sweden)

    Paweł Sitek

    2015-01-01

    Full Text Available In freight transportation there are two main distribution strategies: direct shipping and multi-echelon distribution. In direct shipping, vehicles, starting from a depot, bring their freight directly to the destination, while in multi-echelon systems, freight is delivered from the depot to the customers through intermediate points. Multi-echelon systems are particularly useful for logistic issues in a competitive environment. The paper presents a concept and application of a hybrid approach to modeling and optimization of the Multi-Echelon Capacitated Vehicle Routing Problem. Two optimization paradigms, mathematical programming (MP) and constraint logic programming (CLP), are integrated in one environment. In MP and CLP, constraints are treated in different ways and different methods are implemented; the two are combined to exploit the strengths of both. The proposed approach is particularly important for discrete decision models with an objective function and many discrete decision variables added up in multiple constraints. An implementation of the hybrid approach in the ECLiPSe system using the Eplex library is presented. The Two-Echelon Capacitated Vehicle Routing Problem (2E-CVRP) and its variants are shown as an illustrative example of the hybrid approach. The presented hybrid approach is compared with classical mathematical programming on the same benchmark data sets.

  10. Clustering Approaches for Pragmatic Two-Layer IoT Architecture

    Directory of Open Access Journals (Sweden)

    J. Sathish Kumar

    2018-01-01

    Full Text Available Connecting all devices through the Internet is now practical due to the Internet of Things. IoT promises numerous applications in the everyday life of common people, government bodies, business, and society as a whole. Collaboration among the devices in IoT to bring various applications into the real world is a challenging task. In this context, we introduce an application-based two-layer architectural framework for IoT which consists of a sensing layer and an IoT layer. For any real-time application, sensing devices play an important role. Both layers are required for accomplishing IoT-based applications. The success of any IoT-based application relies on efficient communication and utilization of the devices and the data acquired by the devices at both layers. Grouping these devices helps to achieve this, leading to the formation of clusters of devices at various levels. Clustering helps not only in collaboration but also in prolonging overall network lifetime. In this paper, we propose two clustering algorithms, one heuristic-based and one graph-based. The proposed clustering approaches are evaluated on an IoT platform using standard parameters and compared with different approaches reported in the literature.

  11. Exploring a model-driven architecture (MDA) approach to health care information systems development.

    Science.gov (United States)

    Raghupathi, Wullianallur; Umar, Amjad

    2008-05-01

    To explore the potential of the model-driven architecture (MDA) in health care information systems development, an MDA is conceptualized and developed for a health clinic system to track patient information. A prototype of the MDA is implemented using an advanced MDA tool. The UML provides the underlying modeling support in the form of the class diagram. The PIM-to-PSM transformation rules are applied to generate the prototype application from the model. The result of the research is a complete MDA methodology for developing health care information systems. Additional insights gained include the development of transformation rules and documentation of the challenges in the application of MDA to health care. Design guidelines for future MDA applications are described. The model has the potential for generalizability. The overall approach supports limited interoperability and portability. The research demonstrates the applicability of the MDA approach to health care information systems development. When properly implemented, it has the potential to overcome the challenges of platform (vendor) dependency, lack of open standards, interoperability, portability, scalability, and the high cost of implementation.

  12. Optimizing nitrogen fertilizer use: Current approaches and simulation models

    International Nuclear Information System (INIS)

    Baethgen, W.E.

    2000-01-01

    Nitrogen (N) is the most common limiting nutrient in agricultural systems throughout the world. Crops need sufficient available N to achieve optimum yields and adequate grain-protein content. Consequently, sub-optimal rates of N fertilizers typically cause lower economic benefits for farmers. On the other hand, excessive N fertilizer use may result in environmental problems such as nitrate contamination of groundwater and emission of N₂O and NO. In spite of the economic and environmental importance of good N fertilizer management, the development of optimum fertilizer recommendations is still a major challenge in most agricultural systems. This article reviews the approaches most commonly used for making N recommendations: expected yield level, soil testing and plant analysis (including quick tests). The paper introduces the application of simulation models that complement traditional approaches, and includes some examples of current applications in Africa and South America. (author)

  13. Taxes, subsidies and unemployment - a unified optimization approach

    Directory of Open Access Journals (Sweden)

    Erik Bajalinov

    2010-12-01

    Full Text Available Like a linear programming (LP) problem, a linear-fractional programming (LFP) problem can be usefully applied in a wide range of real-world applications. In the last few decades many research papers and monographs were published throughout the world in which authors (mainly mathematicians) investigated different theoretical and algorithmic aspects of LFP problems in various forms. In this paper we consider these two approaches to optimization (based on linear and linear-fractional objective functions) on the same feasible set, compare the results they lead to, and give an interpretation in terms of taxes, subsidies and manpower requirements. We show that in certain cases both approaches are closely connected with one another and may be fruitfully utilized simultaneously.
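
    For concreteness, the two objectives being compared have the forms below; when the denominator is positive over the feasible set, the classical Charnes-Cooper substitution reduces the fractional case to an ordinary LP (generic notation, not the paper's):

```latex
\text{LP: } \max_{x}\; c^{\top}x + c_{0}
\qquad\text{vs.}\qquad
\text{LFP: } \max_{x}\; \frac{c^{\top}x + c_{0}}{d^{\top}x + d_{0}},
\qquad Ax \le b,\; x \ge 0.
```

    Substituting t = 1/(dᵀx + d₀) and y = t·x turns the LFP into the linear program max cᵀy + c₀t subject to Ay ≤ bt, dᵀy + d₀t = 1, y ≥ 0, t > 0, which is what makes treating both objectives on the same feasible set practical.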

  14. Optimization of Partitioned Architectures to Support Soft Real-Time Applications

    DEFF Research Database (Denmark)

    Tamas-Selicean, Domitian; Pop, Paul

    2014-01-01

    In this paper we propose a new Tabu Search-based design optimization strategy for mixed-criticality systems implementing hard and soft real-time applications on the same platform. Our proposed strategy determines an implementation such that all hard real-time applications are schedulable and the quality of service of the soft real-time tasks is maximized. We have evaluated our strategy using an aerospace case study.

  15. Reliability optimization using multiobjective ant colony system approaches

    International Nuclear Information System (INIS)

    Zhao Jianhua; Liu Zhaoheng; Dao, M.-T.

    2007-01-01

    The multiobjective ant colony system (ACS) meta-heuristic has been developed to provide solutions for the reliability optimization problem of series-parallel systems. This type of problem involves selection of components with multiple choices and redundancy levels that produce maximum benefits, subject to cost and weight constraints at the system level. These are very common and realistic problems encountered in the conceptual design of many engineering systems. It is becoming increasingly important to develop efficient solutions to these problems because many mechanical and electrical systems are becoming more complex, even as development schedules get shorter and reliability requirements become very stringent. The multiobjective ACS algorithm offers distinct advantages for these problems compared with alternative optimization methods, and can be applied to a more diverse problem domain with respect to the type or size of the problems. Through the combination of probabilistic search, a multiobjective formulation of local moves, and the dynamic penalty method, the multiobjective ACSRAP allows us to obtain an optimal design solution very frequently and more quickly than with some other heuristic approaches. The proposed algorithm was successfully applied to the engineering design of a gearbox with multiple stages.

  16. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha

    2013-02-01

    This paper is devoted to the study of an extension of the dynamic programming approach which allows sequential optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure R(T), which is the number of unordered pairs of rows with different decisions in the decision table T. For a nonnegative real number β, we consider β-decision rules that localize rows in subtables of T with uncertainty at most β. Our algorithm constructs a directed acyclic graph Δβ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most β. The graph Δβ(T) allows us to describe the whole set of so-called irredundant β-decision rules. We can describe all irredundant β-decision rules with minimum length, and after that, among these rules, describe all rules with maximum coverage. We can also change the order of optimization. The consideration of irredundant rules only does not change the results of optimization. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.
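
    The uncertainty measure R(T) can be computed from decision counts alone: with N rows of which n_d share decision d, the number of unordered mixed-decision pairs is (N² − Σ_d n_d²)/2. A short sketch:

```python
from collections import Counter

def uncertainty(decisions):
    """R(T): number of unordered pairs of rows with different decisions."""
    counts = Counter(decisions)
    n = len(decisions)
    same = sum(c * c for c in counts.values())
    return (n * n - same) // 2

# Example: 3 rows labeled 'a' and 2 labeled 'b' -> 3 * 2 = 6 mixed pairs.
assert uncertainty(list("aaabb")) == 6
```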

  17. A robust optimization approach for energy generation scheduling in microgrids

    International Nuclear Information System (INIS)

    Wang, Ran; Wang, Ping; Xiao, Gaoxi

    2015-01-01

    Highlights: • A new uncertainty model is proposed for better describing unstable energy demands. • An optimization problem is formulated to minimize the cost of microgrid operations. • Robust optimization algorithms are developed to transform and solve the problem. • The proposed scheme can prominently reduce energy expenses. • Numerical results provide useful insights for future investment policy making. - Abstract: In this paper, a cost minimization problem is formulated to intelligently schedule energy generation for microgrids equipped with unstable renewable sources and combined heat and power (CHP) generators. In such systems, the fluctuating net demands (i.e., the electricity demands not balanced by renewable energies) and heat demands impose unprecedented challenges. To cope with the uncertain nature of net demand and heat demand, a new flexible uncertainty model is developed. Specifically, we introduce reference distributions according to predictions and field measurements and then define uncertainty sets to confine net and heat demands. The model allows the net demand and heat demand distributions to fluctuate around their reference distributions. Another difficulty in this problem is the indeterminate electricity market prices. We develop chance constraint approximations and robust optimization approaches to first transform and then solve the original problem. Numerical results based on real-world data evaluate the impacts of different parameters. It is shown that our energy generation scheduling strategy performs well and that the integration of combined heat and power (CHP) generators effectively reduces the system expenditure. Our research also helps shed some light on investment policy making for microgrids.
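
    One common way to formalize such a flexible uncertainty set (sketched here with assumed notation rather than the paper's exact definition) is a budgeted band around the reference distribution:

```latex
\mathcal{U}=\Bigl\{\,d \;:\; d_{t}=\hat{d}_{t}+\Delta_{t},\;
\lvert\Delta_{t}\rvert\le\varepsilon_{t}\ \forall t,\;
\sum_{t}\lvert\Delta_{t}\rvert/\varepsilon_{t}\le\Gamma\,\Bigr\},
```

    where \hat{d}_t is the reference (forecast) net or heat demand in period t, ε_t the allowed deviation, and Γ a budget-of-uncertainty parameter that tunes how conservative the schedule is.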

  18. Evolutionary algorithms approach for integrated bioenergy supply chains optimization

    International Nuclear Information System (INIS)

    Ayoub, Nasser; Elmoshi, Elsayed; Seki, Hiroya; Naka, Yuji

    2009-01-01

    In this paper, we propose an optimization model and solution approach for designing and evaluating integrated systems of bioenergy production supply chains (SCs) at the local level. Designing SCs that simultaneously utilize a set of bio-resources is the complicated task considered here. The complication arises from the different natures and sources of the bio-resources used in bioenergy production, i.e., wet or dry, agricultural or industrial, etc. Moreover, the different concerns that decision makers should take into account to overcome the trade-off anxieties of social stakeholders and investors, i.e., social, environmental and economic factors, were considered through the options of multi-criteria optimization. The first part of this research was introduced in earlier work explaining the general Bioenergy Decision System gBEDS [Ayoub N, Martins R, Wang K, Seki H, Naka Y. Two levels decision system for efficient planning and implementation of bioenergy production. Energy Convers Manage 2007;48:709-23]. In this paper, a brief introduction to and emphasis on gBEDS is given; the optimization model is presented and followed by a case study on designing a supply chain of nine bio-resources at Iida city in the middle part of Japan.

  19. An optimization approach for fitting canonical tensor decompositions.

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel M. (Sandia National Laboratories, Albuquerque, NM); Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
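
    For a third-order tensor, the objective and one gradient block can be written in standard CP notation as:

```latex
f(\mathbf{A},\mathbf{B},\mathbf{C})
 =\tfrac{1}{2}\bigl\lVert \mathcal{X}-[\![\mathbf{A},\mathbf{B},\mathbf{C}]\!]\bigr\rVert_{F}^{2},
\qquad
\frac{\partial f}{\partial \mathbf{A}}
 =-\mathbf{X}_{(1)}(\mathbf{C}\odot\mathbf{B})
  +\mathbf{A}\bigl[(\mathbf{B}^{\top}\mathbf{B})\ast(\mathbf{C}^{\top}\mathbf{C})\bigr],
```

    where X₍₁₎ is the mode-1 unfolding, ⊙ the Khatri-Rao product and ∗ the Hadamard product. The matricized-tensor product X₍₁₎(C⊙B) and the small Gram matrices are the same quantities an ALS step forms, which is why the full gradient costs no more than one ALS iteration.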

  20. Optimization of minoxidil microemulsions using fractional factorial design approach.

    Science.gov (United States)

    Jaipakdee, Napaphak; Limpongsa, Ekapol; Pongjanyakul, Thaned

    2016-01-01

    The objective of this study was to apply fractional factorial and multi-response optimization designs using the desirability function approach for developing topical microemulsions. Minoxidil (MX) was used as a model drug. Limonene was used as the oil phase. Based on solubility, Tween 20 and caprylocaproyl polyoxyl-8 glycerides were selected as surfactants; propylene glycol and ethanol were selected as co-solvents in the aqueous phase. Experiments were performed according to a two-level fractional factorial design to evaluate the effects of the independent variables, Tween 20 concentration in the surfactant system (X1), surfactant concentration (X2), ethanol concentration in the co-solvent system (X3), and limonene concentration (X4), on the MX solubility (Y1), permeation flux (Y2), lag time (Y3), and deposition (Y4) of MX microemulsions. It was found that Y1 increased with increasing X3 and decreasing X2 and X4, whereas Y2 increased with decreasing X1, X2 and increasing X3. While Y3 was not affected by these variables, Y4 increased with decreasing X1 and X2. Three regression equations were obtained and used to calculate predicted values of the responses Y1, Y2 and Y4. The predicted values matched the experimental values reasonably well, with high determination coefficients. Using the optimal desirability function, an optimized microemulsion demonstrating the highest MX solubility, permeation flux and skin deposition was confirmed at low levels of X1, X2 and X4 but a high level of X3.
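
    A minimal sketch of the desirability step (Derringer-Suich style): each larger-is-better response is mapped to [0, 1] and the maps are combined by a geometric mean. The target ranges and response values below are placeholders, not the paper's data:

```python
import numpy as np

def desirability_max(y, low, high, w=1.0):
    """Larger-is-better desirability: 0 below `low`, 1 above `high`."""
    d = np.clip((y - low) / (high - low), 0.0, 1.0)
    return d ** w

def overall_desirability(responses, bounds):
    """Geometric mean of the individual desirabilities."""
    ds = [desirability_max(y, lo, hi) for y, (lo, hi) in zip(responses, bounds)]
    return float(np.prod(ds) ** (1.0 / len(ds)))

# Hypothetical responses: solubility, permeation flux, skin deposition.
D = overall_desirability([28.0, 95.0, 14.0],
                         [(10, 30), (50, 120), (5, 20)])
print(f"overall desirability: {D:.3f}")
```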

  1. Lightweight enterprise architectures

    CERN Document Server

    Theuerkorn, Fenix

    2004-01-01

    STATE OF ARCHITECTURE: Architectural Chaos; Relation of Technology and Architecture; The Many Faces of Architecture; The Scope of Enterprise Architecture; The Need for Enterprise Architecture; The History of Architecture; The Current Environment; Standardization Barriers; The Need for Lightweight Architecture in the Enterprise; The Cost of Technology; The Benefits of Enterprise Architecture; The Domains of Architecture; The Gap between Business and IT; Where Does LEA Fit?; LEA's Framework; Frameworks, Methodologies, and Approaches; The Framework of LEA; Types of Methodologies; Types of Approaches; Actual System Environment ...

  2. Optimizing Performance of Combustion Chemistry Solvers on Intel's Many Integrated Core (MIC) Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Sitaraman, Hariswaran [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Grout, Ray W [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-06-09

    This work investigates novel algorithm designs and optimization techniques for restructuring chemistry integrators in zero- and multidimensional combustion solvers, which can then be effectively used on the emerging generation of Intel's Many Integrated Core/Xeon Phi processors. These processors offer increased computing performance via a large number of lightweight cores at relatively lower clock speeds compared to the traditional processors (e.g. Intel Sandy Bridge/Ivy Bridge) used in current supercomputers. This style of processor can be productively used for chemistry integrators, which form a costly part of computational combustion codes, in spite of the relatively lower clock speeds. Performance commensurate with traditional processors is achieved here through the combination of careful memory layout, exposing multiple levels of fine-grained parallelism, and extensive use of vendor-supported libraries (Cilk Plus and the Math Kernel Library). Important optimization techniques for efficient memory usage and vectorization have been identified and quantified. These optimizations resulted in a speedup factor of ~3 using the Intel 2013 compiler and ~1.5 using the Intel 2017 compiler for large chemical mechanisms, compared to the unoptimized version on the Intel Xeon Phi. The strategies, especially with respect to memory usage and vectorization, should also be beneficial for general-purpose computational fluid dynamics codes.

  3. An Informatics Approach to Demand Response Optimization in Smart Grids

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Aman, Saima; Cao, Baohua; Giakkoupis, Mike; Kumbhare, Alok; Zhou, Qunzhi; Paul, Donald; Fern, Carol; Sharma, Aditya; Prasanna, Viktor K

    2011-03-03

    Power utilities are increasingly rolling out “smart” grids with the ability to track consumer power usage in near real-time using smart meters that enable bidirectional communication. However, the true value of smart grids is unlocked only when the veritable explosion of data that will become available is ingested, processed, analyzed and translated into meaningful decisions. These include the ability to forecast electricity demand, respond to peak load events, and improve sustainable use of energy by consumers, and are made possible by energy informatics. Information and software system techniques for a smarter power grid include pattern mining and machine learning over complex events and integrated semantic information, distributed stream processing for low-latency response, cloud platforms for scalable operations, and privacy policies to mitigate information leakage in an information-rich environment. Such an informatics approach is being used in the DoE-sponsored Los Angeles Smart Grid Demonstration Project, and the resulting software architecture will lead to an agile and adaptive Los Angeles Smart Grid.

  4. Optimizing the Performance of Reactive Molecular Dynamics Simulations for Multi-core Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Aktulga, Hasan Metin [Michigan State Univ., East Lansing, MI (United States); Coffman, Paul [Argonne National Lab. (ANL), Argonne, IL (United States); Shan, Tzu-Ray [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knight, Chris [Argonne National Lab. (ANL), Argonne, IL (United States); Jiang, Wei [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-12-01

    Hybrid parallelism allows high performance computing applications to better leverage the increasing on-node parallelism of modern supercomputers. In this paper, we present a hybrid parallel implementation of the widely used LAMMPS/ReaxC package, where the construction of bonded and nonbonded lists and the evaluation of complex ReaxFF interactions are implemented efficiently using OpenMP parallelism. Additionally, the performance of the QEq charge equilibration scheme is examined and a dual-solver is implemented. We present the performance of the resulting ReaxC-OMP package on Mira, a state-of-the-art IBM Blue Gene/Q supercomputer with a multi-core architecture. For system sizes ranging from 32 thousand to 16.6 million particles, speedups in the range of 1.5-4.5x are observed with the new ReaxC-OMP software. Sustained performance improvements have been observed for up to 262,144 cores (1,048,576 processes) of Mira, with a weak scaling efficiency of 91.5% in larger simulations containing 16.6 million particles.

  5. Optimal Integration of Intermittent Renewables: A System LCOE Stochastic Approach

    Directory of Open Access Journals (Sweden)

    Carlo Lucheroni

    2018-03-01

    Full Text Available We propose a system-level approach to valuing the impact on costs of the integration of intermittent renewable generation in a power system, based on expected breakeven cost and breakeven cost risk. To do this, we carefully reconsider the definition of the Levelized Cost of Electricity (LCOE) when extended to non-dispatchable generation, by examining extra costs and gains originated by the costly management of random power injections. We are thus led to define a ‘system LCOE’ as a system-dependent LCOE that properly takes intermittent generation into account. In order to include breakeven cost risk we further extend this deterministic approach to a stochastic setting, by introducing a ‘stochastic system LCOE’. This extension allows us to discuss the optimal integration of intermittent renewables from a broad, system-level point of view. This paper thus aims to provide power producers and policy makers with a new methodological scheme, still based on the LCOE but updating this valuation technique for current energy system configurations characterized by a large share of non-dispatchable production. Quantifying and optimizing the impact of intermittent renewables integration on power system costs, risk and CO2 emissions, the proposed methodology can be used as a powerful tool of analysis for assessing environmental and energy policies.
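
    The conventional LCOE the authors extend is the ratio of discounted lifetime costs to discounted lifetime energy (notation assumed here); the 'system LCOE' then augments the numerator with the system-dependent costs of managing random injections:

```latex
\mathrm{LCOE}=\frac{\sum_{t}\bigl(I_{t}+O_{t}+F_{t}\bigr)(1+r)^{-t}}
                   {\sum_{t}E_{t}\,(1+r)^{-t}},
```

    with I_t, O_t and F_t the investment, O&M and fuel costs in year t, E_t the energy delivered and r the discount rate; for non-dispatchable plant the numerator additionally carries balancing and profile costs, which is what makes the measure system dependent.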

  6. Optimal Subinterval Selection Approach for Power System Transient Stability Simulation

    Directory of Open Access Journals (Sweden)

    Soobae Kim

    2015-10-01

    Full Text Available Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden and achieve accurate simulation responses as well. The performance of the proposed method is demonstrated with the GSO 37-bus system.

  7. A statistical approach to optimizing concrete mixture design.

    Science.gov (United States)

    Ahmad, Shamsad; Alghamdi, Saeid A

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3^3). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m^3), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.

  8. A Statistical Approach to Optimizing Concrete Mixture Design

    Directory of Open Access Journals (Sweden)

    Shamsad Ahmad

    2014-01-01

    Full Text Available A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3^3). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m^3), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.
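
    The regression step is straightforward to reproduce; below, a linear model with two-factor interactions is fitted by least squares to invented strength data over the same three factors (the paper fits a fuller polynomial to its 27 mixtures):

```python
import numpy as np

# Hypothetical design points: w/c ratio, cementitious content (kg/m^3),
# fine/total aggregate ratio, with assumed 28-day strengths (MPa).
X_raw = np.array([
    [0.38, 350, 0.35], [0.43, 375, 0.40], [0.48, 400, 0.45],
    [0.38, 400, 0.40], [0.48, 350, 0.40], [0.43, 350, 0.45],
    [0.43, 400, 0.35], [0.38, 375, 0.45], [0.48, 375, 0.35]])
y = np.array([52.0, 45.5, 39.0, 55.0, 41.0, 44.0, 48.0, 50.5, 42.5])

def features(X):
    """Intercept, main effects, and two-factor interaction terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3])

coef, *_ = np.linalg.lstsq(features(X_raw), y, rcond=None)
pred = features(np.array([[0.40, 380, 0.38]])) @ coef
print(f"predicted strength: {pred[0]:.1f} MPa")
```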

  9. Toward an Agile Approach to Managing the Effect of Requirements on Software Architecture during Global Software Development

    OpenAIRE

    Alsahli, Abdulaziz; Khan, Hameed; Alyahya, Sultan

    2016-01-01

    Requirement change management (RCM) is a critical activity during software development because poor RCM results in the occurrence of defects, thereby resulting in software failure. To achieve RCM, efficient impact analysis is mandatory. A common repository is a good approach to maintaining changed requirements, enabling reuse and reducing effort. Thus, a better approach is needed to tailor knowledge for better change management of requirements and architecture during global software development (GSD). The o...

  10. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    International Nuclear Information System (INIS)

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

    An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, which is referred to as the 'failure probability function (FPF)'. It expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis. The required computational effort for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology.
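
    The weighted construction admits a compact statement: with samples x_i drawn once from an importance density h, the failure probability at any design d is re-estimated by reweighting the same sample set (generic importance-sampling notation, assumed here):

```latex
P_{f}(d)=\int I_{F}(x)\,f(x\mid d)\,\mathrm{d}x
\;\approx\;\frac{1}{N}\sum_{i=1}^{N} I_{F}(x_{i})\,
\frac{f(x_{i}\mid d)}{h(x_{i})},\qquad x_{i}\sim h(\cdot),
```

    where I_F is the failure-domain indicator; a single simulation thus yields the failure probability function over the whole design domain, so each optimization iteration needs only one reliability analysis.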

  11. An end-to-end security auditing approach for service oriented architectures

    NARCIS (Netherlands)

    Azarmi, M.; Bhargava, B.; Angin, P.; Ranchal, R.; Ahmed, N.; Sinclair, A.; Linderman, M.; Ben Othmane, L.

    2012-01-01

    Service-Oriented Architecture (SOA) is becoming a major paradigm for distributed application development in the recent explosion of Internet services and cloud computing. However, SOA introduces new security challenges not present in the single-hop client-server architectures due to the involvement

  12. Investigating the Role of Cultural Capital and Organisational Habitus in Architectural Education: A Case Study Approach

    Science.gov (United States)

    Payne, Jennifer Chamberlin

    2015-01-01

    Compared to other professions in recent years, architecture has lagged woefully behind in attracting and retaining a diverse population, as defined by class, race and gender. This research investigates the extent to which architecture culturally reproduces itself, specifically examining the socialisation process of students into the subculture of…

  13. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations.

    Science.gov (United States)

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts.

  14. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations

    Science.gov (United States)

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts. PMID:25993414

  15. A parallel 3-D discrete wavelet transform architecture using pipelined lifting scheme approach for video coding

    Science.gov (United States)

    Hegde, Ganapathi; Vaya, Pukhraj

    2013-10-01

    This article presents a parallel architecture for 3-D discrete wavelet transform (3-DDWT). The proposed design is based on the 1-D pipelined lifting scheme. The architecture is fully scalable beyond the present coherent Daubechies filter bank (9, 7). This 3-DDWT architecture has advantages such as no group of pictures restriction and reduced memory referencing. It offers low power consumption, low latency and high throughput. The computing technique is based on the concept that lifting scheme minimises the storage requirement. The application specific integrated circuit implementation of the proposed architecture is done by synthesising it using 65 nm Taiwan Semiconductor Manufacturing Company standard cell library. It offers a speed of 486 MHz with a power consumption of 2.56 mW. This architecture is suitable for real-time video compression even with large frame dimensions.
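
    The lifting factorization behind such architectures is the textbook CDF 9/7 scheme. A plain software sketch follows (one of several scaling conventions in use, and of course not the authors' pipelined hardware):

```python
import numpy as np

# Standard CDF 9/7 lifting coefficients.
ALPHA, BETA = -1.586134342, -0.05298011854
GAMMA, DELTA, K = 0.8829110762, 0.4435068522, 1.149604398

def _predict(s, d, coef):
    # d[i] += coef * (s[i] + s[i+1]), mirroring the last sample at the border
    d += coef * (s + np.r_[s[1:], s[-1:]])

def _update(s, d, coef):
    # s[i] += coef * (d[i-1] + d[i]), mirroring the first sample at the border
    s += coef * (d + np.r_[d[:1], d[:-1]])

def dwt97_1d(x):
    """One level of the forward 1-D CDF 9/7 DWT via lifting.
    Assumes even length; symmetric extension at the borders."""
    x = np.asarray(x, dtype=float)
    s, d = x[0::2].copy(), x[1::2].copy()         # even / odd polyphase split
    _predict(s, d, ALPHA)
    _update(s, d, BETA)
    _predict(s, d, GAMMA)
    _update(s, d, DELTA)
    return K * s, d / K                           # approximation, detail
```

    A 3-D transform applies this kernel along rows, columns and frames; pipelining the four lifting steps is what gives the hardware its throughput, while the in-place stencils keep the storage requirement low.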

  16. Approach to design neural cryptography: a generalized architecture and a heuristic rule.

    Science.gov (United States)

    Mu, Nankun; Liao, Xiaofeng; Huang, Tingwen

    2013-06-01

    Neural cryptography, a type of public key exchange protocol, is widely considered an effective method for sharing a common secret key between two neural networks over public channels. How to design neural cryptography remains a great challenge. In this paper, in order to provide an approach to solving this challenge, a generalized network architecture and a significant heuristic rule are designed. The proposed generic framework is named the tree state classification machine (TSCM), which extends and unifies the existing structures, i.e., the tree parity machine (TPM) and the tree committee machine (TCM). Furthermore, we carefully study and find that the heuristic rule can improve the security of TSCM-based neural cryptography. Therefore, TSCM and the heuristic rule can guide us in designing a great number of effective neural cryptography candidates, among which it is possible to achieve more secure instances. Significantly, in the light of TSCM and the heuristic rule, we further show that our designed neural cryptography outperforms TPM (the most secure model at present) in security. Finally, a series of numerical simulation experiments are provided to verify the validity and applicability of our results.
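
    For orientation, the baseline TPM protocol that TSCM generalizes can be sketched as follows; K, N, L and the Hebbian rule are the standard textbook choices, not the paper's TSCM specifics:

```python
import numpy as np

K, N, L = 3, 100, 4            # hidden units, inputs per unit, weight bound
rng = np.random.default_rng(0)

def tpm_output(w, x):
    """sigma_k = sign(w_k . x_k); tau is the product of the sigmas."""
    sigma = np.sign((w * x).sum(axis=1))
    sigma[sigma == 0] = -1
    return sigma, int(np.prod(sigma))

def hebbian(w, x, sigma, tau):
    """Update only the units that agreed with the output; clip to [-L, L]."""
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

wa = rng.integers(-L, L + 1, size=(K, N))         # party A's secret weights
wb = rng.integers(-L, L + 1, size=(K, N))         # party B's secret weights
steps = 0
while not np.array_equal(wa, wb):
    x = rng.choice([-1, 1], size=(K, N))          # public random input
    sa, ta = tpm_output(wa, x)
    sb, tb = tpm_output(wb, x)
    if ta == tb:                                  # learn only on agreement
        hebbian(wa, x, sa, ta)
        hebbian(wb, x, sb, tb)
    steps += 1
print(f"synchronized after {steps} exchanged inputs")
```

    After synchronization the shared weights serve as the key; TSCM varies the tree topology and unit types, and the paper's heuristic rule steers those choices toward more attack-resistant instances.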

  17. Evidence and speculation: reimagining approaches to architecture and research within the paediatric hospital.

    Science.gov (United States)

    McLaughlan, Rebecca; Pert, Alan

    2017-11-25

    As the dominant research paradigm within the construction of contemporary healthcare facilities, evidence-based design (EBD) will increasingly impact our expectations of what hospital architecture should be. Research methods within EBD focus on prototyping incremental advances and evaluating what has already been built. Yet medical care is a rapidly evolving system; changes to technology, workforce composition, patient demographics and funding models can create rapid and unpredictable changes to medical practice and modes of care. This dynamism has the potential to curtail or negate the usefulness of current best practice approaches. To imagine new directions for the role of the hospital in society, or innovative ways in which the built environment might support well-being, requires a model that can project beyond existing constraints. Speculative design employs a design-based research methodology to imagine alternative futures and uses the artefacts created through this process to enable broader critical reflection on existing practices. This paper examines the contribution of speculative design within the context of the paediatric hospital as a means of facilitating critical reflection regarding the design of new healthcare facilities. While EBD is largely limited by what has already been built, speculative design offers a complementary research method to meet this limitation.

  18. A Service Oriented Architecture Approach to Achieve Interoperability between Immunization Information Systems in Iran.

    Science.gov (United States)

    Hosseini, Masoud; Ahmadi, Maryam; Dixon, Brian E

    2014-01-01

    Clinical decision support (CDS) systems can support vaccine forecasting and immunization reminders; however, immunization decision-making requires data from fragmented, independent systems. Interoperability and accurate data exchange between immunization information systems (IIS) is an essential factor in utilizing immunization CDS systems. Service oriented architecture (SOA) and Health Level 7 (HL7) are dominant standards for web-based exchange of clinical information. We implemented a system based on SOA and HL7 v3 to support immunization CDS in Iran. We evaluated system performance by exchanging 1500 immunization records for roughly 400 infants between two IISs. System turnaround time is less than a minute for synchronous operation calls, and the retrieved immunization histories of infants were always identical in the different systems. CDS-generated reports were in accordance with immunization guidelines, and the calculations of next visit times were accurate. Interoperability is rare or nonexistent between IISs. Since inter-state data exchange is rare in the United States, this approach could be a good prototype for achieving interoperability of immunization information.

  19. A Cut-and-Paste Approach to 3D Graphene-Oxide-Based Architectures.

    Science.gov (United States)

    Luo, Chong; Yeh, Che-Ning; Baltazar, Jesus M Lopez; Tsai, Chao-Lin; Huang, Jiaxing

    2018-04-01

    Properly cut sheets can be converted into complex 3D structures by three basic operations, folding, bending, and pasting, to render new functions. Folding and bending are extensively employed in crumpling, origami, and pop-up fabrications of 3D structures. Pasting joins different parts of a material together and can create new geometries that are fundamentally unattainable by folding and bending. However, it has been much less explored, likely due to the limited choice of weldable thin-film materials and residue-free glues. Here it is shown that graphene oxide (GO) paper is one such suitable material. Stacked GO sheets can be readily loosened up and even redispersed in water, and upon drying they restack to form solid structures. Therefore, water can be utilized to heal local damage, glue separated pieces, and release internal stress in bent GO papers to fix their shapes. Complex and dynamic 3D GO architectures can thus be fabricated by a cut-and-paste approach, which is also applicable to GO-based hybrids with carbon nanotubes or clay sheets.

  20. OSSA - An optimized approach to severe accident management: EPR application

    International Nuclear Information System (INIS)

    Sauvage, E. C.; Prior, R.; Coffey, K.; Mazurkiewicz, S. M.

    2006-01-01

    There is a recognized need to provide nuclear power plant technical staff with structured guidance for response to a potential severe accident condition involving core damage and potential release of fission products to the environment. Over the past ten years, many plants worldwide have implemented such guidance for their emergency technical support center teams, either by following one of the generic approaches or by developing fully independent approaches. There are many lessons to be learned from the experience of the past decade in developing, implementing, and validating severe accident management guidance. Also, though numerous basic approaches exist which share common principles, there are differences in the methodology and application of the guidelines. AREVA/Framatome-ANP is developing an optimized approach to severe accident management guidance in a project called OSSA ('Operating Strategies for Severe Accidents'). There are still numerous operating power plants which have yet to implement severe accident management programs. For these, the option to use an updated approach which makes full use of lessons learned and experience is seen as a major advantage. Very few of the current approaches cover all operating plant states, including shutdown states with the primary system closed and open. Although it is not necessary to develop an entirely new approach in order to add this capability, the opportunity has been taken to develop revised full-scope guidance covering all plant states, in addition to the fuel in the fuel building. The EPR includes at the design phase systems and measures to minimize the risk of severe accidents and to mitigate such potential scenarios. This presents a difference in comparison with existing plants, for which severe accidents were not considered in the design. Though developed for all types of plants, OSSA will also be applied to the EPR, with adaptations designed to take into account its favourable situation in that field.

  1. Optimization of the Coupled Cluster Implementation in NWChem on Petascale Parallel Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Anisimov, Victor; Bauer, Gregory H.; Chadalavada, Kalyana; Olson, Ryan M.; Glenski, Joseph W.; Kramer, William T.; Apra, Edoardo; Kowalski, Karol

    2014-09-04

    The coupled cluster singles and doubles (CCSD) algorithm has been optimized in the NWChem software package. This modification alleviated the communication bottleneck and provided a 2- to 5-fold speedup in the CCSD iteration time, depending on the problem size and available memory. Sustained 0.60 petaflop/sec performance on a CCSD(T) calculation has been obtained on NCSA Blue Waters. This number included all stages of the calculation from initialization to termination: iterative computation of single and double excitations and perturbative accounting for triple excitations. In the perturbative triples section alone, the computation maintained a 1.18 petaflop/sec performance level. CCSD computations have been performed on Guanine-Cytosine deoxydinucleotide monophosphate (GC-dDMP) to probe the conformational energy difference of a DNA single strand in A- and B-conformations. The computation revealed a significant discrepancy between CCSD and classical force fields in the prediction of the relative energy of the A- and B-conformations of GC-dDMP.

  2. Analysis and Optimization of Mixed-Criticality Applications on Partitioned Distributed Architectures

    DEFF Research Database (Denmark)

    Tamas-Selicean, Domitian; Marinescu, S. O.; Pop, Paul

    2012-01-01

    We assume that applications are scheduled using Static Cyclic Scheduling (SCS) or Fixed-Priority Preemptive Scheduling (FPS) within predefined time slots, allocated on each processor. At the communication level, TTEthernet uses the concept of virtual links for the separation of mixed-criticality messages. TTEthernet integrates three types of traffic: Time-Triggered (TT) messages, transmitted based on schedule tables, Rate Constrained (RC) messages, transmitted if there are no TT messages, and Best Effort (BE) messages. We are interested in analysis and optimization methods and tools, which decide the mapping of tasks to PEs, the sequence and length of the time partitions on each PE and the schedule tables of the SCS tasks and TT messages, such that the applications are schedulable and the response times of FPS tasks and RC messages are minimized. We have proposed a Tabu Search-based meta-heuristic to solve this optimization problem.

  3. Architecture and method for optimization of cloud resources used in software testing

    Directory of Open Access Journals (Sweden)

    Joana Coelho Vigário

    2016-03-01

    Full Text Available Nowadays, systems can evolve quickly, and this growth brings with it, for example, the production of new features or even changes of system perspective required by the stakeholders. These conditions call for software testing in order to validate the systems. Running a large battery of tests sequentially can take hours. However, tests can run faster in a distributed environment with rapid availability of pre-configured systems, such as cloud computing. There is increasing demand for automation of the entire process, including integration, build, running the tests, and management of cloud resources. This paper aims to demonstrate the applicability of the practice of continuous integration (CI) in information systems for automating the build and software testing performed in a distributed cloud-computing environment, in order to achieve optimization and elasticity of the resources provided by the cloud.
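
    To make the idea concrete, the sketch below shows the core of such a setup in miniature: a battery of independent tests dispatched concurrently to a pool of workers, as one might do on elastically provisioned cloud instances. The test file names, the pytest invocation, and the worker count are illustrative assumptions, not details from the paper.

    ```python
    # Minimal sketch: run a battery of independent tests concurrently.
    # `run_test`, the test list, and the pytest call are placeholders.
    import concurrent.futures
    import subprocess

    TESTS = [f"suite/test_{i}.py" for i in range(40)]  # hypothetical test files

    def run_test(path: str) -> tuple:
        """Run one test in a subprocess; return (test, passed)."""
        result = subprocess.run(["pytest", path], capture_output=True)
        return path, result.returncode == 0

    if __name__ == "__main__":
        # Worker count would track the number of cloud instances provisioned.
        with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
            for test, passed in pool.map(run_test, TESTS):
                print(f"{test}: {'PASS' if passed else 'FAIL'}")
    ```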

  4. A conceptual approach to approximate tree root architecture in infinite slope models

    Science.gov (United States)

    Schmaltz, Elmar; Glade, Thomas

    2016-04-01

    Vegetation-related properties - particularly tree root distribution and the associated hydrological and mechanical effects on the underlying soil mantle - are commonly not considered in infinite slope models. Indeed, from a geotechnical point of view, these effects appear difficult to reproduce reliably in a physically-based modelling approach. The growth of a tree and the expansion of its root architecture are directly connected with both intrinsic properties, such as species and age, and extrinsic factors, like topography, availability of nutrients, climate, and soil type. These parameters control four main aspects of tree root architecture: 1) type of rooting; 2) maximum growing distance from the tree stem (radius r); 3) maximum growing depth (height h); and 4) potential deformation of the root system. Geometric solids are able to approximate the distribution of a tree root system. The objective of this paper is to investigate whether it is possible to implement root systems, and the connected hydrological and mechanical attributes, sufficiently in a 3-dimensional slope stability model. Here, a spatio-dynamic vegetation module should cope with the demands of performance, computation time, and significance. However, in this presentation we focus only on the distribution of roots. The assumption is that the horizontal root distribution around a tree stem on a 2-dimensional plane can be described by a circle with the stem located at the centroid and a distinct radius r that depends on age and species. We classified three main types of tree root systems and reproduced the species-age-related root distribution with three respective mathematical solids in a synthetic 3-dimensional hillslope setting. Thus, two solids in a Euclidean space were distinguished to represent the three root systems: i) cylinders with radius r and height h, whilst the dimension of the latter defines the shape of a taproot system or a shallow-root system respectively; ii) elliptic

  5. Electrospun fibers for high performance anodes in microbial fuel cells. Optimizing materials and architecture

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Shuiliang

    2010-04-15

    From the results above, the porosity and pore size of the fiber mat are of utmost importance for anode performance in MFCs. Since curved or helical fibers can lead to higher porosity in the fiber mat, a novel 3D porous architecture, the nanospring, was designed as a high-performance anode structure for future MFCs. Polymeric nanosprings were prepared by bicomponent electrospinning. The mechanism of polymeric nanospring formation was investigated by coaxial electrospinning of a rigid polymer, i.e. Nomex® or polysulfonamide (PSA), with a flexible polymer, i.e. thermoplastic polyurethane (TPU). The results indicated that nanospring formation is attributed to longitudinal compressive forces, which result from the different shrinkages of the rigid and flexible polymer components, together with a good electrical conductivity of one of the polymer solutions in the coaxial electrospinning system. Modified electrospinning techniques, i.e. off-centered electrospinning and side-by-side electrospinning, are much more effective than coaxial electrospinning for generating polymer springs or helical structures, because of the higher longitudinal compressive forces derived from the lopsided elastic forces. An aligned nanofiber mat with a high percentage of nanosprings shows higher elongation and a higher storage modulus below the glass transition temperature (Tg) than one with straight fibers. The nanospring or helical shape preserves much void space in the mat. It would be a potential architecture for highly efficient anodes in future MFCs. (orig.)

  6. EVALUATING AND REFINING THE ‘ENTERPRISE ARCHITECTURE AS STRATEGY’ APPROACH AND ARTEFACTS

    Directory of Open Access Journals (Sweden)

    M. De Vries

    2012-01-01

    Full Text Available

    Enterprise Architecture (EA) is a new discipline that has emerged from the need to create a holistic view of an enterprise, and thereby to discover business/IT integration and alignment opportunities across enterprise structures. Previous EA value propositions that merely focus on IT cost reductions will no longer convince management to invest in EA. Today, EA should enable business strategy in the organisation to create value. This ability resides in doing enterprise optimisation through process standardisation and integration. In order to do this, a new approach is required to integrate EA into the strategy planning process of the organisation.
    This article explores the use of three key artefacts – operating models, core diagrams, and an operating maturity assessment as defined by Ross, Weill & Robertson [1] – as the basis of this new approach. Action research is applied to a research group to obtain qualitative feedback on the practicality of the artefacts.


  7. Contaminated Land Remediation on decommissioned nuclear facilities: an optimized approach

    International Nuclear Information System (INIS)

    Sauer, Emilie

    2016-01-01

    The Monts d'Arree site, located at Brennilis in Brittany, France, is a former 70 MWe heavy-water reactor. EDF is now in charge of its decommissioning. The effluent treatment facility (STE) is currently being dismantled. As the future use of the site will exclude any nuclear activity, EDF is taking site release into consideration, and a management strategy for the land and soil is therefore needed. An optimized approach for the STE is being proposed to the French regulator. In France, there is no specific regulation related to contaminated land (either radiologically or chemically contaminated). The French Nuclear Safety Authority's doctrine for radioactively contaminated land is a reference approach which involves complete clean-up, removing any trace of artificial radioactivity in the ground. If technical difficulties are encountered or the quantity of radioactive waste produced is too large, an optimised clean-up can be implemented. EDF has been engaged since 2008 in drawing up a common guideline with the other French nuclear operators (CEA and AREVA). The operators' guideline proposed the first steps to define how to optimise nuclear waste and to carry out a cost-benefit analysis, in accordance with the IAEA's prescriptions. Historically, various incidents involving effluent drum spills caused radiological contamination in the building platform and the underlying soil. While conducting the decontamination works in 2004/2005, it was impossible to remove all contamination (which went deeper than expected). A large characterization campaign was carried out in order to map the contamination. For the site investigation, 34 boreholes were drilled from 2 to 5 m under the building platform and 98 samples were analyzed to search for gamma, beta and alpha emitters. With the results, the contamination was mapped using a geostatistical approach developed by Geovariances™. Main results were: - Soils are

  8. Dynamic programming approach for partial decision rule optimization

    KAUST Repository

    Amin, Talha

    2012-10-04

    This paper is devoted to the study of an extension of the dynamic programming approach which allows optimization of partial decision rules relative to length or coverage. We introduce an uncertainty measure J(T), which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules (partial decision rules) that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The graph Δγ(T) allows us to describe the whole set of so-called irredundant γ-decision rules. We can optimize such a set of rules according to length or coverage. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.
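
    As a concrete illustration of the uncertainty measure described above (a minimal sketch; representing the decision table as a plain list of row decisions is our simplification):

    ```python
    # J(T) = |rows of T| - |rows carrying the most common decision of T|.
    # Partitioning of a subtable stops once J(T) <= gamma.
    from collections import Counter

    def uncertainty(decisions: list) -> int:
        """Uncertainty measure J(T) over a subtable's row decisions."""
        if not decisions:
            return 0
        return len(decisions) - Counter(decisions).most_common(1)[0][1]

    # Example: 6 rows, most common decision 'a' occurs 4 times -> J(T) = 2,
    # so for gamma >= 2 this subtable would not be partitioned further.
    rows = ["a", "a", "b", "a", "c", "a"]
    gamma = 2
    print(uncertainty(rows), uncertainty(rows) <= gamma)  # 2 True
    ```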

  9. Design optimization for cost and quality: The robust design approach

    Science.gov (United States)

    Unal, Resit

    1990-01-01

    Designing reliable, low-cost, and operable space systems has become the key to future space operations. Designing high-quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach to design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The Robust Design methodology uses a mathematical tool called an orthogonal array, from design-of-experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust Design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of the space system design process.
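
    The abstract does not spell out the signal-to-noise ratios; for orientation, the forms commonly used in Taguchi-style robust design (for n observations y_1, ..., y_n with mean \(\bar{y}\) and variance \(s^2\)) are:

    \[
    \mathrm{S/N}_{\text{larger}} = -10\log_{10}\!\Big(\frac{1}{n}\sum_{i=1}^{n} y_i^{-2}\Big),\quad
    \mathrm{S/N}_{\text{smaller}} = -10\log_{10}\!\Big(\frac{1}{n}\sum_{i=1}^{n} y_i^{2}\Big),\quad
    \mathrm{S/N}_{\text{nominal}} = 10\log_{10}\frac{\bar{y}^{2}}{s^{2}}.
    \]

    The parameter combination maximizing the relevant S/N ratio over the orthogonal-array experiments is the robust choice.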

  10. Dynamic programming approach for partial decision rule optimization

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2012-01-01

    This paper is devoted to the study of an extension of the dynamic programming approach which allows optimization of partial decision rules relative to length or coverage. We introduce an uncertainty measure J(T), which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules (partial decision rules) that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The graph Δγ(T) allows us to describe the whole set of so-called irredundant γ-decision rules. We can optimize such a set of rules according to length or coverage. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.

  11. Multipurpose Water Reservoir Management: An Evolutionary Multiobjective Optimization Approach

    Directory of Open Access Journals (Sweden)

    Luís A. Scola

    2014-01-01

    Full Text Available The reservoirs that feed large hydropower plants should be managed so as to provide other uses for the water resources. Those uses include, for instance, flood control and avoidance, irrigation, and navigability in the rivers, among others. This work presents an evolutionary multiobjective optimization approach for the study of multiple water usages in multiple interlinked reservoirs, including both power generation objectives and objectives not related to energy generation. The classical evolutionary algorithm NSGA-II is employed as the basic multiobjective optimization machinery, modified in order to cope with specific problem features. The case studies, which include the analysis of a problem involving a navigability objective on the river, are tailored to illustrate the usefulness of the data generated by the proposed methodology for decision-making on the problem of operation planning of multiple reservoirs with multiple usages. It is shown that it is even possible to use the generated data to determine the cost of any new usage of the water, in terms of the opportunity cost measured on the revenues related to electric energy sales.

  12. A convex optimization approach for solving large scale linear systems

    Directory of Open Access Journals (Sweden)

    Debora Cores

    2017-01-01

    Full Text Available The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function to solve large-scale linear systems of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations, regardless of the characteristics of the coefficient matrix. For finding the global minimizers of this new convex function, any low-cost iterative optimization technique could be applied. In particular, we propose to use the low-cost, globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach to solving consistent square and rectangular linear systems, as well as linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.
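
    The paper's specific non-quadratic convex function is not reproduced in the abstract; the sketch below only illustrates the spectral (Barzilai-Borwein) gradient mechanics on the familiar least-squares objective f(x) = 0.5||Ax - b||², with the projection step onto constraints omitted.

    ```python
    # Spectral gradient iteration with a Barzilai-Borwein step length,
    # shown on a generic least-squares objective (unconstrained).
    import numpy as np

    def spectral_gradient(A, b, x0, iters=200, tol=1e-8):
        x = x0.copy()
        g = A.T @ (A @ x - b)                  # gradient of f
        alpha = 1.0 / max(np.linalg.norm(g), 1.0)   # conservative first step
        for _ in range(iters):
            if np.linalg.norm(g) < tol:
                break
            x_new = x - alpha * g
            g_new = A.T @ (A @ x_new - b)
            s, y = x_new - x, g_new - g
            alpha = (s @ s) / max(s @ y, 1e-12)     # BB spectral step
            x, g = x_new, g_new
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    b = rng.standard_normal(50)
    x = spectral_gradient(A, b, np.zeros(20))
    print(np.linalg.norm(A.T @ (A @ x - b)))   # near zero at a minimizer
    ```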

  13. Optimization of decision rules based on dynamic programming approach

    KAUST Repository

    Zielosko, Beata

    2014-01-14

    This chapter is devoted to the study of an extension of the dynamic programming approach which allows optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure that is the difference between the number of rows in a given decision table and the number of rows labeled with the most common decision for this table, divided by the number of rows in the decision table. We fix a threshold γ, such that 0 ≤ γ < 1, and study so-called γ-decision rules (approximate decision rules) that localize rows in subtables whose uncertainty is at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by pairs "attribute = value". The algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The chapter also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2014 Springer International Publishing Switzerland.

  14. An analytic approach to optimize tidal turbine fields

    Science.gov (United States)

    Pelz, P.; Metzler, M.

    2013-12-01

    Motivated by global warming due to CO2 emissions, various technologies for harvesting energy from renewable sources are being developed. Hydrokinetic turbines are applied to surface watercourses or tidal flows to generate electrical energy. Since the available power for hydrokinetic turbines is proportional to the projected cross-section area, fields of turbines are installed to scale shaft power. Each hydrokinetic turbine of a field can be considered as a disk actuator. In [1], the first author derives the optimal operating point for hydropower in an open channel. The present paper concerns a 0-dimensional model of a disk actuator in an open-channel flow with bypass, as a special case of [1]. Based on the energy equation, the continuity equation, and the momentum balance, an analytical approach is taken to calculate the coefficient of performance for hydrokinetic turbines with bypass flow as a function of the turbine head and the ratio of turbine width to channel width.
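
    The paper's 0-dimensional bypass model is not reproduced in the abstract; for orientation, the classical unbounded actuator-disk (Betz) analysis, which such channel models generalize, gives for an axial induction factor a:

    \[
    C_P \;=\; \frac{P}{\tfrac{1}{2}\rho A u_\infty^{3}} \;=\; 4a(1-a)^{2},
    \qquad \max_{a} C_P = \frac{16}{27} \ \text{at } a = \tfrac{1}{3}.
    \]

    In a confined channel with bypass flow, blockage modifies this limit, which is what the dependence on turbine head and width ratio in the paper captures.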

  15. Approaches of Russian oil companies to optimal capital structure

    Science.gov (United States)

    Ishuk, T.; Ulyanova, O.; Savchitz, V.

    2015-11-01

    Oil companies play a vital role in the Russian economy. Demand for hydrocarbon products will increase over the coming decades along with population growth and social needs. A shift away from the raw-material orientation of the Russian economy and a transition to an innovation-driven development path do not exclude the development of the oil industry in the future. Moreover, society believes that this sector must bring the Russian economy onto the road of innovative development through neo-industrialization. To achieve this, government power as well as capital management by the companies are required. To achieve an optimal capital structure, it is necessary to minimize the cost of capital, reduce specific risks within existing limits, and maximize profitability. The capital structure analysis of Russian and foreign oil companies shows different approaches, reasons, and conditions and, consequently, different relationships between equity capital and debt capital and their costs, which demands an effective capital management strategy.

  16. Optimal extraction of petroleum resources: an empirical approach

    International Nuclear Information System (INIS)

    Helmi-Oskoui, B.; Narayanan, R.; Glover, T.; Lyon, K.S.; Sinha, M.

    1992-01-01

    Petroleum reservoir behaviour at different levels of reservoir pressure is estimated with actual well data and reservoir characteristics. Using the pressure at the bottom of producing wells as the control variable, the time paths of profit-maximizing joint production of oil and natural gas under various tax policies are obtained using a dynamic optimization approach. The results emerge from the numerical solution of the maximization of estimated future expected revenues net of variable costs in the presence of taxation. A higher discount rate shifts production forward in time and prolongs the production plan. The analysis of state and corporate income taxes and the depletion allowance reveals the changes in the revenues to the firm, the state, and the federal government. 18 refs., 3 figs., 4 tabs

  17. Architectural optimizations for low-power K-best MIMO decoders

    KAUST Repository

    Mondal, Sudip

    2009-09-01

    Maximum-likelihood (ML) detection for higher order multiple-input-multiple-output (MIMO) systems faces a major challenge in computational complexity. This limits the practicality of these systems from an implementation point of view, particularly for mobile battery-operated devices. In this paper, we propose a modified approach for MIMO detection, which takes advantage of the quadratic-amplitude modulation (QAM) constellation structure to accelerate the detection procedure. This approach achieves low-power operation by extending the minimum number of paths and reducing the number of required computations for each path extension, which results in an order-of-magnitude reduction in computations in comparison with existing algorithms. This paper also describes the very-large-scale integration (VLSI) design of the low-power path metric computation unit. The approach is applied to a 4 × 4, 64-QAM MIMO detector system. Results show negligible performance degradation compared with conventional algorithms while reducing the complexity by more than 50%. © 2009 IEEE.
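
    As a sketch of the underlying K-best breadth-first tree search (not the paper's QAM-specific path-extension scheme; simplified here to a real-valued model with hypothetical 4-PAM symbols):

    ```python
    # Toy K-best detector for y = Hx + n: QR-decompose H, then search the
    # symbol tree level by level, keeping only the K lowest-metric paths.
    import numpy as np

    def k_best_detect(H, y, alphabet, K=4):
        n = H.shape[1]
        Q, R = np.linalg.qr(H)
        z = Q.T @ y
        candidates = [(0.0, [])]            # (partial metric, symbols so far)
        for level in range(n - 1, -1, -1):
            expanded = []
            for metric, syms in candidates:
                for s in alphabet:
                    full = [s] + syms       # symbols for indices level..n-1
                    resid = z[level] - R[level, level:] @ np.array(full)
                    expanded.append((metric + resid**2, full))
            expanded.sort(key=lambda c: c[0])
            candidates = expanded[:K]       # keep the K best paths
        return np.array(candidates[0][1])

    H = np.random.default_rng(1).standard_normal((4, 4))
    x = np.array([1.0, -1.0, 3.0, -3.0])    # 4-PAM symbol vector
    y = H @ x + 0.01 * np.random.default_rng(2).standard_normal(4)
    print(k_best_detect(H, y, alphabet=(-3.0, -1.0, 1.0, 3.0)))
    ```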

  18. Architectural optimizations for low-power K-best MIMO decoders

    KAUST Repository

    Mondal, Sudip; Eltawil, Ahmed M.; Salama, Khaled N.

    2009-01-01

    Maximum-likelihood (ML) detection for higher order multiple-input-multiple-output (MIMO) systems faces a major challenge in computational complexity. This limits the practicality of these systems from an implementation point of view, particularly for mobile battery-operated devices. In this paper, we propose a modified approach for MIMO detection, which takes advantage of the quadratic-amplitude modulation (QAM) constellation structure to accelerate the detection procedure. This approach achieves low-power operation by extending the minimum number of paths and reducing the number of required computations for each path extension, which results in an order-of-magnitude reduction in computations in comparison with existing algorithms. This paper also describes the very-large-scale integration (VLSI) design of the low-power path metric computation unit. The approach is applied to a 4 × 4, 64-QAM MIMO detector system. Results show negligible performance degradation compared with conventional algorithms while reducing the complexity by more than 50%. © 2009 IEEE.

  19. Biomimicry as an approach for sustainable architecture case of arid regions with hot and dry climate

    Science.gov (United States)

    Bouabdallah, Nabila; M'sellem, Houda; Alkama, Djamel

    2016-07-01

    This paper studies the problem of thermal comfort inside buildings located in hot, arid climates. The principal idea behind this research is to use concepts based on the potential of nature as an instrument for creating building facades ('building skins') appropriate to the environment. Biomimetic architecture imitates nature through the study of the form, function, behaviour, and ecosystems of biological organisms. This research aims to clarify the possibilities biomimicry offers for developing bio-inspired building designs that enhance the indoor thermal environment of buildings located in hot, dry climates, thereby helping to achieve thermal comfort for users.

  20. DOE's Institute for Advanced Architecture and Algorithms: An application-driven approach

    International Nuclear Information System (INIS)

    Murphy, Richard C

    2009-01-01

    This paper describes an application-driven methodology for understanding the impact of future architecture decisions at the end of the MPP era. Fundamental transistor device limitations combined with application performance characteristics have driven the switch to multicore/multithreaded architectures. Designing large-scale supercomputers to match application demands is particularly challenging, since performance characteristics are highly counter-intuitive: in fact, data movement, more than FLOPS, dominates. This work discusses basic performance analysis for a set of DOE applications, the limits of CMOS technology, and the impact of both on future architectures.

  1. Experiencing a Problem-Based Learning Approach for Teaching Reconfigurable Architecture Design

    Directory of Open Access Journals (Sweden)

    Erwan Fabiani

    2009-01-01

    Full Text Available This paper presents the "reconfigurable computing" teaching part of a first-year computer science master course on parallel architectures. The practical work sessions of this course rely on active pedagogy using problem-based learning, focused on designing a reconfigurable architecture for the implementation of a class of image processing algorithms. We show how the successive steps of this project allow the student to experiment with several fundamental concepts of reconfigurable computing at different levels. Specific experiments include exploitation of architectural parallelism, dataflow and communicating component-based design, and configurability-specificity tradeoffs.

  2. FDTD-based optical simulations methodology for CMOS image sensors pixels architecture and process optimization

    Science.gov (United States)

    Hirigoyen, Flavien; Crocherie, Axel; Vaillant, Jérôme M.; Cazaux, Yvon

    2008-02-01

    This paper presents a new FDTD-based optical simulation model dedicated to describing the optical performance of CMOS image sensors, taking diffraction effects into account. Following market trends and industrialization constraints, CMOS image sensors must be easily embedded into ever smaller packages, which are now equipped with auto-focus and, soon, zoom systems. Due to miniaturization, the ray-tracing models used to evaluate pixel optical performance are no longer accurate enough to describe light propagation inside the sensor, because of diffraction effects. We therefore adopt a more fundamental description to take these diffraction effects into account: we compute the propagation of light from Maxwell's equations and use software with an FDTD-based (Finite-Difference Time-Domain) engine to solve this propagation. We present in this article the complete methodology of this modeling: on one hand, incoherent plane waves are propagated to approximate a product-use diffuse-like source; on the other hand, we use periodic conditions to limit the size of the simulated model and thus both memory and computation time. After presenting the correlation of the model with measurements, we illustrate its use in the optimization of a 1.75 μm pixel.
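
    For readers unfamiliar with the method, a minimal one-dimensional FDTD (Yee-style) update loop in vacuum and normalized units is sketched below; the paper's full 3D pixel model is of course far richer.

    ```python
    # 1D FDTD leapfrog update in vacuum, normalized units (dz = 1, dt = dz/c).
    import numpy as np

    nz, steps = 400, 800
    Ex = np.zeros(nz)          # E field on grid nodes
    Hy = np.zeros(nz - 1)      # H field on half-step nodes between them

    for t in range(steps):
        # H update from the spatial difference of E (half time step)
        Hy += Ex[1:] - Ex[:-1]
        # E update from the spatial difference of H on interior nodes
        Ex[1:-1] += Hy[1:] - Hy[:-1]
        # Soft Gaussian source injected near the left side of the grid
        Ex[20] += np.exp(-((t - 60) / 20.0) ** 2)

    print(float(Ex.max()))     # pulse propagating across the grid
    ```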

  3. Synthesis of conjugated polymers with complex architecture for photovoltaic applications

    DEFF Research Database (Denmark)

    Kiriy, Anton; Krebs, Frederik C

    2017-01-01

    A common approach to bulk heterojunction solar cells involves a "trial-and-error" search for optimal, kinetically unstable morphologies. An alternative approach assumes the utilization of complex polymer architectures, such as donor-acceptor block copolymers. Because of a covalent preorganization of the donor and acceptor components, these materials may form desirable morphologies at thermodynamic equilibrium. This chapter reviews synthetic approaches to such architectures and shows the first photovoltaic results.

  4. Securing cloud services a pragmatic approach to security architecture in the cloud

    CERN Document Server

    Newcombe, Lee

    2012-01-01

    This book provides an overview of security architecture processes and explains how they may be used to derive an appropriate set of security controls to manage the risks associated with working in the Cloud.

  5. The semiotics of landscape design communication: towards a critical visual research approach in landscape architecture.

    NARCIS (Netherlands)

    Raaphorst, K.M.C.; Duchhart, I.; Knaap, van der W.G.M.; Roeleveld, Gerda; Brink, van den A.

    2017-01-01

    In landscape architecture, visual representations are the primary means of communication between stakeholders in design processes. Despite the reliance on visual representations, little critical research has been undertaken by landscape architects on how visual communication forms work or their…

  6. Evolution of the Milieu Approach for Software Development for the Polymorphous Computing Architecture Program

    National Research Council Canada - National Science Library

    Dandass, Yoginder

    2004-01-01

    A key goal of the DARPA Polymorphous Computing Architectures (PCA) program is to develop reactive closed-loop systems that are capable of being dynamically reconfigured in order to respond to changing mission scenarios...

  7. Workforce Optimization for Bank Operation Centers: A Machine Learning Approach

    Directory of Open Access Journals (Sweden)

    Sefik Ilkin Serengil

    2017-12-01

    Full Text Available Online banking systems have evolved and improved in recent years with the use of mobile and online technologies; money transfer transactions on these channels can be performed without delay or human interaction. However, commercial customers still tend to transfer money at bank branches due to several concerns. Bank operation centers serve to reduce the operational workload of branches. Centralized management also offers personalized service by appointed expert employees in these centers. Inherently, the workload volume of money transfer transactions changes dramatically from hour to hour. Therefore, the workforce should be planned instantly or in advance to save labor and increase operational efficiency. This paper introduces a hybrid multi-stage approach for workforce planning in bank operation centers by the application of supervised and unsupervised learning algorithms. The expected workload is predicted with supervised learning, whereas employees are clustered into different skill groups with unsupervised learning, in order to match transactions with the proper employees. Finally, workforce optimization is analyzed for the proposed approach on production data.
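
    A minimal sketch of the two-stage idea follows; the features, data, and model choices (linear regression for the workload forecast, k-means for skill grouping) are illustrative assumptions rather than the paper's exact pipeline.

    ```python
    # Stage 1 (supervised): forecast transaction volume from time features.
    # Stage 2 (unsupervised): cluster employees into skill groups.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Synthetic training data: (hour-of-day, day-of-week) -> workload.
    X = np.column_stack([rng.integers(0, 24, 500), rng.integers(0, 7, 500)])
    workload = 100 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 20, 500)
    forecaster = LinearRegression().fit(X, workload)
    print("expected volume at 10:00 Monday:", forecaster.predict([[10, 0]])[0])

    # Synthetic skill scores: 60 employees across 5 transaction types.
    skills = rng.random((60, 5))
    groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(skills)
    print("employees per skill group:", np.bincount(groups))
    ```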

  8. Optimal Control Approaches to the Aggregate Production Planning Problem

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2015-12-01

    Full Text Available In the area of production planning and control, the aggregate production planning (APP) problem represents a great challenge for decision makers in production-inventory systems. The tradeoff between inventory and capacity is known as the APP problem. To address it, static and dynamic models have been proposed, which in general have several shortcomings. It is the premise of this paper that the main drawback of these proposals is that they do not take into account the dynamic nature of the APP. For this reason, we propose the use of an Optimal Control (OC) formulation via the energy-based and Hamiltonian-present-value approaches. The main contribution of this paper is a mathematical model which integrates a second-order dynamical system coupled with a first-order system, incorporating production rate, inventory level, and capacity, as well as the associated workforce cost, in the same formulation. A further novel result is that the Hamiltonian-present-value OC formulation reduces the inventory level compared with the pure energy-based approach to APP. A set of simulations is provided which verifies the theoretical contribution of this work.

  9. Optimizing Concurrent M3-Transactions: A Fuzzy Constraint Satisfaction Approach

    Directory of Open Access Journals (Sweden)

    Peng LI

    2004-10-01

    Full Text Available Due to their high connectivity and great convenience, many e-commerce application systems have a high transaction volume. Consequently, the system state changes rapidly, and customers are likely to issue transactions based on out-of-date state information. Thus, the potential for transaction abortion increases greatly. To address this problem, we proposed an M3-transaction model. An M3-transaction is a generalized transaction in which users can express their preferences in a request by specifying multiple criteria and optional data resources simultaneously within one transaction. In this paper, we introduce transaction grouping and group evaluation techniques. We consider evaluating together a group of M3-transactions that arrive at the system within a short duration. The system makes optimal decisions in allocating data to transactions to achieve better customer satisfaction and a lower transaction failure rate. We apply a fuzzy constraint satisfaction approach for decision-making. We also conduct experimental studies to evaluate the performance of our approach. The results show that the M3-transaction with group evaluation is more resilient to failure and yields much better performance than the traditional transaction model.

  10. Building Quality into Learning Management Systems – An Architecture-Centric Approach

    OpenAIRE

    Avgeriou, P.; Retalis, Simos; Skordalakis, Manolis

    2003-01-01

    The design and development of contemporary Learning Management Systems (LMS) is largely focused on satisfying functional requirements rather than quality requirements, thus resulting in inefficient systems of poor software and business quality. In order to remedy this problem there is a research trend into specifying and evaluating software architectures for LMS, since quality attributes in a system depend profoundly on its architecture. This paper presents a case study of appraising the s...

  11. Information Integration Architecture Development

    OpenAIRE

    Faulkner, Stéphane; Kolp, Manuel; Nguyen, Duy Thai; Coyette, Adrien; Do, Thanh Tung; 16th International Conference on Software Engineering and Knowledge Engineering

    2004-01-01

    Multi-Agent Systems (MAS) architectures are gaining popularity for building open, distributed, and evolving software required by systems such as information integration applications. Unfortunately, despite considerable work in software architecture during the last decade, few research efforts have aimed at truly defining patterns and languages for designing such multi-agent architectures. We propose a modern approach based on organizational structures and architectural description languages.

  12. Convergent functional architecture of the superior parietal lobule unraveled with multimodal neuroimaging approaches.

    Science.gov (United States)

    Wang, Jiaojian; Yang, Yong; Fan, Lingzhong; Xu, Jinping; Li, Changhai; Liu, Yong; Fox, Peter T; Eickhoff, Simon B; Yu, Chunshui; Jiang, Tianzi

    2015-01-01

    The superior parietal lobule (SPL) plays a pivotal role in many cognitive, perceptive, and motor-related processes. This implies that a mosaic of distinct functional and structural subregions may exist in this area. Recent studies have demonstrated that the ongoing spontaneous fluctuations in the brain at rest are highly structured and, like coactivation patterns, reflect the integration of cortical locations into long-distance networks. This suggests that the internal differentiation of a complex brain region may be revealed by interaction patterns that are reflected in different neuroimaging modalities. On the basis of this perspective, we aimed to identify a convergent functional organization of the SPL using multimodal neuroimaging approaches. The SPL was first parcellated based on its structural connections as well as on its resting-state connectivity and coactivation patterns. Then, post hoc functional characterizations and connectivity analyses were performed for each subregion. The three types of connectivity-based parcellations consistently identified five subregions in the SPL of each hemisphere. The two anterior subregions were found to be primarily involved in action processes and in visually guided visuomotor functions, whereas the three posterior subregions were primarily associated with visual perception, spatial cognition, reasoning, working memory, and attention. This parcellation scheme for the SPL was further supported by revealing distinct connectivity patterns for each subregion in all the used modalities. These results thus indicate a convergent functional architecture of the SPL that can be revealed based on different types of connectivity and is reflected by different functions and interactions. © 2014 Wiley Periodicals, Inc.

  13. Optimization-Based Approaches to Control of Probabilistic Boolean Networks

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2017-02-01

    Full Text Available Control of gene regulatory networks is one of the fundamental topics in systems biology. In the last decade, control theory for Boolean networks (BNs), which are well known as models of gene regulatory networks, has been widely studied. In this review paper, our previously proposed methods for optimal control of probabilistic Boolean networks (PBNs) are introduced. First, the outline of PBNs is explained. Next, an optimal control method using polynomial optimization is explained, in which the finite-time optimal control problem is reduced to a polynomial optimization problem. Furthermore, another finite-time optimal control problem, which can be reduced to an integer programming problem, is also explained.

  14. A combined stochastic programming and optimal control approach to personal finance and pensions

    DEFF Research Database (Denmark)

    Konicz, Agnieszka Karolina; Pisinger, David; Rasmussen, Kourosh Marjani

    2015-01-01

    The paper presents a model that combines a dynamic programming (stochastic optimal control) approach and a multi-stage stochastic linear programming approach (SLP), integrated into one SLP formulation. Stochastic optimal control produces an optimal policy that is easy to understand and implement....

  15. An optimal adder-based hardware architecture for the DCT/SA-DCT

    Science.gov (United States)

    Kinane, Andrew; Muresan, Valentin; O'Connor, Noel

    2005-07-01

    The explosive growth of the mobile multimedia industry has accentuated the need for efficient VLSI implementations of the associated computationally demanding signal processing algorithms. This need grows as end-users demand increasingly enhanced features and more advanced underpinning video analysis. One such feature is object-based video processing, as supported by the MPEG-4 core profile, which allows content-based interactivity. MPEG-4 has many computationally demanding underlying algorithms, an example of which is the Shape-Adaptive Discrete Cosine Transform (SA-DCT). The dynamic nature of the SA-DCT processing steps poses significant VLSI implementation challenges, and many of the previously proposed approaches use area- and power-consumptive multipliers. Most also ignore the subtleties of the packing steps and the manipulation of the shape information. We propose a new multiplier-less serial datapath based solely on adders and multiplexers to improve area and power. The adder cost is minimised by employing resource re-use methods. The number of (physical) adders used has been derived using a common sub-expression elimination algorithm. Additional energy efficiency is factored into the design by employing guarded evaluation and local clock gating. Our design implements the SA-DCT packing with minimal switching, using efficient addressing logic with a transpose memory RAM. The entire design has been synthesized using TSMC 0.09 µm TCBN90LP technology, yielding a gate count of 12028 for the datapath and its control logic.

  16. Defending against the Advanced Persistent Threat: An Optimal Control Approach

    Directory of Open Access Journals (Sweden)

    Pengdeng Li

    2018-01-01

    Full Text Available The new cyberattack pattern of advanced persistent threat (APT has posed a serious threat to modern society. This paper addresses the APT defense problem, that is, the problem of how to effectively defend against an APT campaign. Based on a novel APT attack-defense model, the effectiveness of an APT defense strategy is quantified. Thereby, the APT defense problem is modeled as an optimal control problem, in which an optimal control stands for a most effective APT defense strategy. The existence of an optimal control is proved, and an optimality system is derived. Consequently, an optimal control can be figured out by solving the optimality system. Some examples of the optimal control are given. Finally, the influence of some factors on the effectiveness of an optimal control is examined through computer experiments. These findings help organizations to work out policies of defending against APTs.

  17. MVMO-based approach for optimal placement and tuning of ...

    African Journals Online (AJOL)

    bus (New England) test system. Numerical results include performance comparisons with other metaheuristic optimization techniques, namely, comprehensive learning particle swarm optimization (CLPSO), genetic algorithm with multi-parent ...

  18. Architectural geometry

    KAUST Repository

    Pottmann, Helmut; Eigensatz, Michael; Vaxman, Amir; Wallner, Johannes

    2014-01-01

    Around 2005 it became apparent in the geometry processing community that freeform architecture contains many problems of a geometric nature to be solved, and many opportunities for optimization which however require geometric understanding. This area of research, which has been called architectural geometry, meanwhile contains a great wealth of individual contributions which are relevant in various fields. For mathematicians, the relation to discrete differential geometry is significant, in particular the integrable system viewpoint. Besides, new application contexts have become available for quite some old-established concepts. Regarding graphics and geometry processing, architectural geometry yields interesting new questions but also new objects, e.g. replacing meshes by other combinatorial arrangements. Numerical optimization plays a major role but in itself would be powerless without geometric understanding. Summing up, architectural geometry has become a rewarding field of study. We here survey the main directions which have been pursued, we show real projects where geometric considerations have played a role, and we outline open problems which we think are significant for the future development of both theory and practice of architectural geometry.

  19. Architectural geometry

    KAUST Repository

    Pottmann, Helmut

    2014-11-26

    Around 2005 it became apparent in the geometry processing community that freeform architecture contains many problems of a geometric nature to be solved, and many opportunities for optimization which however require geometric understanding. This area of research, which has been called architectural geometry, meanwhile contains a great wealth of individual contributions which are relevant in various fields. For mathematicians, the relation to discrete differential geometry is significant, in particular the integrable system viewpoint. Besides, new application contexts have become available for quite some old-established concepts. Regarding graphics and geometry processing, architectural geometry yields interesting new questions but also new objects, e.g. replacing meshes by other combinatorial arrangements. Numerical optimization plays a major role but in itself would be powerless without geometric understanding. Summing up, architectural geometry has become a rewarding field of study. We here survey the main directions which have been pursued, we show real projects where geometric considerations have played a role, and we outline open problems which we think are significant for the future development of both theory and practice of architectural geometry.

  20. Optimizing Thermal-Elastic Properties of C/C–SiC Composites Using a Hybrid Approach and PSO Algorithm

    Science.gov (United States)

    Xu, Yingjie; Gao, Tian

    2016-01-01

    Carbon fiber-reinforced multi-layered pyrocarbon–silicon carbide matrix (C/C–SiC) composites are widely used in aerospace structures. The complicated spatial architecture and material heterogeneity of C/C–SiC composites make tailoring their properties a challenge. Thus, discovering the intrinsic relations between the properties and the microstructures, and subsequently optimizing the microstructures to obtain composites with the best performance, becomes the key for practical applications. The objective of this work is to optimize the thermal-elastic properties of unidirectional C/C–SiC composites by controlling the multi-layered matrix thicknesses. A hybrid approach based on micromechanical modeling and a back-propagation (BP) neural network is proposed to predict the thermal-elastic properties of the composites. A particle swarm optimization (PSO) algorithm is then interfaced with this hybrid model to achieve the optimal design, minimizing the coefficient of thermal expansion (CTE) of the composites under an elastic-modulus constraint. Numerical examples demonstrate the effectiveness of the proposed hybrid model and optimization method. PMID:28773343
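
    A minimal PSO sketch is given below. The objective is a simple analytic stand-in for the paper's micromechanics/neural-network CTE surrogate, and the bounds, swarm size, and coefficients are illustrative assumptions.

    ```python
    # Minimal particle swarm optimization over bounded design variables.
    import numpy as np

    def surrogate_cte(x):                 # hypothetical stand-in objective
        return np.sum((x - 0.3) ** 2, axis=-1)

    rng = np.random.default_rng(0)
    n_particles, dim, iters = 30, 4, 100
    lo, hi = 0.0, 1.0                     # bounds on layer-thickness variables
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), surrogate_cte(pos)
    gbest = pbest[np.argmin(pbest_val)]

    w, c1, c2 = 0.7, 1.5, 1.5             # inertia and acceleration weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = surrogate_cte(pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)]

    print(gbest)                          # converges toward [0.3, 0.3, 0.3, 0.3]
    ```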

  1. Optimal unit sizing for small-scale integrated energy systems using multi-objective interval optimization and evidential reasoning approach

    International Nuclear Information System (INIS)

    Wei, F.; Wu, Q.H.; Jing, Z.X.; Chen, J.J.; Zhou, X.X.

    2016-01-01

    This paper proposes a comprehensive framework including a multi-objective interval optimization model and an evidential reasoning (ER) approach to solve the unit sizing problem of small-scale integrated energy systems with uncertain wind and solar energies integrated. In the multi-objective interval optimization model, interval variables are introduced to tackle the uncertainties of the optimization problem. Aiming at simultaneously considering the cost and risk of a business investment, the average and deviation of the life cycle cost (LCC) of the integrated energy system are formulated. In order to solve the problem, a novel multi-objective optimization algorithm, MGSOACC (multi-objective group search optimizer with adaptive covariance matrix and chaotic search), is developed, employing an adaptive covariance matrix to make the search strategy adaptive and applying chaotic search to maintain the diversity of the group. Furthermore, the ER approach is applied to deal with the multiple interests of an investor at the business decision-making stage and to determine the final unit sizing solution from the Pareto-optimal solutions. This paper reports on the simulation results obtained using a small-scale direct district heating system (DH) and a small-scale district heating and cooling system (DHC) optimized by the proposed framework. The results demonstrate the superiority of the multi-objective interval optimization model and ER approach in tackling the unit sizing problem of integrated energy systems considering the integration of uncertain wind and solar energies. - Highlights: • Cost and risk of investment in small-scale integrated energy systems are considered. • A multi-objective interval optimization model is presented. • A novel multi-objective optimization algorithm (MGSOACC) is proposed. • The evidential reasoning (ER) approach is used to obtain the final optimal solution. • The MGSOACC and ER can tackle the unit sizing problem efficiently.

  2. A multiscale optimization approach to detect exudates in the macula.

    Science.gov (United States)

    Agurto, Carla; Murray, Victor; Yu, Honggang; Wigdahl, Jeffrey; Pattichis, Marios; Nemeth, Sheila; Barriga, E Simon; Soliz, Peter

    2014-07-01

    Pathologies that occur on or near the fovea, such as clinically significant macular edema (CSME), represent high risk for vision loss. The presence of exudates, lipid residues of serous leakage from damaged capillaries, has been associated with CSME, in particular if they are located one optic disc-diameter away from the fovea. In this paper, we present an automatic system to detect exudates in the macula. Our approach uses optimal thresholding of instantaneous amplitude (IA) components that are extracted from multiple frequency scales to generate candidate exudate regions. For each candidate region, we extract color, shape, and texture features that are used for classification. Classification is performed using partial least squares (PLS). We tested the performance of the system on two different databases of 652 and 400 images. The system achieved an area under the receiver operator characteristic curve (AUC) of 0.96 for the combination of both databases and an AUC of 0.97 for each of them when they were evaluated independently.

  3. A nonlinear optimal control approach for chaotic finance dynamics

    Science.gov (United States)

    Rigatos, G.; Siano, P.; Loia, V.; Tommasetti, A.; Troisi, O.

    2017-11-01

    A new nonlinear optimal control approach is proposed for stabilization of the dynamics of a chaotic finance model. The dynamic model of the financial system, which expresses the interaction between the interest rate, the investment demand, the price exponent, and the profit margin, undergoes approximate linearization around local operating points. These local equilibria are defined at each iteration of the control algorithm and consist of the present value of the system's state vector and the last value of the control input vector exerted on it. The approximate linearization makes use of Taylor series expansion and the computation of the associated Jacobian matrices. The truncation of higher-order terms in the Taylor series expansion is considered a modelling error that is compensated by the robustness of the control loop. As the control algorithm runs, the temporary equilibrium is shifted towards the reference trajectory and finally converges to it. The control method needs to compute an H-infinity feedback control law at each iteration and requires the repetitive solution of an algebraic Riccati equation. Through Lyapunov stability analysis it is shown that an H-infinity tracking performance criterion holds for the control loop. This implies elevated robustness against model approximations and external perturbations. Moreover, under moderate conditions the global asymptotic stability of the control loop is proven.
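
    The per-iteration pattern (linearize, then solve a Riccati equation for a feedback gain) can be sketched as below. For brevity the sketch uses a standard LQR Riccati equation via SciPy rather than the paper's H-infinity formulation, and the model Jacobians are hypothetical.

    ```python
    # One iteration: Jacobians (A, B) at the operating point, then a
    # continuous algebraic Riccati equation for a state-feedback gain.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    def jacobians(x):
        """Hypothetical Jacobians of a small nonlinear finance-like model."""
        A = np.array([[x[1], x[0], -1.0],
                      [-1.0, -0.1, 0.0],
                      [-1.0, 0.0, -0.5]])
        B = np.array([[1.0], [0.0], [0.0]])
        return A, B

    x = np.array([0.5, 0.2, 0.1])           # current operating point
    A, B = jacobians(x)
    Q, R = np.eye(3), np.eye(1)             # state and input weights
    P = solve_continuous_are(A, B, Q, R)    # Riccati solution here
    K = np.linalg.inv(R) @ B.T @ P          # gain for u = -K (x - x_ref)
    print(K)
    ```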

  4. Soft computing approach for reliability optimization: State-of-the-art survey

    International Nuclear Information System (INIS)

    Gen, Mitsuo; Yun, Young Su

    2006-01-01

    In the broadest sense, reliability is a measure of the performance of systems. As systems have grown more complex, the consequences of their unreliable behavior have become severe in terms of cost, effort, lives, etc., and interest in assessing system reliability and the need for improving the reliability of products and systems have become very important. Most solution methods for reliability optimization assume that systems have redundant components in series and/or parallel configurations and that alternative designs are available. Reliability optimization problems concentrate on the optimal allocation of redundant components and the optimal selection of alternative designs to meet system requirements. In the past two decades, numerous reliability optimization techniques have been proposed. Generally, these techniques can be classified as linear programming, dynamic programming, integer programming, geometric programming, heuristic methods, the Lagrange multiplier method, and so on. A Genetic Algorithm (GA), as a soft computing approach, is a powerful tool for solving various reliability optimization problems. In this paper, we briefly survey GA-based approaches to various reliability optimization problems, such as reliability optimization of redundant systems, reliability optimization with alternative designs, reliability optimization with time-dependent reliability, reliability optimization with interval coefficients, bicriteria reliability optimization, and reliability optimization with fuzzy goals. We also introduce hybrid approaches that combine GAs with fuzzy logic, neural networks, and other conventional search techniques. Finally, we present experiments with examples of various reliability optimization problems using the hybrid GA approach.
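
    To ground the redundancy-allocation formulation mentioned above, here is a tiny GA sketch for a series system of parallel subsystems, where subsystem i with n_i parallel components of reliability r_i contributes 1 - (1 - r_i)^{n_i} to the system reliability; all component data and GA settings are illustrative.

    ```python
    # Tiny GA for redundancy allocation: maximize series-system reliability
    # prod_i (1 - (1 - r_i)**n_i) subject to a component-cost budget.
    import numpy as np

    r = np.array([0.80, 0.85, 0.90])      # component reliabilities
    c = np.array([2.0, 3.0, 1.5])         # component costs
    budget, n_max = 20.0, 5

    rng = np.random.default_rng(0)

    def fitness(pop):
        rel = np.prod(1.0 - (1.0 - r) ** pop, axis=1)
        cost = pop @ c
        return np.where(cost <= budget, rel, 0.0)   # simple death penalty

    pop = rng.integers(1, n_max + 1, (40, 3))
    for _ in range(100):
        fit = fitness(pop)
        # Tournament selection
        i, j = rng.integers(0, len(pop), (2, len(pop)))
        parents = np.where((fit[i] >= fit[j])[:, None], pop[i], pop[j])
        # Uniform crossover against a shifted copy, then integer mutation
        mask = rng.random(parents.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        mutate = rng.random(children.shape) < 0.1
        children = np.clip(children + mutate * rng.integers(-1, 2, children.shape),
                           1, n_max)
        pop = children

    best = pop[np.argmax(fitness(pop))]
    print(best, fitness(pop).max())
    ```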

  5. Architectural Theatricality

    DEFF Research Database (Denmark)

    Tvedebrink, Tenna Doktor Olsen

    environments, and a knowledge gap therefore exists in present hospital designs. Consequently, the purpose of this thesis has been to investigate whether any research-based knowledge exists supporting the hypothesis that the interior architectural qualities of eating environments influence patient food intake, health…… and well-being, as well as to outline a set of basic design principles 'predicting' the future interior architectural qualities of patient eating environments. Methodologically, the thesis is based on an explorative study employing an abductive approach and a hermeneutic-interpretative strategy utilizing tactics…… and food intake, as well as a series of references linking the interior architectural qualities of healthcare environments with the health and well-being of patients. On the basis of these findings, the thesis presents the concept of Architectural Theatricality as well as a set of design principles…

  6. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    Science.gov (United States)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk considerations presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can likewise be handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).
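
    CVaR itself is straightforward to estimate from simulated cost samples, as the mean of the worst (1 - alpha) fraction of outcomes; the sketch below uses synthetic placeholder costs.

    ```python
    # Monte Carlo estimate of VaR and CVaR over simulated execution costs.
    import numpy as np

    rng = np.random.default_rng(0)
    costs = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)  # placeholder costs
    alpha = 0.95

    var = np.quantile(costs, alpha)        # Value-at-Risk at level alpha
    cvar = costs[costs >= var].mean()      # Conditional Value-at-Risk
    objective = costs.mean() + 1.0 * cvar  # expected cost plus risk penalty
    print(f"VaR={var:.3f}  CVaR={cvar:.3f}  objective={objective:.3f}")
    ```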

  7. Research in architecture : reflection on three approaches linking research and design

    NARCIS (Netherlands)

    Colenbrander, B.J.F.; Pereira Roders, A.R.; Veldpaus, L.; Fidanoglu, Esra

    2013-01-01

    Research in architecture is not new; it is usually known as the analysis or pre-design stage. Architects get acquainted with program requirements, the project context, and/or other inspiring works, including concepts from theory, philosophy, or history. Seldom are the architects who design without any…

  8. Impact of contour on aesthetic judgments and approach-avoidance decisions in architecture

    DEFF Research Database (Denmark)

    Vartanian, Oshin; Navarrete, Gorka; Chatterjee, Anjan

    2013-01-01

    On average, we urban dwellers spend about 90% of our time indoors, and we share the intuition that the physical features of the places we live and work in influence how we feel and act. However, there is surprisingly little research on how architecture impacts behavior, much less on how it influences…

  9. Information security architecture an integrated approach to security in the organization

    CERN Document Server

    Killmeyer, Jan

    2006-01-01

    Information Security Architecture, Second Edition incorporates the knowledge developed during the past decade that has pushed the information security life cycle from infancy to a more mature, understandable, and manageable state. It simplifies security by providing clear and organized methods and by guiding you to the most effective resources available.

  10. Information Architecture for the Web: The IA Matrix Approach to Designing Children's Portals.

    Science.gov (United States)

    Large, Andrew; Beheshti, Jamshid; Cole, Charles

    2002-01-01

    Presents a matrix that can serve as a tool for designing the information architecture of a Web portal in a logical and systematic manner. Highlights include interfaces; metaphors; navigation; interaction; information retrieval; and an example of a children's Web portal to provide access to museum information. (Author/LRW)

  11. The Development of a Coalition Operational Architecture: A British and US Army Approach

    National Research Council Canada - National Science Library

    Galvin, K. E; Madigan, J. C

    2000-01-01

    ... (COA) to support a US Corps operating as a Combined Joint Task Force (CJTF) Headquarters with up to a UK Division as an integral part of its ORBAT would be investigated by staff from both countries' Army Operational Architecture (AOA) teams...

  12. A Survey of Some Approaches to Distributed Data Base & Distributed File System Architecture.

    Science.gov (United States)

    1980-01-01

    Figure 7-1: MUFFIN logical architecture.

  13. An Architectural Approach towards Innovative Renewable Energy Infrastructure in Kapisillit, Greenland

    DEFF Research Database (Denmark)

    Carruth, Susan; Krogh, Peter

    2014-01-01

    workshop with architecture students who were asked to create conceptual strategies, driven by distributed, community-controlled renewable energy, for the future of the village. It culminates in a discussion on how this empirical work contributes towards the construction of a vocabulary of material...

  14. A Project-Based Learning Approach to Programmable Logic Design and Computer Architecture

    Science.gov (United States)

    Kellett, C. M.

    2012-01-01

    This paper describes a course in programmable logic design and computer architecture as it is taught at the University of Newcastle, Australia. The course is designed around a major design project and has two supplemental assessment tasks that are also described. The context of the Computer Engineering degree program within which the course is…

  15. Understanding the Value of Enterprise Architecture for Organizations: A Grounded Theory Approach

    Science.gov (United States)

    Nassiff, Edwin

    2012-01-01

    There is a high rate of information system implementation failures attributed to the lack of alignment between business and information technology strategy. Although enterprise architecture (EA) is a means to correct alignment problems and executives highly rate the importance of EA, it is still not used in most organizations today. Current…

  16. A New Approach for Optimal Sizing of Standalone Photovoltaic Systems

    OpenAIRE

    Khatib, Tamer; Mohamed, Azah; Sopian, K.; Mahmoud, M.

    2012-01-01

    This paper presents a new method for determining the optimal sizing of standalone photovoltaic (PV) system in terms of optimal sizing of PV array and battery storage. A standalone PV system energy flow is first analysed, and the MATLAB fitting tool is used to fit the resultant sizing curves in order to derive general formulas for optimal sizing of PV array and battery. In deriving the formulas for optimal sizing of PV array and battery, the data considered are based on five sites in Malaysia...

  17. An interactive and flexible approach to stamping design and optimization

    International Nuclear Information System (INIS)

    Roy, Subir; Kunju, Ravi; Kirby, David

    2004-01-01

    This paper describes an efficient method that integrates finite element analysis (FEA), mesh morphing and response-surface-based optimization in order to implement an automated and flexible software tool for optimizing stamping tool and process design. For FEA, a robust and extremely fast inverse solver is chosen. For morphing, a state-of-the-art mesh morpher that interactively generates shape variables for optimization studies is used. The optimization algorithm utilized in this study enables a global search over a multitude of parameters and is highly flexible with regard to the choice of objective functions. A quality function that minimizes formability defects resulting from stretching and compression is implemented.
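
    The response-surface step can be pictured as fitting a cheap quadratic surrogate to a handful of expensive solver runs and then optimizing the surrogate. In this sketch a toy analytic function stands in for the FEA-based quality function, and a grid search stands in for the paper's optimizer:

        import numpy as np

        # Hypothetical stand-in for the solver-evaluated formability quality function
        # of two shape variables generated by mesh morphing.
        def formability_defect(x1, x2):
            return (x1 - 0.3) ** 2 + 2.0 * (x2 + 0.1) ** 2 + 0.05 * x1 * x2

        rng = np.random.default_rng(1)
        X = rng.uniform(-1, 1, size=(30, 2))          # sampled shape-variable designs
        y = np.array([formability_defect(a, b) for a, b in X])

        # Quadratic response surface: y ~ 1, x1, x2, x1^2, x2^2, x1*x2.
        A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                             X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)

        # Optimize the cheap surrogate instead of re-running the expensive solver.
        g = np.linspace(-1, 1, 201)
        G1, G2 = np.meshgrid(g, g)
        surf = (coef[0] + coef[1] * G1 + coef[2] * G2
                + coef[3] * G1 ** 2 + coef[4] * G2 ** 2 + coef[5] * G1 * G2)
        i, j = np.unravel_index(surf.argmin(), surf.shape)
        print("surrogate optimum near:", G1[i, j], G2[i, j])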

  18. The impact of optimizing solar radiation received and energy disposal on architectural design results by using computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Rezaei, Davood; Farajzadeh Khosroshahi, Samaneh; Sadegh Falahat, Mohammad [Zanjan University (Iran, Islamic Republic of)], email: d_rezaei@znu.ac.ir, email: ronas_66@yahoo.com, email: Safalahat@yahoo.com

    2011-07-01

    In order to minimize the energy consumption of a building it is important to achieve optimum solar energy. The aim of this paper is to introduce the use of computer modeling in the early stages of design to optimize solar radiation received and energy disposal in an architectural design. Computer modeling was performed on 2 different projects located in Los Angeles, USA, using ECOTECT software. Changes were made to the designs following analysis of the modeling results and a subsequent analysis was carried out on the optimized designs. Results showed that the computer simulation allows the designer to set the analysis criteria and improve the energy performance of a building before it is constructed; moreover, it can be used for a wide range of optimization levels. This study pointed out that computer simulation should be performed in the design stage to optimize a building's energy performance.

  19. A Polynomial Optimization Approach to Constant Rebalanced Portfolio Selection

    NARCIS (Netherlands)

    Takano, Y.; Sotirov, R.

    2010-01-01

    We address the multi-period portfolio optimization problem with the constant rebalancing strategy. This problem is formulated as a polynomial optimization problem (POP) by using a mean-variance criterion. In order to solve the POPs of high degree, we develop a cutting-plane algorithm based on

  20. Hybrid Metaheuristic Approach for Nonlocal Optimization of Molecular Systems.

    Science.gov (United States)

    Dresselhaus, Thomas; Yang, Jack; Kumbhar, Sadhana; Waller, Mark P

    2013-04-09

    Accurate modeling of molecular systems requires a good knowledge of the structure; therefore, conformation searching/optimization is a routine necessity in computational chemistry. Here we present a hybrid metaheuristic optimization (HMO) algorithm, which combines ant colony optimization (ACO) and particle swarm optimization (PSO) for the optimization of molecular systems. The HMO implementation meta-optimizes the parameters of the ACO algorithm on-the-fly by the coupled PSO algorithm. The ACO parameters were optimized on a set of small difluorinated polyenes, where the parameters exhibited small variance as the size of the molecule increased. The HMO algorithm was validated by searching for the closed form of around 100 molecular balances. Compared to the gradient-based optimized molecular balance structures, the HMO algorithm was able to find low-energy conformations with an 87% success rate. Finally, the computational effort for generating low-energy conformation(s) for the phenylalanyl-glycyl-glycine tripeptide was approximately 60 CPU hours with the ACO algorithm, in comparison to 4 CPU years required for an exhaustive brute-force calculation.
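
    A compact illustration of the coupling, with a small traveling-salesman landscape standing in for the molecular conformation search; swarm sizes, parameter ranges and update constants are hypothetical, not the paper's settings:

        import numpy as np

        rng = np.random.default_rng(2)
        pts = rng.random((10, 2))                    # toy landscape: a 10-city TSP
        D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
        n = len(D)

        def aco(alpha, rho, n_ants=12, n_iter=20):
            """Tiny ant colony optimizer; returns the best tour length found."""
            tau = np.ones_like(D)                    # pheromone trails
            eta = 1.0 / (D + np.eye(n))              # heuristic visibility
            best = np.inf
            for _ in range(n_iter):
                for _ in range(n_ants):
                    tour = [0]
                    while len(tour) < n:
                        w = (tau[tour[-1]] ** alpha) * eta[tour[-1]]
                        w[tour] = 0.0                # forbid already-visited cities
                        tour.append(int(rng.choice(n, p=w / w.sum())))
                    length = sum(D[tour[k], tour[(k + 1) % n]] for k in range(n))
                    best = min(best, length)
                    for k in range(n):               # deposit pheromone along the tour
                        tau[tour[k], tour[(k + 1) % n]] += 1.0 / length
                tau *= 1.0 - rho                     # evaporation
            return best

        # Outer PSO swarm meta-optimizes the ACO parameters (alpha, rho) on the fly.
        lo, hi = np.array([0.5, 0.05]), np.array([3.0, 0.5])
        pos = rng.uniform(lo, hi, size=(5, 2))
        vel = np.zeros_like(pos)
        pbest, pval = pos.copy(), np.array([aco(*p) for p in pos])
        for _ in range(4):
            gbest = pbest[pval.argmin()]
            vel = (0.7 * vel
                   + 1.5 * rng.random(pos.shape) * (pbest - pos)
                   + 1.5 * rng.random(pos.shape) * (gbest - pos))
            pos = np.clip(pos + vel, lo, hi)
            val = np.array([aco(*p) for p in pos])
            better = val < pval
            pbest[better], pval[better] = pos[better], val[better]
        print("tuned (alpha, rho):", pbest[pval.argmin()], "best tour:", pval.min())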

  1. Optimal angle reduction - a behavioral approach to linear system approximation

    NARCIS (Netherlands)

    Roorda, B.; Weiland, S.

    2001-01-01

    We investigate the problem of optimal state reduction under minimization of the angle between system behaviors. The angle is defined in a worst-case sense, as the largest angle that can occur between a system trajectory and its optimal approximation in the reduced-order model. This problem is

  2. A polynomial optimization approach to constant rebalanced portfolio selection

    NARCIS (Netherlands)

    Takano, Y.; Sotirov, R.

    2012-01-01

    We address the multi-period portfolio optimization problem with the constant rebalancing strategy. This problem is formulated as a polynomial optimization problem (POP) by using a mean-variance criterion. In order to solve the POPs of high degree, we develop a cutting-plane algorithm based on
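
    The polynomial structure can be made explicit in standard notation (not necessarily the paper's exact conventions). With a constant weight vector w rebalanced every period and per-period returns r_t, terminal wealth and the mean-variance objective are

        W_T(w) = W_0 \prod_{t=1}^{T} \left( 1 + w^{\top} r_t \right),
        \qquad
        \max_{\mathbf{1}^{\top} w = 1,\; w \geq 0} \;
        \mathbb{E}\!\left[ W_T(w) \right] - \lambda \, \mathrm{Var}\!\left[ W_T(w) \right],

    so that, with expectations taken over return scenarios, the objective is a polynomial of degree T (and up to degree 2T through the variance term) in w, hence a POP.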

  3. A New Approach for Optimal Sizing of Standalone Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Tamer Khatib

    2012-01-01

    Full Text Available This paper presents a new method for determining the optimal sizing of standalone photovoltaic (PV system in terms of optimal sizing of PV array and battery storage. A standalone PV system energy flow is first analysed, and the MATLAB fitting tool is used to fit the resultant sizing curves in order to derive general formulas for optimal sizing of PV array and battery. In deriving the formulas for optimal sizing of PV array and battery, the data considered are based on five sites in Malaysia, which are Kuala Lumpur, Johor Bharu, Ipoh, Kuching, and Alor Setar. Based on the results of the designed example for a PV system installed in Kuala Lumpur, the proposed method gives satisfactory optimal sizing results.

  4. Geometrical Optimization Approach to Isomerization: Models and Limitations.

    Science.gov (United States)

    Chang, Bo Y; Shin, Seokmin; Engel, Volker; Sola, Ignacio R

    2017-11-02

    We study laser-driven isomerization reactions through an excited electronic state using the recently developed Geometrical Optimization procedure. Our goal is to analyze whether an initial wave packet in the ground state, with optimized amplitudes and phases, can be used to enhance the yield of the reaction at faster rates, driven by a single picosecond pulse or a pair of femtosecond pulses resonant with the electronic transition. We show that the symmetry of the system imposes limitations in the optimization procedure, such that the method rediscovers the pump-dump mechanism.

  5. From Requirements to code: an Architecture-centric Approach for producing Quality Systems

    OpenAIRE

    Bucchiarone, Antonio; Di Ruscio, Davide; Muccini, Henry; Pelliccione, Patrizio

    2009-01-01

    When engineering complex and distributed software and hardware systems (increasingly used in many sectors, such as manufacturing, aerospace, transportation, communication, energy, and health-care), quality has become a big issue, since failures can have economic consequences and can also endanger human life. Model-based specifications of a component-based system permit explicit modelling of the structure and behaviour of components and their integration. In particular Software Architectures (S...

  6. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    Directory of Open Access Journals (Sweden)

    Akemi Gálvez

    2013-01-01

    Full Text Available Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor’s method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
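
    A minimal sketch of the convex subproblem, assuming SciPy's least-squares spline fitter; the coarse knot placement below stands in for the paper's precomputation, and the data parameterization that the firefly algorithm would optimize is simply taken as given:

        import numpy as np
        from scipy.interpolate import make_lsq_spline

        rng = np.random.default_rng(3)
        x = np.linspace(0.0, 1.0, 80)                 # fixed data parameterization
        y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(80)

        # "Indirect approach": knots are precomputed coarsely, not optimized directly.
        k = 3                                         # cubic B-splines
        interior = np.linspace(0.1, 0.9, 7)
        t = np.concatenate([[0.0] * (k + 1), interior, [1.0] * (k + 1)])

        # With knots and parameterization fixed, the fit is a linear least-squares
        # problem; this is the convex subproblem the abstract refers to (solved
        # there by singular value decomposition).
        spl = make_lsq_spline(x, y, t, k=k)
        print("max residual:", np.abs(spl(x) - y).max())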

  7. Potential and challenges in home care service process optimization : a route optimization approach

    OpenAIRE

    Nakari, Pentti J. E.

    2016-01-01

    Aging of the population is an increasing problem in many countries, including Finland, and it poses a challenge to public services such as home care. Vehicle routing problem (VRP) type optimization solutions are one possible way to decrease the time required for planning home visits and driving to customer addresses, as well as decreasing transportation costs. Although VRP optimization is widely and successfully applied to commercial and industrial logistics, the home care ...
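
    A toy construction heuristic of the kind such route planners start from; real home-care VRP solvers add time windows, staff skills and capacities. All coordinates are hypothetical:

        import math

        def nearest_neighbour_route(depot, customers):
            """Greedy VRP-style construction: always drive to the closest unvisited home."""
            route, current = [], depot
            remaining = list(customers)
            while remaining:
                nxt = min(remaining, key=lambda c: math.dist(current, c))
                route.append(nxt)
                remaining.remove(nxt)
                current = nxt
            return route

        depot = (0.0, 0.0)
        homes = [(2.0, 1.0), (0.5, 3.0), (1.5, 0.2), (3.0, 2.5)]
        print("visit order:", nearest_neighbour_route(depot, homes))

    Production planners typically refine such an initial route with local search moves (2-opt, relocation) under the real-world constraints.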

  8. Novel approach for optimization of fermentative condition for ...

    African Journals Online (AJOL)

    Jane

    2011-07-20

    Jul 20, 2011 ... School of Biochemical Engineering, Institute of Technology, Banaras Hindu University ... been applied for the optimization of a few biochemical ..... Methanol by Methylo bacterium Extorquens DSMZ. 1340. Iran J. Chem. Chem.

  9. PLM support to architecture based development

    DEFF Research Database (Denmark)

    Bruun, Hans Peter Lomholt

    , organisation, processes, etc. To identify, evaluate, and align aspects of these domains are necessary for developing the optimal layout of product architectures. It is stated in this thesis that architectures describe building principles for products, product families, and product programs, where this project...... and developing architectures can be difficult to manage, update, and maintain during development. The concept of representing product architectures in computer-based product information tools has though been central in this research, and in the creation of results. A standard PLM tool (Windchill PDMLink...... architectures in computer systems. Presented results build on research literature and experiences from industrial partners. Verification of the theory contributions, approaches, models, and tools, have been carried out in industrial projects, with promising results. This thesis describes the means for: (1...

  10. A new approach of optimization procedure for superconducting integrated circuits

    International Nuclear Information System (INIS)

    Saitoh, K.; Soutome, Y.; Tarutani, Y.; Takagi, K.

    1999-01-01

    We have developed and tested a new circuit simulation procedure for superconducting integrated circuits which can be used to optimize circuit parameters. This method reveals a stable operation region in the circuit parameter space in connection with the global bias margin by means of a contour plot of the global bias margin versus the circuit parameters. An optimal set of parameters with margins larger than those of the initial values has been found in the stable region. (author)

  11. MVMO-based approach for optimal placement and tuning of supplementary damping controller

    NARCIS (Netherlands)

    Rueda Torres, J.L.; Gonzalez-Longatt, F.

    2015-01-01

    This paper introduces an approach based on the Swarm Variant of the Mean-Variance Mapping Optimization (MVMO-S) to solve the multi-scenario formulation of the optimal placement and coordinated tuning of power system supplementary damping controllers (POCDCs). The effectiveness of the approach is

  12. Architecturally Reconfigurable Development of Mobile Games

    DEFF Research Database (Denmark)

    Zhang, Weishan

    2005-01-01

    Mobile game domain variants could be handled uniformly and traced across all kinds of software assets. The architecture and configuration mechanism in our approach make optimizations built into meta-components propagate to all product line members. We show this approach with an industrial Role-Playing-Game......Mobile game development must face the problem of multiple hardware and software platforms, which brings a large number of variants. To cut development and maintenance efforts, in this paper we present an architecturally reconfigurable software product line approach to developing mobile games...

  13. A universal approach to electrically connecting nanowire arrays using nanoparticles—application to a novel gas sensor architecture

    Science.gov (United States)

    Parthangal, Prahalad M.; Cavicchi, Richard E.; Zachariah, Michael R.

    2006-08-01

    We report on a novel, in situ approach toward connecting and electrically contacting vertically aligned nanowire arrays using conductive nanoparticles. The utility of the approach is demonstrated by development of a gas sensing device employing this nano-architecture. Well-aligned, single-crystalline zinc oxide nanowires were grown through a direct thermal evaporation process at 550 °C on gold catalyst layers. Electrical contact to the top of the nanowire array was established by creating a contiguous nanoparticle film through electrostatic attachment of conductive gold nanoparticles exclusively onto the tips of nanowires. A gas sensing device was constructed using such an arrangement and the nanowire assembly was found to be sensitive to both reducing (methanol) and oxidizing (nitrous oxides) gases. This assembly approach is amenable to any nanowire array for which a top contact electrode is needed.

  14. A universal approach to electrically connecting nanowire arrays using nanoparticles-application to a novel gas sensor architecture

    International Nuclear Information System (INIS)

    Parthangal, Prahalad M; Cavicchi, Richard E; Zachariah, Michael R

    2006-01-01

    We report on a novel, in situ approach toward connecting and electrically contacting vertically aligned nanowire arrays using conductive nanoparticles. The utility of the approach is demonstrated by development of a gas sensing device employing this nano-architecture. Well-aligned, single-crystalline zinc oxide nanowires were grown through a direct thermal evaporation process at 550 °C on gold catalyst layers. Electrical contact to the top of the nanowire array was established by creating a contiguous nanoparticle film through electrostatic attachment of conductive gold nanoparticles exclusively onto the tips of nanowires. A gas sensing device was constructed using such an arrangement and the nanowire assembly was found to be sensitive to both reducing (methanol) and oxidizing (nitrous oxides) gases. This assembly approach is amenable to any nanowire array for which a top contact electrode is needed

  15. Architectural approach to the energy performance of buildings in a hot-dry climate with special reference to Egypt

    Energy Technology Data Exchange (ETDEWEB)

    Hamdy, I F

    1986-01-01

    A thesis is presented on the changing approach to the architectural design of buildings in a hot, dry climate, in view of the increased recognition of the importance of energy efficiency. The thermal performance of buildings in Egypt is used as an example, and the nature of the local climate and human requirements are also studied. Other effects on thermal performance considered include building form, orientation and surrounding conditions. An evaluative computer model is constructed, and its application allows prediction of the energy-performance effects of changing design parameters.

  16. Control the Morphologies and the Pore Architectures of Mesoporous Silicas through a Dual-Templating Approach

    International Nuclear Information System (INIS)

    Wang, H.; Chen, H.; Xu, Z.; Wang, S.; Li, B.; Li, Y.

    2012-01-01

    Mesoporous silica nanospheres were prepared using a chiral cationic low-molecular-weight amphiphile and organic solvents such as toluene, cyclohexane, and carbon tetrachloride through a dual-templating approach. X-ray diffraction, nitrogen sorption, field emission scanning electron microscopy, and transmission electron microscopy techniques have been used to characterize the mesoporous silicas. The volume ratio of toluene to water plays an important role in controlling the morphologies and the pore architectures of the mesoporous silicas. It was also found that mesoporous silica nanoflakes can be prepared by adding tetrahydrofuran to the reaction mixtures.

  17. Geometry optimization of molecules within an LCGTO local-density functional approach

    International Nuclear Information System (INIS)

    Mintmire, J.W.

    1990-01-01

    We describe our implementation of geometry optimization techniques within the linear combination of Gaussian-type orbitals (LCGTO) approach to local-density functional theory. The algorithm for geometry optimization is based on the evaluation of the gradient of the total energy with respect to internal coordinates within the local-density functional scheme. We present optimization results for a range of small molecules which serve as test cases for our approach

  18. Multi-objective optimization of design and testing of safety instrumented systems with MooN voting architectures using a genetic algorithm

    International Nuclear Information System (INIS)

    Torres-Echeverría, A.C.; Martorell, S.; Thompson, H.A.

    2012-01-01

    This paper presents the optimization of design and test policies of safety instrumented systems using MooN voting redundancies by a multi-objective genetic algorithm. The objectives to optimize are the Average Probability of Dangerous Failure on Demand, which represents the system safety integrity, the Spurious Trip Rate and the Lifecycle Cost. In this way safety, reliability and cost are included. This is done by using novel models of time-dependent probability of failure on demand and spurious trip rate, recently published by the authors. These models are capable of delivering the level of modeling detail required by the standard IEC 61508. Modeling includes common cause failure and diagnostic coverage. The Probability of Failure on Demand model also permits quantification of results under changing testing strategies. The optimization is performed using the multi-objective Genetic Algorithm NSGA-II. This allows weighting of the trade-offs between the three objectives and, thus, implementation of safety systems that keep a good balance between safety, reliability and cost. The complete methodology is applied to two separate case studies, one for optimization of system design with redundancy allocation and component selection, and another for optimization of testing policies. Both optimization cases are performed for systems with MooN redundancies and for systems with only parallel redundancies. Their results are compared, demonstrating how introducing MooN architectures presents a significant improvement for the optimization process.
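
    At the heart of any such multi-objective GA is non-dominated (Pareto) sorting over the three objectives. A minimal sketch with hypothetical (PFDavg, spurious trip rate, lifecycle cost) values, all to be minimized:

        def dominates(a, b):
            """a dominates b if it is no worse in every objective and better in one."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def pareto_front(designs):
            return [d for d in designs
                    if not any(dominates(other, d) for other in designs if other is not d)]

        # Hypothetical candidate MooN designs: (PFDavg, STR per year, cost in USD).
        candidates = [
            (1.2e-3, 0.10, 140_000),
            (4.0e-4, 0.25, 210_000),
            (9.0e-4, 0.12, 150_000),
            (2.0e-3, 0.30, 260_000),   # dominated by the first candidate
        ]
        print(pareto_front(candidates))

    NSGA-II additionally ranks successive fronts and preserves diversity with a crowding-distance measure.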

  19. Hierarchical Swarm Model: A New Approach to Optimization

    Directory of Open Access Journals (Sweden)

    Hanning Chen

    2010-01-01

    Full Text Available This paper presents a novel optimization model called hierarchical swarm optimization (HSO), which simulates the natural hierarchical complex system from which more complex intelligence can emerge for complex problem solving. The proposed model is intended to suggest ways in which the performance of HSO-based algorithms on complex optimization problems can be significantly improved. This performance improvement is obtained by constructing the HSO hierarchies, which means that an agent in a higher-level swarm can be composed of swarms of other agents from lower levels, and different swarms at different levels evolve on different spatiotemporal scales. A novel optimization algorithm (named PS2O), based on the HSO model, is instantiated and tested to illustrate the ideas of the HSO model clearly. Experiments were conducted on a set of 17 benchmark optimization problems including both continuous and discrete cases. The results demonstrate remarkable performance of the PS2O algorithm on all chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms.

  20. Photoperiodic envelope: application of the generative design based on the performance of architectural envelopes, the exploring its shape and performance optimization

    International Nuclear Information System (INIS)

    Viquez Alas, Ernesto Alonso

    2013-01-01

    An alternative design method for creating an architectural envelope is demonstrated through the application of tools and techniques such as algorithms, optimization, parametrization and simulation. The aesthetic criteria of the form are enriched while achieving a reduction in solar radiation rates. The optimization, simulation, analysis and synthesis techniques are framed within the contemporary paradigms of generative design and performance-based design. The potential benefits of such an alternative design method, and the conditions it must meet, are outlined to facilitate its application in the design of envelopes. An application and testing study explores the envelope topology, and the optimization results concerning the reduction of solar incidence are examined in a simulated environment. [es

  1. Reframing Architecture

    DEFF Research Database (Denmark)

    Riis, Søren

    2013-01-01

    I would like to thank Prof. Stephen Read (2011) and Prof. Andrew Benjamin (2011) for both giving inspiring and elaborate comments on my article “Dwelling in-between walls: the architectural surround”. As I will try to demonstrate below, their two different responses not only supplement my article...... focuses on how the absence of an initial distinction might threaten the endeavour of my paper. In my reply to Read and Benjamin, I will discuss their suggestions and arguments, while at the same time hopefully clarifying the postphenomenological approach to architecture....

  2. An opinion formation based binary optimization approach for feature selection

    Science.gov (United States)

    Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo

    2018-02-01

    This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed technique mimics the human-human interaction mechanism based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact through an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high-dimensional datasets reveal that the proposed algorithm outperforms the others.

  3. Deterministic global optimization an introduction to the diagonal approach

    CERN Document Server

    Sergeyev, Yaroslav D

    2017-01-01

    This book begins with a concentrated introduction into deterministic global optimization and moves forward to present new original results from the authors who are well known experts in the field. Multiextremal continuous problems that have an unknown structure with Lipschitz objective functions and functions having the first Lipschitz derivatives defined over hyperintervals are examined. A class of algorithms using several Lipschitz constants is introduced which has its origins in the DIRECT (DIviding RECTangles) method. This new class is based on an efficient strategy that is applied for the search domain partitioning. In addition a survey on derivative free methods and methods using the first derivatives is given for both one-dimensional and multi-dimensional cases. Non-smooth and smooth minorants and acceleration techniques that can speed up several classes of global optimization methods with examples of applications and problems arising in numerical testing of global optimization algorithms are discussed...

  4. Hybrid Quantum-Classical Approach to Quantum Optimal Control.

    Science.gov (United States)

    Li, Jun; Yang, Xiaodong; Peng, Xinhua; Sun, Chang-Pu

    2017-04-14

    A central challenge in quantum computing is to identify more computational problems for which utilization of quantum resources can offer significant speedup. Here, we propose a hybrid quantum-classical scheme to tackle the quantum optimal control problem. We show that the most computationally demanding part of gradient-based algorithms, namely, computing the fitness function and its gradient for a control input, can be accomplished by the process of evolution and measurement on a quantum simulator. By posing queries to and receiving answers from the quantum simulator, classical computing devices update the control parameters until an optimal control solution is found. To demonstrate the quantum-classical scheme in experiment, we use a seven-qubit nuclear magnetic resonance system, on which we have succeeded in optimizing state preparation without involving classical computation of the large Hilbert space evolution.

  5. Replica approach to mean-variance portfolio optimization

    Science.gov (United States)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix, subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1. The optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.
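
    In standard notation (not necessarily the paper's exact conventions), the problem under study is

        \min_{w \in \mathbb{R}^N} \; w^{\top} \Sigma \, w
        \quad \text{s.t.} \quad \mathbf{1}^{\top} w = 1, \;\; \mu^{\top} w = \mu_0,

    with the replica calculation characterizing how the estimated optimum behaves as the aspect ratio r = N/T of the N-asset, T-observation sample approaches the critical point r = 1.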

  6. A Simulation Approach to Statistical Estimation of Multiperiod Optimal Portfolios

    Directory of Open Access Journals (Sweden)

    Hiroshi Shiraishi

    2012-01-01

    Full Text Available This paper discusses a simulation-based method for solving discrete-time multiperiod portfolio choice problems under an AR(1) process. The method is applicable even if the distributions of the return processes are unknown. We first generate simulated sample paths of the random returns by using the AR bootstrap. Then, for each sample path and each investment time, we obtain an optimal portfolio estimator, which optimizes a constant relative risk aversion (CRRA) utility function. When an investor considers an optimal investment strategy with portfolio rebalancing, it is convenient to introduce a value function. The most important difference between single-period portfolio choice problems and multiperiod ones is that the value function is time dependent. Our method takes care of the time dependency by using bootstrapped sample paths. Numerical studies are provided to examine the validity of our method. The result shows the necessity of taking care of the time dependency of the value function.
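
    A sketch of the AR bootstrap step under the stated AR(1) assumption: fit the autoregression by least squares, then rebuild return paths by resampling the fitted residuals. The series below is synthetic stand-in data:

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic observed returns following an AR(1) with phi = 0.3.
        returns = [0.0]
        for _ in range(499):
            returns.append(0.001 + 0.3 * returns[-1] + 0.01 * rng.standard_normal())
        returns = np.array(returns)

        # Fit r_t = c + phi * r_{t-1} + eps_t by least squares.
        X = np.column_stack([np.ones(len(returns) - 1), returns[:-1]])
        (c, phi), *_ = np.linalg.lstsq(X, returns[1:], rcond=None)
        resid = returns[1:] - X @ np.array([c, phi])

        def bootstrap_path(horizon):
            """One AR-bootstrap path: iterate the fitted AR(1), resampling residuals."""
            path, r_prev = [], returns[-1]
            for _ in range(horizon):
                r_prev = c + phi * r_prev + rng.choice(resid)
                path.append(r_prev)
            return np.array(path)

        paths = np.stack([bootstrap_path(12) for _ in range(1000)])
        print("mean simulated 12-period cumulative return:", paths.sum(axis=1).mean())

    In the paper's scheme, the portfolio estimator and the time-dependent value function are then evaluated on such bootstrapped paths.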

  7. Intel Many Integrated Core (MIC) architecture optimization strategies for a memory-bound Weather Research and Forecasting (WRF) Goddard microphysics scheme

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Goddard cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. WRF is a widely used weather prediction system, and its development is a collaborative effort around the globe. The Goddard microphysics scheme is very suitable for massively parallel computation, as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Goddard scheme incorporates a large number of improvements. Thus, we have optimized the code of this important part of WRF. In this paper, we present our results on optimizing the Goddard microphysics scheme on Intel Many Integrated Core (MIC) architecture hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The Intel MIC is capable of executing a full operating system and entire programs, rather than just kernels as GPUs do. The MIC coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. Getting maximum performance out of the MIC, however, requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved performance of the original code on a Xeon Phi 7120P by a factor of 4.7x. Furthermore, the same optimizations improved performance on a dual-socket Intel Xeon E5-2670 system by a factor of 2.8x compared to the original code.

  8. An optimal control approach to manpower planning problem

    Directory of Open Access Journals (Sweden)

    H. W. J. Lee

    2001-01-01

    Full Text Available A manpower planning problem is studied in this paper. The model includes scheduling different types of workers over different tasks, employing and terminating different types of workers, and assigning different types of workers to various training programmes. The aim is to find an optimal way to do all this while keeping the time-varying demand for the minimum number of workers working on each task satisfied. The problem is posed as an optimal discrete-valued control problem in discrete time. A novel numerical scheme is proposed to solve the problem, and an illustrative example is provided.

  9. Optimally eating a stochastic cake. A recursive utility approach

    International Nuclear Information System (INIS)

    Epaulard, Anne; Pommeret, Aude

    2003-01-01

    In this short paper, uncertainties about the resource stock and about technical progress are introduced into an intertemporal equilibrium model of optimal extraction of a non-renewable resource. The representative consumer maximizes a recursive utility function which disentangles intertemporal elasticity of substitution from risk aversion. A closed-form solution is derived for both the optimal extraction and price paths. The value of the intertemporal elasticity of substitution relative to unity is then crucial for understanding extraction. Moreover, this model leads to a non-renewable resource price following a geometric Brownian motion

  10. A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    Science.gov (United States)

    Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2016-05-01

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.

  11. PICNIC Architecture.

    Science.gov (United States)

    Saranummi, Niilo

    2005-01-01

    The PICNIC architecture aims at supporting inter-enterprise integration and the facilitation of collaboration between healthcare organisations. The concept of a Regional Health Economy (RHE) is introduced to illustrate the varying nature of inter-enterprise collaboration between healthcare organisations collaborating in providing health services to citizens and patients in a regional setting. The PICNIC architecture comprises a number of PICNIC IT Services and the interfaces between them, and presents a way to assemble these into a functioning Regional Health Care Network meeting the needs and concerns of its stakeholders. The PICNIC architecture is presented through a number of views relevant to different stakeholder groups. The stakeholders of the first view are national and regional health authorities and policy makers. The view describes how the architecture enables the implementation of national and regional health policies, strategies and organisational structures. The stakeholders of the second view, the service viewpoint, are the care providers, health professionals, patients and citizens. The view describes how the architecture supports and enables regional care delivery and process management including continuity of care (shared care) and citizen-centred health services. The stakeholders of the third view, the engineering view, are those that design, build and implement the RHCN. The view comprises four sub views: software engineering, IT services engineering, security and data. The proposed architecture is grounded in the mainstream of how distributed computing environments are evolving. The architecture is realised using the web services approach. A number of well established technology platforms and generic standards exist that can be used to implement the software components. The software components that are specified in PICNIC are implemented in Open Source.

  12. An approach of optimal sensitivity applied in the tertiary loop of the automatic generation control

    Energy Technology Data Exchange (ETDEWEB)

    Belati, Edmarcio A. [CIMATEC - SENAI, Salvador, BA (Brazil); Alves, Dilson A. [Electrical Engineering Department, FEIS, UNESP - Sao Paulo State University (Brazil); da Costa, Geraldo R.M. [Electrical Engineering Department, EESC, USP - Sao Paulo University (Brazil)

    2008-09-15

    This paper proposes an approach of optimal sensitivity applied in the tertiary loop of automatic generation control. The approach is based on the theorem of non-linear perturbation. From an optimal operation point obtained by an optimal power flow, a new optimal operation point is directly determined after a perturbation, i.e., without the need for an iterative process. This new optimal operation point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the automatic voltage regulators (AVR) of the generators are determined by the technique of optimal sensitivity, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of automatic generation control, in the so-called power sensitivity mode. Test results are presented to show the good performance of this approach. (author)

  13. Constructing hierarchical porous nanospheres for versatile microwave response approaches: the effect of architectural design.

    Science.gov (United States)

    Quan, Bin; Liang, Xiaohui; Yi, Heng; Gong, He; Ji, Guangbin; Chen, Jiabin; Xu, Guoyue; Du, Youwei

    2017-10-24

    Owing to their immense potential in functionalized applications, tremendous interest has been devoted to the design and synthesis of nanostructures. The introduction of a sufficient amount of microwaves into the absorbers, on the premise that the dissipation capacity is strong enough, remains a key challenge. Pursuing a general methodology to overcome the incompatibility is of great importance. There is widespread interest in designing materials with specific architectures. Herein, the common absorber candidates were chosen to feature the hierarchical porous Fe3O4@C@Fe3O4 nanospheres. Due to the reduced skin effect (induced by the low-conductivity Fe3O4 outer layer), multiple interfacial polarizations and scattering (due to the ternary hierarchical structures and nanoporous inner core) as well as the improved magnetic dissipation ability (because of multiple magnetic components), the material design enabled a promising microwave absorption performance. This study not only illustrates the primary mechanisms for the improved microwave absorption performance but also underscores the potential in designing the particular architectures as a strategy for achieving the compatibility characteristics.

  14. An Open Architecture Framework for Electronic Warfare Based Approach to HLA Federate Development

    Directory of Open Access Journals (Sweden)

    HyunSeo Kang

    2018-01-01

    Full Text Available A variety of electronic warfare models are developed in the Electronic Warfare Research Center. An Open Architecture Framework for Electronic Warfare (OAFEw) has been developed for reusability of the various object models participating in electronic warfare simulation and for extensibility of the electronic warfare simulator. OAFEw is a kind of component-based software (SW) lifecycle management support framework. This OAFEw is defined by six components and ten rules. The purpose of this study is to construct a Distributed Simulation Interface Model, according to the rules of OAFEw, and to create the Use Case Model of OAFEw Reference Conceptual Model version 1.0. This is embodied in the OAFEw-FOM (Federate Object Model) for High-Level Architecture (HLA) based distributed simulation. Therefore, we design and implement an EW real-time distributed simulation that can work with a model in C++ and the MATLAB API (Application Programming Interface). In addition, the OAFEw-FOM, electronic component model, and scenario of the electronic warfare domain were designed through simple scenarios for verification, and real-time distributed simulation between C++ and MATLAB was performed through the OAFEw-Distributed Simulation Interface.

  15. Integrating emerging earth science technologies into disaster risk management: an enterprise architecture approach

    Science.gov (United States)

    Evans, J. D.; Hao, W.; Chettri, S. R.

    2014-12-01

    Disaster risk management has grown to rely on earth observations, multi-source data analysis, numerical modeling, and interagency information sharing. The practice and outcomes of disaster risk management will likely undergo further change as several emerging earth science technologies come of age: mobile devices; location-based services; ubiquitous sensors; drones; small satellites; satellite direct readout; Big Data analytics; cloud computing; Web services for predictive modeling, semantic reconciliation, and collaboration; and many others. Integrating these new technologies well requires developing and adapting them to meet current needs; but also rethinking current practice to draw on new capabilities to reach additional objectives. This requires a holistic view of the disaster risk management enterprise and of the analytical or operational capabilities afforded by these technologies. One helpful tool for this assessment, the GEOSS Architecture for the Use of Remote Sensing Products in Disaster Management and Risk Assessment (Evans & Moe, 2013), considers all phases of the disaster risk management lifecycle for a comprehensive set of natural hazard types, and outlines common clusters of activities and their use of information and computation resources. We are using these architectural views, together with insights from current practice, to highlight effective, interrelated roles for emerging earth science technologies in disaster risk management. These roles may be helpful in creating roadmaps for research and development investment at national and international levels.

  16. A Blended Learning Approach to the Teaching of Professional Practice in Architecture

    Directory of Open Access Journals (Sweden)

    Murray Lane

    2015-05-01

    Full Text Available This paper reports on a number of blended learning activities conducted in two subjects of a Master of Architecture degree at a major Australian university. The subjects were related to “professional practice” and as such represent a little researched area of architectural curriculum. The research provides some insight into the student perceptions of learning opportunity and engagement associated with on-line delivery modes. Students from these two subjects were surveyed for their perceptions about the opportunity for learning afforded by the on-line components, and also for their perceived level of engagement. Responses to these perceptions of traditional and on-line modes of delivery are compared and analysed for significant differences. While students were generally positive in response to the learning experiences, analysis of the results shows that students found the traditional modes to assist in their learning significantly more than on-line modes. Students were neutral regarding the opportunity for engagement that on-line modes provided. Analysis of the students’ gender, age and hours of paid work was also conducted to ascertain any relationship with attitudes to the flexibility of on-line delivery; no significant relationship was detected. This study has shown that students were generally resistant to on-line engagement opportunities and their ability to support learning.

  17. Particle Swarm Optimization approach to defect detection in armour ceramics.

    Science.gov (United States)

    Kesharaju, Manasa; Nagarajah, Romesh

    2017-03-01

    In this research, various extracted features were used in the development of an automated ultrasonic-sensor-based inspection system that enables defect classification in each ceramic component prior to despatch to the field. Classification is an important task, and the large number of irrelevant, redundant features commonly introduced to a dataset reduces a classifier's performance. Feature selection aims to reduce the dimensionality of the dataset while improving the performance of a classification system. In the context of a multi-criteria optimization problem (i.e. to minimize the classification error rate and reduce the number of features) such as the one discussed in this research, the literature suggests that evolutionary algorithms offer good results. Besides, it is noted that Particle Swarm Optimization (PSO) has not been explored, especially in the field of classification of high-frequency ultrasonic signals. Hence, a binary-coded Particle Swarm Optimization (BPSO) technique is investigated for the implementation of feature subset selection and to optimize the classification error rate. In the proposed method, the population data is used as input to an Artificial Neural Network (ANN) based classification system to obtain the error rate, as the ANN serves as an evaluator of the PSO fitness function. Copyright © 2016. Published by Elsevier B.V.
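
    A minimal sketch of binary PSO with the usual sigmoid transfer function and an XOR-distance velocity update (one common BPSO variant); the fitness below is a crude stand-in for the paper's ANN error evaluator, and all constants and data are hypothetical:

        import numpy as np

        rng = np.random.default_rng(5)
        X = rng.standard_normal((200, 15))              # stand-in feature matrix
        y = (X[:, 0] - 2 * X[:, 3] > 0).astype(int)     # labels driven by 2 features

        def fitness(mask):
            """Stand-in for the ANN error: misclassification of a naive score,
            plus a small penalty on the number of selected features."""
            if not mask.any():
                return 1.0
            score = X[:, mask].sum(axis=1)
            pred = (score > np.median(score)).astype(int)
            return (pred != y).mean() + 0.01 * mask.sum()

        n_particles, n_feat = 10, X.shape[1]
        pos = rng.random((n_particles, n_feat)) < 0.5   # binary feature masks
        vel = np.zeros((n_particles, n_feat))
        pbest, pval = pos.copy(), np.array([fitness(p) for p in pos])

        for _ in range(30):
            gbest = pbest[pval.argmin()]
            vel = (0.7 * vel
                   + 1.5 * rng.random(vel.shape) * (pbest ^ pos)
                   + 1.5 * rng.random(vel.shape) * (gbest ^ pos))
            prob = 1.0 / (1.0 + np.exp(-vel))           # sigmoid transfer function
            pos = rng.random(vel.shape) < prob          # re-sample binary positions
            val = np.array([fitness(p) for p in pos])
            better = val < pval
            pbest[better], pval[better] = pos[better], val[better]

        print("selected features:", np.flatnonzero(pbest[pval.argmin()]))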

  18. MVMO-based approach for optimal placement and tuning of ...

    African Journals Online (AJOL)

    DR OKE

    differential evolution DE algorithm with adaptive crossover operator, .... x are assigned by using a sequential scheme which accounts for mean and ... the representative scenarios from probabilistic model based Monte Carlo ... Comparison of average convergence of MVMO-S with other metaheuristic optimization methods.

  19. A compensatory approach to optimal selection with mastery scores

    NARCIS (Netherlands)

    van der Linden, Willem J.; Vos, Hendrik J.

    1994-01-01

    This paper presents some Bayesian theories of simultaneous optimization of decision rules for test-based decisions. Simultaneous decision making arises when an institution has to make a series of selection, placement, or mastery decisions with respect to subjects from a population. An obvious

  20. Design of experiment approach for the process optimization of ...

    African Journals Online (AJOL)

    Mulberry is considered as food-medicine herb, with specific nutritional and medicinal values. In this study, response surface methodology (RSM) was employed to optimize the ultrasonic-assisted extraction of total polysaccharide from mulberry using Box-Behnken design (BBD). Based on single factor experiments, a three ...

  1. Multi-objective optimization approach for air traffic flow management

    Directory of Open Access Journals (Sweden)

    Fadil Rabie

    2017-01-01

    The decision-making stage was then performed with the aid of data clustering techniques to reduce the size of the Pareto-optimal set and obtain a smaller representation of the multi-objective design space, thereby making it easier for the decision-maker to find satisfactory and meaningful trade-offs, and to select a preferred final design solution.

  2. Optimal pricing in retail: a Cox regression approach

    NARCIS (Netherlands)

    Meijer, R.; Bhulai, S.

    2013-01-01

    Purpose: The purpose of this paper is to study the optimal pricing problem that retailers are challenged with when dealing with seasonal products. The friction between expected demand and realized demand creates a risk that supply during the season is not cleared, thus forcing the retailer to

  3. Taxing Strategies for Carbon Emissions: A Bilevel Optimization Approach

    Directory of Open Access Journals (Sweden)

    Wei Wei

    2014-04-01

    Full Text Available This paper presents a quantitative and computational method to determine the optimal tax rate among generating units. To strike a balance between the reduction of carbon emission and the profit of energy sectors, the proposed bilevel optimization model can be regarded as a Stackelberg game between the government agency and the generation companies. The upper-level, which represents the government agency, aims to limit total carbon emissions within a certain level by setting optimal tax rates among generators according to their emission performances. The lower-level, which represents decision behaviors of the grid operator, tries to minimize the total production cost under the tax rates set by the government. The bilevel optimization model is finally reformulated into a mixed integer linear program (MILP which can be solved by off-the-shelf MILP solvers. Case studies on a 10-unit system as well as a provincial power grid in China demonstrate the validity of the proposed method and its capability in practical applications.
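
    Schematically, and in generic notation rather than the paper's exact model, the Stackelberg structure reads

        \min_{\tau \geq 0} \; F\left(\tau, g^{*}(\tau)\right)
        \quad \text{s.t.} \quad \sum_{i} e_i \, g_i^{*}(\tau) \leq E_{\max},
        \qquad
        g^{*}(\tau) \in \arg\min_{g \in \mathcal{G}} \; \sum_{i} \left( c_i(g_i) + \tau_i \, e_i \, g_i \right),

    where tau collects the unit-level tax rates, g the dispatch decisions, e_i the emission rates and c_i the production costs. Replacing the lower-level argmin by its KKT optimality conditions and linearizing the complementarity terms with big-M constraints is the standard route to the single-level MILP reformulation the abstract mentions.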

  4. Allomorphs in the Igbo Language: An Optimality Theory Approach ...

    African Journals Online (AJOL)

    Allomorphs are any two or more morphemes that have different forms but perform the same grammatical functions in different linguistic environments. Optimality Theory claims that Universal Grammar is a set of violable constraints and that language-specific grammars rank these constraints in language-specific ways.

  5. Optimal control of quantum systems: a projection approach

    International Nuclear Information System (INIS)

    Cheng, C.-J.; Hwang, C.-C.; Liao, T.-L.; Chou, G.-L.

    2005-01-01

    This paper considers the optimal control of quantum systems. The controlled quantum systems are described by the probability-density-matrix-based Liouville-von Neumann equation. Using projection operators, the states of the quantum system are decomposed into two sub-spaces, namely the 'main state' space and the 'remaining state' space. Since the control energy is limited, a solution for optimizing the external control force is proposed in which the main state is brought to the desired main state at a certain target time, while the population of the remaining state is simultaneously suppressed in order to diminish its effects on the final population of the main state. The optimization problem is formulated by maximizing a general cost functional of states and control force. An efficient algorithm is developed to solve the optimization problem. Finally, using the hydrogen fluoride (HF) molecular population transfer problem as an illustrative example, the effectiveness of the proposed scheme for a quantum system initially in a mixed state or in a pure state is investigated through numerical simulations

  6. Architectural Anthropology

    DEFF Research Database (Denmark)

    Stender, Marie

    Architecture and anthropology have always had a common focus on dwelling, housing, urban life and spatial organisation. Current developments in both disciplines make it even more relevant to explore their boundaries and overlaps. Architects are inspired by anthropological insights and methods......, while recent material and spatial turns in anthropology have also brought an increasing interest in design, architecture and the built environment. Understanding the relationship between the social and the physical is at the heart of both disciplines, and they can obviously benefit from further...... collaboration: How can qualitative anthropological approaches contribute to contemporary architecture? And just as importantly: What can anthropologists learn from architects’ understanding of spatial and material surroundings? Recent theoretical developments in anthropology stress the role of materials...

  7. Architectural Engineers

    DEFF Research Database (Denmark)

    Petersen, Rikke Premer

    engineering is addressed from two perspectives – as an educational response and an occupational constellation. Architecture and engineering are two of the traditional design professions, and they frequently meet in the occupational setting, but at educational institutions they remain largely estranged....... The paper builds on a multi-sited study of an architectural engineering program at the Technical University of Denmark and an architectural engineering team within an international engineering consultancy based in Denmark. They are both responding to new tendencies within the building industry where...... the role of engineers and architects increasingly overlaps during the design process, but their approaches reflect different perceptions of the consequences. The paper discusses some of the challenges that design education, not only within engineering, is facing today: young designers must be equipped...

  8. A New RTL Design Approach for a DCT/IDCT-Based Image Compression Architecture using the mCBE Algorithm

    Directory of Open Access Journals (Sweden)

    Rachmad Vidya Wicaksana Putra

    2012-09-01

    Full Text Available In the literature, several approaches to designing a DCT/IDCT-based image compression system have been proposed. In this paper, we present a new RTL design approach whose main focus is developing a DCT/IDCT-based image compression architecture using a self-created algorithm. This algorithm can efficiently minimize the number of shifter-adders needed to substitute for multipliers. We call this new algorithm the multiplication from Common Binary Expression (mCBE) Algorithm. Besides this algorithm, we propose alternative quantization numbers, which can be implemented simply as shifters in digital hardware. Mostly, these numbers can retain a good compressed-image quality compared to JPEG recommendations. These ideas lead to our design being small in circuit area, multiplierless, and low in complexity. The proposed 8-point 1D-DCT design has only six stages, while the 8-point 1D-IDCT design has only seven stages (one stage being defined as equal to the delay of one shifter or one 2-input adder). By using the pipelining method, we can achieve a high-speed architecture with latency as a trade-off consideration. The design has been synthesized and can reach a speed of up to 1.41 ns critical path delay (709.22 MHz).
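
    The multiplierless idea can be illustrated with a single constant: a fixed-point DCT coefficient becomes a handful of shifts and adds. The decomposition below is chosen by hand for illustration and is not produced by the authors' mCBE algorithm:

        def times_181(x):
            """Multiply by 181 = 128 + 32 + 16 + 4 + 1 using shifts and adds only.
            181/256 approximates the DCT constant cos(pi/4) = 0.7071..."""
            return (x << 7) + (x << 5) + (x << 4) + (x << 2) + x

        x = 1000
        assert times_181(x) == 181 * x
        print("x * cos(pi/4) ~", times_181(x) >> 8)    # divide by 256 with a shift

    Algorithms such as mCBE search for decompositions of this kind that share common subexpressions across all the constants of the transform, minimizing the total shifter-adder count.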

  9. Multiple 3d Approaches for the Architectural Study of the Medieval Abbey of Cormery in the Loire Valley

    Science.gov (United States)

    Pouyet, T.

    2017-02-01

    This paper focuses on the technical approaches used for a PhD thesis on the architecture and spatial organization of Benedictine abbeys in Touraine in the Middle Ages, in particular the abbey of Cormery in the heart of the Loire Valley. Monastic space is approached in a diachronic way, from the early Middle Ages to modern times, using multi-source data: architectural study, written sources, ancient maps, various iconographic documents… Many scales are used in the analysis, from the establishment of the abbeys in a territory to the scale of a building such as the tower-entrance of the church of Cormery. These methodological axes have been developed in the research unit CITERES for many years, and 3D technology is now used to go further in that field. The 3D recording of the buildings of the abbey of Cormery allows us to work at the scale of the monastery and to produce useful data such as sections or orthoimages of the ground and the wall faces, which are afterwards drawn and analysed. The study of these documents, crossed with the other historical sources, allowed us to identify walls older than previously thought and to discover construction elements that had not been recognized earlier, which enriches the debate about the construction date of the St Paul tower and the associated monastic church.

  10. A Robust Optimization Approach for Improving Service Quality

    OpenAIRE

    Andreas C. Soteriou; Richard B. Chase

    2000-01-01

    Delivering high quality service during the service encounter is central to competitive advantage in service organizations. However, achieving such high quality while controlling for costs is a major challenge for service managers. The purpose of this paper is to present an approach for addressing this challenge. The approach entails developing a model linking service process operational variables to service quality metrics to provide guidelines for service resource allocation. The approach en...

  11. A Multiscale Adaptive Mesh Refinement Approach to Architectured Steel Specification in the Design of a Frameless Stressed Skin Structure

    DEFF Research Database (Denmark)

    Nicholas, Paul; Stasiuk, David; Nørgaard, Esben

    2015-01-01

    This paper describes the development of a modelling approach for the design and fabrication of an incrementally formed, stressed-skin metal structure. The term incremental forming refers to a progression of localised plastic deformation that imparts 3D form onto a 2D metal sheet, directly from 3D...... design data. A brief introduction presents this fabrication concept, as well as the context of structures whose skin plays a significant structural role. Existing research into ISF privileges either the control of forming parameters to minimise geometric deviation, or the more accurate measurement...... of the impact of the forming process at the scale of the grain. But enhancing structural performance for architectural applications requires that both aspects be considered synthetically. We demonstrate a mesh-based approach that incorporates critical parameters at the scales of structure, element...

  12. Optimization based tuning approach for offset free MPC

    DEFF Research Database (Denmark)

    Olesen, Daniel Haugård; Huusom, Jakob Kjøbsted; Jørgensen, John Bagterp

    2012-01-01

    We present an optimization-based tuning procedure with certain robustness properties for an offset-free Model Predictive Controller (MPC). The MPC is designed for multivariate processes that can be represented by an ARX model. The advantage of ARX model representations is that standard system...... identification techniques using convex optimization can be used to identify such models from input-output data. The stochastic part of the ARX model identified from input-output data is modified with an ARMA model designed as part of the MPC design procedure to ensure offset-free control. The ARMAX...... model description resulting from the extension can be realized as a state space model in innovation form. The MPC is designed and implemented based on this state space model in innovation form. Expressions for the closed-loop dynamics of the unconstrained system are used to derive the sensitivity...

  13. Handbook of Optimization From Classical to Modern Approach

    CERN Document Server

    Snášel, Václav; Abraham, Ajith

    2013-01-01

    Optimization problems were and still are the focus of mathematics from antiquity to the present. Since the beginning of our civilization, the human race has had to confront numerous technological challenges, such as finding optimal solutions to various problems in control technologies, power source construction, applications in the economy, mechanical engineering, and energy distribution, among others. These examples encompass ancient as well as modern technologies, such as the first electrical energy distribution network in the USA. Key principles were formulated by Johannes Kepler (the problem of the wine barrels), Johann Bernoulli (the brachistochrone problem), Leonhard Euler (the calculus of variations), and Lagrange (the principle of multipliers), many of the underlying problems being of a geometric nature. At the beginning of the modern era, the works of L.V. Kantorovich and G.B. Dantzig (so-called linear programming) can be counted among others. This book disc...

  14. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    Science.gov (United States)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
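
    The estimator pipeline can be illustrated with a minimal sketch. This is not the authors' hybrid Tyler/Ledoit-Wolf estimator; plain Ledoit-Wolf shrinkage (as implemented in scikit-learn) stands in for it, and the data are synthetic.

        # Minimum-variance portfolio weights from a shrunk covariance estimate.
        import numpy as np
        from sklearn.covariance import LedoitWolf

        def min_variance_weights(returns):
            """returns: (n_samples, n_assets) array of asset returns."""
            sigma = LedoitWolf().fit(returns).covariance_
            ones = np.ones(sigma.shape[0])
            w = np.linalg.solve(sigma, ones)   # proportional to sigma^{-1} 1
            return w / w.sum()                 # normalise to sum to one

        rng = np.random.default_rng(0)
        r = 0.02 * rng.standard_normal((120, 50))   # 120 days, 50 assets (synthetic)
        w = min_variance_weights(r)
        print("weights sum to:", w.sum())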

  15. A New Interpolation Approach for Linearly Constrained Convex Optimization

    KAUST Repository

    Espinoza, Francisco

    2012-08-01

    In this thesis we propose a new class of Linearly Constrained Convex Optimization methods based on the use of a generalization of Shepard's interpolation formula. We prove the properties of the surface, such as the interpolation property at the boundary of the feasible region and the convergence of the gradient to the null space of the constraints at the boundary. We explore several descent techniques, such as steepest descent, two quasi-Newton methods, and Newton's method. Moreover, we implement several versions of the method in the Matlab language, particularly for the case of Quadratic Programming with bounded variables. Finally, we carry out performance tests against Matlab Optimization Toolbox methods for convex optimization and against implementations of the standard log-barrier and active-set methods. We conclude that the steepest descent technique seems to be the best choice so far for our method and that it is competitive with other standard methods both in performance and empirical growth order.
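
    The classical (unweighted) Shepard formula underlying the thesis can be sketched in a few lines; the generalization actually used in the thesis is not reproduced here.

        # Shepard inverse-distance-weighted interpolation (classical form).
        import numpy as np

        def shepard(x, nodes, values, p=2):
            """Interpolate at x from nodes with weights 1/||x - x_i||^p."""
            d = np.linalg.norm(nodes - x, axis=1)
            if np.any(d == 0):                 # exact hit: return the nodal value
                return float(values[np.argmin(d)])
            w = 1.0 / d**p
            return float(w @ values / w.sum())

        nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
        vals = np.array([1.0, 2.0, 3.0])
        print(shepard(np.array([0.25, 0.25]), nodes, vals))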

  16. A simple three-dimensional macroscopic root water uptake model based on the hydraulic architecture approach

    Directory of Open Access Journals (Sweden)

    V. Couvreur

    2012-08-01

    Many hydrological models including root water uptake (RWU) do not consider the dimension of root system hydraulic architecture (HA) because explicitly solving water flow in such a complex system is too time consuming. However, they might lack process understanding when basing RWU and plant water stress predictions on functions of variables such as the root length density distribution. On the basis of analytical solutions of water flow in a simple HA, we developed an "implicit" model of the root system HA for simulation of the RWU distribution (sink term of Richards' equation) and plant water stress in three-dimensional soil water flow models. The new model has three macroscopic parameters defined at the soil element scale, or at the plant scale, rather than for each segment of the root system architecture: the standard sink fraction distribution SSF, the root system equivalent conductance Krs, and the compensatory RWU conductance Kcomp. It clearly decouples the process of water stress from compensatory RWU, and its structure is appropriate for hydraulic lift simulation. Compared to a model explicitly solving water flow in a realistic maize root system HA, the implicit model was shown to be accurate for predicting the RWU distribution and plant collar water potential, with one single set of parameters, in dissimilar water dynamics scenarios. For these scenarios, the computing time of the implicit model was a factor of 28 to 214 shorter than that of the explicit one. We also provide a new expression for the effective soil water potential sensed by plants in soils with a heterogeneous water potential distribution, which emerged from the implicit model equations. The proposed implicit model of the root system HA brings new concepts that open avenues towards simple and mechanistic RWU models and water stress functions operational for field scale water dynamics simulation.
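
    Read together, the three macroscopic parameters suggest equations of the following form (a hedged reconstruction from the abstract's own symbols; the published equations may differ in detail):

        \psi_{s,\mathrm{eff}} = \sum_i \mathrm{SSF}_i\,\psi_{s,i}, \qquad
        T = K_{rs}\,\bigl(\psi_{s,\mathrm{eff}} - \psi_{\mathrm{collar}}\bigr), \qquad
        S_i = \mathrm{SSF}_i\,T + K_{\mathrm{comp}}\,\bigl(\psi_{s,i} - \psi_{s,\mathrm{eff}}\bigr)\,\mathrm{SSF}_i,

    where the effective soil water potential is the SSF-weighted mean of the local potentials, total uptake T is driven by the root system equivalent conductance, and the last term redistributes uptake compensatorily without changing its total.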

  17. Optimization of the graph model of the water conduit network, based on the approach of search space reducing

    Science.gov (United States)

    Korovin, Iakov S.; Tkachenko, Maxim G.

    2018-03-01

    In this paper we present a heuristic approach that improves the efficiency of methods used to design efficient architectures of water distribution networks. The essence of the approach is a procedure that reduces the search space by limiting the range of available pipe diameters that can be used for each edge of the network graph. To perform the reduction, two opposite boundary scenarios for the distribution of flows are analysed, after which the resulting range is further narrowed by applying a flow rate limitation for each edge of the network. The first boundary scenario provides the most uniform distribution of the flow in the network; the opposite scenario creates the network with the highest possible flow level. The parameters of both distributions are calculated by optimizing systems of quadratic functions in a confined space, which can be performed effectively at small computational cost. This approach was used to modify a genetic algorithm (GA). The proposed GA provides a variable number of variants of each gene, according to the number of diameters in the list, taking flow restrictions into account. The proposed approach was applied to a well-known test network – the Hanoi water distribution network [1] – and the results were compared with a classical GA with an unlimited search space. On the test data, the proposed approach significantly reduced the search space and provided faster and more evident convergence than the classical version of the GA.
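
    The core idea of restricting each gene to its edge-specific admissible diameter list can be sketched as follows. The admissible sets, cost function, and penalty are hypothetical placeholders; a real implementation would call a hydraulic solver for the Hanoi network.

        # GA over a reduced search space: per-edge admissible diameter lists.
        import random

        allowed = [[0.3, 0.4], [0.3, 0.4, 0.5], [0.4, 0.5, 0.6]]  # metres, per edge

        def cost(diams):   # placeholder for pipe cost + hydraulic feasibility penalty
            return sum(d**1.5 for d in diams) + 10.0 * max(0.0, 1.4 - sum(diams))

        def mutate(ind, rate=0.2):
            return [random.choice(opts) if random.random() < rate else g
                    for g, opts in zip(ind, allowed)]

        pop = [[random.choice(opts) for opts in allowed] for _ in range(30)]
        for _ in range(100):
            pop.sort(key=cost)
            pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]
        best = min(pop, key=cost)
        print(best, cost(best))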

  18. Review of Reliability-Based Design Optimization Approach and Its Integration with Bayesian Method

    Science.gov (United States)

    Zhang, Xiangnan

    2018-03-01

    Many uncertain factors arise in practical engineering, such as the external load environment, material properties, geometrical shape, initial conditions, boundary conditions, etc. Reliability methods measure the structural safety condition and determine the optimal design parameter combination based on probabilistic theory. Reliability-based design optimization (RBDO), which combines reliability theory and optimization, is the most commonly used approach to minimize the structural cost or other performance measures under uncertain variables. However, it cannot handle various kinds of incomplete information. The Bayesian approach is utilized to incorporate this kind of incomplete information in its uncertainty quantification. In this paper, the RBDO approach and its integration with the Bayesian method are reviewed.
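
    The standard RBDO formulation referred to here can be stated as (a generic textbook form, not a formulation quoted from the paper):

        \min_{\mathbf{d}} \; c(\mathbf{d})
        \quad \text{s.t.} \quad
        \Pr\!\left[ g_i(\mathbf{d}, \mathbf{X}) \le 0 \right] \le p_{f,i}^{\mathrm{target}},
        \qquad i = 1, \dots, m,

    where d are the design variables, X the random variables, g_i ≤ 0 denotes failure of the i-th limit state, and p_f^target the allowed failure probabilities.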

  19. ISP: an optimal out-of-core image-set processing streaming architecture for parallel heterogeneous systems.

    Science.gov (United States)

    Ha, Linh Khanh; Krüger, Jens; Dihl Comba, João Luiz; Silva, Cláudio T; Joshi, Sarang

    2012-06-01

    Image population analysis is the class of statistical methods that plays a central role in understanding the development, evolution, and disease of a population. However, these techniques often require excessive computational power and memory, demands that are compounded by a large number of volumetric inputs. Restricted access to supercomputing power limits their influence in general research and practical applications. In this paper we introduce ISP, an Image-Set Processing streaming framework that harnesses the processing power of commodity heterogeneous CPU/GPU systems and attempts to solve this computational problem. In ISP, we introduce specially designed streaming algorithms and data structures that provide an optimal solution for out-of-core multi-image processing problems, both in terms of memory usage and computational efficiency. ISP makes use of the asynchronous execution mechanism supported by parallel heterogeneous systems to efficiently hide the inherent latency of the processing pipeline of out-of-core approaches. Consequently, with computationally intensive problems, the ISP out-of-core solution can achieve the same performance as the in-core solution. We demonstrate the efficiency of the ISP framework on synthetic and real datasets.
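
    The latency-hiding idea can be illustrated with a minimal double-buffering sketch; this is generic producer/consumer overlap in Python, not ISP's actual CPU/GPU scheduler, and the chunk sizes and kernel are placeholders.

        # Overlap "loading" of the next chunk with processing of the current one.
        import threading, queue
        import numpy as np

        def loader(chunk_sizes, q):
            for n in chunk_sizes:          # stand-in for asynchronous disk reads
                q.put(np.full(n, 1.0))
            q.put(None)                    # sentinel: end of stream

        def process(chunk):
            return float(chunk.sum())      # stand-in for the image-set kernels

        q = queue.Queue(maxsize=2)         # bounded: at most two chunks in flight
        t = threading.Thread(target=loader, args=([10**6] * 8, q))
        t.start()
        total = 0.0
        while (chunk := q.get()) is not None:
            total += process(chunk)        # runs while the loader fetches ahead
        t.join()
        print(total)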

  20. An Evolutionary Multi-objective Approach for Speed Tuning Optimization with Energy Saving in Railway Management

    OpenAIRE

    Chevrier , Rémy

    2010-01-01

    An approach for speed tuning in railway management is presented for optimizing both travel duration and energy saving. This approach is based on a state-of-the-art evolutionary algorithm with a Pareto approach. The algorithm provides a set of diversified non-dominated solutions to the decision-maker. A case study on the Gonesse connection (France) is also reported and analyzed.

  1. RF cavity design exploiting a new derivative-free trust region optimization approach

    Directory of Open Access Journals (Sweden)

    Abdel-Karim S.O. Hassan

    2015-11-01

    In this article, a novel derivative-free (DF) surrogate-based trust region optimization approach is proposed. In the proposed approach, quadratic surrogate models are constructed and successively updated. The generated surrogate model is then optimized, instead of the underlying objective function, over trust regions. Truncated conjugate gradients are employed to find the optimal point within each trust region. The approach constructs the initial quadratic surrogate model using few data points of order O(n), where n is the number of design variables. The proposed approach adopts weighted least squares fitting for updating the surrogate model, instead of the interpolation commonly used in DF optimization. This makes the approach more suitable for stochastic optimization and for functions subject to numerical error. The weights are assigned to give more emphasis to points close to the current center point. The accuracy and efficiency of the proposed approach are demonstrated by applying it to a set of classical benchmark test problems. It is also employed to find the optimal design of an RF cavity linear accelerator, with a comparative analysis against a recent optimization technique.
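
    A toy version of the main loop conveys the structure: fit a quadratic surrogate by weighted least squares, minimise it inside the trust region, and adapt the radius. Random search stands in for the truncated conjugate-gradient subproblem solver used in the paper, and the objective is a generic test function.

        # Derivative-free trust region with a weighted-least-squares quadratic.
        import numpy as np

        def f(x):   # toy objective (Rosenbrock)
            return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

        def feat(x):   # quadratic basis in 2D
            return np.array([1.0, x[0], x[1], x[0]**2, x[0]*x[1], x[1]**2])

        def fit(pts, vals, center):   # weights emphasise points near the center
            sw = 1.0 / (1.0 + np.linalg.norm(pts - center, axis=1))
            A = np.stack([feat(p) for p in pts])
            coef, *_ = np.linalg.lstsq(A * sw[:, None], vals * sw, rcond=None)
            return coef

        rng = np.random.default_rng(1)
        center, radius = np.zeros(2), 1.0
        pts = center + rng.uniform(-1, 1, (12, 2))
        vals = np.array([f(p) for p in pts])
        for _ in range(40):
            coef = fit(pts, vals, center)
            cand = center + radius * rng.uniform(-1, 1, (200, 2))
            m = np.array([feat(c) @ coef for c in cand])
            x_new = cand[np.argmin(m)]
            pred = feat(center) @ coef - m.min()          # predicted reduction
            rho = (f(center) - f(x_new)) / max(1e-12, pred)
            radius *= 1.5 if rho > 0.75 else (0.5 if rho < 0.25 else 1.0)
            if f(x_new) < f(center):
                center = x_new
            pts = np.vstack([pts, x_new]); vals = np.append(vals, f(x_new))
        print(center, f(center))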

  2. Architectural prototyping

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2004-01-01

    A major part of software architecture design is learning how specific architectural designs balance the concerns of stakeholders. We explore the notion of "architectural prototypes", correspondingly architectural prototyping, as a means of using executable prototypes to investigate stakeholders...

  3. A Gradient Optimization Approach to Adaptive Multi-Robot Control

    Science.gov (United States)

    2009-09-01

    ... the network converges to a near-optimal coverage configuration. ... To see this property clearly, examine the magnitude of the force exerted by one neighbor (m - 1 = 1) ...

  4. Optimization approach for saddling cost of medical cyclotrons with fuzziness

    International Nuclear Information System (INIS)

    Abass, S.A.; Massoud, E.M.A.

    2007-01-01

    Most radiation fields are combinations of different kinds of radiation. The radiations of most significance are fast neutrons, thermal neutrons, primary gammas and secondary gammas. Thermos's composite shielding materials are designed to attenuate these types of radiation. The shielding design requires an accurate cost-benefit analysis based on an uncertainty optimization technique. The theory of fuzzy sets has been employed to formulate and solve the cost-benefit analysis problem for a medical cyclotron. The medical radioisotope production cyclotron considered is based in Sydney, Australia.

  5. A three-dimensional vertically aligned functionalized multilayer graphene architecture: an approach for graphene-based thermal interfacial materials.

    Science.gov (United States)

    Liang, Qizhen; Yao, Xuxia; Wang, Wei; Liu, Yan; Wong, Ching Ping

    2011-03-22

    Thermally conductive functionalized multilayer graphene sheets (fMGs) are efficiently aligned at large scale by a vacuum filtration method at room temperature, as evidenced by SEM images and polarized Raman spectroscopy. A remarkably strong anisotropy in the properties of aligned fMGs is observed. High electrical conductivity (∼386 S cm⁻¹), high thermal conductivity (∼112 W m⁻¹ K⁻¹ at 25 °C), and an ultralow coefficient of thermal expansion (∼−0.71 ppm K⁻¹) in the in-plane direction of A-fMGs are obtained without any reduction process. Aligned fMGs are vertically assembled between contacted silicon/silicon surfaces with pure indium as a metallic medium. The thus-constructed three-dimensional vertically aligned fMG thermal interfacial material (VA-fMG TIM) architecture has significantly higher equivalent thermal conductivity (75.5 W m⁻¹ K⁻¹) and lower contact thermal resistance (5.1 mm² K W⁻¹), compared with its counterpart from A-fMGs lying recumbent between silicon surfaces. This finding provides a thorough approach for graphene-based TIM assembly, as well as knowledge of vertically aligned graphene architectures, which may not only facilitate graphene's application in currently demanding thermal management but also promote its widespread application in electrodes of energy storage devices, conductive polymeric composites, etc.

  6. An iterative approach for the optimization of pavement maintenance management at the network level.

    Science.gov (United States)

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Pellicer, Eugenio; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and in how the selection process is performed. Therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply the available methods, based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios defining the suitability of each approach are identified. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach.

  8. Optimal wind power deployment in Europe. A portfolio approach

    International Nuclear Information System (INIS)

    Roques, Fabien; Hiroux, Celine; Saguan, Marcelo

    2010-01-01

    Geographic diversification of wind farms can smooth out the fluctuations in wind power generation and reduce the associated system balancing and reliability costs. The paper uses historical wind production data from five European countries (Austria, Denmark, France, Germany, and Spain) and applies Mean-Variance Portfolio theory to identify cross-country portfolios that minimise the total variance of wind production for a given level of production. Theoretical unconstrained portfolios show that countries (Spain and Denmark) with the best wind resource, or whose size contributes to smoothing out the country output variability, dominate optimal portfolios. The methodology is then elaborated to derive optimal constrained portfolios that take into account national wind resource potential and transmission constraints, and these are compared with the projected portfolios for 2020. Such constraints limit the theoretical potential efficiency gains from geographical diversification, but there is still considerable room to improve on actual or projected portfolios. These results highlight the need for more cross-border interconnection capacity, for greater coordination of European renewable support policies, and for renewable support mechanisms and electricity market designs that provide locational incentives. Under these conditions, a mechanism for renewables credit trading could help align wind power portfolios with the theoretically efficient geographic dispersion. (author)

  9. Multiobjective Optimization Modeling Approach for Multipurpose Single Reservoir Operation

    Directory of Open Access Journals (Sweden)

    Iosvany Recio Villa

    2018-04-01

    The water resources planning and management discipline recognizes the importance of a reservoir's carryover storage. However, mathematical models for reservoir operation that include carryover storage are scarce. This paper presents a novel multiobjective optimization modeling framework that uses the ε-constraint method and genetic algorithms as optimization techniques for the operation of multipurpose single reservoirs, including carryover storage. The carryover storage was conceived by modifying Kritsky and Menkel's method for reservoir design at the operational stage. The main objective function minimizes the cost of the total annual water shortage for irrigation areas connected to a reservoir, while the secondary one maximizes its energy production. The model includes operational constraints for the reservoir, Kritsky and Menkel's method, the irrigation areas, and the hydropower plant. The study is applied to the Carlos Manuel de Céspedes reservoir, establishing a 12-month planning horizon and an annual reliability of 75%. The results clearly demonstrate the applicability of the model, yielding monthly releases from the reservoir that include the carryover storage, the degree of reservoir inflow regulation, water shortages in the irrigation areas, and the energy generated by the hydroelectric plant. The main product is an operational graph that includes zones as well as rule and guide curves, which are used as triggers for long-term reservoir operation.
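
    In ε-constraint form, the bi-objective problem described reduces to a family of single-objective problems (the objective names below simply mirror the abstract):

        \min_{\mathbf{x} \in X} \; C_{\mathrm{shortage}}(\mathbf{x})
        \quad \text{s.t.} \quad
        E_{\mathrm{hydro}}(\mathbf{x}) \ge \varepsilon,

    and sweeping ε over the attainable range of energy production traces out the Pareto front between shortage cost and generation.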

  10. Compound light ion fuel cycles: An approach to optimization

    International Nuclear Information System (INIS)

    Kernbichler, W.; Heindler, M.

    1985-01-01

    The relatively high complexity and the low power density anticipated for fusion reactors have produced different attitudes towards the long-term perspective of fusion as a commercial energy source. The favourite pathway is to trust in optimization aiming at low tritium inventory, the availability of low-activation structural materials, increased redundancy, etc. In contrast, a respectable minority suggests turning away from d-t fusion, or envisaging fusion as a powerful neutron source rather than an energy source (fusion as a fissile-fuel or synfuel factory). We intend here to investigate the potential of fusion based on alternatives to d-t fuel. Such so-called "advanced fuels" require higher burn temperatures and advanced reactor concepts (high-beta confinement schemes) to compensate for their inherently lower reactivities. The experience that has been gained in fusion-oriented plasma research admittedly justifies optimism for advanced fuels to a still lesser extent than for d-t. It can, however, be argued that it may pay off to choose a developmental direction with a higher risk of failure but aiming at a more desirable end product. In order to explore this eventual desirability of advanced-fuel fusion, we assume, as has been done in the case of d-t, that the first category of problems can be successfully handled. Our goal is thus to examine the potential of advanced fuels with respect to the second category of problems, which largely determines the attractiveness of their utilization in fusion reactors.

  11. A New Reversible Database Watermarking Approach with Firefly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Mustafa Bilgehan Imamoglu

    2017-01-01

    Up-to-date information is crucial in many fields, such as medicine, science, and the stock market, where data should be distributed to clients from a centralized database. Shared databases are usually stored in data centers, from which they are distributed over an insecure public access network, the Internet. Sharing may result in a number of problems, such as unauthorized copies, alteration of data, and distribution to unauthorized people for reuse. Researchers have proposed using watermarking to prevent these problems and to claim digital rights. Many methods have been proposed recently to watermark databases to protect the digital rights of owners. In particular, optimization-based watermarking techniques have drawn attention, as they result in lower distortion and improved watermark capacity. In this work, difference expansion watermarking (DEW) with the Firefly Algorithm (FFA), a bioinspired optimization technique, is proposed to embed watermarks into relational databases. The best attribute values, yielding lower distortion and increased watermark capacity, are selected efficiently by the FFA. Experimental results indicate that the FFA has reduced complexity and results in less distortion and improved watermark capacity compared to similar works reported in the literature.
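
    The difference-expansion primitive itself (Tian-style, on a single value pair) is compact; the paper's contribution, the FFA-driven selection of attribute values, is not reproduced in this sketch.

        # Embed one bit into a pair of integers by expanding their difference.
        def de_embed(x, y, bit):
            l, h = (x + y) // 2, x - y
            h2 = 2 * h + bit                      # expanded difference carries the bit
            return l + (h2 + 1) // 2, l - h2 // 2

        def de_extract(x2, y2):
            l, h2 = (x2 + y2) // 2, x2 - y2
            bit, h = h2 & 1, h2 >> 1              # recover bit and original difference
            return l + (h + 1) // 2, l - h // 2, bit

        x2, y2 = de_embed(130, 127, 1)
        print(de_extract(x2, y2))                 # -> (130, 127, 1)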

  12. Optimal PID settings for first and second-order processes - Comparison with different controller tuning approaches

    OpenAIRE

    Pappas, Iosif

    2016-01-01

    PID controllers are extensively used in industry. Although many tuning methodologies exist, finding good controller settings is not an easy task, and optimization-based design is frequently preferred to satisfy more complex criteria. In this thesis, the focus was to find which tuning approaches, if any, exhibit close-to-optimal behavior. Pareto-optimal controllers were found for different first- and second-order processes with time delay. Performance was quantified in terms of the integrat...

  13. Extending Failure Modes and Effects Analysis Approach for Reliability Analysis at the Software Architecture Design Level

    NARCIS (Netherlands)

    Sözer, Hasan; Tekinerdogan, B.; Aksit, Mehmet; de Lemos, Rogerio; Gacek, Cristina

    2007-01-01

    Several reliability engineering approaches have been proposed to identify and recover from failures. A well-known and mature approach is the Failure Mode and Effect Analysis (FMEA) method that is usually utilized together with Fault Tree Analysis (FTA) to analyze and diagnose the causes of failures.

  14. Information rich mapping requirement to product architecture through functional system deployment: The multi entity domain approach

    DEFF Research Database (Denmark)

    Hauksdóttir, Dagný; Mortensen, Niels Henrik

    2017-01-01

    may impede the ability to evolve, maintain or reuse systems. In this paper the Multi Entity Domain Approach (MEDA) is presented. The approach combines different design information within the domain views, incorporates both Software and Hardware design and supports iterative requirements definition...

  15. Evolvable Mars Campaign Long Duration Habitation Strategies: Architectural Approaches to Enable Human Exploration Missions

    Science.gov (United States)

    Simon, Matthew A.; Toups, Larry; Howe, A. Scott; Wald, Samuel I.

    2015-01-01

    The Evolvable Mars Campaign (EMC) is the current NASA Mars mission planning effort which seeks to establish sustainable, realistic strategies to enable crewed Mars missions in the mid-2030s timeframe. The primary outcome of the Evolvable Mars Campaign is not to produce "The Plan" for sending humans to Mars, but instead its intent is to inform the Human Exploration and Operations Mission Directorate near-term key decisions and investment priorities to prepare for those types of missions. The FY'15 EMC effort focused upon analysis of integrated mission architectures to identify technically appealing transportation strategies, logistics build-up strategies, and vehicle designs for reaching and exploring Mars moons and Mars surface. As part of the development of this campaign, long duration habitats are required which are capable of supporting crew with limited resupply and crew abort during the Mars transit, Mars moons, and Mars surface segments of EMC missions. In particular, the EMC design team sought to design a single, affordable habitation system whose manufactured units could be outfitted uniquely for each of these missions and reused for multiple crewed missions. This habitat system must provide all of the functionality to safely support 4 crew for long durations while meeting mass and volume constraints for each of the mission segments set by the chosen transportation architecture and propulsion technologies. This paper describes several proposed long-duration habitation strategies to enable the Evolvable Mars Campaign through improvements in mass, cost, and reusability, and presents results of analysis to compare the options and identify promising solutions. The concepts investigated include several monolithic concepts: monolithic clean sheet designs, and concepts which leverage the co-manifested payload capability of NASA's Space Launch System (SLS) to deliver habitable elements within the Universal Payload Adaptor between the SLS upper stage and the Orion

  16. Architecture on Architecture

    DEFF Research Database (Denmark)

    Olesen, Karen

    2016-01-01

    This paper will discuss the challenges faced by architectural education today. It takes as its starting point the double commitment of any school of architecture: on the one hand the task of preserving the particular knowledge that belongs to the discipline of architecture, and on the other hand...... knowledge that is not scientific or academic but is more like a latent body of data that we find embedded in existing works of architecture. This information, it is argued, is not limited by the historical context of the work. It can be thought of as a virtual capacity – a reservoir of spatial configurations that can...... correlation between the study of existing architectures and the training of competences to design for present-day realities.

  17. Mass Optimization of Battery/Supercapacitors Hybrid Systems Based on a Linear Programming Approach

    Science.gov (United States)

    Fleury, Benoit; Labbe, Julien

    2014-08-01

    The objective of this paper is to show that, on a specific launcher-type mission profile, a 40% mass saving is expected when using battery/supercapacitor active hybridization instead of a single-battery solution. This result is based on the use of a linear programming optimization approach to perform the mass optimization of the hybrid power supply solution.
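
    The LP sizing can be sketched as below. The specific energy/power densities and the mission requirements are illustrative assumptions, not figures from the paper.

        # Minimise battery + supercapacitor mass subject to energy/power coverage.
        from scipy.optimize import linprog

        e_b, p_b = 200.0, 300.0          # battery: Wh/kg, W/kg (assumed)
        e_sc, p_sc = 5.0, 5000.0         # supercapacitor: Wh/kg, W/kg (assumed)
        E_req, P_peak = 500.0, 20000.0   # mission energy (Wh) and peak power (W)

        res = linprog(c=[1.0, 1.0],                          # total mass
                      A_ub=[[-e_b, -e_sc], [-p_b, -p_sc]],   # coverage constraints
                      b_ub=[-E_req, -P_peak],
                      bounds=[(0, None), (0, None)])
        print("battery kg, supercap kg:", res.x)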

  18. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    Science.gov (United States)

    Chiadamrong, N.; Piyathanavong, V.

    2017-12-01

    Models that aim to optimize the design of supply chain networks have gained more interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that continue until the difference between subsequent solutions satisfies a pre-determined termination criterion. The effectiveness of the proposed approach is illustrated by an example, which shows results closer to optimal, with much shorter solving times, than those obtained from a conventional simulation-based optimization model. The efficacy of this proposed hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.

  19. Metabolistic Architecture

    DEFF Research Database (Denmark)

    2013-01-01

    Textile Spaces presents different approaches to using textile as a spatial definer and artistic medium. The publication collages images and text, art and architecture, science, philosophy and literature, process and product, past, present and future. It forms an insight into soft materials' functional and poetic potentials, linking the disciplines through fragments that aim to inspire a further look into the artists' and architects' practices, while simultaneously framing these textile visions in a wider context.

  20. Quantifying loopy network architectures.

    Directory of Open Access Journals (Sweden)

    Eleni Katifori

    Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of approaches have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and of the vasculature of rat cerebral neocortex. We calculate various metrics based on the asymmetry, the cumulative size distribution, and the Strahler bifurcation ratios of the corresponding trees, and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight), and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs.

  1. The Odyssey Approach for Optimizing Federated SPARQL Queries

    DEFF Research Database (Denmark)

    Montoya, Gabriela; Skaf-Molli, Hala; Hose, Katja

    2017-01-01

    ... Nevertheless, these plans may still exhibit a high number of intermediate results or high execution times because of heuristics and inaccurate cost estimations. In this paper, we present Odyssey, an approach that uses statistics that allow for a more accurate cost estimation for federated queries and therefore...

  2. A Quasi-Robust Optimization Approach for Crew Rescheduling

    NARCIS (Netherlands)

    Veelenturf, L.P.; Potthoff, D.; Huisman, D.; Kroon, L.G.; Maroti, G.; Wagelmans, A.P.M.

    2016-01-01

    This paper studies the real-time crew rescheduling problem in case of large-scale disruptions. One of the greatest challenges of real-time disruption management is the unknown duration of the disruption. In this paper we present a novel approach for crew rescheduling where we deal with this...

  3. Architectural design led approach to sustainable tourism for the waterfront development of Kunduchi in Tanzania

    Science.gov (United States)

    Leus, M.; Winkels, P.; Hannes, E.

    2018-04-01

    In Kunduchi, located in the Kinondoni district of the Dar es Salaam Region, it is of vital importance for the lives and livelihoods of the indigenous people to preserve the typical ecosystems and to ensure the identity and economic resilience of these areas. The problem statement is as follows: In what way can sustainable tourism in Kunduchi serve as an engine for economic and social empowerment? How can Kunduchi be an inspiring example for the development of the coast of Dar es Salaam in Tanzania, which is threatened by large-scale tourist infrastructure? How can sustainable solutions be designed with respect for the local community and local traditions? Firstly, a theoretical framework that connects sustainable tourism with the sustainable development of coastal areas is defined. Assumptions made on the basis of the literature review provide parameters that play an important role in the architectural concepts. Secondly, research by design is presented in order to analyze and evaluate different scenarios outlining the opportunities for sustainable tourism on the Kunduchi site. Sustainable waterfront development is an apt subtitle, since the subtle spatial integration of these projects into the urban and water-related context of Dar es Salaam is of major importance.

  4. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    Science.gov (United States)

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, and in line with mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best block and thread configuration, considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system, considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.

  5. Reactive Robustness and Integrated Approaches for Railway Optimization Problems

    DEFF Research Database (Denmark)

    Haahr, Jørgen Thorlund

    ...... to absorb or withstand unexpected events such as delays. Making robust plans is central in order to maintain a safe and timely railway operation. This thesis focuses on reactive robustness, i.e., the ability to react once a plan is rendered infeasible in operation due to disruptions. In such time...... journeys helps the driver to drive efficiently and enhances robustness in a realistic (dynamic) environment. Four international scientific prizes have been awarded for distinct parts of the research during the course of this PhD project. The first prize was awarded for work during the "2014 RAS Problem Solving Competition", where a freight yard optimization problem was considered. The second junior (PhD) prize was awarded for the work performed in the "ROADEF/EURO Challenge 2014: Trains don't vanish!", where the planning of rolling stock movements at a large station was considered. An honorable mention...

  6. Particle Swarm Optimization Approach in a Consignment Inventory System

    Science.gov (United States)

    Sharifyazdi, Mehdi; Jafari, Azizollah; Molamohamadi, Zohreh; Rezaeiahari, Mandana; Arshizadeh, Rahman

    2009-09-01

    Consignment Inventory (CI) is a kind of inventory which is in the possession of the customer but is still owned by the supplier. This creates a condition of shared risk, whereby the supplier risks the capital investment associated with the inventory while the customer risks dedicating retail space to the product. This paper considers both the vendor's and the retailers' costs in an integrated model. The vendor here is a warehouse which stores one type of product and supplies it at the same wholesale price to multiple retailers, who then sell the product in independent markets at retail prices. Our main aim is to design a CI system which generates minimum costs for the two parties. A Particle Swarm Optimization (PSO) algorithm is developed to calculate the proper values. Finally, a sensitivity analysis is performed to examine the effects of each parameter on the decision variables, and the performance of PSO is compared with that of a genetic algorithm.
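
    A minimal global-best PSO loop shows the mechanics; the sphere function stands in for the consignment-inventory cost model, which is not reproduced here.

        # Global-best particle swarm optimization (sketch).
        import numpy as np

        def cost(x):
            return float(np.sum(x**2))     # toy objective

        rng = np.random.default_rng(2)
        n, dim, w, c1, c2 = 25, 4, 0.7, 1.5, 1.5
        x = rng.uniform(-5, 5, (n, dim)); v = np.zeros((n, dim))
        pbest, pval = x.copy(), np.array([cost(p) for p in x])
        g = pbest[np.argmin(pval)]
        for _ in range(200):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)   # inertia + memory + social
            x = x + v
            vals = np.array([cost(p) for p in x])
            better = vals < pval
            pbest[better], pval[better] = x[better], vals[better]
            g = pbest[np.argmin(pval)]
        print(g, cost(g))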

  7. Heuristic versus statistical physics approach to optimization problems

    International Nuclear Information System (INIS)

    Jedrzejek, C.; Cieplinski, L.

    1995-01-01

    Optimization is a crucial ingredient of many calculation schemes in science and engineering. In this paper we assess several classes of methods: heuristic algorithms, methods directly relying on statistical physics such as the mean-field method and simulated annealing; and Hopfield-type neural networks and genetic algorithms partly related to statistical physics. We perform the analysis for three types of problems: (1) the Travelling Salesman Problem, (2) vector quantization, and (3) traffic control problem in multistage interconnection network. In general, heuristic algorithms perform better (except for genetic algorithms) and much faster but have to be specific for every problem. The key to improving the performance could be to include heuristic features into general purpose statistical physics methods. (author)

  8. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    Science.gov (United States)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.

  9. Probabilistic safety assessment and optimal control of hazardous technological systems. A marked point process approach

    International Nuclear Information System (INIS)

    Holmberg, J.

    1997-04-01

    The thesis models risk management as an optimal control problem for a stochastic process. The approach classifies the decisions made by management into three categories according to the control methods of a point process: (1) planned process lifetime, (2) modification of the design, and (3) operational decisions. The approach is used for the optimization of plant shutdown criteria and surveillance test strategies of a hypothetical nuclear power plant.

  10. Probabilistic safety assessment and optimal control of hazardous technological systems. A marked point process approach

    Energy Technology Data Exchange (ETDEWEB)

    Holmberg, J [VTT Automation, Espoo (Finland)

    1997-04-01

    The thesis models risk management as an optimal control problem for a stochastic process. The approach classifies the decisions made by management into three categories according to the control methods of a point process: (1) planned process lifetime, (2) modification of the design, and (3) operational decisions. The approach is used for the optimization of plant shutdown criteria and surveillance test strategies of a hypothetical nuclear power plant. 62 refs. The thesis also includes five previous publications by the author.

  11. Digital museums of the imagined architecture: an integrated approach to the definition of cultural heritage’s knowledge paths

    Directory of Open Access Journals (Sweden)

    Aldo R.D. Accardi

    2016-12-01

    The aim of this work is to highlight a multidisciplinary approach for defining new ways of understanding architectures and urban contexts that were drawn but never built. In particular, the focus is on a set of 18th-century representations created in the field of scenic illusion. The work was first carried out to define the structure of a Digital Museum Ontology, a complex semantic resource able to store documents of various typologies. Then, a digital museum project was elaborated which, starting from existing images, will introduce an interactive way of experiencing the heritage in question. This experimentation is intended to establish a best practice for the creation of virtual exhibitions.

  12. A penalty guided stochastic fractal search approach for system reliability optimization

    International Nuclear Information System (INIS)

    Mellal, Mohamed Arezki; Zio, Enrico

    2016-01-01

    Modern industry requires components and systems with high reliability levels. In this paper, we address the system reliability optimization problem. A penalty guided stochastic fractal search approach is developed for solving reliability allocation, redundancy allocation, and reliability–redundancy allocation problems. Numerical results of ten case studies are presented as benchmark problems for highlighting the superiority of the proposed approach compared to others from literature. - Highlights: • System reliability optimization is investigated. • A penalty guided stochastic fractal search approach is developed. • Results of ten case studies are compared with previously published methods. • Performance of the approach is demonstrated.

  13. Geometry Optimization Approaches of Inductively Coupled Printed Spiral Coils for Remote Powering of Implantable Biomedical Sensors

    Directory of Open Access Journals (Sweden)

    Sondos Mehri

    2016-01-01

    Electronic biomedical implantable sensors need power to perform. Among the main reported approaches, the inductive link is the most commonly used method for remote powering of such devices. Power efficiency is the most important characteristic to be considered when designing inductive links to transfer energy to implantable biomedical sensors. The maximum power efficiency is obtained for maximum coupling and quality factors of the coils and is generally limited, as the coupling between the inductors is usually very small. This paper deals with the geometry optimization of inductively coupled printed spiral coils for powering a given implantable sensor system. To this aim, Iterative Procedure (IP) and Genetic Algorithm (GA) analytic optimization approaches are proposed. Both of these approaches implement simple mathematical models that approximate the coil parameters and the link efficiency values. Using numerical simulations based on the Finite Element Method (FEM), and with experimental validation, the proposed analytic approaches are shown to achieve more accurate performance than a reference design case. The analytical GA and IP optimization methods are also compared to a purely FEM-based numerical optimization approach (GA-FEM). Numerical and experimental validations confirmed the accuracy and the effectiveness of the analytical optimization approaches in designing the optimal coil geometries for the best efficiency values.
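
    For context, the standard figure of merit for a two-coil inductive link (a textbook result, not a formula quoted from the paper) ties the maximum achievable link efficiency to the coupling coefficient k and the coil quality factors:

        \eta_{\max} = \frac{k^2 Q_1 Q_2}{\left(1 + \sqrt{1 + k^2 Q_1 Q_2}\right)^2},

    which makes explicit why geometry optimization targets the product k²Q₁Q₂.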

  14. Functional architecture of visual emotion recognition ability: A latent variable approach.

    Science.gov (United States)

    Lewis, Gary J; Lefevre, Carmen E; Young, Andrew W

    2016-05-01

    Emotion recognition has been a focus of considerable attention for several decades. However, despite this interest, the underlying structure of individual differences in emotion recognition ability has been largely overlooked and thus is poorly understood. For example, limited knowledge exists concerning whether recognition ability for one emotion (e.g., disgust) generalizes to other emotions (e.g., anger, fear). Furthermore, it is unclear whether emotion recognition ability generalizes across modalities, such that those who are good at recognizing emotions from the face, for example, are also good at identifying emotions from nonfacial cues (such as cues conveyed via the body). The primary goal of the current set of studies was to address these questions through establishing the structure of individual differences in visual emotion recognition ability. In three independent samples (Study 1: n = 640; Study 2: n = 389; Study 3: n = 303), we observed that the ability to recognize visually presented emotions is based on different sources of variation: a supramodal emotion-general factor, supramodal emotion-specific factors, and face- and within-modality emotion-specific factors. In addition, we found evidence that general intelligence and alexithymia were associated with supramodal emotion recognition ability. Autism-like traits, empathic concern, and alexithymia were independently associated with face-specific emotion recognition ability. These results (a) provide a platform for further individual differences research on emotion recognition ability, (b) indicate that differentiating levels within the architecture of emotion recognition ability is of high importance, and (c) show that the capacity to understand expressions of emotion in others is linked to broader affective and cognitive processes.

  15. On the equivalent static loads approach for dynamic response structural optimization

    DEFF Research Database (Denmark)

    Stolpe, Mathias

    2014-01-01

    The equivalent static loads algorithm is an increasingly popular approach to solve dynamic response structural optimization problems. The algorithm is based on solving a sequence of related static response structural optimization problems with the same objective and constraint functions...... as the original problem. The optimization theoretical foundation of the algorithm is mainly developed in Park and Kang (J Optim Theory Appl 118(1):191–200, 2003). In that article it is shown, for a certain class of problems, that if the equivalent static loads algorithm terminates then the KKT conditions...

  16. Method of transient identification based on a possibilistic approach, optimized by genetic algorithm

    International Nuclear Information System (INIS)

    Almeida, Jose Carlos Soares de

    2001-02-01

    This work develops a method for transient identification based on a possibilistic approach, optimized by a genetic algorithm that optimizes the number of centroids of the classes representing the transients. The basic idea of the proposed method is to optimize the partition of the search space, generating subsets within the classes of a partition, defined as subclasses, whose centroids are able to distinguish the classes with the maximum number of correct classifications. The interpretation of the subclasses as fuzzy sets and the possibilistic approach provide a heuristic to establish influence zones of the centroids, allowing a "don't know" answer for unknown transients, that is, those outside the training set. (author)

  17. Optimal Investment Under Transaction Costs: A Threshold Rebalanced Portfolio Approach

    Science.gov (United States)

    Tunc, Sait; Donmez, Mehmet Ali; Kozat, Suleyman Serdar

    2013-06-01

    We study optimal investment in a financial market having a finite number of assets from a signal processing perspective. We investigate how an investor should distribute capital over these assets and when he should reallocate the distribution of the funds over these assets to maximize the cumulative wealth over any investment period. In particular, we introduce a portfolio selection algorithm that maximizes the expected cumulative wealth in i.i.d. two-asset discrete-time markets where the market levies proportional transaction costs in buying and selling stocks. We achieve this using "threshold rebalanced portfolios", where trading occurs only if the portfolio breaches certain thresholds. Under the assumption that the relative price sequences have log-normal distribution from the Black-Scholes model, we evaluate the expected wealth under proportional transaction costs and find the threshold rebalanced portfolio that achieves the maximal expected cumulative wealth over any investment period. Our derivations can be readily extended to markets having more than two stocks, where these extensions are pointed out in the paper. As predicted from our derivations, we significantly improve the achieved wealth over portfolio selection algorithms from the literature on historical data sets.
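
    The rebalancing rule can be sketched on a synthetic two-asset path; the band, cost level, and price model below are illustrative assumptions, not the paper's calibrated setting.

        # Threshold rebalancing: trade back to the target mix only on band breach.
        import numpy as np

        rng = np.random.default_rng(3)
        prices = np.cumprod(1 + 0.001 + 0.02 * rng.standard_normal((1000, 2)), axis=0)
        target = np.array([0.5, 0.5])          # desired portfolio split
        alpha, beta, tc = 0.4, 0.6, 0.001      # no-trade band and proportional cost
        shares = target / prices[0]            # start with unit wealth
        for p in prices[1:]:
            w0 = shares[0] * p[0] / (shares @ p)   # weight of the first asset
            if w0 < alpha or w0 > beta:            # breach: rebalance to target
                wealth = shares @ p
                traded = np.abs(wealth * target / p - shares) @ p
                shares = (wealth - tc * traded) * target / p
        print("final wealth:", shares @ prices[-1])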

  18. A systemic approach for optimal cooling tower operation

    International Nuclear Information System (INIS)

    Cortinovis, Giorgia F.; Paiva, Jose L.; Song, Tah W.; Pinto, Jose M.

    2009-01-01

    The thermal performance of a cooling tower and its cooling water system is critical for industrial plants, and small deviations from the design conditions may cause severe instability in the operation and economics of the process. External disturbances such as variation in the thermal demand of the process or oscillations in atmospheric conditions may be suppressed in multiple ways. Nevertheless, such alternatives are hardly ever implemented in the industrial operation due to the poor coordination between the utility and process sectors. The complexity of the operation increases because of the strong interaction among the process variables. In the present work, an integrated model for the minimization of the operating costs of a cooling water system is developed. The system is composed of a cooling tower as well as a network of heat exchangers. After the model is verified, several cases are studied with the objective of determining the optimal operation. It is observed that the most important operational resources to mitigate disturbances in the thermal demand of the process are, in this order: the increase in recycle water flow rate, the increase in air flow rate and finally the forced removal of a portion of the water flow rate that enters the cooling tower with the corresponding make-up flow rate.

  19. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    Energy Technology Data Exchange (ETDEWEB)

    AlRashidi, M.R., E-mail: malrash2002@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait); AlHajri, M.F., E-mail: mfalhajri@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait)

    2011-10-15

    Highlights: → A new hybrid PSO for optimal DGs placement and sizing. → Statistical analysis to fine tune PSO parameters. → Novel constraint handling mechanism to handle different constraints types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiment is used to fine tune the proposed approach via proper analysis of PSO parameters interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the proposed approach's consistency in detecting optimal or near optimal solutions. Results are compared with those of Sequential Quadratic Programming.
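
    A compact sketch of the mixed continuous/discrete PSO idea is given below; the loss surrogate, bounds, and PSO coefficients are invented for illustration, whereas the paper evaluates real power losses with a radial power flow and its own constraint handling.

      import numpy as np

      rng = np.random.default_rng(1)
      n_bus, n_particles, iters = 69, 30, 100

      def losses(bus, size):
          """Assumed toy surrogate; the paper runs a radial power flow instead."""
          return (size - 1.2) ** 2 + 0.001 * (bus - 61) ** 2

      low, high = np.array([1.0, 0.1]), np.array([n_bus, 3.0])   # bus, size bounds
      pos = rng.uniform(low, high, size=(n_particles, 2))
      vel = np.zeros_like(pos)
      evaluate = lambda P: np.array([losses(int(round(b)), s) for b, s in P])
      pbest, pbest_f = pos.copy(), evaluate(pos)
      gbest = pbest[pbest_f.argmin()].copy()

      w, c1, c2 = 0.7, 1.5, 1.5
      for _ in range(iters):
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, low, high)        # simple bound handling
          f = evaluate(pos)
          better = f < pbest_f
          pbest[better], pbest_f[better] = pos[better], f[better]
          gbest = pbest[pbest_f.argmin()].copy()
      print(f"best bus = {int(round(gbest[0]))}, best size = {gbest[1]:.2f} MW")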

  1. Architectures for wrist-worn energy harvesting

    Science.gov (United States)

    Rantz, R.; Halim, M. A.; Xue, T.; Zhang, Q.; Gu, L.; Yang, K.; Roundy, S.

    2018-04-01

    This paper reports the simulation-based analysis of six dynamical structures with respect to their wrist-worn vibration energy harvesting capability. This work approaches the problem of maximizing energy harvesting potential at the wrist by considering multiple mechanical substructures; rotational and linear motion-based architectures are examined. Mathematical models are developed and experimentally corroborated. An optimization routine is applied to the proposed architectures to maximize average power output and allow for comparison. The addition of a linear spring element to the structures has the potential to improve power output; for example, in the case of rotational structures, a 211% improvement in power output was estimated under real walking excitation. The analysis concludes that a sprung rotational harvester architecture outperforms a sprung linear architecture by 66% when real walking data is used as input to the simulations.

  2. Monte Carlo simulations on SIMD computer architectures

    International Nuclear Information System (INIS)

    Burmester, C.P.; Gronsky, R.; Wille, L.T.

    1992-01-01

    In this paper algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.
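
    The lattice-partitioning idea maps naturally onto data-parallel array operations. The sketch below, with assumed parameters, runs a checkerboard Metropolis update of a 2D nearest-neighbour Ising model: spins of one colour have no same-colour neighbours, so the whole sublattice can be updated in one vectorized (SIMD-like) step.

      import numpy as np

      rng = np.random.default_rng(0)
      L, beta, sweeps = 64, 0.4, 200
      spins = rng.choice([-1, 1], size=(L, L))
      ii, jj = np.indices((L, L))
      masks = [(ii + jj) % 2 == c for c in (0, 1)]   # the two checkerboard colours

      for _ in range(sweeps):
          for mask in masks:                          # update one colour at a time
              nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
                     np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
              dE = 2 * spins * nbr                    # energy cost of flipping
              # Metropolis rule: exp(-beta*dE) >= 1 whenever dE <= 0
              flip = (rng.random((L, L)) < np.exp(-beta * dE)) & mask
              spins[flip] *= -1
      print("magnetization per spin:", spins.mean())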

  3. Tectonic thinking in contemporary industrialized architecture

    DEFF Research Database (Denmark)

    Beim, Anne

    2013-01-01

    This paper argues for a new critical approach to the ways architectural design strategies are developing. Contemporary construction industry appears to evolve into highly specialized and optimized processes driven by industrialized manufacturing; therefore the role of the architect and the understanding of the architectural design process ought to be revised. The paper is based on the following underlying hypothesis: ‘Tectonic thinking – defined as a central attention towards the nature, the properties, and the application of building materials (construction) and how this attention forms a creative force in building constructions, structural features and architectural design (construing) – helps to identify and refine technology transfer in contemporary industrialized building construction.’ Through various references from the construction industry, business theory and architectural practice...

  4. Synthesis of biorefinery networks using a superstructure optimization based approach

    DEFF Research Database (Denmark)

    Bertran, Maria-Ona; Anaya-Reza, Omar; Lopez-Arenas, Maria Teresa

    Petroleum is currently the primary raw material for the production of fuels and chemicals. Consequently, our society is highly dependent on fossil non-renewable resources. However, renewable raw materials are recently receiving increasing interest for the production of chemicals and fuels, so a n...... of the proposed approach is shown through a practical case study for the production of valuable products (i.e. lysine and lactic acid) from sugarcane molasses; these alternatives are considered with respect to availability and demands in Mexico [4]....

  5. Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.

    Science.gov (United States)

    Patri, Jean-François; Diard, Julien; Perrier, Pascal

    2015-12-01

    The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way.
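
    The core move of the paper, replacing the single cost-minimizing control with a posterior over controls, can be sketched in a few lines. The 1-D control variable, quadratic cost, and sharpness parameter below are stand-ins for the vocal-tract model, not the paper's actual formulation: sampling the posterior p(u | goal) ∝ exp(-λ·cost(u)) yields token-to-token variability that a pure argmin cannot.

      import numpy as np

      rng = np.random.default_rng(2)
      lam = 50.0                                   # sharpness of optimality prior
      u_grid = np.linspace(-1.0, 1.0, 2001)        # candidate control values

      def cost(u):
          return (u - 0.3) ** 2                    # assumed accuracy/effort cost

      post = np.exp(-lam * cost(u_grid))
      post /= post.sum()                           # discretized posterior p(u|goal)

      tokens = rng.choice(u_grid, size=10, p=post) # ten "productions" of one target
      print("deterministic optimum:", u_grid[np.argmin(cost(u_grid))])
      print("sampled tokens:", np.round(tokens, 3))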

  6. Reliability-redundancy optimization by means of a chaotic differential evolution approach

    International Nuclear Information System (INIS)

    Coelho, Leandro dos Santos

    2009-01-01

    The reliability design is related to the performance analysis of many engineering systems. Reliability-redundancy optimization problems involve the selection of components with multiple choices and redundancy levels that produce maximum benefits, subject to cost, weight, and volume constraints. Classical mathematical methods have failed in handling nonconvexities and nonsmoothness in optimization problems. As an alternative to the classical optimization approaches, meta-heuristics have been given much attention by many researchers due to their ability to find an almost global optimal solution in reliability-redundancy optimization problems. Evolutionary algorithms (EAs) - paradigms of the evolutionary computation field - are stochastic and robust meta-heuristics useful for solving reliability-redundancy optimization problems. EAs such as genetic algorithms, evolutionary programming, evolution strategies and differential evolution are being used to find global or near-global optimal solutions. A differential evolution approach based on chaotic sequences using Lozi's map for reliability-redundancy optimization problems is proposed in this paper. The proposed method has a fast convergence rate but also maintains the diversity of the population so as to escape from local optima. An application example in reliability-redundancy optimization based on the overspeed protection system of a gas turbine is given to show its usefulness and efficiency. Simulation results show that the application of deterministic chaotic sequences instead of random sequences is a possible strategy to improve the performance of differential evolution.
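
    A minimal sketch of the idea, assuming a sphere stand-in objective instead of the reliability-redundancy model: a DE/rand/1/bin loop in which every call to the random number generator is replaced by a normalized Lozi-map value.

      import numpy as np

      def lozi_stream(n, a=1.7, b=0.5, x=0.1, y=0.1):
          """Generate n values of the Lozi map, min-max normalized into [0, 1]."""
          vals = np.empty(n)
          for i in range(n):
              x, y = 1.0 - a * abs(x) + y, b * x
              vals[i] = x
          return (vals - vals.min()) / (vals.max() - vals.min())

      def sphere(v):                      # stand-in objective
          return float(np.sum(v * v))

      dim, pop_size, gens, F, CR = 5, 20, 300, 0.8, 0.9
      chaos = iter(lozi_stream(200_000))  # chaotic numbers replace rand() calls
      nxt = lambda: next(chaos)

      pop = np.array([[nxt() * 10 - 5 for _ in range(dim)] for _ in range(pop_size)])
      fit = np.array([sphere(v) for v in pop])
      for _ in range(gens):
          for i in range(pop_size):
              # DE/rand/1/bin (index-distinctness checks omitted for brevity)
              r1, r2, r3 = (int(nxt() * pop_size) % pop_size for _ in range(3))
              mutant = pop[r1] + F * (pop[r2] - pop[r3])
              trial = np.where([nxt() < CR for _ in range(dim)], mutant, pop[i])
              ft = sphere(trial)
              if ft < fit[i]:             # greedy one-to-one selection
                  pop[i], fit[i] = trial, ft
      print("best objective found:", fit.min())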

  7. Optimizing denominator data estimation through a multimodel approach

    Directory of Open Access Journals (Sweden)

    Ward Bryssinckx

    2014-05-01

    To assess the risk of (zoonotic) disease transmission in developing countries, decision makers generally rely on distribution estimates of animals from survey records or projections of historical enumeration results. Given the high cost of large-scale surveys, the sample size is often restricted and the accuracy of estimates is therefore low, especially when a high spatial resolution is applied. This study explores possibilities of improving the accuracy of livestock distribution maps without additional samples using spatial modelling based on regression tree forest models, developed using subsets of the Uganda 2008 Livestock Census data, and several covariates. The accuracy of these spatial models as well as the accuracy of an ensemble of a spatial model and direct estimate was compared to direct estimates and “true” livestock figures based on the entire dataset. The new approach is shown to effectively increase the livestock estimate accuracy (median relative error decrease of 0.166-0.037 for total sample sizes of 80-1,600 animals, respectively). This outcome suggests that the accuracy levels obtained with direct estimates can indeed be achieved with lower sample sizes using the multimodel approach presented here, indicating a more efficient use of financial resources.
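
    A toy version of the multimodel combination, with synthetic data standing in for the census subsets and covariates: a random forest predicts density from covariates, and the ensemble simply averages the model prediction with the direct survey estimate.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(3)
      n = 500
      covariates = rng.normal(size=(n, 4))             # e.g. land cover, climate
      true_density = 50 + 10 * covariates[:, 0] - 5 * covariates[:, 1]
      direct = true_density + rng.normal(0, 15, n)     # noisy small-sample estimate

      model = RandomForestRegressor(n_estimators=200, random_state=0)
      model.fit(covariates, direct)                    # trained on survey records
      spatial = model.predict(covariates)
      ensemble = 0.5 * (spatial + direct)              # multimodel combination

      for name, est in [("direct", direct), ("spatial", spatial),
                        ("ensemble", ensemble)]:
          err = np.median(np.abs(est - true_density) / true_density)
          print(f"{name:9s} median relative error: {err:.3f}")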

  8. A comparison of two closely-related approaches to aerodynamic design optimization

    Science.gov (United States)

    Shubin, G. R.; Frank, P. D.

    1991-01-01

    Two related methods for aerodynamic design optimization are compared. The methods, called the implicit gradient approach and the variational (or optimal control) approach, both attempt to obtain gradients necessary for numerical optimization at a cost significantly less than that of the usual black-box approach that employs finite difference gradients. While the two methods are seemingly quite different, they are shown to differ (essentially) in that the order of discretizing the continuous problem, and of applying calculus, is interchanged. Under certain circumstances, the two methods turn out to be identical. We explore the relationship between these methods by applying them to a model problem for duct flow that has many features in common with transonic flow over an airfoil. We find that the gradients computed by the variational method can sometimes be sufficiently inaccurate to cause the optimization to fail.
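
    The cost difference between the two gradient philosophies and the black-box baseline can be illustrated on a linear toy state equation A(x)u = b (all values assumed): the implicit/adjoint route prices one extra linear solve per objective regardless of how many design variables there are, while finite differences price one extra full state solve per design variable.

      import numpy as np

      def solve_state(x):
          """State equation A(x) u = b; x scales one stiffness-like entry."""
          A = np.array([[2.0 + x, 1.0], [1.0, 3.0]])
          return A, np.linalg.solve(A, np.array([1.0, 2.0]))

      def J(u):                                    # objective on the state
          return 0.5 * float(u @ u)

      x0 = 0.5
      A, u = solve_state(x0)

      # Implicit/adjoint gradient: solve A^T lam = dJ/du once,
      # then dJ/dx = -lam^T (dA/dx) u.
      lam = np.linalg.solve(A.T, u)                # dJ/du = u for this J
      dA_dx = np.array([[1.0, 0.0], [0.0, 0.0]])
      grad_adjoint = -float(lam @ (dA_dx @ u))

      # Black-box gradient: one extra full state solve per design variable.
      h = 1e-6
      _, u_h = solve_state(x0 + h)
      grad_fd = (J(u_h) - J(u)) / h
      print(f"adjoint: {grad_adjoint:.8f}  finite difference: {grad_fd:.8f}")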

  9. A Domain-Driven Approach to Digital Curation and Preservation of 3D Architectural Data

    DEFF Research Database (Denmark)

    Lindlar, Michelle; Tamke, Martin

    2014-01-01

    and geometric enrichment, consistent naming schemas and ontologies, as well as pre-ingest tasks in OAIS compliant digital preservation workflows. In a first step, the project identified stakeholders for methods and processes. Since the project strives for a holistic digital preservation approach, different...

  10. A FINITE-ELEMENTS APPROACH TO THE STUDY OF FUNCTIONAL ARCHITECTURE IN SKELETAL-MUSCLE

    NARCIS (Netherlands)

    OTTEN, E; HULLIGER, M

    1994-01-01

    A mathematical model that simulates the mechanical processes inside a skeletal muscle under various conditions of muscle recruitment was formulated. The model is based on the finite-elements approach and simulates both contractile and passive elastic elements. Apart from the classic strategy of

  11. Optimal and Approximate Approaches for Deployment of Heterogeneous Sensing Devices

    Directory of Open Access Journals (Sweden)

    Rabie Ramadan

    2007-04-01

    A modeling framework for the problem of deploying a set of heterogeneous sensors in a field with time-varying differential surveillance requirements is presented. The problem is formulated as a mixed integer mathematical program with the objective to maximize coverage of a given field. Two metaheuristics are used to solve this problem. The first heuristic adopts a genetic algorithm (GA) approach while the second heuristic implements a simulated annealing (SA) algorithm. A set of experiments is used to illustrate the capabilities of the developed models and to compare their performance. The experiments investigate the effect of parameters related to the size of the sensor deployment problem, including the number of deployed sensors, the size of the monitored field, and the length of the monitoring horizon. They also examine several endogenous parameters related to the developed GA and SA algorithms.
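
    As an illustration of the SA heuristic on a stripped-down version of the problem, the sketch below places k sensors on a square grid to maximize covered cells; grid size, sensing radius, and the cooling schedule are assumptions.

      import numpy as np

      rng = np.random.default_rng(4)
      grid, k, radius, T = 20, 5, 4.0, 5.0
      cells = np.array([(i, j) for i in range(grid) for j in range(grid)], float)

      def coverage(sensors):
          """Number of grid cells within sensing radius of at least one sensor."""
          d = np.linalg.norm(cells[:, None, :] - sensors[None, :, :], axis=2)
          return int((d.min(axis=1) <= radius).sum())

      state = rng.uniform(0, grid, size=(k, 2))
      cov = coverage(state)
      best, best_cov = state.copy(), cov
      for step in range(2000):
          cand = state.copy()
          cand[rng.integers(k)] += rng.normal(0.0, 1.5, 2)      # move one sensor
          cand = np.clip(cand, 0, grid)
          c = coverage(cand)
          if c >= cov or rng.random() < np.exp((c - cov) / T):  # Metropolis rule
              state, cov = cand, c
              if cov > best_cov:
                  best, best_cov = state.copy(), cov
          T *= 0.998                                            # geometric cooling
      print(f"covered cells: {best_cov} / {grid * grid}")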

  12. [Optimization of organizational approaches to management of patients with atherosclerosis].

    Science.gov (United States)

    Barbarash, L S; Barbarash, O L; Artamonova, G V; Sumin, A N

    2014-01-01

    Despite undoubted achievements of modern cardiology in the prevention and treatment of atherosclerosis, cardiologists, neurologists, and vascular surgeons still face severe stenotic atherosclerotic lesions in different vascular regions, both symptomatic and asymptomatic. As a rule, hemodynamically significant stenoses of different locations are found only after an acute vascular event has occurred. In this regard, it is crucial to actively screen for arterial stenoses in different vascular regions at the patient's first contact with care providers, whenever symptoms of ischemia of any location are present. Further monitoring of these stenoses is also important. The article is dedicated to innovative organizational approaches to the provision of healthcare to patients suffering from circulatory system diseases, approaches that have contributed to the improvement of the demographic situation in Kuzbass.

  13. Optimization approaches for treating nuclear power plant problems

    International Nuclear Information System (INIS)

    Abdelgoad, A.S.A.

    2012-01-01

    Electricity generation is the process of generating electric energy from other forms of energy. There are many technologies that can be and are used to generate electricity. One of these technologies is nuclear power. A nuclear power plant (NPP) is a thermal power station in which the heat source is one or more nuclear reactors. As in a conventional thermal power station, the heat is used to generate steam which drives a steam turbine connected to a generator which produces electricity. As of February 2nd, 2012, there were 439 nuclear power plants in operation throughout the world. NPPs are usually considered to be base load stations, which are best suited to constant power output. The thesis consists of five chapters: Chapter I presents a survey of some important concepts of NPP problems. Chapter II introduces the economic future of nuclear power. It presents nuclear energy scenarios beyond 2015, market potential for electricity generation to 2030 and the economics of new plant construction. Chapter III presents a reliability-centered problem of power plant preventive maintenance scheduling. An NPP preventive maintenance scheduling problem with fuzzy parameters in the constraints is solved. A case study is provided to demonstrate the efficiency of the proposed model. A comparison study between the deterministic case and the fuzzy case for the problem of concern is carried out. Chapter IV introduces a fuzzy approach to the generation expansion planning (GEP) problem in a multiobjective environment. The GEP problem is formulated as an integer programming model with fuzzy parameters in the constraints. A parametric study is carried out for the GEP problem. A case study is provided to demonstrate the efficiency of the proposed model. A comparison study between this approach and the deterministic one is made. Chapter V is concerned with the conclusions arrived at in carrying out this thesis and gives some suggestions for further research.

  14. A general approach for optimal kinematic design of 6-DOF parallel ...

    Indian Academy of Sciences (India)

    Optimal kinematic design of parallel manipulators is a challenging problem. In this work, an attempt has been made to present a generalized approach of kinematic design for a 6-legged parallel manipulator, by considering only the minimally required design parameters. The same approach has been used to design a ...

  15. Tomographic Reconstruction from a Few Views: A Multi-Marginal Optimal Transport Approach

    Energy Technology Data Exchange (ETDEWEB)

    Abraham, I., E-mail: isabelle.abraham@cea.fr [CEA Ile de France (France); Abraham, R., E-mail: romain.abraham@univ-orleans.fr; Bergounioux, M., E-mail: maitine.bergounioux@univ-orleans.fr [Université d’Orléans, UFR Sciences, MAPMO, UMR 7349 (France); Carlier, G., E-mail: carlier@ceremade.dauphine.fr [CEREMADE, UMR CNRS 7534, Université Paris IX Dauphine, Pl. de Lattre de Tassigny (France)

    2017-02-15

    In this article, we focus on tomographic reconstruction. The problem is to determine the shape of the interior interface using a tomographic approach while very few X-ray radiographs are performed. We use a multi-marginal optimal transport approach. Preliminary numerical results are presented.

  16. An Efficient Approach for Solving Mesh Optimization Problems Using Newton’s Method

    Directory of Open Access Journals (Sweden)

    Jibum Kim

    2014-01-01

    We present an efficient approach for solving various mesh optimization problems. Our approach is based on Newton's method, which uses both first-order (gradient) and second-order (Hessian) derivatives of the nonlinear objective function. The volume and surface mesh optimization algorithms are developed such that mesh validity and surface constraints are satisfied. We also propose several Hessian modification methods for when the Hessian matrix is not positive definite. We demonstrate our approach by comparing our method with nonlinear conjugate gradient and steepest descent methods in terms of both efficiency and mesh quality.
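
    The Hessian-modification ingredient can be sketched independently of meshes: when the Hessian at the current iterate is indefinite, shift it by tau*I until it is positive definite so the Newton step is a descent direction. The 2-D objective below is a stand-in for a mesh-quality function, and the line search is omitted for brevity.

      import numpy as np

      def f(x):
          return x[0]**4 - 2*x[0]**2 + x[0]*x[1] + x[1]**2

      def grad(x):
          return np.array([4*x[0]**3 - 4*x[0] + x[1], x[0] + 2*x[1]])

      def hess(x):
          return np.array([[12*x[0]**2 - 4.0, 1.0], [1.0, 2.0]])

      x = np.array([0.1, 0.1])          # start where the Hessian is indefinite
      for _ in range(50):
          g, H = grad(x), hess(x)
          if np.linalg.norm(g) < 1e-10:
              break
          tau = 0.0                     # shift until H + tau*I is positive definite
          while np.linalg.eigvalsh(H + tau * np.eye(2)).min() <= 1e-8:
              tau = max(2.0 * tau, 1e-3)
          x = x - np.linalg.solve(H + tau * np.eye(2), g)
      print("minimizer:", np.round(x, 4), " f:", f(x))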

  17. Stochastic Real-Time Optimal Control: A Pseudospectral Approach for Bearing-Only Trajectory Optimization

    Science.gov (United States)

    2011-09-01

  18. A complex systems approach to planning, optimization and decision making for energy networks

    International Nuclear Information System (INIS)

    Beck, Jessica; Kempener, Ruud; Cohen, Brett; Petrie, Jim

    2008-01-01

    This paper explores a new approach to planning and optimization of energy networks, using a mix of global optimization and agent-based modeling tools. This approach takes account of techno-economic, environmental and social criteria, and engages explicitly with inherent network complexity in terms of the autonomous decision-making capability of individual agents within the network, who may choose not to act as economic rationalists. This is an important consideration from the standpoint of meeting sustainable development goals. The approach attempts to set targets for energy planning by determining preferred network development pathways through multi-objective optimization. The viability of such plans is then explored through agent-based models. The combined approach is demonstrated for a case study of regional electricity generation in South Africa, with biomass as feedstock.

  19. Surface laser marking optimization using an experimental design approach

    Science.gov (United States)

    Brihmat-Hamadi, F.; Amara, E. H.; Lavisse, L.; Jouvard, J. M.; Cicala, E.; Kellou, H.

    2017-04-01

    Laser surface marking is performed on a titanium substrate using a pulsed frequency-doubled Nd:YAG laser (λ = 532 nm, τ_pulse = 5 ns) to process the substrate surface under normal atmospheric conditions. The aim of the work is to investigate, following experimental and statistical approaches, the correlation between the process parameters and the response variables (output), using a Design of Experiments (DOE) method: Taguchi methodology and a response surface methodology (RSM). A design is first created using the MINITAB program, and then the laser marking process is performed according to the planned design. The response variables, surface roughness and surface reflectance, were measured for each sample and incorporated into the design matrix. The results are then analyzed and the RSM model is developed and verified for predicting the process output for a given set of process parameter values. The analysis shows that the laser beam scanning speed is the most influential operating factor, followed by the laser pumping intensity during marking, while the other factors show complex influences on the objective functions.
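
    The DOE-to-RSM pipeline can be miniaturized as follows: generate a two-factor design, fit a quadratic response surface by least squares, and compare coefficient magnitudes to flag the dominant factor. The factors and the synthetic roughness response are assumptions, constructed so that scanning speed dominates; the match with the paper's finding is by construction only.

      import numpy as np

      rng = np.random.default_rng(5)
      speed, power = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
      s, p = speed.ravel(), power.ravel()             # coded factor levels
      roughness = (1.0 - 0.8*s + 0.3*p + 0.2*s*p + 0.1*s**2
                   + rng.normal(0.0, 0.05, s.size))   # synthetic response

      # Quadratic RSM:  y = b0 + b1 s + b2 p + b3 s p + b4 s^2 + b5 p^2
      X = np.column_stack([np.ones_like(s), s, p, s*p, s**2, p**2])
      beta, *_ = np.linalg.lstsq(X, roughness, rcond=None)
      for name, b in zip(["const", "speed", "power", "speed*power",
                          "speed^2", "power^2"], beta):
          print(f"{name:12s} {b:+.3f}")
      # The largest-magnitude linear coefficient flags the dominant factor.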

  20. Gynecomastia associated with herniated nipples: an optimal surgical approach.

    Science.gov (United States)

    Jaiswal, Rohit; Pu, Lee L Q

    2012-04-01

    Gynecomastia is a common disorder observed in male plastic surgery patients. Treatment options may include observation, surgical excision, or liposuction techniques. Congenital herniated nipple is a more rare condition, especially in male patients. We present the case of a 12-year-old boy with bilateral gynecomastia and herniated nipple-areolar complexes. A staged repair was undertaken in this patient with grade 2 gynecomastia. The first operation was ultrasonic liposuction bilaterally, yielding 200 mL of aspirate from the left and 400 mL on the right, to correct the gynecomastia. The second procedure, performed 6 months later, was a bilateral periareolar mastopexy to repair the herniated nipple-areolar complexes. The result of the first procedure was flattened and symmetrical breast tissue bilaterally, essentially a correction of the gynecomastia. The herniated nipples were still present, however. Bilateral periareolar mastopexies were then performed with resulting reduction of the herniations. There were no complications with either procedure, and a good cosmetic result was achieved. A staged surgical approach was successful in correcting both conditions with an excellent aesthetic result and the advantage of decreased risk for nipple complications.