WorldWideScience

Sample records for optimized architectural approaches

  1. Techniques and Tools for Optimizing Codes on Modern Architectures: A Low-Level Approach

    OpenAIRE

    2009-01-01

    This thesis describes novel techniques and test implementations for optimizing numerically intensive codes. Our main focus is on how given algorithms can be adapted to run efficiently on modern microprocessors, exploiting several architectural features including instruction selection and access patterns related to having several levels of cache. Our approach is also shown to be relevant for multicore architectures. Our primary target applications are linear algebra routines in the form of ma...

  2. Comparison of Planar Parallel Manipulator Architectures based on a Multi-objective Design Optimization Approach

    CERN Document Server

    Chablat, Damien; Ur-Rehman, Raza; Wenger, Philippe

    2010-01-01

    This paper deals with the comparison of planar parallel manipulator architectures based on a multi-objective design optimization approach. The manipulator architectures are compared with regard to their mass in motion and their regular workspace size, i.e., the objective functions. The optimization problem is subject to constraints on the manipulator dexterity and stiffness. For a given external wrench, the displacements of the moving platform have to be smaller than given values throughout the obtained maximum regular dexterous workspace. The contributions of the paper are highlighted with the study of 3-RPR, 3-RPR and 3-RPR planar parallel manipulator architectures, which are compared by means of their Pareto frontiers obtained with a genetic algorithm.
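The Pareto-frontier comparison the abstract describes can be sketched with a plain dominance filter over the paper's two objectives (mass in motion to minimize, regular workspace size to maximize); the candidate designs and their values below are invented for illustration, and in the paper a genetic algorithm generates such candidates:

```python
# Illustrative sketch: extract the Pareto frontier from candidate
# manipulator designs evaluated on two objectives -- mass in motion
# (minimize) and regular workspace size (maximize). All values are
# invented, not taken from the paper.

def pareto_front(designs):
    """Return the designs not dominated on (mass, workspace).

    A design dominates another if it has lower-or-equal mass AND
    greater-or-equal workspace, with at least one strict inequality.
    """
    front = []
    for d in designs:
        dominated = any(
            o["mass"] <= d["mass"] and o["workspace"] >= d["workspace"]
            and (o["mass"] < d["mass"] or o["workspace"] > d["workspace"])
            for o in designs
        )
        if not dominated:
            front.append(d)
    return front

candidates = [
    {"name": "A", "mass": 12.0, "workspace": 0.30},
    {"name": "B", "mass": 10.0, "workspace": 0.25},
    {"name": "C", "mass": 14.0, "workspace": 0.28},  # dominated by A
    {"name": "D", "mass": 9.0,  "workspace": 0.18},
]

print([d["name"] for d in pareto_front(candidates)])
```

Design C is strictly worse than A on both objectives, so it drops out; the remaining three represent different mass/workspace trade-offs.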

  3. Discrete optimization in architecture architectural & urban layout

    CERN Document Server

    Zawidzki, Machi

    2016-01-01

    This book presents three projects that demonstrate the fundamental problems of architectural design and urban composition – the layout design, evaluation and optimization. Part I describes the functional layout design of a residential building, and an evaluation of the quality of a town square (plaza). The algorithm for the functional layout design is based on backtracking using a constraint satisfaction approach combined with coarse grid discretization. The algorithm for the town square evaluation is based on geometrical properties derived directly from its plan. Part II introduces a crowd-simulation application for the analysis of escape routes on floor plans, and optimization of a floor plan for smooth crowd flow. The algorithms presented employ agent-based modeling and cellular automata.
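The backtracking-with-constraint-satisfaction idea behind the functional layout algorithm can be sketched on a coarse grid; the rooms, grid size, and adjacency constraints below are invented, not taken from the book:

```python
# Hedged sketch of backtracking with constraint satisfaction on a
# coarse grid, in the spirit of the functional-layout algorithm the
# book describes. Rooms, grid and constraints are invented here.

def neighbours(c1, c2):
    (x1, y1), (x2, y2) = c1, c2
    return abs(x1 - x2) + abs(y1 - y2) == 1  # 4-neighbourhood on the grid

def solve(rooms, cells, adjacency, assignment=None):
    """Assign each room to a distinct cell so that every pair in
    `adjacency` lands on neighbouring cells; backtrack on conflict."""
    assignment = assignment or {}
    if len(assignment) == len(rooms):
        return assignment
    room = rooms[len(assignment)]
    for cell in cells:
        if cell in assignment.values():
            continue  # cell already occupied
        trial = {**assignment, room: cell}
        ok = all(
            a not in trial or b not in trial or neighbours(trial[a], trial[b])
            for a, b in adjacency
        )
        if ok:
            result = solve(rooms, cells, adjacency, trial)
            if result:
                return result
    return None  # dead end: caller tries the next cell

cells = [(x, y) for x in range(2) for y in range(2)]  # 2x2 coarse grid
layout = solve(["hall", "kitchen", "bath"], cells,
               adjacency=[("hall", "kitchen"), ("hall", "bath")])
print(layout)
```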

  4. An Analytical Approach for Optimal Clustering Architecture for Maximizing Lifetime in Large Scale Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Mr. Yogesh Rai

    2011-09-01

    Full Text Available Many methods have been researched to prolong sensor network lifetime using mobile technologies. In mobile sink research, the track-based methods and the anchor-point-based methods are the representative operation methods for mobile sinks. However, the existing methods decrease Quality of Service (QoS) and lead to routing hotspots in the vicinity of the mobile sink. In large scale wireless sensor networks, clustering is an effective technique for the purpose of improving the utilization of limited energy and prolonging the network lifetime. However, the problem of unbalanced energy dissipation exists in the multi-hop clustering model, where the cluster heads closer to the sink have to relay heavier traffic and consume more energy than farther nodes. In this paper we analyze several aspects of the optimal clustering architecture for maximizing lifetime in large scale wireless sensor networks. We also provide some analytical concepts for energy-aware head rotation and routing protocols to further balance the energy consumption among all nodes.
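The energy-aware head rotation the paper discusses analytically can be sketched as a simple simulation; the rotation rule (highest residual energy leads) and all energy numbers below are invented for illustration:

```python
# Hedged sketch of energy-aware cluster-head rotation: each round the
# node with the highest residual energy becomes cluster head and pays
# a higher per-round cost for relaying. All numbers are invented.

def rotate_heads(energies, rounds, head_cost=5.0, member_cost=1.0):
    """Simulate `rounds` of rotation; return remaining energies."""
    energies = dict(energies)
    for _ in range(rounds):
        head = max(energies, key=energies.get)  # most residual energy leads
        for node in energies:
            energies[node] -= head_cost if node == head else member_cost
    return energies

final = rotate_heads({"n1": 50.0, "n2": 50.0, "n3": 50.0}, rounds=6)
print(final)
```

Because the head role circulates, the drain spreads evenly: after six rounds each of the three nodes has served twice and all end with identical residual energy, which is exactly the balancing effect rotation is meant to achieve.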

  5. Future city architecture for optimal living

    CERN Document Server

    Pardalos, Panos

    2015-01-01

    This book offers a wealth of interdisciplinary approaches to urbanization strategies in architecture, centered on growing concerns about the future of cities and their impacts on essential elements of architectural optimization: livability, energy consumption and sustainability. It portrays the urban condition in architectural terms, as well as the living condition in human terms, both of which can be optimized by mathematical modeling as well as mathematical calculation and assessment. Special features include: new research on the construction of future cities and smart cities; and discussions of sustainability and new technologies designed to advance ideas to future city developments. Graduate students and researchers in architecture, engineering, mathematical modeling, and building physics will be engaged by the contributions written by eminent international experts from a variety of disciplines including architecture, engineering, modeling, optimization, and relat...

  6. An Approach for Optimizing the On-Orbit Servicing Architecture for a Given Client Satellite Constellation

    Science.gov (United States)

    2005-03-01

    In February of 1980, NASA launched the Solar Maximum Mission spacecraft to collect observations of solar flares, sunspots, magnetic fields, and the ... spacecraft autonomously delivered supplies to and returned waste from the Mir space station, the second generation of Russian manned orbiting facilities ... Vehicle Routing Problems: The classic vehicle routing problem (VRP) is a combinatorial optimization problem that minimizes the cost of routing a
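The classic VRP named in the snippet minimizes total routing cost; a nearest-neighbour construction heuristic, shown below with invented depot and client coordinates, illustrates the objective even though it is only a cheap baseline, not an optimal method:

```python
# The classic VRP minimizes total routing cost; a nearest-neighbour
# heuristic gives a cheap (non-optimal) baseline tour. The depot and
# client coordinates are invented for illustration.

import math

def route_cost(depot, clients):
    """Greedy tour: from the depot, always visit the nearest unvisited
    client, then return to the depot; returns (order, total distance)."""
    pos, remaining, order, total = depot, list(clients), [], 0.0
    while remaining:
        nxt = min(remaining, key=lambda c: math.dist(pos, c))
        total += math.dist(pos, nxt)
        order.append(nxt)
        remaining.remove(nxt)
        pos = nxt
    total += math.dist(pos, depot)  # close the tour back at the depot
    return order, total

order, cost = route_cost((0, 0), [(0, 3), (4, 0), (0, 1)])
print(order, cost)
```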

  7. Optimized Architectural Approaches in Hardware and Software Enabling Very High Performance Shared Storage Systems

    CERN Document Server

    CERN. Geneva

    2004-01-01

    There are issues encountered in high performance storage systems that normally lead to compromises in architecture. Compute clusters tend to have compute phases followed by an I/O phase that must move data from the entire cluster in one operation. That data may then be shared by a large number of clients creating unpredictable read and write patterns. In some cases the aggregate performance of a server cluster must exceed 100 GB/s to minimize the time required for the I/O cycle thus maximizing compute availability. Accessing the same content from multiple points in a shared file system leads to the classical problems of data "hot spots" on the disk drive side and access collisions on the data connectivity side. The traditional method for increasing apparent bandwidth usually includes data replication which is costly in both storage and management. Scaling a model that includes replicated data presents additional management challenges as capacity and bandwidth expand asymmetrically while the system is scaled. ...

  8. Approaching Environmental Issues in Architecture

    DEFF Research Database (Denmark)

    Petersen, Mads Dines; Knudstrup, Mary-Ann

    2013-01-01

    The research presented here takes its point of departure in the design process with a specific focus on how it is approached when designing energy efficient architecture. This is done through a case-study of a design process in a Danish architectural office. This study shows the importance ... of having a clear strategy about how to work with optimizing the energy efficiency in the building from the first stages of the design process. It is not just a task that can be addressed by the engineers when a concept has been developed and approved by the client. It requires architects and engineers ... to address it from the beginning and work actively with it, but it also requires the client to state it clearly in the brief. From this study, it is evident that the work with energy efficiency requires us to focus on the formal framework for the design process. It must allow and support a multi...

  9. An Architecture for Performance Optimization in a Collaborative Knowledge-Based Approach for  Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Juan Ramon Velasco

    2011-09-01

    Full Text Available Over the past few years, Intelligent Spaces (ISs) have received the attention of many Wireless Sensor Network researchers. Recently, several studies have been devoted to identify their common capacities and to set up ISs over these networks. However, little attention has been paid to integrating Fuzzy Rule-Based Systems into collaborative Wireless Sensor Networks for the purpose of implementing ISs. This work presents a distributed architecture proposal for collaborative Fuzzy Rule-Based Systems embedded in Wireless Sensor Networks, which has been designed to optimize the implementation of ISs. This architecture includes the following: (a) an optimized design for the inference engine; (b) a visual interface; (c) a module to reduce the redundancy and complexity of the knowledge bases; (d) a module to evaluate the accuracy of the new knowledge base; (e) a module to adapt the format of the rules to the structure used by the inference engine; and (f) a communications protocol. As a real-world application of this architecture and the proposed methodologies, we show an application to the problem of modeling two plagues of the olive tree: prays (olive moth, Prays oleae Bern.) and repilo (caused by the fungus Spilocaea oleagina). The results show that the architecture presented in this paper significantly decreases the consumption of resources (memory, CPU and battery) without a substantial decrease in the accuracy of the inferred values.
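A toy of the kind of fuzzy rule-based inference such an embedded engine performs (triangular memberships, min for AND, weighted-average defuzzification); the variables and rules below are invented, not the olive-plague knowledge base of the paper:

```python
# Hedged sketch of fuzzy rule inference: triangular membership
# functions, min as the AND operator, and a weighted average for
# defuzzification. Variables, rules and numbers are all invented.

def tri(x, a, b, c):
    """Triangular membership with peak at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer(temp, humidity):
    # rule 1: IF temp is warm AND humidity is high THEN risk = 0.9
    # rule 2: IF temp is cool AND humidity is high THEN risk = 0.4
    rules = [
        (min(tri(temp, 15, 25, 35), tri(humidity, 60, 80, 100)), 0.9),
        (min(tri(temp, 0, 10, 20), tri(humidity, 60, 80, 100)), 0.4),
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0  # 0.0 when no rule fires

print(infer(temp=25, humidity=80))
```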

  10. Multilayer Perceptron: Architecture Optimization and Training

    Directory of Open Access Journals (Sweden)

    Hassan Ramchoun

    2016-09-01

    Full Text Available The multilayer perceptron has a wide range of classification and regression applications in many fields: pattern recognition, voice recognition and classification problems. But the architecture choice has a great impact on the convergence of these networks. In the present paper we introduce a new approach to optimize the network architecture; to solve the obtained model we use a genetic algorithm, and we train the network with a back-propagation algorithm. The numerical results assess the effectiveness of the theoretical results shown in this paper, and the advantages of the new modeling compared to the previous models in the literature.
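A genetic search over network architectures can be sketched as follows; note the fitness function here is a stand-in surrogate (capacity with diminishing returns minus a size penalty), not the paper's actual back-propagation training accuracy:

```python
# Sketch of a genetic search over MLP hidden-layer widths. The
# fitness below is an invented surrogate standing in for trained
# accuracy; the paper trains each candidate with back-propagation.

import random

def fitness(widths):
    # toy surrogate: diminishing returns on capacity, penalty on size
    capacity = sum(widths)
    return capacity / (capacity + 50) - 0.001 * capacity

def evolve(generations=20, pop_size=8, seed=0):
    rng = random.Random(seed)
    # each individual: widths of two hidden layers
    pop = [[rng.randint(1, 64) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = [
            [max(1, w + rng.randint(-8, 8)) for w in p]  # mutate widths
            for p in parents
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```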

  11. Battery-Less Electroencephalogram System Architecture Optimization

    Science.gov (United States)

    2016-12-01

    ARL-TR-7909 • DEC 2016. US Army Research Laboratory. Battery-Less Electroencephalogram System Architecture Optimization, by Peter Gadfort and Renooka Karmarkar, Sensors and Electron Devices... Approved for public release; distribution is unlimited. Reporting period: 13 June 2016 – 26 September

  12. Approaching Environmental Issues in Architecture

    DEFF Research Database (Denmark)

    Petersen, Mads Dines; Knudstrup, Mary-Ann

    2013-01-01

    The research presented here takes its point of departure in the design process with a specific focus on how it is approached when designing energy efficient architecture. This is done through a case-study of a design process in a Danish architectural office. This study shows the importance of hav...

  14. Assessing optimal software architecture maintainability

    NARCIS (Netherlands)

    Bosch, Jan; Bengtsson, P.O.; Smedinga, Rein; Sousa, P; Ebert, J

    2000-01-01

    Over the last decade, several authors have studied the maintainability of software architectures. In particular, the assessment of maintainability has received attention. However, even when one has a quantitative assessment of the maintainability of a software architecture, one still does not have

  16. Optimal architectures for long distance quantum communication.

    Science.gov (United States)

    Muralidharan, Sreraman; Li, Linshu; Kim, Jungsang; Lütkenhaus, Norbert; Lukin, Mikhail D; Jiang, Liang

    2016-02-15

    Despite the tremendous progress of quantum cryptography, efficient quantum communication over long distances (≥ 1000 km) remains an outstanding challenge due to fiber attenuation and operation errors accumulated over the entire communication distance. Quantum repeaters (QRs), as a promising approach, can overcome both photon loss and operation errors, and hence significantly speed up the communication rate. Depending on the methods used to correct loss and operation errors, all the proposed QR schemes can be classified into three categories (generations). Here we present the first systematic comparison of three generations of quantum repeaters by evaluating the cost of both temporal and physical resources, and identify the optimized quantum repeater architecture for a given set of experimental parameters for use in quantum key distribution. Our work provides a roadmap for the experimental realizations of highly efficient quantum networks over transcontinental distances.

  17. Optimal architectures for long distance quantum communication

    Science.gov (United States)

    Muralidharan, Sreraman; Li, Linshu; Kim, Jungsang; Lütkenhaus, Norbert; Lukin, Mikhail D.; Jiang, Liang

    2016-02-01

    Despite the tremendous progress of quantum cryptography, efficient quantum communication over long distances (≥1000 km) remains an outstanding challenge due to fiber attenuation and operation errors accumulated over the entire communication distance. Quantum repeaters (QRs), as a promising approach, can overcome both photon loss and operation errors, and hence significantly speed up the communication rate. Depending on the methods used to correct loss and operation errors, all the proposed QR schemes can be classified into three categories (generations). Here we present the first systematic comparison of three generations of quantum repeaters by evaluating the cost of both temporal and physical resources, and identify the optimized quantum repeater architecture for a given set of experimental parameters for use in quantum key distribution. Our work provides a roadmap for the experimental realizations of highly efficient quantum networks over transcontinental distances.

  18. Optimization and mathematical modeling in computer architecture

    CERN Document Server

    Sankaralingam, Karu; Nowatzki, Tony

    2013-01-01

    In this book we give an overview of modeling techniques used to describe computer systems to mathematical optimization tools. We give a brief introduction to various classes of mathematical optimization frameworks with special focus on mixed integer linear programming which provides a good balance between solver time and expressiveness. We present four detailed case studies -- instruction set customization, data center resource management, spatial architecture scheduling, and resource allocation in tiled architectures -- showing how MILP can be used and quantifying by how much it outperforms t
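The flavor of the MILP formulations in the book's instruction-set customization case study can be illustrated as a 0/1 selection problem: maximize speedup under an area budget. A brute-force search over the binary decision variables stands in for an MILP solver here, and all numbers are invented:

```python
# Toy of the MILP formulation style: choose which custom instructions
# to add (binary decisions) to maximize speedup subject to an area
# budget -- a 0/1 knapsack, solved by brute force instead of an MILP
# solver. The candidate instructions and numbers are invented.

from itertools import product

candidates = [  # (name, speedup contribution, area cost)
    ("mac16", 1.8, 3), ("popcnt", 0.9, 1), ("crc32", 1.2, 2), ("fma", 2.1, 4)
]
AREA_BUDGET = 5

best_choice, best_speedup = None, -1.0
for bits in product([0, 1], repeat=len(candidates)):
    area = sum(b * c[2] for b, c in zip(bits, candidates))
    speedup = sum(b * c[1] for b, c in zip(bits, candidates))
    if area <= AREA_BUDGET and speedup > best_speedup:
        best_choice, best_speedup = bits, speedup

chosen = [c[0] for b, c in zip(best_choice, candidates) if b]
print(chosen, best_speedup)
```

An MILP solver explores the same 0/1 space, but with branch-and-bound pruning rather than full enumeration, which is what keeps solver time tractable at realistic problem sizes.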

  19. Architectural Optimization of Digital Libraries

    Science.gov (United States)

    Biser, Aileen O.

    1998-01-01

    This work investigates performance and scaling issues relevant to large scale distributed digital libraries. Presently, performance and scaling studies focus on specific implementations of production or prototype digital libraries. Although useful information is gained to aid these designers and other researchers with insights to performance and scaling issues, the broader issues relevant to very large scale distributed libraries are not addressed. Specifically, no current studies look at the extreme or worst case possibilities in digital library implementations. A survey of digital library research issues is presented. Scaling and performance issues are mentioned frequently in the digital library literature but are generally not the focus of much of the current research. In this thesis a model for a Generic Distributed Digital Library (GDDL) and nine cases of typical user activities are defined. This model is used to facilitate some basic analysis of scaling issues: specifically, the calculation of the Internet traffic generated for different configurations of the study parameters, and an estimate of the future bandwidth needed for a large scale distributed digital library implementation. This analysis demonstrates the potential impact a future distributed digital library implementation would have on the Internet traffic load and raises questions concerning the architecture decisions being made for future distributed digital library designs.
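The kind of traffic calculation the thesis performs can be illustrated with a back-of-envelope model (aggregate traffic as users × requests × object size); the parameter values below are invented placeholders, not the GDDL study parameters:

```python
# Back-of-envelope traffic model in the spirit of the GDDL analysis:
# aggregate daily traffic as users x requests x average object size.
# All parameter values here are invented placeholders.

def daily_traffic_gb(users, requests_per_user, avg_object_mb):
    """Total daily transfer volume in GB (1 GB = 1024 MB here)."""
    return users * requests_per_user * avg_object_mb / 1024

# e.g. 100k users, 20 retrievals per user per day, 2 MB average object
print(daily_traffic_gb(100_000, 20, 2.0))
```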

  20. A Topology Optimisation Approach to Learning in Architectural Design

    DEFF Research Database (Denmark)

    Mullins, Michael; Kirkegaard, Poul Henning; Jessen, Rasmus Zederkof

    2005-01-01

    design, financial and a number of other pragmatic reasons. But in an artistic/architectural perspective these are not decisive. Analogical design qualities include a tectonic appreciation of the properties of materials, metaphoric interpretation of intention and considerations of context. The paper ... describes an attempt to unify analytic and analogical approaches in an architectural education setting, using topology optimization software. It uses as examples recent student projects where the architectural design process based on a topology optimization approach has been investigated. The paper ... describes and presents results obtained by the students during the project. Further, a discussion is delivered concerning the improved understanding of tectonic design obtained by the students during the projects.

  1. The Aeronautical Data Link: Taxonomy, Architectural Analysis, and Optimization

    Science.gov (United States)

    Morris, A. Terry; Goode, Plesent W.

    2002-01-01

    The future Communication, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) System will rely on global satellite navigation, and ground-based and satellite-based communications via Multi-Protocol Networks (e.g. combined Aeronautical Telecommunications Network (ATN)/Internet Protocol (IP)) to bring about needed improvements in efficiency and safety of operations to meet increasing levels of air traffic. This paper will discuss the development of an approach that completely describes optimal data link architecture configuration and behavior to meet the multiple conflicting objectives of concurrent and different operations functions. The practical application of the approach enables the design and assessment of configurations relative to airspace operations phases. The approach includes a formal taxonomic classification, an architectural analysis methodology, and optimization techniques. The formal taxonomic classification provides a multidimensional correlation of data link performance with data link service, information protocol, spectrum, and technology mode; and to flight operations phase and environment. The architectural analysis methodology assesses the impact of a specific architecture configuration and behavior on the local ATM system performance. Deterministic and stochastic optimization techniques maximize architectural design effectiveness while addressing operational, technology, and policy constraints.

  2. Electromagnetic Vibration Energy Harvesting Devices Architectures, Design, Modeling and Optimization

    CERN Document Server

    Spreemann, Dirk

    2012-01-01

    Electromagnetic vibration transducers are seen as an effective way of harvesting ambient energy for the supply of sensor monitoring systems. Different electromagnetic coupling architectures have been employed but no comprehensive comparison with respect to their output performance has been carried out up to now. Electromagnetic Vibration Energy Harvesting Devices introduces an optimization approach which is applied to determine optimal dimensions of the components (magnet, coil and back iron). Eight different commonly applied coupling architectures are investigated. The results show that correct dimensions are of great significance for maximizing the efficiency of the energy conversion. A comparison yields the architectures with the best output performance capability which should be preferably employed in applications. A prototype development is used to demonstrate how the optimization calculations can be integrated into the design–flow. Electromagnetic Vibration Energy Harvesting Devices targets the design...

  3. Architecture Knowledge Management: Challenges, Approaches, and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Babar, Muhammad A.; Gorton, Ian

    2007-08-01

    Capturing the technical knowledge, contextual information, and rationale surrounding the design decisions underpinning system architectures can greatly improve the software development process. If not managed, this critical knowledge is implicitly embedded in the architecture, becoming tacit knowledge which erodes as personnel on the project change. Moreover, the unavailability of architecture knowledge precludes organizations from growing their architectural capabilities. In this tutorial, we highlight the benefits and challenges in managing software architecture knowledge. We discuss various approaches to characterize architecture knowledge based on the requirements of a particular domain. We describe various concepts and approaches to manage the architecture knowledge from both management and technical perspectives. We also demonstrate the utility of captured knowledge to support software architecture activities with a case study covering the use of architecture knowledge management techniques and tools in an industrial project.

  4. System deployment optimization in architecture design

    Institute of Scientific and Technical Information of China (English)

    Xiaoxue Zhang; Shu Tang; Aimin Luo; Xueshan Luo

    2014-01-01

    Optimization of architecture design has recently drawn research interest. System deployment optimization (SDO) refers to the process of optimizing systems that are being deployed to activities. This paper first formulates a mathematical model to theorize and operationalize the SDO problem and then identifies optimal solutions to solve the SDO problem. In the solutions, the success rate of the combat task is maximized, whereas the execution time of the task and the cost of changes in the system structure are minimized. The presented optimized algorithm generates an optimal solution without the need to check the entire search space. A novel method is finally proposed based on the combination of heuristic method and genetic algorithm (HGA), as well as the combination of heuristic method and particle swarm optimization (HPSO). Experiment results show that the HPSO method generates solutions faster than particle swarm optimization (PSO) and genetic algorithm (GA) in terms of execution time and performs more efficiently than the heuristic method in terms of determining the best solution.
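A minimal particle swarm optimization sketch of the kind the HPSO method builds on; it minimizes a simple quadratic standing in for the deployment objective, using common textbook coefficient choices rather than anything from the paper:

```python
# Minimal PSO sketch: inertia 0.7, cognitive/social weights 1.5 (common
# textbook defaults). The sphere function stands in for the deployment
# objective; nothing here is taken from the paper's model.

import random

def pso(f, dim=2, particles=10, iters=60, seed=1):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vs = [[0.0] * dim for _ in range(particles)]
    pbest = [list(x) for x in xs]          # per-particle best positions
    gbest = min(pbest, key=f)              # swarm-wide best position
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.5 * r1 * (pbest[i][d] - x[d])
                            + 1.5 * r2 * (gbest[d] - x[d]))
                x[d] += vs[i][d]
            if f(x) < f(pbest[i]):
                pbest[i] = list(x)
        gbest = min(pbest + [gbest], key=f)
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere)
print([round(v, 3) for v in best])
```

The hybrid methods in the paper seed or guide such a swarm with heuristic solutions, which is what buys the reported speedup over plain PSO and GA.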

  5. An architectural approach to level design

    CERN Document Server

    Totten, Christopher W

    2014-01-01

    Explore Level Design through the Lens of Architectural and Spatial Experience Theory. Written by a game developer and professor trained in architecture, An Architectural Approach to Level Design is one of the first books to integrate architectural and spatial design theory with the field of level design. It explores the principles of level design through the context and history of architecture, providing information useful to both academics and game development professionals. Understand Spatial Design Principles for Game Levels in 2D, 3D, and Multiplayer Applications. The book presents architectura...

  6. Optimization of Highly Architectured Stereolithographic Microtrusses

    Science.gov (United States)

    Bird, Adam Gregory

    Stereolithography allows architectural freedom which can be used to fabricate optimal architectures with the potential for significant enhancements in structural efficiency. In this study, buckling behaviour of compressive struts is explored. Experimental failure stress values for stereolithographic polymer tubes are found to agree with existing predictive models for Euler and local shell buckling, validating the methodology. Experimental testing on a space frame compressive strut design proposed in literature reveals end constraints between fixed and free, and stereolithographic design freedom is considered to reduce over-engineered features, improving performance. Finally, a novel sandwich wall tubular strut design is introduced: experimental results show successful inhibition of local shell buckling while also enhancing Euler buckling performance (improvements of 45% and 30% in failure strength when compared to equal-mass simple tubes, respectively). A new failure mode termed "wall splitting" is identified, and a preliminary model is developed to predict the increases in failure stress.
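The Euler buckling model the thesis validates against is the classical P_cr = π²EI/(KL)²; the sketch below uses invented tube dimensions and stiffness, not the thesis specimens:

```python
# Classical Euler buckling load, P_cr = pi^2 * E * I / (K * L)^2.
# The tube geometry and modulus below are invented placeholders,
# not the stereolithographic specimens from the thesis.

import math

def euler_buckling_load(E, I, L, K=1.0):
    """Critical axial load; K is the effective-length factor
    (1.0 pinned-pinned, 0.5 fixed-fixed, 2.0 fixed-free)."""
    return math.pi ** 2 * E * I / (K * L) ** 2

def tube_moment_of_inertia(r_outer, r_inner):
    """Second moment of area of a circular tube cross-section."""
    return math.pi / 4 * (r_outer ** 4 - r_inner ** 4)

I = tube_moment_of_inertia(0.005, 0.004)        # 5 mm / 4 mm radii tube
P = euler_buckling_load(E=2.5e9, I=I, L=0.1)    # stiff polymer, 100 mm strut
print(P)
```

The end-constraint finding in the abstract maps directly onto K: a constraint between fixed and free means an effective-length factor between 0.5 and 2.0, and the critical load scales with 1/K².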

  7. Code optimizations for narrow bitwidth architectures

    OpenAIRE

    Bhagat, Indu

    2012-01-01

    This thesis takes a HW/SW collaborative approach to tackle the problem of computational inefficiency in a holistic manner. The hardware is redesigned by restraining the datapath to merely 16-bit datawidth (integer datapath only) to provide an extremely simple, low-cost, low-complexity execution core which is best at executing the most common case efficiently. This redesign, referred to as the Narrow Bitwidth Architecture, is unique in that although the datapath is squeezed to 16-bits...

  8. Classifying Enterprise Architecture Analysis Approaches

    Science.gov (United States)

    Buckl, Sabine; Matthes, Florian; Schweda, Christian M.

    Enterprise architecture (EA) management forms a commonly accepted means to enhance the alignment of business and IT, and to support the managed evolution of the enterprise. One major challenge of EA management is to provide decision support by analyzing as-is states of the architecture as well as assessing planned future states. Thus, different kinds of analysis regarding the EA exist, each relying on certain conditions and demands for models, methods, and techniques.

  9. Performance optimization of scientific applications on emerging architectures

    Science.gov (United States)

    Dursun, Hikmet

    The shift to many-core architecture design paradigm in computer market has provided unprecedented computational capabilities. This also marks the end of the free-ride era---scientific software must now evolve with new chips. Hence, it is of great importance to develop large legacy-code optimization frameworks to achieve an optimal system architecture-algorithm mapping that maximizes processor utilization and thereby achieves higher application performance. To address this challenge, this thesis studies and develops scalable algorithms for leveraging many-core resources optimally to improve the performance of massively parallel scientific applications. This work presents a systematic approach to optimize scientific codes on emerging architectures, which consists of three major steps: (1) Develop a performance profiling framework to identify application performance bottlenecks on clusters of emerging architectures; (2) explore common algorithmic kernels in a suite of real world scientific applications and develop performance tuning strategies to provide insight into how to maximally utilize underlying hardware; and (3) unify experience in performance optimization to develop a top-down optimization framework for the optimization of scientific applications on emerging high-performance computing platforms. This thesis makes the following contributions. First, we have designed and implemented a performance analysis methodology for Cell-accelerated clusters. Two parallel scientific applications---lattice Boltzmann (LB) flow simulation and atomistic molecular dynamics (MD) simulation---are analyzed and valuable performance insights are gained on a Cell processor based PlayStation3 cluster as well as a hybrid Opteron+Cell based cluster similar to the design of Roadrunner---the first petaflop supercomputer of the world. Second, we have developed a novel parallelization framework for finite-difference time-domain applications. The approach is validated in a seismic

  10. Use of the Collaborative Optimization Architecture for Launch Vehicle Design

    Science.gov (United States)

    Braun, R. D.; Moore, A. A.; Kroo, I. M.

    1996-01-01

    Collaborative optimization is a new design architecture specifically created for large-scale distributed-analysis applications. In this approach, the problem is decomposed into a user-defined number of subspace optimization problems that are driven towards interdisciplinary compatibility and the appropriate solution by a system-level coordination process. This decentralized design strategy allows domain-specific issues to be accommodated by disciplinary analysts, while requiring interdisciplinary decisions to be reached by consensus. The present investigation focuses on application of the collaborative optimization architecture to the multidisciplinary design of a single-stage-to-orbit launch vehicle. Vehicle design, trajectory, and cost issues are directly modeled. Posed to suit the collaborative architecture, the design problem is characterized by 5 design variables and 16 constraints. Numerous collaborative solutions are obtained. Comparison of these solutions demonstrates the influence which an a priori ascent-abort criterion has on development cost. Similarly, objective-function selection is discussed, demonstrating the difference between minimum weight and minimum cost concepts. The operational advantages of the collaborative optimization...

  11. An Evolutionary Optimization Framework for Neural Networks and Neuromorphic Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Schuman, Catherine D [ORNL]; Plank, James [University of Tennessee (UT)]; Disney, Adam [University of Tennessee (UT)]; Reynolds, John [University of Tennessee (UT)]

    2016-01-01

    As new neural network and neuromorphic architectures are being developed, new training methods that operate within the constraints of the new architectures are required. Evolutionary optimization (EO) is a convenient training method for new architectures. In this work, we review a spiking neural network architecture and a neuromorphic architecture, and we describe an EO training framework for these architectures. We present the results of this training framework on four classification data sets and compare those results to other neural network and neuromorphic implementations. We also discuss how this EO framework may be extended to other architectures.

  12. Advanced Topology Optimization Methods for Conceptual Architectural Design

    DEFF Research Database (Denmark)

    Aage, Niels; Amir, Oded; Clausen, Anders

    2014-01-01

    This paper presents a series of new, advanced topology optimization methods, developed specifically for conceptual architectural design of structures. The proposed computational procedures are implemented as components in the framework of a Grasshopper plugin, providing novel capacities in topological optimization: interactive control and continuous visualization; embedding flexible voids within the design space; consideration of distinct tension/compression properties; and optimization of dual material systems. In extension, optimization procedures for skeletal structures such as trusses and frames are implemented. The developed procedures allow for the exploration of new territories in optimization of architectural structures, and offer new methodological strategies for bridging conceptual gaps between optimization and architectural practice.

  13. An Approach to Theme-Related Website Architecture Optimization

    Institute of Scientific and Technical Information of China (English)

    冯秀珍; 赵翠芬

    2012-01-01

    Website architecture optimization is one of the significant challenges facing managers of website information resources. The proposed solution is to construct a theme-related website architecture based on the thematic features of the website's information. The basic idea is to use relevance measures between website themes, between web pages and themes, and between newly added information and themes to determine changes to theme nodes, so as to optimize the website structure. The examples presented confirm that theme-relevance analysis is an efficient and effective tool for website architecture optimization.

  14. A Declarative Approach to Architectural Reflection

    DEFF Research Database (Denmark)

    Ingstrup, Mads; Hansen, Klaus Marius

    2005-01-01

    Recent research shows runtime architectural reflection is instrumental in, for instance, building adaptive and flexible systems or checking correspondence between design and implementation. Moreover, experience with computational reflection in various branches of computer science shows that the idea is broadly applicable. In this paper, a query-based approach to architectural reflection is described. Specifically, our contributions are: (1) a presentation of the general idea of a query-based approach to architectural reflection, (2) a definition of an Architectural Query Language (AQL) in which perspectives on an architectural model can be expressed as queries, and (3) a prototype of a system implementing the approach.

  15. Computer Architecture A Quantitative Approach

    CERN Document Server

    Hennessy, John L

    2011-01-01

    The computing world today is in the middle of a revolution: mobile clients and cloud computing have emerged as the dominant paradigms driving programming and hardware innovation today. The Fifth Edition of Computer Architecture focuses on this dramatic shift, exploring the ways in which software and technology in the cloud are accessed by cell phones, tablets, laptops, and other mobile computing devices. Each chapter includes two real-world examples, one mobile and one datacenter, to illustrate this revolutionary change. Updated to cover the mobile computing revolution. Emphasizes the two most im…

  16. New approaches for contemporary architecture

    Directory of Open Access Journals (Sweden)

    Alberto Sposito

    2016-11-01

    On the subject of contemporary architecture, the Author proposes new “approaches”, in consideration of the social and political situation, which, with the continuous migratory fluxes, demands solidarity, hospitality and integration: an “anthropological approach”, given the complex phenomenology of social life as conditioned by the technology of interactivity; a “management approach” to develop in professionals, institutions and their unifying networks, abilities geared towards conservation and enhancement of historical centres, and to set goals that can be achieved in sustainable fashion; a “participatory approach” between local communities and those newly arrived in a city’s old quarters, programming joint activities, from which relationships between different peoples may emerge and in which everybody is a “player” and a driving force for “unity”.

  17. Topology Optimized Architectures with Programmable Poisson's Ratio over Large Deformations

    DEFF Research Database (Denmark)

    Clausen, Anders; Wang, Fengwen; Jensen, Jakob Søndergaard

    2015-01-01

    Topology optimized architectures are designed and printed with programmable Poisson's ratios ranging from -0.8 to 0.8 over large deformations of 20% or more.

  18. Optimization of Forward Wave Modeling on Contemporary HPC Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Krueger, Jens [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Micikevicius, Paulius [NVIDIA, Santa Clara, CA (United States); Williams, Samuel [Fraunhofer ITWM, Kaiserslautern (Germany)

    2012-07-20

    Reverse Time Migration (RTM) is one of the main approaches in the seismic processing industry for imaging the subsurface structure of the Earth. While RTM provides qualitative advantages over its predecessors, it has a high computational cost warranting implementation on HPC architectures. We focus on three progressively more complex kernels extracted from RTM: for isotropic (ISO), vertical transverse isotropic (VTI) and tilted transverse isotropic (TTI) media. In this work, we examine performance optimization of forward wave modeling, which describes the computational kernels used in RTM, on emerging multi- and manycore processors and introduce a novel common subexpression elimination optimization for TTI kernels. We compare attained performance and energy efficiency in both single-node and distributed-memory environments in order to satisfy industry’s demands for fidelity, performance, and energy efficiency. Moreover, we discuss the interplay between architecture (chip and system) and optimizations (both on-node computation and communication), highlighting the importance of NUMA-aware approaches to MPI communication. Ultimately, our results show we can improve CPU energy efficiency by more than 10× on Magny-Cours nodes, while acceleration via multiple GPUs can surpass the energy-efficient Intel Sandy Bridge by as much as 3.6×.
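
    As an illustration of the kind of kernel forward wave modeling is built from, here is a minimal second-order finite-difference update for the isotropic (ISO) case. Grid size, time step, and velocity are arbitrary illustrative values, and none of the paper's optimizations (vectorization, common subexpression elimination, NUMA-aware communication) are applied.

```python
N, NT = 64, 50                   # grid points per side, time steps (illustrative)
DX, DT, V = 10.0, 1e-3, 1500.0   # spacing (m), time step (s), velocity (m/s)
C = (V * DT / DX) ** 2           # Courant-style coefficient; V*DT/DX < 1/sqrt(2), so stable

prev = [[0.0] * N for _ in range(N)]
curr = [[0.0] * N for _ in range(N)]
curr[N // 2][N // 2] = 1.0       # point source at the centre

for _ in range(NT):
    nxt = [[0.0] * N for _ in range(N)]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            # 5-point discrete Laplacian
            lap = (curr[i - 1][j] + curr[i + 1][j] +
                   curr[i][j - 1] + curr[i][j + 1] - 4.0 * curr[i][j])
            # second-order-in-time, second-order-in-space leapfrog update
            nxt[i][j] = 2.0 * curr[i][j] - prev[i][j] + C * lap
    prev, curr = nxt, nxt if False else (curr, nxt)[1], # placeholder removed below
    prev, curr = curr, nxt

energy = sum(v * v for row in curr for v in row)
```

    Production kernels use higher-order stencils and anisotropic terms (VTI, TTI), but the memory-access pattern being optimized is the same.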

  19. Enterprise architecture approach to mining companies engineering

    Directory of Open Access Journals (Sweden)

    Ilin’ Igor

    2017-01-01

    As the Russian economy is still largely oriented towards commodities production, there are many cities where mining and commodity-oriented enterprises are the backbone of the city economy. These enterprises largely define the quality of life of citizens in such cities, so there are high requirements for the engineering of city-forming enterprises. The paper describes the enterprise architecture approach for management system engineering of mining enterprises. The paper contains a model of the mining enterprise architecture, an approach to the development and implementation of an integrated management system based on the concept of enterprise architecture, and the structure of the information systems and information technology infrastructure of the mining enterprise.

  20. Approaching Technical Issues in Architectural Education

    DEFF Research Database (Denmark)

    Pugnale, Alberto; Parigi, Dario

    2012-01-01

    This paper discusses the teaching of technical subjects in architecture, presenting two experimental activities recently organized at Aalborg University: a two-week-long workshop and a one-day-long lecture. From the pedagogical point of view, the activities are strategically placed between conventional disciplinary courses and architectural design studios. On the one hand, this allows a better mix of theoretical lectures, exercises and design practice; on the other hand, narrow topics related to structural design may be deepened on the basis of a research-based approach to design.

  1. An Overlay Architecture for Throughput Optimal Multipath Routing

    Science.gov (United States)

    2017-01-14

    Nathaniel M. Jones, Georgios S. Paschos, Brooke Shrader, and Eytan Modiano. … In this work, we study an overlay architecture for dynamic routing such that only a subset of devices (overlay nodes) need to make dynamic routing decisions … a legacy network. Network overlays are frequently used to deploy new communication architectures in legacy networks [13]. To accomplish this, messages …

  2. A Hybrid Architecture Approach for Quantum Algorithms

    Directory of Open Access Journals (Sweden)

    Mohammad R.S. Aghaei

    2009-01-01

    Problem statement: In this study, a general plan of hybrid architecture for quantum algorithms is proposed. Approach: Analysis of the quantum algorithms shows that these algorithms are hybrid, with two parts. First, the relationship between the classical and quantum parts of the hybrid algorithms was extracted. Then a general plan of the hybrid structure was designed. Results: This plan illustrated the hybrid architecture and the relationship between the classical and quantum parts of the algorithms. This general plan was used to increase the implementation performance of quantum algorithms. Conclusion/Recommendations: Moreover, simulation results of quantum algorithms on the hybrid architecture showed that quantum algorithms can be implemented on the general plan as well.

  3. Architecture Optimization of More Electric Aircraft Actuation System

    Institute of Scientific and Technical Information of China (English)

    QI Haitao; FU Yongling; QI Xiaoye; LANG Yan

    2011-01-01

    The available types of power source and actuator in aircraft are increasingly diverse due to the fast development of more electric technology, which makes the combinations of different power sources and actuators extremely complex in the architecture optimization process of an airborne actuation system. The traditional “trial and error” method cannot satisfy the design demands. In this paper, firstly, the composition of the more electric aircraft (MEA) flight control actuation system (FCAS) is introduced, and the number of possible architectures is calculated. Secondly, evaluation criteria for FCAS architectures with respect to safety reliability, weight and efficiency are proposed, and the evaluation criteria values are calculated for the case in which each control surface adopts the same actuator configuration. Finally, the optimization results for the MEA FCAS architecture are obtained by applying a genetic algorithm (GA). Compared to the traditional actuation system architecture, which only adopts servo-valve-controlled hydraulic actuators, the weight of the optimized more electric actuation system architecture can be reduced by 6%, and the efficiency can be improved by 30%, based on the safety reliability requirements.

  4. Evolutionary approach for spatial architecture layout design enhanced by an agent-based topology finding system

    Directory of Open Access Journals (Sweden)

    Zifeng Guo

    2017-03-01

    This paper presents a method for the automatic generation of a spatial architectural layout from a user-specified architectural program. The proposed approach combines a multi-agent topology finding system with an evolutionary optimization process. The former generates topology-satisfying layouts for further optimization, while the latter focuses on refining the layouts to achieve predefined architectural criteria. The topology finding process narrows the search space and increases the performance of the subsequent optimization. Results show that both spatial layout modeling and multi-floor topologies are handled.

  5. An Optimal Controller Architecture for Poset-Causal Systems

    CERN Document Server

    Shah, Parikshit

    2011-01-01

    We propose a novel and natural architecture for decentralized control that is applicable whenever the underlying system has the structure of a partially ordered set (poset). This controller architecture is based on the concept of Moebius inversion for posets, and enjoys simple and appealing separation properties, since the closed-loop dynamics can be analyzed in terms of decoupled subsystems. The controller structure provides rich and interesting connections between concepts from order theory such as Moebius inversion and control-theoretic concepts such as state prediction, correction, and separability. In addition, using our earlier results on H_2-optimal decentralized control for arbitrary posets, we prove that the H_2-optimal controller in fact possesses the proposed structure, thereby establishing the optimality of the new controller architecture.
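
    The Moebius function on which this controller construction rests can be computed for any finite poset directly from its defining recursion. The sketch below does so for the Boolean lattice on two elements, where the result is known in closed form; this is a generic illustration of Moebius inversion, not the controller synthesis itself.

```python
from itertools import product

# A finite poset given by its order relation leq(x, y), meaning x <= y.
# Example poset: the Boolean lattice of subsets of {0, 1}, ordered by inclusion.
ELEMENTS = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]

def leq(x, y):
    return x <= y  # subset inclusion

def moebius(x, y, _cache={}):
    # Defining recursion: mu(x, x) = 1, and for x < y,
    # mu(x, y) = -sum of mu(x, z) over all z with x <= z < y.
    if not leq(x, y):
        return 0
    if x == y:
        return 1
    key = (x, y)
    if key not in _cache:
        _cache[key] = -sum(moebius(x, z) for z in ELEMENTS
                           if leq(x, z) and leq(z, y) and z != y)
    return _cache[key]

# For the Boolean lattice, mu(A, B) = (-1) ** |B - A|, a classical identity.
for a, b in product(ELEMENTS, repeat=2):
    if leq(a, b):
        assert moebius(a, b) == (-1) ** len(b - a)
```

    In the paper, the inversion runs over the poset of subsystems, decoupling the closed-loop dynamics in exactly the way this function decouples sums over a poset.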

  6. Architectural modifications for flexible supercapacitor performance optimization

    Science.gov (United States)

    Keskinen, Jari; Lehtimäki, Suvi; Dastpak, Arman; Tuukkanen, Sampo; Flyktman, Timo; Kraft, Thomas; Railanmaa, Anna; Lupo, Donald

    2016-09-01

    We have developed material and architectural alternatives for flexible supercapacitors and investigated their effect on practical performance. The substrate alternatives include paperboard as well as various polyethylene terephthalate (PET) films and laminates, with aqueous NaCl electrolyte used in all devices. In all the supercapacitors, activated carbon is used as the active layer and graphite ink as the current collector, with various aluminium or copper structures applied to enhance the current collectors' conductivity. The capacitance of the supercapacitors was between 0.05 F and 0.58 F and their equivalent series resistance (ESR) was from <1 Ω to 14 Ω, depending mainly on the current collector structure. Furthermore, leakage current and self-discharge rates were determined and compared for the various architectures. The barrier properties of the supercapacitor encapsulation have a clear correlation with leakage current, as was clearly shown by the lower leakage in devices with an aluminium barrier layer. A cycle life test showed that after 40000 charge-discharge cycles the capacitance decreases by less than 10%.

  7. Microgrid management architecture considering optimal battery dispatch

    Science.gov (United States)

    Paul, Tim George

    Energy management and economic operation of microgrids with energy storage systems at the distribution level have attracted significant research interest in recent years. One of the challenges in this area has been the coordination of energy management functions with decentralized and centralized dispatch. In this thesis, a distributed dispatch algorithm is proposed for a microgrid consisting of a photovoltaic source with energy storage, designed to work with a centralized dispatch algorithm that ensures stability of the microgrid. To this end, first a rule-based dispatch algorithm is formulated, based on maximum resource utilization, which can work in both off-grid and grid-connected modes. Then a fixed-horizon optimization algorithm which minimizes the cost of power taken from the grid is developed. To schedule the battery based on changes in the PV farm, a predictive-horizon optimization methodology is designed. Further, the rule-based and optimization-based dispatch methodologies are linked to minimize the voltage deviations at the microgrid point of common coupling (PCC). The main advantage of the proposed method is that an optimal active power dispatch considering the nominal voltage bandwidth can be initiated for the microgrid in both grid-connected and off-grid modes of operation. The method also allows the grid operator to consider cost-based optimal renewable generation scheduling and/or maximum-power-extraction-based modes of operation, simultaneously or separately, based on grid operating conditions and topologies. Further, the method maintains the PCC voltage within limits during these modes of operation and at the same time ensures that the battery dispatch is optimal.
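
    The thesis' algorithms are not reproduced here, but a rule-based dispatch step of the kind described (maximum resource utilization, off-grid and grid-connected modes) can be sketched as follows. All limits, ratings, and the function signature are hypothetical.

```python
def dispatch(pv_kw, load_kw, soc, grid_connected,
             soc_min=0.1, soc_max=0.9, batt_kw=5.0, capacity_kwh=10.0, dt_h=1.0):
    """One rule-based dispatch step favouring maximum PV utilisation.

    Returns (battery_kw, grid_kw): battery_kw > 0 discharges the battery,
    grid_kw > 0 imports from the grid (negative values export).
    """
    surplus = pv_kw - load_kw
    if surplus >= 0:
        # Excess PV: charge the battery first, export (or curtail) the rest.
        headroom_kw = (soc_max - soc) * capacity_kwh / dt_h
        charge = min(surplus, batt_kw, max(headroom_kw, 0.0))
        return -charge, -(surplus - charge) if grid_connected else 0.0
    # Deficit: discharge the battery first; import the remainder if on grid
    # (off grid, any remaining deficit implies load shedding).
    available_kw = (soc - soc_min) * capacity_kwh / dt_h
    discharge = min(-surplus, batt_kw, max(available_kw, 0.0))
    grid = (-surplus - discharge) if grid_connected else 0.0
    return discharge, grid
```

    A fixed-horizon cost optimizer of the kind described would then override these set-points whenever a cheaper schedule exists over the horizon.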

  8. Optimizing Neural Network Architectures Using Generalization Error Estimators

    DEFF Research Database (Denmark)

    Larsen, Jan

    1994-01-01

    This paper addresses the optimization of neural network architectures. It is suggested to optimize the architecture by selecting the model with minimal estimated average generalization error. We consider a least-squares (LS) criterion for estimating neural network models. In most neural network applications it is impossible to suggest a perfect model, and consequently the ability to handle incomplete models is urgent. A concise derivation of the GEN-estimator is provided, and its qualities are demonstrated by comparative numerical studies.
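
    The GEN-estimator itself is not reproduced in the abstract. As a stand-in, the sketch below selects a model "architecture" (here, the order of a polynomial fitted by least squares) by minimal estimated generalization error, using a simple hold-out estimate in place of the GEN-estimator. All data and sizes are illustrative.

```python
import random

random.seed(3)

# Noisy quadratic data; the "architecture" choice is the polynomial order.
data = [(x / 10.0, (x / 10.0) ** 2 + random.gauss(0, 0.05)) for x in range(-20, 21)]
random.shuffle(data)
train, val = data[:28], data[28:]   # hold-out split for the error estimate

def fit_ls(points, order):
    # Least-squares fit via the normal equations (naive Gaussian elimination,
    # fine for tiny systems).
    n = order + 1
    A = [[sum(x ** (i + j) for x, _ in points) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in points) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for i in reversed(range(n)):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

def mse(points, w):
    return sum((sum(c * x ** i for i, c in enumerate(w)) - y) ** 2
               for x, y in points) / len(points)

# Select the architecture with minimal estimated generalization error.
best_order = min(range(6), key=lambda k: mse(val, fit_ls(train, k)))
```

    Under-parameterized orders are rejected by their large hold-out error; the GEN-estimator plays the analogous role for incomplete neural network models without requiring a hold-out set.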

  10. Heterogeneous architecture to process swarm optimization algorithms

    Directory of Open Access Journals (Sweden)

    Maria A. Dávila-Guzmán

    2014-01-01

    For a few years now, parallel processing has been embedded in personal computers through the inclusion of co-processing units such as graphics processing units, resulting in a heterogeneous platform. This paper presents the implementation of swarm algorithms on this platform to solve several functions from optimization problems, highlighting their inherent parallel processing and distributed control features. In the swarm algorithms, each individual and each problem dimension are parallelized at the granularity of the processing system, which also offers low communication latency between individuals through the embedded processing. To evaluate the potential of swarm algorithms on graphics processing units we have implemented two of them: the particle swarm optimization algorithm and the bacterial foraging optimization algorithm. The algorithms' performance is measured as the speedup of the NVIDIA GeForce GTX480 heterogeneous platform over a typical sequential processing platform; the results show that the particle swarm algorithm obtained up to 36.82x and the bacterial foraging algorithm up to 9.26x. Finally, the effect of increasing the population size is evaluated; we show that both the dispersion and the quality of the solutions decrease despite the high acceleration, since the initial distribution of the individuals can converge to a local optimum.
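
    As a point of reference for the first of the two algorithms, a minimal sequential particle swarm optimization loop is sketched below on a standard benchmark. The GPU parallelization that is the subject of the paper is not shown, and the parameter values are conventional defaults rather than the authors' settings.

```python
import random

random.seed(1)

def sphere(x):
    # Classic benchmark: global minimum 0 at the origin.
    return sum(v * v for v in x)

def pso(f, dim=5, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]               # per-particle best positions
    gbest = min(pbest, key=f)[:]              # swarm-wide best position
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(sphere)
```

    The doubly nested loop over particles and dimensions is exactly the part the paper maps onto GPU threads.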

  11. Optimality theory as a general cognitive architecture

    NARCIS (Netherlands)

    Biró, T.; Gervain, J.

    2011-01-01

    It was exactly 25 years ago that Paul Smolensky introduced Harmony Theory (Smolensky, 1986), a framework that would pursue an exciting, but certainly not straight path through linguistics (namely, Optimality Theory) and other cognitive domains. The goal of this workshop is not so much to look back to this path, but rather to discuss its potential continuation(s).

  14. Discrete optimization in architecture extremely modular systems

    CERN Document Server

    Zawidzki, Machi

    2017-01-01

    This book comprises two parts, both of which explore modular systems: Pipe-Z (PZ) and Truss-Z (TZ), respectively. It presents several methods of creating PZ and TZ structures subjected to discrete optimization. The algorithms presented employ graph-theoretic and heuristic methods. The underlying idea of both systems is to create free-form structures using the minimal number of types of modular elements. PZ is more conceptual, as it forms single-branch mathematical knots with a single type of module. Conversely, TZ is a skeletal system for creating free-form pedestrian ramps and ramp networks among any number of terminals in space. In physical space, TZ uses two types of modules that are mirror reflections of each other. The optimization criteria discussed include: the minimal number of units, maximal adherence to the given guide paths, etc.

  15. Proposing an Optimal Learning Architecture for the Digital Enterprise.

    Science.gov (United States)

    O'Driscoll, Tony

    2003-01-01

    Discusses the strategic role of learning in information age organizations; analyzes parallels between the application of technology to business and the application of technology to learning; and proposes a learning architecture that aligns with the knowledge-based view of the firm and optimizes the application of technology to achieve proficiency…

  17. A Systems Engineering Approach to Architecture Development

    Science.gov (United States)

    Di Pietro, David A.

    2015-01-01

    Architecture development is often conducted prior to system concept design when there is a need to determine the best-value mix of systems that works collectively in specific scenarios and time frames to accomplish a set of mission area objectives. While multiple architecture frameworks exist, they often require use of unique taxonomies and data structures. In contrast, this paper characterizes architecture development using terminology widely understood within the systems engineering community. Using a notional civil space architecture example, it employs a multi-tier framework to describe the enterprise-level architecture and illustrates how results of lower-tier, mission area architectures integrate into the enterprise architecture. It also presents practices for conducting effective mission area architecture studies, including establishing the trade space, developing functions and metrics, evaluating the ability of potential design solutions to meet the required functions, and expediting study execution through the use of iterative design cycles.

  18. Systems approaches to study root architecture dynamics

    Directory of Open Access Journals (Sweden)

    Candela eCuesta

    2013-12-01

    The plant root system is essential for providing anchorage to the soil, supplying minerals and water, and synthesizing metabolites. It is a dynamic organ modulated by external cues such as environmental signals, water and nutrient availability, salinity and others. Lateral roots are initiated from the primary root post-embryonically, after which they progress through discrete developmental stages which can be independently controlled, providing a high level of plasticity during root system formation. Within this review, the main contributions are presented, from the classical forward genetic screens to the more recent high-throughput approaches combined with computer model predictions, dissecting how lateral roots, and thereby root system architecture, are established and developed.

  19. A genetic approach to architectural pattern discovery

    NARCIS (Netherlands)

    Peters, J.G.T.; van der Werf, J.M.E.M.

    2016-01-01

    Architectural patterns represent reusable design of software architecture at a high level of abstraction. They can be used to structure new applications and to recover the modular structure of existing systems. Techniques like Architecture Compliance Checking (ACC) focus on testing whether realised

  20. Topology optimization approaches

    DEFF Research Database (Denmark)

    Sigmund, Ole; Maute, Kurt

    2013-01-01

    Topology optimization has undergone a tremendous development since its introduction in the seminal paper by Bendsøe and Kikuchi in 1988. By now, the concept is developing in many different directions, including “density”, “level set”, “topological derivative”, “phase field”, “evolutionary...

  1. A Rigorous Architectural Approach to Adaptive Software Engineering

    Institute of Scientific and Technical Information of China (English)

    Jeff Kramer; Jeff Magee

    2009-01-01

    The engineering of distributed adaptive software is a complex task which requires a rigorous approach. Software architectural (structural) concepts and principles are highly beneficial in specifying, designing, analysing, constructing and evolving distributed software. A rigorous architectural approach dictates formalisms and techniques that are compositional, components that are context independent and systems that can be constructed and evolved incrementally. This paper overviews some of the underlying reasons for adopting an architectural approach, including a brief "rational history" of our research work, and indicates how an architectural model can potentially facilitate the provision of self-managed adaptive software systems.

  2. Overview and Software Architecture of the Copernicus Trajectory Design and Optimization System

    Science.gov (United States)

    Williams, Jacob; Senent, Juan S.; Ocampo, Cesar; Mathur, Ravi; Davis, Elizabeth C.

    2010-01-01

    The Copernicus Trajectory Design and Optimization System represents an innovative and comprehensive approach to on-orbit mission design, trajectory analysis and optimization. Copernicus integrates state of the art algorithms in optimization, interactive visualization, spacecraft state propagation, and data input-output interfaces, allowing the analyst to design spacecraft missions to all possible Solar System destinations. All of these features are incorporated within a single architecture that can be used interactively via a comprehensive GUI interface, or passively via external interfaces that execute batch processes. This paper describes the Copernicus software architecture together with the challenges associated with its implementation. Additionally, future development and planned new capabilities are discussed. Key words: Copernicus, Spacecraft Trajectory Optimization Software.

  3. Modular production line optimization: The exPLORE architecture

    Directory of Open Access Journals (Sweden)

    Spinellis Diomidis D.

    2000-01-01

    Full Text Available The general design problem in serial production lines concerns the allocation of resources such as the number of servers, their service rates, and buffers given production-specific constraints, associated costs, and revenue projections. We describe the design of exPLOre: a modular, object-oriented, production line optimization software architecture. An abstract optimization module can be instantiated using a variety of stochastic optimization methods such as simulated annealing and genetic algorithms. Its search space is constrained by a constraint checker while its search direction is guided by a cost analyser which combines the output of a throughput evaluator with the business model. The throughput evaluator can be instantiated using Markovian, generalised queueing network methods, a decomposition, or an expansion method algorithm.

  4. Modular production line optimization: The exPLORE architecture

    Directory of Open Access Journals (Sweden)

    Diomidis D. Spinellis

    2001-01-01

    Full Text Available The general design problem in serial production lines concerns the allocation of resources such as the number of servers, their service rates, and buffers given production-specific constraints, associated costs, and revenue projections. We describe the design of exPLOre: a modular, object-oriented, production line optimization software architecture. An abstract optimization module can be instantiated using a variety of stochastic optimization methods such as simulated annealing and genetic algorithms. Its search space is constrained by a constraint checker while its search direction is guided by a cost analyser which combines the output of a throughput evaluator with the business model. The throughput evaluator can be instantiated using Markovian, generalised queueing network methods, a decomposition, or an expansion method algorithm.
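
    The abstract's separation of an abstract optimization module from a throughput evaluator can be illustrated with a simulated annealing instantiation over buffer allocations. The analytic `throughput` stand-in below merely rewards balanced allocations and is not one of the paper's queueing-based evaluators; line size, buffer budget, and annealing schedule are illustrative.

```python
import math
import random

random.seed(2)

STATIONS, TOTAL_BUFFERS = 4, 12  # illustrative line size and buffer budget

def throughput(buffers):
    # Stand-in evaluator: rewards balanced allocations. In the exPLOre
    # architecture this slot is filled by a Markovian, queueing-network,
    # decomposition, or expansion-method evaluator.
    return 1.0 / (1.0 + sum((b - TOTAL_BUFFERS / STATIONS) ** 2 for b in buffers))

def neighbour(buffers):
    # Move one buffer slot between two stations, keeping the total fixed
    # (the constraint checker's role in the architecture).
    b = list(buffers)
    src = random.choice([i for i in range(STATIONS) if b[i] > 0])
    dst = random.randrange(STATIONS)
    b[src] -= 1
    b[dst] += 1
    return b

def anneal(iters=2000, t0=1.0, cooling=0.995):
    state = [TOTAL_BUFFERS] + [0] * (STATIONS - 1)   # deliberately bad start
    best, temp = state, t0
    for _ in range(iters):
        cand = neighbour(state)
        delta = throughput(cand) - throughput(state)
        # Metropolis acceptance: always take improvements, sometimes accept
        # worse moves to escape local optima.
        if delta >= 0 or random.random() < math.exp(delta / temp):
            state = cand
        if throughput(state) > throughput(best):
            best = state
        temp *= cooling
    return best

best = anneal()
```

    Swapping `anneal` for a genetic algorithm while keeping `throughput` and `neighbour` fixed mirrors how the abstract optimization module is instantiated in the architecture.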

  5. Model Construct Based Enterprise Model Architecture and Its Modeling Approach

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In order to support enterprise integration, a model-construct-based enterprise model architecture and its modeling approach are studied in this paper. First, the structural makeup and internal relationships of the enterprise model architecture are discussed. Then, the concept of a reusable model construct (MC), which belongs to the control view and can help to derive other views, is proposed. The modeling approach based on model constructs consists of three steps: reference model architecture synthesis, enterprise model customization, and system design and implementation. Following the MC-based modeling approach, a case study with the background of one-of-a-kind-product machinery manufacturing enterprises is illustrated. It is shown that the proposed model-construct-based enterprise model architecture and modeling approach are practical and efficient.

  6. Geometric constraints for shape and topology optimization in architectural design

    Science.gov (United States)

    Dapogny, Charles; Faure, Alexis; Michailidis, Georgios; Allaire, Grégoire; Couvelas, Agnes; Estevez, Rafael

    2017-02-01

    This work proposes a shape and topology optimization framework oriented towards conceptual architectural design. A particular emphasis is put on the possibility for the user to interfere on the optimization process by supplying information about his personal taste. More precisely, we formulate three novel constraints on the geometry of shapes; while the first two are mainly related to aesthetics, the third one may also be used to handle several fabrication issues that are of special interest in the design of civil structures. The common mathematical ingredient to all three models is the signed distance function to a domain, and its sensitivity analysis with respect to perturbations of this domain; in the present work, this material is extended to the case where the ambient space is equipped with an anisotropic metric tensor. Numerical examples are discussed in two and three space dimensions.
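
    The signed distance function named as the common ingredient is easy to evaluate in closed form for simple domains. The sketch below checks its defining eikonal property |∇d| = 1 for a disc by finite differences (Euclidean metric only; the paper's anisotropic extension is not shown, and the sign convention, negative inside, is one of several used in the literature).

```python
import math

def sd_disc(x, y, r=1.0):
    # Signed distance to the disc of radius r centred at the origin:
    # negative inside the domain, zero on the boundary, positive outside.
    return math.hypot(x, y) - r

# The property exploited by the sensitivity analysis: |grad d| = 1
# away from the skeleton.  Check it by central finite differences.
h = 1e-6
x0, y0 = 0.7, 0.4
gx = (sd_disc(x0 + h, y0) - sd_disc(x0 - h, y0)) / (2 * h)
gy = (sd_disc(x0, y0 + h) - sd_disc(x0, y0 - h)) / (2 * h)
grad_norm = math.hypot(gx, gy)
```

    For general domains the signed distance is computed numerically (e.g. by fast marching) rather than in closed form, but the same unit-gradient property underlies the shape derivatives.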

  7. Dynamic reconfigurable architectures and transparent optimization techniques automatic acceleration of software execution

    CERN Document Server

    Fl, Antonio Carlos

    2010-01-01

    This book provides a clear review of static and dynamic architecture optimization strategies in the field of reconfigurable computing. It includes a number of case studies and a quantitative analysis of the DIM architecture.

  8. Fabrication of microfluidic architectures for optimal flow rate and concentration measurement for lab on chip application

    Science.gov (United States)

    Adam, Tijjani; Hashim, U.

    2017-03-01

    Achieving optimum flow in a microchannel for sensing purposes is challenging. In this study, the fluid sample flows are optimized through the design and characterization of novel microfluidic architectures to achieve the optimal flow rate in the microchannels. The biocompatibility of the polydimethylsiloxane (Sylgard 184 silicone elastomer) polymer used to fabricate the device allows it to serve as a universal fluidic delivery system for biomolecule sensing in various biomedical applications. The study uses the following methodological approaches: designing novel microfluidic architectures by integrating the devices on a single 4-inch silicon substrate, fabricating the designed microfluidic devices using a low-cost soft lithography technique, and characterizing and validating the flow throughput of urine samples in the microchannels by generating pressure gradients through the devices' inlets. The characterization of the urine sample flows in the microchannels showed constant flow throughout the devices.

  9. Parametric Approach in Designing Large-Scale Urban Architectural Objects

    Directory of Open Access Journals (Sweden)

    Arne Riekstiņš

    2011-04-01

    Full Text Available When the disciplines of various science fields converge and develop, new approaches to contemporary architecture arise. The author approaches digital architecture from a parametric viewpoint, revealing its generative capacity, which originates from the aeronautical, naval, automobile and product-design industries. The author also goes explicitly through his design-cycle workflow for testing the latest methodologies in architectural design. The design process steps involved: extrapolating valuable statistical data about the site into three-dimensional diagrams, defining the materiality of what is being produced, presenting structural skin and structure simultaneously, connecting the object with the ground, defining the interior program of the building with floors and possible spaces, the logic of fabrication, and CNC milling of the prototype. The tool developed by the author and reviewed in this article features enormous performative capacity and is applicable to various architectural design scales. Article in English

  10. A tectonic approach to healthcare- and welfare architecture?

    DEFF Research Database (Denmark)

    Tvedebrink, Tenna Doktor Olsen; Hvejsel, Marie Frier

    2017-01-01

    the everyday health and wellbeing of the families of Glasgow. It is an exceptional piece of healthcare architecture, and we argue that a tectonic approach has been applied that bridge material, construction, body, mind and socio-cultural dimensions which can be recalled and applied in future healthcare related...... indicating that the quality of architecture influence health and wellbeing. Hence, this attention also raises new demands for architectural practices and a questioning of the role of architects as researchers, which define the scope of this paper. Because, often these evidence-based studies are positioned...

  11. Architectural framework for resource management optimization over heterogeneous wireless networks

    Science.gov (United States)

    Tselikas, Nikos; Kapellaki, Sofia; Koutsoloukas, Eleftherios; Venieris, Iakovos S.

    2003-11-01

    The main goal of the wireless telecommunication world can be briefly summarized as: "communication anywhere, anytime, over any media, and principally at high data rates." On the other hand, this goal conflicts with the co-existence of many different current and emerging wireless systems covering almost the whole world, since each one follows its own architecture and is built on its own foundations. This results in a heterogeneous picture of the hyper-set of wireless communication systems. The scope of this paper is to present a highly innovative and scalable architectural framework that allows different wireless systems to be interconnected in a common way, able to achieve resource management optimization, improved network performance, and maximum utilization of the networks. It describes a hierarchical management system covering GSM, GPRS, UMTS and WLAN networks individually, as well as a unified, wide wireless telecommunication system encompassing all of the latter, in order to provide enhanced capacity and quality via the resulting network interworking. The main idea is to monitor all the resources using distributed monitoring components that feed a centralized system with alarms, so that a set of management techniques can be selected and applied where needed. In parallel, the centralized system is able to combine these alarms with business models for the efficient use of the available networks according to the type of user, the type of application, and the user's location.

  12. Approach to Evaluation of Maturity Level in Enterprise Architecture

    Directory of Open Access Journals (Sweden)

    Annette Malleuve Martínez

    2015-12-01

    Full Text Available Current business competitiveness is driven by a firm's capacity to adapt and be flexible to change, a context in which enterprise architectures that align technology with business goals become important. One way to achieve such alignment is by applying maturity models to move the architecture from its current state to the desired state. The literature offers maturity models for processes, information systems, frameworks, and even enterprise architecture, generally evaluating organizational performance variables, but it lacks a maturity model that assesses these variables within each dimension of enterprise architecture with an integrated approach, making areas for improvement more specific. This article proposes guidelines for assessing the maturity level of an enterprise architecture, based on theoretical elements drawn from the dimensions of enterprise architecture, together with a set of phases to determine the maturity level of the architecture, using a questionnaire built on the maturity-level measurement patterns proposed by several authors. Finally, results of applying the instrument in some Cuban companies that are leaders in the use of technology are presented.

  13. Computational Approach for Multi Performances Optimization of EDM

    Directory of Open Access Journals (Sweden)

    Yusoff Yusliza

    2016-01-01

    Full Text Available This paper proposes a new computational approach for obtaining the optimal parameters of multi-performance EDM. Regression and artificial neural networks (ANN) are used as the modeling techniques, while a multi-objective genetic algorithm (multiGA) is used as the optimization technique. An L256 orthogonal array is implemented in the selection of the network function and network architecture. Experimental studies are carried out to verify the machining performances suggested by this approach. The highest MRR value obtained from OrthoANN – MPR – MultiGA is 205.619 mg/min and the lowest Ra value is 0.0223 μm.
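The multi-objective selection underlying such an approach rests on Pareto dominance over the two machining objectives (maximize MRR, minimize Ra). A minimal sketch, using hypothetical candidate points rather than the paper's data:

```python
def dominates(a, b):
    """a dominates b when a is at least as good in both objectives
    (maximize MRR, minimize Ra) and strictly better in one."""
    mrr_a, ra_a = a
    mrr_b, ra_b = b
    return mrr_a >= mrr_b and ra_a <= ra_b and (mrr_a > mrr_b or ra_a < ra_b)

def pareto_front(points):
    """Keep only the non-dominated (MRR, Ra) trade-off points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

A multi-objective GA repeatedly applies this kind of dominance test during selection; the surviving front is the set of trade-offs offered to the decision maker.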

  14. A Structured Approach for Reviewing Architecture Documentation

    Science.gov (United States)

    2009-12-01

    existing methods for that already [SARA 2002, Bass 2003, Clements 2002, Dobrica 2002, Babar 2004]. Rather, we are proposing an approach for evaluating the...understand which method to choose and its applicability to a particular situation [ Babar 2004, Dobrica 2002]. There is at least one attempt in the...http://webstore.ansi.org/RecordDetail.aspx?sku=ANSI%2fEIA-748-B (2007). [ Babar 2004] Babar M.A. & Gorton, I. ―Comparison of Scenario-Based Software

  15. Locality Aware Optimal Task Scheduling Algorithm for TriBA——A Novel Scalable Architecture

    Institute of Scientific and Technical Information of China (English)

    KHAN Haroon-Ur-Rashid; SHI Feng

    2008-01-01

    An optimal algorithmic approach to task scheduling for the triplet-based architecture (TriBA) is proposed in this paper. TriBA is considered a high-performance, distributed parallel computing architecture. It consists of a 2D grid of small, programmable processing units, each physically connected to its three neighbors. In a parallel or distributed environment, an efficient assignment of tasks to the processing elements is imperative to achieve fast job turnaround time. Moreover, the sojourn time experienced by each individual job should be minimized. The arriving jobs are comprised of parallel applications, each consisting of multiple independent tasks that must be instantaneously assigned to processor queues as they arrive. The processors independently and concurrently service these tasks. The key scheduling issue is that, when some queue backlogs are small, an incoming job should first spread its tasks to those lightly loaded queues in order to take advantage of the parallel processing gain. Our algorithmic approach achieves optimality in task scheduling by assigning consecutive tasks to a triplet of processors, exploiting locality among tasks. The experimental results show that task allocation to triplets of processing elements is efficient and optimal. Comparison with a well-accepted interconnection strategy, the 2D mesh, demonstrates the effectiveness of our algorithmic approach for TriBA. Finally, we conclude that TriBA can be an efficient interconnection strategy for computation-intensive applications if task assignment is carried out optimally using our algorithmic approach.
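The core idea of grouping consecutive tasks onto a triplet of processors can be sketched as a greedy assignment (an illustrative simplification; the function name and least-loaded tie-breaking are assumptions, not the paper's exact algorithm):

```python
def schedule_tasks(num_tasks, triplet_loads):
    """Greedily assign consecutive tasks, three at a time, to the
    currently least-loaded triplet of processors; keeping consecutive
    tasks on one triplet is the locality idea.  Returns a list of
    (triplet_index, task_group) pairs and updates the loads in place."""
    assignment = []
    for start in range(0, num_tasks, 3):
        t = min(range(len(triplet_loads)), key=lambda i: triplet_loads[i])
        group = list(range(start, min(start + 3, num_tasks)))
        triplet_loads[t] += len(group)
        assignment.append((t, group))
    return assignment
```

Spreading groups to lightly loaded triplets first captures the parallel processing gain the abstract describes.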

  16. An Approach to Software Architecture Description Using UML

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Corry, Aino Vonge; Hansen, Klaus Marius

    This document presents a practical way of describing software architectures using the Unified Modeling Language. The approach is based on a "3+1" structure in which three viewpoints on the described system - module, component & connector, and allocation - are used to describe a solution fo...

  17. An Approach to Secure Mobile Enterprise Architectures

    Directory of Open Access Journals (Sweden)

    Florian Georg Furtmandüller

    2013-01-01

    Full Text Available Due to the increased security awareness of enterprises regarding mobile applications operating with sensitive or personal data, as well as extended legislative regulations (the principle of proportionality), various approaches for delivering secure applications within enterprise mobile environments (extended two-factor authentication, multi-factor authentication, or virtual private networks) have been developed. Within mobile applications it is not sufficient to rely on the security measures of the individual components or interested parties; an overall security concept has to be established, which requires the interaction of several technologies, standards and system components. These include physical safeguards on the device itself as well as on the network layer (such as integrated security components), organizational measures (such as employee agreements, contract clauses and insurance coverage), but also technical protection at the application level (e.g. password protection, encryption, secure containers). The purpose of this paper is to summarize the challenges and practical successes, providing best practices to achieve appropriate risk coverage for mobile applications. I present a use case in order to prove the concept in actual work settings and to demonstrate the adaptability of the approach.

  18. Design of silicon brains in the nano-CMOS era: spiking neurons, learning synapses and neural architecture optimization.

    Science.gov (United States)

    Cassidy, Andrew S; Georgiou, Julius; Andreou, Andreas G

    2013-09-01

    We present a design framework for neuromorphic architectures in the nano-CMOS era. Our approach to the design of spiking neurons and STDP learning circuits relies on parallel computational structures where neurons are abstracted as digital arithmetic logic units and communication processors. Using this approach, we have developed arrays of silicon neurons that scale to millions of neurons in a single state-of-the-art Field Programmable Gate Array (FPGA). We demonstrate the validity of the design methodology through the implementation of cortical development in a circuit of spiking neurons, STDP synapses, and neural architecture optimization. Copyright © 2013 Elsevier Ltd. All rights reserved.
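As a rough illustration of the building blocks named in the abstract, here is a minimal discrete-time leaky integrate-and-fire step and a pair-based STDP weight update. These are textbook forms with assumed constants, not the authors' FPGA implementation:

```python
import math

def lif_step(v, input_current, v_thresh=1.0, leak=0.9):
    """One discrete-time leaky integrate-and-fire update; returns the
    new membrane potential and whether the neuron spiked (and reset)."""
    v = v * leak + input_current
    if v >= v_thresh:
        return 0.0, True
    return v, False

def stdp_update(w, dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike leads
    the postsynaptic spike (dt > 0), depress when it lags (dt < 0)."""
    if dt > 0:
        return w + a_plus * math.exp(-dt / tau)
    return w - a_minus * math.exp(dt / tau)
```

Abstracting neurons as arithmetic updates like these is what lets the design scale as parallel digital logic on an FPGA.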

  19. Efficient VLSI architecture of CAVLC decoder with power optimized

    Institute of Scientific and Technical Information of China (English)

    CHEN Guang-hua; HU Deng-ji; ZHANG Jin-yi; ZHENG Wei-feng; ZENG Wei-min

    2009-01-01

    This paper presents an efficient VLSI architecture of a power-optimized context-based adaptive variable length coding (CAVLC) decoder for the H.264/advanced video coding (AVC) standard. In the proposed design, according to the regularity of the codewords, a first-one detector is used to solve the low-efficiency and high-power-dissipation problems of the traditional table-searching method. Considering the relevance of the data used in the run_before decoding process, arithmetic operations are combined with a finite state machine (FSM), which achieves higher decoding efficiency. Following the CAVLC decoding flow, clock gating is employed at the module level and the register level respectively, which reduces the overall dynamic power dissipation by 43%. The proposed design can decode every syntax element in one clock cycle. When synthesized at a clock constraint of 100 MHz, the design costs 11,300 gates in a 0.25 μm CMOS technology, which meets the demand of real-time decoding in the H.264/AVC standard.

  20. Optimized batteries for cars with dual electrical architecture

    Science.gov (United States)

    Douady, J. P.; Pascon, C.; Dugast, A.; Fossati, G.

    During recent years, the increase in car electrical equipment has led to many problems with traditional starter batteries (such as cranking failure due to flat batteries, battery cycling etc.). The main causes of these problems are the double function of the automotive battery (starter and service functions) and the difficulties in designing batteries well adapted to these two functions. In order to solve these problems a new concept — the dual-concept — has been developed with two separate batteries: one battery is dedicated to the starter function and the other is dedicated to the service function. Only one alternator charges the two batteries with a separation device between the two electrical circuits. The starter battery is located in the engine compartment while the service battery is located at the rear of the car. From the analysis of new requirements, battery designs have been optimized regarding the two types of functions: (i) a small battery with high specific power for the starting function; for this function a flooded battery with lead-calcium alloy grids and thin plates is proposed; (ii) for the service function, modified sealed gas-recombinant batteries with cycling and deep-discharge ability have been developed. The various advantages of the dual-concept are studied in terms of starting reliability, battery weight, and voltage supply. The operating conditions of the system and several dual electrical architectures have also been studied in the laboratory and the car. The feasibility of the concept is proved.

  1. Infill architecture: Design approaches for in-between buildings and 'bond' as integrative element

    Directory of Open Access Journals (Sweden)

    Alfirević Đorđe

    2015-01-01

    Full Text Available The aim of the paper is to argue that the two key elements in achieving good-quality architectural infill in the immediate surroundings are the selection of an optimal creative method of infill architecture and the adequate application of 'the bond' as an integrative element. The success and quality of architectural infill depend mainly on the assessment of various circumstances, but also on the professionalism, creativity, sensibility and, finally, innovativeness of the architect. For the infill procedure to be carried out adequately, it is necessary to assess the quality of the current surroundings into which the object will be integrated, and then to choose the creative approach that allows the object to establish an optimal dialogue with its surroundings. Broadly, both theory and practice distinguish three main creative approaches to infill objects: (a) the mimetic approach (mimesis), (b) the associative approach and (c) the contrasting approach. Which of these approaches is chosen depends primarily on whether the existing physical structure into which the object is being infilled is 'distinct', 'specific' or 'indistinct', but it also depends on the inclination of the designer. 'The bond' is a term which in architecture denotes an element or zone of one object, but in some instances it can refer to a whole object articulated in a specific way, with the aim of resolving the visual conflict that often arises when existing objects clash with a newly designed or reconstructed object. This paper provides an in-depth analysis of different types of bonds, such as 'direction as bond', 'cornice as bond', 'structure as bond', 'texture as bond' and 'material as bond', which indicate the complexity and multiple layers of the design process of object interpolation.

  2. Development of Enterprise Architecture using a Framework with Agile Approach

    Directory of Open Access Journals (Sweden)

    Fanny Sandoval

    2017-02-01

    Full Text Available The development of an enterprise architecture (EA) in large organizations is complex. Thus, it is important that the implementation of EA creates value in the early stages of the process. This document proposes an EA framework design with an agile approach based on TOGAF, with the objective of streamlining the EA process. The framework presents a new design that takes into account current regulations, the target line of the organization, and the principles proposed by the agile approach to EA. The objectives of each phase of TOGAF's architecture development method (ADM) are matched with the requirements of the organization to extract only those that are aligned with the business. The deliverables proposed by TOGAF are analyzed with the goal of obtaining, integrating, and reducing the documentation in the implementation and modeling phases. This reduction allows more flexibility, less impact on the processes, and a reduction in development time and costs.

  3. Advanced Topology Optimization Methods for Conceptual Architectural Design

    DEFF Research Database (Denmark)

    Aage, Niels; Amir, Oded; Clausen, Anders

    2015-01-01

    in topological optimization: Interactive control and continuous visualization; embedding flexible voids within the design space; consideration of distinct tension / compression properties; and optimization of dual material systems. In extension, optimization procedures for skeletal structures such as trusses...

  4. A Disciplined Architectural Approach to Scaling Data Analysis for Massive, Scientific Data

    Science.gov (United States)

    Crichton, D. J.; Braverman, A. J.; Cinquini, L.; Turmon, M.; Lee, H.; Law, E.

    2014-12-01

    Data collections across remote sensing and ground-based instruments in astronomy, Earth science, and planetary science are outpacing scientists' ability to analyze them. Furthermore, the distribution, structure, and heterogeneity of the measurements themselves pose challenges that limit the scalability of data analysis using traditional approaches. Methods for developing science data processing pipelines, distributing scientific datasets, and performing analysis will require innovative approaches that integrate cyber-infrastructure, algorithms, and data into more systematic approaches that can more efficiently compute and reduce data, particularly distributed data. This requires the integration of computer science, machine learning, statistics and domain expertise to identify scalable architectures for data analysis. The size of data returned from Earth science observing satellites and the magnitude of climate model output are predicted to grow into the tens of petabytes, challenging current data analysis paradigms. The same kind of growth is present in astronomy and planetary science data. One of the major challenges in data science and related disciplines is defining new approaches to scaling systems and analysis in order to increase scientific productivity and yield. Specific needs include: 1) identification of optimized system architectures for analyzing massive, distributed data sets; 2) algorithms for systematic analysis of massive data sets in distributed environments; and 3) the development of software infrastructures that are capable of performing massive, distributed data analysis across a comprehensive data science framework. NASA/JPL has begun an initiative in data science to address these challenges. Our goal is to evaluate how scientific productivity can be improved through optimized architectural topologies that identify how to deploy and manage the access, distribution, computation, and reduction of massive, distributed data, while

  5. POWER OPTIMIZED DATAPATH UNITS OF HYBRID EMBEDDED CORE ARCHITECTURE USING CLOCK GATING TECHNIQUE

    National Research Council Canada - National Science Library

    T.Subhashini; M.Kamaraju

    2015-01-01

    ...% of the total power dissipation. The main goal of this work is to implement a prototype power optimized datapath unit and ALU of Hybrid Embedded Controller Architecture targeted on to the FPGA chip and analyze the power consumption...

  6. A software architecture centric self-adaptation approach for Internetware

    Institute of Scientific and Technical Information of China (English)

    MEI Hong; HUANG Gang; LAN Ling; LI JunGuo

    2008-01-01

    Being one of the basic features of Internetware, self-adaptation means that the software system can monitor its runtime state and behavior and adjust them when necessary according to pre-defined policies. Focusing on the three fundamental issues of self-adaptation, namely scope, operability and trustworthiness, a software architecture (SA) centric approach for Internetware's self-adaptation is presented in this paper. All of the self-adaptive actions, i.e. monitoring, analyzing, planning and executing, are performed based on SA. In detail, the runtime state and behavior of Internetware are represented and changed in the form of runtime software architecture. The knowledge for self-adaptation is captured, organized and reasoned about in the form of SA, so that automatic analysis and decision-making are achieved.
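The monitor-analyze-plan-execute cycle the abstract describes can be sketched as a tiny policy-driven loop. This is illustrative only; the class and policy names are invented, and a real runtime software architecture model would be far richer:

```python
class SelfAdaptiveComponent:
    """Minimal monitor-analyze-plan-execute loop: policies map a
    predicate over the monitored state to a reconfiguration action."""

    def __init__(self, policies):
        self.policies = policies
        self.state = {}

    def monitor(self, metrics):
        """Monitoring: fold fresh runtime metrics into the state."""
        self.state.update(metrics)

    def adapt(self):
        """Analyze the state against each policy and return the
        actions that would be executed."""
        return [action for predicate, action in self.policies
                if predicate(self.state)]
```

In the SA-centric approach, the monitored state and the actions would both be expressed over an architecture model rather than a flat dictionary.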

  7. A software architecture centric engineering approach for Internetware

    Institute of Scientific and Technical Information of China (English)

    MEI Hong; HUANG Gang; ZHAO Haiyan; JIAO Wenpin

    2006-01-01

    As a new software paradigm evolved by the Internet, Internetware brings many challenges for the traditional software development methods and techniques. Though architecture-based component composition (ABC) approach is originated in the traditional software paradigm, it supports the engineering of Internetware effectively due to its philosophy, rationales and mechanisms. ABC has three major contributions to the engineering of Internetware in detail. First, the feature oriented domain modeling method can structure the "disordered" "software entities" to "ordered Internetware" bottom-up in the problem space. Second, the architecture centric design and analysis method can support the development of self-adaptive Internetware. Third, the component operating platform is a reflective and self-adaptive middleware that not only provides Internetware with a powerful and flexible runtime infrastructure but also enables the self-adaptation of the structure and individual entities of Internetware.

  8. Power optimization of digital baseband WCDMA receiver components on algorithmic and architectural level

    Directory of Open Access Journals (Sweden)

    M. Schämann

    2008-05-01

    Full Text Available High data rates combined with high mobility represent a challenge for the design of cellular devices. Advanced algorithms are required, which result in higher complexity, more chip area and increased power consumption. However, this conflicts with the limited power supply of mobile devices.

    This presentation discusses the application of an HSDPA receiver which has been optimized regarding power consumption, with the focus on the algorithmic and architectural levels. On the algorithmic level, the Rake combiner, Prefilter-Rake equalizer and MMSE equalizer are compared regarding their BER performance. Both equalizer approaches provide a significant increase in performance for high data rates compared to the Rake combiner, which is commonly used for lower data rates. For both equalizer approaches several adaptive algorithms are available which differ in complexity and convergence properties. To identify the algorithm which achieves the required performance with the lowest power consumption, the algorithms have been investigated using SystemC models regarding their performance and arithmetic complexity. Additionally, for the Prefilter-Rake equalizer the power estimates of a modified Griffith (LMS) and a Levinson (RLS) algorithm have been compared with the tool ORINOCO supplied by ChipVision. The accuracy of this tool has been verified with a scalable architecture of the UMTS channel estimation, described both in SystemC and VHDL, targeting a 130 nm CMOS standard cell library.

    An architecture combining all three approaches with an adaptive control unit is presented. The control unit monitors the current condition of the propagation channel and adjusts receiver parameters such as filter size and oversampling ratio to minimize power consumption while maintaining the required performance. The optimization strategies result in a reduction of the number of arithmetic operations of up to 70% for single components, which leads to an

  9. Delay Optimized Architecture for On-Chip Communication

    Institute of Scientific and Technical Information of China (English)

    Sheraz Anjum; Jie Chen; Pei-Pei Yue; Jian Liu

    2009-01-01

    Networks-on-chip (NoC), a new system-on-chip (SoC) paradigm, has become a major focus of research for many groups during the last few years. Among all the NoC architectures that have been proposed so far, the 2D mesh has proved to be the best architecture for implementation due to its regular and simple interconnection structure. In this paper, we propose a new interconnect architecture called the 2D diagonal mesh (2DDgl-Mesh) for on-chip communication. The 2DDgl-Mesh is almost identical to the traditional 2D mesh in terms of cost, area, and implementation, but it can outperform the latter in delay. Both architectures are compared using NS-2 (a network simulator) and CINSIM (a component-based interconnection simulator) under the same traffic models and parametric conditions. The comparison results show that under the proposed architecture, packets can almost always be routed to their destinations in less time. In addition, our architecture can sometimes perform better than the 2D mesh in drop ratio for certain fixed traffic models.
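Under the idealized assumption that a diagonal link lets a packet advance along both axes in a single hop, the delay advantage reduces to Manhattan versus Chebyshev distance. This is a simplification for intuition, not the simulated model from the paper:

```python
def mesh_hops(src, dst):
    """Minimal hop count with XY routing on a plain 2D mesh:
    the Manhattan distance."""
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

def diagonal_mesh_hops(src, dst):
    """If a diagonal link advances along both axes in one hop, the
    minimal hop count drops to the Chebyshev distance."""
    return max(abs(src[0] - dst[0]), abs(src[1] - dst[1]))
```

For any source/destination pair the diagonal variant needs no more hops than the plain mesh, which is consistent with the reported delay improvement.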

  10. Approach for Mitigating Pressure Garment Design Risks in a Mobile Lunar Surface Systems Architecture

    Science.gov (United States)

    Aitchison, Lindsay

    2009-01-01

    The stated goals of the 2004 Vision for Space Exploration focus on establishing a human presence throughout the solar system, beginning with a permanent human presence on the Moon. However, the precise objectives to be accomplished on the lunar surface and the optimal system architecture for achieving those objectives have been topics of much debate since the inception of the Constellation Program. Two basic styles of system architecture are being traded at the programmatic level: a traditional large outpost, which would focus on techniques for survival off our home planet and a greater depth of exploration within one area, and a mobile approach, akin to a series of nomadic camps, which would allow a greater breadth of exploration opportunities. The traditional outpost philosophy is well within the understood pressure garment design space with respect to developing interfaces and operational life-cycle models. The mobile outpost, however, involves many unknowns with respect to pressure garment performance and reliability that could dramatically affect the cost and schedule risks associated with the Constellation space suit system. This paper provides an overview of the concepts being traded for a mobile architecture from the operations and hardware implementation perspective, describes the primary risks to the Constellation pressure garment associated with each of the concepts, and summarizes the approach necessary to quantify the pressure garment design risks so that the Constellation Program can make informed decisions when deciding on an overall lunar surface systems architecture.

  11. Comparison of Human Exploration Architecture and Campaign Approaches

    Science.gov (United States)

    Goodliff, Kandyce; Cirillo, William; Mattfeld, Bryan; Stromgren, Chel; Shyface, Hilary

    2015-01-01

    As part of its overall focus on space exploration, the National Aeronautics and Space Administration (NASA) continues to evaluate potential approaches for sending humans beyond low Earth orbit (LEO). In addition, various external organizations are studying options for beyond-LEO exploration. Recent studies include NASA's Evolvable Mars Campaign and Design Reference Architecture (DRA) 5.0; JPL's Minimal Mars Architecture; the Inspiration Mars mission; the Mars One campaign; and the Global Exploration Roadmap (GER). Each of these potential exploration constructs applies unique methods, architectures, and philosophies to human exploration. It is beneficial to compare potential approaches in order to better understand the range of options available for exploration. Since most of these studies were conducted independently, the approaches, ground rules, and assumptions used to conduct the analysis differ. In addition, the outputs and metrics presented for each construct differ substantially. This paper describes the results of an effort to compare and contrast the results of these different studies under a common set of metrics. The paper first presents a summary of each of the proposed constructs, including a description of the overall approach and philosophy for exploration. Utilizing a common set of metrics for comparison, the paper presents the results of an evaluation of the potential benefits, critical challenges, and uncertainties associated with each construct. The analysis framework includes a detailed evaluation of key characteristics of each construct. These include, but are not limited to: a description of the technology and capability developments required to enable the construct and the uncertainties associated with these developments; an analysis of significant operational and programmatic risks associated with the construct; and an evaluation of the extent to which exploration is enabled by the construct, including the destinations

  12. Information security architecture an integrated approach to security in the organization

    CERN Document Server

    Killmeyer, Jan

    2000-01-01

    An information security architecture is made up of several components. Each component in the architecture focuses on establishing acceptable levels of control. These controls are then applied to the operating environment of an organization. Functionally, information security architecture combines technical, practical, and cost-effective solutions to provide an adequate and appropriate level of security. Information Security Architecture: An Integrated Approach to Security in the Organization details the five key components of an information security architecture. It provides C-level executives

  13. Efficient high-precision matrix algebra on parallel architectures for nonlinear combinatorial optimization

    KAUST Repository

    Gunnels, John

    2010-06-01

    We provide a first demonstration of the idea that matrix-based algorithms for nonlinear combinatorial optimization problems can be efficiently implemented. Such algorithms were mainly conceived by theoretical computer scientists for proving efficiency. We are able to demonstrate the practicality of our approach by developing an implementation on a massively parallel architecture, and exploiting scalable and efficient parallel implementations of algorithms for ultra high-precision linear algebra. Additionally, we have delineated and implemented the necessary algorithmic and coding changes required in order to address problems several orders of magnitude larger, dealing with the limits of scalability from memory footprint, computational efficiency, reliability, and interconnect perspectives. © Springer and Mathematical Programming Society 2010.

  14. Smooth Optimization Approach for Sparse Covariance Selection

    OpenAIRE

    Lu, Zhaosong

    2009-01-01

    In this paper we first study a smooth optimization approach for solving a class of nonsmooth strictly concave maximization problems whose objective functions admit smooth convex minimization reformulations. In particular, we apply Nesterov's smooth optimization technique [Y.E. Nesterov, Dokl. Akad. Nauk SSSR, 269 (1983), pp. 543--547; Y. E. Nesterov, Math. Programming, 103 (2005), pp. 127--152] to their dual counterparts that are smooth convex problems. It is shown that the resulting approach...

  15. System Optimization Using a Parallel Stochastic Approach

    Directory of Open Access Journals (Sweden)

    ZAPLATILEK, K.

    2013-05-01

    Full Text Available This paper describes an original stochastic algorithm based on a parallel approach. The algorithm is especially suitable for the optimization of real technical systems. A few independent pseudorandom generators are used; they generate independent variable vectors along all of the optimized system's axes. Local optimal values are used to define a final pseudorandom generator with a narrower interval around the global optimum. Theoretical foundations are introduced and a few practical experiments are presented. The described method is also suitable for quality classification of pseudorandom generators using the selected RGB color scheme. The main advantages of this approach are discussed. The algorithm was developed in the MATLAB environment.
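
    The two-stage scheme in the abstract can be sketched in a few lines; everything below (function names, sample counts, the shrink factor) is an illustrative assumption rather than the paper's MATLAB implementation:

```python
import random

def stochastic_search(f, bounds, coarse=2000, fine=2000, shrink=0.1, seed=0):
    """Two-stage random search (illustrative sketch): independent
    pseudorandom generators sample along each optimized axis, then a
    final generator sweeps a narrower interval around the best point
    found so far."""
    gens = [random.Random(seed + i) for i in range(len(bounds))]  # one per axis
    best_x, best_v = None, float("inf")
    for _ in range(coarse):
        x = [g.uniform(lo, hi) for g, (lo, hi) in zip(gens, bounds)]
        v = f(x)
        if v < best_v:
            best_x, best_v = x, v
    # narrow each axis interval around the coarse optimum
    for _ in range(fine):
        x = [g.uniform(max(lo, c - shrink * (hi - lo)),
                       min(hi, c + shrink * (hi - lo)))
             for g, (lo, hi), c in zip(gens, bounds, best_x)]
        v = f(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x, best_v
```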

  16. Huber Optimization of Circuits: A Robust Approach

    DEFF Research Database (Denmark)

    Bandler, J. W.; Biernacki, R.; Chen, S.

    1993-01-01

    The authors introduce an approach to robust circuit optimization using Huber functions, both two-sided and one-sided. They compare Huber optimization with l1, l2, and minimax methods in the presence of faults, large and small measurement errors, bad starting points, and statistical...... uncertainties. They demonstrate FET statistical modeling, multiplexer optimization, analog fault location, and data fitting. They extend the Huber concept by introducing a one-sided Huber function for large-scale optimization. For large-scale problems, the designer often attempts, by intuition, a preliminary...
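
    The Huber functions themselves are standard and compact; the sketch below states both variants (the threshold k is an illustrative parameter, and the one-sided form charges only positive residuals, e.g. specification violations):

```python
def huber(r, k=1.0):
    """Two-sided Huber function: quadratic for |r| <= k, linear beyond.
    Robust to large errors (outliers) while staying smooth near zero."""
    a = abs(r)
    return 0.5 * r * r if a <= k else k * (a - 0.5 * k)

def huber_one_sided(r, k=1.0):
    """One-sided variant: zero for r <= 0 (a satisfied spec costs nothing),
    Huber penalty only for violations r > 0."""
    return 0.0 if r <= 0 else huber(r, k)
```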

  18. [Providing the Optimal Insolation of a Photobiological Architectural Shell for Microalgae Cultivation].

    Science.gov (United States)

    Ermachenko, P A; Buzalo, N S; Perevjazka, D S

    2016-01-01

    Translucent architectural shells with microalgae are considered as an element of local photobiological treatment facilities integrated in the urban environment. A mathematical microalgae growth model for predicting insolation and temperature behaviour in the medium during microalgae cultivation under dynamically fluctuating natural lighting is presented. The task of optimizing the parameters of the photobiological architectural shell with respect to temperature and insolation is set. The results of numerical experiments for the model problem are shown.

  19. A Grid Model for the Design, Coordination and Dimensional Optimization in Architecture.

    OpenAIRE

    Léonard, Daniel; Malcurat, Olivier

    2008-01-01

    Our article treats layout grids in architecture and their use by architects for the purposes not only of design but also of dimensional coordination and optimization. It initially proposes to define an architectural grid model as well as a set of operations to construct them. Then, it discusses this model and its capacity to assist the designers in their everyday work of (re)dimensioning. The architectural grid, as an instrument of design, is omnipresent in the work of architects whatever ...

  20. Dynamical System Approaches to Combinatorial Optimization

    DEFF Research Database (Denmark)

    Starke, Jens

    2013-01-01

    Several dynamical system approaches to combinatorial optimization problems are described and compared. These include dynamical systems derived from penalty methods; the approach of Hopfield and Tank; self-organizing maps, that is, Kohonen networks; coupled selection equations; and hybrid methods....... Many of them are investigated analytically, and the costs of the solutions are compared numerically with those of solutions obtained by simulated annealing and the costs of a global optimal solution. Using dynamical systems, a solution to the combinatorial optimization problem emerges in the limit...... of large times as an asymptotically stable point of the dynamics. The obtained solutions are often not globally optimal but good approximations of it. Dynamical system and neural network approaches are appropriate methods for distributed and parallel processing. Because of the parallelization...
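
    A minimal instance of the penalty-method family described above (illustrative, not taken from the chapter): selecting the cheapest of n options is relaxed to a continuous energy E(x) = sum(c_i x_i) + mu (sum(x) - 1)^2 + nu sum(x_i (1 - x_i)), and the solution emerges as an asymptotically stable, near-binary point of the clipped gradient flow:

```python
def penalty_flow(costs, steps=4000, dt=0.01, mu=4.0, nu=2.0):
    """Euler-integrated gradient descent on the penalty-relaxed energy for
    the toy problem "pick the cheapest of n options" (x binary, sum x = 1).
    Positions are clipped to [0, 1]; the x_i(1-x_i) term drives the state
    toward a binary corner."""
    n = len(costs)
    x = [1.0 / n] * n  # start at the barycenter of the simplex
    for _ in range(steps):
        s = sum(x)
        grad = [costs[i] + 2.0 * mu * (s - 1.0) + nu * (1.0 - 2.0 * x[i])
                for i in range(n)]
        x = [min(1.0, max(0.0, x[i] - dt * grad[i])) for i in range(n)]
    return x
```

As the abstract notes, such flows find good but not necessarily globally optimal points; on this convex-cost toy problem the cheapest option is recovered.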

  1. Computational study of Wolff's law with trabecular architecture in the human proximal femur using topology optimization.

    Science.gov (United States)

    Jang, In Gwun; Kim, Il Yong

    2008-08-07

    In the field of bone adaptation, it is believed that the morphology of bone is affected by its mechanical loads, and bone has self-optimizing capability; this phenomenon is well known as Wolff's law of the transformation of bone. In this paper, we simulated trabecular bone adaptation in the human proximal femur using topology optimization and quantitatively investigated the validity of Wolff's law. Topology optimization iteratively distributes material in a design domain producing optimal layout or configuration, and it has been widely and successfully used in many engineering fields. We used a two-dimensional micro-FE model with 50 μm pixel resolution to represent the full trabecular architecture in the proximal femur, and performed topology optimization to study the trabecular morphological changes under three loading cases in daily activities. The simulation results were compared to the actual trabecular architecture in previous experimental studies. We discovered that there are strong similarities in trabecular patterns between the computational results and observed data in the literature. The results showed that the strain energy distribution of the trabecular architecture became more uniform during the optimization; from the viewpoint of structural topology optimization, this bone morphology may be considered as an optimal structure. We also showed that the non-orthogonal intersections were constructed to support daily activity loadings in the sense of optimization, as opposed to Wolff's drawing.

  2. Modeling and optimization of multiple unmanned aerial vehicles system architecture alternatives.

    Science.gov (United States)

    Qin, Dongliang; Li, Zhifei; Yang, Feng; Wang, Weiping; He, Lei

    2014-01-01

    Unmanned aerial vehicle (UAV) systems have already been used in civilian activities, although in a very limited way. Confronted with different types of tasks, multiple UAVs usually need to be coordinated, which can be formulated as a multi-UAV system architecture problem. Based on the general system architecture problem, a specific description of the multi-UAV system architecture problem is presented. The corresponding optimization problem and an efficient genetic algorithm with a refined crossover operator (GA-RX) are then proposed to accomplish the architecting process iteratively in the rest of this paper. The availability and effectiveness of the overall method are validated using two simulations based on two different scenarios.

  3. DP-FPGA: An FPGA Architecture Optimized for Datapaths

    Directory of Open Access Journals (Sweden)

    Don Cherepacha

    1996-01-01

    Full Text Available This paper presents a new Field-Programmable Gate Array (FPGA architecture which reduces the density gap between FPGAs and Mask-Programmed Gate Arrays (MPGAs for datapath oriented circuits. This is primarily achieved by operating on data as a number of identically programmed four-bit slices. The interconnection network incorporates distinct sets of resources for routing control and data signals. These features reduce circuit area by sharing programming bits among four-bit slices, reducing the total number of storage cells required.

  4. Shape and Topology Optimization Methodologies Used in Architectural Design

    DEFF Research Database (Denmark)

    Frier, Christian; Kirkegaard, Poul Henning; Christiansen, Karl

    2004-01-01

    Architects and engineers both employ iterative multi-stage design procedures, starting with initial conceptual design and progressing to more detailed final design. The conceptual design for an architect can be very abstract, with content that might be more poetic than geometric. On the other hand, the conceptual design for a structural engineer tends to be more concrete in nature, i.e. the choice between an arch and a suspension structure, between concrete and steel. Still, these differences tend to vanish in some of the most successful examples of integrated architectural and engineering design.

  5. A Systems Approach to Developing an Affordable Space Ground Transportation Architecture using a Commonality Approach

    Science.gov (United States)

    Garcia, Jerry L.; McCleskey, Carey M.; Bollo, Timothy R.; Rhodes, Russel E.; Robinson, John W.

    2012-01-01

    This paper presents a structured approach for achieving a compatible Ground System (GS) and Flight System (FS) architecture that is affordable, productive and sustainable. This paper is an extension of the paper titled "Approach to an Affordable and Productive Space Transportation System" by McCleskey et al. This paper integrates systems engineering concepts and operationally efficient propulsion system concepts into a structured framework for achieving GS and FS compatibility in the mid-term and long-term time frames. It also presents a functional and quantitative relationship for assessing system compatibility called the Architecture Complexity Index (ACI). This paper: (1) focuses on systems engineering fundamentals as it applies to improving GS and FS compatibility; (2) establishes mid-term and long-term spaceport goals; (3) presents an overview of transitioning a spaceport to an airport model; (4) establishes a framework for defining a ground system architecture; (5) presents the ACI concept; (6) demonstrates the approach by presenting a comparison of different GS architectures; and (7) presents a discussion on the benefits of using this approach with a focus on commonality.

  6. Optimized Architecture-centric Function Points’ Clustering and Aggregating About Service Flow

    Directory of Open Access Journals (Sweden)

    Xiaona Xia

    2013-05-01

    Full Text Available This study puts forward the concept of a "cluster" of service function points. To unify timely information capture, logical optimization, and requirement implementation, relationships in the service topology are formed adaptively. By tracking the architecture goal and building the distributed clustering logic in a timely manner, "cluster heads" and "cluster nodes" are serialized and an optimal architecture-centric "cluster group" is formed, achieving balanced, goal-driven adjustment of services across the service topology network.

  7. A Global Optimization Approach to Quantum Mechanics

    OpenAIRE

    Huang, Xiaofei

    2006-01-01

    This paper presents a global optimization approach to quantum mechanics, which describes the most fundamental dynamics of the universe. It suggests that the wave-like behavior of (sub)atomic particles could be the critical characteristic of a global optimization method deployed by nature so that (sub)atomic systems can find their ground states corresponding to the global minimum of some energy function associated with the system. The classic time-independent Schrodinger equation is shown to b...

  8. Techniques for developing reliability-oriented optimal microgrid architectures

    Science.gov (United States)

    Patra, Shashi B.

    2007-12-01

    Alternative generation technologies such as fuel cells, micro-turbines, solar etc. have been the focus of active research in the past decade. These energy sources are small and modular. Because of these advantages, these sources can be deployed effectively at or near locations where they are actually needed, i.e. in the distribution network. This is in contrast to the traditional electricity generation which has been "centralized" in nature. The new technologies can be deployed in a "distributed" manner. Therefore, they are also known as Distributed Energy Resources (DER). It is expected that the use of DER, will grow significantly in the future. Hence, it is prudent to interconnect the energy resources in a meshed or grid-like structure, so as to exploit the reliability and economic benefits of distributed deployment. These grids, which are smaller in scale but similar to the electric transmission grid, are known as "microgrids". This dissertation presents rational methods of building microgrids optimized for cost and subject to system-wide and locational reliability guarantees. The first method is based on dynamic programming and consists of determining the optimal interconnection between microsources and load points, given their locations and the rights of way for possible interconnections. The second method is based on particle swarm optimization. This dissertation describes the formulation of the optimization problem and the solution methods. The applicability of the techniques is demonstrated in two possible situations---design of a microgrid from scratch and expansion of an existing distribution system.
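
    Of the two techniques named, particle swarm optimization is the easier to sketch generically; the cost function below is a stand-in, since the dissertation's actual objective (interconnection cost subject to reliability guarantees) is not reproduced here:

```python
import random

def pso(f, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, the swarm tracks a global best, and velocities blend inertia with
    attraction toward both. Positions are clipped to the search bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = list(pbest[g]), pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]), bounds[d][1])
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = list(xs[i]), v
                if v < gval:
                    gbest, gval = list(xs[i]), v
    return gbest, gval
```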

  9. Optimizing a High Energy Physics (HEP) Toolkit on Heterogeneous Architectures

    CERN Document Server

    Lindal, Yngve Sneen; Jarp, Sverre

    2011-01-01

    A desired trend within high energy physics is to increase particle accelerator luminosities, leading to the production of more collision data and higher probabilities of finding interesting physics results. A central data analysis technique used to determine whether results are interesting or not is the maximum likelihood method, and the corresponding evaluation of the negative log-likelihood, which can be computationally expensive. As the amount of data grows, it is important to benefit from the parallelism in modern computers. This, in essence, means exploiting vector registers and all available cores on CPUs, as well as utilizing co-processors such as GPUs. This thesis describes the work done to optimize and parallelize a prototype of a central data analysis tool within the high energy physics community. The work consists of optimizations for multicore processors and GPUs, as well as a mechanism to balance the load between both CPUs and GPUs with the aim to fully exploit the power of modern commodity computers. W...

  10. Modeling, analysis and optimization of network-on-chip communication architectures

    CERN Document Server

    Ogras, Umit Y

    2013-01-01

    Traditionally, design space exploration for Systems-on-Chip (SoCs) has focused on the computational aspects of the problem at hand. However, as the number of components on a single chip and their performance continue to increase, the communication architecture plays a major role in the area, performance and energy consumption of the overall system. As a result, a shift from computation-based to communication-based design becomes mandatory. Towards this end, network-on-chip (NoC) communication architectures have emerged recently as a promising alternative to classical bus and point-to-point communication architectures. This book explores outstanding research problems related to modeling, analysis and optimization of NoC communication architectures. More precisely, we present novel design methodologies, software tools and FPGA prototypes to aid the design of application-specific NoCs.

  11. A New Approach to Optimal Cell Synthesis

    DEFF Research Database (Denmark)

    Madsen, Jan

    1989-01-01

    A set of algorithms is presented for optimal layout generation of CMOS complex gates. The algorithms are able to handle global physical constraints, such as pin placement, and to capture timing aspects. Results show that this novel approach provides better solutions in area and speed compared t...

  12. Parameters Optimization of Synergetic Recognition Approach

    Institute of Scientific and Technical Information of China (English)

    GAOJun; DONGHuoming; SHAOJing; ZHAOJing

    2005-01-01

    Synergetic pattern recognition is a novel and effective pattern recognition method with advantages in image recognition. Research has shown that the attention parameter λ and the parameters B and C directly influence the recognition results, but there is no general theory for controlling these parameters during recognition. We analyze these parameters abstractly in this paper and propose a novel parameter optimization method based on the simulated annealing (SA) algorithm. The SA algorithm has good optimization performance and is used to search for the globally optimal values of these parameters. Theoretical analysis and experimental results both show that the proposed parameter optimization method is effective and can fully improve the performance of the synergetic recognition approach, while its implementation is simple and fast.
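
    Plain simulated annealing is easy to state; the objective f below is a stand-in for the recognition-performance criterion over (λ, B, C), which the abstract does not specify:

```python
import math
import random

def anneal(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=3000, seed=2):
    """Simulated annealing: propose a random perturbation, always accept
    improvements, accept worsenings with probability exp(-delta/T), and
    geometrically cool the temperature T. Returns the best point seen."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = f(cand)
        # Metropolis acceptance rule
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest
```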

  13. Unit 1A: General Approach to the Teaching of Architecture

    DEFF Research Database (Denmark)

    Gammelgaard Nielsen, Anders

    2011-01-01

    An ideal course Ever since the founding of the Aarhus School of Architecture in 1965 there has been a tradition for lively discussion surrounding the content of the architecture program. The discussion has often been conducted from ideological or normative positions, with the tendency to st...... never brought us any closer to the truth about the ideal architectural paradigm. Experimentation and the continuing empirical construction of knowledge have, on the other hand, constantly developed the degree course....

  14. Constraint-Preserving Architecture Transformations: A Graph Rewriting Approach

    Institute of Scientific and Technical Information of China (English)

    YUAN Chun; CHEN Yiyun

    2001-01-01

    Architecture transformations are frequently performed during software design and maintenance. However, this activity is not well supported at a sufficiently abstract level. In this paper, the authors characterize architecture transformations using graph rewriting rules, where architectures are represented in graph notations. Architectures are usually required to satisfy certain constraints during evolution. Therefore a way is presented to construct the necessary and sufficient condition for a transformation to preserve a constraint. The condition can be verified before the application of the transformation. Validated transformations are guaranteed not to violate corresponding constraints whenever applied.

  15. A general approach for combining diverse rare variant association tests provides improved robustness across a wider range of genetic architectures.

    Science.gov (United States)

    Greco, Brian; Hainline, Allison; Arbet, Jaron; Grinde, Kelsey; Benitez, Alejandra; Tintle, Nathan

    2016-05-01

    The widespread availability of genome sequencing data made possible by way of next-generation technologies has yielded a flood of different gene-based rare variant association tests. Most of these tests have been published because they have superior power for particular genetic architectures. However, for applied researchers it is challenging to know which test to choose in practice when little is known a priori about genetic architecture. Recently, tests have been proposed which combine two particular individual tests (one burden and one variance components) to minimize power loss while improving robustness to a wider range of genetic architectures. In our analysis we propose an expansion of these approaches, yielding a general method that works for combining any number of individual tests. We demonstrate that running multiple different tests on the same data set and using a Bonferroni correction for multiple testing is never better than combining tests using our general method. We also find that using a test statistic that is highly robust to the inclusion of non-causal variants (joint-infinity) together with a previously published combined test (sequence kernel adaptive test-optimal) provides improved robustness to a wide range of genetic architectures and should be considered for use in practice. Software for this approach is supplied. We support the increased use of combined tests in practice - as well as further exploration of novel combined testing approaches using the general framework provided here - to maximize robustness of rare variant testing strategies against a wide range of genetic architectures.
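
    Two standard ingredients of such comparisons can be stated compactly. Neither is the paper's own combined statistic (the SKAT-O-style combinations are more involved); they only illustrate the contrast between Bonferroni-correcting separate tests and forming one genuinely combined statistic:

```python
import math

def bonferroni_combined(pvalues):
    """The multiple-testing baseline the paper argues against: run each
    test separately and Bonferroni-correct the smallest p-value."""
    return min(1.0, len(pvalues) * min(pvalues))

def fisher_combined_statistic(pvalues):
    """A classical combined test statistic (Fisher's method); under the
    null of k independent tests it is chi-squared with 2k degrees of
    freedom. Shown only to illustrate the idea of combining tests."""
    return -2.0 * sum(math.log(p) for p in pvalues)
```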

  16. SERVICE-ORIENTED APPROACH FOR OPTIMAL ROUTING OF INFORMATION FLOWS IN MULTISERVICE NETWORKS

    Directory of Open Access Journals (Sweden)

    N. I. Listopad

    2015-01-01

    Full Text Available A new approach for optimal routing of information flows is developed based on a service-oriented architecture. To find the shortest path, it is necessary to take into account QoS parameters such as delay, jitter, bandwidth, packet loss, and the cost of telecommunication resources.
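
    One simple way to realize QoS-aware routing of this kind (a sketch, not the paper's algorithm) is Dijkstra's algorithm restricted to links that meet a bandwidth requirement, minimizing end-to-end delay; the adjacency-list representation below is an assumption:

```python
import heapq

def qos_shortest_path(adj, src, dst, min_bw):
    """Dijkstra over links satisfying a bandwidth constraint, minimizing
    total delay. adj maps node -> list of (neighbor, delay, bandwidth)."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, delay, bw in adj.get(u, []):
            if bw < min_bw:
                continue  # link cannot carry the requested flow
            nd = d + delay
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None, float("inf")
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]
```

Jitter and resource cost could enter the same framework as extra edge attributes folded into the link weight.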

  17. Expected Utility Optimization - Calculus of Variations Approach

    CERN Document Server

    Tran, Khoa

    2007-01-01

    In this paper, I'll derive the Hamilton-Jacobi (HJ) equation for Merton's problem in Utility Optimization Theory using a Calculus of Variations (CoV) Approach. For stochastic control problems, Dynamic Programming (DP) has been used as a standard method. To the best of my knowledge, no one has used CoV for this problem. In addition, while the DP approach cannot guarantee that the optimum satisfies the HJ equation, the CoV approach does. Be aware that this is the first draft of this paper and many flaws might be introduced.

  18. Optimizing Instruction Scheduling and Register Allocation for Register-File-Connected Clustered VLIW Architectures

    Directory of Open Access Journals (Sweden)

    Haijing Tang

    2013-01-01

    Full Text Available Clustering has become a common trend in very long instruction word (VLIW) architectures to solve the problems of area, energy consumption, and design complexity. Register-file-connected clustered (RFCC) VLIW architecture uses the mechanism of a global register file to accomplish inter-cluster data communications, thus eliminating the performance and energy consumption penalty caused by explicit inter-cluster data move operations in traditional bus-connected clustered (BCC) VLIW architectures. However, the limited number of access ports to the global register file is an issue that must be well addressed; otherwise performance and energy consumption suffer. In this paper, we present compiler optimization techniques for an RFCC VLIW architecture called Lily, which is designed for encryption systems. These techniques aim at optimizing performance and energy consumption for the Lily architecture through appropriate manipulation of the code generation process to maintain better management of accesses to the global register file. All the techniques have been implemented and evaluated. The results show that our techniques can significantly reduce the performance and energy consumption penalty due to the access port limitation of the global register file.

  19. An FFT Performance Model for Optimizing General-Purpose Processor Architecture

    Institute of Scientific and Technical Information of China (English)

    Ling Li; Yun-Ji Chen; Dao-Fu Liu; Cheng Qian; Wei-Wu Hu

    2011-01-01

    General-purpose processor (GPP) is an important platform for fast Fourier transform (FFT), due to its flexibility, reliability and practicality. FFT is a representative application intensive in both computation and memory access; optimizing the FFT performance of a GPP also benefits the performances of many other applications. To facilitate the analysis of FFT, this paper proposes a theoretical model of the FFT processing. The model gives a tight lower bound on the runtime of FFT on a GPP, and guides the architecture optimization for GPPs as well. Based on the model, two theorems on the optimization of architecture parameters are deduced, which refer to the lower bounds of register number and memory bandwidth. Experimental results on different processor architectures (including Intel Core i7 and Godson-3B) validate the performance model. The above investigations were adopted in the development of Godson-3B, an industrial GPP. The optimization techniques deduced from our performance model improve the FFT performance by about 40%, while incurring only 0.8% additional area cost. Consequently, Godson-3B solves the 1024-point single-precision complex FFT in 0.368 μs with about 40 W power consumption, and has the highest performance-per-watt in complex FFT among processors as far as we know. This work could benefit the optimization of other GPPs as well.

  20. Stencil Computation Optimization and Auto-tuning on State-of-the-Art Multicore Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Datta, Kaushik; Murphy, Mark; Volkov, Vasily; Williams, Samuel; Carter, Jonathan; Oliker, Leonid; Patterson, David; Shalf, John; Yelick, Katherine

    2008-08-22

    Understanding the most efficient design and utilization of emerging multicore systems is one of the most challenging questions faced by the mainstream and scientific computing industries in several decades. Our work explores multicore stencil (nearest-neighbor) computations -- a class of algorithms at the heart of many structured grid codes, including PDE solvers. We develop a number of effective optimization strategies, and build an auto-tuning environment that searches over our optimizations and their parameters to minimize runtime, while maximizing performance portability. To evaluate the effectiveness of these strategies we explore the broadest set of multicore architectures in the current HPC literature, including the Intel Clovertown, AMD Barcelona, Sun Victoria Falls, IBM QS22 PowerXCell 8i, and NVIDIA GTX280. Overall, our auto-tuning optimization methodology results in the fastest multicore stencil performance to date. Finally, we present several key insights into the architectural trade-offs of emerging multicore designs and their implications on scientific algorithm development.

  1. Design health village with the approach of sustainable architecture ...

    African Journals Online (AJOL)

    Vol 8, No 3 (2016). ... parts of the country can benefit. For people to have this design can be found in different locations accomplished and the success and benefits enjoyed it. Keywords: Health; city health; smart; sustainability in architecture; architectural design ...

  2. A Pattern-based Approach Against Architectural Knowledge Vaporization

    NARCIS (Netherlands)

    Heesch, Uwe van; Avgeriou, Paris

    2009-01-01

    Architectural documentation is often considered a tedious and resource-intensive task that is usually skipped or performed inadequately. As a result, the rationale behind the architect's decisions gets lost. This problem is known as architectural knowledge vaporization. We propose a documentation

  3. A synthetic approach to multiobjective optimization

    CERN Document Server

    Lovison, Alberto

    2010-01-01

    We propose a strategy for approximating Pareto optimal sets based on the global analysis framework proposed by Smale (Dynamical systems, Academic Press, New York (1973) 531--544). We speak of a "synthetic" approach because the optimal set is natively approximated by means of a compound geometrical object, i.e., a simplicial complex, rather than by an unstructured scatter of individual optima. The method distinguishes the hierarchy between the singular set, the Pareto critical set and the stable Pareto critical set. Furthermore, a quadratic convergence result in a set-wise sense is proven and tested on numerical examples.
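
    For contrast with the simplicial approximation advocated in the abstract, the conventional "unstructured scatter" baseline is just a dominance filter over sampled objective vectors (minimization assumed; this sketch is not the paper's method):

```python
def pareto_front(points):
    """Return the non-dominated subset of a finite set of objective
    tuples under minimization: p is kept unless some q != p is no worse
    in every objective."""
    front = []
    for p in points:
        dominated = any(all(qi <= pi for qi, pi in zip(q, p)) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front
```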

  4. Applying a cloud computing approach to storage architectures for spacecraft

    Science.gov (United States)

    Baldor, Sue A.; Quiroz, Carlos; Wood, Paul

    As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been executed to solve both the problem of complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.

  5. A portable approach for PIC on emerging architectures

    Science.gov (United States)

    Decyk, Viktor

    2016-03-01

    A portable approach for designing Particle-in-Cell (PIC) algorithms on emerging exascale computers is based on the recognition that 3 distinct programming paradigms are needed. They are: low level vector (SIMD) processing, middle level shared memory parallel programming, and high level distributed memory programming. In addition, there is a memory hierarchy associated with each level. Such algorithms can be initially developed using vectorizing compilers, OpenMP, and MPI. This is the approach recommended by Intel for the Phi processor. These algorithms can then be translated and possibly specialized to other programming models and languages, as needed. For example, the vector processing and shared memory programming might be done with CUDA instead of vectorizing compilers and OpenMP, but generally the algorithm itself is not greatly changed. The UCLA PICKSC web site at http://www.idre.ucla.edu/ contains example open source skeleton codes (mini-apps) illustrating each of these three programming models, individually and in combination. Fortran 2003 now supports abstract data types, and design patterns can be used to support a variety of implementations within the same code base. Fortran 2003 also supports interoperability with C, so that implementations in C languages are also easy to use. Finally, main codes can be translated into dynamic environments such as Python, while still taking advantage of high-performing compiled languages. Parallel languages are still evolving, with interesting developments in Coarray Fortran, UPC, and OpenACC, among others, and these can also be supported within the same software architecture. Work supported by NSF and DOE Grants.
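
Of the three paradigms, the low-level vector (SIMD) layer is the easiest to illustrate. Below is a toy 1D electrostatic particle push written as whole-array operations, the form a vectorizing compiler (or NumPy) exploits; it is a sketch of the style, not one of the PICKSC skeleton codes, and all names are hypothetical.

```python
import numpy as np

def push_particles(x, v, E_grid, dx, qm, dt, L):
    """Leap-frog push of all particles at once, with no per-particle loop.

    The field at each particle is gathered from the nearest grid point;
    the entire update is expressed as array (vector) operations.
    """
    idx = np.floor(x / dx).astype(int) % len(E_grid)  # gather indices
    E = E_grid[idx]                                   # gather field
    v_new = v + qm * E * dt                           # accelerate
    x_new = (x + v_new * dt) % L                      # advance, periodic box
    return x_new, v_new

L, nx = 1.0, 16
dx = L / nx
E_grid = np.zeros(nx)  # field-free box: particles just stream
x = np.array([0.1, 0.5, 0.9])
v = np.array([0.2, -0.1, 0.0])
x1, v1 = push_particles(x, v, E_grid, dx, qm=1.0, dt=0.1, L=L)
```

The same loop body would be distributed over threads (OpenMP) at the middle level and over domain-decomposed particle subsets (MPI) at the top level.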

  6. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the data are normally distributed, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach accommodates both normal and non-normal data distributions. With this more faithful representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach produces a lower risk for each level of return, compared to the mean-variance approach.
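
The contrast between the two criteria comes down to the location estimate used for returns. A minimal sketch with hypothetical heavy-tailed returns and an equal-weight portfolio (the paper's actual Bursa Malaysia data and optimization model are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily returns for 4 assets; a Student-t sample stands in
# for the non-normal data the median-variance approach targets.
returns = rng.standard_t(df=3, size=(250, 4)) * 0.01

w = np.full(4, 0.25)           # equal-weight portfolio
port = returns @ w             # portfolio return series

mean_ret = port.mean()         # location estimate used by mean-variance
median_ret = np.median(port)   # location estimate used by median-variance
risk = port.var(ddof=1)        # shared dispersion measure

# The two objectives differ only in the location term:
mv_score = mean_ret - risk     # mean-variance style score
mdv_score = median_ret - risk  # median-variance style score
```

Under heavy tails the mean and median disagree, which is exactly why the two approaches can select different portfolios.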

  7. Green Architecture Approach on Mosque Design in Cipendawa Village, Cianjur, West Java, Indonesia

    Science.gov (United States)

    Purisari, Rahma; Safitri, Ratna; Permanasari, Eka; Hendola, Feby

    2017-06-01

    Climate change has become a global concern, as many buildings are constructed and maintained without considering environmental impacts. Global warming, floods and environmental damage have become everyday phenomena. Architecture plays a vital role in shifting the design paradigm toward green architecture that is sustainable and environmentally friendly. This research-based design investigates the implementation of a green architectural approach in the design of a mosque on waqf land in Cipendawa Village, Cianjur. The site is located on a hill where water catchment is required. The green architecture approach is expected to minimize negative environmental impacts and provide a solution for a low-budget mosque.

  8. An optimized, universal hardware-based adaptive correlation receiver architecture

    Science.gov (United States)

    Zhu, Zaidi; Suarez, Hernan; Zhang, Yan; Wang, Shang

    2014-05-01

    The traditional radar RF transceivers, similar to communication transceivers, have the basic elements such as baseband waveform processing, IF/RF up-down conversion, transmitter power circuits, receiver front-ends, and antennas, which are shown in the upper half of Figure 1. For modern radars with diversified and sophisticated waveforms, we frequently observe that the transceiver behaviors, especially nonlinear behaviors, depend on the waveform amplitudes, frequency contents and instantaneous phases. Usually, it is a troublesome process to tune an RF transceiver to its optimum when different waveforms are used. Another issue arises from interference caused by the waveforms: for example, the range side-lobe (RSL) of a waveform, once the signal passes through the entire transceiver chain, may be further increased due to distortions. This study is inspired by two existing solutions from the commercial communication industry, digital pre-distortion (DPD) and adaptive channel estimation and interference mitigation (AIM), combining these technologies into a single chip or board that can be inserted into the existing transceiver system. This device is named the RF Transceiver Optimizer (RTO). The lower half of Figure 1 shows the basic elements of the RTO. With the RTO, the digital baseband processing does not need to take into account the transceiver performance with diversified waveforms, such as the transmitter efficiency and chain distortion (and the intermodulation products caused by distortions). Neither does it need to be concerned with the pulse compression (or correlation receiver) process and the related mitigation. The focus is simply the information about the ground truth carried by the main peak of the correlation receiver outputs. The RTO can be considered an extension of the existing calibration process, with the benefits of being automatic, adaptive and universal. Currently, the main techniques to implement the RTO are the digital pre- or -post
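
The digital pre-distortion idea can be sketched with a memoryless polynomial model. Both the amplifier model and the indirect-learning scheme below are illustrative assumptions, far simpler than a real RTO: fit a polynomial that inverts the amplifier's compression, then apply it in front of the amplifier.

```python
import numpy as np

def pa(x):
    """Toy memoryless power-amplifier model with cubic compression."""
    return x - 0.15 * x**3

# Indirect learning: fit a polynomial mapping the PA output back to its
# input, then use that polynomial as a pre-distorter ahead of the PA.
x_train = np.linspace(0.0, 0.8, 200)
y_train = pa(x_train)
coeffs = np.polyfit(y_train, x_train, deg=5)

def predistort(x):
    return np.polyval(coeffs, x)

x_test = np.linspace(0.0, 0.8, 50)
err_raw = np.max(np.abs(pa(x_test) - x_test))              # PA alone
err_dpd = np.max(np.abs(pa(predistort(x_test)) - x_test))  # DPD + PA
```

With the pre-distorter in place, the cascade is far closer to the ideal linear response than the bare amplifier.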

  9. Novel Optimization Approach to Mixing Process Intensification

    Institute of Scientific and Technical Information of China (English)

    Guo Kai; Liu Botan; Li Qi; Liu Chunjiang

    2015-01-01

    An approach was presented to intensify the mixing process. Firstly, a novel concept, the dissipation of mass transfer ability (DMA) associated with convective mass transfer, was defined via an analogy to the heat-work conversion. Accordingly, the focus on mass transfer enhancement can be shifted to seek the extremum of the DMA of the system. To this end, an optimization principle was proposed. A mathematical model was then developed to formulate the optimization into a variational problem. Subsequently, the intensification of the mixing process for a gas mixture in a micro-tube was provided to demonstrate the proposed principle. In the demonstration example, an optimized velocity field was obtained in which the mixing ability was improved, i.e., the mixing process should be intensified by adjusting the velocity field in related equipment. Therefore, a specific procedure was provided to produce a mixer with geometric irregularities associated with an ideal velocity.

  10. Robust Portfolio Optimization using CAPM Approach

    Directory of Open Access Journals (Sweden)

    mohsen gharakhani

    2013-08-01

    Full Text Available In this paper, a new robust model of the multi-period portfolio problem has been developed. One of the key concerns in any asset allocation problem is how to cope with uncertainty about future returns. There are some approaches in the literature for this purpose, including stochastic programming and robust optimization. Applying these techniques to the multi-period portfolio problem may increase the problem size in a way that makes the resulting model intractable. In this paper, a novel approach has been proposed to formulate the multi-period portfolio problem as an uncertain linear program, assuming that asset returns follow the single-index factor model. The robust optimization technique has also been used to solve the problem. In order to evaluate the performance of the proposed model, a numerical example has been solved using simulated data.
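
The single-index factor model and the robust (worst-case) view can be sketched together. All numbers and the interval uncertainty set below are hypothetical; the paper's full multi-period linear program is not reproduced.

```python
import numpy as np

# Single-index factor model: r_i = alpha_i + beta_i * r_m + eps_i,
# so a portfolio w has expected return  w.alpha + (w.beta) * r_m.
alpha = np.array([0.001, 0.002, 0.0005])
beta = np.array([0.8, 1.2, 1.0])
w = np.array([0.5, 0.3, 0.2])

def worst_case_return(w, alpha, beta, rm_low, rm_high):
    """Worst-case portfolio return when the market return r_m is only
    known to lie in [rm_low, rm_high] (the robust-optimization view)."""
    a = w @ alpha
    b = w @ beta
    # The return is linear in r_m, so the minimum sits at an endpoint.
    return min(a + b * rm_low, a + b * rm_high)

wc = worst_case_return(w, alpha, beta, rm_low=-0.02, rm_high=0.03)
```

A robust optimizer would choose w to maximize this worst-case value; because the factor model makes the objective linear in a single uncertain parameter, the problem stays tractable even over many periods.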

  11. An Approach for Detecting Inconsistencies between Behavioral Models of the Software Architecture and the Code

    Energy Technology Data Exchange (ETDEWEB)

    Ciraci, Selim; Sozer, Hasan; Tekinerdogan, Bedir

    2012-07-16

    In practice, inconsistencies between architectural documentation and the code might arise due to improper implementation of the architecture or the separate, uncontrolled evolution of the code. Several approaches have been proposed to detect the inconsistencies between the architecture and the code, but these tend to be limited in capturing inconsistencies that might occur at runtime. We present a runtime verification approach for detecting inconsistencies between the dynamic behavior of the architecture and the actual code. The approach is supported by a set of tools that implement the architecture and the code patterns in Prolog, and support the automatic generation of runtime monitors for detecting inconsistencies. We illustrate the approach and the toolset for a Crisis Management System case study.
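
The paper's patterns are written in Prolog, but the core of a runtime monitor can be sketched in a few lines: compare the interactions observed in the running system against those the documented architecture permits. Component names echo the Crisis Management System case study but are otherwise invented.

```python
# Component interactions permitted by the documented architecture:
allowed = {
    ("Client", "Dispatcher"),
    ("Dispatcher", "CrisisService"),
    ("CrisisService", "Database"),
}

def check_trace(trace, allowed):
    """Return every observed call that the architecture does not permit."""
    return [call for call in trace if call not in allowed]

# Trace recorded from the running code; the Client -> Database call
# bypasses the Dispatcher and is an architectural inconsistency.
trace = [
    ("Client", "Dispatcher"),
    ("Dispatcher", "CrisisService"),
    ("Client", "Database"),
]
violations = check_trace(trace, allowed)
```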

  12. A New Architecture for Extending the Capabilities of the Copernicus Trajectory Optimization Program

    Science.gov (United States)

    Williams, Jacob

    2015-01-01

    This paper describes a new plugin architecture developed for the Copernicus spacecraft trajectory optimization program. Details of the software architecture design and development are described, as well as examples of how the capability can be used to extend the tool in order to expand the type of trajectory optimization problems that can be solved. The inclusion of plugins is a significant update to Copernicus, allowing user-created algorithms to be incorporated into the tool for the first time. The initial version of the new capability was released to the Copernicus user community with version 4.1 in March 2015, and additional refinements and improvements were included in the recent 4.2 release. It is proving quite useful, enabling Copernicus to solve problems that it was not able to solve before.
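
The essence of such a plugin architecture is a registry through which user-created algorithms are discovered and invoked at run time. A minimal sketch, with the registry API and the sample objective entirely hypothetical (Copernicus plugins are not written this way):

```python
class PluginRegistry:
    """Minimal plugin mechanism: user code registers callables by name,
    and the host tool looks them up and invokes them at run time."""
    def __init__(self):
        self._plugins = {}

    def register(self, name):
        def wrap(fn):
            self._plugins[name] = fn
            return fn
        return wrap

    def run(self, name, *args):
        return self._plugins[name](*args)

registry = PluginRegistry()

# A user-supplied objective the host optimizer does not provide natively:
@registry.register("fuel_penalty")
def fuel_penalty(dv_kms):
    return dv_kms ** 2

result = registry.run("fuel_penalty", 3.0)
```

The host tool only depends on the registry interface, so user algorithms can be added without modifying the tool itself, which is the point of the Copernicus extension.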

  13. A Bayesian approach to optimizing cryopreservation protocols

    Directory of Open Access Journals (Sweden)

    Sammy Sambu

    2015-06-01

    Full Text Available Cryopreservation is beset with the challenge of protocol alignment across a wide range of cell types and process variables. By taking a cross-sectional assessment of previously published cryopreservation data (sample means and standard errors as preliminary meta-data, a decision tree learning analysis (DTLA was performed to develop an understanding of target survival using optimized pruning methods based on different approaches. Briefly, a clear direction on the decision process for selection of methods was developed with key choices being the cooling rate, plunge temperature on the one hand and biomaterial choice, use of composites (sugars and proteins as additional constituents, loading procedure and cell location in 3D scaffolding on the other. Secondly, using machine learning and generalized approaches via the Naïve Bayes Classification (NBC method, these metadata were used to develop posterior probabilities for combinatorial approaches that were implicitly recorded in the metadata. These latter results showed that newer protocol choices developed using probability elicitation techniques can unearth improved protocols consistent with multiple unidimensionally-optimized physical protocols. In conclusion, this article proposes the use of DTLA models and subsequently NBC for the improvement of modern cryopreservation techniques through an integrative approach.
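
The Naïve Bayes step can be sketched on toy protocol metadata. The features, labels, and the two-value smoothing denominator below are illustrative simplifications, not the paper's meta-data:

```python
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Categorical Naive Bayes with Laplace (+1) smoothing.

    The (count + 1)/(n + 2) smoothing assumes two possible values per
    feature, which holds for this toy data set.
    """
    classes = Counter(labels)
    counts = defaultdict(Counter)  # (class, feature index) -> value counts
    for row, y in zip(rows, labels):
        for j, v in enumerate(row):
            counts[(y, j)][v] += 1

    def predict(row):
        best, best_p = None, -1.0
        for y, ny in classes.items():
            p = ny / len(labels)  # prior
            for j, v in enumerate(row):
                p *= (counts[(y, j)][v] + 1) / (ny + 2)  # likelihood
            if p > best_p:
                best, best_p = y, p
        return best

    return predict

# Toy metadata: (cooling rate, cryoprotectant) -> survival class
rows = [("slow", "DMSO"), ("slow", "glycerol"), ("fast", "DMSO"),
        ("fast", "glycerol"), ("slow", "DMSO"), ("fast", "DMSO")]
labels = ["high", "high", "low", "low", "high", "low"]
predict = train_nb(rows, labels)
guess = predict(("slow", "DMSO"))
```

The posterior probabilities computed this way are what allow previously unseen combinations of protocol choices to be ranked, which is the article's main point.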

  14. Time and Power Optimizations in FPGA-Based Architectures for Polyphase Channelizers

    DEFF Research Database (Denmark)

    Awan, Mehmood-Ur-Rehman; Harris, Fred; Koch, Peter

    2012-01-01

    This paper presents the time and power optimization considerations for Field Programmable Gate Array (FPGA) based architectures for a polyphase filter bank channelizer with an embedded square root shaping filter in its polyphase engine. This configuration performs two different re-sampling tasks......% slice register resources of a Xilinx Virtex-5 FPGA, operating at 400 and 480 MHz, and consuming 1.9 and 2.6 Watts of dynamic power, respectively....
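
The efficiency of a polyphase engine rests on the identity "filter then decimate equals the sum of per-phase subfilter outputs", which lets the arithmetic run at 1/M of the input rate. A NumPy sketch of that identity (an illustration of the principle, not the paper's FPGA design):

```python
import numpy as np

def polyphase_decimate(x, h, M):
    """Decimate-by-M FIR filtering via M polyphase branches.

    Branch p filters only every M-th input sample with subfilter h[p::M];
    summing the branches reproduces the decimated full convolution.
    """
    branches = []
    for p in range(M):
        hp = h[p::M]
        xp = np.concatenate([np.zeros(p), x])[::M]  # samples x[mM - p]
        branches.append(np.convolve(xp, hp))
    n = min(len(b) for b in branches)
    return sum(b[:n] for b in branches)

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
h = rng.standard_normal(12)  # stand-in prototype FIR
M = 4
y_poly = polyphase_decimate(x, h, M)
y_ref = np.convolve(x, h)[::M][:len(y_poly)]  # filter, then decimate
```

In the channelizer, the M branch outputs additionally feed an M-point FFT so that all M channels are produced at once.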

  15. Analysis and Optimization of Mixed-Criticality Applications on Partitioned Distributed Architectures

    DEFF Research Database (Denmark)

    Tamas-Selicean, Domitian; Marinescu, S. O.; Pop, Paul

    2012-01-01

    In this paper we are interested in mixed-criticality applications implemented using distributed heterogenous architectures, composed of processing elements (PEs) interconnected using the TTEthernet protocol. At the PE-level, we use partitioning, such that each application is allowed to run only...... Constrained (RC) messages, transmitted if there are no TT messages, and Best Effort (BE) messages. We assume that applications are scheduled using Static Cyclic Scheduling (SCS) or Fixed-Priority Preemptive Scheduling (FPS). We are interested in analysis and optimization methods and tools, which decide......-heuristic to solve this optimization problem, which has been evaluated using several benchmarks....

  16. APPLICATION OF ARCHITECTURE-BASED NEURAL NETWORKS IN MODELING AND PARAMETER OPTIMIZATION OF HYDRAULIC BUMPER

    Institute of Scientific and Technical Information of China (English)

    Yang Haiwei; Zhan Yongqi; Qiao Junwei; Shi Guanglin

    2003-01-01

    The dynamic working process of the 52SFZ-140-207B type of hydraulic bumper is analyzed. The modeling method using architecture-based neural networks is introduced. Using this modeling method, the dynamic model of the hydraulic bumper is established; based on this model, the structural parameters of the hydraulic bumper are optimized with a genetic algorithm. The result shows that the performance of the dynamic model is close to that of the hydraulic bumper, and the dynamic performance of the hydraulic bumper is improved through parameter optimization.
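
The genetic-algorithm step can be sketched generically: evolve a real-valued parameter vector that minimizes a performance index. The GA operators and the stand-in cost function below are illustrative assumptions; the bumper's neural-network model is not reproduced.

```python
import random

def optimize_ga(fitness, bounds, pop=30, gens=60, seed=42):
    """Tiny real-coded GA: tournament selection, blend crossover, mutation."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            a = min(rng.sample(P, 3), key=fitness)   # tournament parent 1
            b = min(rng.sample(P, 3), key=fitness)   # tournament parent 2
            child = [(u + v) / 2 for u, v in zip(a, b)]  # blend crossover
            j = rng.randrange(dim)                   # mutate one gene
            lo, hi = bounds[j]
            child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.1 * (hi - lo))))
            nxt.append(child)
        P = nxt
    return min(P, key=fitness)

# Stand-in performance index (real use: error of the bumper's dynamic model)
def cost(p):
    return (p[0] - 1.5) ** 2 + (p[1] - 0.3) ** 2

best = optimize_ga(cost, bounds=[(0.0, 3.0), (0.0, 1.0)])
```

In the paper the fitness would be evaluated through the trained neural-network model of the bumper rather than a closed-form function.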

  17. Optimization approaches for planning external beam radiotherapy

    Science.gov (United States)

    Gozbasi, Halil Ozan

    Cancer begins when cells grow out of control as a result of damage to their DNA. These abnormal cells can invade healthy tissue and form tumors in various parts of the body. Chemotherapy, immunotherapy, surgery and radiotherapy are the most common treatment methods for cancer. According to the American Cancer Society, about half of cancer patients receive a form of radiation therapy at some stage. External beam radiotherapy is delivered from outside the body and aimed at cancer cells to damage their DNA, making them unable to divide and reproduce. The beams travel through the body and may damage nearby healthy tissue unless carefully planned. Therefore, the goal of treatment plan optimization is to find the best system parameters to deliver sufficient dose to target structures while avoiding damage to healthy tissue. This thesis investigates optimization approaches for two external beam radiation therapy techniques: Intensity-Modulated Radiation Therapy (IMRT) and Volumetric-Modulated Arc Therapy (VMAT). We develop automated treatment planning technology for IMRT that produces several high-quality treatment plans satisfying provided clinical requirements in a single invocation and without human guidance. A novel bi-criteria, scoring-based beam selection algorithm is part of the planning system and produces better plans compared to those produced using a well-known scoring-based algorithm. Our algorithm is very efficient and finds the beam configuration at least ten times faster than an exact integer programming approach. Solution times range from 2 minutes to 15 minutes, which is clinically acceptable. With certain cancers, especially lung cancer, a patient's anatomy changes during treatment. These anatomical changes need to be considered in treatment planning. Fortunately, recent advances in imaging technology can provide multiple images of the treatment region taken at different points of the breathing cycle, and deformable image registration algorithms can
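
A scoring-based beam selection can be sketched as a greedy loop that re-scores each candidate angle by how much target coverage it adds beyond the beams already chosen. The scores, coverage sets, and the combined scoring rule are all hypothetical illustrations, not the thesis's bi-criteria algorithm.

```python
def select_beams(scores, coverage, k):
    """Greedily pick k beam angles, re-scoring each remaining candidate
    by its static score plus the new target coverage it contributes."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(
            (a for a in scores if a not in chosen),
            key=lambda a: scores[a] + len(coverage[a] - covered),
        )
        chosen.append(best)
        covered |= coverage[best]
    return chosen

# Hypothetical candidate angles: static score + target voxels each covers
scores = {0: 1.0, 90: 0.8, 180: 0.9, 270: 0.7}
coverage = {0: {1, 2}, 90: {3, 4}, 180: {1, 2}, 270: {5}}
beams = select_beams(scores, coverage, k=2)
```

The greedy structure is what makes scoring-based selection so much faster than solving an exact integer program over all beam subsets.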

  18. Software representation methodology for agile application development: An architectural approach

    Directory of Open Access Journals (Sweden)

    Alejandro Paolo Daza Corredor

    2016-06-01

    Full Text Available The generation of Web applications involves the execution of repetitive tasks: determining information structures, generating different types of components, and finally carrying out deployment and tuning tasks. In many applications of this type, the same components are generated from application to application. Current trends in software engineering such as MDE, MDA and MDD aim to automate the generation of applications by structuring a model and applying transformations until the application is obtained. This document proposes an architectural foundation that facilitates the generation of these applications, relying on model-driven architecture without ignoring the existence and relevance of the existing trends mentioned above.

  19. Architecture Design Approaches and Issues in Cross Layer Systems

    DEFF Research Database (Denmark)

    Cattoni, Andrea Fabio; Sørensen, Troels Bundgaard; Mogensen, Preben

    2012-01-01

    the traditional protocol stack design methodology. However, Cross Layer also carries a risk due to possibly unexpected and undesired effects. In this chapter we want to provide architecture designers with a set of tools and recommendations synthesized from an analysis of the state of art, but enriched......Wireless communications are a fast grown part of the telecommunication market. While new types of traffic and challenges related to the wireless medium are appearing, the methodologies for designing system architectures are substantially remaining the same. Under the increasing pressure of market...

  20. Optimizing IT Infrastructure by Virtualization Approach

    Science.gov (United States)

    Budiman, Thomas; Suroso, Jarot S.

    2017-04-01

    The goal of this paper is to find the best potential configuration that can be applied to a physical server without compromising service performance for the clients. Data were compiled by direct observation in the observed data center and then analyzed using a hermeneutic approach to understand the situation through the textual data gathered. The result is the best configuration for a physical server that contains several virtual machines logically separated by function. It can be concluded that one physical server machine can indeed be optimized using virtualization, so that it delivers the peak performance of the machine itself, with impact throughout the organization.

  1. Jordan algebraic approach to symmetric optimization

    NARCIS (Netherlands)

    Vieira, M.V.C.

    2007-01-01

    In this thesis we present a generalization of interior-point methods for linear optimization based on kernel functions to symmetric optimization. It covers the three standard cases of conic optimization: linear optimization, second-order cone optimization and semi-definite optimization. We give an

  2. The Integration of Interior Architecture Education with Digital Design Approaches

    Science.gov (United States)

    Yazicioglu, Deniz Ayse

    2011-01-01

    It is inevitable that as a result of progress in technology and the changes in the ways with which design is conceived, interior architecture schools should be updated according to these requirements and that new educational processes should be tried out. It is for this reason that the scope and aim of this study have been determined as being the…

  3. A Topology Optimisation Approach to Learning in Architectural Design

    DEFF Research Database (Denmark)

    Mullins, Michael; Kirkegaard, Poul Henning; Jessen, Rasmus Zederkof

    2005-01-01

    design, financial and a number of other pragmatic reasons. But in an artistic/architectural perspective these are not decisive. Analogical design qualities include a tectonic appreciation of the properties of materials, metaphoric interpretation of intention and considerations of context. The paper...

  4. Quantitative Architectural Analysis: A New Approach to Cortical Mapping

    Science.gov (United States)

    Schleicher, Axel; Morosan, Patricia; Amunts, Katrin; Zilles, Karl

    2009-01-01

    Results from functional imaging studies are often still interpreted using the classical architectonic brain maps of Brodmann and his successors. One obvious weakness in traditional, architectural mapping is the subjective nature of localizing borders between cortical areas by means of a purely visual, microscopical examination of histological…

  6. Lifelong Learning in Architectural Design Studio: The Learning Contract Approach

    Science.gov (United States)

    Hassanpour, B.; Che-Ani, A. I.; Usman, I. M. S.; Johar, S.; Tawil, N. M.

    2015-01-01

    Avant-garde educational systems are striving to find lifelong learning methods. Different fields and majors have tested a variety of proposed models and found varying difficulties and strengths. Architecture is one of the most critical areas of education because of its special characteristics, such as learning by doing and complicated evaluation…

  7. Application of knowledge-based approaches in software architecture : A systematic mapping study

    NARCIS (Netherlands)

    Li, Zengyang; Liang, Peng; Avgeriou, Paris

    2013-01-01

    Context: Knowledge management technologies have been employed across software engineering activities for more than two decades. Knowledge-based approaches can be used to facilitate software architecting activities (e.g., architectural evaluation). However, there is no comprehensive understanding on

  8. An Architectural Approach towards Innovative Renewable Energy Infrastructure in Kapisillit, Greenland

    DEFF Research Database (Denmark)

    Carruth, Susan; Krogh, Peter

    2014-01-01

    This paper claims that an architectural approach to the planning of renewable energy infrastructures - specifically architects’ ability to closely read and conceptualise the characteristic material practices of a place beyond the boundaries of a site - can underwrite development that is more...... practices indigenous to this region, and how such a vocabulary can be useful in developing culturally sustainable planning, before reflecting upon how such an architectural approach to infrastructural planning could be carried out in other peripheral regions, expanding the definition of sustainability...

  9. The optimization approach to regional environmental security

    Directory of Open Access Journals (Sweden)

    N.V. Kameneva

    2014-03-01

    Full Text Available The aim of the article. The aim of this paper is to work out a conceptual approach to the problem of securing environmental safety, protecting the population against unfavourable environmental impacts and ecological risks, and maximizing the economic effect of business activity, including its ecological part. The following purposes were set and achieved: definition of the notion of an optimal level of environmental safety; development of a more precise classification of the elements of the economic effect of ecological activity; outlining of possibilities to use economic tools in managing environmental safety at the regional level. The results of the analysis. The economically optimal level of environmental safety is the one that meets basic requirements concerning protection of the population against negative environmental impact and threats of such impact, and provides the maximum economic effect from ecological activity. The gradation of environmental safety levels is based on the assessment of levels of ecological risk. The final economic result of ecological activity may be positive or negative depending on the amounts of expenditure and effect. The purpose of optimization will therefore be either maximization of earnings or minimization of loss. In general, the expenditures related to ecological activity grow as the level of environmental safety rises. For most populated territories, the achievement of the maximum theoretically possible level of environmental safety is not only impractical at present but also undesirable in principle. Eliminating, or reducing to insignificant values, all ecological risks would actually require transforming a given territory into a natural reserve and consequently stopping all business activity, which would lead to very high economic losses. Conclusions and directions of further research. The environmental safety of a territory may have different levels, which are characterized, in particular, by the

  10. Optimization of a neural architecture for the direct control of a Boost converter

    Directory of Open Access Journals (Sweden)

    Fredy Hernán Martinez Sarmiento

    2012-06-01

    Full Text Available In research related to the control of DC/DC converters, artificial intelligence techniques bring great improvements in design and performance. However, some of these tools require trial-and-error strategies in the design, making it difficult to obtain an optimal structure. In this paper, we propose a direct controller based on an artificial neural network whose design has been optimized using bio-inspired search strategies, with the idea of simultaneously optimizing two different but important aspects of the network: the architecture and the connection weights. The controller was successfully applied to a boost-type converter. The results obtained allow us to observe the dynamic performance of the scheme; from the response time and the variation in the output voltage, it can be concluded that the criteria used for the control-loop design were appropriate.

  11. FPGA Implementation of an Area Optimized Architecture for 128 bit AES Algorithm

    Directory of Open Access Journals (Sweden)

    S Ramanathan

    2016-05-01

    Full Text Available This paper aims at the FPGA implementation of an area-optimized architecture for the 128-bit AES algorithm. Conventional designs use separate modules for 32-bit byte substitution and 128-bit byte substitution: the 32-bit byte substitution is used in round key generation, and the 128-bit byte substitution is used in the rounds. This report presents a modified architecture of the 128-bit byte substitution module that uses a single 32-bit byte substitution module to reduce area. The AES encryption and decryption algorithms were designed using Verilog HDL. The functionality of the modules was checked using ModelSim. The simulations were carried out in ModelSim and Quartus II. The algorithm was implemented on an FPGA and achieved a 2% reduction in total logic element utilization.
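
The sharing the paper exploits follows from the fact that both the 128-bit SubBytes used in the rounds and the 32-bit SubWord used in key expansion reduce to the same byte-level S-box, so hardware can instantiate the S-box core once. A Python sketch that builds the S-box from its standard definition (multiplicative inverse in GF(2^8) followed by the affine transform) and reuses it at both widths; the function names are ours, not the paper's:

```python
def gf_mul(a, b):
    """Multiply in GF(2^8) with the AES modulus x^8 + x^4 + x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a = (a << 1) ^ (0x11B if a & 0x80 else 0)
        b >>= 1
    return r

def sbox_byte(x):
    """AES S-box: GF(2^8) inverse (0 maps to 0), then the affine map."""
    inv = next((c for c in range(256) if gf_mul(x, c) == 1), 0)
    y = 0
    for i in range(8):
        bit = ((inv >> i) ^ (inv >> ((i + 4) % 8)) ^ (inv >> ((i + 5) % 8))
               ^ (inv >> ((i + 6) % 8)) ^ (inv >> ((i + 7) % 8))
               ^ (0x63 >> i)) & 1
        y |= bit << i
    return y

def sub_word(word):    # 32-bit substitution (key expansion)
    return [sbox_byte(b) for b in word]

def sub_bytes(state):  # 128-bit substitution (rounds): same byte core
    return [sbox_byte(b) for b in state]
```

In hardware, the area saving comes from time-multiplexing the 32-bit module over the 128-bit state instead of instantiating a second substitution block.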

  12. Integrating Environmental and Information Systems Management: An Enterprise Architecture Approach

    Science.gov (United States)

    Noran, Ovidiu

    Environmental responsibility is fast becoming an important aspect of strategic management as the reality of climate change settles in and relevant regulations are expected to tighten significantly in the near future. Many businesses react to this challenge by implementing environmental reporting and management systems. However, the environmental initiative is often not properly integrated in the overall business strategy and its information system (IS) and as a result the management does not have timely access to (appropriately aggregated) environmental information. This chapter argues for the benefit of integrating the environmental management (EM) project into the ongoing enterprise architecture (EA) initiative present in all successful companies. This is done by demonstrating how a reference architecture framework and a meta-methodology using EA artefacts can be used to co-design the EM system, the organisation and its IS in order to achieve a much needed synergy.

  13. Technical framework for Internetware:An architecture centric approach

    Institute of Scientific and Technical Information of China (English)

    YANG FuQing; L(ü) Jian; MEI Hong

    2008-01-01

    Being a new software paradigm evolved by the Internet, Internetware brings many challenges to the traditional software methods and techniques. Sponsored by the national basic research program (973), researchers in China have developed an architecture centric technical framework for the definition, incarnation and engineering of Internetware. First of all, a software model for Internetware is defined for what to be, including that Internetware entities should be packaged as components, behaving as agents, interoperating as services, collaborating in a structured and on demand manner, etc. Secondly, a middleware for Internetware is designed and implemented for how to be, including that Internetware entities are incarnated by runtime containers, structured collaborations are enabled by runtime software architecture, Internetware can be managed in a reflective and autonomic manner, etc. Thirdly, an engineering methodology for Internetware is proposed for how to do, including the way to develop Internetware entities and their collaborations by transforming and refining a set of software architectures which cover all the phases of software lifecycle, the way to identify and organize the disordered software assets by domain modeling, etc.

  14. Space Missions Trade Space Generation and Assessment Using JPL Rapid Mission Architecture (RMA) Team Approach

    Science.gov (United States)

    Moeller, Robert C.; Borden, Chester; Spilker, Thomas; Smythe, William; Lock, Robert

    2011-01-01

    The JPL Rapid Mission Architecture (RMA) capability is a novel collaborative team-based approach to generate new mission architectures, explore broad trade space options, and conduct architecture-level analyses. RMA studies address feasibility and identify best candidates to proceed to further detailed design studies. Development of RMA first began at JPL in 2007 and has evolved to address the need for rapid, effective early mission architectural development and trade space exploration as a precursor to traditional point design evaluations. The RMA approach integrates a small team of architecture-level experts (typically 6-10 people) to generate and explore a wide-ranging trade space of mission architectures driven by the mission science (or technology) objectives. Group brainstorming and trade space analyses are conducted at a higher level of assessment across multiple mission architectures and systems to enable rapid assessment of a set of diverse, innovative concepts. This paper describes the overall JPL RMA team, process, and high-level approach. Some illustrative results from previous JPL RMA studies are discussed.

  15. Characterization of real-world vibration sources with a view toward optimal energy harvesting architectures

    Science.gov (United States)

    Rantz, Robert; Roundy, Shad

    2016-04-01

    A tremendous amount of research has been performed on the design and analysis of vibration energy harvester architectures with the goal of optimizing power output; most studies assume idealized input vibrations without paying much attention to whether such idealizations are broadly representative of real sources. These "idealized input signals" are typically derived from the expected nature of the vibrations produced from a given source. Little work has been done on corroborating these expectations by virtue of compiling a comprehensive list of vibration signals organized by detailed classifications. Vibration data representing 333 signals were collected from the NiPS Laboratory "Real Vibration" database, processed, and categorized according to the source of the signal (e.g. animal, machine, etc.), the number of dominant frequencies, the nature of the dominant frequencies (e.g. stationary, band-limited noise, etc.), and other metrics. By categorizing signals in this way, the set of idealized vibration inputs commonly assumed for harvester input can be corroborated and refined, and heretofore overlooked vibration input types have motivation for investigation. An initial qualitative analysis of vibration signals has been undertaken with the goal of determining how often a standard linear oscillator based harvester is likely the optimal architecture, and how often a nonlinear harvester with a cubic stiffness function might provide improvement. Although preliminary, the analysis indicates that in at least 23% of cases, a linear harvester is likely optimal and in no more than 53% of cases would a nonlinear cubic stiffness based harvester provide improvement.
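
    The trade-off described above between a sharply tuned linear harvester and a cubic-stiffness (Duffing-type) alternative can be illustrated with a toy simulation. The sketch below is not from the paper: the oscillator parameters, the base-excitation amplitude, and the use of damper-dissipated power as a proxy for harvested power are all illustrative assumptions.

```python
import math

def avg_power(omega, k3=0.0, m=1.0, c=0.2, k=1.0, A=0.1,
              dt=1e-3, t_end=200.0):
    """Average power dissipated in the damper (a proxy for harvested
    power) for a base-excited oscillator with an optional cubic term:
        m*x'' + c*x' + k*x + k3*x**3 = -m*A*sin(omega*t)
    Integrated with semi-implicit Euler; the start-up transient is
    discarded before averaging."""
    x = v = 0.0
    power_sum, samples, t = 0.0, 0, 0.0
    while t < t_end:
        a = (-c * v - k * x - k3 * x ** 3) / m - A * math.sin(omega * t)
        v += a * dt
        x += v * dt
        if t > t_end / 2:  # average over the (quasi-)steady state only
            power_sum += c * v * v
            samples += 1
        t += dt
    return power_sum / samples

# A linear harvester (k3 = 0) is sharply tuned to its resonance.
p_res = avg_power(omega=1.0)   # driven at the linear resonance
p_off = avg_power(omega=1.5)   # detuned drive: output collapses
```

    At the linear resonance the tuned harvester extracts far more power than under a 50% detuned drive, which is why the statistics of the input signal (in particular, how stationary its dominant frequency is) matter so much when choosing a harvester architecture.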

  16. From Requirements to code: an Architecture-centric Approach for producing Quality Systems

    CERN Document Server

    Bucchiarone, Antonio; Muccini, Henry; Pelliccione, Patrizio

    2009-01-01

    When engineering complex and distributed software and hardware systems (increasingly used in many sectors, such as manufacturing, aerospace, transportation, communication, energy, and health-care), quality has become a big issue, since failures can have economic consequences and can also endanger human life. Model-based specifications of a component-based system permit explicit modeling of the structure and behaviour of components and their integration. In particular, Software Architectures (SA) have been advocated as an effective means to produce quality systems. In this chapter, by combining different technologies and tools for analysis and development, we propose an architecture-centric model-driven approach to validate required properties and to generate the system code. Functional requirements are elicited and used for identifying expected properties the architecture shall express. The architectural compliance to the properties is formally demonstrated, and the produced architectural model is used to automa...

  17. A Parallel Trade Study Architecture for Design Optimization of Complex Systems

    Science.gov (United States)

    Kim, Hongman; Mullins, James; Ragon, Scott; Soremekun, Grant; Sobieszczanski-Sobieski, Jaroslaw

    2005-01-01

    Design of a successful product requires evaluating many design alternatives in a limited design cycle time. This can be achieved through leveraging design space exploration tools and available computing resources on the network. This paper presents a parallel trade study architecture to integrate trade study clients and computing resources on a network using Web services. The parallel trade study solution is demonstrated to accelerate design of experiments, genetic algorithm optimization, and a cost as an independent variable (CAIV) study for a space system application.

  18. Optimizing root system architecture in biofuel crops for sustainable energy production and soil carbon sequestration.

    Science.gov (United States)

    To, Jennifer Pc; Zhu, Jinming; Benfey, Philip N; Elich, Tedd

    2010-09-08

    Root system architecture (RSA) describes the dynamic spatial configuration of different types and ages of roots in a plant, which allows adaptation to different environments. Modifications in RSA enhance agronomic traits in crops and have been implicated in soil organic carbon content. Together, these fundamental properties of RSA contribute to the net carbon balance and overall sustainability of biofuels. In this article, we will review recent data supporting carbon sequestration by biofuel crops, highlight current progress in studying RSA, and discuss future opportunities for optimizing RSA for biofuel production and soil carbon sequestration.

  19. Approaches Regarding Business Logic Modeling in Service Oriented Architecture

    Directory of Open Access Journals (Sweden)

    Alexandra Maria Ioana FLOREA

    2011-01-01

    Full Text Available As part of Service Oriented Computing (SOC), Service Oriented Architecture (SOA) is a technology that has been developing for almost a decade, and during this time many studies, papers and surveys referring to the advantages of projects using it have been published. In this article we discuss some ways of using SOA in the business environment, as a result of the need to reengineer internal business processes with the aim of moving forward towards providing and using standardized services and achieving enterprise interoperability.

  20. Planning intensive care unit design using computer simulation modeling: optimizing integration of clinical, operational, and architectural requirements.

    Science.gov (United States)

    OʼHara, Susan

    2014-01-01

    Nurses have increasingly been regarded as critical members of the planning team as architects recognize their knowledge and value. But the nurses' role as knowledge experts can be expanded to leading efforts to integrate the clinical, operational, and architectural expertise through simulation modeling. Simulation modeling allows for the optimal merge of multifactorial data to understand the current state of the intensive care unit and predict future states. Nurses can champion the simulation modeling process and reap the benefits of a cost-effective way to test new designs, processes, staffing models, and future programming trends prior to implementation. Simulation modeling is an evidence-based planning approach, a standard, for integrating the sciences with real client data, to offer solutions for improving patient care.

  1. A Powerful Optimization Approach for the Multi Channel Dissemination Networks

    CERN Document Server

    Al-Mogren, Ahmad Saad

    2010-01-01

    In the wireless environment, dissemination techniques may improve data access for the users. In this paper, we show a description of dissemination architecture that fits the overall telecommunication network. This architecture is designed to provide efficient data access and power saving for the mobile units. A concurrency control approach, MCD, is suggested for data consistency and conflict checking. A performance study shows that the power consumption, space overhead, and response time associated with MCD is far less than other previous techniques.

  2. The Enactive Approach to Architectural Experience: A Neurophysiological Perspective on Embodiment, Motivation, and Affordances.

    Science.gov (United States)

    Jelić, Andrea; Tieri, Gaetano; De Matteis, Federico; Babiloni, Fabio; Vecchiato, Giovanni

    2016-01-01

    Over the last few years, the efforts to reveal through neuroscientific lens the relations between the mind, body, and built environment have set a promising direction of using neuroscience for architecture. However, little has been achieved thus far in developing a systematic account that could be employed for interpreting current results and providing a consistent framework for subsequent scientific experimentation. In this context, the enactive perspective is proposed as a guide to studying architectural experience for two key reasons. Firstly, the enactive approach is specifically selected for its capacity to account for the profound connectedness of the organism and the world in an active and dynamic relationship, which is primarily shaped by the features of the body. Thus, particular emphasis is placed on the issues of embodiment and motivational factors as underlying constituents of the body-architecture interactions. Moreover, enactive understanding of the relational coupling between body schema and affordances of architectural spaces singles out the two-way bodily communication between architecture and its inhabitants, which can be also explored in immersive virtual reality settings. Secondly, enactivism has a strong foothold in phenomenological thinking that corresponds to the existing phenomenological discourse in architectural theory and qualitative design approaches. In this way, the enactive approach acknowledges the available common ground between neuroscience and architecture and thus allows a more accurate definition of investigative goals. Accordingly, the outlined model of architectural subject in enactive terms-that is, a model of a human being as embodied, enactive, and situated agent, is proposed as a basis of neuroscientific and phenomenological interpretation of architectural experience.

  3. The enactive approach to architectural experience: a neurophysiological perspective on embodiment, motivation, and affordances

    Directory of Open Access Journals (Sweden)

    Andrea Jelić

    2016-03-01

    Full Text Available Over the last few years, the efforts to reveal through neuroscientific lens the relations between the mind, body, and built environment have set a promising direction of using neuroscience for architecture. However, little has been achieved thus far in developing a systematic account that could be employed for interpreting current results and providing a consistent framework for subsequent scientific experimentation. In this context, the enactive perspective is proposed as a guide to studying architectural experience for two key reasons. Firstly, the enactive approach is specifically selected for its capacity to account for the profound connectedness of the organism and the world in an active and dynamic relationship, which is primarily shaped by the features of the body. Thus, particular emphasis is placed on the issues of embodiment and motivational factors as underlying constituents of the body-architecture interactions. Moreover, enactive understanding of the relational coupling between body schema and affordances of architectural spaces singles out the two-way bodily communication between architecture and its inhabitants, which can be also explored in immersive virtual reality settings. Secondly, enactivism has a strong foothold in phenomenological thinking that corresponds to the existing phenomenological discourse in architectural theory and qualitative design approaches. In this way, the enactive approach acknowledges the available common ground between neuroscience and architecture and thus allows a more accurate definition of investigative goals. Accordingly, the outlined model of architectural subject in enactive terms – that is, a model of a human being as embodied, enactive, and situated agent, is proposed as a basis of neuroscientific and phenomenological interpretation of architectural experience.

  4. Optimizing Oceanographic Big Data Browse and Visualization Response Times by Implementing the Lambda Architecture

    Science.gov (United States)

    Currier, R. D.; Howard, M.; Kirkpatrick, B. A.

    2016-02-01

    Visualizing large-scale data sets using standard web-based mapping tools can result in significant delays and response time issues for users. Load times for data sets comprised of millions of records can be in excess of thirty seconds when the data sets are served using traditional architectures and techniques. In this paper we demonstrate the efficiency gains created by utilizing the Lambda Architecture on a low velocity, high volume hypoxia-nutrient decision support system with 25M records. While traditionally employed on high velocity, high volume data we demonstrate significant improvements in data load times and the user browse experience on low velocity, high volume data. Optimizing query and visualization response times becomes increasingly important as data sets grow in size. Time series data from extended autonomous underwater vehicle deployments can exceed 500M records. Applying the Lambda Architecture to these data sets will allow users to browse, visualize and fuse data in a manner not possible using traditional methodologies.
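
    The batch/speed/serving split that defines the Lambda Architecture can be sketched in a few lines. The fragment below is a schematic illustration, not the authors' system: the station records, field names and the mean-nutrient query are invented for the example.

```python
from collections import defaultdict

# Hypothetical master dataset: (station_id, nutrient_ppm) observations.
batch_master = [("st1", 4.0), ("st1", 6.0), ("st2", 3.0)]

def build_batch_view(records):
    """Batch layer: recompute an aggregate view over the full master
    dataset (slow, but done offline on a schedule)."""
    acc = defaultdict(lambda: [0.0, 0])
    for station, ppm in records:
        acc[station][0] += ppm
        acc[station][1] += 1
    return {s: (total, n) for s, (total, n) in acc.items()}

class SpeedLayer:
    """Speed layer: absorbs records that arrived after the last batch run."""
    def __init__(self):
        self.view = defaultdict(lambda: [0.0, 0])
    def ingest(self, station, ppm):
        self.view[station][0] += ppm
        self.view[station][1] += 1

def query_mean(station, batch_view, speed):
    """Serving layer: merge both views so queries see fresh data
    without waiting for the next batch recompute."""
    b_total, b_n = batch_view.get(station, (0.0, 0))
    s_total, s_n = speed.view.get(station, [0.0, 0])
    n = b_n + s_n
    return (b_total + s_total) / n if n else None

batch_view = build_batch_view(batch_master)
speed = SpeedLayer()
speed.ingest("st1", 8.0)   # late-arriving observation
```

    The batch view is recomputed wholesale (simple to reason about), the speed layer holds only what arrived since the last recompute, and every query merges the two; this is what keeps browse and visualization response times low while the data stay fresh.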

  5. Power Optimized 7-Port Router Design with BIST Capability for 3D NoC Architecture

    Directory of Open Access Journals (Sweden)

    WASEEM Shaik Mohammed

    2017-05-01

    Full Text Available Three-Dimensional (3D) Network-on-Chip (NoC) architectures, being the combination of NoC and 3D die stacking chip integration technology, are more prone to faults that might occur with ageing, low supply voltage or physical damage of the chip. Being the heart of the NoC, the router is known for consuming significantly high power. With an on-chip testability feature for faults, the amount of power consumed increases linearly with the addition of logical components. This paper proposes a power optimized Built-In Self-Test (BIST) capable 7-port router architecture for 3D NoC by considering Marching Memory Through Type for buffer design and Cellular Automata Rule 45 for test sequence generation. The power report generated for the proposed design has been compared against that of the conventional router without BIST capability and widely used Linear Feedback Shift Register based BIST capable router architectures, and further a constructive discussion in this paper states the advantages of the proposed design in comparison to its analogues.

  6. VENDOR-INDEPENDENT DATABASE APPLICATIONS – AN ARCHITECTURAL APPROACH

    Directory of Open Access Journals (Sweden)

    Mircea Petrescu

    2004-12-01

    Full Text Available The ability to switch between different Database Management Systems (DBMS) is a requirement for many database applications, and one into which many researchers have invested effort. The main obstacle is the non-uniformity across vendors of the SQL language, the de-facto standard in the industry. Also, an application that maps between an object-oriented application and a relational database needs to be designed in a proper way, in order to achieve the required level of performance and maintainability. This paper presents, extends and further details the Vendor-Independent Database Application (VIDA) framework, initially proposed by us in [9]. The proposed VIDA architecture is described in depth, based on our practice and experience in this field. The design decisions are presented along with supporting arguments. The VIDA architecture presented here aims to fully decouple the application both from the query language and from the database access technology, providing a uniform view of the database. The problems encountered, both during design and implementation, are presented along with their solutions. Also, the available data access technologies and languages are surveyed and their conformity with a standard is debated.

  7. How to ensure sustainable interoperability in heterogeneous distributed systems through architectural approach.

    Science.gov (United States)

    Pape-Haugaard, Louise; Frank, Lars

    2011-01-01

    A major obstacle in ensuring ubiquitous information is the utilization of heterogeneous systems in eHealth. The objective in this paper is to illustrate how an architecture for distributed eHealth databases can be designed without lacking the characteristic features of traditional sustainable databases. The approach is firstly to explain traditional architecture in central and homogeneous distributed database computing, followed by a possible approach to use an architectural framework to obtain sustainability across disparate systems i.e. heterogeneous databases, concluded with a discussion. It is seen that through a method of using relaxed ACID properties on a service-oriented architecture it is possible to achieve data consistency which is essential when ensuring sustainable interoperability.

  8. From Smart-Eco Building to High-Performance Architecture: Optimization of Energy Consumption in Architecture of Developing Countries

    Science.gov (United States)

    Mahdavinejad, M.; Bitaab, N.

    2017-08-01

    The search for high-performance architecture and visions of future architecture have resulted in attempts to achieve energy efficient architecture and planning in different respects. Recent trends aimed at shaping the future legacy of architecture are based on the idea of innovative technologies for resource efficient buildings, performative design, bio-inspired technologies etc., while there are meaningful differences between the architecture of developed and developing countries. The significance of the issue can be understood when emerging cities are found to be interested in Dubaization and other related booming development doctrines. This paper analyzes how far developing countries have succeeded in achieving smart-eco buildings' goals and objectives. Emerging cities of West Asia are selected as the case studies of the paper. The results of the paper show that the concepts of high-performance architecture and smart-eco buildings differ in developing countries in comparison with developed countries. The paper identifies five essential issues for improving the future architecture of developing countries: 1- Integrated Strategies for Energy Efficiency, 2- Contextual Solutions, 3- Embedded and Initial Energy Assessment, 4- Staff and Occupancy Wellbeing, 5- Life-Cycle Monitoring.

  9. An Enhanced System Architecture for Optimized Demand Side Management in Smart Grid

    Directory of Open Access Journals (Sweden)

    Anzar Mahmood

    2016-04-01

    Full Text Available Demand Side Management (DSM) through optimization of home energy consumption in the smart grid environment is now one of the well-known research areas. Appliance scheduling has been done through many different algorithms to reduce peak load and, consequently, the Peak to Average Ratio (PAR). This paper presents a Comprehensive Home Energy Management Architecture (CHEMA) with integration of multiple appliance scheduling options and enhanced load categorization in a smart grid environment. The CHEMA model consists of six layers and has been modeled in Simulink with an embedded MATLAB code. A single Knapsack optimization technique is used for scheduling, and four different cases of cost reduction are modeled at the second layer of CHEMA. Fault identification and electricity theft control have also been added in CHEMA. Furthermore, carbon footprint calculations have been incorporated in order to make the users aware of environmental concerns. Simulation results prove the effectiveness of the proposed model.
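
    The abstract names a single knapsack formulation for appliance scheduling. As a hedged illustration of that idea (the appliance names, loads and utility scores are invented, and CHEMA's actual formulation may differ), a 0/1 knapsack picks which appliances may run in a peak window under a power cap:

```python
def schedule(appliances, power_cap):
    """0/1 knapsack over candidate appliance runs: pick the subset whose
    total load fits under power_cap (watts) and maximizes user utility.
    appliances: list of (name, load_w, utility) tuples."""
    # dp[c] = (best utility, chosen names) achievable with capacity c
    dp = [(0, [])] * (power_cap + 1)
    for name, load, utility in appliances:
        # iterate capacities downward so each appliance is used at most once
        for c in range(power_cap, load - 1, -1):
            cand = (dp[c - load][0] + utility, dp[c - load][1] + [name])
            if cand[0] > dp[c][0]:
                dp[c] = cand
    return dp[power_cap]

demo = [("washer", 500, 8), ("dishwasher", 1200, 10), ("heater", 1500, 9)]
best_utility, chosen = schedule(demo, 2000)
```

    Here the scheduler runs the washer and dishwasher (1700 W, utility 18) and defers the heater, since adding it would exceed the 2000 W cap, which is exactly the peak-load shaving that lowers PAR.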

  10. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    CERN Document Server

    Blazewicz, Marek; Koppelman, David M; Brandt, Steven R; Ciznicki, Milosz; Kierzynka, Michal; Löffler, Frank; Tao, Jian

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of va...

  11. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    Directory of Open Access Journals (Sweden)

    Marek Blazewicz

    2013-01-01

    Full Text Available Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  12. Multi-layered Security Approaches for a Modular Open Network Architecture-based Satellite

    OpenAIRE

    Shirley, Brandon; Young, Quinn; Wegner, Peter; Christensen, Jacob; Janicik, Jeffrey

    2014-01-01

    A growing trend in satellite development includes shortening the development lifecycle for hardware and software, cost reduction, and promoting reuse for future missions. Department of Defense (DoD) acquisition policies mandate system providers use Open Systems Architecture (OSA) where feasible. Modular Open Network Architecture (MONA) is a subset of OSA and paves the way to achieve cost reduction and reuse during a reduced development lifecycle. MONA approaches provide opportunities to enhan...

  13. Managing the Evolution of an Enterprise Architecture using a MAS-Product-Line Approach

    Science.gov (United States)

    Pena, Joaquin; Hinchey, Michael G.; Resinas, manuel; Sterritt, Roy; Rash, James L.

    2006-01-01

    We view an evolutionary system as being a software product line. The core architecture is the unchanging part of the system, and each version of the system may be viewed as a product from the product line. Each "product" may be described as the core architecture with some agent-based additions. The result is a multiagent system software product line. We describe such a Software Product Line-based approach using the MaCMAS Agent-Oriented methodology. The approach scales to enterprise architectures, as a multiagent system is an appropriate means of representing a changing enterprise architecture and the interaction between components in it.

  14. DESIGN AS GENERATOR (DAG): AN ARCHITECTURAL APPROACH FOR EMPOWERING COMMUNITY

    Directory of Open Access Journals (Sweden)

    KATOPPO Martin

    2014-12-01

    Full Text Available Design, as we imagine it, should act like a generator and become a light for others. This is the concept of Design as Generator (DAG). Its main goal is to bring out the essence of architecture, which is to enhance the quality of life not just for oneself but for others. For this purpose we created D-Apps (Dwelling Applications) and D-PaD and A-PaD (Dwelling and Area Prototype Applicative Design), which are explained through our experimental dwelling project 200 Rumah Besi and our future project taki (taman kita) Community Sustainable Park. We also developed DAG's particular empowering and participatory design methodology, inspired by PAR (Participatory Action Research) and DT (Design Thinking), through mixed methods research, the Sequential Embedded Experimental Model.

  15. Design Approach for Fault Tolerance in FPGA Architecture

    Directory of Open Access Journals (Sweden)

    Ms. Shweta S. Meshram

    2011-03-01

    Full Text Available Failures of nano-metric technologies owing to defects and shrinking process tolerances give rise to significant challenges for IC testing. In recent years the application space of reconfigurable devices has grown to include many platforms with a strong need for fault tolerance. While these systems frequently contain hardware redundancy to allow for continued operation in the presence of operational faults, the need to recover faulty hardware and return it to full functionality quickly and efficiently is great. In addition to providing functional density, FPGAs provide a level of fault tolerance generally not found in mask-programmable devices by including the capability to reconfigure around operational faults in the field. Reliability and process variability are serious issues for FPGAs in the future. With advancement in process technology, the feature size is decreasing, which leads to higher defect densities; more sophisticated techniques at increased costs are required to avoid defects. If nano-technology fabrication is applied, the yield may go down to zero, as avoiding defects during fabrication will not be a feasible option. Hence, future architectures have to be defect tolerant. In regular structures like FPGAs, redundancy is commonly used for fault tolerance. In this work we present a solution in which the configuration bit-stream of the FPGA is modified by a hardware controller that is present on the chip itself. The technique uses redundant devices to replace faulty devices and increases the yield.

  16. An Integrated Modeling Approach to Evaluate and Optimize Data Center Sustainability, Dependability and Cost

    Directory of Open Access Journals (Sweden)

    Gustavo Callou

    2014-01-01

    Full Text Available Data centers have evolved dramatically in recent years, due to the advent of social networking services, e-commerce and cloud computing. The conflicting requirements are the high availability levels demanded against the low sustainability impact and cost values. The approaches that evaluate and optimize these requirements are essential to support designers of data center architectures. Our work aims to propose an integrated approach to estimate and optimize these issues with the support of the developed environment, Mercury. Mercury is a tool for dependability, performance and energy flow evaluation. The tool supports reliability block diagrams (RBD), stochastic Petri nets (SPNs), continuous-time Markov chains (CTMC) and energy flow (EFM) models. The EFM verifies the energy flow on data center architectures, taking into account the energy efficiency and power capacity that each device can provide (assuming power systems) or extract (considering cooling components). The EFM also estimates the sustainability impact and cost issues of data center architectures. Additionally, a methodology is also considered to support the modeling, evaluation and optimization processes. Two case studies are presented to illustrate the adopted methodology on data center power systems.
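
    As a minimal illustration of the RBD side of such an evaluation (the availability figures and the power-path layout below are invented; Mercury itself offers far richer models), series blocks multiply their availabilities, while redundant parallel blocks fail only when every branch fails:

```python
def series(*avail):
    """All blocks must be up: steady-state availabilities multiply."""
    p = 1.0
    for a in avail:
        p *= a
    return p

def parallel(*avail):
    """Redundant blocks: the group fails only if every branch fails."""
    q = 1.0
    for a in avail:
        q *= 1.0 - a
    return 1.0 - q

# Hypothetical data center power path: utility feed in series with a
# redundant UPS pair, followed by a PDU.
ups_pair = parallel(0.99, 0.99)            # 1 - 0.01**2 = 0.9999
system = series(0.999, ups_pair, 0.998)    # feed -> UPS pair -> PDU
```

    Even this toy arithmetic shows the design trade the paper evaluates: the redundant UPS pair lifts its stage's availability well above either unit alone, at the cost of extra hardware and its sustainability and cost footprint.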

  17. A Modeling Approach based on UML/MARTE for GPU Architecture

    CERN Document Server

    Rodrigues, Antonio Wendell De Oliveira; Dekeyser, Jean-Luc

    2011-01-01

    Nowadays, High Performance Computing is part of the context of embedded systems. Graphics Processing Units (GPUs) are increasingly used to accelerate a large share of algorithms and applications. Over the past years, few efforts have been made to describe abstractions of applications in relation to their target architectures. Thus, when developers need to associate applications and GPUs, for example, they find it difficult and prefer to use the APIs of these architectures directly. This paper presents a metamodel extension for the MARTE profile and a model for GPU architectures. The main goal is to specify the task and data allocation in the memory hierarchy of these architectures. The results show that this approach will help to generate code for GPUs based on model transformations using Model Driven Engineering (MDE).

  18. Hardware Genetic Algorithm Optimization by Critical Path Analysis using a Custom VLSI Architecture

    Directory of Open Access Journals (Sweden)

    Farouk Smith

    2015-07-01

    Full Text Available This paper proposes a Virtual Field-Programmable Gate Array (V-FPGA) architecture that allows direct access to its configuration bits to facilitate hardware evolution, thereby allowing any combinational or sequential digital circuit to be realized. By using the V-FPGA, this paper investigates two possible ways of making evolutionary hardware systems more scalable: by optimizing the system's genetic algorithm (GA); and by decomposing the solution circuit into smaller, evolvable sub-circuits. GA optimization is done by: omitting a canonical GA's crossover operator (i.e. by using a 1+λ algorithm); applying evolution constraints; and optimizing the fitness function. A noteworthy contribution of this research is the in-depth analysis of the phenotypes' critical paths (CPs). Through analyzing the CPs, it has been shown that a great amount of insight can be gained into a phenotype's fitness. We found that as the number of columns in the Cartesian Genetic Programming array increases, the likelihood of an external output being placed in a column decreases. Furthermore, the number of used LEs per column also substantially decreases per added column. Finally, we demonstrated the evolution of a state-decomposed control circuit. It was shown that the evolution of each state's sub-circuit was possible, and the results suggest that modular evolution can be a successful tool when dealing with scalability.
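
    As a sketch of the crossover-free 1+λ scheme the paper adopts (here with λ = 4 on a OneMax toy fitness standing in for circuit evaluation; the bit-string length, mutation rate and generation count are illustrative, not the paper's):

```python
import random

def one_plus_lambda(fitness, n_bits=16, lam=4, generations=300, seed=1):
    """(1+lambda) evolutionary strategy: a single parent produces lam
    mutated offspring per generation, there is no crossover, and the
    best individual survives (ties included, allowing neutral drift)."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n_bits)]
    best = fitness(parent)
    for _ in range(generations):
        for _ in range(lam):
            # flip each bit independently with probability 1/n_bits
            child = [b ^ (rng.random() < 1.0 / n_bits) for b in parent]
            f = fitness(child)
            if f >= best:
                parent, best = child, f
    return parent, best

# OneMax (count of 1-bits) stands in for a circuit-fitness evaluation.
solution, score = one_plus_lambda(sum)
```

    Accepting equal-fitness children lets the search drift across fitness plateaus, which matters in hardware evolution, where many configuration-bit changes are fitness-neutral.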

  19. Greenhouse climate management : an optimal control approach

    NARCIS (Netherlands)

    Henten, van E.J.

    1994-01-01

    In this thesis a methodology is developed for the construction and analysis of an optimal greenhouse climate control system.

    In chapter 1, the results of a literature survey are presented and the research objectives are defined. In the literature, optimal greenhouse climate

  20. Greenhouse climate management: an optimal control approach.

    NARCIS (Netherlands)

    Henten, van E.J.

    1994-01-01

    In this thesis a methodology is developed for the construction and analysis of an optimal greenhouse climate control system.In chapter 1, the results of a literature survey are presented and the research objectives are defined. In the literature, optimal greenhouse climate management systems have be

  1. Pressure Vessel Optimization: A Fuzzy Approach

    Directory of Open Access Journals (Sweden)

    Mr. Uday V. Aswalekar

    2015-05-01

    Full Text Available Optimization has become a significant area of development, both in research and for practicing design engineers. In this work, for the optimization of the air receiver tank of a reciprocating air compressor, the sequential linear programming (SLP) method is used. The capacity of the tank is considered as an optimization constraint. The conventional dimensions of the tank are used as a reference for defining ranges. Inequality constraints, such as the different design stresses for different parts of the tank, are determined and suitable values are selected. An algorithm is prepared and conventional SLP is carried out in MATLAB with a C++ interface to get the optimized dimensions of the tank. The conventional SLP is then modified by introducing fuzzy heuristics, and the relevant algorithm is prepared. Fuzzy-based sequential linear programming (FSLP) is prepared and executed in MATLAB using the fuzzy toolbox and the optimization toolbox, and the corresponding dimensions are obtained. Comparing FSLP with SLP, it is observed that FSLP is easier to execute.

  2. Tai Chi Chuan Optimizes the Functional Organization of the Intrinsic Human Brain Architecture in Older Adults

    Directory of Open Access Journals (Sweden)

    Gao-Xia Wei

    2014-04-01

    Full Text Available Whether Tai Chi Chuan (TCC) can influence the intrinsic functional architecture of the human brain remains unclear. To examine TCC-associated changes in functional connectomes, resting-state functional magnetic resonance images were acquired from 40 older individuals, including 22 experienced TCC practitioners (experts) and 18 demographically matched TCC-naïve healthy controls, and their local functional homogeneities across the cortical mantle were compared. Compared to the controls, the TCC experts had significantly greater and more experience-dependent functional homogeneity in the right postcentral gyrus (PosCG) and less functional homogeneity in the left anterior cingulate cortex (ACC) and the right dorsal lateral prefrontal cortex (DLPFC). Increased functional homogeneity in the PosCG was correlated with TCC experience. Intriguingly, decreases in functional homogeneity (improved functional specialization) in the left ACC and increases in functional homogeneity (improved functional integration) in the right PosCG both predicted performance gains on attention network behavior tests. These findings provide evidence for the functional plasticity of the brain’s intrinsic architecture toward optimizing locally functional organization, with great implications for understanding the effects of TCC on cognition, behavior and health in the aging population.

  3. IT Confidentiality Risk Assessment for an Architecture-Based Approach

    NARCIS (Netherlands)

    Morali, A.; Zambon, Emmanuele; Etalle, Sandro; Overbeek, Paul

    2008-01-01

    Information systems require awareness of risks and a good understanding of vulnerabilities and their exploitations. In this paper, we propose a novel approach for the systematic assessment and analysis of confidentiality risks caused by disclosure of operational and functional information. The

  5. A hardware/software co-optimization approach for embedded software of MP3 decoder

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wei; LIU Peng; ZHAI Zhi-bo

    2007-01-01

    In order to improve the efficiency of embedded software running on a processor core, this paper proposes a hardware/software co-optimization approach for embedded software from the system point of view. The proposed stepwise methods aim at exploiting the structure and the resources of the processor as much as possible for software algorithm optimization. To achieve low memory usage and a low frequency requirement for the same performance, this co-optimization approach was used to optimize the embedded software of an MP3 decoder based on a 16-bit fixed-point DSP core. After optimization, decoding 128 kbps, 44.1 kHz stereo MP3 on the DSP evaluation platform requires 45.9 MIPS and 20.4 kbytes of memory, reductions of 65.6% in memory and 49.6% in frequency compared with the compiler-generated floating-point implementation. The experimental results demonstrate the effectiveness of the hardware/software co-optimization approach, which depends on both the algorithm and the architecture.

  6. Two-Channel Transparency-Optimized Control Architectures in Bilateral Teleoperation With Time Delay

    Science.gov (United States)

    Kim, Jonghyun; Chang, Pyung Hun; Park, Hyung-Soon

    2013-01-01

    This paper introduces transparency-optimized control architectures (TOCAs) using two communication channels. Two classes of two-channel TOCAs are found, thereby showing that two channels are sufficient to achieve transparency. These TOCAs achieve a greater level of transparency but poorer stability than three-channel and four-channel TOCAs. The stability of the two-channel TOCAs has been enhanced, while minimizing transparency degradation, by adding a filter; and a combined use of the two classes of two-channel TOCAs is proposed, which switches between them for transitions between free-space and constrained motion. The stability condition of the switched teleoperation system is derived for practical applications. In a one degree-of-freedom (DOF) experiment, the proposed two-channel TOCAs were shown to operate stably while achieving better transparency under time delay than the other TOCAs. PMID:23833548

  7. OPTIMIZATION OF NEURAL NETWORK ARCHITECTURE FOR BIOMECHANIC CLASSIFICATION TASKS WITH ELECTROMYOGRAM INPUTS

    Directory of Open Access Journals (Sweden)

    Alayna Kennedy

    2016-09-01

    Full Text Available Electromyogram signals (EMGs) contain valuable information that can be used in man-machine interfacing between human users and myoelectric prosthetic devices. However, EMG signals are complicated and difficult to analyze due to physiological noise and other issues. Computational intelligence and machine learning techniques, such as artificial neural networks (ANNs), serve as powerful tools for analyzing EMG signals and creating optimal myoelectric control schemes for prostheses. This research examines the performance of four different neural network architectures (feedforward, recurrent, counterpropagation, and self-organizing map) that were tasked with classifying walking speed when given EMG inputs from 14 different leg muscles. Experiments conducted on the data set suggest that self-organizing map neural networks are capable of classifying walking speed with greater than 99% accuracy.
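As a rough illustration of the winning architecture, here is a minimal 1-D self-organizing map trained on synthetic 14-channel "EMG feature" vectors for two walking speeds. The data, preprocessing, and network size are invented for this sketch and do not reproduce the study:

```python
import numpy as np

# Synthetic stand-ins for EMG features from 14 leg muscles at two walking speeds.
rng = np.random.default_rng(0)
slow = rng.normal(0.2, 0.05, size=(100, 14))
fast = rng.normal(0.8, 0.05, size=(100, 14))
X = np.vstack([slow, fast])

n_nodes, lr, sigma = 10, 0.5, 2.0
W = rng.random((n_nodes, 14))            # weight vector per map node
for epoch in range(50):
    for x in rng.permutation(X):
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))   # best matching unit
        h = np.exp(-((np.arange(n_nodes) - bmu) ** 2) / (2 * sigma**2))
        W += lr * h[:, None] * (x - W)   # pull BMU neighborhood toward the sample
    lr *= 0.95                           # decay learning rate and neighborhood
    sigma *= 0.95

# after training, the two speed classes should win disjoint regions of the map
bmus_slow = np.argmin(np.linalg.norm(W[None] - slow[:, None], axis=2), axis=1)
bmus_fast = np.argmin(np.linalg.norm(W[None] - fast[:, None], axis=2), axis=1)
```

In a classifier built this way, each node is labeled by the class of the samples it wins, and a new sample is assigned the label of its best matching unit.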

  8. The holistic architectural approach to integrating the healthcare record in the overall information system.

    Science.gov (United States)

    Ferrara, F M; Sottile, P A; Grimson, W

    1999-01-01

    The integration and evolution of existing systems represent one of the most urgent problems facing those responsible for healthcare information systems, if the needs of the whole organisation are to be addressed. The management of the healthcare record is one of the major requirements in this overall process; however, it is also necessary to ensure that the healthcare record and other healthcare information are integrated within the context of an overall healthcare information system. The CEN ENV 12967-1 'Healthcare Information Systems Architecture' standard defines a holistic architectural approach in which the various organisational, clinical, administrative and managerial requirements co-exist and cooperate, relying on a common heritage of information and services. This paper reviews the middleware-based approach adopted by CEN ENV 12967-1 and the specialisation necessary for the healthcare record based on CEN ENV 12265 'Electronic Healthcare Record Architecture'.

  9. Energy- and Performance-Driven NoC Communication Architecture Synthesis Using a Decomposition Approach

    CERN Document Server

    Ogras, Umit Y

    2011-01-01

    In this paper, we present a methodology for customized communication architecture synthesis that matches the communication requirements of the target application. This is an important problem, particularly for network-based implementations of complex applications. Our approach is based on using frequently encountered generic communication primitives as an alphabet capable of characterizing any given communication pattern. The proposed algorithm searches the entire design space for a solution that minimizes the total system energy consumption while satisfying the other design constraints. Compared to the standard mesh architecture, the customized architecture generated by the proposed approach shows about a 36% throughput increase and a 51% reduction in the energy required to encrypt 128 bits of data with a standard encryption algorithm.

  10. An automatic grid generation approach over free-form surface for architectural design

    Institute of Scientific and Technical Information of China (English)

    苏亮; 祝顺来; 肖南; 高博青

    2014-01-01

    An essential step in the realization of free-form surface structures is to create an efficient structural grid that satisfies not only the architectural aesthetics but also the structural performance. Employing the main stress trajectories as the representation of force flows on a free-form surface, an automatic grid generation approach is proposed for architectural design. The algorithm automatically plots the main stress trajectories on a 3D free-form surface and adopts a modified advancing-front meshing technique to generate the structural grid. Based on the proposed algorithm, an automatic grid generator named “St-Surmesh” is developed for the practical architectural design of free-form surface structures. The surface geometry of one of the Sun Valleys in the Expo Axis for Expo Shanghai 2010 is selected as a numerical example for validating the proposed approach. Comparative studies are performed to demonstrate how different structural grids affect the design of a free-form surface structure.

  11. Examining the Bernstein global optimization approach to optimal power flow problem

    Science.gov (United States)

    Patil, Bhagyesh V.; Sampath, L. P. M. I.; Krishnan, Ashok; Ling, K. V.; Gooi, H. B.

    2016-10-01

    This work addresses a nonconvex optimal power flow (OPF) problem. We introduce a 'new approach' in the context of the OPF problem based on Bernstein polynomials. The applicability of the approach is studied on a real-world 3-bus power system. The numerical results obtained with this new approach for the 3-bus system reveal a satisfactory improvement in terms of optimality, and are competitive with those of the generic global optimization solvers BARON and COUENNE.
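The key property that Bernstein-based global solvers exploit is range enclosure: on [0, 1], the smallest and largest Bernstein coefficients bound the polynomial's range, which gives cheap bounds for a branch-and-bound search. A minimal sketch on a toy polynomial of our own choosing (not the OPF formulation):

```python
from math import comb

def bernstein_coeffs(a):
    """Bernstein coefficients of p(x) = sum a[i] * x**i on [0, 1]."""
    n = len(a) - 1
    return [sum(comb(j, i) / comb(n, i) * a[i] for i in range(j + 1))
            for j in range(n + 1)]

# p(x) = x^2 - x has true range [-0.25, 0] on [0, 1]
b = bernstein_coeffs([0.0, -1.0, 1.0])
lo, hi = min(b), max(b)   # enclosure: [lo, hi] contains the true range
```

Here the enclosure is [-0.5, 0], which contains the true range [-0.25, 0]; subdividing the interval and recomputing coefficients tightens the bounds, which is what drives convergence in such solvers.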

  12. Non-technical approach to the challenges of ecological architecture: Learning from Van der Laan

    Directory of Open Access Journals (Sweden)

    María-Jesús González-Díaz

    2016-06-01

    Full Text Available Up to now, ecology has had a strong influence on the development of the technical and instrumental aspects of architecture, such as renewable and efficient use of resources and energy, CO2 emissions, air quality, water reuse, and some social and economic aspects. These concepts define the physical keys and codes of current 'sustainable' architecture, which is normally instrumental but rarely and insufficiently theorised. But is there not another way of bringing us closer to nature? We need a theoretical referent. This is where we place Van der Laan's thought: he considers that art completes nature, and he builds his theoretical discourse on this idea, trying to better understand many aspects of architecture. From a conceptual point of view, we find in his works a sense of timelessness and universality, special attention to the 'locus', and a strict sense of proportion and use of materials according to nature. Could these concepts complement our current sustainable architecture? How did Van der Laan apply the current codes of ecology in his architecture? His work may help us reach a theoretical, and not only physical, interpretation of nature. This paper develops this idea through a comparison of Van der Laan's thought and works with the current technical approach to 'sustainable' architecture.

  13. Optimal Reinsurance: A Risk Sharing Approach

    Directory of Open Access Journals (Sweden)

    Alejandro Balbas

    2013-08-01

    Full Text Available This paper proposes risk sharing strategies which allow insurers to cooperate and diversify non-systemic risk. We deal with both deviation measures and coherent risk measures and provide general mathematical methods that apply to optimizing them all. Numerical examples are given in order to illustrate how efficiently the non-systemic risk can be diversified and how effective the presented mathematical tools may be. It is also illustrated how the existence of huge disasters may lead to wrong solutions of our optimal risk sharing problem, in the sense that the involved risk measure could ignore a non-null probability of "global ruin" after the design of the optimal risk sharing strategy. To overcome this caveat, one can use more conservative risk measures. The stability in the large of the optimal sharing plan guarantees that the "global ruin" caveat may also be addressed and solved with the presented methods.

  14. Group Counseling Optimization: A Novel Approach

    Science.gov (United States)

    Eita, M. A.; Fahmy, M. M.

    A new population-based search algorithm, which we call Group Counseling Optimizer (GCO), is presented. It mimics the group counseling behavior of humans in solving their problems. The algorithm is tested using seven known benchmark functions: Sphere, Rosenbrock, Griewank, Rastrigin, Ackley, Weierstrass, and Schwefel functions. A comparison is made with the recently published comprehensive learning particle swarm optimizer (CLPSO). The results demonstrate the efficiency and robustness of the proposed algorithm.
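For reference, the benchmark functions named above have standard forms (Weierstrass and Schwefel are omitted here for brevity). All of the following attain a global minimum of 0, at the origin for Sphere, Rastrigin and Ackley, and at x = (1, ..., 1) for Rosenbrock:

```python
import numpy as np

# Standard forms of four of the benchmarks used to test optimizers such as GCO.
def sphere(x):     return float(np.sum(x**2))
def rosenbrock(x): return float(np.sum(100 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2))
def rastrigin(x):  return float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10))
def ackley(x):
    n = x.size
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

x0 = np.zeros(5)   # global minimizer of sphere, rastrigin, and ackley
```

Sphere is smooth and unimodal, Rosenbrock has a narrow curved valley, and Rastrigin and Ackley are highly multimodal, which is why this set probes both the exploitation and exploration behavior of a new optimizer.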

  15. On a Variational Approach to Optimization of Hybrid Mechanical Systems

    Directory of Open Access Journals (Sweden)

    Vadim Azhmyakov

    2010-01-01

    Full Text Available This paper deals with multiobjective optimization techniques for a class of hybrid optimal control problems in mechanical systems. We deal with general nonlinear hybrid control systems described by boundary-value problems associated with hybrid-type Euler-Lagrange or Hamilton equations. The variational structure of the corresponding solutions makes it possible to reduce the original “mechanical” problem to an auxiliary multiobjective programming reformulation. This approach motivates possible applications of theoretical and computational results from multiobjective optimization related to the original dynamical optimization problem. We consider first order optimality conditions for optimal control problems governed by hybrid mechanical systems and also discuss some conceptual algorithms.

  16. Optimization of residual stresses in MMC's through the variation of interfacial layer architectures and processing parameters

    Science.gov (United States)

    Pindera, Marek-Jerzy; Salzar, Robert S.

    1996-01-01

    The objective of this work was the development of efficient, user-friendly computer codes for optimizing fabrication-induced residual stresses in metal matrix composites through the use of homogeneous and heterogeneous interfacial layer architectures and processing parameter variation. To satisfy this objective, three major computer codes have been developed and delivered to the NASA-Lewis Research Center, namely MCCM, OPTCOMP, and OPTCOMP2. MCCM is a general research-oriented code for investigating the effects of microstructural details, such as layered morphology of SCS-6 SiC fibers and multiple homogeneous interfacial layers, on the inelastic response of unidirectional metal matrix composites under axisymmetric thermomechanical loading. OPTCOMP and OPTCOMP2 combine the major analysis module resident in MCCM with a commercially-available optimization algorithm and are driven by user-friendly interfaces which facilitate input data construction and program execution. OPTCOMP enables the user to identify those dimensions, geometric arrangements and thermoelastoplastic properties of homogeneous interfacial layers that minimize thermal residual stresses for the specified set of constraints. OPTCOMP2 provides additional flexibility in the residual stress optimization through variation of the processing parameters (time, temperature, external pressure and axial load) as well as the microstructure of the interfacial region which is treated as a heterogeneous two-phase composite. Overviews of the capabilities of these codes are provided together with a summary of results that addresses the effects of various microstructural details of the fiber, interfacial layers and matrix region on the optimization of fabrication-induced residual stresses in metal matrix composites.

  17. Franz Kafka in the Design Studio: A Hermeneutic-Phenomenological Approach to Architectural Design Education

    Science.gov (United States)

    Hisarligil, Beyhan Bolak

    2012-01-01

    This article demonstrates the outcomes of taking a hermeneutic phenomenological approach to architectural design and discusses the potentials for imaginative reasoning in design education. This study tests the use of literature as a verbal form of art and design and the contribution it can make to imaginative design processes--which are all too…

  19. A linear programming approach for optimal contrast-tone mapping.

    Science.gov (United States)

    Wu, Xiaolin

    2011-05-01

    This paper proposes a novel algorithmic approach to image enhancement via optimal contrast-tone mapping. In a fundamental departure from the current practice of histogram equalization for contrast enhancement, the proposed approach maximizes expected contrast gain subject to an upper limit on tone distortion and, optionally, to other constraints that suppress artifacts. The underlying contrast-tone optimization problem can be solved efficiently by linear programming. This new constrained optimization approach to image enhancement is general, and the user can add and fine-tune the constraints to achieve desired visual effects. Experimental results demonstrate clearly superior performance of the new approach over histogram equalization and its variants.
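The LP formulation can be illustrated on a toy 8-level image: choose per-level tone-step increments that maximize histogram-weighted contrast gain, with the output range and a per-step cap (a stand-in for the paper's tone-distortion limit) as constraints. The histogram and cap values below are invented for the sketch:

```python
import numpy as np
from scipy.optimize import linprog

# Toy 8-level image: s[j] is the tone step assigned to input level j.
L = 8
hist = np.array([30, 5, 5, 40, 5, 5, 5, 5], dtype=float)
p = hist / hist.sum()                      # probability of each input level

res = linprog(c=-p,                        # maximize sum p[j] * s[j] (contrast gain)
              A_ub=[np.ones(L)], b_ub=[L - 1],   # total output range budget
              bounds=[(0, 2)] * L)         # per-step cap limits tone distortion
s = res.x
tone_map = np.concatenate([[0], np.cumsum(s)])[:L]   # resulting transfer function
```

The LP gives large tone steps to populous levels (here levels 0 and 3) up to the cap, so contrast is spent where pixels actually are, unlike histogram equalization, which has no explicit distortion limit.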

  20. Optimal Reverse Carpooling Over Wireless Networks - A Distributed Optimization Approach

    CERN Document Server

    ParandehGheibi, Ali; Effros, Michelle; Medard, Muriel

    2010-01-01

    We focus on a particular form of network coding, reverse carpooling, in a wireless network where the potentially coded transmitted messages are to be decoded immediately upon reception. The network is fixed and known, and system performance is measured in terms of the number of wireless broadcasts required to meet multiple unicast demands. Motivated by the structure of the coding scheme, we formulate the problem as a linear program by introducing a flow variable for each triple of connected nodes, which gives a formulation polynomial in the number of nodes. Using dual decomposition and the projected subgradient method, we present a decentralized algorithm that obtains optimal routing schemes in the presence of coding opportunities. We show that the primal sub-problem can be expressed as a shortest-path problem on an edge-graph, and the proposed algorithm requires each node to exchange information only with its neighbors.
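The projected subgradient method mentioned above can be shown on a one-variable toy problem unrelated to the network model: minimize f(x) = |x - 3| over [0, 2], whose constrained optimum is x = 2.

```python
# Projected subgradient on a toy problem: minimize |x - 3| subject to x in [0, 2].
def project(x, lo=0.0, hi=2.0):
    return min(max(x, lo), hi)

x = 0.0
for k in range(1, 1001):
    g = -1.0 if x < 3 else 1.0          # a subgradient of |x - 3| at x
    x = project(x - (1.0 / k) * g)      # diminishing step, then project onto [0, 2]
# x converges to the constrained optimum 2.0
```

The diminishing step size 1/k is the standard choice guaranteeing convergence for subgradient methods; in the paper's setting, the same iteration runs on the dual variables, with each node updating only its local terms.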

  1. Assessment of limits of optimal use of cylindrical and multisphere pressure hulls in the architecture of submarines

    OpenAIRE

    Lutsenko, Andrii A.

    2015-01-01

    The article considers the urgent problem of selecting the optimal architectural and structural type of submarine pressure hull. The aim of the study is to define the limits of optimal use of cylindrical and multisphere pressure hulls in the construction of submarines. The analytical solution of this problem has been obtained by generating and solving the equations of masses and volumes with subsequent comparison of the results. The criterion of the comparison is the submerg...

  2. An Architecture-Centric Approach for Acquiring Software-Reliant Systems

    Science.gov (United States)

    2011-04-30

    An Architecture-Centric Approach for Acquiring Software-Reliant Systems. Lawrence Jones and John Bergey, Software Engineering Institute. Published: 30 April 2011. Mr. Bergey joined the SEI in 1993 as a Visiting Scientist and became a member of the...

  3. A Hierarchical Joint Optimized Bit—allocation Strategy for HDTV Encoder with Parallel Coding Architecture

    Institute of Scientific and Technical Information of China (English)

    XIONG Hongkai; YU Songyu; YE Wei

    2003-01-01

    Because real-time compression and high-speed digital processing circuitry are crucial for digital high-definition television (HDTV) coding, parallel processing has become a feasible scheme in most applications. This paper presents a novel bit-allocation strategy for an HDTV encoder system with a parallel architecture, in which the original HDTV picture is divided into six horizontal sub-pictures. It is shown that the MPEG-2 Test Model 5 (TM5) rate control scheme would not only give rise to inconsistent visual quality across sub-pictures in a composite HDTV frame, but also make the coding quality degrade abruptly and the buffer underflow at scene changes. How to allocate bit-rates among sub-pictures has been a great challenge in the literature. The proposed strategy performs hierarchical joint optimized bit-allocation using the sub-pictures' average complexity and average bits measures, and is moreover capable of alleviating serious picture-quality inconsistency at scene changes. The optimized bit-allocation and its complementary rate-adaptive procedures are formulated and described. In the paper, the proposed strategy is compared with independent coding, in which each sub-picture sequence is assigned the same proportion of the channel bandwidth. Experimental results demonstrate that the proposed scheme not only alleviates the boundary effect but also ensures consistent sub-picture quality.

  4. An Array-based Approach to Modelling Production Management System Architectures

    DEFF Research Database (Denmark)

    Falster, Peter

    2000-01-01

    Several proposals for a conceptual framework for production management architecture are briefly reviewed. It is suggested that an array-based approach and a classic engineering-economic model be used as tools for the conceptualisation of ideas. Traditional architectural design is usually based ... on geometrical thinking. Accordingly, elements from measurement and array theory are introduced, but in a more abstract way than traditionally connected with 3D geometry. The paper concludes that a small set of concepts, like products, resources, activities, events, stages, etc., can be synthesized and analogies...

  5. A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.

    Science.gov (United States)

    Yang, Shaofu; Liu, Qingshan; Wang, Jun

    2017-02-01

    This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and the given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for a discretized approximation of the Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.
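The objective-weighting step can be illustrated in closed form: minimizing the weighted sum w*f1 + (1 - w)*f2 of two convex objectives yields one Pareto-optimal point per weight. Here the objectives are our own toy choice and the minimizers are computed analytically, not with the paper's neural networks:

```python
import numpy as np

# Toy objectives f1(x) = (x - 1)^2 and f2(x) = (x + 1)^2; the Pareto set is [-1, 1].
# Setting d/dx [w*f1 + (1 - w)*f2] = 0 gives the minimizer x = 2w - 1.
weights = np.linspace(0.05, 0.95, 7)
pareto_x = [2 * w - 1 for w in weights]   # one Pareto-optimal point per weight
```

Sweeping the weight from 0 to 1 traces the Pareto front, which is exactly what the paper's switching-topology method does with a population of networks, one scalarized problem each.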

  6. Fine Surveying and 3D Modeling Approach for Wooden Ancient Architecture via Multiple Laser Scanner Integration

    Directory of Open Access Journals (Sweden)

    Qingwu Hu

    2016-03-01

    Full Text Available A multiple terrestrial laser scanner (TLS) integration approach is proposed for the fine surveying and 3D modeling of ancient wooden architecture in an ancient building complex in the Wudang Mountains, which is located in very steep surroundings that make it difficult to access. Three-level TLS with scalable measurement distance and accuracy is presented for data collection, to compensate for data missed because of mutual sheltering and scanning-view limitations. A multi-scale data fusion approach is proposed for the registration and filtering of the different-scale, separated 3D data. A point projection algorithm together with point cloud slicing tools is designed for fine surveying to generate all types of architectural maps, such as plan drawings, facade drawings, section drawings, and door and window drawings. The section drawings, together with the sliced point cloud, are presented for deformation analysis of the building structure. Along with the fine drawings and laser scanning data, 3D models of the ancient architecture components are built for digital management and visualization. Results show that the proposed approach can achieve fine surveying and 3D documentation of the ancient architecture to within 3 mm accuracy. In addition, the defects of scanning view and mutual sheltering can be overcome to obtain the complete and exact structure in detail.

  7. New approaches to the design optimization of hydrofoils

    Science.gov (United States)

    Beyhaghi, Pooriya; Meneghello, Gianluca; Bewley, Thomas

    2015-11-01

    Two simulation-based approaches are developed to optimize the design of hydrofoils for foiling catamarans, with the objective of maximizing efficiency (lift/drag). In the first, a simple hydrofoil model based on the vortex-lattice method is coupled with a hybrid global and local optimization algorithm that combines our Delaunay-based optimization algorithm with a Generalized Pattern Search. This optimization procedure is compared with the classical Newton-based optimization method. The accuracy of the vortex-lattice simulation of the optimized design is compared with a more accurate and computationally expensive LES-based simulation. In the second approach, the (expensive) LES model of the flow is used directly during the optimization. A modified Delaunay-based optimization algorithm is used to maximize the efficiency, which is measured as a finite-time-averaged approximation of the infinite-time-averaged value of an ergodic and stationary process. Since the optimization algorithm takes into account the uncertainty of this finite-time-averaged approximation, the total computational time of the optimization is significantly reduced. Results from the two approaches are compared.

  8. The Critical Approach of ‘Plug’ in Re-Conceptualisation of Architectural Program

    Directory of Open Access Journals (Sweden)

    Bahar Beslioglu

    2014-03-01

    Full Text Available This paper explores the issue of ‘plug’ in designing program within particular experimental studies in architecture. There was what could be called a critical ‘elaboration’ of program in Archigram’s 1964 ‘Plug-In’ City project, while intriguingly the critical approach taken in the 2001 ‘Un-Plug’ project of Francois Roche and Stephanie Lavaux hinted at a ‘re-evaluation’ of ‘plug’ in relation to program in architecture. The embedded criticism and creative programmatic suggestions in both projects will be discussed from the point of view of using the accumulated urbanscape as a potential for contemplation, a theme that has also been elaborated, both theoretically and experimentally, by the artist/architect Gordon Matta-Clark in his 1978 ‘Balloon Housing’ project. These experimentations with the ‘plug’ need to be discussed in order to understand their contributions as traceable sources to the program issue in contemporary architecture.

  9. Fault tolerant onboard packet switch architecture for communication satellites: Shared memory per beam approach

    Science.gov (United States)

    Shalkhauser, Mary JO; Quintana, Jorge A.; Soni, Nitin J.

    1994-01-01

    The NASA Lewis Research Center is developing a multichannel communication signal processing satellite (MCSPS) system which will provide low data rate, direct to user, commercial communications services. The focus of current space segment developments is a flexible, high-throughput, fault tolerant onboard information switching processor. This information switching processor (ISP) is a destination-directed packet switch which performs both space and time switching to route user information among numerous user ground terminals. Through both industry study contracts and in-house investigations, several packet switching architectures were examined. A contention-free approach, the shared memory per beam architecture, was selected for implementation. The shared memory per beam architecture, fault tolerance insertion, implementation, and demonstration plans are described.

  10. Relational Approach to XPath Query Optimization

    NARCIS (Netherlands)

    Verhage, R.

    2005-01-01

    This thesis contributes to the Pathfinder project which aims at creating an XQuery compiler on top of a relational database system. Currently, it is being implemented on top of MonetDB, a main memory database system. For optimization and portability purposes, Pathfinder first compiles an XQuery expr

  11. Optimization of nonlinear controller with an enhanced biogeography approach

    Directory of Open Access Journals (Sweden)

    Mohammed Salem

    2014-07-01

    Full Text Available This paper is dedicated to the optimization of nonlinear controllers based on an enhanced Biogeography-Based Optimization (BBO) approach. Indeed, BBO is combined with a predator-prey model in which several predators are used, with the introduction of a modified migration operator to increase diversification along the optimization process, so as to avoid local optima and reach the optimal solution quickly. The proposed approach is used to tune the gains of a PID controller for nonlinear systems. Simulations are carried out on a mass-spring-damper system and an inverted pendulum, and have given remarkable results when compared to a genetic algorithm and standard BBO.
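A plain BBO loop (rank-based migration plus mutation, without the paper's predator-prey extension) can be sketched as follows. The sphere cost stands in for a real closed-loop PID performance index such as ITAE, and all parameters are invented for the sketch:

```python
import random

random.seed(1)

def cost(h):
    # stand-in for a closed-loop performance index evaluated at gains h = [Kp, Ki, Kd]
    return sum(v * v for v in h)

pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
init_best = cost(min(pop, key=cost))

for gen in range(200):
    pop.sort(key=cost)                         # rank habitats by suitability
    n = len(pop)
    for i in range(1, n):                      # best habitat kept unchanged (elitism)
        mu = i / (n - 1)                       # immigration rate grows with (worse) rank
        for d in range(3):
            if random.random() < mu:           # immigrate a feature from a fitter habitat
                src = random.randrange(0, i)
                pop[i][d] = pop[src][d]
            if random.random() < 0.05:         # small mutation keeps diversity
                pop[i][d] += random.gauss(0, 0.1)

best = min(pop, key=cost)                      # best gain vector found
```

The paper's predator-prey extension adds repulsion from "predator" solutions to keep this migration step from collapsing the population prematurely; the skeleton above shows only the baseline it enhances.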

  12. Service Oriented Architecture for Business Dynamics: An Agent Based Business Modeling Approach

    Directory of Open Access Journals (Sweden)

    O. P. Rishi

    2009-01-01

    Full Text Available In today's rapidly changing environment, industries are interested in executing business functions that have scope in multiple applications. Business dynamics and technological innovations have led organizations to comply with a disparate mix of operating systems, applications and databases. This makes it difficult, time-consuming and costly for IT departments to deliver new applications that integrate heterogeneous technologies. It demands high interoperability and more flexible and adaptive business process management. The inclination is to have systems assembled from a loosely coupled collection of Web services, which are universal and integrated. This technical area appears to have scope where agent technology can be exploited with significant advantages. Service Oriented Architecture (SOA) is a decomposable architecture, with an associated set of development and IT management disciplines, composed of loosely coupled services communicating via pre-established protocols; these services can be assembled ad hoc to form customized applications that address a wide variety of business requirements. In the present paper, we propose a conceptual framework for agent-based Service Oriented Architecture, in which we try to integrate SOA with agent technology and other tactical technologies such as web services, business workflow services, business meta-rules, search optimization of services, and semantic Web technology for business service mappings.

  13. A Robust Method to Integrate End-to-End Mission Architecture Optimization Tools

    Science.gov (United States)

    Lugo, Rafael; Litton, Daniel; Qu, Min; Shidner, Jeremy; Powell, Richard

    2016-01-01

    End-to-end mission simulations include multiple phases of flight. For example, an end-to-end Mars mission simulation may include launch from Earth, interplanetary transit to Mars, and entry, descent and landing. Each phase of flight is optimized to meet specified constraints, and phases often depend on and impact subsequent phases. The design and optimization tools and methodologies used to combine different aspects of the end-to-end framework, and their impact on mission planning, are presented. This work focuses on a robust implementation of a Multidisciplinary Design Analysis and Optimization (MDAO) method that offers the flexibility to quickly adapt to changing mission design requirements. Different simulations tailored to the liftoff, ascent, and atmospheric entry phases of a trajectory are integrated and optimized in the MDAO program Isight, which provides the user a graphical interface to link simulation inputs and outputs. This approach provides many advantages to mission planners, as it is easily adapted to different mission scenarios and can improve understanding of the integrated system performance within a particular mission configuration. A Mars direct entry mission using the Space Launch System (SLS) is presented as a generic end-to-end case study. For the given launch period, SLS launch performance is traded for improved orbit geometry alignment, resulting in an optimized net payload that is comparable to that in the SLS Mission Planner's Guide.

  14. Design Buildings Optimally: A Lifecycle Assessment Approach

    KAUST Repository

    Hosny, Ossama

    2013-01-01

    This paper structures a generic framework to support optimum design for multiple buildings in a desert environment. The framework targets an environmentally friendly design with minimum lifecycle cost, using Genetic Algorithms (GAs). The GAs function through a set of success measures that evaluates the design, formulates a proper objective, and reflects possible tangible/intangible constraints. The framework optimizes the design and categorizes it under a certain environmental category at minimum Life Cycle Cost (LCC). It consists of three main modules: (1) a custom Building Information Model (BIM) for desert buildings with a compatibility checker as a central interactive database; (2) a system evaluator module to evaluate the proposed success measures for the design; and (3) a GA optimization module to ensure optimum design. The framework functions at three levels: building components, the integrated building, and multiple buildings. At the component level, the design team selects components in a designed sequence to ensure compatibility among the various components, while at the building level, the team can relatively locate and orient each individual building. Finally, at the multi-building (compound) level, the whole design can be evaluated using success measures of natural light, site capacity, shading impact on natural lighting, thermal change, visual access and energy saving. Through genetic algorithms, the framework optimizes the design by determining proper types of building components and relative building locations and orientations that ensure categorizing the design under a specific category, or meeting certain preferences, at minimum lifecycle cost.

  15. A CONSTRAINED OPTIMIZATION APPROACH FOR LCP

    Institute of Scientific and Technical Information of China (English)

    Ju-liang Zhang; Jian Chen; Xin-jian Zhuo

    2004-01-01

    In this paper, the LCP is converted into an equivalent nonsmooth nonlinear equation system H(x, y) = 0 using the well-known Fischer-Burmeister NCP function. Some equations in H(x, y) = 0 are nonsmooth and nonlinear, and hence difficult to solve, while the others are linear and easy to solve. The nonlinear equation system H(x, y) = 0 is then converted into an optimization problem with linear equality constraints. We study the conditions under which the Kuhn-Tucker points of this optimization problem are solutions of the original LCP, and propose a method to solve the optimization problem. In this algorithm, the search direction is obtained by solving a strictly convex program at each iterate. However, our algorithm is essentially different from the traditional SQP method. The global convergence of the method is proved under mild conditions. In addition, the algorithm is shown to converge superlinearly under the conditions that M is a P0 matrix and the limit point is a strictly complementary solution of the LCP. Preliminary numerical experiments with this method are reported.
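This record's construction rests on the Fischer-Burmeister NCP function φ(a, b) = a + b − √(a² + b²), whose zeros are exactly the complementarity pairs. A minimal sketch (hypothetical helper names; the record's optimization reformulation and the algorithm itself are not reproduced):

```python
import math

def fischer_burmeister(a, b):
    """Fischer-Burmeister NCP function: phi(a, b) = a + b - sqrt(a^2 + b^2).
    phi(a, b) = 0 exactly when a >= 0, b >= 0 and a*b = 0."""
    return a + b - math.sqrt(a * a + b * b)

# The LCP "find x >= 0 with y = M x + q >= 0 and x^T y = 0" becomes the
# equation system phi(x_i, y_i) = 0 for every component i.
def lcp_residual(M, q, x):
    """Componentwise FB residual of the LCP defined by (M, q) at x."""
    n = len(q)
    y = [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
    return [fischer_burmeister(x[i], y[i]) for i in range(n)]
```

For M = I and q = (−1, 2), the point x = (1, 0) gives y = (0, 2), and every FB residual vanishes, confirming x solves that LCP.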

  16. Molecular Approaches for Optimizing Vitamin D Supplementation.

    Science.gov (United States)

    Carlberg, Carsten

    2016-01-01

    Vitamin D can be synthesized endogenously within UV-B-exposed human skin. However, avoidance of sufficient sun exposure through predominantly indoor activities, textile coverage, dark skin at higher latitudes, and seasonal variations makes the intake of vitamin D-fortified food or direct vitamin D supplementation necessary. Through its biologically most active metabolite, 1α,25-dihydroxyvitamin D, and the transcription factor vitamin D receptor, vitamin D has a direct effect on the epigenome and transcriptome of many human tissues and cell types. Differing interpretations of results from observational studies with vitamin D have led to some dispute in the field about the desired optimal vitamin D level and the recommended daily supplementation. This chapter provides background on the epigenome- and transcriptome-wide functions of vitamin D and outlines how this insight may be used for determining the optimal vitamin D status of human individuals. These reflections lead to the concept of a personal vitamin D index, which may be a better guideline for optimized vitamin D supplementation than population-based recommendations. © 2016 Elsevier Inc. All rights reserved.

  17. Optimal Architecture for an Asteroid Mining Mission: System Components and Project Execution

    Science.gov (United States)

    Erickson, Ken R.

    2007-01-01

    Near-Earth asteroids (NEAs) offer potential profits both in the near term (mining platinum group metals, or PGMs) and the long term (harvesting water, volatiles and ore to provide the economic backbone for lunar, Martian and other space exploration). The raw materials abundant in NEAs include: water and other volatiles for life support and power; nickel, iron and other metals for construction and manufacturing; carbonaceous compounds for ceramics and building materials; and PGMs for fuel cells and numerous applications on Earth. An efficient, flexible and cost-effective mission utilizing adaptable and resilient robotic components is essential to successfully establish NEA mining as a commercial enterprise. This paper presents an optimized architecture, detailing necessary engineering components, task integration between them, and methods to address the more likely problems encountered. Candidate NEAs are suggested that could offer optimal PGM resources and that have already been evaluated by rendezvous mapping. Mission delta-V and propellant selection are based upon launch from and return to LEO. On-site equipment includes AI-guided robotics, with human telecontrol from Earth to minimize risk and cost. A command-control-communication (CCC) unit orbits the NEA and coordinates four small lander-miners (LMs), each of which acquires and processes regolith. Two LMs are specialized for water and volatiles, two for PGMs and Ni-Fe ore. A solar-powered unit hydrolyzes water from the NEA into H2 and O2 for use as propellant, and a solar-thermal propulsion unit returns additional water, PGMs and Ni-Fe ore to LEO. The proposed architecture emphasizes flexibility, redundancy of critical units, and fail-safes to maximize the probability of mission success.
Potential problems addressed include: failure of components, varying surface conditions and mineralogic content, fluctuating solar exposure (due to asteroid rotation) and its impact on solar power units, and extreme temperature changes

  18. Scaling Watershed Models: Modern Approaches to Science Computation with MapReduce, Parallelization, and Cloud Optimization

    Science.gov (United States)

    Environmental models are products of the computer architecture and software tools available at the time of development. Scientifically sound algorithms may persist in their original state even as system architectures and software development approaches evolve and progress. Dating...

  20. The constraints satisfaction problem approach in the design of an architectural functional layout

    Science.gov (United States)

    Zawidzki, Machi; Tateyama, Kazuyoshi; Nishikawa, Ikuko

    2011-09-01

    A design support system with a new strategy for finding optimal functional configurations of rooms for architectural layouts is presented. A set of configurations satisfying given constraints is generated and ranked according to multiple objectives. The method can be applied to problems in architectural practice, urban design or graphic design, wherever the allocation of related geometrical elements of known shape is optimized. Although the methodology is shown using simplified examples (a single-story residential building with two apartments, each having two rooms), the results resemble realistic functional layouts. One example of a practical-size problem, a layout of three apartments with a total of 20 rooms, is demonstrated, where the generated solution can be used as the basis for a realistic architectural blueprint. The discretization of the design space is discussed, followed by the application of a backtrack search algorithm used to generate a set of potentially 'good' room configurations. The solutions are then classified by a machine learning method (a feed-forward network, FFN) as 'proper' or 'improper' according to internal communication criteria. Examples of interactively ranking the 'proper' configurations according to multiple criteria and choosing 'the best' ones are presented. The proposed framework is general and universal: the criteria, parameters and weights can be defined individually by the user, and the search algorithm can be adjusted to a specific problem.
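The backtrack search over a discretized design space that this record describes can be sketched as a toy placement routine (hypothetical names; the actual system's communication constraints and FFN classification are omitted):

```python
def place_rooms(room_sizes, W, H):
    """Backtracking search: place axis-aligned rooms (w, h) on a W x H grid
    with no overlap; returns a list of (x, y) positions or None."""
    occupied = [[False] * W for _ in range(H)]

    def fits(x, y, w, h):
        if x + w > W or y + h > H:
            return False
        return all(not occupied[j][i]
                   for j in range(y, y + h) for i in range(x, x + w))

    def mark(x, y, w, h, value):
        for j in range(y, y + h):
            for i in range(x, x + w):
                occupied[j][i] = value

    def search(k, placed):
        if k == len(room_sizes):
            return placed                      # all rooms placed
        w, h = room_sizes[k]
        for y in range(H):
            for x in range(W):
                if fits(x, y, w, h):
                    mark(x, y, w, h, True)
                    result = search(k + 1, placed + [(x, y)])
                    if result is not None:
                        return result
                    mark(x, y, w, h, False)    # backtrack
        return None

    return search(0, [])
```

In the full system each placement found this way would then be filtered and ranked by the additional constraints and objectives.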

  1. A Principled Approach to the Specification of System Architectures for Space Missions

    Science.gov (United States)

    McKelvin, Mark L. Jr.; Castillo, Robert; Bonanne, Kevin; Bonnici, Michael; Cox, Brian; Gibson, Corrina; Leon, Juan P.; Gomez-Mustafa, Jose; Jimenez, Alejandro; Madni, Azad

    2015-01-01

    Modern space systems are increasing in complexity and scale at an unprecedented pace. Consequently, innovative methods, processes, and tools are needed to cope with the increasing complexity of architecting these systems. A key challenge in practice is the ability to scale the processes, methods, and tools used to architect complex space systems. Traditionally, the process for specifying space system architectures has largely relied on capturing the architecture in informal descriptions that are often embedded within loosely coupled design documents and domain expertise. Such informal descriptions often lead to misunderstandings between design teams, ambiguous specifications, difficulty in maintaining consistency as the architecture evolves throughout the system development life cycle, and costly design iterations. Traditional methods are therefore becoming increasingly inefficient in coping with ever-increasing system complexity. We apply the principles of component-based design and platform-based design to the development of the system architecture for a practical space system to demonstrate the feasibility of our approach using SysML. Our results show that we are able to apply a systematic design method to manage system complexity, enabling effective data management, semantic coherence, and traceability across different levels of abstraction in the design chain. Just as important, our approach enables interoperability among heterogeneous tools in a concurrent-engineering, model-based design environment.

  2. Optimized Load Shedding Approach for Grid-Connected DC Microgrid Systems under Realistic Constraints

    Directory of Open Access Journals (Sweden)

    Leonardo Trigueiro dos Santos

    2016-12-01

    Full Text Available The microgrid system is an answer to the need for increasing renewable energy penetration and also works as a bridge to the future smart grid. For a microgrid applied to a commercial building equipped with photovoltaic sources, a DC microgrid architecture can improve the efficiency of the system while ensuring robustness and reducing the overall energy cost. Given power grid stress and the intermittency of DC microgrid power production, backup power provision and load shedding operations may be needed to stabilize the DC bus voltage. Based on a knapsack problem formulation, this paper presents a realistic optimization approach to shedding a building's appliances, considering the priority of each appliance as well as a minimum amount of load that must be served. The problem is solved by mixed integer linear programming with the CPLEX solver. The proposed architecture ensures critical load supply and voltage stabilization through real-time operation of the operational algorithm, allowing the load shedding optimization to be applied without compromising the robustness of the system. Simulation results show that the DC microgrid is able to supply the building power network by applying the load shedding optimization program to overcome, mainly, renewable energy intermittency.
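The knapsack view of load shedding in this record can be illustrated with a plain dynamic program in place of the MILP/CPLEX formulation the authors actually use (the appliance names, powers and priorities below are hypothetical):

```python
def keep_loads(loads, budget_w):
    """0/1 knapsack: pick the subset of appliances (name, power_W, priority)
    whose total power fits the available budget and whose summed priority is
    maximal; everything not picked is shed.  Dynamic program over watt totals."""
    best = {0: (0, ())}  # power used -> (total priority, names kept)
    for name, power, priority in loads:
        for used, (value, kept) in list(best.items()):  # snapshot: each item once
            new_used = used + power
            if new_used > budget_w:
                continue
            cand = (value + priority, kept + (name,))
            if new_used not in best or best[new_used][0] < cand[0]:
                best[new_used] = cand
    return max(best.values())  # (best priority sum, kept appliance names)
```

With a 500 W budget and loads fridge (150 W, priority 10), HVAC (2000 W, 6), lights (100 W, 8), TV (200 W, 2), the program keeps fridge, lights and TV and sheds the HVAC.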

  3. Multiobjective Optimization Methodology A Jumping Gene Approach

    CERN Document Server

    Tang, KS

    2012-01-01

    Complex design problems are often governed by a number of performance merits. These markers gauge how good the design is going to be, but can conflict with the performance requirements that must be met. The challenge is reconciling these two requirements. This book introduces a newly developed jumping-gene algorithm, designed to address multi-objective problems and to supply adequate solutions quickly. The text presents various multi-objective optimization techniques and provides the technical know-how for obtaining trade-off solutions between solution spread and convergence.

  4. Stochastic Optimization Approaches for Solving Sudoku

    CERN Document Server

    Perez, Meir

    2008-01-01

    In this paper the Sudoku problem is solved using stochastic search techniques: Cultural Genetic Algorithm (CGA), Repulsive Particle Swarm Optimization (RPSO), Quantum Simulated Annealing (QSA), and a Hybrid method that combines Genetic Algorithm with Simulated Annealing (HGASA). The results show that CGA, QSA and HGASA are able to solve the Sudoku puzzle, with CGA finding a solution in 28 seconds, QSA in 65 seconds, and HGASA in 1.447 seconds. HGASA is fastest mainly because it combines the parallel searching of GA with the flexibility of SA. The RPSO was found to be unable to solve the puzzle.

  5. Optimization of Orthopaedic Drilling: A Taguchi Approach

    Directory of Open Access Journals (Sweden)

    Rupesh Kumar Pandey

    2012-06-01

    Full Text Available Bone drilling is a common procedure used to prepare an implant site during orthopaedic surgery. An increase in temperature during such a procedure can result in thermal osteonecrosis, which may delay healing or reduce the stability of the fixation. It is therefore important to minimize the thermal invasion of the bone during drilling. The Taguchi method has been applied to investigate the optimal combination of drill diameter, feed rate and spindle speed for minimizing the temperature produced in dry drilling of Polymethylmethacrylate (PMMA).
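The record does not give formulas, but Taguchi analysis for a minimized response conventionally uses the smaller-the-better signal-to-noise ratio S/N = −10·log10(mean(y²)). A minimal sketch under that assumption (the drilling temperatures shown are hypothetical):

```python
import math

def sn_smaller_is_better(ys):
    """Taguchi smaller-the-better signal-to-noise ratio:
    S/N = -10 * log10(mean(y_i^2)).  A larger S/N means a lower, more
    consistent response (here, drilling temperature)."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

def best_level(results_by_level):
    """Pick the factor level (e.g. a spindle speed) whose replicated
    temperature measurements give the highest S/N ratio."""
    return max(results_by_level,
               key=lambda lvl: sn_smaller_is_better(results_by_level[lvl]))
```

Applied per factor of the orthogonal array, the level with the highest mean S/N is selected; combining the winning levels of all factors gives the predicted optimal drilling condition.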

  6. Stochastic learning and optimization a sensitivity-based approach

    CERN Document Server

    Cao, Xi-Ren

    2007-01-01

    Performance optimization is vital in the design and operation of modern engineering systems. This book provides a unified framework based on a sensitivity point of view. It introduces new approaches and proposes new research topics.

  7. Optimization approaches to volumetric modulated arc therapy planning

    Energy Technology Data Exchange (ETDEWEB)

    Unkelbach, Jan, E-mail: junkelbach@mgh.harvard.edu; Bortfeld, Thomas; Craft, David [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Alber, Markus [Department of Medical Physics and Department of Radiation Oncology, Aarhus University Hospital, Aarhus C DK-8000 (Denmark); Bangert, Mark [Department of Medical Physics in Radiation Oncology, German Cancer Research Center, Heidelberg D-69120 (Germany); Bokrantz, Rasmus [RaySearch Laboratories, Stockholm SE-111 34 (Sweden); Chen, Danny [Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, Indiana 46556 (United States); Li, Ruijiang; Xing, Lei [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Men, Chunhua [Department of Research, Elekta, Maryland Heights, Missouri 63043 (United States); Nill, Simeon [Joint Department of Physics at The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London SM2 5NG (United Kingdom); Papp, Dávid [Department of Mathematics, North Carolina State University, Raleigh, North Carolina 27695 (United States); Romeijn, Edwin [H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Salari, Ehsan [Department of Industrial and Manufacturing Engineering, Wichita State University, Wichita, Kansas 67260 (United States)

    2015-03-15

    Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.

  8. Optimal Configuration of a Redundant Robotic Arm: Compliance Approach

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Applications of robots in tasks where the robot's end-effector bears loads, such as manipulating or assembling an object, picking and placing loads, grinding or drilling, demand precision. One aspect that improves precision is the limitation, if not elimination, of manipulator compliance. This paper presents a compliance optimization approach for determining an optimal manipulator configuration for a given position in the robot's task space. A numerical solution for minimal compliance, a nonlinear constrained optimization problem, is presented for an arbitrary position and illustrated by an example, using a model developed in the ADAMS software together with MATLAB optimization tools. The paper also investigates the optimal value function for robot tasks in which the tool point is subjected to an applied force while generating an important trajectory, such as in grinding processes. The optimal value function is needed for optimal configuration control.

  9. An Optimizing Compiler for Petascale I/O on Leadership-Class Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Kandemir, Mahmut Taylan [PSU; Choudary, Alok [Northwestern; Thakur, Rajeev [ANL

    2014-03-01

    In high-performance computing (HPC), parallel I/O architectures usually have very complex hierarchies with multiple layers that collectively constitute an I/O stack, including high-level I/O libraries such as PnetCDF and HDF5, I/O middleware such as MPI-IO, and parallel file systems such as PVFS and Lustre. Our DOE project explored automated instrumentation and compiler support for I/O-intensive applications. The project made significant progress towards understanding the complex I/O hierarchies of high-performance storage systems (including storage caches, HDDs, and SSDs), and towards designing and implementing state-of-the-art compiler/runtime technology targeting I/O-intensive HPC applications on leadership-class machines. This final report summarizes the major achievements of the project and points out promising future directions. Two new sections in this report compared to the previous report are IOGenie and SSD/NVM-specific optimizations.

  10. Distributed Cooperative Optimal Control for Multiagent Systems on Directed Graphs: An Inverse Optimal Approach.

    Science.gov (United States)

    Zhang, Huaguang; Feng, Tao; Yang, Guang-Hong; Liang, Hongjing

    2015-07-01

    In this paper, the inverse optimal approach is employed to design distributed consensus protocols that guarantee consensus and global optimality with respect to some quadratic performance indexes for identical linear systems on a directed graph. The inverse optimal theory is developed by introducing the notion of partial stability. As a result, the necessary and sufficient conditions for inverse optimality are proposed. By means of the developed inverse optimal theory, the necessary and sufficient conditions are established for globally optimal cooperative control problems on directed graphs. Basic optimal cooperative design procedures are given based on asymptotic properties of the resulting optimal distributed consensus protocols, and the multiagent systems can reach desired consensus performance (convergence rate and damping rate) asymptotically. Finally, two examples are given to illustrate the effectiveness of the proposed methods.
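The distributed consensus protocol underlying this record, u_i = sum_j a_ij (x_j − x_i), can be illustrated with a small simulation (the directed graph, step size and iteration count are hypothetical; the paper's inverse-optimal gain design is not reproduced):

```python
def simulate_consensus(adj, x0, step=0.1, iters=400):
    """Euler-integrate x_i' = sum_j a_ij * (x_j - x_i) on a directed graph.
    adj[i][j] is the weight a_ij of the edge from agent j to agent i."""
    x = list(x0)
    n = len(x)
    for _ in range(n * 0 + iters):
        u = [sum(adj[i][j] * (x[j] - x[i]) for j in range(n)) for i in range(n)]
        x = [x[i] + step * u[i] for i in range(n)]
    return x
```

On a directed cycle of three agents (which contains a spanning tree and is weight-balanced), the states converge to the average of the initial conditions.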

  11. Optimal Approach to SAR Image Despeckling

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Speckle filtering of synthetic aperture radar (SAR) images while preserving the spatial signal variability (texture and fine structures) still remains a challenge. Many algorithms have been proposed for SAR imagery despeckling; however, the simulated annealing (SA) method is currently one of the best choices. A critical problem in the study of SA is providing appropriate cooling schedules that ensure fast convergence to near-optimal solutions. This paper gives a new necessary and sufficient condition on the cooling schedule for the algorithm state to converge in probability to the set of globally minimum cost states. Moreover, it constructs an appropriate objective function for SAR image despeckling. An experimental result on an actual SAR image is presented.
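The record's own cooling-schedule condition is not quoted, but a generic simulated annealing loop makes the role of the schedule concrete (a toy 1-D cost stands in for the SAR despeckling objective; geometric cooling is an assumption here, not the paper's schedule):

```python
import math
import random

def simulated_annealing(cost, x0, neighbor, t0=1.0, alpha=0.95, iters=2000, seed=0):
    """Generic SA minimizer with a geometric cooling schedule T_k = alpha^k * T0.
    (The classical guarantee of convergence in probability requires the much
    slower logarithmic schedule T_k = c / log(k + 2); geometric cooling is the
    usual practical compromise.)"""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = cost(y)
        # Always accept improvements; accept uphill moves with Metropolis probability.
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha
    return best, fbest
```

On the toy cost (x − 3)² with uniform ±0.5 neighbor moves, the loop descends from x = 10 into the basin around x = 3.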

  12. An optimization approach for the satisfiability problem

    Directory of Open Access Journals (Sweden)

    S. Noureddine

    2015-01-01

    Full Text Available We describe a new approach for solving the satisfiability problem by geometric programming. We focus on the theoretical background and give details of the algorithmic procedure. The algorithm is provably efficient, as geometric programming is in essence a polynomially solvable problem. The correctness of the algorithm is discussed. The version of the satisfiability problem we study is exact satisfiability with only positive variables, which is known to be NP-complete.

  13. Chronopsychological Approach for Optimizing Human Performance.

    Science.gov (United States)

    1980-03-01

    Chronopsychology (from the Greek χρόνος, meaning time, and psychology) was first introduced by Folkard (1977), but earlier Halberg (1973) proposed "educative chronobiology" to represent... or phase delay was equally rapid. They felt that the rapid adjustment derives from the individuals' genetic makeup. They noticed, however, that the... chronobiological approach. AGARD Lecture Series on "Sleep, Wakefulness and Circadian Rhythm", AGARD-LS-105, 1979.

  14. EPSILON-CONTINUATION APPROACH FOR TRUSS TOPOLOGY OPTIMIZATION

    Institute of Scientific and Technical Information of China (English)

    GUO Xu; CHENG Gengdong

    2004-01-01

    In the present paper, a so-called epsilon-continuation approach is proposed for reaching singular optima in truss topology optimization problems. This approach is an improved version of the epsilon-relaxed approach developed previously by the authors. In the proposed approach, we start the optimization process from a relatively large value of the relaxation parameter and obtain a solution by applying the epsilon-relaxed approach. We then decrease the relaxation parameter by a small amount and choose the optimal solution found in the previous optimization as the initial design for the next one. This continuation process is repeated until a small termination value of the relaxation parameter is reached. A convergence analysis of the proposed approach is also presented. Numerical examples show that this approach can alleviate the dependence of the final solution on the initial choice of the design variables and enhance the probability of finding the singular optimum from rather arbitrary initial designs.
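The continuation loop described above can be sketched generically, with the inner epsilon-relaxed solve abstracted into a callback (the closed-form toy subproblem below is hypothetical, standing in for the truss subproblem):

```python
def continuation(solve_relaxed, eps0=1.0, shrink=0.5, eps_min=1e-6, x0=0.0):
    """Epsilon-continuation: repeatedly solve the relaxed subproblem while the
    relaxation parameter is driven toward zero, warm-starting each solve from
    the previous optimum."""
    eps, x = eps0, x0
    while eps > eps_min:
        x = solve_relaxed(eps, x)  # warm start from the previous solution
        eps *= shrink
    return x

# Toy subproblem (hypothetical): minimize (x - 2)^2 subject to the relaxed
# bound x <= 1 + eps; the optimum is the projection min(2, 1 + eps), so the
# continuation path x*(eps) tends to the constrained optimum x = 1.
def toy_subproblem(eps, x_start):
    return min(2.0, 1.0 + eps)  # warm start unused in this closed-form toy

x_star = continuation(toy_subproblem)
```

In the truss setting the warm start is what matters: each relaxed problem is easy near the previous solution, and the path of optima leads into the singular optimum that a direct solve tends to miss.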

  15. Optimality approaches to describe characteristic fluvial patterns on landscapes.

    Science.gov (United States)

    Paik, Kyungrock; Kumar, Praveen

    2010-05-12

    Mother Nature has left amazingly regular geomorphic patterns on the Earth's surface. These patterns are often explained as having arisen from some optimal behaviour of natural processes. However, there is little agreement on what is being optimized. As a result, a number of alternatives have been proposed, often with little a priori justification, on the argument that successful predictions will lend a posteriori support to the hypothesized optimality principle. Given that maximum entropy production is an optimality principle attempting to predict microscopic behaviour from a macroscopic characterization, this paper reviews similar approaches with the goal of comparing and contrasting them to enable synthesis. While assumptions of optimal behaviour approach a system from a macroscopic viewpoint, process-based formulations attempt to resolve the mechanistic details whose interactions lead to the system-level functions. Using observed optimality trends may help simplify problem formulation at the appropriate scale of interest. However, for such an approach to be successful, we suggest that optimality approaches should be formulated at the broader level of the environmental system, i.e. incorporating the dynamic nature of environmental variables and the complex feedback mechanisms between fluvial and non-fluvial processes.

  16. A Metaheuristic Approach for IT Projects Portfolio Optimization

    CERN Document Server

    Pushkar, Shashank; Mishra, Akhileshwar

    2010-01-01

    Optimal selection of interdependent IT projects for implementation over multiple periods has been challenging in the framework of real option valuation. This paper presents a mathematical optimization model for a multi-stage portfolio of IT projects. The model optimizes the value of the portfolio within given budgetary and sequencing constraints for each period; the sequencing constraints arise from time-wise interdependencies among projects. A metaheuristic approach is well suited to this kind of problem, and a genetic algorithm model is proposed for the solution. This optimization model and solution approach can help IT managers make optimal funding decisions for project prioritization over multiple sequential periods. The model also gives managers the flexibility to generate alternative portfolios by changing the maximum and minimum number of projects to be implemented in each sequential period.
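A minimal genetic algorithm for single-period project selection under one budget illustrates the metaheuristic idea (project values, costs and GA parameters are hypothetical; the paper's multi-period sequencing constraints and real-option valuation are omitted):

```python
import random

def ga_portfolio(values, costs, budget, pop_size=60, gens=60, seed=1):
    """Toy genetic algorithm for 0/1 project selection under a budget.
    A genome is a bit list (1 = fund the project); infeasible genomes score zero."""
    rng = random.Random(seed)
    n = len(values)

    def fitness(g):
        cost = sum(c for c, bit in zip(costs, g) if bit)
        return sum(v for v, bit in zip(values, g) if bit) if cost <= budget else 0

    def crossover(a, b):
        cut = rng.randrange(1, n)          # single-point crossover
        return a[:cut] + b[cut:]

    def mutate(g):
        g = list(g)
        g[rng.randrange(n)] ^= 1           # flip one funding decision
        return g

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]       # elitism keeps the best fifth
        children = [mutate(crossover(rng.choice(elite), rng.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    best = max(pop, key=fitness)
    return best, fitness(best)
```

The paper's multi-period version would extend the genome to one funding period per project and penalize violated sequencing constraints in the fitness function.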

  17. About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture

    Science.gov (United States)

    Grauer, Manfred; Barth, Thomas

    2004-06-01

    The permanently increasing complexity of products and their manufacturing processes, combined with a shorter "time-to-market", leads to more and more use of simulation and optimization software for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information & Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can comprise software systems, hardware systems, or communication networks. An appropriate IT infrastructure must provide the means to integrate all these resources and enable their use even across a network, to cope with requirements from geographically distributed scenarios, e.g. in computational and/or collaborative engineering. Integrating experts' knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, the utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation, the CAD system CATIA is used, coupled with the FEM simulation system INDEED for the simulation of sheet-metal forming processes and with the problem solving environment OpTiX for distributed optimization.

  18. Performance optimization of dye-sensitized solar cells by multilayer gradient scattering architecture of TiO2 microspheres

    Science.gov (United States)

    Li, Mingyue; Li, Meiya; Liu, Xiaolian; Bai, Lihua; Luoshan, Mengdai; Lei, Wen; Wang, Zhen; Zhu, Yongdan; Zhao, Xingzhong

    2017-01-01

    TiO2 microspheres (TMSs) with a unique hierarchical structure and an unusually high specific surface area are synthesized and incorporated in various multilayer gradient architectures to form novel photoanodes for dye-sensitized solar cells (DSSCs). These architectures significantly influence the photoelectric properties of the DSSCs. The DSSC with the optimal gradient-ascent TMS architecture, M036, has the largest amount of dye absorption, the strongest light absorption, the longest electron lifetime and the lowest electron recombination, and thus exhibits the maximum short-circuit current density (Jsc) of 16.49 mA cm-2 and photoelectric conversion efficiency (η) of 7.01%, higher than those of conventional DSSCs by 21% and 22%, respectively. These notable improvements can be attributed to the M036 gradient-ascent architecture, which most effectively increases dye absorption and localizes incident light within the photoanode through the light scattering of the TMSs, thus utilizing the incident light thoroughly. This study provides an optimized and universal configuration for scattering microspheres incorporated in a hybrid photoanode, which can significantly improve the performance of DSSCs.

  19. A Multifaceted Approach to Modernizing NASA's Advanced Multi-Mission Operations System (AMMOS) System Architecture

    Science.gov (United States)

    Estefan, Jeff A.; Giovannoni, Brian J.

    2014-01-01

    The Advanced Multi-Mission Operations System (AMMOS) is NASA's premier space mission operations product line offering for use in deep-space robotic and astrophysics missions. The general approach to AMMOS modernization over the course of its 29-year history exemplifies a continual, evolutionary approach with periods of sponsor investment peaks and valleys in between. Today, the Multimission Ground Systems and Services (MGSS) office, the program office that manages the AMMOS for NASA, actively pursues modernization initiatives and continues to evolve the AMMOS by incorporating enhanced capabilities and newer technologies into its end-user tool and service offerings. Despite the myriad modernization investments that have been made over the evolutionary course of the AMMOS, pain points remain. These pain points, based on interviews with numerous flight project mission operations personnel, can be classified principally into two major categories: 1) information-related issues, and 2) process-related issues. By information-related issues, we mean pain points associated with the management and flow of MOS data across the various system interfaces. By process-related issues, we mean pain points associated with the MOS activities performed by mission operators (i.e., humans) and the supporting software infrastructure used in support of those activities. In this paper, three foundational concepts (Timeline, Closed Loop Control, and Separation of Concerns) collectively form the basis for expressing a set of core architectural tenets that provides a multifaceted approach to AMMOS system architecture modernization intended to address the information- and process-related issues. Each of these architectural tenets will be further explored in this paper. Ultimately, we envision the application of these core tenets resulting in a unified vision of a future-state architecture for the AMMOS, one that is intended to result in a highly adaptable, highly efficient, and highly cost-effective system.

  20. A practical multiscale approach for optimization of structural damping

    DEFF Research Database (Denmark)

    Andreassen, Erik; Jensen, Jakob Søndergaard

    2016-01-01

    A simple and practical multiscale approach suitable for topology optimization of structural damping in a component ready for additive manufacturing is presented. The approach consists of two steps: First, the homogenized loss factor of a two-phase material is maximized. This is done in order...

  1. A collective neurodynamic optimization approach to bound-constrained nonconvex optimization.

    Science.gov (United States)

    Yan, Zheng; Wang, Jun; Li, Guocheng

    2014-07-01

    This paper presents a novel collective neurodynamic optimization method for solving nonconvex optimization problems with bound constraints. First, it is proved that a one-layer projection neural network has the property that its equilibria are in one-to-one correspondence with the Karush-Kuhn-Tucker points of the constrained optimization problem. Next, a collective neurodynamic optimization approach is developed by utilizing a group of recurrent neural networks in the framework of particle swarm optimization, emulating the paradigm of brainstorming. Each recurrent neural network carries out precise constrained local search according to its own neurodynamic equations. By iteratively improving the solution quality of each recurrent neural network using the information of the locally and globally best known solutions, the group can obtain the global optimal solution to a nonconvex optimization problem. The advantages of the proposed collective neurodynamic optimization approach over evolutionary approaches lie in its constraint handling ability and real-time computational efficiency. The effectiveness and characteristics of the proposed approach are illustrated using several multimodal benchmark functions.
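    The collective scheme can be pictured with a minimal, illustrative sketch (not the authors' implementation): a few agents each run an Euler-discretized projection neural network for bound-constrained local search on a hypothetical multimodal function, then are resampled around the best known solution in PSO fashion. The test function, step size, and resampling spread below are assumptions.

```python
import random

def f(x):
    # hypothetical multimodal test function on [-2, 2]
    return x**4 - 3*x**2 + x

def grad(x):
    return 4*x**3 - 6*x + 1

def project(x, lo=-2.0, hi=2.0):
    # projection onto the bound constraints
    return max(lo, min(hi, x))

def local_search(x, steps=200, eta=0.01):
    # Euler discretization of projection neural network dynamics:
    # the state settles at a KKT point of the bound-constrained problem
    for _ in range(steps):
        x = project(x - eta * grad(x))
    return x

def collective_search(n_agents=8, rounds=5, seed=0):
    rng = random.Random(seed)
    swarm = [rng.uniform(-2, 2) for _ in range(n_agents)]
    best = min(swarm, key=f)
    for _ in range(rounds):
        # each agent performs precise constrained local search
        swarm = [local_search(x) for x in swarm]
        best = min(swarm + [best], key=f)
        # PSO-style information sharing: resample around the best known solution
        swarm = [project(best + rng.gauss(0, 0.5)) for _ in swarm]
    return best
```

Each agent alone would only find the nearest local minimum; the resampling step is what lets the group concentrate on the globally best basin.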

  2. Beyond Information Architecture: A Systems Integration Approach to Web-site Design

    Directory of Open Access Journals (Sweden)

    Krisellen Maloney

    2017-09-01

    Full Text Available Users' needs and expectations regarding access to information have fundamentally changed, creating a disconnect between how users expect to use a library Web site and how the site was designed. At the same time, library technical infrastructures include legacy systems that were not designed for the Web environment. The authors propose a framework that combines elements of information architecture with approaches to incremental system design and implementation. The framework allows for the development of a Web site that is responsive to changing user needs, while recognizing the need for libraries to adopt a cost-effective approach to implementation and maintenance.

  3. Orion Flight Test 1 Architecture: Observed Benefits of a Model Based Engineering Approach

    Science.gov (United States)

    Simpson, Kimberly A.; Sindiy, Oleg V.; McVittie, Thomas I.

    2012-01-01

    This paper details how a NASA-led team is using a model-based systems engineering approach to capture, analyze and communicate the end-to-end information system architecture supporting the first unmanned orbital flight of the Orion Multi-Purpose Crew Exploration Vehicle. Along with a brief overview of the approach and its products, the paper focuses on the observed program-level benefits, challenges, and lessons learned, all of which may be applied to improve systems engineering tasks for characteristically similar challenges.

  4. An Approach to Share Architectural Drawing Information and Document Information for Automated Code Checking System

    Institute of Scientific and Technical Information of China (English)

    Jungsik Choi; Inhan Kim

    2008-01-01

    The purpose of this study is to suggest an optimized way of managing and sharing information between standard architectural drawings and construction documents in the Korean architectural industry for an automated code checking system based on linked STEP and XML. To achieve this purpose, the authors have analyzed current research and technical developments for STEP and XML linking and developed a prototype system for sharing information between model-based drawings and XML-based construction documents. Finally, the authors have suggested a practical use scenario of sharing information through linked STEP and XML using a test case of automated code checking. In the paper, the possibility of constructing an integrated architectural computing environment through exchange and sharing of drawing information and external data for the whole building life-cycle, from the conceptual design stage to the construction and maintenance stage, has been examined. Automated code checking through linked STEP and XML could be enhanced through collaborative business, more complete code compliance, improved building performance, and reduced construction costs.
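    The automated code checking step can be pictured with a small sketch. The XML schema, element names, and the width rule below are hypothetical illustrations, not the paper's actual STEP/XML linkage: a rule codified in software scans building elements and reports violations.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML fragment standing in for a construction document
DOC = """<building>
  <door id="D1" width_mm="850"/>
  <door id="D2" width_mm="950"/>
</building>"""

MIN_DOOR_WIDTH_MM = 900  # hypothetical accessibility code requirement

def check_door_widths(xml_text):
    """Return the ids of doors violating the minimum-width rule."""
    root = ET.fromstring(xml_text)
    violations = []
    for door in root.iter("door"):
        if int(door.get("width_mm")) < MIN_DOOR_WIDTH_MM:
            violations.append(door.get("id"))
    return violations
```

In a full system, the checked attributes would be extracted from the STEP model and cross-referenced against the XML-encoded code documents rather than hard-coded.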

  5. An Efficient PageRank Approach for Urban Traffic Optimization

    Directory of Open Access Journals (Sweden)

    Florin Pop

    2012-01-01

    Our approach determines optimal decisions for each traffic light, based on the solution given by Larry Page for page ranking in the Web environment (Page et al., 1999). Our approach is similar to the work presented by Sheng-Chung et al. (2009) and Yousef et al. (2010). We consider that the traffic lights are controlled by servers, and a score for each road is computed based on an efficient PageRank approach and used in a cost function to determine optimal decisions. We demonstrate that the cumulative contribution of each car in the traffic respects the main constraint of the PageRank approach, preserving all the properties of the matrix considered in our model.
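    The road-scoring idea rests on standard PageRank power iteration, sketched below for a toy graph (nodes standing in for road segments; the damping factor and the example graph are illustrative, not taken from the paper):

```python
def pagerank(links, d=0.85, tol=1e-10, max_iter=200):
    """Power-iteration PageRank over a dict: node -> list of out-links."""
    nodes = sorted(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(max_iter):
        new = {v: (1.0 - d) / n for v in nodes}
        for v in nodes:
            out = links[v]
            if out:
                share = d * rank[v] / len(out)
                for w in out:
                    new[w] += share
            else:
                # dangling node: spread its rank uniformly
                for w in nodes:
                    new[w] += d * rank[v] / n
        if sum(abs(new[v] - rank[v]) for v in nodes) < tol:
            rank = new
            break
        rank = new
    return rank
```

In the traffic setting, each car's presence would contribute to the rank mass flowing along its road, and the resulting per-road scores feed the cost function for the light decisions.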

  6. A "Hybrid" Approach for Synthesizing Optimal Controllers of Hybrid Systems

    DEFF Research Database (Denmark)

    Zhao, Hengjun; Zhan, Naijun; Kapur, Deepak

    2012-01-01

    We propose an approach to reduce the optimal controller synthesis problem of hybrid systems to quantifier elimination; furthermore, we also show how to combine quantifier elimination with numerical computation in order to make it more scalable but at the same time keep errors arising due to discretization manageable and within bounds. A major advantage of our approach is not only that it avoids errors due to numerical computation, but that it also gives a better optimal controller. In order to illustrate our approach, we use the real industrial example of an oil pump provided by the German company HYDAC...

  7. Quality-driven multi-objective optimization of software architecture design : method, tool, and application

    NARCIS (Netherlands)

    Etemadi Idgahi (Etemaadi), Ramin

    2014-01-01

    Software architecting is a non-trivial and demanding task for software engineers to perform. The architecture is a key enabler for software systems. Besides being crucial for user functionality, the software architecture has deep impact on software qualities such as performance, safety, and cost.

  9. An Update on Design Tools for Optimization of CMC 3D Fiber Architectures

    Science.gov (United States)

    Lang, J.; DiCarlo, J.

    2012-01-01

    Objective: Describe and update progress for NASA's efforts to develop 3D architectural design tools for CMC in general and for SiC/SiC composites in particular. Describe past and current sequential work efforts aimed at: Understanding key fiber and tow physical characteristics in conventional 2D and 3D woven architectures as revealed by microstructures in the literature. Developing an Excel program for down-selecting and predicting key geometric properties and resulting key fiber-controlled properties for various conventional 3D architectures. Developing a software tool for accurately visualizing all the key geometric details of conventional 3D architectures. Validating tools by visualizing and predicting the internal geometry and key mechanical properties of a NASA SiC/SiC panel with a 3D orthogonal architecture. Applying the predictive and visualization tools toward advanced 3D orthogonal SiC/SiC composites, and combining them into a user-friendly software program.

  10. Optimizing Groundwater Monitoring Networks Using Integrated Statistical and Geostatistical Approaches

    Directory of Open Access Journals (Sweden)

    Jay Krishna Thakur

    2015-08-01

    Full Text Available The aim of this work is to investigate new approaches using methods based on statistics and geo-statistics for spatio-temporal optimization of groundwater monitoring networks. The formulated and integrated methods were tested with the groundwater quality data set of Bitterfeld/Wolfen, Germany. Spatially, the monitoring network was optimized using geo-statistical methods. Temporal optimization of the monitoring network was carried out using Sen's method (1968). For geostatistical network optimization, a geostatistical spatio-temporal algorithm was used to identify redundant wells in 2- and 2.5-D Quaternary and Tertiary aquifers. Influences of interpolation block width, dimension, contaminant association, groundwater flow direction and aquifer homogeneity on statistical and geostatistical methods for monitoring network optimization were analysed. The integrated approach shows 37% and 28% redundancies in the monitoring network in the Quaternary and Tertiary aquifers, respectively. The geostatistical method also recommends 41 and 22 new monitoring wells in the Quaternary and Tertiary aquifers, respectively. In temporal optimization, an overall optimized sampling interval was recommended in terms of lower quartile (238 days), median quartile (317 days) and upper quartile (401 days) in the research area of Bitterfeld/Wolfen. The demonstrated methods for improving a groundwater monitoring network can be used in real monitoring network optimization with due consideration given to influencing factors.
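    The temporal-optimization step relies on Sen's (1968) slope estimator, the median of all pairwise slopes in a time series. A minimal sketch on synthetic data (the sample values below are invented):

```python
from statistics import median

def sens_slope(times, values):
    """Sen's (1968) slope estimator: the median of all pairwise slopes.
    Robust to outliers, unlike an ordinary least-squares fit."""
    slopes = [(values[j] - values[i]) / (times[j] - times[i])
              for i in range(len(times))
              for j in range(i + 1, len(times))
              if times[j] != times[i]]
    return median(slopes)
```

In a monitoring context, the estimated trend per well informs how frequently that well actually needs to be sampled.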

  11. Speleothem Architectural Analysis: Integrated approach for stalagmite-based paleoclimate research

    Science.gov (United States)

    Martín-Chivelet, Javier; Muñoz-García, M. Belén; Cruz, Juncal A.; Ortega, Ana I.; Turrero, María J.

    2017-05-01

    Carbonate stalagmites have become increasingly attractive to Quaternary paleoclimate research, as they can be accurately dated by radiometric methods and concurrently yield high-resolution multi-proxy records of past climate conditions. Reliable series however require the precise characterization of stalagmite internal micro-stratigraphy, a task too often poorly accomplished despite the recent advances in speleothem research. This weakness is due to the lack of a robust methodological framework capable of integrating the wide range of petrographical and micro-stratigraphical methods currently used in speleothem characterization. To cover this need, this review introduces the Speleothem Architectural Analysis (SAA), a holistic approach inspired by well-established stratigraphic procedures such as architectural element analysis and sequence stratigraphy, commonly used by geoscientists for categorizing internal stratigraphic heterogeneities in sedimentary deposits. The new approach establishes a six-fold hierarchy of speleothem architectural elements and their bounding surfaces: individual crystallites (1st order), single growth layers (2nd order), speleothem fabrics (3rd order), stacking pattern sets (4th order), morphostratigraphic units (5th order), and unconformity-bounded units and major unconformities (6th order). Each category of architectural element is formed over a different range of time, from intervals as short as a year/season to others of centuries or millennia. The SAA, which has the capability of incorporating any petrographic or stratigraphic classification, provides a useful, systematic, and versatile tool for unraveling the complexities of speleothem growth, and thus for genetically interpreting stalagmites on a multi-temporal scale. A detailed speleothem stratigraphy must be the basis for performing robust reconstruction of paleoclimate series. It should precede and accompany any work focused on absolute age dating or in...

  12. A novel approach for optimal chiller loading using particle swarm optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ardakani, A. Jahanbani; Ardakani, F. Fattahi; Hosseinian, S.H. [Department of Electrical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Hafez Avenue, Tehran 15875-4413 (Iran, Islamic Republic of)

    2008-07-01

    This study employs two new methods to solve the optimal chiller loading (OCL) problem: continuous genetic algorithm (GA) and particle swarm optimization (PSO). Because of the continuous nature of the variables in the OCL problem, continuous GA and PSO easily overcome deficiencies of other conventional optimization methods. The partial load ratio (PLR) of each chiller is chosen as the variable to be optimized, and the consumption power of the chillers is considered as the fitness function. Both of these methods find the optimal solution while the equality constraint is exactly satisfied. Some of the major advantages of the proposed approaches over other conventional methods are fast convergence, escape from local optima, and simple implementation, as well as independence of the solution from the problem. The abilities of the proposed methods are examined with reference to an example system. To demonstrate these abilities, results are compared with the binary genetic algorithm method. The proposed approaches can be perfectly applied to air-conditioning systems. (author)
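    A minimal PSO sketch of the OCL idea is given below. The chiller capacities and quadratic power curves are invented for illustration, and the load-balance equality constraint is handled with a quadratic penalty rather than being satisfied exactly as in the paper:

```python
import random

# Hypothetical power curves P(PLR) = a + b*PLR + c*PLR**2 (kW), one per chiller
CHILLERS = [
    {"cap": 500.0, "coef": (100.0, 150.0, 250.0)},
    {"cap": 500.0, "coef": (120.0, 100.0, 300.0)},
    {"cap": 800.0, "coef": (150.0, 200.0, 400.0)},
]

def power(plr, coef):
    a, b, c = coef
    return a + b * plr + c * plr * plr

def fitness(plrs, load, penalty=1e6):
    total_power = sum(power(p, ch["coef"]) for p, ch in zip(plrs, CHILLERS))
    supplied = sum(p * ch["cap"] for p, ch in zip(plrs, CHILLERS))
    # quadratic penalty enforcing sum(PLR_i * cap_i) = load
    return total_power + penalty * ((supplied - load) / load) ** 2

def pso_ocl(load, n_particles=30, iters=300, seed=1):
    rng = random.Random(seed)
    dim = len(CHILLERS)
    X = [[rng.random() for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pval = [fitness(x, load) for x in X]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (assumed)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(1.0, max(0.0, X[i][d] + V[i][d]))  # clamp PLR to [0, 1]
            val = fitness(X[i], load)
            if val < pval[i]:
                pbest[i], pval[i] = X[i][:], val
                if val < gval:
                    gbest, gval = X[i][:], val
    return gbest, gval
```

Because PLRs are naturally continuous, the particles search the box [0, 1]^n directly, which is the advantage the abstract claims over binary encodings.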

  13. Optimal FPGA implementation of CL multiwavelets architecture for signal denoising application

    Science.gov (United States)

    Mohan Kumar, B.; Vidhya Lavanya, R.; Sumesh, E. P.

    2013-03-01

    Wavelet transform is considered one of the efficient transforms of this decade for real time signal processing. Due to implementation constraints, scalar wavelets cannot simultaneously possess properties such as compact support, regularity, orthogonality and symmetry, which are desirable qualities to provide a good signal to noise ratio (SNR) in the case of signal denoising. This leads to the evolution of a new dimension of wavelet called 'multiwavelets', which possess more than one scaling and wavelet filter. The architecture implementation of multiwavelets is an emerging area of research. In real time, the signals are in scalar form, which demands the processing architecture to be scalar. But the conventional Donovan Geronimo Hardin Massopust (DGHM) and Chui-Lian (CL) multiwavelets are vector-valued and also unbalanced. In this article, the vectored multiwavelet transforms are converted into a scalar form and the architecture is implemented in an FPGA (Field Programmable Gate Array) for a signal denoising application. The architecture is compared with the DGHM multiwavelets architecture in terms of several objective and performance measures. The CL multiwavelets architecture is further optimised for best performance by using DSP48Es. The results show that the CL multiwavelet architecture is better suited to the signal denoising application.
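    For intuition, the wavelet-thresholding idea behind such denoisers can be sketched in the simplest scalar case: a one-level Haar transform (not the CL multiwavelet of the article) in which detail coefficients below a threshold are zeroed before reconstruction. The signal and threshold are invented:

```python
import math

def haar_step(x):
    # one level of the (scalar) Haar transform: averages and details
    avg = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    det = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return avg, det

def inv_haar_step(avg, det):
    # exact inverse of haar_step
    x = []
    for a, d in zip(avg, det):
        x += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]
    return x

def denoise(x, threshold):
    avg, det = haar_step(x)
    # hard thresholding: small details are treated as noise and zeroed
    det = [0.0 if abs(d) < threshold else d for d in det]
    return inv_haar_step(avg, det)
```

A multiwavelet replaces the single scaling/wavelet filter pair here with filter matrices, which is what allows compact support, orthogonality and symmetry simultaneously.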

  14. Departures from optimality when pursuing multiple approach or avoidance goals.

    Science.gov (United States)

    Ballard, Timothy; Yeo, Gillian; Neal, Andrew; Farrell, Simon

    2016-07-01

    This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit.
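    The dynamic-programming notion of optimality can be sketched for a toy two-goal pursuit. The progress probability, horizon, and goal states below are assumptions for illustration; the authors' task parameters differ:

```python
from functools import lru_cache

P_PROGRESS = 0.6  # chance a prioritized goal advances one step (assumed)

@lru_cache(maxsize=None)
def value(t, a, b):
    """Expected number of goals attained with t steps left, where goals
    A and B still need a and b steps of progress respectively."""
    done = (a == 0) + (b == 0)
    if t == 0 or done == 2:
        return float(done)
    best = 0.0
    # decision: prioritize goal A or goal B this step
    for need_a, need_b in ((max(a - 1, 0), b), (a, max(b - 1, 0))):
        ev = (P_PROGRESS * value(t - 1, need_a, need_b)
              + (1 - P_PROGRESS) * value(t - 1, a, b))
        best = max(best, ev)
    return best
```

Comparing a participant's chosen priority to the argmax of this recursion is the sense in which behavior "departs from optimality": for instance, with one step left and only goal B unfinished, the optimal policy prioritizes B regardless of risk attitude.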

  15. A Riccati approach for constrained linear quadratic optimal control

    Science.gov (United States)

    Sideris, Athanasios; Rodriguez, Luis A.

    2011-02-01

    An active-set method is proposed for solving linear quadratic optimal control problems subject to general linear inequality path constraints including mixed state-control and state-only constraints. A Riccati-based approach is developed for efficiently solving the equality constrained optimal control subproblems generated during the procedure. The solution of each subproblem requires computations that scale linearly with the horizon length. The algorithm is illustrated with numerical examples.
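    For the unconstrained case, the Riccati recursion underlying such subproblem solves can be sketched for a scalar system, with work that scales linearly with the horizon length as in the paper. The system and cost weights below are illustrative:

```python
def lqr_scalar(a, b, q, r, qf, N):
    """Backward Riccati recursion for x_{k+1} = a*x_k + b*u_k with stage
    cost q*x^2 + r*u^2 and terminal cost qf*x_N^2. O(N) work overall."""
    P = [0.0] * (N + 1)
    K = [0.0] * N
    P[N] = qf
    for k in range(N - 1, -1, -1):
        # feedback gain and cost-to-go update
        K[k] = a * b * P[k + 1] / (r + b * b * P[k + 1])
        P[k] = q + a * a * P[k + 1] - a * b * P[k + 1] * K[k]
    return K, P

def simulate(a, b, K, x0):
    """Roll the closed loop u_k = -K_k * x_k forward from x0."""
    xs, us = [x0], []
    for gain in K:
        u = -gain * xs[-1]
        us.append(u)
        xs.append(a * xs[-1] + b * u)
    return xs, us
```

In the paper's active-set setting, each equality-constrained subproblem admits a similar backward/forward sweep, which is why the per-subproblem cost stays linear in the horizon.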

  16. A Neurodynamic Optimization Approach to Bilevel Quadratic Programming.

    Science.gov (United States)

    Qin, Sitian; Le, Xinyi; Wang, Jun

    2016-08-19

    This paper presents a neurodynamic optimization approach to bilevel quadratic programming (BQP). Based on the Karush-Kuhn-Tucker (KKT) theorem, the BQP problem is reduced to a one-level mathematical program subject to complementarity constraints (MPCC). It is proved that the global solution of the MPCC is the minimal one of the optimal solutions to multiple convex optimization subproblems. A recurrent neural network is developed for solving these convex optimization subproblems. From any initial state, the state of the proposed neural network is convergent to an equilibrium point of the neural network, which is just the optimal solution of the convex optimization subproblem. Compared with existing recurrent neural networks for BQP, the proposed neural network is guaranteed to deliver the exact optimal solutions to any convex BQP problems. Moreover, it is proved that the proposed neural network for bilevel linear programming is convergent to an equilibrium point in finite time. Finally, three numerical examples are elaborated to substantiate the efficacy of the proposed approach.

  17. An integrated approach of topology optimized design and selective laser melting process for titanium implants materials.

    Science.gov (United States)

    Xiao, Dongming; Yang, Yongqiang; Su, Xubin; Wang, Di; Sun, Jianfeng

    2013-01-01

    Load-bearing bone implant materials should have sufficient stiffness and large porosity, which interact, since larger porosity causes lower mechanical properties. This paper seeks the maximum-stiffness architecture under the constraint of a specific volume fraction by a topology optimization approach; that is, maximum porosity can be achieved with predefined stiffness properties. The effective elastic moduli of conventional cubic and topology optimized scaffolds were calculated using the finite element analysis (FEA) method; also, specimens with different porosities of 41.1%, 50.3%, 60.2% and 70.7% respectively were fabricated by the Selective Laser Melting (SLM) process and tested in compression. Results showed that the computational effective elastic modulus of optimized scaffolds was approximately 13% higher than that of cubic scaffolds, while the experimental stiffness values were about 76% lower than the computational ones. The combination of the topology optimization approach and the SLM process would be available for development of titanium implant materials in consideration of both porosity and mechanical stiffness.

  18. ASIP Approach for Multimedia Applications Based on a Scalable VLIW DSP Architecture

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yanjun; HE Hu; SHEN Zheng; SUN Yihe

    2009-01-01

    The rapid development of multimedia techniques has increased the demands on multimedia processors. This paper presents a new design method to quickly design high performance processors for new multimedia applications. In this approach, a configurable processor based on a very long instruction word (VLIW) architecture is used as the basic core for designers to easily configure new processor cores for multimedia algorithms. Specific instructions designed for multimedia applications efficiently improve the performance of the target processor. Functions not implemented in the digital signal processor (DSP) core can be easily integrated into the target processor as user-defined hardware to increase the performance. Several examples are given based on the architecture. The results show that the processor performance is enhanced approximately 4 times on the H.263 codec and that the processor outperforms both DSPs and single instruction multiple data (SIMD) multimedia extension architectures by up to 8 times when computing the 2-D-IDCT.

  19. Parallel System Architecture (PSA): An efficient approach for automatic recognition of volcano-seismic events

    Science.gov (United States)

    Cortés, Guillermo; García, Luz; Álvarez, Isaac; Benítez, Carmen; de la Torre, Ángel; Ibáñez, Jesús

    2014-02-01

    Automatic recognition of volcano-seismic events is becoming one of the most demanded features in the early warning area at continuous monitoring facilities. While human-driven cataloguing is time-consuming and often an unreliable task, an appropriate machine framework allows expert technicians to focus only on result analysis and decision-making. This work presents an alternative to serial architectures used in classic recognition systems introducing a parallel implementation of the whole process: configuration, feature extraction, feature selection and classification stages are independently carried out for each type of events in order to exploit the intrinsic properties of each signal class. The system uses Gaussian Mixture Models (GMMs) to classify the database recorded at Deception Volcano Island (Antarctica) obtaining a baseline recognition rate of 84% with a cepstral-based waveform parameterization in the serial architecture. The parallel approach increases the results to close to 92% using mixture-based parameterization vectors or up to 91% when the vector size is reduced by 19% via the Discriminative Feature Selection (DFS) algorithm. Besides the result improvement, the parallel architecture represents a major step in terms of flexibility and reliability thanks to the class-focused analysis, providing an efficient tool for monitoring observatories which require real-time solutions.
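    The per-class modeling idea can be sketched in miniature: one Gaussian per event class on a single feature, a stand-in for the paper's GMMs over cepstral or mixture-based feature vectors. The class names and training values below are invented:

```python
from statistics import NormalDist, mean, stdev

def fit_class_models(training):
    """Fit one Gaussian per event class; each class is trained
    independently, mirroring the parallel, class-focused design."""
    return {label: NormalDist(mean(xs), stdev(xs))
            for label, xs in training.items()}

def classify(models, x):
    # pick the class whose model assigns the feature the highest likelihood
    return max(models, key=lambda label: models[label].pdf(x))
```

Because each class model is fitted and evaluated independently, feature extraction and selection can also be tailored per class, which is the gain the parallel architecture exploits.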

  20. Universal approach to optimal photon storage in atomic media.

    Science.gov (United States)

    Gorshkov, Alexey V; André, Axel; Fleischhauer, Michael; Sørensen, Anders S; Lukin, Mikhail D

    2007-03-23

    We present a universal physical picture for describing storage and retrieval of photon wave packets in a Lambda-type atomic medium. This physical picture encompasses a variety of different approaches to pulse storage ranging from adiabatic reduction of the photon group velocity and pulse-propagation control via off-resonant Raman fields to photon-echo-based techniques. Furthermore, we derive an optimal control strategy for storage and retrieval of a photon wave packet of any given shape. All these approaches, when optimized, yield identical maximum efficiencies, which only depend on the optical depth of the medium.

  1. Thread-Level Parallelization and Optimization of NWChem for the Intel MIC Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Shan, Hongzhang; Williams, Samuel; Jong, Wibe de; Oliker, Leonid

    2014-10-10

    In the multicore era it was possible to exploit the increase in on-chip parallelism by simply running multiple MPI processes per chip. Unfortunately, manycore processors' greatly increased thread- and data-level parallelism coupled with a reduced memory capacity demand an altogether different approach. In this paper we explore augmenting two NWChem modules, triples correction of the CCSD(T) and Fock matrix construction, with OpenMP in order that they might run efficiently on future manycore architectures. As the next NERSC machine will be a self-hosted Intel MIC (Xeon Phi) based supercomputer, we leverage an existing MIC testbed at NERSC to evaluate our experiments. In order to proxy the fact that future MIC machines will not have a host processor, we run all of our experiments in native mode. We found that while straightforward application of OpenMP to the deep loop nests associated with the tensor contractions of CCSD(T) was sufficient in attaining high performance, significant effort was required to safely and efficiently thread the TEXAS integral package when constructing the Fock matrix. Ultimately, our new MPI+OpenMP hybrid implementations attain up to 65x better performance for the triples part of the CCSD(T), due in large part to the fact that the limited on-card memory limits the existing MPI implementation to a single process per card. Additionally, we obtain up to 1.6x better performance on Fock matrix construction when compared with the best MPI implementations running multiple processes per card.

  2. Fuzzy Inspired Hybrid Genetic Approach to Optimize Travelling Salesman Problem

    Directory of Open Access Journals (Sweden)

    Bindu

    2012-06-01

    Full Text Available The Travelling Salesman Problem (TSP) belongs to the class of NP-complete problems, which are essentially exponential and take considerable time to solve exactly. In the present work we optimize this common NP-complete problem. We define a hybrid genetic approach that combines a fuzzy approach with genetic operators, and implement a modified DPX crossover to improve the genetic algorithm. The work is implemented in the MATLAB environment, and the obtained results show that the defined approach improves on the results of the existing genetic algorithm.
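    A plain genetic algorithm for the TSP can be sketched as below, using order crossover and swap mutation; the fuzzy component and the modified DPX crossover of the paper are not reproduced here, and all parameters are illustrative:

```python
import math
import random

def euclidean_matrix(points):
    # pairwise distance matrix for city coordinates
    return [[math.dist(p, q) for q in points] for p in points]

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def order_crossover(p1, p2, rng):
    # copy a slice from parent 1, fill the rest in parent-2 order
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    fill = [c for c in p2 if c not in child[i:j]]
    k = 0
    for idx in range(n):
        if child[idx] is None:
            child[idx] = fill[k]
            k += 1
    return child

def ga_tsp(dist, pop_size=60, generations=200, pmut=0.2, seed=3):
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        survivors = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            child = order_crossover(a, b, rng)
            if rng.random() < pmut:                # swap-mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda t: tour_length(t, dist))
```

In the paper's hybrid, fuzzy rules would additionally steer operator choices, and DPX crossover would preserve edges common to both parents rather than a contiguous slice.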

  3. A Knowledge-Based System Approach for Extracting Abstractions from Service Oriented Architecture Artifacts

    Directory of Open Access Journals (Sweden)

    George Goehring

    2013-03-01

    Full Text Available Rule-based methods have traditionally been applied to develop knowledge-based systems that replicate expert performance on a deep but narrow problem domain. Knowledge engineers capture expert knowledge and encode it as a set of rules for automating the expert’s reasoning process to solve problems in a variety of domains. We describe the development of a knowledge-based system approach to enhance program comprehension of Service Oriented Architecture (SOA software. Our approach uses rule-based methods to automate the analysis of the set of artifacts involved in building and deploying a SOA composite application. The rules codify expert knowledge to abstract information from these artifacts to facilitate program comprehension and thus assist Software Engineers as they perform system maintenance activities. A main advantage of the knowledge-based approach is its adaptability to the heterogeneous and dynamically evolving nature of SOA environments.

  4. Value Delivery Architecture Modeling – A New Approach for Business Modeling

    Directory of Open Access Journals (Sweden)

    Joachim Metzger

    2015-08-01

    Full Text Available Complexity and uncertainty have evolved as important challenges for entrepreneurship in many industries. Value Delivery Architecture Modeling (VDAM is a proposal for a new approach for business modeling to conquer these challenges. In addition to the creation of transparency and clarity, our approach supports the operationalization of business model ideas. VDAM is based on the combination of a new business modeling language called VDML, ontology building, and the implementation of a level of cross-company abstraction. The application of our new approach in the area of electric mobility in Germany, an industry sector with high levels of uncertainty and a lack of common understanding, shows several promising results: VDAM enables the development of an unambiguous and unbiased view on value creation. Additionally it allows for several applications leading to a more informed decision towards the implementation of new business models.

  5. On the EU approach for DEMO architecture exploration and dealing with uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Coleman, M., E-mail: matti.coleman@euro-fusion.org [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); CCFE Fusion Association, Culham Science Centre, Abingdon, Oxfordshire OX14 3DB (United Kingdom); Maviglia, F.; Bachmann, C. [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); Anthony, J. [CCFE Fusion Association, Culham Science Centre, Abingdon, Oxfordshire OX14 3DB (United Kingdom); Federici, G. [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); Shannon, M. [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); CCFE Fusion Association, Culham Science Centre, Abingdon, Oxfordshire OX14 3DB (United Kingdom); Wenninger, R. [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); Max-Planck-Institut für Plasmaphysik, 85748 Garching (Germany)

    2016-11-01

    Highlights: • The issue of epistemic uncertainties in the DEMO design basis is described. • An approach to tackle uncertainty by investigating plant architectures is proposed. • The first wall heat load uncertainty is addressed following the proposed approach. - Abstract: One of the difficulties inherent in designing a future fusion reactor is dealing with uncertainty. As the major step between ITER and the commercial exploitation of nuclear fusion energy, DEMO will have to address many challenges – the natures of which are still not fully known. Unlike fission reactors, fusion reactors suffer from the intrinsic complexity of the tokamak (numerous interdependent system parameters) and from the dependence of plasma physics on scale – prohibiting design exploration founded on incremental progression and small-scale experimentation. For DEMO, this means that significant technical uncertainties will exist for some time to come, and a systems engineering design exploration approach must be developed to explore the reactor architecture when faced with these uncertainties. Important uncertainties in the context of fusion reactor design are discussed and a strategy for dealing with these is presented, treating the uncertainty in the first wall loads as an example.

  6. A new approach to acceleration of methods of creating graphic animation in architecture

    Directory of Open Access Journals (Sweden)

    Radojčić Marko

    2011-01-01

Full Text Available A modern approach to technical visualization is not possible without contemporary 3D animation methods. The obvious need to demonstrate the results of the design process in civil engineering, architecture and other technical areas is still met with classical, time-consuming and inflexible methods that require the process to restart whenever changes are made to the design. This method proposes changes based on already available open source software packages and components that can be modified in a manner that enables accelerated technical visualization in real time with instant modifications, along with efficient usage of the processing power of contemporary graphics processing units.

  7. A global optimization approach to multi-polarity sentiment analysis.

    Science.gov (United States)

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with a higher-explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement. The improvement of the two-polarity sentiment analysis was the smallest. We conclude that the PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. 
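The particle swarm search at the core of such an approach can be sketched generically. The snippet below is a minimal PSO minimizer applied to a hypothetical surrogate for classifier error as a function of feature count and (log) SVM regularization strength; the objective function, bounds and optimum location are invented for illustration and are not taken from the PSOGO-Senti paper:

```python
import random

random.seed(1)

def pso(fitness, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: minimizes `fitness` over the box `bounds`."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # per-particle best positions
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Hypothetical smooth surrogate for cross-validation error over
# (number of IG-selected features, log10 of the SVM C parameter).
def surrogate_error(x):
    k, log_c = x
    return ((k - 300.0) / 300.0) ** 2 + log_c ** 2

best, err = pso(surrogate_error, bounds=[(10.0, 1000.0), (-3.0, 3.0)])
print(best, err)
```

In an actual PSOGO-Senti-style setup the fitness would be a cross-validated SVM score rather than an analytic surrogate, making each evaluation expensive, which is why swarm size and iteration counts are kept modest.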

  8. Architecture For The Optimization Of A Machining Process In Real Time Through Rule-Based Expert System

    Science.gov (United States)

    Serrano, Rafael; González, Luis Carlos; Martín, Francisco Jesús

    2009-11-01

Under the project SENSOR-IA, which received financial funding from the Order of Incentives to the Regional Technology Centers of the Council of Innovation, Science and Enterprise of Andalusia, an architecture for the optimization of a machining process in real time through a rule-based expert system has been developed. The architecture consists of a sensor data acquisition and processing engine (SATD) and a rule-based expert system (SE) which communicates with the SATD. The SE has been designed as an inference engine with an algorithm for effective action, using a modus ponens rule model of goal-oriented rules. The pilot test demonstrated that it is possible to govern the machining process in real time based on rules contained in a SE. The tests have been done with approximated rules. Future work includes an exhaustive collection of data with different tool materials and geometries in a database to extract more precise rules.
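A forward-chaining modus ponens loop of the kind such an expert system relies on fits in a few lines. The sensor facts and machining rules below are hypothetical placeholders for illustration, not the actual SENSOR-IA rule base:

```python
# Each rule is (set of antecedent facts, consequent fact/action). Modus ponens
# fires a rule when all its antecedents hold, adds the consequent to the fact
# base, and the loop repeats until no rule can fire (a fixpoint).
RULES = [
    ({"vibration_high", "temperature_high"}, "reduce_feed_rate"),
    ({"tool_wear_high"}, "schedule_tool_change"),
    ({"reduce_feed_rate"}, "notify_operator"),
]

def infer(facts, rules):
    facts = set(facts)
    fired = True
    while fired:
        fired = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                fired = True
    return facts

result = infer({"vibration_high", "temperature_high"}, RULES)
print(sorted(result))
# → ['notify_operator', 'reduce_feed_rate', 'temperature_high', 'vibration_high']
```

Note the chaining: the derived fact `reduce_feed_rate` in turn triggers `notify_operator`, while `schedule_tool_change` never fires because its antecedent is absent.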

  9. Architectural geometry

    NARCIS (Netherlands)

    Pottmann, Helmut; Eigensatz, Michael; Vaxman, A.; Wallner, Johannes

    2015-01-01

    Around 2005 it became apparent in the geometry processing community that freeform architecture contains many problems of a geometric nature to be solved, and many opportunities for optimization which however require geometric understanding. This area of research, which has been called architectural

  11. A design approach for integrating thermoelectric devices using topology optimization

    DEFF Research Database (Denmark)

    Soprani, Stefano; Haertel, Jan Hendrik Klaas; Lazarov, Boyan Stefanov;

    2016-01-01

…to operate more efficiently. This work proposes and experimentally demonstrates a topology optimization approach as a design tool for efficient integration of thermoelectric modules into systems with specific design constraints. The approach allows thermal layout optimization of thermoelectric systems. … The design method incorporates temperature dependent properties of the thermoelectric device and other materials. The 3D topology optimization model developed in this work was used to design a thermoelectric system, complete with insulation and heat sink, that was produced and tested. Good agreement between experimental results and model forecasts was obtained and the system was able to maintain the load at more than 33 K below the oil well temperature. Results of this study support topology optimization as a powerful design tool for thermal design of thermoelectric systems.

  12. OPTIMIZING LOCALIZATION ROUTE USING PARTICLE SWARM-A GENETIC APPROACH

    Directory of Open Access Journals (Sweden)

    L. Lakshmanan

    2014-01-01

Full Text Available One of the key problems in wireless sensor networks is finding optimal algorithms for sending packets from a source node to a destination node. Several algorithms exist in the literature; some play a vital role, while others do not. Since WSNs focus on low power consumption during packet transmission and reception, we adopt a hybrid method merging a particle swarm based algorithm with a genetic approach. Initially we order the nodes based on their energy criterion and then focus on the node path; this can be done using a proactive route algorithm for finding the optimal path between source-destination (S-D) nodes. Fast processing and pre-traversal can be done using a selective flooding approach, and the results are refined genetically. We have improved our results with high accuracy and optimality in rendering routes.

  13. An optimal control approach to probabilistic Boolean networks

    Science.gov (United States)

    Liu, Qiuli

    2012-12-01

    External control of some genes in a genetic regulatory network is useful for avoiding undesirable states associated with some diseases. For this purpose, a number of stochastic optimal control approaches have been proposed. Probabilistic Boolean networks (PBNs) as powerful tools for modeling gene regulatory systems have attracted considerable attention in systems biology. In this paper, we deal with a problem of optimal intervention in a PBN with the help of the theory of discrete time Markov decision process. Specifically, we first formulate a control model for a PBN as a first passage model for discrete time Markov decision processes and then find, using a value iteration algorithm, optimal effective treatments with the minimal expected first passage time over the space of all possible treatments. In order to demonstrate the feasibility of our approach, an example is also displayed.
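The value iteration step for the minimal expected first passage time can be illustrated on a toy controlled Markov chain (the states, actions and transition probabilities below are invented for illustration, not a real PBN): V(s) = min_a [1 + Σ_{s'} P(s' | s, a) V(s')], with V fixed at 0 on the desired target state.

```python
# P[state][action] = probability distribution over next states 0..2.
# State 2 is the desired absorbing target, so V[2] = 0 throughout.
P = {
    0: {"a": [0.5, 0.5, 0.0], "b": [0.1, 0.6, 0.3]},
    1: {"a": [0.3, 0.4, 0.3], "b": [0.0, 0.8, 0.2]},
}
TARGET = 2

def value_iteration(P, target, iters=500):
    V = {s: 0.0 for s in P}
    V[target] = 0.0
    for _ in range(iters):
        for s in P:
            # Bellman update for expected first passage time: each step costs 1.
            V[s] = min(1.0 + sum(p * V[s2] for s2, p in enumerate(P[s][a]))
                       for a in P[s])
    # Greedy policy: in each state, pick the action attaining the minimum.
    policy = {s: min(P[s], key=lambda a: sum(p * V[s2] for s2, p in enumerate(P[s][a])))
              for s in P}
    return V, policy

V, policy = value_iteration(P, TARGET)
print(V, policy)
```

Solving the two Bellman equations by hand for this toy chain gives V[0] = V[1] = 10/3 with the policy "b" in state 0 and "a" in state 1, which the iteration reproduces.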

  14. A performance optimized architecture of deblocking filter for H.264/AVC

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

The in-loop deblocking filter is one of the complex parts of H.264/AVC. It has such a large amount of computation that almost all the pixels in all the frames are involved in the worst case. In this paper, a fast deblocking filter architecture is proposed, which can effectively reduce the processing time. In the proposed architecture, two 1-D filters are introduced so that the vertical filtering and the horizontal filtering can be performed at the same time; only 120 cycles are needed for a macroblock. Our architecture is also memory efficient: only one 4×4-pixel register, one 4×4 transpose array and one 16×32-bit two-port SRAM are used as buffers in the filtering process. The simulation and synthesis results show that, with almost the same or even smaller area than some previous 1-D filter based architectures, the proposed one can save more than 40% of the processing time. The architecture is suitable for real-time applications and can easily meet the requirement of processing real-time 1080HD video (high definition format, 1 920×1 088@30 fps) at 100 MHz.

  15. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha

    2013-01-01

This chapter is devoted to the study of an extension of the dynamic programming approach that allows sequential optimization of exact decision rules relative to length and coverage. It also contains results of experiments with decision tables from the UCI Machine Learning Repository. © Springer-Verlag Berlin Heidelberg 2013.

  16. MVMO-based approach for optimal placement and tuning of ...

    African Journals Online (AJOL)

    DR OKE

    This paper introduces an approach based on the Swarm Variant of the ... comprehensive learning particle swarm optimization (CLPSO), genetic ... DOI: http://dx.doi.org/10.4314/ijest.v7i3.12S ..... machine power systems: a comparative study.

  17. The optimality of potential rescaling approaches in land data assimilation

    Science.gov (United States)

    It is well-known that systematic differences exist between modeled and observed realizations of hydrological variables like soil moisture. Prior to data assimilation, these differences must be removed in order to obtain an optimal analysis. A number of rescaling approaches have been proposed for rem...

  18. Discuss Optimal Approaches to Learning Strategy Instruction for EFL Learners

    Institute of Scientific and Technical Information of China (English)

    邢菊如

    2009-01-01

Numerous research studies reveal that learning strategies play an important role in language learning processes. This paper explores whether, as English teachers, we can improve students' language proficiency by giving them optimal learning strategy instruction, and which approaches are most effective and efficient.

  19. Stochastic Approaches to Interactive Multi-Criteria Optimization Problems

    OpenAIRE

    1986-01-01

    A stochastic approach to the development of interactive algorithms for multicriteria optimization is discussed in this paper. These algorithms are based on the idea of a random search and the use of a decision-maker who can compare any two decisions. The questions of both theoretical analysis (proof of convergence, investigation of stability) and practical implementation of these algorithms are discussed.

  20. EASEE: an open architecture approach for modeling battlespace signal and sensor phenomenology

    Science.gov (United States)

    Waldrop, Lauren E.; Wilson, D. Keith; Ekegren, Michael T.; Borden, Christian T.

    2017-04-01

Open architecture in the context of defense applications encourages collaboration across government agencies and academia. This paper describes a success story in the implementation of an open architecture framework that fosters transparency and modularity in the context of Environmental Awareness for Sensor and Emitter Employment (EASEE), a complex physics-based software package for modeling the effects of terrain and atmospheric conditions on signal propagation and sensor performance. Among the highlighted features in this paper are: (1) a code refactorization to separate sensitive parts of EASEE, thus allowing collaborators the opportunity to view and interact with non-sensitive parts of the EASEE framework with the end goal of supporting collaborative innovation, (2) a data exchange and validation effort to enable the dynamic addition of signatures within EASEE, thus supporting a modular notion that components can be easily added or removed from the software without requiring recompilation by developers, and (3) a flexible and extensible XML interface, which aids in decoupling graphical user interfaces from EASEE's calculation engine, and thus encourages adaptability to many different defense applications. In addition to the outlined points above, this paper also addresses EASEE's ability to interface with proprietary systems such as ArcGIS. A specific use case regarding the implementation of an ArcGIS toolbar that leverages EASEE's XML interface and enables users to set up an EASEE-compliant configuration for probability of detection or optimal sensor placement calculations in various modalities is discussed as well.

  1. RFID-WSN integrated architecture for energy and delay- aware routing a simulation approach

    CERN Document Server

    Ahmed, Jameel; Tayyab, Muhammad; Nawaz, Menaa

    2015-01-01

The book identifies the performance challenges concerning Wireless Sensor Networks (WSN) and Radio Frequency Identification (RFID) and analyzes their impact on the performance of routing protocols. It presents a thorough literature survey to identify the issues affecting routing protocol performance, as well as a mathematical model for calculating the end-to-end delays of the routing protocol ACQUIRE; a comparison of two routing protocols (ACQUIRE and DIRECTED DIFFUSION) is also provided for evaluation purposes. On the basis of the results and literature review, recommendations are made for better selection of protocols regarding the nature of the respective application and related challenges. In addition, this book covers a proposed simulator that integrates both RFID and WSN technologies. Therefore, the manuscript is divided into two major parts: an integrated architecture of smart nodes, and a power-optimized protocol for query and information interchange.

  2. A dynamic object-oriented architecture approach to ecosystem modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Dolph, J. E.; Majerus, K. A.; Sydelko, P. J.; Taxon, T. N.

    1999-04-09

    Modeling and simulation in support of adaptive ecosystem management can be better accomplished through a dynamic, integrated, and flexible approach that incorporates scientific and technological components into a comprehensive ecosystem-modeling framework. The Integrated Dynamic Landscape Analysis and Modeling System (IDLAMS) integrates ecological models and decision support techniques, through a geographic information system (GIS)-based framework. The Strategic Environmental Research and Development Program (SERDP) sponsored the development of IDLAMS. Initially built upon a GIS framework, IDLAMS is migrating to an object-oriented (OO) architectural framework. An object-oriented architecture is more flexible and modular. It allows disparate applications and dynamic models to be integrated in a manner that minimizes (or eliminates) the need to rework or recreate the system as new models are added to the suite. In addition, an object-oriented design makes it easier to provide run-time feedback among models, thereby making it a more dynamic tool for exploring and providing insight into the interactions among ecosystem processes. Finally, an object-oriented design encourages the reuse of existing technology because OO-IDLAMS is able to integrate disparate models, databases, or applications executed in their native languages. Reuse is also accomplished through a structured approach to building a consistent and reusable object library. This reusability can substantially reduce the time and effort needed to develop future integrated ecosystem simulations.

  3. Impact of contour on aesthetic judgments and approach-avoidance decisions in architecture.

    Science.gov (United States)

    Vartanian, Oshin; Navarrete, Gorka; Chatterjee, Anjan; Fich, Lars Brorson; Leder, Helmut; Modroño, Cristián; Nadal, Marcos; Rostrup, Nicolai; Skov, Martin

    2013-06-18

On average, we urban dwellers spend about 90% of our time indoors, and share the intuition that the physical features of the places we live and work in influence how we feel and act. However, there is surprisingly little research on how architecture impacts behavior, much less on how it influences brain function. To begin closing this gap, we conducted a functional magnetic resonance imaging study to examine how systematic variation in contour impacts aesthetic judgments and approach-avoidance decisions, outcome measures of interest to both architects and users of spaces alike. As predicted, participants were more likely to judge spaces as beautiful if they were curvilinear than rectilinear. Neuroanatomically, when contemplating beauty, curvilinear contour activated the anterior cingulate cortex exclusively, a region strongly responsive to the reward properties and emotional salience of objects. Complementing this finding, pleasantness (the valence dimension of the affect circumplex) accounted for nearly 60% of the variance in beauty ratings. Furthermore, activation in a distributed brain network known to underlie the aesthetic evaluation of different types of visual stimuli covaried with beauty ratings. In contrast, contour did not affect approach-avoidance decisions, although curvilinear spaces activated the visual cortex. The results suggest that the well-established effect of contour on aesthetic preference can be extended to architecture. Furthermore, the combination of our behavioral and neural evidence underscores the role of emotion in our preference for curvilinear objects in this domain.

  4. Designing area optimized application-specific network-on-chip architectures while providing hard QoS guarantees.

    Directory of Open Access Journals (Sweden)

    Sajid Gul Khawaja

Full Text Available With the increase of transistor density, the popularity of System on Chip (SoC) designs has increased exponentially. As a communication module for SoC, the Network on Chip (NoC) framework has been adopted as its backbone. In this paper, we propose a methodology for designing area-optimized application-specific NoCs while providing hard Quality of Service (QoS) guarantees for real-time flows. The novelty of the proposed system lies in the derivation of a Mixed Integer Linear Programming model which is then used to generate a resource-optimal Network on Chip (NoC) topology and architecture while considering traffic and QoS requirements. We also present the micro-architectural design features used for enabling traffic and latency guarantees and discuss how the solution adapts to dynamic variations in the application traffic. The paper highlights the effectiveness of the proposed method by generating resource-efficient NoC solutions for both industrial and benchmark applications. The area-optimized results are generated in a few seconds by the proposed technique, without resorting to heuristics, even for an application with 48 traffic flows.

  5. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning.

    Science.gov (United States)

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-09-01

In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (PARETO), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. PARETO is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. PARETO results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that PARETO shows promise in optimizing the number
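The Pareto-nondominance test that such a multiobjective GA relies on is compact. The sketch below filters a set of hypothetical (OAR dose, PTV nonuniformity) objective pairs, both minimized; the numbers are illustrative only, not plan data from the paper:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better
    in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Keep only the points no other point dominates: the Pareto front."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

plans = [(1.0, 5.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0), (2.5, 2.5)]
print(nondominated(plans))  # → [(1.0, 5.0), (2.0, 2.0), (3.0, 1.0)]
```

The surviving points are exactly the trade-off set the abstract describes: improving one objective from any of them necessarily worsens the other.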

  6. Global Optimization Approach to Non-convex Problems

    Institute of Scientific and Technical Information of China (English)

    LU Zi-fang; ZHENG Hui-li

    2004-01-01

A new approach to find the global optimal solution of special non-convex problems is proposed in this paper. The non-convex objective problem is first decomposed into two convex sub-problems. Then a generalized gradient is introduced to determine a search direction and an evolution equation is built to obtain a global minimum point. With this approach, the search process can escape some local minima and locate a global minimum point. Two numerical examples are given to prove the approach to be effective.

  7. Stochastic optimization in insurance a dynamic programming approach

    CERN Document Server

    Azcue, Pablo

    2014-01-01

    The main purpose of the book is to show how a viscosity approach can be used to tackle control problems in insurance. The problems covered are the maximization of survival probability as well as the maximization of dividends in the classical collective risk model. The authors consider the possibility of controlling the risk process by reinsurance as well as by investments. They show that optimal value functions are characterized as either the unique or the smallest viscosity solution of the associated Hamilton-Jacobi-Bellman equation; they also study the structure of the optimal strategies and show how to find them. The viscosity approach was widely used in control problems related to mathematical finance but until quite recently it was not used to solve control problems related to actuarial mathematical science. This book is designed to familiarize the reader on how to use this approach. The intended audience is graduate students as well as researchers in this area.

  8. ANFIS Approach for Optimal Selection of Reusable Components

    Directory of Open Access Journals (Sweden)

    K.S. Ravichandran

    2012-12-01

Full Text Available In a growing world, the development of modern software systems requires large-scale manpower, high development cost, long completion times and a high risk to maintaining software quality. The Component-Based Software Development (CBSD) approach is based on the concept of developing modern software systems by selecting the appropriate reusable components or COTS (Commercial Off-The-Shelf) components and then assembling them with a well-defined software architecture. The proper selection of COTS components really reduces the manpower, development cost, product completion time, risk and maintenance cost, and also yields high quality software products. In this paper, we develop an automated process of component selection by using an Adaptive Neuro-Fuzzy Inference System (ANFIS) based technique, using 14 reusable-component parameters for the first time in this field. Again, to increase the accuracy of the model, a Fuzzy-Weighted-Relational-Coefficient (FWRC) matrix is derived between the components and CBS development with the help of 14 component parameters, namely Reliability, Stability, Portability, Consistency, Completeness, Interface & Structural Complexity, Understandability of Software Documents, Security, Usability, Accuracy, Compatibility, Performance, Serviceability and Customizability. Recent literature studies reveal that almost all researchers have designed a general fuzzy-design rule for the component selection problem across all kinds of software architecture, but this leads to a poor selection of components; this paper suggests adoption of a specific fuzzy-design rule for every software architecture application for the selection of reusable components. Finally, it is concluded that the selection of reusable components through ANFIS performs better than the other models discussed so far.

  9. An efficient approach for reliability-based topology optimization

    Science.gov (United States)

    Kanakasabai, Pugazhendhi; Dhingra, Anoop K.

    2016-01-01

    This article presents an efficient approach for reliability-based topology optimization (RBTO) in which the computational effort involved in solving the RBTO problem is equivalent to that of solving a deterministic topology optimization (DTO) problem. The methodology presented is built upon the bidirectional evolutionary structural optimization (BESO) method used for solving the deterministic optimization problem. The proposed method is suitable for linear elastic problems with independent and normally distributed loads, subjected to deflection and reliability constraints. The linear relationship between the deflection and stiffness matrices along with the principle of superposition are exploited to handle reliability constraints to develop an efficient algorithm for solving RBTO problems. Four example problems with various random variables and single or multiple applied loads are presented to demonstrate the applicability of the proposed approach in solving RBTO problems. The major contribution of this article comes from the improved efficiency of the proposed algorithm when measured in terms of the computational effort involved in the finite element analysis runs required to compute the optimum solution. For the examples presented with a single applied load, it is shown that the CPU time required in computing the optimum solution for the RBTO problem is 15-30% less than the time required to solve the DTO problems. The improved computational efficiency allows for incorporation of reliability considerations in topology optimization without an increase in the computational time needed to solve the DTO problem.

  10. Effects of optimism on creativity under approach and avoidance motivation

    Directory of Open Access Journals (Sweden)

    Tamar eIcekson

    2014-02-01

Full Text Available Focusing on avoiding failure or negative outcomes (avoidance motivation) can undermine creativity, due to cognitive (e.g., threat appraisals), affective (e.g., anxiety), and volitional processes (e.g., low intrinsic motivation). This can be problematic for people who are avoidance motivated by nature and in situations in which threats or potential losses are salient. Here, we review the relation between avoidance motivation and creativity, and the processes underlying this relation. We highlight the role of optimism as a potential remedy for the creativity-undermining effects of avoidance motivation, due to its impact on the underlying processes. Optimism, expecting to succeed in achieving success or avoiding failure, may reduce negative effects of avoidance motivation, as it eases threat appraisals, anxiety, and disengagement, barriers playing a key role in undermining creativity. People experience these barriers more under avoidance than under approach motivation, and beneficial effects of optimism should therefore be more pronounced under avoidance than approach motivation. Moreover, due to their eagerness, approach motivated people may even be more prone to unrealistic over-optimism and its negative consequences.

  11. Effects of optimism on creativity under approach and avoidance motivation.

    Science.gov (United States)

    Icekson, Tamar; Roskes, Marieke; Moran, Simone

    2014-01-01

    Focusing on avoiding failure or negative outcomes (avoidance motivation) can undermine creativity, due to cognitive (e.g., threat appraisals), affective (e.g., anxiety), and volitional processes (e.g., low intrinsic motivation). This can be problematic for people who are avoidance motivated by nature and in situations in which threats or potential losses are salient. Here, we review the relation between avoidance motivation and creativity, and the processes underlying this relation. We highlight the role of optimism as a potential remedy for the creativity undermining effects of avoidance motivation, due to its impact on the underlying processes. Optimism, expecting to succeed in achieving success or avoiding failure, may reduce negative effects of avoidance motivation, as it eases threat appraisals, anxiety, and disengagement-barriers playing a key role in undermining creativity. People experience these barriers more under avoidance than under approach motivation, and beneficial effects of optimism should therefore be more pronounced under avoidance than approach motivation. Moreover, due to their eagerness, approach motivated people may even be more prone to unrealistic over-optimism and its negative consequences.

  12. Two-Layer Linear MPC Approach Aimed at Walking Beam Billets Reheating Furnace Optimization

    Directory of Open Access Journals (Sweden)

    Silvia Maria Zanoli

    2017-01-01

    Full Text Available In this paper, the problem of the control and optimization of a walking beam billets reheating furnace located in an Italian steel plant is analyzed. An ad hoc Advanced Process Control framework has been developed, based on a two-layer linear Model Predictive Control architecture. This control block optimizes the steady and transient states of the considered process. Two main problems have been addressed. First, in order to manage all process conditions, a tailored module defines the set of process variables to be included in the control problem. In particular, a unified approach for the selection of the control inputs to be used for control objectives related to the process outputs is guaranteed. The impact of the proposed method on the controller formulation is also detailed. Second, an innovative mathematical approach for handling stoichiometric ratio constraints has been proposed, together with their introduction into the controller optimization problems. The designed control system has been installed on a real plant, replacing the operators’ mental models in the conduct of the local PID controllers. Two years after the first startup, a strong energy efficiency improvement has been observed.
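The two-layer idea (a steady-state target layer feeding a dynamic regulation layer) can be sketched on a toy scalar model. The dynamics, setpoint, cost weights and input grid below are invented for illustration and are unrelated to the actual furnace model in the paper:

```python
# Two-layer linear MPC sketch on a scalar model x[k+1] = a*x[k] + b*u[k].
# Upper layer: steady-state target calculation; lower layer: finite-horizon
# regulator found by brute-force search over a coarse input grid.

def steady_state_target(a, b, setpoint):
    # At steady state x = a*x + b*u  =>  u_ss = (1 - a) * x_ss / b
    return setpoint, (1.0 - a) * setpoint / b

def mpc_step(a, b, x, x_ss, u_ss, horizon=5, grid=None):
    # Choose a constant input over the horizon minimising tracking cost.
    if grid is None:
        grid = [u_ss + d for d in (-1.0, -0.5, -0.1, 0.0, 0.1, 0.5, 1.0)]
    best_u, best_cost = None, float("inf")
    for u in grid:
        xk, cost = x, 0.0
        for _ in range(horizon):
            xk = a * xk + b * u
            cost += (xk - x_ss) ** 2 + 0.01 * (u - u_ss) ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

a, b = 0.9, 0.5
x_ss, u_ss = steady_state_target(a, b, setpoint=100.0)
x = 20.0
for _ in range(60):                 # closed loop: apply first move, repeat
    x = a * x + b * mpc_step(a, b, x, x_ss, u_ss)
print(round(x, 1))
```

A real implementation would solve a constrained quadratic program at each layer instead of scanning a grid; the sketch only shows how the upper layer's targets parameterize the lower layer's cost.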

  13. Optimized Structure of the Traffic Flow Forecasting Model With a Deep Learning Approach.

    Science.gov (United States)

    Yang, Hao-Fan; Dillon, Tharam S; Chen, Yi-Ping Phoebe

    2016-07-20

    Forecasting accuracy is an important issue for successful intelligent traffic management, especially in the domain of traffic efficiency and congestion reduction. The dawning of the big data era brings opportunities to greatly improve prediction accuracy. In this paper, we propose a novel model, stacked autoencoder Levenberg-Marquardt model, which is a type of deep architecture of neural network approach aiming to improve forecasting accuracy. The proposed model is designed using the Taguchi method to develop an optimized structure and to learn traffic flow features through layer-by-layer feature granulation with a greedy layerwise unsupervised learning algorithm. It is applied to real-world data collected from the M6 freeway in the U.K. and is compared with three existing traffic predictors. To the best of our knowledge, this is the first time that an optimized structure of the traffic flow forecasting model with a deep learning approach is presented. The evaluation results demonstrate that the proposed model with an optimized structure has superior performance in traffic flow forecasting.
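The Taguchi step can be illustrated with a toy screening: instead of testing all 2^3 = 8 combinations of three two-level structural factors, an L4 orthogonal array covers every pairwise level setting in 4 runs. The factor names, levels and the scoring stand-in below are invented, and picking the single best run is a simplification of the marginal-effect analysis used in full Taguchi design:

```python
# Taguchi-style structure screening with an L4(2^3) orthogonal array.
L4 = [  # rows: runs; columns: level index (0 or 1) of each factor
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]
levels = {
    "layers": (2, 4),
    "units": (100, 400),
    "epochs": (10, 50),
}

def score(cfg):  # stand-in for the validation error of a trained model
    return (abs(cfg["layers"] - 3)
            + abs(cfg["units"] - 300) / 100
            + abs(cfg["epochs"] - 40) / 25)

names = list(levels)
runs = []
for row in L4:
    cfg = {n: levels[n][lv] for n, lv in zip(names, row)}
    runs.append((score(cfg), cfg))

best = min(runs, key=lambda r: r[0])[1]
print(best)
```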

  14. Optimizing the architecture of SFQ-RDP (Single Flux Quantum- Reconfigurable Datapath)

    OpenAIRE

    Mehdipour, Farhad; Honda, Hiroaki; Kataoka, Hiroshi; Inoue, Koji; Murakami, Kazuaki

    2009-01-01

    A large-scale reconfigurable data-path (LSRDP) processor based on single-flux-quantum circuits is designed to overcome the issues originating from CMOS technology. The LSRDP micro-architecture design procedure and its outcome are presented in this paper.

  15. Optimal control of underactuated mechanical systems: A geometric approach

    Science.gov (United States)

    Colombo, Leonardo; Martín De Diego, David; Zuccalli, Marcela

    2010-08-01

    In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.

  16. Optimal Control of Underactuated Mechanical Systems: A Geometric Approach

    CERN Document Server

    Colombo, L; Zuccalli, M

    2009-01-01

    In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.

  17. Dynamic Query Optimization Approach for Semantic Database Grid

    Institute of Scientific and Technical Information of China (English)

    Xiao-Qing Zheng; Hua-Jun Chen; Zhao-Hui Wu; Yu-Xin Mao

    2006-01-01

    Fundamentally, a semantic grid database is about bringing globally distributed databases together in order to coordinate resource sharing and problem solving in which information is given well-defined meaning; DartGrid Ⅱ is an implemented database grid system whose goal is to provide a semantic solution for integrating database resources on the Web. Although many algorithms have been proposed for optimizing query processing, in order to minimize costs and/or response time associated with obtaining the answer to a query in a distributed database system, the database grid query optimization problem is fundamentally different from traditional distributed query optimization. These differences are shown to be consequences of the autonomy and heterogeneity of database nodes in a database grid. Therefore, more challenges arise for query optimization in a database grid than in a traditional distributed database. Following this observation, the design of a query optimizer in DartGrid Ⅱ is presented, and a heuristic, dynamic and parallel query optimization approach to processing queries in a database grid is proposed. A set of semantic tools supporting relational database integration and semantic-based information browsing has also been implemented to realize the above vision.
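Heuristic query optimization in general can be sketched with a greedy join-ordering rule: repeatedly join the pair of relations with the smallest estimated result size. The cardinalities and selectivities below are invented, and this is a generic textbook heuristic, not DartGrid's actual optimizer:

```python
# Greedy join-order heuristic over invented statistics.
cards = {"A": 1000, "B": 50, "C": 200}
sel = {frozenset("AB"): 0.01, frozenset("BC"): 0.05, frozenset("AC"): 0.001}

def est(left, right):
    # estimated join size = |L| * |R| * product of predicate selectivities
    s = 1.0
    for l in left[1]:
        for r in right[1]:
            s *= sel.get(frozenset((l, r)), 1.0)
    return left[0] * right[0] * s

# each intermediate relation: (estimated size, set of base relation names)
rels = [(cards[n], {n}) for n in cards]
order = []
while len(rels) > 1:
    pairs = [(est(a, b), a, b) for i, a in enumerate(rels) for b in rels[i + 1:]]
    size, a, b = min(pairs, key=lambda t: t[0])
    rels.remove(a); rels.remove(b)
    rels.append((size, a[1] | b[1]))
    order.append((sorted(a[1]), sorted(b[1]), size))

print(order)
```

With these numbers the heuristic joins A with C first (estimated 200 rows) and only then brings in B, ending with an estimated 5-row result.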

  18. Robust and optimal control a two-port framework approach

    CERN Document Server

    Tsai, Mi-Ching

    2014-01-01

    A Two-port Framework for Robust and Optimal Control introduces an alternative approach to robust and optimal controller synthesis procedures for linear, time-invariant systems, based on the two-port system widespread in electrical engineering. The novel use of the two-port system in this context allows straightforward engineering-oriented solution-finding procedures to be developed, requiring no mathematics beyond linear algebra. A chain-scattering description provides a unified framework for constructing the stabilizing controller set and for synthesizing H2 optimal and H∞ sub-optimal controllers. Simple yet illustrative examples explain each step. A Two-port Framework for Robust and Optimal Control features: · a hands-on, tutorial-style presentation giving the reader the opportunity to repeat the designs presented and easily to modify them for their own programs; · an abundance of examples illustrating the most important steps in robust and optimal design; and · ...

  19. A Hybrid Optimization Approach for SRM FINOCYL Grain Design

    Institute of Scientific and Technical Information of China (English)

    Khurram Nisar; Liang Guozhu; Qasim Zeeshan

    2008-01-01

    This article presents a method to design and optimize 3D FINOCYL grain (FCG) configurations for solid rocket motors (SRMs). The design process of an FCG configuration involves mathematical modeling of the geometry and parametric evaluation of the various independent geometric variables that define the complex configuration. Virtually infinite combinations of these variables will satisfy the requirements of propellant mass, thrust, and burning time, in addition to satisfying basic needs for volumetric loading fraction and web fraction. To ensure that the best possible design is acquired, a sound design and optimization approach is essential. To meet this need, a method is introduced to achieve the finest possible performance. A series of computations is carried out to formulate the grain geometry in terms of various combinations of key shapes, including ellipsoid, cone, cylinder, sphere, torus, and inclined plane. A hybrid optimization (HO) technique is established by combining a genetic algorithm (GA) for global solution convergence with sequential quadratic programming (SQP) for further local convergence of the solution, thus achieving the final optimal design. A comparison of the optimal design results derived from the SQP, GA, and HO algorithms is presented. By using the HO technique, the propellant mass is minimized within the constrained burning time, nozzle and propellant parameters, and a fixed length and outer diameter of the grain.
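The hybrid pattern (global evolutionary search followed by local refinement) can be sketched on a toy two-variable objective. The function, GA settings, and the coordinate-descent stage (standing in for SQP) are all illustrative assumptions, not the grain-design model:

```python
import math
import random

# Hybrid optimization sketch: a small GA locates a promising region of a
# multimodal function, then greedy coordinate descent refines the result.
random.seed(0)

def f(x, y):  # smooth bowl centred at (3, 3) with multimodal ripples
    return (x - 3) ** 2 + (y - 3) ** 2 + math.sin(3 * x) ** 2 + math.sin(3 * y) ** 2

def ga(pop_size=30, gens=40):
    pop = [(random.uniform(-5, 10), random.uniform(-5, 10)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: f(*p))
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cx = (a[0] + b[0]) / 2 + random.gauss(0, 0.3)   # crossover + mutation
            cy = (a[1] + b[1]) / 2 + random.gauss(0, 0.3)
            children.append((cx, cy))
        pop = elite + children
    return min(pop, key=lambda p: f(*p))

def local_refine(x, y, step=0.1, iters=200):
    for _ in range(iters):  # accept only improving moves; shrink step when stuck
        moved = False
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            if f(x + dx, y + dy) < f(x, y):
                x, y, moved = x + dx, y + dy, True
        if not moved:
            step /= 2
    return x, y

gx, gy = ga()
x, y = local_refine(gx, gy)
print(round(f(x, y), 3))
```

The refinement stage can only improve on the GA result, which is the point of the hybrid: the global stage supplies a good starting point, the local stage polishes it.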

  20. A hybrid optimization approach in non-isothermal glass molding

    Science.gov (United States)

    Vu, Anh-Tuan; Kreilkamp, Holger; Krishnamoorthi, Bharathwaj Janaki; Dambon, Olaf; Klocke, Fritz

    2016-10-01

    Intensively growing demands for complex yet low-cost precision glass optics from today's photonic market motivate the development of an efficient and economically viable manufacturing technology for complex-shaped optics. Against the state-of-the-art replication-based methods, Non-isothermal Glass Molding turns out to be a promising innovative technology for cost-efficient manufacturing because of increased mold lifetime, lower energy consumption and high throughput from a fast process chain. However, the selection of parameters for the molding process usually requires a huge effort to satisfy the precise requirements of the molded optics and to avoid negative effects on the expensive tool molds. Therefore, to reduce experimental work at the beginning, a coupled CFD/FEM numerical model was developed to study the molding process. This research focuses on the development of a hybrid optimization approach in non-isothermal glass molding. To this end, an optimal configuration with two optimization stages for multiple quality characteristics of the glass optics is addressed. The hybrid Back-Propagation Neural Network (BPNN)-Genetic Algorithm (GA) is first applied to determine the optimal process parameters and the stability of the process. The second stage continues with the optimization of the glass preform using those optimal parameters to guarantee the accuracy of the molded optics. Experiments are performed to evaluate the effectiveness and feasibility of the model for process development in non-isothermal glass molding.

  1. Hybrid swarm intelligence optimization approach for optimal data storage position identification in wireless sensor networks.

    Science.gov (United States)

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate with regard to data storage and its growth has made storage a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, the base stations, and the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. Earlier works did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find suitable positions for storage nodes while minimizing the total energy cost of data transmission. Clustering-based distributed data storage is utilized, solving the clustering problem with the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based optimal data storage (ODS) strategy is more effective than earlier approaches.
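The particle swarm step can be sketched for a single storage node: minimise a data-rate-weighted sum of squared distances to producers and consumers. The topology and rates below are invented, and the paper's full method adds fuzzy C-means clustering and network simulation on top of this core search:

```python
import random

# PSO sketch for placing one storage node on a 10x10 field.
random.seed(1)

producers = [((1, 8), 3.0), ((2, 2), 1.0), ((8, 3), 2.0)]   # (position, data rate)
consumers = [((9, 9), 4.0)]

def cost(p):  # energy proxy: rate-weighted squared distances
    return sum(r * ((p[0] - x) ** 2 + (p[1] - y) ** 2)
               for (x, y), r in producers + consumers)

n, iters = 20, 100
pos = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pos, key=cost)[:]

for _ in range(iters):
    for i in range(n):
        for d in range(2):
            vel[i][d] = (0.7 * vel[i][d]                                  # inertia
                         + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i][:]
        if cost(pos[i]) < cost(gbest):
            gbest = pos[i][:]

print([round(c, 2) for c in gbest])
```

For this squared-distance cost the true optimum is the rate-weighted centroid, here (5.7, 6.8), which the swarm should approach; the point of PSO is that it needs no such closed form when the cost is more realistic.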

  2. Hybrid Swarm Intelligence Optimization Approach for Optimal Data Storage Position Identification in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Ranganathan Mohanasundaram

    2015-01-01

    Full Text Available The current high-profile debate with regard to data storage and its growth has made storage a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, the base stations, and the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. Earlier works did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find suitable positions for storage nodes while minimizing the total energy cost of data transmission. Clustering-based distributed data storage is utilized, solving the clustering problem with the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based optimal data storage (ODS) strategy is more effective than earlier approaches.

  3. Robotic collaborative technology alliance: an open architecture approach to integrated research

    Science.gov (United States)

    Dean, Robert Michael S.; DiBerardino, Charles A.

    2014-06-01

    The Robotics Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities [1]. Research occurs in 5 main Task Areas: Intelligence, Perception, Dexterous Manipulation and Unique Mobility (DMUM), Human Robot Interaction (HRI), and Integrated Research (IR). This last task of Integrated Research is especially critical and challenging. Individual research components can only be fully assessed when integrated onto a robot where they interact with other aspects of the system to create cross-Task capabilities which move beyond the State of the Art. Adding to the complexity, the RCTA is comprised of 12+ independent organizations across the United States. Each has its own constraints due to development environments, ITAR, "lab" vs "real-time" implementations, and legacy software investments from previous and ongoing programs. We have developed three main components to manage the Integration Task. The first is RFrame, a data-centric transport agnostic middleware which unifies the disparate environments, protocols, and data collection mechanisms. Second is the modular Intelligence Architecture built around the Common World Model (CWM). The CWM instantiates a Common Data Model and provides access services. Third is RIVET, an ITAR free Hardware-In-The-Loop simulator based on 3D game technology. RIVET provides each researcher a common test-bed for development prior to integration, and a regression test mechanism. Once components are integrated and verified, they are released back to the consortium to provide the RIVET baseline for further research. This approach allows Integration of new and legacy systems built upon different architectures, by application of Open Architecture principles.

  4. Development of a Multi-Event Trajectory Optimization Tool for Noise-Optimized Approach Route Design

    NARCIS (Netherlands)

    Braakenburg, M.L.; Hartjes, S.; Visser, H.G.; Hebly, S.J.

    2011-01-01

    This paper presents preliminary results from an ongoing research effort towards the development of a multi-event trajectory optimization methodology that allows to synthesize RNAV approach routes that minimize a cumulative measure of noise, taking into account the total noise effect aggregated for a

  5. Efficient and Robust Data Collection Using Compact Micro Hardware, Distributed Bus Architectures and Optimizing Software

    Science.gov (United States)

    Chau, Savio; Vatan, Farrokh; Randolph, Vincent; Baroth, Edmund C.

    2006-01-01

    Future In-Space propulsion systems for exploration programs will invariably require data collection from a large number of sensors. Consider the sensors needed for monitoring several vehicle systems states of health, including the collection of structural health data, over a large area. This would include the fuel tanks, habitat structure, and science containment of systems required for Lunar, Mars, or deep space exploration. Such a system would consist of several hundred or even thousands of sensors. Conventional avionics system design will require these sensors to be connected to a few Remote Health Units (RHU), which are connected to robust, micro flight computers through a serial bus. This results in a large mass of cabling and unacceptable weight. This paper first gives a survey of several techniques that may reduce the cabling mass for sensors. These techniques can be categorized into four classes: power line communication, serial sensor buses, compound serial buses, and wireless network. The power line communication approach uses the power line to carry both power and data, so that the conventional data lines can be eliminated. The serial sensor bus approach reduces most of the cabling by connecting all the sensors with a single (or redundant) serial bus. Many standard buses for industrial control and sensor buses can support several hundreds of nodes, however, have not been space qualified. Conventional avionics serial buses such as the Mil-Std-1553B bus and IEEE 1394a are space qualified but can support only a limited number of nodes. The third approach is to combine avionics buses to increase their addressability. The reliability, EMI/EMC, and flight qualification issues of wireless networks have to be addressed. Several wireless networks such as the IEEE 802.11 and Ultra Wide Band are surveyed in this paper. The placement of sensors can also affect cable mass. Excessive sensors increase the number of cables unnecessarily. 
Insufficient number of sensors

  7. OPTIMIZATION APPROACH FOR HYBRID ELECTRIC VEHICLE POWERTRAIN DESIGN

    Institute of Scientific and Technical Information of China (English)

    Zhu Zhengli; Zhang Jianwu; Yin Chengliang

    2005-01-01

    According to bench test results of fuel economy and engine emissions for the real powertrain system of the EQ7200HEV car, a 3-D performance-map-oriented quasi-linear model is developed for the configuration of powertrain components such as the internal combustion engine, traction electric motor, transmission, main retarder and energy storage unit. A genetic-algorithm-based optimization procedure is proposed and applied to the parametric optimization of the key components, considering the requirements of several driving cycles. Comparison of the numerical results obtained by the genetic algorithm with those of traditional optimization methods shows that the present approach is quite effective and efficient for emission reduction and fuel economy in the design of the hybrid electric car powertrain.

  8. Structural Weight Optimization of Aircraft Wing Component Using FEM Approach.

    Directory of Open Access Journals (Sweden)

    Arockia Ruban M,

    2015-06-01

    Full Text Available One of the main challenges for the civil aviation industry is the reduction of its environmental impact through better fuel efficiency, achieved by structural optimization. Over the past years, improvements in performance and fuel efficiency have been achieved by simplifying the design of structural components and using composite materials to reduce the overall weight of the structure. This paper deals with the weight optimization of a transport aircraft with a low-wing configuration. Linear static and normal mode analyses were carried out using MSC Nastran and MSC Patran under different pressure conditions, and the results were verified with the help of the classical approach. The stress and displacement results were obtained and verified, leading to conclusions about the optimization of the wing structure.

  9. APPROACH ON INTELLIGENT OPTIMIZATION DESIGN BASED ON COMPOUND KNOWLEDGE

    Institute of Scientific and Technical Information of China (English)

    Yao Jianchu; Zhou Ji; Yu Jun

    2003-01-01

    A concept of an intelligent optimal design approach is proposed, organized around a compound knowledge model. The compound knowledge consists of modularized quantitative knowledge, inclusive experience knowledge and case-based sample knowledge. By using this compound knowledge model, the abundant quantitative information of mathematical programming and the symbolic knowledge of artificial intelligence can be united in one model. The intelligent optimal design model based on such compound knowledge, and the decomposition principles automatically generated from it, are also presented. In practice, it has been applied to the production planning, process scheduling and optimization of the production process of a refining and chemical works, and a great profit has been achieved. Notably, the methods and principles are adaptable not only to the continuous process industry but also to discrete manufacturing.

  10. TSP based Evolutionary optimization approach for the Vehicle Routing Problem

    Science.gov (United States)

    Kouki, Zoulel; Chaar, Besma Fayech; Ksouri, Mekki

    2009-03-01

    Vehicle Routing and Flexible Job Shop Scheduling Problems (VRP and FJSSP) are two common hard combinatorial optimization problems that show many similarities at the conceptual level [2, 4]. It has been proved for both problems that solving techniques such as exact methods fail to provide good-quality solutions in a reasonable amount of time when dealing with large-scale instances [1, 5, 14]. To overcome this weakness, we decide in favour of metaheuristics and focus on evolutionary algorithms, which have been successfully used in scheduling problems [1, 5, 9]. In this paper we investigate the common properties of the VRP and the FJSSP in order to provide a new controlled evolutionary approach for CVRP optimization, inspired by the FJSSP evolutionary optimization algorithms introduced in [10].

  11. Art as behaviour--an ethological approach to visual and verbal art, music and architecture.

    Science.gov (United States)

    Sütterlin, Christa; Schiefenhövel, Wulf; Lehmann, Christian; Forster, Johanna; Apfelauer, Gerhard

    2014-01-01

    In recent years, the fine arts, architecture, music and literature have increasingly been examined from the vantage point of human ethology and evolutionary psychology. In 2011 the authors formed the research group 'Ethology of the Arts' concentrating on the evolution and biology of perception and behaviour. These novel approaches aim at a better understanding of the various facets represented by the arts by taking into focus possible phylogenetic adaptations, which have shaped the artistic capacities of our ancestors. Rather than culture specificity, which is stressed e.g. by cultural anthropology and numerous other disciplines, universal human tendencies to perceive, feel, think and behave are postulated. Artistic expressive behaviour is understood as an integral part of the human condition, whether expressed in ritual, visual, verbal or musical art. The Ethology of the Arts-group's research focuses on visual and verbal art, music and built environment/architecture and is designed to contribute to the incipient interdisciplinarity in the field of evolutionary art research.

  12. A global optimization approach to multi-polarity sentiment analysis.

    Directory of Open Access Journals (Sweden)

    Xinmiao Li

    Full Text Available Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with a higher explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement. The improvement of the two-polarity sentiment analysis was the smallest. We conclude that PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid
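The information gain step, IG(f) = H(class) − H(class | f), can be sketched on a toy labelled corpus (the words and labels below are invented; this shows only the feature-scoring stage, not the full PSOGO-Senti pipeline):

```python
import math

# Information gain of binary (present/absent) word features for sentiment labels.
docs = [({"good", "great"}, "pos"), ({"good"}, "pos"),
        ({"bad"}, "neg"), ({"bad", "awful"}, "neg"), ({"good", "bad"}, "neg")]

def entropy(labels):
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs if p > 0)

def info_gain(feature):
    labels = [c for _, c in docs]
    with_f = [c for words, c in docs if feature in words]
    without = [c for words, c in docs if feature not in words]
    h_cond = (len(with_f) * entropy(with_f)
              + len(without) * entropy(without)) / len(docs)
    return entropy(labels) - h_cond

for f in ("good", "bad", "awful"):
    print(f, round(info_gain(f), 3))
```

Here "bad" splits the classes perfectly, so its IG equals the full class entropy; features would then be ranked by this score before training the SVM.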

  13. An optimization approach to kinetic model reduction for combustion chemistry

    CERN Document Server

    Lebiedz, Dirk

    2013-01-01

    Model reduction methods are relevant when the computation time of a full convection-diffusion-reaction simulation based on detailed chemical reaction mechanisms is too large. In this article, we review a model reduction approach based on optimization of trajectories and show its applicability to realistic combustion models. As most model reduction methods, it identifies points on a slow invariant manifold based on time scale separation in the dynamics of the reaction system. The numerical approximation of points on the manifold is achieved by solving a semi-infinite optimization problem, where the dynamics enter the problem as constraints. The proof of existence of a solution for an arbitrarily chosen dimension of the reduced model (slow manifold) is extended to the case of realistic combustion models including thermochemistry by considering the properties of proper maps. The model reduction approach is finally applied to three models based on realistic reaction mechanisms: 1. ozone decomposition as a small t...

  14. Blood platelet production: a novel approach for practical optimization.

    Science.gov (United States)

    van Dijk, Nico; Haijema, René; van der Wal, Jan; Sibinga, Cees Smit

    2009-03-01

    The challenge of production and inventory management for blood platelets (PLTs) is the requirement to meet highly uncertain demands. Shortages are to be minimized, if not to be avoided at all. Overproduction, in turn, leads to high levels of outdating as PLTs have a limited "shelf life." Outdating is to be minimized for ethical and cost reasons. Operations research (OR) methodology was applied to the PLT inventory management problem. The problem can be formulated in a general mathematical form. To solve this problem, a five-step procedure was used. This procedure is based on a combination of two techniques, a mathematical technique called stochastic dynamic programming (SDP) and computer simulation. The approach identified an optimal production policy, leading to the computation of a simple and nearly optimal PLT production "order-up-to" rule. This rule prescribes a fixed order-up-to level for each day of the week. The approach was applied to a test study with actual data for a regional Dutch blood bank. The main finding in the test study was that outdating could be reduced from 15-20 percent to less than 0.1 percent with virtually no shortages. Blood group preferences and extending the shelf life of more than 5 days appeared to be of marginal effect. In this article the worlds of blood management and the mathematical discipline of OR are brought together for the optimization of blood PLT production. This leads to simple nearly optimal blood PLT production policies that are suitable for practical implementation.
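A minimal sketch of an "order-up-to" rule with a 5-day shelf life follows; the weekday targets and demand pattern are invented, not the Dutch blood bank data, and real evaluation would use stochastic demand rather than a fixed pattern:

```python
# Order-up-to platelet policy: each weekday has a fixed target level;
# production tops stock up to it, demand is filled oldest-first (FIFO),
# and units expire after a 5-day shelf life.
SHELF_LIFE = 5
targets = {0: 30, 1: 28, 2: 28, 3: 30, 4: 34, 5: 20, 6: 18}   # Mon..Sun
demand = [22, 25, 20, 27, 30, 15, 12] * 8                      # 8 weeks

stock = []          # remaining shelf life of each unit in inventory
shortage = outdated = produced = 0
for day, d in enumerate(demand):
    order = max(0, targets[day % 7] - len(stock))
    produced += order
    stock += [SHELF_LIFE] * order
    stock.sort()                      # oldest (least remaining life) first
    issued = min(d, len(stock))
    shortage += d - issued
    stock = stock[issued:]            # issue oldest units first
    stock = [life - 1 for life in stock]
    outdated += sum(1 for life in stock if life == 0)
    stock = [life for life in stock if life > 0]

print(shortage, outdated, produced)
```

With this (deterministic) demand the policy settles into a weekly cycle with no shortages and no outdating; the paper's SDP step is what finds targets with that property under uncertain demand.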

  15. A State-Based Modeling Approach for Efficient Performance Evaluation of Embedded System Architectures at Transaction Level

    Directory of Open Access Journals (Sweden)

    Anthony Barreteau

    2012-01-01

    Full Text Available Abstract models are necessary to assist system architects in the evaluation process of hardware/software architectures and to cope with the still increasing complexity of embedded systems. Efficient methods are required to create reliable models of system architectures and to allow early performance evaluation and fast exploration of the design space. In this paper, we present a specific transaction-level modeling approach for performance evaluation of hardware/software architectures. This approach relies on a generic execution model that requires little modeling effort. Created models are used to evaluate, by simulation, the expected processing and memory resources of various architectures. The proposed execution model relies on a specific computation method defined to improve the simulation speed of transaction-level models. The benefits of the proposed approach are highlighted through two case studies. The first case study is a didactic example illustrating the modeling approach. In this example, a simulation speed-up by a factor of 7.62 is achieved by using the proposed computation method. The second case study concerns the analysis of a communication receiver supporting part of the physical layer of the LTE protocol. In this case study, architecture exploration is carried out in order to improve the allocation of processing functions.

  16. An Efficient PageRank Approach for Urban Traffic Optimization

    OpenAIRE

    2012-01-01

    The cities are not static environments. They change constantly. When we talk about traffic in the city, the evolution of traffic lights is a journey from mindless automation to increasingly intelligent, fluid traffic management. In our approach, presented in this paper, a reinforcement-learning mechanism based on a cost function is introduced to determine optimal decisions for each traffic light, based on the solution given by Larry Page for page ranking in the Web environment (Page et al. (1999))...
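
    The page-ranking primitive the paper builds on is a short power iteration. As a hedged illustration, the sketch below computes plain PageRank over a toy junction graph (edges invented), without the reinforcement-learning and cost-function layer the paper adds on top.

```python
def pagerank(adj, damping=0.85, iters=100):
    """Plain power-iteration PageRank over an adjacency list {node: [out-neighbors]}."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in adj.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for w in outs:      # each node splits its rank among out-links
                    new[w] += share
            else:                   # dangling node: spread its rank uniformly
                for w in nodes:
                    new[w] += damping * rank[v] / n
        rank = new
    return rank

# toy junction graph: an edge means traffic flows from one junction to another
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(graph)
```

Junction C, which receives flow from three others, ends up with the highest rank; the total rank stays normalized to 1.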

  17. MULTISCALE OPTIMIZATION OF FLOW DISTRIBUTION BY CONSTRUCTAL APPROACH

    Institute of Scientific and Technical Information of China (English)

    Lingai Luo; Daniel Tondeur

    2005-01-01

    The constructal approach is a recent concept for generating and optimizing multi-scale structures, in particular branching structures connecting a microscopic world to a macroscopic one, from an engineer's point of view. Branching morphologies are found in many types of natural phenomena and may be associated with some kind of optimization, expressing the evolutionary adaptation of natural systems to their environment. In a sense, the constructal approach tries to imitate this morphogenesis while short-cutting the trial-and-error of nature. The basic ideas underlying the constructal concept and methodology are illustrated here by the examples of fluid distribution to a multi-channel reactor, and of the design of a porous material and system for gas adsorption and storage. In usual constructal theory, a tree branching is postulated for the channels, flow-paths or conductors, usually a dichotomic tree (every branch is divided into two "daughters"). The objective function of the optimization is built from the resistances to mass or heat transport, expressed here as "characteristic transport times", and the geometric result is expressed as a shape factor of a domain. The optimized shape expresses the compromise between the mass or heat transport characteristics at adjacent scales. Under suitable assumptions, simple analytical scaling laws are found, which relate the geometric and transport properties of different scales. Some challenging geometric problems may arise when applying the constructal approach to practical situations where strong geometric constraints exist. The search for analytical solutions imposes simplifying assumptions which may be at fault, calling for less constraining approaches, for example making only weak assumptions on the branching structure. Some of these challenges are brought forward in this text.

  18. Computational approaches for microalgal biofuel optimization: a review.

    Science.gov (United States)

    Koussa, Joseph; Chaiboonchoe, Amphun; Salehi-Ashtiani, Kourosh

    2014-01-01

    The increased demand for and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through the use of available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next-generation sequencing and other high-throughput methods has led to a major increase in the availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight their potential utility toward future implementation in algal research.

  19. Computational Approaches for Microalgal Biofuel Optimization: A Review

    Directory of Open Access Journals (Sweden)

    Joseph Koussa

    2014-01-01

    Full Text Available The increased demand for and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through the use of available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next-generation sequencing and other high-throughput methods has led to a major increase in the availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight their potential utility toward future implementation in algal research.

  20. Determining Land System Sustainability through a Land Architecture Approach: Example of Southern Yucatán (Invited)

    Science.gov (United States)

    Turner, B. L., II

    2009-12-01

    Sustainable land systems involve an array of tradeoffs, not only among ecosystem services, but between those services and human outcomes. These tradeoffs are affected by the architecture of the land system—the kind, size, pattern, and distribution of land uses and covers. Working towards a model capable of handling a full array of ecosystem services and human outcomes, the concept of land architecture is illustrated through a simplified land system in the seasonal tropical forests of the southern Yucatán, where sustainability is sought through the competing goals of forest conservation-preservation and agricultural development, both cultivation and ranching. Land architecture and tradeoff impacts are compared between two communities emphasizing, respectively, forest conservation-preservation and agriculture. The role of spatial scale is also illustrated. Vulnerability and resilience assessments of land systems should be enhanced through a land architecture approach.

  1. An urban informatics approach to smart city learning in architecture and urban design education

    Directory of Open Access Journals (Sweden)

    Mirko Guaralda

    2013-08-01

    Full Text Available This study aims to redefine spaces of learning as places of learning through the direct engagement of local communities as a way to examine and learn from real-world issues in the city. This paper exemplifies Smart City Learning, where the key goal is to promote the generation and exchange of urban design ideas for the future development of South Bank, in Brisbane, Australia, informing the creation of new design policies responding to the needs of local citizens. Specific to this project was the implementation of urban informatics techniques and approaches to promote innovative engagement strategies. Architecture and Urban Design students were encouraged to review and appropriate real-time, ubiquitous technology, social media, and mobile devices that were used by urban residents to augment and mediate the physical and digital layers of urban infrastructures. Our experience in this study found that urban informatics provides an innovative opportunity to enrich students' places of learning within the city.

  2. Some approaches for modeling and analysis of a parallel mechanism with stewart platform architecture

    Energy Technology Data Exchange (ETDEWEB)

    V. De Sapio

    1998-05-01

    Parallel mechanisms represent a family of devices based on a closed kinematic architecture. This is in contrast to serial mechanisms, which consist of a chain-like series of joints and links in an open kinematic architecture. The closed architecture of parallel mechanisms offers certain benefits and entails certain disadvantages.

  3. Building constructions: architecture and nature

    Directory of Open Access Journals (Sweden)

    Mayatskaya Irina

    2017-01-01

    Full Text Available The problem of optimization of building structures is considered in architectural bionic modeling on the basis of bionic principles. Reliable and durable constructions can be obtained by studying the structure and the laws of organization of natural objects. Modern architects have created unique buildings using the bionic approach. Properties such as symmetry, asymmetry, self-similarity, and fractality are used in modern architecture. Using the methods of fractal geometry in the design of architectural forms allows finding a variety of constructive solutions.

  4. Toward an Agile Approach to Managing the Effect of Requirements on Software Architecture during Global Software Development

    Directory of Open Access Journals (Sweden)

    Abdulaziz Alsahli

    2016-01-01

    Full Text Available Requirement change management (RCM) is a critical activity during software development because poor RCM results in defects and, ultimately, software failure. To achieve RCM, efficient impact analysis is mandatory. A common repository is a good approach to maintain changed requirements, enabling reuse and reducing effort. Thus, a better approach is needed to tailor knowledge for change management of requirements and architecture during global software development (GSD). The objective of this research is to introduce an approach for handling requirements and architecture changes simultaneously during global software development. The approach makes use of Case-Based Reasoning (CBR) and agile practices. Agile practices make our approach iterative, whereas CBR stores requirements and makes them reusable. Twin Peaks is our base model, meaning that requirements and architecture are handled simultaneously. For this research, grounded theory has been applied; similarly, interviews with domain experts were conducted. Interview and literature transcripts formed the basis of data collection in grounded theory. Physical saturation of the theory has been achieved through a published case study and a developed tool. Expert reviews and statistical analysis have been used for evaluation. The proposed approach resulted in effective change management of requirements and architecture simultaneously during global software development.

  5. A numerical multivariate approach to optimization of photovoltaic solar low energy building designs

    Energy Technology Data Exchange (ETDEWEB)

    Peippo, K.

    1997-12-31

    subjective economic and architectural boundary conditions play a major role in determining the design preferences. In terms of computing time and accuracy, the approach appears applicable to strategic studies, giving first-order illustrative quantitative insight into the combined effect of energy efficiency measures. However, model refinements are necessary, especially in daylighting calculations, for more precise results. Also, if a larger number of variables are to be introduced in the optimizations, more advanced non-linear optimization techniques have to be considered. (orig.) 26 refs.

  6. Self-optimizing approach for automated laser resonator alignment

    Science.gov (United States)

    Brecher, C.; Schmitt, R.; Loosen, P.; Guerrero, V.; Pyschny, N.; Pavim, A.; Gatej, A.

    2012-02-01

    Nowadays, the assembly of laser systems is dominated by manual operations, involving elaborate alignment by means of adjustable mountings. From a competition perspective, the most challenging problem in laser source manufacturing is price pressure, a result of cost competition originating mainly in Asia. From an economical point of view, automated assembly of laser systems offers a better approach to produce more reliable units at lower cost. However, the step from today's manual solutions towards automated assembly requires parallel developments regarding product design, automation equipment, and assembly processes. This paper briefly introduces the idea of self-optimizing technical systems as a new approach towards highly flexible automation. Technically, the work focuses on the precision assembly of laser resonators, one of the final and most crucial assembly steps in terms of beam quality and laser power. The paper presents a new design approach for miniaturized laser systems and new automation concepts for robot-based precision assembly, as well as passive and active alignment methods based on a self-optimizing approach. Very promising results have already been achieved, considerably reducing the duration and complexity of laser resonator assembly. These results as well as future development perspectives are discussed.
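
    Active alignment of the kind described can be sketched as derivative-free hill climbing on a measured figure of merit. The routine below is simple coordinate ascent against a synthetic "power meter"; the peak position and all step parameters are invented, and the authors' actual alignment strategy is not implied.

```python
def align(measure, start, step=0.5, shrink=0.5, tol=1e-4, max_iter=200):
    """Derivative-free coordinate ascent: nudge each axis by +/-step, keep
    improving moves, and shrink the step once no axis improves."""
    x = list(start)
    best = measure(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                cand = x[:]
                cand[i] += delta
                val = measure(cand)
                if val > best:
                    x, best, improved = cand, val, True
                    break
        if not improved:          # no axis improved: refine the step size
            step *= shrink
            if step < tol:
                break
    return x, best

# synthetic "power meter" peaking at mirror tilt (0.3, -0.2)
power = lambda v: 1.0 / (1.0 + (v[0] - 0.3) ** 2 + (v[1] + 0.2) ** 2)
pos, p = align(power, [2.0, 2.0])
```

Starting far from the peak, the routine walks to the maximum and refines until the step falls below tolerance.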

  7. Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh; Manzano Franco, Joseph B.; Tumeo, Antonino

    2015-05-20

    Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.
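
    The auto-tuning loop described above can be reduced to a few lines. The sketch below is a minimal random-search tuner over a discrete configuration space, with a synthetic cost model standing in for measured runtime or energy; the parameter names (layout, unroll, sched) and the cost numbers are invented, and OpenTuner's real search techniques are considerably richer.

```python
import random

def autotune(cost, space, budget=200, seed=0):
    """Random-search auto-tuner over a discrete configuration space.
    `space` maps parameter name -> list of choices; `cost` maps a config
    dict -> scalar to minimize (e.g. measured runtime or energy)."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(budget):
        cfg = {k: rng.choice(v) for k, v in space.items()}
        c = cost(cfg)
        if c < best_cost:
            best_cfg, best_cost = cfg, c
    return best_cfg, best_cost

# invented tuning knobs and a synthetic cost model standing in for a real run
space = {"layout": ["aos", "soa"], "unroll": [1, 2, 4, 8],
         "sched": ["static", "dynamic"]}

def cost(cfg):
    penalty = 0.0 if cfg["layout"] == "soa" else 0.4
    penalty += abs(cfg["unroll"] - 4) * 0.1
    penalty += 0.0 if cfg["sched"] == "dynamic" else 0.2
    return 1.0 + penalty

cfg, c = autotune(cost, space)
```

With 16 possible configurations and a budget of 200 trials, the tuner reliably lands at or near the synthetic optimum.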

  8. Design Optimization of Mixed-Criticality Real-Time Applications on Cost-Constrained Partitioned Architectures

    DEFF Research Database (Denmark)

    Tamas-Selicean, Domitian; Pop, Paul

    2011-01-01

    In this paper we are interested to implement mixed-criticality hard real-time applications on a given heterogeneous distributed architecture. Applications have different criticality levels, captured by their Safety-Integrity Level (SIL), and are scheduled using static-cyclic scheduling. Mixed...... of different SILs can share a partition only if they are all elevated to the highest SIL among them. Such elevation leads to increased development costs.We are interested to determine (i) the mapping of tasks to processors, (ii) the assignment of tasks to partitions, (iii) the sequence and size of the time...

  9. Indoor Wireless Localization-hybrid and Unconstrained Nonlinear Optimization Approach

    Directory of Open Access Journals (Sweden)

    R. Jayabharathy

    2013-07-01

    Full Text Available In this study, a hybrid TOA/RSSI wireless localization is proposed for accurate positioning in indoor UWB systems. The major problem in indoor localization is the effect of Non-Line-of-Sight (NLOS) propagation. To mitigate NLOS effects, an unconstrained nonlinear optimization approach is utilized to process Time-of-Arrival (TOA) and Received Signal Strength (RSS) in the location system. TOA range measurements and a path loss model are used to discriminate LOS and NLOS conditions. Weighting factors assigned by hypothesis testing are used in solving the objective function of the proposed approach; they describe the credibility of the TOA range measurements. Performance of the proposed technique is evaluated through MATLAB simulation. The results show that the proposed technique performs well and achieves improved positioning under severe NLOS conditions.
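
    The core of such an approach, stripped of the hypothesis-testing machinery, is a weighted nonlinear least-squares fit of position to range measurements. The sketch below uses plain gradient descent with invented anchors and a fixed down-weight standing in for an NLOS flag; it illustrates the idea, not the authors' algorithm.

```python
import math

def locate(anchors, ranges, weights, iters=300, lr=0.1):
    """Weighted least-squares position fix from TOA range estimates:
    gradient descent on f(x, y) = sum_i w_i * (||p - a_i|| - r_i)^2."""
    x, y = 0.0, 0.0                       # initial guess at the origin
    for _ in range(iters):
        gx = gy = 0.0
        for (ax, ay), r, w in zip(anchors, ranges, weights):
            d = math.hypot(x - ax, y - ay) or 1e-9
            g = 2.0 * w * (d - r) / d     # chain rule through the distance
            gx += g * (x - ax)
            gy += g * (y - ay)
        x -= lr * gx
        y -= lr * gy
    return x, y

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = (3.0, 4.0)
ranges = [math.hypot(true_pos[0] - ax, true_pos[1] - ay) for ax, ay in anchors]
# down-weight the last range as if hypothesis testing had flagged it NLOS
est = locate(anchors, ranges, weights=[1.0, 1.0, 1.0, 0.2])
```

With exact ranges the fit recovers the true position; in a real system the weights would come from the LOS/NLOS discrimination step.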

  10. Portfolio optimization in enhanced index tracking with goal programming approach

    Science.gov (United States)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of passive fund management in the stock market. Enhanced index tracking aims to generate excess return over the return achieved by the market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio to maximize the mean return and minimize the risk. The objective of this paper is to determine the portfolio composition and performance using a goal programming approach in enhanced index tracking and to compare it to the market index. Goal programming is a branch of multi-objective optimization which can handle decision problems that involve two different goals in enhanced index tracking: a trade-off between maximizing the mean return and minimizing the risk. The results of this study show that the optimal portfolio with the goal programming approach is able to outperform the Malaysian market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, achieving a higher mean return and lower risk without purchasing all the stocks in the market index.
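
    A minimal goal-programming illustration, under invented data: brute-force enumeration of portfolio weights on a simplex grid, penalizing under-achievement of a return goal and over-shooting of a risk goal (risk taken here as mean absolute deviation to keep things simple). A realistic implementation would use an LP solver rather than a grid.

```python
# invented returns of three stocks over six periods
R = [
    [0.02, 0.05, 0.01],
    [0.01, -0.02, 0.03],
    [0.03, 0.04, 0.00],
    [-0.01, 0.02, 0.02],
    [0.02, 0.03, 0.01],
    [0.00, 0.01, 0.02],
]
GOAL_RETURN, GOAL_RISK = 0.02, 0.010

def evaluate(w):
    port = [sum(wi * r for wi, r in zip(w, row)) for row in R]
    mean = sum(port) / len(port)
    mad = sum(abs(p - mean) for p in port) / len(port)  # mean absolute deviation
    under = max(0.0, GOAL_RETURN - mean)  # under-achieved return goal
    over = max(0.0, mad - GOAL_RISK)      # exceeded risk goal
    return under + over, mean, mad

best = None
steps = 20
for i in range(steps + 1):                # enumerate w1 + w2 + w3 = 1 on a grid
    for j in range(steps + 1 - i):
        w = (i / steps, j / steps, (steps - i - j) / steps)
        dev, mean, mad = evaluate(w)
        if best is None or dev < best[0]:
            best = (dev, w, mean, mad)
```

The best grid point nearly satisfies both goals at once, which is exactly the trade-off goal programming formalizes.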

  11. Genetic braid optimization: A heuristic approach to compute quasiparticle braids

    Science.gov (United States)

    McDonald, Ross B.; Katzgraber, Helmut G.

    2013-02-01

    In topologically protected quantum computation, quantum gates can be carried out by adiabatically braiding two-dimensional quasiparticles, reminiscent of entangled world lines. Bonesteel et al. [Phys. Rev. Lett. 95, 140503 (2005)], as well as Leijnse and Flensberg [Phys. Rev. B 86, 104511 (2012)], recently provided schemes for computing quantum gates from quasiparticle braids. Mathematically, the problem of executing a gate becomes that of finding a product of the generator matrices of the braid group that approximates the gate best, up to an error. To date, efficient methods to compute these gates strive to optimize only for accuracy. We explore the possibility of using a generic approach applicable to a variety of braiding problems based on evolutionary (genetic) algorithms. The method efficiently finds optimal braids while allowing the user to optimize for the relative utilities of accuracy and/or length. Furthermore, when optimizing for error only, the method can quickly produce efficient braids.
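
    The search problem can be sketched concretely. The toy genetic algorithm below evolves words over a two-letter generator alphabet to match a target matrix product; Hadamard and T matrices stand in for the actual braid generator matrices, and all GA parameters are arbitrary choices for illustration.

```python
import math
import cmath
import random

# toy generator set standing in for braid matrices: Hadamard and T gates
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)], [1 / math.sqrt(2), -1 / math.sqrt(2)]]
T = [[1, 0], [0, cmath.exp(1j * math.pi / 4)]]
GENS = [H, T]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def product(word):
    m = [[1, 0], [0, 1]]
    for g in word:
        m = matmul(m, GENS[g])
    return m

def dist(a, b):
    return sum(abs(a[i][j] - b[i][j]) ** 2
               for i in range(2) for j in range(2)) ** 0.5

TARGET = product([0, 1, 0, 1, 1])      # gate defined by a known 5-letter word
LENGTH, POP, GENERATIONS = 5, 40, 60
rng = random.Random(3)
fitness = lambda w: -dist(product(w), TARGET)

pop = [[rng.randrange(2) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 4]            # keep the best quarter unchanged
    # a few random immigrants per generation keep diversity up
    children = [[rng.randrange(2) for _ in range(LENGTH)] for _ in range(5)]
    while len(children) < POP - len(elite):
        a, b = rng.sample(elite, 2)
        cut = rng.randrange(1, LENGTH)  # one-point crossover + bit-flip mutation
        children.append([x ^ (rng.random() < 0.2) for x in a[:cut] + b[cut:]])
    pop = elite + children
best = max(pop, key=fitness)
```

A real braid search would use the actual generator set, longer words, and a fitness combining accuracy and length, as the paper describes.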

  12. Optimizing communication satellites payload configuration with exact approaches

    Science.gov (United States)

    Stathakis, Apostolos; Danoy, Grégoire; Bouvry, Pascal; Talbi, El-Ghazali; Morelli, Gianluigi

    2015-12-01

    The satellite communications market is competitive and rapidly evolving. The payload, which is in charge of applying frequency conversion and amplification to the signals received from Earth before their retransmission, is made of various components. These include reconfigurable switches that permit the re-routing of signals based on market demand or because of some hardware failure. In order to meet modern requirements, the size and the complexity of current communication payloads are increasing significantly. Consequently, the optimal payload configuration, which was previously done manually by the engineers with the use of computerized schematics, is now becoming a difficult and time-consuming task. Efficient optimization techniques are therefore required to find the optimal set(s) of switch positions to optimize some operational objective(s). In order to tackle this challenging problem for the satellite industry, this work proposes two Integer Linear Programming (ILP) models. The first one is single-objective and focuses on the minimization of the length of the longest channel path, while the second one is bi-objective and additionally aims at minimizing the number of switch changes in the payload switch matrix. Experiments are conducted on a large set of instances of realistic payload sizes using the CPLEX® solver and two well-known exact multi-objective algorithms. Numerical results demonstrate the efficiency and limitations of the ILP approach on this real-world problem.
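
    At toy scale, the single-objective version (minimize the longest channel path over all switch settings) can simply be brute-forced; the ILP and CPLEX are needed only at realistic payload sizes. The channel-length function below is an entirely hypothetical routing table, not a real payload model.

```python
from itertools import product

# toy payload: 4 channels, 3 binary switches; hop count of each channel for
# a given switch configuration (a stand-in for a real routing table)
def channel_lengths(cfg):
    s0, s1, s2 = cfg
    return [
        2 + (1 if s0 else 3),          # channel 1 has a shortcut when s0 is set
        1 + (2 if s0 == s1 else 4),    # channel 2 prefers aligned s0 and s1
        3 + (0 if s2 else 2),
        2 + (1 if s1 != s2 else 2),
    ]

best_cfg, best_len = None, float("inf")
for cfg in product([0, 1], repeat=3):  # exhaustive min-max search
    longest = max(channel_lengths(cfg))
    if longest < best_len:
        best_cfg, best_len = cfg, longest
```

For this invented table the unique optimum sets all three switches, giving a longest path of 4 hops.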

  13. Solution of optimization problems using hybrid architecture; Solucao de problemas de otimizacao utilizando arquitetura hibrida

    Energy Technology Data Exchange (ETDEWEB)

    Murakami, Lelis Tetsuo

    2008-07-01

    king of problem. Because of the importance and magnitude of this issue, every effort that contributes to the improvement of power planning is welcome, which corroborates this thesis, whose objective is to propose technically and economically viable solutions to optimization problems with a new approach that has the potential to be applied to many other similar problems. (author)

  14. Green Architecture

    Science.gov (United States)

    Lee, Seung-Ho

    Today, the environment has become a main subject in many scientific disciplines and in industrial development, owing to global warming. This paper presents an analysis of the tendency of Green Architecture in France along three axes: regulations and approaches for sustainable architecture (certificates and standards), renewable materials (green materials), and strategies (equipment) of sustainable technology. The definition of 'Green Architecture' is given in the introduction, and the question of interdisciplinarity in the technological development of 'Green Architecture' is raised in the conclusion.

  15. Incorporating High-Speed, Optimizing, Interleaving, Configurable/Composable Scheduling into NASA's EUROPA Planning Architecture Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced, robust, autonomous planning systems have not focused on the scheduling decisions made by the planner. And high quality, optimizing schedulers have rarely...

  16. Optimization Approaches for Designing Quantum Reversible Arithmetic Logic Unit

    Science.gov (United States)

    Haghparast, Majid; Bolhassani, Ali

    2016-03-01

    Reversible logic has emerged in recent years as a promising alternative for applications in low-power design and quantum computation, due to its ability to reduce power dissipation, an important research area in low-power VLSI and ULSI design. Many important contributions have been made in the literature towards reversible implementations of arithmetic and logical structures; however, there have not been many efforts directed towards efficient approaches for designing a reversible Arithmetic Logic Unit (ALU). In this study, three efficient approaches are presented and their use in the design of reversible ALUs is demonstrated: three new designs of a reversible one-digit arithmetic logic unit for quantum arithmetic. The paper provides an explicit construction of a reversible ALU effecting basic arithmetic operations with respect to the minimization of cost metrics. In the proposed architectures, each block is realized using elementary quantum logic gates. Reversible implementations of the proposed designs are then analyzed and evaluated. The results demonstrate that the proposed designs are cost-effective compared with existing counterparts. All designs are at the nanometric scale.
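
    The defining property such designs rely on is that every gate must be a bijection on its inputs. The check below uses the standard Toffoli (CCNOT) gate, a common universal reversible building block, rather than any of the paper's specific designs.

```python
from itertools import product

def toffoli(a, b, c):
    """Toffoli (CCNOT): flips target c iff both controls a and b are 1.
    With c = 0 it computes AND reversibly, a typical ALU building block."""
    return a, b, c ^ (a & b)

# reversibility means the gate permutes the 3-bit state space (a bijection):
# all 8 inputs must map to 8 distinct outputs
images = {toffoli(*bits) for bits in product((0, 1), repeat=3)}
```

Toffoli is also its own inverse, so applying it twice restores any input, which is how reversible circuits are "uncomputed" to reclaim ancilla bits.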

  17. ADAPTIVE REUSE FOR NEW SOCIAL AND MUNICIPAL FUNCTIONS AS AN ACCEPTABLE APPROACH FOR CONSERVATION OF INDUSTRIAL HERITAGE ARCHITECTURE IN THE CZECH REPUBLIC

    Directory of Open Access Journals (Sweden)

    Oleg Fetisov

    2016-04-01

    Full Text Available The present paper deals with the problem of conservation and adaptive reuse of industrial heritage architecture. The relevance and topicality of adaptive reuse of industrial heritage architecture for new social and municipal functions as a conservation concept are defined. New insights into the typology of industrial architecture are reviewed (e.g., global changes in European industry, new concepts and technologies in manufacturing, new features of industrial architecture and their construction and typology, first results of industrialization, and changes in the typology of industrial architecture in the post-industrial period). General goals and tasks of conservation in the context of adaptive reuse of industrial heritage architecture are defined (historical; architectural and artistic; technical). Adaptive reuse as an acceptable approach for conservation and new use is proposed and reviewed, and a logical model of adaptive reuse of industrial heritage architecture as an acceptable approach for new use is developed. Consequently, three general methods for the conservation of industrial heritage architecture by the adaptive reuse approach are developed: historical, architectural and artistic, and technical. Relevant functional method concepts (social concepts) are defined and classified, and the general beneficial effect of the adaptive reuse approach is given. On the basis of an analysis of experience in adaptive reuse of industrial architecture with new social functions, general conclusions are developed.

  18. Coverage Optimization for Defect-Tolerance Logic Mapping on Nanoelectronic Crossbar Architectures

    Institute of Scientific and Technical Information of China (English)

    Bo Yuan; Bin Li

    2012-01-01

    Emerging nano-devices with the corresponding nano-architectures are expected to supplement or even replace conventional lithography-based CMOS integrated circuits, while also facing the serious challenge of high defect rates. In this paper, a new weighted coverage is defined as one of the most important evaluation criteria of defect-tolerance logic mapping algorithms for functional design on nanoelectronic crossbar architectures. Experiments show that this new criterion calculates the number of crossbar modules required by a given logic function more accurately than the previous one presented by Yellambalase et al. Based on the new criterion, a new effective mapping algorithm based on a genetic algorithm (GA) is proposed. Compared with the state-of-the-art greedy mapping algorithm, the proposed algorithm shows good effectiveness and robustness in experiments on testing problems of various scales and defect rates, with superior performance observed on problems of large scale and high defect rates.

  19. Optimal trading strategies—a time series approach

    Science.gov (United States)

    Bebbington, Peter A.; Kühn, Reimer

    2016-05-01

    Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate against these are investigated. Finally we apply our framework to real world data.
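
    In its simplest constraint-only form (minimize exposure w^T C w subject to the weights summing to one), the optimal strategy has a closed form, w = C^{-1}1 / (1^T C^{-1}1). The sketch below applies it to a synthetic AR(1)-style auto-covariance matrix; the decay parameter and dimension are invented for illustration.

```python
import numpy as np

def min_variance_weights(C):
    """Closed-form minimum-variance allocation across trading times:
    minimize w^T C w subject to sum(w) = 1."""
    ones = np.ones(C.shape[0])
    x = np.linalg.solve(C, ones)   # x = C^{-1} 1 without forming the inverse
    return x / (ones @ x)          # normalize so the weights sum to one

# toy auto-covariance of a single asset's price at 4 trading times
# (stationary AR(1)-style decay with correlation 0.6 per step)
rho, n = 0.6, 4
C = np.array([[rho ** abs(i - j) for j in range(n)] for i in range(n)])
w = min_variance_weights(C)
var = w @ C @ w                    # realized exposure of the optimal strategy
```

Spreading the trade across times beats concentrating it at any single time (whose variance here is 1), which is the intuition behind the time-domain reformulation.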

  20. Ant colony optimization approach to estimate energy demand of Turkey

    Energy Technology Data Exchange (ETDEWEB)

    Duran Toksari, M. [Erciyes University, Kayseri (Turkey). Engineering Faculty, Industrial Engineering Department

    2007-08-15

    This paper attempts to shed light on the determinants of energy demand in Turkey. An energy demand model is first proposed using the ant colony optimization (ACO) approach, a multi-agent technique in which the behavior of each ant, inspired by the foraging behavior of real ants, is used to solve optimization problems. The ACO energy demand estimation (ACOEDE) model is developed using population, gross domestic product (GDP), imports and exports. All equations proposed here are linear or quadratic. The quadratic ACOEDE provided a better-fitting solution owing to fluctuations of the economic indicators. The ACOEDE model projects the energy demand of Turkey to 2025 according to three scenarios. The relative estimation errors of the ACOEDE model are the lowest when compared with the Ministry of Energy and Natural Resources (MENR) projection. (author)

  1. An Optimal Rubrics-Based Approach to Real Estate Appraisal

    Directory of Open Access Journals (Sweden)

    Zhangcheng Chen

    2017-05-01

    Full Text Available Traditional real estate appraisal methods obtain estimates of real estate by using mathematical modeling to analyze the existing sample data. However, the information in sample data sometimes cannot fully reflect real-time quotes. For example, in a thin real estate market, correlated sample data for the estimated object are lacking, which limits the estimates of these traditional methods. In this paper, an optimal rubrics-based approach to real estate appraisal is proposed, which brings in crowdsourcing. The valuation estimate can serve as a market indication for potential real estate buyers or sellers. It is based not only on the information in the existing sample data (just like the traditional methods, but also on extra real-time market information from online crowdsourcing feedback, which makes the estimated result close to that of the market. The proposed method constructs the rubrics model from sample data. Based on this, the cosine similarity function is used to calculate the similarity between rubrics for selecting the optimal rubrics. The selected optimal rubrics and the estimated point are posted on a crowdsourcing platform. After comparing the information of the estimated point with the optimal rubrics on the crowdsourcing platform, those users who are familiar with the estimated object complete the appraisal with their knowledge of the real estate market. The experiment results show that the average accuracy of the proposed approach is over 70%; the maximum accuracy is 90%. These results support that the proposed method can readily provide a valuable market reference for potential real estate buyers or sellers, and it is an attempt to use human-computer interaction in the real estate appraisal field.
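
    The rubric-selection step described above reduces to ranking candidate rubrics by cosine similarity against the estimated object's feature vector. The feature vectors below are invented, and in practice features would be normalized or weighted before comparison, since cosine similarity mixes units as-is.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# hypothetical rubric features: [area m2, rooms, age years, floor]
estimated = [85, 3, 10, 5]
rubrics = {
    "r1": [80, 3, 12, 4],
    "r2": [200, 6, 2, 15],
    "r3": [90, 3, 9, 6],
}
# rank rubrics by similarity to the estimated object, most similar first
ranked = sorted(rubrics, key=lambda k: cosine(estimated, rubrics[k]), reverse=True)
top = ranked[0]
```

The most similar rubrics would then be posted to the crowdsourcing platform alongside the estimated point.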

  2. Direct and Evolutionary Approaches for Optimal Receiver Function Inversion

    Science.gov (United States)

    Dugda, Mulugeta Tuji

    Receiver functions are time series obtained by deconvolving vertical-component seismograms from radial-component seismograms. Receiver functions represent the impulse response of the earth structure beneath a seismic station. Generally, receiver functions consist of a number of seismic phases related to discontinuities in the crust and upper mantle. The relative arrival times of these phases are correlated with the locations of discontinuities as well as the media of seismic wave propagation. The Moho (Mohorovicic discontinuity) is a major interface or discontinuity that separates the crust and the mantle. In this research, automatic techniques were developed to determine the depth of the Moho from the earth's surface (the crustal thickness H) and the ratio of crustal seismic P-wave velocity (Vp) to S-wave velocity (Vs) (kappa = Vp/Vs). In this dissertation, an optimization problem of inverting receiver functions has been developed to determine crustal parameters and the three associated weights using evolutionary and direct optimization techniques. The first technique developed makes use of the evolutionary Genetic Algorithms (GA) optimization technique. The second technique combines the direct Generalized Pattern Search (GPS) and evolutionary Fitness Proportionate Niching (FPN) techniques by employing their strengths. In a previous study, a Monte Carlo technique was utilized for determining variable weights in the H-kappa stacking of receiver functions. Compared to that previously introduced variable-weights approach, the current GA and GPS-FPN techniques save substantial time and are suitable for automatic and simultaneous determination of crustal parameters and appropriate weights. The GA implementation provides optimal or near-optimal weights necessary in stacking receiver functions as well as optimal H and kappa values simultaneously. Generally, the objective function of the H-kappa stacking problem

  3. Perspective: Codesign for materials science: An optimal learning approach

    Science.gov (United States)

    Lookman, Turab; Alexander, Francis J.; Bishop, Alan R.

    2016-05-01

    A key element of materials discovery and design is to learn from available data and prior knowledge to guide the next experiments or calculations in order to focus in on materials with targeted properties. We suggest that the tight coupling and feedback between experiments, theory and informatics demands a codesign approach, very reminiscent of computational codesign involving software and hardware in computer science. This requires dealing with a constrained optimization problem in which uncertainties are used to adaptively explore and exploit the predictions of a surrogate model to search the vast high dimensional space where the desired material may be found.
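
The adaptive explore/exploit loop described above can be sketched minimally (all names and the toy "property" function are invented for illustration): a cheap quadratic surrogate is fit to the experiments run so far, and the next experiment maximizes predicted value plus an uncertainty bonus, here crudely proxied by distance to the nearest evaluated point.

```python
import numpy as np

def target(x):                  # hypothetical "material property" to maximize
    return -(x - 0.7) ** 2 + 0.5

X = [0.0, 0.5, 1.0]             # initial "experiments"
y = [target(x) for x in X]
grid = np.linspace(0.0, 1.0, 101)

for _ in range(6):
    coef = np.polyfit(X, y, 2)               # cheap quadratic surrogate
    mean = np.polyval(coef, grid)
    # Exploration bonus: distance to the nearest evaluated point.
    dist = np.min(np.abs(grid[:, None] - np.array(X)[None, :]), axis=1)
    acq = mean + 0.5 * dist                  # exploit + explore trade-off
    x_next = float(grid[np.argmax(acq)])
    X.append(x_next)
    y.append(target(x_next))                 # run the next "experiment"

best = X[int(np.argmax(y))]
```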

  4. Multidisciplinary Design Optimization Under Uncertainty: An Information Model Approach (PREPRINT)

    Science.gov (United States)

    2011-03-01

    and c ∈ R, which is easily solved using the MatLab function fmincon. The reader is cautioned not to optimize over (t, p, c). Our approach requires a...would have to be expanded. The fifteen formulas can serve as the basis for numerical simulations, an easy task using MatLab . 5.3 Simulation of the higher...Design 130, 2008, 081402-1 – 081402-12. [32] M. Loève, ” Fonctions aléatoires du second ordre,” Suplement to P. Lévy, Pro- cessus Stochastiques et

  5. A Hybrid Approach to the Optimization of Multiechelon Systems

    Directory of Open Access Journals (Sweden)

    Paweł Sitek

    2015-01-01

    Full Text Available In freight transportation there are two main distribution strategies: direct shipping and multiechelon distribution. In direct shipping, vehicles starting from a depot bring their freight directly to the destination, while in multiechelon systems freight is delivered from the depot to the customers through intermediate points. Multiechelon systems are particularly useful for logistics in a competitive environment. The paper presents a concept and application of a hybrid approach to modeling and optimization of the Multi-Echelon Capacitated Vehicle Routing Problem. Two paradigms, mathematical programming (MP) and constraint logic programming (CLP), are integrated in one environment. Because MP and CLP treat constraints in different ways and implement different methods, they are combined to exploit the strengths of both. The proposed approach is particularly important for discrete decision models with an objective function and many discrete decision variables appearing in multiple constraints. An implementation of the hybrid approach in the ECLiPSe system using the Eplex library is presented. The Two-Echelon Capacitated Vehicle Routing Problem (2E-CVRP) and its variants are shown as an illustrative example of the hybrid approach. The presented hybrid approach is compared with classical mathematical programming on the same benchmark data sets.
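
As a toy illustration of the two-echelon structure (not the paper's MP/CLP model): each customer is served through one of two satellite depots, each satellite can serve at most two customers, and a used satellite incurs a fixed first-echelon cost. Exhaustive search gives the optimum here; the MP/CLP hybrid addresses this kind of model at realistic scale. All costs below are invented.

```python
from itertools import product

depot_to_sat = [3, 4]             # first-echelon cost if a satellite is used
sat_to_cust = [[1, 2, 5],         # second-echelon cost, satellite 0
               [4, 1, 1]]         # second-echelon cost, satellite 1
capacity = 2                      # customers per satellite

best_cost, best_assign = float("inf"), None
for assign in product(range(2), repeat=3):    # satellite chosen per customer
    if any(assign.count(s) > capacity for s in range(2)):
        continue                              # capacity constraint violated
    used = set(assign)
    cost = (sum(depot_to_sat[s] for s in used)
            + sum(sat_to_cust[s][c] for c, s in enumerate(assign)))
    if cost < best_cost:
        best_cost, best_assign = cost, assign
```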

  6. Optimizing Libraries’ Content Findability Using Simple Object Access Protocol (SOAP) With Multi-Tier Architecture

    Science.gov (United States)

    Lahinta, A.; Haris, I.; Abdillah, T.

    2017-03-01

    The aim of this paper is to describe a developed application of the Simple Object Access Protocol (SOAP) as a model for improving libraries' digital content findability on the library web. The study applies XML text-based protocol tools to collect data about libraries' visibility performance in book search results. Models from the integrated Web Services Description Language (WSDL) and Universal Description, Discovery and Integration (UDDI) are applied to analyse SOAP as an element within the system. The results showed that the developed SOAP application with a multi-tier architecture helps users access the library server website of Gorontalo Province and supports access to the digital collections, subscription databases, and library catalogs of each Regency or City library in Gorontalo Province.
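
A SOAP request is just a small XML envelope; the sketch below builds one with Python's standard library. The service namespace and the `SearchBooks` operation are hypothetical placeholders, not the paper's actual WSDL interface.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
# Hypothetical service namespace for a library-catalog search operation.
SVC_NS = "http://example.org/library/catalog"

ET.register_namespace("soap", SOAP_NS)

env = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
req = ET.SubElement(body, f"{{{SVC_NS}}}SearchBooks")
ET.SubElement(req, f"{{{SVC_NS}}}keyword").text = "informatics"

# This byte string is what a client would POST to the SOAP endpoint.
xml_bytes = ET.tostring(env, encoding="utf-8")
```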

  7. An Optimizing Compiler for Petascale I/O on Leadership Class Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Choudhary, Alok [Northwestern Univ., Evanston, IL (United States); Kandemir, Mahmut [Pennsylvania State Univ., State College, PA (United States)

    2015-03-18

    In high-performance computing systems, parallel I/O architectures usually have very complex hierarchies, with multiple layers that collectively constitute an I/O stack: high-level I/O libraries such as PnetCDF and HDF5, I/O middleware such as MPI-IO, and parallel file systems such as PVFS and Lustre. Our project explored automated instrumentation and compiler support for I/O-intensive applications. It made significant progress towards understanding the complex I/O hierarchies of high-performance storage systems (including storage caches, HDDs, and SSDs), and towards designing and implementing state-of-the-art compiler/runtime technology targeting I/O-intensive HPC applications on leadership-class machines. This final report summarizes the major achievements of the project and points out promising future directions.

  8. Architecture-Driven Level Set Optimization: From Clustering to Subpixel Image Segmentation.

    Science.gov (United States)

    Balla-Arabe, Souleymane; Gao, Xinbo; Ginhac, Dominique; Brost, Vincent; Yang, Fan

    2016-12-01

    Thanks to their effectiveness, active contour models (ACMs) are of great interest to computer vision scientists. Level set methods (LSMs) are the class of geometric active contours. Compared with other ACMs, in addition to subpixel accuracy, they have the intrinsic ability to handle topological changes automatically. Nevertheless, LSMs are computationally expensive. One solution to this cost is hardware acceleration on massively parallel devices such as graphics processing units (GPUs). But the question is: what accuracy can be reached while keeping the algorithm well suited to a massively parallel architecture? In this paper, we attempt to push the compromise between speed and accuracy, efficiency and effectiveness, beyond that of state-of-the-art methods. To this end, we designed a novel architecture-aware hybrid central processing unit (CPU)-GPU LSM for image segmentation. The initialization step, using the well-known k-means algorithm, is fast although executed on a CPU, while the evolution equation of the active contour is inherently local and therefore suitable for GPU-based acceleration. The incorporation of local statistics in the level set evolution allows our model to detect new boundaries that are not extracted by the clustering algorithm. Compared with some cutting-edge LSMs, the introduced model is faster, more accurate, less prone to local minima, and therefore suitable for automatic systems. Furthermore, it allows two-phase clustering algorithms to benefit from the numerous LSM advantages, such as the ability to achieve robust, subpixel-accurate segmentation results with smooth and closed contours. Intensive experiments demonstrate, objectively and subjectively, the good performance of the introduced framework in terms of both speed and accuracy.
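
The CPU-side initialization step can be illustrated with a tiny two-cluster k-means on pixel intensities (a pure-Python sketch, not the paper's implementation); the resulting binary mask is the kind of coarse partition the GPU-side level-set evolution then refines.

```python
# 1-D k-means (k = 2) on pixel intensities, as used to initialize the contour.
def kmeans2(values, iters=20):
    c0, c1 = min(values), max(values)          # simple deterministic seeding
    for _ in range(iters):
        a = [v for v in values if abs(v - c0) <= abs(v - c1)]
        b = [v for v in values if abs(v - c0) > abs(v - c1)]
        c0 = sum(a) / len(a)                   # recompute cluster centroids
        c1 = sum(b) / len(b)
    return c0, c1

pixels = [10, 12, 11, 13, 200, 198, 205, 199]  # toy bimodal "image"
lo, hi = kmeans2(pixels)
# Initial level-set sign mask: +1 inside (bright phase), -1 outside.
mask = [1 if abs(p - hi) < abs(p - lo) else -1 for p in pixels]
```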

  9. Lightweight enterprise architectures

    CERN Document Server

    Theuerkorn, Fenix

    2004-01-01

    STATE OF ARCHITECTURE: Architectural Chaos; Relation of Technology and Architecture; The Many Faces of Architecture; The Scope of Enterprise Architecture; The Need for Enterprise Architecture; The History of Architecture; The Current Environment; Standardization Barriers; The Need for Lightweight Architecture in the Enterprise; The Cost of Technology; The Benefits of Enterprise Architecture; The Domains of Architecture; The Gap between Business and IT; Where Does LEA Fit?; LEA's Framework; Frameworks, Methodologies, and Approaches; The Framework of LEA; Types of Methodologies; Types of Approaches; Actual System Environment

  10. Taxes, subsidies and unemployment - a unified optimization approach

    Directory of Open Access Journals (Sweden)

    Erik Bajalinov

    2010-12-01

    Like a linear programming (LP) problem, a linear-fractional programming (LFP) problem can be usefully applied in a wide range of real-world applications. In the last few decades many research papers and monographs were published worldwide in which authors (mainly mathematicians) investigated different theoretical and algorithmic aspects of LFP problems in various forms. In this paper we consider these two approaches to optimization (based on linear and linear-fractional objective functions) over the same feasible set, compare the results they lead to, and give an interpretation in terms of taxes, subsidies and manpower requirements. We show that in certain cases both approaches are closely connected with one another and may be fruitfully utilized simultaneously.

  11. Forging tool shape optimization using pseudo inverse approach and adaptive incremental approach

    Science.gov (United States)

    Halouani, A.; Meng, F. J.; Li, Y. M.; Labergère, C.; Abbès, B.; Lafon, P.; Guo, Y. Q.

    2013-05-01

    This paper presents a simplified finite element method called "Pseudo Inverse Approach" (PIA) for tool shape design and optimization in multi-step cold forging processes. The approach is based on the knowledge of the final part shape. Some intermediate configurations are introduced and corrected by using a free surface method to consider the deformation paths without contact treatment. A robust direct algorithm of plasticity is implemented by using the equivalent stress notion and tensile curve. Numerical tests have shown that the PIA is very fast compared to the incremental approach. The PIA is used in an optimization procedure to automatically design the shapes of the preform tools. Our objective is to find the optimal preforms which minimize the equivalent plastic strain and punch force. The preform shapes are defined by B-Spline curves. A simulated annealing algorithm is adopted for the optimization procedure. The forging results obtained by the PIA are compared to those obtained by the incremental approach to show the efficiency and accuracy of the PIA.
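
The simulated annealing step can be sketched generically. Here a toy quadratic stands in for the finite-element objective (equivalent plastic strain plus punch force), and two scalars stand in for the B-spline control parameters; the accept/reject rule and geometric cooling are the essential ingredients.

```python
import math
import random

random.seed(0)

# Toy objective standing in for "equivalent plastic strain + punch force":
# minimized at x = (1, 2).
def cost(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

x = [0.0, 0.0]                    # initial "preform parameters"
best = x[:]
T = 1.0                           # initial temperature
for step in range(4000):
    cand = [xi + random.gauss(0, 0.1) for xi in x]   # random neighbor
    d = cost(cand) - cost(x)
    # Accept improvements always, worsenings with probability exp(-d/T).
    if d < 0 or random.random() < math.exp(-d / T):
        x = cand
        if cost(x) < cost(best):
            best = x[:]
    T *= 0.999                    # geometric cooling schedule
```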

  12. AN EFFICIENT 3-DIMENSIONAL DISCRETE WAVELET TRANSFORM ARCHITECTURE FOR VIDEO PROCESSING APPLICATION

    Institute of Scientific and Technical Information of China (English)

    Ganapathi Hegde; Pukhraj Vaya

    2012-01-01

    This paper presents an optimized 3-D Discrete Wavelet Transform (3-D DWT) architecture. The 1-D DWT employed in the design of the 3-D DWT architecture uses a reduced lifting-scheme approach. The architecture is further optimized by applying a block-enabling technique, scaling, and rounding of the filter coefficients. The proposed architecture uses the biorthogonal (9/7) wavelet filter. The architecture is modeled in Verilog HDL, simulated using ModelSim, synthesized using Xilinx ISE, and finally implemented on a Virtex-5 FPGA. The proposed 3-D DWT architecture has a slice register utilization of 5%, an operating frequency of 396 MHz and a power consumption of 0.45 W.
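
The lifting scheme splits the signal into even and odd samples and replaces filtering with cheap in-place predict/update steps, which is what makes it attractive in hardware. The sketch below shows one lifting level for the Haar wavelet for brevity; the paper's biorthogonal 9/7 filter follows the same pattern with two predict/update pairs and different coefficients.

```python
# One lifting-scheme DWT level (Haar case): split, predict, update.
def haar_lift_forward(x):
    s = [x[2 * i] for i in range(len(x) // 2)]       # even samples
    d = [x[2 * i + 1] for i in range(len(x) // 2)]   # odd samples
    d = [d[i] - s[i] for i in range(len(s))]         # predict: detail coeffs
    s = [s[i] + d[i] / 2 for i in range(len(s))]     # update: approximation
    return s, d

def haar_lift_inverse(s, d):
    s = [s[i] - d[i] / 2 for i in range(len(s))]     # undo update
    d = [d[i] + s[i] for i in range(len(s))]         # undo predict
    out = []
    for a, b in zip(s, d):                           # interleave evens/odds
        out += [a, b]
    return out
```

Because every lifting step is trivially invertible, reconstruction is exact, which is why rounding the coefficients (as done in the proposed architecture) can preserve invertibility in fixed-point hardware.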

  13. An Optimal Path Computation Architecture for the Cloud-Network on Software-Defined Networking

    Directory of Open Access Journals (Sweden)

    Hyunhun Cho

    2015-05-01

    Legacy networks do not expose precise information about the network domain for scalability, management and commercial reasons, so it is very hard to compute an optimal path to a destination. To meet new network requirements arising from today's changing ICT environment, the concept of software-defined networking (SDN) has been developed as a technological alternative that overcomes the limitations of the legacy network structure and introduces innovative concepts. The purpose of this paper is to propose an application that calculates optimal paths for general data transmission and real-time audio/video transmission, which constitute the major services of the National Research & Education Network (NREN), in an SDN environment. The proposed SDN routing computation (SRC) application is designed and applied in a multi-domain network for efficient use of resources, selection of the optimal path between multiple domains and optimal establishment of end-to-end connections.
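
In the simplest single-metric case, optimal path computation of this kind reduces to a shortest-path search over the domain topology; a minimal Dijkstra sketch (toy topology and delay weights invented here):

```python
import heapq

def dijkstra(graph, src, dst):
    """graph: {node: [(neighbor, cost), ...]} -- returns (cost, path)."""
    pq = [(0, src, [src])]            # priority queue ordered by path cost
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical multi-domain topology: links weighted by delay.
net = {"A": [("B", 4), ("C", 1)],
       "C": [("B", 1), ("D", 5)],
       "B": [("D", 2)],
       "D": []}
```

An SDN controller with a global view can run exactly this kind of computation centrally, which is what the legacy, information-hiding networks described above prevent.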

  14. Optimization of Partitioned Architectures to Support Soft Real-Time Applications

    DEFF Research Database (Denmark)

    Tamas-Selicean, Domitian; Pop, Paul

    2014-01-01

    In this paper we propose a new Tabu Search-based design optimization strategy for mixed-criticality systems implementing hard and soft real-time applications on the same platform. Our proposed strategy determines an implementation such that all hard real-time applications are schedulable...

  15. Energy Conservation Law in Industrial Architecture: An Approach through Geometric Algebra

    Directory of Open Access Journals (Sweden)

    Juan C. Bravo

    2016-09-01

    Since 1892, the electrical engineering scientific community has been seeking a power theory for interpreting the power flow within electric networks under non-sinusoidal conditions. Although many power theories have been proposed for non-sinusoidal operation, an adequate solution is yet to be found. Within the framework based on complex algebra for non-sinusoidal circuit analysis (frequency domain), verification of the energy conservation law is only possible in sinusoidal situations. In that case, reactive energy turns out to be proportional to the difference between the average electric and magnetic energies stored in the loads, and its cancellation is mathematically trivial. In industrial architecture, however, the apparent power definition for electric loads (non-sinusoidal conditions) is inconsistent with the energy conservation law; in the classical complex algebra approach, consistency holds only for purely resistive loads. Thus, in this paper, a new circuit analysis approach using geometric algebra is used to develop the most general proof of energy conservation in industrial building loads. In terms of geometric objects, this tool calculates the voltage, current, and apparent power in electrical systems in non-sinusoidal, linear/nonlinear situations. In contrast to the traditional method developed by Steinmetz, the suggested approach extends the concept of phasor to multivector phasors, within a new Generalized Complex Geometric Algebra structure (CGn), where Gn is the Clifford algebra of n-dimensional real space and C is the complex vector space. To conclude, a numerical example illustrates the clear advantages of the suggested approach.

  16. PERCEPTIVE APPROACH FOR ROUTE OPTIMIZATION IN MOBILE IP

    Directory of Open Access Journals (Sweden)

    Vinay Kumar Nigam

    2010-12-01

    The recent advances in wireless communication technology and the unprecedented growth of the Internet have paved the way for wireless networking and IP mobility. The Mobile Internet Protocol [1,2] has been designed within the IETF to support the mobility [2] of users who wish to connect to the Internet and maintain communications as they move from place to place. Mobile IPv6 allows a mobile node to talk directly to its peers while retaining the ability to move around and change the currently used IP addresses. This mode of operation is called Route Optimization [7,10]. In this approach, the correspondent node learns a binding between the mobile node's permanent home address and its current temporary care-of address. This introduces several security vulnerabilities to Mobile IP, among which the most important is the authentication and authorization of binding updates. This paper describes route optimization under the introduction of mobility. We propose a new, efficient technique for route optimization in Mobile IP that maintains smooth communication while the mobile node moves from one network domain to another without losing the connection. Our technique also improves the path in intra-network communication [9].

  17. Specific energy optimization in sawing of rocks using Taguchi approach

    Institute of Scientific and Technical Information of China (English)

    Izzet Karakurt

    2014-01-01

    This work aims at selecting optimal operating variables to obtain the minimum specific energy (SE) in sawing of rocks. A particular granite was sampled and sawn by a fully automated circular diamond sawblade. The peripheral speed, the traverse speed, the depth of cut and the flow rate of cooling fluid were selected as the operating variables. The Taguchi approach was adopted as a statistical design-of-experiments technique for the optimization studies. The results were evaluated based on the analysis of variance and the signal-to-noise ratio (S/N ratio). Statistically significant operating variables and their percentage contributions to the process were also determined. Additionally, a statistical model was developed to demonstrate the relationship between SE and the operating variables using regression analysis, and the model was then verified. It was found that the optimal combination of operating variables for minimum SE is a peripheral speed of 25 m/s, a traverse speed of 70 cm/min, a cut depth of 2 cm and a cooling-fluid flow rate of 100 mL/s. The cut depth and traverse speed were statistically determined to be the significant operating variables affecting SE, in that order. Furthermore, the regression model results reveal that the predictive model is highly applicable for practical use.
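
The smaller-the-better S/N ratio used in such Taguchi analyses has a simple closed form, SN = −10·log10(mean(y²)); a quick sketch with invented replicate data:

```python
import math

def sn_smaller_is_better(values):
    """Taguchi signal-to-noise ratio for a smaller-the-better response."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Hypothetical replicated specific-energy measurements for two settings;
# the setting with the higher (less negative) S/N ratio is preferred.
run_a = [2.0, 2.2, 2.1]
run_b = [3.5, 3.6, 3.4]
sn_a = sn_smaller_is_better(run_a)
sn_b = sn_smaller_is_better(run_b)
```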

  18. Replication in Overlay Networks: A Multi-objective Optimization Approach

    Science.gov (United States)

    Al-Haj Hassan, Osama; Ramaswamy, Lakshmish; Miller, John; Rasheed, Khaled; Canfield, E. Rodney

    Recently, overlay network-based collaborative applications such as instant messaging, content sharing, and Internet telephony are becoming increasingly popular. Many of these applications rely upon data-replication to achieve better performance, scalability, and reliability. However, replication entails various costs such as storage for holding replicas and communication overheads for ensuring replica consistency. While simple rule-of-thumb strategies are popular for managing the cost-benefit tradeoffs of replication, they cannot ensure optimal resource utilization. This paper explores a multi-objective optimization approach for replica management, which is unique in the sense that we view the various factors influencing replication decisions such as access latency, storage costs, and data availability as objectives, and not as constraints. This enables us to search for solutions that yield close to optimal values for these parameters. We propose two novel algorithms, namely multi-objective Evolutionary (MOE) algorithm and multi-objective Randomized Greedy (MORG) algorithm for deciding the number of replicas as well as their placement within the overlay. While MOE yields higher quality solutions, MORG is better in terms of computational efficiency. The paper reports a series of experiments that demonstrate the effectiveness of the proposed algorithms.
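
The core of treating latency, storage cost and availability as objectives rather than constraints is Pareto dominance: a placement survives if no other placement is at least as good in every objective and strictly better in one. A minimal sketch (placement tuples invented):

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective (all minimized)
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical replica placements scored as
# (latency, storage cost, 1 - availability) -- all minimized.
candidates = [(10, 5, 0.2), (8, 7, 0.1), (12, 4, 0.3),
              (9, 6, 0.4), (11, 8, 0.5)]
front = pareto_front(candidates)
```

Both proposed algorithms (MOE and MORG) search for points on or near this front instead of optimizing a single weighted score.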

  19. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha

    2013-02-01

    This paper is devoted to the study of an extension of dynamic programming approach which allows sequential optimization of approximate decision rules relative to the length and coverage. We introduce an uncertainty measure R(T) which is the number of unordered pairs of rows with different decisions in the decision table T. For a nonnegative real number β, we consider β-decision rules that localize rows in subtables of T with uncertainty at most β. Our algorithm constructs a directed acyclic graph Δβ(T) which nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most β. The graph Δβ(T) allows us to describe the whole set of so-called irredundant β-decision rules. We can describe all irredundant β-decision rules with minimum length, and after that among these rules describe all rules with maximum coverage. We can also change the order of optimization. The consideration of irredundant rules only does not change the results of optimization. This paper contains also results of experiments with decision tables from UCI Machine Learning Repository. © 2012 Elsevier Inc. All rights reserved.
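
The uncertainty measure R(T) is straightforward to compute directly from its definition, as is the "attribute = value" subtable restriction used to build the graph; a small sketch with an invented decision table:

```python
from itertools import combinations

def uncertainty(table):
    """R(T): number of unordered pairs of rows with different decisions."""
    decisions = [d for _, d in table]
    return sum(1 for a, b in combinations(decisions, 2) if a != b)

def subtable(table, attr_index, value):
    """Rows of T satisfying the condition 'attribute = value'."""
    return [row for row in table if row[0][attr_index] == value]

# Rows are (attribute-value tuple, decision).
T = [((0, 1), "yes"), ((1, 1), "no"), ((1, 0), "yes"), ((0, 0), "no")]
```

For a threshold β, partitioning of a subtable stops as soon as its R value is at most β, exactly as in the construction of the graph Δβ(T).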

  20. An optimization approach for fitting canonical tensor decompositions.

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel M. (Sandia National Laboratories, Albuquerque, NM); Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
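
For a rank-one tensor, the ALS baseline mentioned above is only a few lines of NumPy: each factor update is a closed-form least-squares solve with the other two factors held fixed. This is an illustrative sketch of ALS itself, not the gradient-based method the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.random(4), rng.random(5), rng.random(6)
T = np.einsum('i,j,k->ijk', a, b, c)           # exact rank-1 tensor

ah, bh, ch = rng.random(4), rng.random(5), rng.random(6)
for _ in range(50):                             # ALS sweeps
    # Each update is the least-squares solution for one factor.
    ah = np.einsum('ijk,j,k->i', T, bh, ch) / ((bh @ bh) * (ch @ ch))
    bh = np.einsum('ijk,i,k->j', T, ah, ch) / ((ah @ ah) * (ch @ ch))
    ch = np.einsum('ijk,i,j->k', T, ah, bh) / ((ah @ ah) * (bh @ bh))

fit_error = np.linalg.norm(T - np.einsum('i,j,k->ijk', ah, bh, ch))
```

For higher ranks the same sweep structure applies with matricized least-squares solves, and it is this per-iteration cost that the paper shows the gradient computation can match.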

  1. Modeling the crop transpiration using an optimality-based approach

    Institute of Scientific and Technical Information of China (English)

    Stanislaus J. Schymanski; Murugesu Sivapalan

    2008-01-01

    Evapotranspiration constitutes more than 80% of the long-term water balance in Northern China. In this area, crop transpiration due to large areas of agriculture and irrigation is responsible for the majority of evapotranspiration. A model for crop transpiration is therefore essential for estimating the agricultural water consumption and understanding its feedback to the environment. However, most existing hydrological models usually calculate transpiration by relying on parameter calibration against local observations, and do not take into account crop feedback to the ambient environment. This study presents an optimality-based ecohydrology model that couples an ecological hypothesis, the photosynthetic process, stomatal movement, water balance, root water uptake and crop senescence, with the aim of predicting crop characteristics, CO2 assimilation and water balance based only on given meteorological data. Field experiments were conducted in the Weishan Irrigation District of Northern China to evaluate performance of the model. Agreement between simulation and measurement was achieved for CO2 assimilation, evapotranspiration and soil moisture content. The vegetation optimality was proven valid for crops and the model was applicable for both C3 and C4 plants. Due to the simple scheme of the optimality-based approach as well as its capability for modeling dynamic interactions between crops and the water cycle without prior vegetation information, this methodology is potentially useful to couple with the distributed hydrological model for application at the watershed scale.

  2. Why do colder mothers produce larger eggs? An optimality approach.

    Science.gov (United States)

    Bownds, Celeste; Wilson, Robbie; Marshall, Dustin J

    2010-11-15

    One of the more common patterns of offspring size variation is that mothers tend to produce larger offspring at lower temperatures. Whether such variation is adaptive remains unclear. Determining whether optimal offspring size differs between thermal environments provides a direct way of assessing the adaptive significance of temperature-driven variation in egg size. Here, we examined the relationship between offspring size and performance at three temperatures for several important fitness components in the zebrafish, Danio rerio. The effects of egg size on performance were highly variable among life-history stages (i.e. pre- and post-hatching) and dependent on the thermal environment; offspring size positively affected performance at some temperatures but negatively affected performance at others. When we used these data to generate a simple optimality model, the model predicted that mothers should produce the largest offspring at the lowest temperature, offspring of intermediate size at the highest temperature and the smallest offspring at the intermediate temperature. An experimental test of these predictions showed that the rank order of observed offspring sizes produced by mothers matched our predictions. Our results suggest that mothers adaptively manipulate the size of their offspring in response to thermally driven changes in offspring performance and highlight the utility of optimality approaches for understanding offspring size variation.

  3. Silanization of glass chips—A factorial approach for optimization

    Science.gov (United States)

    Vistas, Cláudia R.; Águas, Ana C. P.; Ferreira, Guilherme N. M.

    2013-12-01

    Silanization of glass chips with 3-mercaptopropyltrimethoxysilane (MPTS) was investigated and optimized to generate a high-quality layer with well-oriented thiol groups. A full factorial design was used to evaluate the influence of silane concentration and reaction time. The stabilization of the silane monolayer by thermal curing was also investigated, and a disulfide reduction step was included to fully regenerate the thiol surface function. Fluorescence analysis and water contact angle measurements were used to quantitatively assess the chemical modifications, wettability and quality of the modified chip surfaces throughout the silanization, curing and reduction steps. The factorial design enables a systematic approach to the optimization of the glass chip silanization process. The optimal conditions for the silanization were incubation of the chips in a 2.5% MPTS solution for 2 h, followed by curing at 110 °C for 2 h and a reduction step with 10 mM dithiothreitol for 30 min at 37 °C. Under these conditions the surface density of functional thiol groups was 4.9 × 10¹³ molecules/cm², which is similar to the expected maximum coverage obtained from theoretical estimations based on the projected molecular area (∼5 × 10¹³ molecules/cm²).
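
A full factorial design is just the Cartesian product of the factor levels; a two-factor sketch (the level values below are illustrative, loosely echoing the silane-concentration and reaction-time factors):

```python
from itertools import product

def full_factorial(levels_per_factor):
    """All runs of a full factorial design, one tuple per run."""
    return list(product(*levels_per_factor))

# Two factors at two levels each (silane concentration in %, time in hours);
# a 2x2 full factorial therefore has 4 runs.
runs = full_factorial([[1.0, 2.5], [1, 2]])
```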

  4. Optimization of minoxidil microemulsions using fractional factorial design approach.

    Science.gov (United States)

    Jaipakdee, Napaphak; Limpongsa, Ekapol; Pongjanyakul, Thaned

    2016-01-01

    The objective of this study was to apply fractional factorial and multi-response optimization designs using a desirability function approach for developing topical microemulsions. Minoxidil (MX) was used as a model drug. Limonene was used as the oil phase. Based on solubility, Tween 20 and caprylocaproyl polyoxyl-8 glycerides were selected as surfactants, and propylene glycol and ethanol were selected as co-solvents in the aqueous phase. Experiments were performed according to a two-level fractional factorial design to evaluate the effects of the independent variables, Tween 20 concentration in the surfactant system (X1), surfactant concentration (X2), ethanol concentration in the co-solvent system (X3) and limonene concentration (X4), on the MX solubility (Y1), permeation flux (Y2), lag time (Y3) and skin deposition (Y4) of the MX microemulsions. It was found that Y1 increased with increasing X3 and decreasing X2 and X4, whereas Y2 increased with decreasing X1, X2 and increasing X3. While Y3 was not affected by these variables, Y4 increased with decreasing X1 and X2. Three regression equations were obtained and used to calculate predicted values of responses Y1, Y2 and Y4. The predicted values matched the experimental values reasonably well, with high determination coefficients. Using the desirability function, the optimized microemulsion demonstrating the highest MX solubility, permeation flux and skin deposition was confirmed at low levels of X1, X2 and X4 and a high level of X3.

  5. An Informatics Approach to Demand Response Optimization in Smart Grids

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Aman, Saima; Cao, Baohua; Giakkoupis, Mike; Kumbhare, Alok; Zhou, Qunzhi; Paul, Donald; Fern, Carol; Sharma, Aditya; Prasanna, Viktor K

    2011-03-03

    Power utilities are increasingly rolling out “smart” grids with the ability to track consumer power usage in near real-time using smart meters that enable bidirectional communication. However, the true value of smart grids is unlocked only when the veritable explosion of data that will become available is ingested, processed, analyzed and translated into meaningful decisions. These include the ability to forecast electricity demand, respond to peak load events, and improve sustainable use of energy by consumers, and are made possible by energy informatics. Information and software system techniques for a smarter power grid include pattern mining and machine learning over complex events and integrated semantic information, distributed stream processing for low-latency response, Cloud platforms for scalable operations, and privacy policies to mitigate information leakage in an information-rich environment. Such an informatics approach is being used in the DoE-sponsored Los Angeles Smart Grid Demonstration Project, and the resulting software architecture will lead to an agile and adaptive Los Angeles Smart Grid.

  6. Optimal Subinterval Selection Approach for Power System Transient Stability Simulation

    Directory of Open Access Journals (Sweden)

    Soobae Kim

    2015-10-01

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined, because analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis based on modal analysis of a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used with a focus on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden while also achieving accurate simulation responses. The performance of the proposed method is demonstrated with the GSO 37-bus system.

  7. Design of a modular product architecture and a cellular manufacturing layout : A concurrent engineering approach

    NARCIS (Netherlands)

    Slomp, J; McGinnis, LF; Ahmad, MM; Sullivan, WG

    1996-01-01

    This paper formalizes the problem of concurrently designing a modular product architecture and a cellular manufacturing layout and presents a way to deal with this design problem. First, the different types of modular product architecture are explained, as well as important implications of modularity

  8. Metamodeling and the Critic-based approach to multi-level optimization.

    Science.gov (United States)

    Werbos, Ludmilla; Kozma, Robert; Silva-Lugo, Rodrigo; Pazienza, Giovanni E; Werbos, Paul J

    2012-08-01

    Large-scale networks with hundreds of thousands of variables and constraints are becoming more and more common in the logistics, communications, and distribution domains. Traditionally, the utility functions defined on such networks are optimized using some variation of linear programming, such as Mixed Integer Programming (MIP). Despite enormous progress in both hardware (multiprocessor systems and specialized processors) and software (e.g., Gurobi), we are reaching the limits of what these tools can handle in real time. Modern logistics problems, for example, call for expanding the problem both vertically (from one day up to several days) and horizontally (combining separate solution stages into an integrated model). The complexity of such integrated models calls for alternative solution methods, such as Approximate Dynamic Programming (ADP), which provide the further increase in performance necessary for daily operation. In this paper, we present the theoretical basis and related experiments for solving multistage decision problems by using the results obtained for shorter periods as building blocks for the models and the solution, via Critic-Model-Action cycles, in which various types of neural networks are combined with traditional MIP models in a unified optimization system. In this system architecture, fast and simple feed-forward networks are trained to reasonably initialize more complicated recurrent networks, which serve as approximators of the value function (Critic). The combination of interrelated neural networks and optimization modules allows multiple queries of the same system, providing flexibility and optimizing performance for large-scale real-life problems. A MATLAB implementation of our solution procedure for a realistic set of data and constraints shows promising results compared to the iterative MIP approach.

  9. Optimal architecture of differentiation cascades with asymmetric and symmetric stem cell division.

    Science.gov (United States)

    Sánchez-Taltavull, Daniel

    2016-10-21

    The role of symmetric division in stem cell biology is ambiguous. It is necessary after injuries, but if symmetric divisions occur too often, the appearance of tumours is more likely. To explore the role of symmetric and asymmetric division in cell populations, we propose a mathematical model of competing populations in which stem cell expansion is controlled by fully differentiated cells. We show that there is an optimal fraction of symmetric stem cell divisions that maximises the long-term survival probability of the organism. Moreover, we determine the optimal number of stem cells in a tissue and show that this number has to be small enough to reduce the probability of the appearance of advantageous malignant cells, yet large enough to ensure that the population will not be suppressed by stochastic fluctuations. Copyright © 2016 Elsevier Ltd. All rights reserved.
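The trade-off described in this record can be sketched with a small Monte Carlo model: too few symmetric divisions and the pool cannot recover from random losses; too many and a retained mutation becomes likely, so an intermediate fraction wins. The injury model, all rates, and the mutation bookkeeping below are invented for illustration and are not from the paper.

```python
import random

def survival_probability(p_sym, n0=10, mu=0.01, q=0.05, steps=300,
                         trials=3000, seed=1):
    """Toy model: a stem cell pool of size n suffers random injuries and
    regenerates through symmetric divisions. Each symmetric division risks
    retaining a malignant mutation in the stem pool (asymmetric divisions
    shed mutations with the differentiated daughter). Returns the fraction
    of trials in which the pool neither went extinct nor turned malignant."""
    rng = random.Random(seed)
    surviving = 0
    for _ in range(trials):
        n, alive = n0, True
        for _ in range(steps):
            if rng.random() < q:           # injury removes a few stem cells
                n -= 3
                if n <= 0:
                    alive = False
                    break
            if rng.random() < p_sym:       # symmetric division this step
                if rng.random() < mu:      # mutation kept in the stem pool
                    alive = False
                    break
                if n < n0:
                    n += 1                 # regenerative self-renewal
                else:
                    n += 1 if rng.random() < 0.5 else -1   # neutral drift
        if alive:
            surviving += 1
    return surviving / trials

# an intermediate fraction of symmetric divisions outperforms both extremes
low, mid, high = (survival_probability(p) for p in (0.0, 0.5, 1.0))
```

At `p_sym = 0` the pool cannot regenerate and injuries drive it extinct; at `p_sym = 1` roughly 300 symmetric divisions accumulate an almost-certain mutation; in between, survival peaks.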

  10. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations.

    Science.gov (United States)

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts.

  11. A parallel 3-D discrete wavelet transform architecture using pipelined lifting scheme approach for video coding

    Science.gov (United States)

    Hegde, Ganapathi; Vaya, Pukhraj

    2013-10-01

    This article presents a parallel architecture for 3-D discrete wavelet transform (3-DDWT). The proposed design is based on the 1-D pipelined lifting scheme. The architecture is fully scalable beyond the present coherent Daubechies filter bank (9, 7). This 3-DDWT architecture has advantages such as no group of pictures restriction and reduced memory referencing. It offers low power consumption, low latency and high throughput. The computing technique is based on the concept that lifting scheme minimises the storage requirement. The application specific integrated circuit implementation of the proposed architecture is done by synthesising it using 65 nm Taiwan Semiconductor Manufacturing Company standard cell library. It offers a speed of 486 MHz with a power consumption of 2.56 mW. This architecture is suitable for real-time video compression even with large frame dimensions.
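The 1-D lifting factorization underlying this design can be sketched in a few lines: the CDF 9/7 filter bank reduces to two predict and two update passes plus a scaling step, and the inverse simply replays the passes with negated coefficients, which is why the scheme minimises storage. This is a plain software sketch of the lifting factorization (boundary handling by symmetric reflection; scaling conventions vary between implementations), not the hardware architecture of the paper.

```python
# CDF 9/7 lifting coefficients (JPEG 2000 irreversible transform)
A, B, G, D = -1.586134342, -0.052980118, 0.882911076, 0.443506852
K = 1.230174105

def _lift(x, coef, odd):
    """Add coef * (left + right neighbour) to every odd- (or even-)
    indexed sample in place, with symmetric extension at the borders."""
    n = len(x)
    for i in range(1 if odd else 0, n, 2):
        left = x[i - 1] if i - 1 >= 0 else x[i + 1]
        right = x[i + 1] if i + 1 < n else x[i - 1]
        x[i] += coef * (left + right)

def dwt97_forward(x):
    x = list(x)
    _lift(x, A, odd=True)    # predict 1
    _lift(x, B, odd=False)   # update 1
    _lift(x, G, odd=True)    # predict 2
    _lift(x, D, odd=False)   # update 2
    # scale: even samples = approximation band, odd samples = detail band
    return [v / K if i % 2 == 0 else v * K for i, v in enumerate(x)]

def dwt97_inverse(c):
    x = [v * K if i % 2 == 0 else v / K for i, v in enumerate(c)]
    _lift(x, -D, odd=False)  # each lifting step is exactly undone
    _lift(x, -G, odd=True)
    _lift(x, -B, odd=False)
    _lift(x, -A, odd=True)
    return x
```

Because every lifting step is reversed exactly, reconstruction is perfect up to floating-point rounding, with no extra buffers beyond the signal itself.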

  12. RSLES: an architectural implementation of a decision support system for optimal recruit station location

    OpenAIRE

    Houck, Dale E.; Shigley, Mark V.

    1999-01-01

    Approved for public release; distribution is unlimited. This thesis describes a component-based methodology for developing a decision support system (DSS) for the optimal location of military recruiting stations in regional recruiting markets. The DSS is designed to ensure that the selected stations minimize cost for a given level of production. The interface allows users to perform "what if" analysis to determine whether there are better locations to meet desired objectives. The Recruit Statio...

  13. Optimizing Performance of Combustion Chemistry Solvers on Intel's Many Integrated Core (MIC) Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Sitaraman, Hariswaran [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Grout, Ray W [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-06-09

    This work investigates novel algorithm designs and optimization techniques for restructuring chemistry integrators in zero- and multidimensional combustion solvers, which can then be used effectively on the emerging generation of Intel's Many Integrated Core/Xeon Phi processors. These processors offer increased computing performance via a large number of lightweight cores at relatively lower clock speeds than the traditional processors (e.g. Intel Sandy Bridge/Ivy Bridge) used in current supercomputers. Despite the lower clock speeds, this style of processor can be used productively for chemistry integrators, which form a costly part of computational combustion codes. Performance commensurate with traditional processors is achieved here through a combination of careful memory layout, exposing multiple levels of fine-grained parallelism, and extensive use of vendor-supported libraries (Cilk Plus and the Math Kernel Library). Important optimization techniques for efficient memory usage and vectorization have been identified and quantified. These optimizations resulted in a speed-up of ~3x with the Intel 2013 compiler and ~1.5x with the Intel 2017 compiler for large chemical mechanisms, compared to the unoptimized version on the Intel Xeon Phi. The strategies, especially with respect to memory usage and vectorization, should also benefit general-purpose computational fluid dynamics codes.
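The memory-layout point generalizes beyond the Xeon Phi: arranging species data as a structure of arrays lets a compiler (or NumPy) vectorize the rate evaluation across grid cells instead of looping cell by cell. The toy three-species mechanism below is invented for illustration; only the layout idea reflects the record.

```python
import numpy as np

def rates_per_cell(conc, k):
    """Array-of-structures style: evaluate the toy mechanism
    A + B -> C (rate k[0]), C -> A + B (rate k[1]) one cell at a time."""
    out = np.empty_like(conc)
    for cell in range(conc.shape[1]):
        a, b, c = conc[0, cell], conc[1, cell], conc[2, cell]
        r1, r2 = k[0] * a * b, k[1] * c
        out[:, cell] = (-r1 + r2, -r1 + r2, r1 - r2)
    return out

def rates_vectorized(conc, k):
    """Structure-of-arrays style: conc has shape (n_species, n_cells), so
    each species row is contiguous in memory and whole-array operations
    vectorize across all cells at once."""
    r1 = k[0] * conc[0] * conc[1]
    r2 = k[1] * conc[2]
    return np.stack([-r1 + r2, -r1 + r2, r1 - r2])
```

Both routines compute identical rates; the second exposes unit-stride access over cells, which is the property SIMD hardware (and wide vector units in particular) rewards.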

  14. GPA-MDS: A Visualization Approach to Investigate Genetic Architecture among Phenotypes Using GWAS Results.

    Science.gov (United States)

    Wei, Wei; Ramos, Paula S; Hunt, Kelly J; Wolf, Bethany J; Hardiman, Gary; Chung, Dongjun

    2016-01-01

    Genome-wide association studies (GWAS) have identified tens of thousands of genetic variants associated with hundreds of phenotypes and diseases, providing clinical and medical benefits to patients through novel biomarkers and therapeutic targets. Recently, evidence has been accumulating that different complex traits share a common risk basis, a phenomenon known as pleiotropy. Previously, the statistical method GPA (Genetic analysis incorporating Pleiotropy and Annotation) was developed to improve the identification of risk variants and to investigate pleiotropic structure through a joint analysis of multiple GWAS datasets. While GPA provides a statistically rigorous framework to evaluate pleiotropy between phenotypes, it is still not trivial to investigate genetic relationships among a large number of phenotypes within the GPA framework. To address this challenge, in this paper we propose a novel approach, GPA-MDS, to visualize genetic relationships among phenotypes using the GPA algorithm and multidimensional scaling (MDS). This tool will help researchers investigate common etiology among diseases, which can potentially lead to the development of common treatments across diseases. We evaluate the proposed GPA-MDS framework using a simulation study and apply it to jointly analyze GWAS datasets examining 18 unique phenotypes, which helps reveal the shared genetic architecture of these phenotypes.
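The MDS stage of such a pipeline is compact: classical (Torgerson) MDS reduces a phenotype-by-phenotype distance matrix to low-dimensional coordinates via double centering and an eigendecomposition. A minimal NumPy sketch (the GPA step that would produce the distances is not reproduced here):

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed n points in `dim` dimensions from an n x n matrix of pairwise
    Euclidean distances via double centering (classical/Torgerson MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # recovered Gram (inner-product) matrix
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:dim]          # keep the largest eigenvalues
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))
```

For exactly Euclidean input distances the embedding reproduces them up to rotation and reflection; for GWAS-derived dissimilarities it gives the best low-rank approximation in the same sense.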

  15. A Service Oriented Architecture Approach to Achieve Interoperability between Immunization Information Systems in Iran

    Science.gov (United States)

    Hosseini, Masoud; Ahmadi, Maryam; Dixon, Brian E.

    2014-01-01

    Clinical decision support (CDS) systems can support vaccine forecasting and immunization reminders; however, immunization decision-making requires data from fragmented, independent systems. Interoperability and accurate data exchange between immunization information systems (IIS) are essential for utilizing immunization CDS systems. Service oriented architecture (SOA) and Health Level 7 (HL7) are dominant standards for web-based exchange of clinical information. We implemented a system based on SOA and HL7 v3 to support immunization CDS in Iran. We evaluated system performance by exchanging 1500 immunization records for roughly 400 infants between two IISs. System turnaround time was less than a minute for synchronous operation calls, and the retrieved immunization history of infants was always identical across systems. CDS-generated reports were consistent with immunization guidelines, and the calculations of next visit times were accurate. Interoperability is rare or nonexistent between IIS. Since inter-state data exchange is rare in the United States, this approach could be a good prototype for achieving interoperability of immunization information. PMID:25954452

  17. Approach to design neural cryptography: a generalized architecture and a heuristic rule.

    Science.gov (United States)

    Mu, Nankun; Liao, Xiaofeng; Huang, Tingwen

    2013-06-01

    Neural cryptography, a type of public key exchange protocol, is widely considered an effective method for sharing a common secret key between two neural networks over public channels. How to design neural cryptography remains a great challenge. In this paper, in order to address this challenge, a generalized network architecture and a significant heuristic rule are designed. The proposed generic framework, named the tree state classification machine (TSCM), extends and unifies the existing structures, i.e., the tree parity machine (TPM) and the tree committee machine (TCM). Furthermore, we find that the heuristic rule can improve the security of TSCM-based neural cryptography. Therefore, TSCM and the heuristic rule can guide the design of a great number of effective neural cryptography candidates, among which more secure instances can be achieved. Significantly, in light of TSCM and the heuristic rule, we further show that our designed neural cryptography outperforms TPM (the most secure model at present) in security. Finally, a series of numerical simulation experiments are provided to verify the validity and applicability of our results.
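The baseline protocol that TSCM generalizes, the tree parity machine, is easy to sketch: two parties with identical topology exchange only an output bit for each public random input and apply a Hebbian update whenever the bits agree, repeating until their weights coincide. The toy version below uses deliberately small parameters (K, N, L) chosen for quick convergence; real deployments need larger parameters and an analysis of the known attacks, none of which is attempted here.

```python
import random

class TPM:
    """Tree parity machine: K hidden units, N inputs per unit,
    integer weights clipped to [-L, L]."""
    def __init__(self, K, N, L, rng):
        self.K, self.N, self.L = K, N, L
        self.w = [[rng.randint(-L, L) for _ in range(N)] for _ in range(K)]

    def output(self, x):
        # sigma_k = sign(w_k . x_k); tau = product of all sigmas
        self.sigma = []
        for k in range(self.K):
            s = sum(wi * xi for wi, xi in zip(self.w[k], x[k]))
            self.sigma.append(1 if s > 0 else -1)
        tau = 1
        for s in self.sigma:
            tau *= s
        return tau

    def hebbian(self, x, tau):
        # update only the hidden units that agree with the common output
        for k in range(self.K):
            if self.sigma[k] == tau:
                for i in range(self.N):
                    w = self.w[k][i] + x[k][i] * tau
                    self.w[k][i] = max(-self.L, min(self.L, w))

def synchronize(K=3, N=10, L=3, max_steps=50000, seed=7):
    """Run the public exchange until both machines share identical weights
    (the secret key); returns the number of steps taken, or None."""
    rng = random.Random(seed)
    a, b = TPM(K, N, L, rng), TPM(K, N, L, rng)
    for step in range(1, max_steps + 1):
        x = [[rng.choice((-1, 1)) for _ in range(N)] for _ in range(K)]
        ta, tb = a.output(x), b.output(x)
        if ta == tb:            # public outputs agree: both parties update
            a.hebbian(x, ta)
            b.hebbian(x, tb)
            if a.w == b.w:      # identical weights serve as the shared key
                return step
    return None
```

Only the bits `ta`/`tb` and the random inputs are public; once the weights coincide the two machines stay synchronized because they follow identical dynamics.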

  18. Novel architecture of composite electrode for optimization of lithium battery performance

    Energy Technology Data Exchange (ETDEWEB)

    Guy, D.; Lestriez, B.; Gaudefroy, V.; Guyomard, D. [Laboratoire de Chimie des Solides, Institut des Materiaux Jean Rouxel, CNRS, Universite de Nantes, B.P. 32229, 44322 Nantes Cedex 3 (France); Bouchet, R. [Laboratoire Madirel, Universite de Marseille, Centre Sr Jerome, Av. Escadrille Normandie Niemen, 13397 Marseille Cedex 20 (France)

    2006-06-19

    We show that the polymeric binder of the composite electrode may have an important role in the performance of the lithium trivanadate Li{sub 1.2}V{sub 3}O{sub 8} electrode. We describe a new tailored polymeric binder combination with controlled polymer-filler (carbon black) interactions that allows the preparation of a new and more efficient electrode architecture. Using this polymeric binder, composite electrodes based on Li{sub 1.2}V{sub 3}O{sub 8} display a room temperature cycling capacity of 280 mAh g{sup -1} (C/5 rate, 3.3-2 V) instead of 150 mAh g{sup -1} for a standard-type composite electrode (poly(vinylidene fluoride)-hexafluoropropylene (PVdF-HFP) binder). We have coupled scanning electron microscopy (SEM) observations, galvanostatic cycling and electrochemical impedance spectroscopy in order to define and understand the impact of the microstructure of the composite electrode on its electrochemical performance. Derived from these studies, the main key factors that provide efficient charge carrier collection within the complex medium of the composite electrode are discussed. (author)

  19. Preparation and Recipe Optimization of Water-based Architectural Heat Insulation Coatings

    Institute of Scientific and Technical Information of China (English)

    CHEN Lijun; SHI Hongxin; XIANG Juping; WU Hongke

    2008-01-01

    Water-based architectural heat insulation coatings were studied to overcome the drawbacks of conventional inorganic silicate heat insulation coatings. The coatings were prepared by mechanical agitation, with mixed organic polymer emulsions used as the binder and mixed heat-insulating aggregates as the powder; some assistants were also added. The water temperature difference in a plastic container coated with the heat insulation coating was used to characterize its heat-insulating property. The influences of the components of the mixed polymer emulsion, the mass ratio of polymer emulsion to powder, the particle size of the heat-insulating aggregates, the amount of air-entraining admixture added, and the choice of thickeners on the properties of the coatings were studied. The experimental results show that heat insulation coatings with a good finish, good heat-insulation property and artificial weathering resistance can be prepared when the binder is composed of 66.92% styrene-acrylic emulsion, 16.59% elastic emulsion and 16.49% silicone-acrylic emulsion, the mass ratio of polymer emulsion to powder is 0.45, the particle size of the heat-insulating aggregates is in the range of 200 to 250 mesh, the added amount of sericite is 15%, the added amount of air-entraining admixture is in the range of 1.0% to 1.5%, and the thickeners are mixtures of ASE-60 and RM-5000.

  20. Evaluating neural control with optimal architecture for DC/DC converter

    Directory of Open Access Journals (Sweden)

    Fredy Hernán Martínez Sarmiento

    2010-05-01

    -power equipment raises great design challenges due to the complexity of the mathematical model and its highly nonlinear dynamic characteristics. Artificial intelligence techniques, such as neural networks, promise great improvements in design and final performance, given their capacity for learning complex dynamics and generalising their behaviour. This work was aimed at proposing, and later evaluating the dynamic response of, a direct control scheme based on neural networks that also eliminates trial and error from its design. The artificial neural network-based direct control was designed using bio-inspired search models, which simultaneously optimised two different but fundamental aspects of the network: its architecture and the weights of its connections. The control was applied to a boost converter. The results, observed through the scheme's dynamic performance, response time and output voltage deviation, led to the conclusion that the criteria selected for designing the control were appropriate and represent a contribution towards developing control applications for DC/DC switch-mode systems.
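The central idea of the record, simultaneously evolving a network's architecture and its connection weights, can be sketched generically with a (1+1) elitist evolution strategy whose mutation operator may grow, shrink, or jitter the genome. This is a generic illustration of neuroevolution fitting an invented target function, not the authors' bio-inspired algorithm or their converter model.

```python
import math
import random

def net(params, x):
    """One-hidden-layer tanh network; the hidden-layer width is part of
    the genome, so mutations can change the architecture itself."""
    w1, b1, w2, b2 = params
    return sum(v * math.tanh(w * x + b) for w, b, v in zip(w1, b1, w2)) + b2

def loss(params, samples):
    return sum((net(params, x) - y) ** 2 for x, y in samples)

def mutate(params, rng):
    w1, b1, w2 = list(params[0]), list(params[1]), list(params[2])
    b2 = params[3]
    r = rng.random()
    if r < 0.1 and len(w1) < 8:        # structural mutation: add a unit
        w1.append(rng.gauss(0, 1)); b1.append(rng.gauss(0, 1)); w2.append(0.0)
    elif r < 0.2 and len(w1) > 1:      # structural mutation: drop a unit
        i = rng.randrange(len(w1))
        del w1[i], b1[i], w2[i]
    else:                              # parametric mutation: jitter weights
        w1 = [w + rng.gauss(0, 0.2) for w in w1]
        b1 = [b + rng.gauss(0, 0.2) for b in b1]
        w2 = [w + rng.gauss(0, 0.2) for w in w2]
        b2 += rng.gauss(0, 0.2)
    return w1, b1, w2, b2

def evolve(samples, iters=2000, seed=3):
    """(1+1) elitist evolution strategy over architecture and weights."""
    rng = random.Random(seed)
    best = ([rng.gauss(0, 1)], [0.0], [rng.gauss(0, 1)], 0.0)  # one hidden unit
    best_loss = start_loss = loss(best, samples)
    for _ in range(iters):
        cand = mutate(best, rng)
        cand_loss = loss(cand, samples)
        if cand_loss <= best_loss:     # elitist acceptance
            best, best_loss = cand, cand_loss
    return best, start_loss, best_loss

samples = [(x / 8.0, math.sin(3.0 * x / 8.0)) for x in range(-8, 9)]
model, start_loss, final_loss = evolve(samples)
```

Because acceptance is elitist, fitness never worsens; a real controller design would replace the target function with closed-loop simulation of the converter.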

  1. Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture.

    Science.gov (United States)

    Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam

    2016-06-01

    The non-local means denoising filter has been established as a gold standard for the image denoising problem in general, and particularly in medical imaging, due to its efficiency. However, its computation time has limited its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a parallel hybrid architecture is proposed to solve the computation time problem, together with a new method to compute the filter's coefficients that takes the neighborhood of the current voxel more accurately into account. In terms of implementation, our key contribution consists in reducing the number of shared memory accesses. The proposed method was tested on the BrainWeb database for different levels of noise. Performance and sensitivity were quantified in terms of speedup, peak signal-to-noise ratio, execution time and the number of floating point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques recently published in the literature.
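The filter itself fits in a few lines of NumPy, which also makes clear why it is expensive: every pixel requires a patch comparison against its entire search window. Below is a minimal single-threaded 2-D sketch with invented parameter values; the paper's actual contribution, the hybrid-parallel distributed version and its coefficient scheme, is not reproduced.

```python
import numpy as np

def nlm_denoise(img, patch=1, search=3, h=0.5):
    """Non-local means on a 2-D array: each pixel is replaced by a
    weighted average over a search window, with weights decaying in the
    squared distance between the surrounding (2*patch+1)^2 patches."""
    pad = patch + search
    p = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            cy, cx = y + pad, x + pad
            ref = p[cy - patch:cy + patch + 1, cx - patch:cx + patch + 1]
            num = den = 0.0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    qy, qx = cy + dy, cx + dx
                    cand = p[qy - patch:qy + patch + 1,
                             qx - patch:qx + patch + 1]
                    w = np.exp(-((ref - cand) ** 2).sum() / h ** 2)
                    num += w * p[qy, qx]
                    den += w
            out[y, x] = num / den
    return out
```

Patches that straddle an edge receive near-zero weight, so edges are preserved while flat regions are averaged; the quadruple loop is exactly the cost the GPU/hybrid implementations attack.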

  2. Development and Optimization of Tailored Composite TBC Design Architectures for Improved Erosion Durability

    Science.gov (United States)

    Schmitt, Michael P.; Schreiber, Jeremy M.; Rai, Amarendra K.; Eden, Timothy J.; Wolfe, Douglas E.

    2017-08-01

    Rare-earth pyrochlores, RE2Zr2O7, have been identified as potential thermal barrier coating (TBC) materials due to their attractive thermal properties and CMAS resistance. However, they possess a low fracture toughness, which results in poor erosion durability and foreign object damage resistance. This research focuses on the development of tailored composite air plasma spray (APS) TBC design architectures utilizing a t' Low-k secondary toughening phase (ZrO2-2Y2O3-1Gd2O3-1Yb2O3; mol.%) to enhance the erosion durability of a hyper-stoichiometric pyrochlore, NZO (ZrO2-25Nd2O3-5Y2O3-5Yb2O3; mol.%). In this study, composite coatings were deposited with 30, 50, and 70 wt.% t' Low-k toughening phase in a horizontally aligned lamellar morphology, which enhances the toughening response of the coating. The coatings were characterized via SEM and XRD and were tested for erosion durability before and after isothermal heat treatment at 1100 °C. Analysis with mixing laws indicated improved erosion performance; however, a lack of long-term thermal stability was shown via isothermal heat treatments at 1316 °C. An impact stress analysis was performed using finite element analysis of a coating cross section, representing the first microstructurally realistic study of the mechanical properties of TBCs, with the results correlating well with observed behavior.

  3. Clinicoanatomic study of optimal arthroscopic approaches to the elbow

    Directory of Open Access Journals (Sweden)

    I. A. Kuznetsov

    2015-01-01

    Full Text Available The purpose: development and topographic substantiation of optimal arthroscopic approaches to the elbow, taking into account the location of the neurovascular structures in different functional positions. Material and methods: Anatomical relationships of elbow nerves and bony structures were studied by dissection of non-fixed anatomical material (6 elbow joints). To investigate the variant anatomy of the brachial artery, MRI was performed in 23 patients. In 10 patients the authors used ultrasound to study the topographic relationships of elbow nerve structures at different functional positions of the upper extremity. Variability of the brachial artery deviation, depending on the angle of elbow flexion, was studied in six angiograms of non-fixed anatomical material. Statistical analysis was performed using Instant + and Past 306 software. Results: It was found that elbow flexion from 180° to 90° moves the brachial artery away from the bones, with a maximum distance from the humerus of 5 cm above the joint space; the distance increases from 23.5±3.1 mm to 23.9±3.1 mm. In 90° elbow flexion the radial and median nerves are at their maximum distance from bony structures, 16.01±0.43 and 20.48±0.28 mm, respectively. Conclusion: These findings justified the conclusion that the lateral arthroscopic approaches to the elbow are the safest. It is possible to perform two lateral arthroscopic approaches, optical and instrumental, without conflict with major neurovascular structures. The optimal position for the surgery is 90° elbow flexion.

  4. Architectural Theatricality

    DEFF Research Database (Denmark)

    Tvedebrink, Tenna Doktor Olsen; Fisker, Anna Marie; Kirkegaard, Poul Henning

    2013-01-01

    and recovery through the architecture framing eating experiences, this article examines, from a theoretical perspective, two less debated concepts relating to hospitality called food design and architectural theatricality. In architectural theory the nineteenth century German architect Gottfried Semper...... is known for his writings on theatricality, understood as a holistic design approach emphasizing the contextual, cultural, ritual and social meanings rooted in architecture. Relative hereto, the International Food Design Society recently argued, in a similar holistic manner, that the methodology used...... to provide an aesthetic eating experience includes knowledge on both food and design. Based on a hermeneutic reading of Semper’s theory, our thesis is that this holistic design approach is important when debating concepts of hospitality in hospitals. We use this approach to argue for how ‘food design...

  5. A conceptual approach to approximate tree root architecture in infinite slope models

    Science.gov (United States)

    Schmaltz, Elmar; Glade, Thomas

    2016-04-01

    Vegetation-related properties - particularly tree root distribution and coherent hydrologic and mechanical effects on the underlying soil mantle - are commonly not considered in infinite slope models. Indeed, from a geotechnical point of view, these effects appear to be difficult to be reproduced reliably in a physically-based modelling approach. The growth of a tree and the expansion of its root architecture are directly connected with both intrinsic properties such as species and age, and extrinsic factors like topography, availability of nutrients, climate and soil type. These parameters control four main issues of the tree root architecture: 1) Type of rooting; 2) maximum growing distance to the tree stem (radius r); 3) maximum growing depth (height h); and 4) potential deformation of the root system. Geometric solids are able to approximate the distribution of a tree root system. The objective of this paper is to investigate whether it is possible to implement root systems and the connected hydrological and mechanical attributes sufficiently in a 3-dimensional slope stability model. Hereby, a spatio-dynamic vegetation module should cope with the demands of performance, computation time and significance. However, in this presentation, we focus only on the distribution of roots. The assumption is that the horizontal root distribution around a tree stem on a 2-dimensional plane can be described by a circle with the stem located at the centroid and a distinct radius r that is dependent on age and species. We classified three main types of tree root systems and reproduced the species-age-related root distribution with three respective mathematical solids in a synthetic 3-dimensional hillslope ambience. Thus, two solids in an Euclidian space were distinguished to represent the three root systems: i) cylinders with radius r and height h, whilst the dimension of latter defines the shape of a taproot-system or a shallow-root-system respectively; ii) elliptic
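The geometric approximation described in this record amounts to very simple volume formulas: a deep, narrow cylinder for a taproot system, a wide, flat cylinder for a shallow-root system, and, for the third case (truncated above at "elliptic"), a semi-ellipsoid is one plausible reading. A sketch with hypothetical radius/height values (all parameters invented; real values would be species- and age-dependent):

```python
import math

def cylinder_volume(r, h):
    """Volume of a cylindrical root envelope with radius r and height h."""
    return math.pi * r ** 2 * h

def semi_ellipsoid_volume(r, h):
    """Half of an ellipsoid of revolution: horizontal semi-axis r,
    vertical semi-axis h (one option for a heart-shaped root system)."""
    return (2.0 / 3.0) * math.pi * r ** 2 * h

# hypothetical species/age-dependent envelopes (lengths in m, volumes in m^3)
taproot_vol = cylinder_volume(r=1.0, h=3.0)      # narrow and deep
shallow_vol = cylinder_volume(r=3.0, h=0.6)      # wide and flat
heart_vol = semi_ellipsoid_volume(r=2.0, h=1.5)  # intermediate envelope
```

In a raster slope-stability model, each tree stem would stamp such an envelope onto the grid and the hydrological and mechanical root effects would be applied only inside it.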

  6. Proposed Information Sharing Security Approach for Security Personnels, Vertical Integration, Semantic Interoperability Architecture and Framework for Digital Government

    CERN Document Server

    Headayetullah, Md; Biswas, Sanjay; Puthal, B

    2011-01-01

    This paper mainly depicts a conceptual overview of vertical integration and semantic interoperability architecture, such as the Educational Sector Architectural Framework (ESAF) for the New Zealand government, and different interoperability framework solutions for digital government. In this paper, we try to develop a secure information sharing approach for digital government to improve homeland security. This approach is a role- and cooperation-based approach for the security personnel of different government departments. In order to run any successful digital government anywhere in the world, it is necessary to interact with citizens and to share secure information via different networks among citizens or other governments. Consequently, in order to allow users to cooperate and share information transparently and seamlessly across different networks and databases universally, a safe and trusted information-sharing environment has been recognized as a very important requirement and t...

  7. EVALUATING AND REFINING THE ‘ENTERPRISE ARCHITECTURE AS STRATEGY’ APPROACH AND ARTEFACTS

    Directory of Open Access Journals (Sweden)

    M. De Vries

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: Enterprise Architecture (EA) is a new discipline that has emerged from the need to create a holistic view of an enterprise, and thereby to discover business/IT integration and alignment opportunities across enterprise structures. Previous EA value propositions that merely focus on IT cost reductions will no longer convince management to invest in EA. Today, EA should enable business strategy in the organisation to create value. This ability resides in enterprise optimisation through process standardisation and integration. A new approach is therefore required to integrate EA into the strategy planning process of the organisation.
    This article explores the use of three key artefacts – operating models, core diagrams, and an operating maturity assessment as defined by Ross, Weill & Robertson [1] – as the basis of this new approach. Action research is applied to a research group to obtain qualitative feedback on the practicality of the artefacts.


  8. Optimal Traits of Plant Hydraulic Architecture for Rock-Dominated Landscapes

    Science.gov (United States)

    Schwinning, S.

    2014-12-01

    Optimality models can only be as good as assumptions about the relevant constraints on plant function. To date, Dynamic Global Vegetation Models (DGVMs) have utilized relatively simple representations of the rhizosphere, chiefly assuming uniform, thick soil without restrictions to root development. In reality, many terrestrial landforms have features that severely impede root growth. These include habitats with shallow or skeletal soils over bedrock, karst or caliche. Experiments have shown that plants in these habitats are not limited to using soil water, but use a variety of strategies to extract water from rocky substrates, e.g., growing extended structural roots along rock crevices into soil pockets or perched water tables, developing flattened root mats inside planar fissures or associating with mycorrhizae to extract water directly from the rock matrix. While these strategies expand plant-available water sources beyond soil, the added pools are expected to have extraction and recharge characteristics quite different from soil. Here I ask how the dynamical differences in non-soil water pools should influence plant hydraulic traits. I built upon earlier work to determine how predictions of optimal plant function types change when model details are adjusted to reflect water uptake from non-soil sources. The model is a hydraulic continuum model based on Darcy's law with optimization parameters representing biomass allocation between leaves, stems and roots, variable stem water storage capacity, and sensitivity of leaf and root conductivity to water potential. The rhizosphere is represented by two dynamically distinct water pools, the first representing a component with quick recharge and depletion (remnant soil), the second a non-soil component with restricted root density, potentially high storage capacity but possibly low hydraulic conductivity. 
The prediction of optimal plant functional types was significantly altered for non-soil compared to soil substrates.

  9. Architectural Theatricality

    DEFF Research Database (Denmark)

    Tvedebrink, Tenna Doktor Olsen

    and well-being, as well as outline a set of basic design principles ‘predicting’ the future interior architectural qualities of patient eating environments. Methodologically the thesis is based on an explorative study employing an abductive approach and hermeneutic-interpretative strategy utilizing tactics......This PhD thesis is motivated by a personal interest in the theoretical, practical and creative qualities of architecture, but also by a wonder and curiosity about the cultural and social relations architecture represents through its occupation with both the sciences and the arts. Inspired by present...... initiatives in Aalborg Hospital to overcome patient undernutrition by refurbishing eating environments, this thesis engages in an investigation of the interior architectural qualities of patient eating environments. The relevance for this holistic perspective, synthesizing health, food and architecture...

  10. Architectural Narratives

    DEFF Research Database (Denmark)

    2010-01-01

    and architectural heritage; another group tries to embed new performative technologies in expressive architectural representation. Finally, this essay provides a theoretical framework for the analysis of the political rationales of these projects and for the architectural representation bridges the gap between......In this essay, I focus on the combination of programs and the architecture of cultural projects that have emerged within the last few years. These projects are characterized as “hybrid cultural projects,” because they intend to combine experience with entertainment, play, and learning. This essay...... identifies new rationales related to this development, and it argues that “cultural planning” has increasingly shifted its focus from a cultural institutional approach to a more market-oriented strategy that integrates art and business. The role of architecture has changed, too. It not only provides...

  11. Architectural Theatricality

    DEFF Research Database (Denmark)

    Tvedebrink, Tenna Doktor Olsen

    , is the current building of a series of Danish ‘super hospitals’ and an increased focus among architectural practices on research-based knowledge produced with the architectural sub-disciplines Healing Architecture and Evidence-Based Design. The problem is that this research does not focus on patient eating...... environments and a knowledge gap therefore exists in present hospital designs. Consequently, the purpose of this thesis has been to investigate if any research-based knowledge exists supporting the hypothesis that the interior architectural qualities of eating environments influence patient food intake, health...... and well-being, as well as outline a set of basic design principles ‘predicting’ the future interior architectural qualities of patient eating environments. Methodologically the thesis is based on an explorative study employing an abductive approach and hermeneutic-interpretative strategy utilizing tactics...

  12. Architectural Anthropology

    DEFF Research Database (Denmark)

    Stender, Marie

    -anthropology. Within the field of architecture, however, there has not yet been quite the same eagerness to include anthropological approaches in design processes. This paper discusses why this is so and how and whether architectural anthropology has different conditions and objectives than other types of design...... and other spaces that architects are preoccupied with. On the other hand, the distinction between architecture and design is not merely one of scale. Design and architecture represent – at least in Denmark – also quite different disciplinary traditions and methods. Where designers develop prototypes......, and that this will restrict the creative design process. Also, the end user of architecture is not easily identified, as a new building should not just accommodate the needs of specific residents but also those of neighbours, future residents, other citizens and maybe society as such. The paper explores the challenges...

  13. On the local optimal solutions of metabolic regulatory networks using information guided genetic algorithm approach and clustering analysis.

    Science.gov (United States)

    Zheng, Ying; Yeh, Chen-Wei; Yang, Chi-Da; Jang, Shi-Shang; Chu, I-Ming

    2007-08-31

    Biological information generated by high-throughput technology has made the systems approach feasible for many biological problems. With this approach, optimization of metabolic pathways has been successfully applied to amino acid production. However, in this technique, gene modifications of the metabolic control architecture and enzyme expression levels are coupled, resulting in a mixed integer nonlinear programming problem. Furthermore, the stoichiometric complexity of the metabolic pathway, along with the strongly nonlinear behaviour of the regulatory kinetic models, produces a highly rugged contour in the overall optimization problem. There may exist local optimal solutions that achieve the same level of production as the global optimum through different flux distributions. The purpose of this work is to develop a novel stochastic optimization approach, the information-guided genetic algorithm (IGA), to discover local optima with different levels of modification of the regulatory loop and different production rates. The novelties of this work include the use of information theory, local search, and clustering analysis to discover the local optima with physical meaning among the qualified solutions.
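The clustering step described above can be illustrated with a toy sketch: run a plain GA on a rugged one-dimensional landscape, then greedily cluster the final population so that each surviving representative stands for a distinct local optimum. The test landscape, parameters, and clustering radius are illustrative assumptions, not the paper's IGA.

```python
import math
import random

def fitness(x):
    # Rugged 1-D test landscape with several local optima.
    return math.sin(5 * x) + 0.5 * math.sin(17 * x)

def simple_ga(pop_size=60, gens=80, lo=0.0, hi=3.0):
    # Plain GA: tournament selection + Gaussian mutation.
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        new = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            parent = a if fitness(a) > fitness(b) else b
            new.append(min(hi, max(lo, parent + random.gauss(0, 0.05))))
        pop = new
    return pop

def cluster_solutions(solutions, radius=0.2):
    # Greedy distance-based clustering: each cluster keeps its best member,
    # so the surviving representatives approximate distinct local optima.
    reps = []
    for s in sorted(solutions, key=fitness, reverse=True):
        if all(abs(s - r) > radius for r in reps):
            reps.append(s)
    return reps

random.seed(0)
final_pop = simple_ga()
local_optima = cluster_solutions(final_pop)
```

By construction, the representatives are pairwise separated by more than the clustering radius, and the first one carries the best fitness found.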

  14. Biomimicry as an approach for sustainable architecture case of arid regions with hot and dry climate

    Science.gov (United States)

    Bouabdallah, Nabila; M'sellem, Houda; Alkama, Djamel

    2016-07-01

    This paper addresses the problem of thermal comfort inside buildings located in hot and arid climates. The principal idea behind this research is to use concepts based on the potential of nature as an instrument for creating building envelopes ("building skins") appropriate to the environment. Biomimetic architecture imitates nature through the study of the form, function, behaviour and ecosystems of biological organisms. This research aims to clarify the possibilities offered by biomimicry for developing bio-inspired building designs that enhance the indoor thermal environment of buildings in hot and dry climates, helping to achieve thermal comfort for users.

  15. Optimal precursors triggering the Kuroshio Extension state transition obtained by the Conditional Nonlinear Optimal Perturbation approach

    Science.gov (United States)

    Zhang, Xing; Mu, Mu; Wang, Qiang; Pierini, Stefano

    2017-06-01

    In this study, the initial perturbations that most easily trigger the Kuroshio Extension (KE) transition connecting a basic weak-jet state and a strong, fairly stable meandering state are investigated using a reduced-gravity shallow-water ocean model and the Conditional Nonlinear Optimal Perturbation (CNOP) approach. Such an initial perturbation is called an optimal precursor (OPR). The spatial structures and evolutionary processes of the OPRs are analyzed in detail. The results show that most of the OPRs take the form of negative sea surface height (SSH) anomalies mainly located in a narrow band south of the KE jet, in basic agreement with altimetric observations. These negative SSH anomalies reduce the meridional SSH gradient within the KE, thus weakening the jet. The KE jet then becomes more convoluted, with a high-frequency, large-amplitude variability corresponding to a high eddy kinetic energy level; this gradually strengthens the KE jet through an inverse energy cascade. Eventually, the KE reaches a high-energy state characterized by two well-defined and fairly stable anticyclonic meanders. Moreover, sensitivity experiments indicate that the spatial structures of the OPRs are not sensitive to the model parameters or to the optimization times used in the analysis.

  16. Approaches of Russian oil companies to optimal capital structure

    Science.gov (United States)

    Ishuk, T.; Ulyanova, O.; Savchitz, V.

    2015-11-01

    Oil companies play a vital role in the Russian economy. Demand for hydrocarbon products will increase over the coming decades along with population growth and social needs. Changing the raw-material orientation of the Russian economy and transitioning to an innovation-driven path of development do not exclude the development of the oil industry in the future. Moreover, society expects this sector to put the Russian economy on the road to innovative development through neo-industrialization. Achieving this requires both government action and effective capital management by companies. To form an optimal capital structure, it is necessary to minimize the cost of capital, reduce specific risks within existing limits, and maximize profitability. An analysis of the capital structures of Russian and foreign oil companies reveals different approaches, reasons and conditions and, consequently, different relationships between equity and debt capital and their costs, which demands an effective capital-management strategy.

  17. Optimization of decision rules based on dynamic programming approach

    KAUST Repository

    Zielosko, Beata

    2014-01-14

    This chapter is devoted to the study of an extension of the dynamic programming approach which allows optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure defined as the difference between the number of rows in a given decision table and the number of rows labeled with the most common decision for this table, divided by the number of rows in the decision table. We fix a threshold γ, such that 0 ≤ γ < 1, and study so-called γ-decision rules (approximate decision rules) that localize rows in subtables whose uncertainty is at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by pairs "attribute = value". The algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The chapter also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2014 Springer International Publishing Switzerland.
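The uncertainty measure described above is simple to state concretely; the sketch below computes it for a toy decision table and checks the stopping condition for one subtable. The table, attribute names, and threshold are made up for illustration.

```python
from collections import Counter

def uncertainty(rows):
    # rows: list of (attributes_dict, decision) pairs. Uncertainty is the
    # fraction of rows NOT labeled with the most common decision.
    if not rows:
        return 0.0
    counts = Counter(d for _, d in rows)
    most_common = counts.most_common(1)[0][1]
    return (len(rows) - most_common) / len(rows)

def subtable(rows, attr, value):
    # Rows of T satisfying the descriptor "attribute = value".
    return [(a, d) for a, d in rows if a[attr] == value]

table = [
    ({"f1": 0, "f2": 1}, "yes"),
    ({"f1": 0, "f2": 0}, "yes"),
    ({"f1": 1, "f2": 1}, "no"),
    ({"f1": 1, "f2": 0}, "yes"),
]

gamma = 0.0
# Partitioning of a subtable stops once its uncertainty is at most gamma;
# the subtable "f1 = 0" is pure, so no further descriptors are needed.
sub = subtable(table, "f1", 0)
```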

  18. Performance optimization of Jatropha biodiesel engine model using Taguchi approach

    Energy Technology Data Exchange (ETDEWEB)

    Ganapathy, T.; Murugesan, K.; Gakkhar, R.P. [Mechanical and Industrial Engineering Department, Indian Institute of Technology Roorkee, Roorkee 247 667 (India)

    2009-11-15

    This paper proposes a methodology for thermodynamic model analysis of a Jatropha biodiesel engine in combination with Taguchi's optimization approach to determine the optimum engine design and operating parameters. A thermodynamic model based on a two-zone Weibe heat release function has been employed to simulate Jatropha biodiesel engine performance. Among the important engine design and operating parameters, 10 critical parameters were selected, assuming interactions between pairs of parameters. Using linear graph theory and the Taguchi method, an L16 orthogonal array was used to lay out the engine test trials. To maximize the performance of the Jatropha biodiesel engine, the signal-to-noise ratio (SNR) for the higher-the-better (HTB) quality characteristic was used. The methodology correctly identified the compression ratio, Weibe heat release constants and combustion zone duration as the critical parameters affecting engine performance, compared with the other parameters. (author)
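The higher-the-better signal-to-noise ratio used to rank Taguchi trials is a standard statistic; a minimal sketch, with illustrative response values rather than actual engine data:

```python
import math

def snr_higher_the_better(responses):
    # SNR_HTB = -10 * log10( (1/n) * sum(1 / y_i^2) ); larger is better.
    n = len(responses)
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in responses) / n)

# Repeated runs of two hypothetical orthogonal-array trials
# (e.g. brake thermal efficiency in %; values are illustrative).
trial_a = [30.2, 30.8, 29.9]
trial_b = [27.5, 28.1, 27.9]

best = max([trial_a, trial_b], key=snr_higher_the_better)
```

A trial with uniformly higher responses gets the higher SNR, so `best` selects `trial_a` here.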

  19. Frost Formation: Optimizing solutions under a finite volume approach

    Science.gov (United States)

    Bartrons, E.; Perez-Segarra, C. D.; Oliet, C.

    2016-09-01

    A three-dimensional transient formulation of the frost formation process is developed by means of a finite volume approach. Emphasis is put on the frost surface boundary condition as well as the wide range of empirical correlations related to the thermophysical and transport properties of frost. A study of the numerical solution is made, establishing the parameters that ensure grid independence. Attention is given to the algorithm, the discretised equations and the code optimization through dynamic relaxation techniques. A critical analysis of four cases is carried out by comparing the solutions of several empirical models against tested experiments. As a result, the performance of these models is discussed and the most suitable ones are proposed.

  20. Multipurpose Water Reservoir Management: An Evolutionary Multiobjective Optimization Approach

    Directory of Open Access Journals (Sweden)

    Luís A. Scola

    2014-01-01

    Full Text Available The reservoirs that feed large hydropower plants should be managed so as to provide other uses for the water resources, including, for instance, flood control and avoidance, irrigation, and river navigability. This work presents an evolutionary multiobjective optimization approach for the study of multiple water uses in multiple interlinked reservoirs, including both power generation objectives and objectives not related to energy generation. The classical evolutionary algorithm NSGA-II is employed as the basic multiobjective optimization machinery, modified to cope with specific problem features. The case studies, which include the analysis of a problem with a river-navigability objective, illustrate the usefulness of the data generated by the proposed methodology for decision-making in the operation planning of multiple reservoirs with multiple uses. It is shown that the generated data can even be used to determine the cost of any new use of the water, in terms of the opportunity cost measured against the revenues from electric energy sales.
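At the heart of NSGA-II is the Pareto-dominance test; a minimal sketch of dominance and first-front extraction (minimization is assumed, and the policy objective values are illustrative, not reservoir data):

```python
def dominates(a, b):
    # a dominates b if a is no worse in every objective and strictly
    # better in at least one (minimization assumed).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    # The first Pareto front: points not dominated by any other point.
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (energy shortfall, flood risk) pairs for candidate operating policies.
policies = [(3.0, 1.0), (2.0, 2.0), (4.0, 4.0), (1.0, 3.0)]
front = nondominated(policies)
```

Only the policy that is worse in both objectives, (4.0, 4.0), is excluded from the front; the other three represent different tradeoffs.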

  1. Design optimization for cost and quality: The robust design approach

    Science.gov (United States)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.

  2. A Multiscale Approach to Optimal In Situ Bioremediation Design

    Science.gov (United States)

    Minsker, B. S.; Liu, Y.

    2001-12-01

    The use of optimization methods for in situ bioremediation design is quite challenging because the dynamics of bioremediation require that fine spatial and temporal scales be used in simulation, which substantially increases computational effort for optimization. In this paper, we present a multiscale approach that can be used to solve substantially larger-scale problems than previously possible. The multiscale method starts from a coarse mesh and proceeds to a finer mesh when it converges. While it is on a finer mesh, it switches back to a coarser mesh to calculate derivatives. The derivatives are then interpolated back to the finer mesh to approximate the derivatives on the finer mesh. To demonstrate the method, a four-level case study with 6,500 state variables is solved in less than 9 days, compared with nearly one year that would have been required using the original single-scale model. These findings illustrate that the multiscale method will allow solution of substantially larger-scale problems than previously possible, particularly since the method also enables easy parallelization of the model in the future.

  3. Dynamic programming approach for partial decision rule optimization

    KAUST Repository

    Amin, Talha

    2012-10-04

    This paper is devoted to the study of an extension of the dynamic programming approach which allows optimization of partial decision rules relative to length or coverage. We introduce an uncertainty measure J(T), which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules (partial decision rules) that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The graph Δγ(T) allows us to describe the whole set of so-called irredundant γ-decision rules. We can optimize such a set of rules according to length or coverage. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.

  4. Using tailored methodical approaches to achieve optimal science outcomes

    Science.gov (United States)

    Wingate, Lory M.

    2016-08-01

    The science community is actively engaged in the research, development, and construction of instrumentation projects that it anticipates will lead to new science discoveries. There appears to be a very strong link between the quality of the activities used to complete these projects and having a fully functioning science instrument that will facilitate these investigations.[2] The combined use of internationally recognized standards within the disciplines of project management (PM) and systems engineering (SE) has been demonstrated to lead to positive net effects and optimal project outcomes. Conversely, unstructured, poorly managed projects lead to unpredictable, suboptimal outcomes, ultimately affecting the quality of the science that can be done with the new instruments. The proposed application of these two methodical approaches, implemented as a tailorable suite of processes, is presented in this paper. Project management is accepted worldwide as an effective methodology for controlling project cost, schedule, and scope. Systems engineering is an accepted method for ensuring that the outcomes of a project match the intent of the stakeholders, or, if they diverge, that the changes are understood, captured, and controlled. An appropriate application, or tailoring, of these disciplines can be the foundation upon which success in projects that support science is optimized.

  5. An improved ant colony optimization approach for optimization of process planning.

    Science.gov (United States)

    Wang, JinFeng; Fan, XiaoLiang; Ding, Haimin

    2014-01-01

    Computer-aided process planning (CAPP) is an important interface between computer-aided design (CAD) and computer-aided manufacturing (CAM) in computer-integrated manufacturing environments (CIMs). In this paper, the process planning problem is described on a weighted graph, and an ant colony optimization (ACO) approach is improved to deal with it effectively. The weighted graph consists of nodes, directed arcs, and undirected arcs, which denote operations, precedence constraints among operations, and the possible paths between operations, respectively. The ant colony traverses the necessary nodes on the graph to achieve the optimal solution with the objective of minimizing total production cost (TPC). A pheromone-updating strategy comprising a Global Update Rule and a Local Update Rule is incorporated into the standard ACO. A simple method that controls the number of repetitions of the same process plan is designed to avoid local convergence. A case study has been carried out to examine the influence of the various ACO parameters on system performance. Extensive comparative experiments validate the feasibility and efficiency of the proposed approach.
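The two pheromone-update rules can be sketched as follows; the parameter values, τ0, and the tiny operation graph are illustrative assumptions, not values from the paper:

```python
RHO = 0.1     # evaporation rate (local rule)
ALPHA = 0.1   # deposit rate (global rule)
TAU0 = 1.0    # initial pheromone level

pheromone = {("op1", "op2"): TAU0, ("op2", "op3"): TAU0, ("op1", "op3"): TAU0}

def local_update(edge):
    # Applied as each ant crosses an edge: decay toward TAU0 so later
    # ants are nudged to explore other arcs.
    pheromone[edge] = (1 - RHO) * pheromone[edge] + RHO * TAU0

def global_update(best_plan_edges, best_cost):
    # Applied once per iteration, only along the best plan found so far:
    # reinforce proportionally to plan quality (lower TPC => bigger deposit).
    for edge in best_plan_edges:
        pheromone[edge] = (1 - ALPHA) * pheromone[edge] + ALPHA / best_cost

local_update(("op1", "op2"))
global_update([("op1", "op2"), ("op2", "op3")], best_cost=2.0)
```

After the global update, only the arcs on the best plan carry extra pheromone; the untouched arc keeps its initial level.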

  6. Application-Aware Optimization of Redundant Resources for the Reconfigurable Self-Healing eDNA Hardware Architecture

    DEFF Research Database (Denmark)

    Boesen, Michael Reibel; Madsen, Jan; Pop, Paul

    2011-01-01

    In this paper we are interested in the mapping of embedded applications on a dynamically reconfigurable self-healing hardware architecture known as the eDNA (electronic DNA) architecture. The architecture consists of an array of cells interconnected through a 2D-mesh topology. Each cell consists ...

  8. Securing cloud services a pragmatic approach to security architecture in the cloud

    CERN Document Server

    Newcombe, Lee

    2012-01-01

    This book provides an overview of security architecture processes and explains how they may be used to derive an appropriate set of security controls to manage the risks associated with working in the Cloud.

  9. Architecture and method for optimization of cloud resources used in software testing

    Directory of Open Access Journals (Sweden)

    Joana Coelho Vigário

    2016-03-01

    Full Text Available Nowadays, systems can evolve quickly; this growth involves, for example, the production of new features or even changes of system perspective required by stakeholders. These conditions require software testing in order to validate the systems. Running a large battery of tests sequentially can take hours. However, tests can run faster in a distributed environment with rapid availability of pre-configured systems, such as cloud computing. There is increasing demand for automation of the entire process, including integration, builds, running tests and management of cloud resources. This paper aims to demonstrate the applicability of the practice of continuous integration (CI) in information systems for automating builds and software testing performed in a distributed cloud-computing environment, in order to achieve optimization and elasticity of the resources provided by the cloud.

  10. Non-technical approach to the challenges of ecological architecture: Learning from Van der Laan

    OpenAIRE

    María-Jesús González-Díaz; Justo García-Navarro

    2016-01-01

    Up to now, ecology has had a strong influence on the development of the technical and instrumental aspects of architecture, such as renewable resources, the efficient use of resources and energy, CO2 emissions, air quality, water reuse, and some social and economic aspects. These concepts define the physical keys and codes of current 'sustainable' architecture, which is normally instrumental but rarely and insufficiently theorised. But is there not another way of bringing us closer to nature? We need a theoretical referent. This...

  11. Optimization of floodplain monitoring sensors through an entropy approach

    Science.gov (United States)

    Ridolfi, E.; Yan, K.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.; Russo, F.; Bates, P. D.

    2012-04-01

    To support the decision-making processes of flood risk management and long-term floodplain planning, a significant issue is the availability of data to build appropriate and reliable models. Often the data required for model building, calibration and validation are not sufficient or available. A unique opportunity is offered nowadays by globally available data, which can be freely downloaded from the internet. However, there remains the question of the real potential of these global remote sensing data, characterized by different accuracies, for global inundation monitoring, and of how to integrate them with inundation models. In order to monitor a reach of the River Dee (UK), a network of cheap wireless sensors (GridStix) was deployed both in the channel and in the floodplain. These sensors measure the water depth, supplying the input data for flood mapping. Besides their accuracy and reliability, their location represents a big issue: the network should provide as much information and as little redundancy as possible. In order to update their layout, the initial number of six sensors was increased to create a redundant network over the area. Through an entropy approach, the most informative and least redundant sensors were then chosen among all. First, a simple raster-based inundation model (LISFLOOD-FP) is used to generate a synthetic GridStix data set of water stages. The Digital Elevation Model (DEM) used for hydraulic model building is the globally and freely available SRTM DEM. Second, the information content of each sensor is compared by evaluating its marginal entropy. Sensors with a low marginal entropy are excluded from the process because of their low capability to provide information. The number of sensors is then optimized by considering a Multi-Objective Optimization Problem (MOOP) with two objectives, namely maximization of the joint entropy (a measure of the information content) and
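For discretized water stages, the marginal-entropy screening described above reduces to computing the Shannon entropy of each sensor's series; a minimal sketch with synthetic series (sensor names and values are illustrative, not the GridStix data):

```python
import math
from collections import Counter

def entropy(series):
    # Shannon entropy (bits) of a discretized series.
    counts = Counter(series)
    n = len(series)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Each sensor: water stage quantized into classes over 8 time steps.
sensors = {
    "s1": [0, 1, 2, 1, 0, 2, 1, 0],   # varies a lot -> informative
    "s2": [1, 1, 1, 1, 1, 1, 1, 1],   # constant -> uninformative
    "s3": [0, 0, 1, 1, 0, 0, 1, 1],
}

# Rank sensors by marginal entropy; low-entropy sensors are candidates
# for exclusion before the joint-entropy optimization step.
ranked = sorted(sensors, key=lambda s: entropy(sensors[s]), reverse=True)
```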

  12. Modular Power Architectures for Microgrid Clusters

    DEFF Research Database (Denmark)

    Lin, Hengwei; Liu, Leo; Guerrero, Josep M.

    2014-01-01

    . The user-frame concept proposed here when designing microgrids considers that the end-user is the basis for the geographical deployment. Meanwhile, a modular user-oriented approach is adopted in order to enhance reliability and expansibility. Finally, a unified dispatching and hierarchical management...... approach is proposed and evaluated to effectively optimize and manage modular microgrid architectures....

  13. Architectural optimizations for low-power K-best MIMO decoders

    KAUST Repository

    Mondal, Sudip

    2009-09-01

    Maximum-likelihood (ML) detection for higher order multiple-input-multiple-output (MIMO) systems faces a major challenge in computational complexity. This limits the practicality of these systems from an implementation point of view, particularly for mobile battery-operated devices. In this paper, we propose a modified approach for MIMO detection, which takes advantage of the quadratic-amplitude modulation (QAM) constellation structure to accelerate the detection procedure. This approach achieves low-power operation by extending the minimum number of paths and reducing the number of required computations for each path extension, which results in an order-of-magnitude reduction in computations in comparison with existing algorithms. This paper also describes the very-large-scale integration (VLSI) design of the low-power path metric computation unit. The approach is applied to a 4 × 4, 64-QAM MIMO detector system. Results show negligible performance degradation compared with conventional algorithms while reducing the complexity by more than 50%. © 2009 IEEE.
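A K-best detector keeps only the K lowest-metric partial paths at each level of the detection tree; a toy real-valued sketch on an upper-triangular channel (not the paper's 4 × 4 64-QAM design; the channel matrix, received vector, and K are assumptions):

```python
CONSTELLATION = [-3, -1, 1, 3]   # real PAM levels of one QAM axis

def k_best_detect(R, y, K):
    # Breadth-first search from the last layer upward, keeping the K partial
    # paths with the smallest accumulated Euclidean metric at each level.
    n = len(y)
    paths = [([], 0.0)]          # (symbols for levels level..n-1, metric)
    for level in range(n - 1, -1, -1):
        extended = []
        for symbols, metric in paths:
            for s in CONSTELLATION:
                cand = [s] + symbols
                # Interference from already-fixed symbols on this level.
                est = sum(R[level][level + j] * cand[j] for j in range(len(cand)))
                extended.append((cand, metric + (y[level] - est) ** 2))
        paths = sorted(extended, key=lambda p: p[1])[:K]
    return paths[0][0]

R = [[1.0, 0.5],
     [0.0, 1.0]]                 # upper-triangular channel after QR
y = [0.5, -1.0]                  # noiseless observation of x = [1, -1]
detected = k_best_detect(R, y, K=2)
```

With K equal to the constellation size the search is exhaustive per level; smaller K trades a small error-rate loss for far fewer metric computations, which is the source of the power savings discussed above.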

  14. Optimal placement of dampers and actuators based on stochastic approach

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A general method is developed for the optimal application of dampers and actuators by installing them at optimal locations on seismic-resistant structures. The study includes the development of a statistical criterion, the formulation of a general optimization problem and the establishment of a solution procedure. Numerical analysis of the time-history seismic response of controlled structures is used to verify the proposed method and to demonstrate the effectiveness of seismic response control with optimal device locations. This study shows that the proposed method is simple and general, and that optimally applied dampers and actuators are very efficient for seismic response reduction.

  15. A Survey Of Architectural Approaches for Managing Embedded DRAM and Non-volatile On-chip Caches

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL; Vetter, Jeffrey S [ORNL; Li, Dong [ORNL

    2014-01-01

    Recent trends of CMOS scaling and the increasing number of on-chip cores have led to a large increase in the size of on-chip caches. Since SRAM has low density and consumes a large amount of leakage power, its use in designing on-chip caches has become more challenging. To address this issue, researchers are exploring the use of several emerging memory technologies, such as embedded DRAM, spin-transfer torque RAM, resistive RAM, phase-change RAM and domain wall memory. In this paper, we survey the architectural approaches proposed for designing memory systems and, specifically, caches with these emerging memory technologies. To highlight their similarities and differences, we present a classification of these technologies and architectural approaches based on their key characteristics. We also briefly summarize the challenges in using these technologies for architecting caches. We believe that this survey will help readers gain insight into the emerging memory technologies and their potential use in designing future computing systems.

  16. Optimal Control Approaches to the Aggregate Production Planning Problem

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2015-12-01

    Full Text Available In the area of production planning and control, the aggregate production planning (APP) problem represents a great challenge for decision makers in production-inventory systems. The tradeoff between inventory and capacity is known as the APP problem. To address it, static and dynamic models have been proposed, which in general have several shortcomings. The premise of this paper is that the main drawback of these proposals is that they do not take into account the dynamic nature of the APP. For this reason, we propose an Optimal Control (OC) formulation via energy-based and Hamiltonian-present-value approaches. The main contribution of this paper is a mathematical model that integrates a second-order dynamical system coupled with a first-order system, incorporating production rate, inventory level and capacity, together with the associated workforce cost, in the same formulation. A further novel result is that the Hamiltonian-present-value OC formulation reduces the inventory level compared with the pure energy-based approach to the APP. A set of simulations is provided which verifies the theoretical contribution of this work.

  17. Optimizing Concurrent M3-Transactions: A Fuzzy Constraint Satisfaction Approach

    Directory of Open Access Journals (Sweden)

    Peng LI

    2004-10-01

    Full Text Available Due to their high connectivity and great convenience, many E-commerce application systems have a high transaction volume. Consequently, the system state changes rapidly, and customers are likely to issue transactions based on out-of-date state information, which greatly increases the potential for transaction abortion. To address this problem, we proposed an M3-transaction model. An M3-transaction is a generalized transaction in which users can express their preferences by specifying multiple criteria and optional data resources simultaneously within one request. In this paper, we introduce transaction grouping and group evaluation techniques: M3-transactions that arrive at the system within a short duration are evaluated together, and the system makes optimal decisions in allocating data to transactions to achieve better customer satisfaction and a lower transaction failure rate. We apply a fuzzy constraint satisfaction approach for decision-making and conduct experimental studies to evaluate the performance of our approach. The results show that the M3-transaction model with group evaluation is more resilient to failure and yields much better performance than the traditional transaction model.
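
    The core allocation idea can be sketched as a tiny fuzzy constraint satisfaction problem: each transaction expresses a satisfaction degree in [0, 1] for each optional resource, and the system picks the allocation maximizing the minimum degree (the standard fuzzy conjunction). The transactions, resources and degrees below are invented, and the brute-force search stands in for whatever solver the paper actually uses.

```python
# Hedged sketch of min-based fuzzy CSP allocation; all data invented.
from itertools import permutations

# satisfaction[t][r]: degree in [0, 1] that transaction t assigns to resource r
satisfaction = {
    "T1": {"r1": 0.9, "r2": 0.6},
    "T2": {"r1": 0.8, "r2": 0.3},
}

def best_allocation(satisfaction):
    txns = list(satisfaction)
    resources = sorted({r for prefs in satisfaction.values() for r in prefs})
    best, best_min = None, -1.0
    for assign in permutations(resources, len(txns)):
        degrees = [satisfaction[t].get(r, 0.0) for t, r in zip(txns, assign)]
        worst = min(degrees)            # fuzzy conjunction: min operator
        if worst > best_min:
            best, best_min = dict(zip(txns, assign)), worst
    return best, best_min

best, best_min = best_allocation(satisfaction)
```

Note the group effect: greedily giving T1 its favourite r1 would leave T2 with degree 0.3, whereas the joint evaluation finds the allocation whose worst-served transaction is still at 0.6.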

  18. Time-Saving Approach for Optimal Mining of Association Rules

    Directory of Open Access Journals (Sweden)

    Mouhir Mohammed

    2016-10-01

    Full Text Available Data mining is the process of analyzing data so as to extract useful information to be exploited by users. Association rule mining is one of the data mining techniques used to detect correlations and reveal relationships among individual data items in huge databases. These rules usually take the form "if X then Y", where X and Y are independent attribute sets. Association rules have become a popular technique in several vital fields of activity such as insurance, medicine, banking and retail. They are generated in huge numbers by algorithms known as association rule mining algorithms, and producing such huge quantities of rules can be time- and effort-consuming; hence there is an urgent need for an efficient, scalable approach that mines only the relevant and significant association rules. This paper proposes an innovative approach that mines the optimal rules from a large set of association rules in a distributed processing way, improving efficiency and decreasing running time.
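
    The rule-filtering step can be illustrated with a toy miner: count itemset supports, generate single-item "if X then Y" rules, and keep only those passing support, confidence and lift thresholds. The transactions and thresholds below are invented; this sketches the generic filtering idea, not the paper's distributed algorithm.

```python
# Hedged sketch of association-rule generation and filtering; data invented.
from itertools import combinations
from collections import Counter

transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter", "bread"},
    {"milk", "beer"},
]

def mine_rules(transactions, min_support=0.5, min_conf=0.6):
    n = len(transactions)
    counts = Counter()
    for t in transactions:              # support counts for 1- and 2-itemsets
        for size in (1, 2):
            for itemset in combinations(sorted(t), size):
                counts[itemset] += 1
    rules = []
    for (a, b), cnt in [(k, v) for k, v in counts.items() if len(k) == 2]:
        support = cnt / n
        if support < min_support:
            continue
        for x, y in ((a, b), (b, a)):   # rule "if x then y"
            conf = cnt / counts[(x,)]
            lift = conf / (counts[(y,)] / n)
            if conf >= min_conf:
                rules.append((x, y, support, conf, lift))
    return rules

rules = mine_rules(transactions)
```

Only a handful of the candidate rules survive the thresholds, which is exactly the pruning the abstract argues for at scale.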

  19. Optimizing algal cultivation & productivity : an innovative, multidiscipline, and multiscale approach.

    Energy Technology Data Exchange (ETDEWEB)

    Murton, Jaclyn K.; Hanson, David T. (University of New Mexico, Albuquerque, NM); Turner, Tom (University of New Mexico, Albuquerque, NM); Powell, Amy Jo; James, Scott Carlton (Sandia National Laboratories, Livermore, CA); Timlin, Jerilyn Ann; Scholle, Steven (University of New Mexico, Albuquerque, NM); August, Andrew (Sandia National Laboratories, Livermore, CA); Dwyer, Brian P.; Ruffing, Anne; Jones, Howland D. T.; Ricken, James Bryce; Reichardt, Thomas A. (Sandia National Laboratories, Livermore, CA)

    2010-04-01

    Progress in algal biofuels has been limited by significant knowledge gaps in algal biology, particularly as they relate to scale-up. To address this, we are investigating how culture composition dynamics (light as well as biotic and abiotic stressors) describe key biochemical indicators of algal health: growth rate, photosynthetic electron transport, and lipid production. Our approach combines traditional algal physiology with genomics, bioanalytical spectroscopy, chemical imaging, remote sensing, and computational modeling to provide an improved fundamental understanding of algal cell biology across multiple culture scales. This work spans investigations from the single-cell level to ensemble measurements of algal cell cultures at the laboratory benchtop and at the large greenhouse scale (175 gal). We will discuss the advantages of this novel, multidisciplinary strategy and emphasize the importance of developing an integrated toolkit to provide sensitive, selective methods for detecting early fluctuations in algal health, productivity, and population diversity. Progress in several areas will be summarized, including identification of spectroscopic signatures for algal culture composition, stress level, and lipid production, enabled by non-invasive spectroscopic monitoring of the photosynthetic and photoprotective pigments at the single-cell and bulk-culture scales. Early experiments compare and contrast the well-studied green alga Chlamydomonas with two potential production strains of microalgae, Nannochloropsis and Dunaliella, under optimal and stressed conditions. This integrated approach has the potential for broad impact on algal biofuels and bioenergy, and several of these opportunities will be discussed.

  1. Optimizing the building envelopes with green roofs : a discussion of architectural and energy performance requirements

    Energy Technology Data Exchange (ETDEWEB)

    Hagerman, J. [Columbia Univ., New York, NY (United States). Dept. of Civil Engineering]|[Rafael Vinoly Architects, New York, NY (United States); Hodge, D. [Rafael Vinoly Architects, New York, NY (United States)

    2006-07-01

    This paper provided recommendations for optimized green roof technologies inspired by an architect firm's involvement in designing a 255,000 square foot green roof on top of the Howard Hughes Medical Institute's Janelia Farm Research Campus in Virginia. During the course of the green roof construction and installation, the architects found that green roofs needed design flexibility to meet their conceptual design requirements. It was suggested that the use of a modular system might allow for easier inspection access as well as the ability for the planting material to be reconfigured. It was noted that green roof systems can sometimes conflict with water management strategies of the building envelope. Green roof component lists do not make reference to the layers of construction within the building envelope, as it is often assumed that they are irrelevant to green roof design. Modular products offer architects flexibility in design and maintenance, and products can be incorporated into more sophisticated water management details, offering simplicity of design, ease of installation, and ease of roof membrane inspection. A thermal analysis of modular and monolithic roof assemblies was conducted which showed that the assemblies contributed very little to the overall thermal insulation envelope when the positive thermal benefits of the green roof failed. It was recommended that green roof installations should be designed to sit directly on top of the roof membranes to replace the building's insulation envelope. Foamglas was proposed as a material for building insulation and to prevent root penetration. An evaluation of the R-values of various green roof systems at failure was also provided to give guidance to architects incorporating green roofs in building envelopes. 3 refs., 3 tabs., 8 figs.

  2. Optimization-Based Approaches to Control of Probabilistic Boolean Networks

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2017-02-01

    Full Text Available Control of gene regulatory networks is one of the fundamental topics in systems biology. In the last decade, the control theory of Boolean networks (BNs), a well-known model of gene regulatory networks, has been widely studied. In this review paper, our previously proposed methods for optimal control of probabilistic Boolean networks (PBNs) are introduced. First, the outline of PBNs is explained. Next, an optimal control method using polynomial optimization is explained, in which the finite-time optimal control problem is reduced to a polynomial optimization problem. Furthermore, another finite-time optimal control problem, which can be reduced to an integer programming problem, is also explained.
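
    To make the finite-time control problem concrete, here is a toy deterministic Boolean network, small enough that the optimal input sequence can be found by exhaustive search rather than the polynomial or integer programming reductions the paper develops. The two-gene update rules, the control input and the intervention cost are all invented for the example.

```python
# Hedged sketch: brute-force finite-time optimal control of an invented
# 2-gene Boolean network with one binary control input u.
from itertools import product

def step(x, u):
    x1, x2 = x
    return (x2 or u, x1)              # invented rules: x1' = x2 OR u, x2' = x1

def optimal_control(x0, target, horizon):
    best_cost, best_seq = None, None
    for seq in product((0, 1), repeat=horizon):   # all input sequences
        x = x0
        for u in seq:
            x = step(x, u)
        cost = sum(seq)                            # price of each intervention
        if x == target and (best_cost is None or cost < best_cost):
            best_cost, best_seq = cost, seq
    return best_cost, best_seq

cost, seq = optimal_control(x0=(0, 0), target=(1, 1), horizon=3)
```

With $2^T$ input sequences the search explodes with the horizon, which is why the paper's reductions to polynomial optimization and integer programming matter for anything beyond toy networks.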

  3. Tehran Water Museum with the Performance-Oriented Approach to Bionic Architecture

    Directory of Open Access Journals (Sweden)

    Faeghe Farokhizad

    2017-01-01

    Full Text Available In nature, form and function arise through a process of internal growth and creation that is perceived as instinctive; the most basic commitment to life reveals itself in the form of materials. Architectural form takes shape to the beat and rhythm of invisible life: it is a process that gives the project its structure and the structure its plan. Every living organism is driven by an unchanging force toward more efficient form and function. In the natural realm it is essential that "performance" be defined as a process and a relationship, and "form" as the result of that process. Forms that interact with nature take shape in the direction of performance, matching their relationship with the wider environment and the surrounding territory, and so methods and new ideas can be learned from nature. Architecture is generally defined as imagining, designing, understanding and building according to circumstances, which may reflect, to varying degrees, the economic, political and social dimensions of a project. In any case, the status quo does not seem satisfying, and for this reason we seek a new agreement, which we call the "answer to the question". In this study, therefore, we attempt to establish a performance-oriented architecture based on nature-inspired design and natural patterns, as defined by Vitruvius.

  4. A Multiobjective Approach for the Heuristic Optimization of Compactness and Homogeneity in the Optimal Zoning

    Directory of Open Access Journals (Sweden)

    B. Bernábe-Loranca

    2012-06-01

    Full Text Available This paper presents a multiobjective methodology for optimal zoning design (OZ), based on the grouping of geographic data with characteristics of territorial aggregation. The two objectives considered are the minimization of the geometric compactness on the geographical location of the data and the homogeneity of any of the descriptive variables. Since this problem is NP-hard [1], our proposal provides an approximate solution taking into account properties of partitioning algorithms and design restrictions for territorial space. Approximate solutions are generated through the set of optimum values (Maxima) and the corresponding minimals (dual Minima) [2] of the bi-objective function using Variable Neighborhood Search (VNS) [3] and the Pareto order defined over this set of values. The results obtained by our proposed approach constitute good solutions and are generated in a reasonably low computational time.

  5. Topology Optimization using a Topology Description Function Approach

    NARCIS (Netherlands)

    de Ruiter, M.J.

    2005-01-01

    During the last two decades, computational structural optimization methods have emerged, as computational power increased tremendously. Designers now have topological optimization routines at their disposal. These routines are able to generate the entire geometry of structures, provided only with in

  6. Tectonic Thinking in Contemporary Industrialized Architecture

    OpenAIRE

    Dell, Anne

    2013-01-01

    This paper argues for a new critical approach to the ways architectural design strategies are developing. Contemporary construction industry appears to evolve into highly specialized and optimized processes driven by industrialized manufacturing, therefore the role of the architect and the understanding of the architectural design process ought to be revised. The paper is based on the following underlying hypothesis: ‘Tectonic thinking – defined as a central attention towards the nature, the ...

  7. Architectural geometry

    KAUST Repository

    Pottmann, Helmut

    2014-11-26

    Around 2005 it became apparent in the geometry processing community that freeform architecture contains many problems of a geometric nature to be solved, and many opportunities for optimization which however require geometric understanding. This area of research, which has been called architectural geometry, meanwhile contains a great wealth of individual contributions which are relevant in various fields. For mathematicians, the relation to discrete differential geometry is significant, in particular the integrable system viewpoint. Besides, new application contexts have become available for a number of long-established concepts. Regarding graphics and geometry processing, architectural geometry yields interesting new questions but also new objects, e.g. replacing meshes by other combinatorial arrangements. Numerical optimization plays a major role but in itself would be powerless without geometric understanding. Summing up, architectural geometry has become a rewarding field of study. We here survey the main directions which have been pursued, we show real projects where geometric considerations have played a role, and we outline open problems which we think are significant for the future development of both theory and practice of architectural geometry.

  8. Optimizing Thermal-Elastic Properties of C/C–SiC Composites Using a Hybrid Approach and PSO Algorithm

    Directory of Open Access Journals (Sweden)

    Yingjie Xu

    2016-03-01

    Full Text Available Carbon fiber-reinforced multi-layered pyrocarbon–silicon carbide matrix (C/C–SiC) composites are widely used in aerospace structures. The complicated spatial architecture and material heterogeneity of C/C–SiC composites constitute a challenge for tailoring their properties. Thus, discovering the intrinsic relations between properties and microstructures, and then optimizing the microstructures to obtain composites with the best performance, becomes the key for practical applications. The objective of this work is to optimize the thermal-elastic properties of unidirectional C/C–SiC composites by controlling the multi-layered matrix thicknesses. A hybrid approach based on micromechanical modeling and back propagation (BP) neural networks is proposed to predict the thermal-elastic properties of composites. Then, a particle swarm optimization (PSO) algorithm is interfaced with this hybrid model to achieve the optimal design for minimizing the coefficient of thermal expansion (CTE) of composites under a constraint on the elastic modulus. Numerical examples demonstrate the effectiveness of the proposed hybrid model and optimization method.
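
    The optimization stage alone can be sketched with a plain particle swarm. In the paper, PSO drives a micromechanics-plus-BP-network surrogate; here the `cte()` objective, the `modulus()` constraint and the penalty weight are made-up placeholders standing in for the layer-thickness design problem.

```python
# Hedged sketch of constrained PSO; objective, constraint and numbers invented.
import random

def cte(x):                                   # stand-in "CTE" to minimize
    return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2

def modulus(x):                               # stand-in stiffness constraint
    return 10 * x[0] + 5 * x[1]

def penalized(x, e_min=5.0):                  # quadratic penalty if E < e_min
    return cte(x) + 100.0 * max(0.0, e_min - modulus(x)) ** 2

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.random() for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = pbest[min(range(n), key=lambda i: pval[i])][:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):              # standard velocity update
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (g[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < f(g):
                    g = pos[i][:]
    return g

best = pso(penalized)                         # converges near (0.3, 0.7)
```

Swapping `cte()` for a trained surrogate (the paper's BP network) leaves the swarm loop untouched, which is the appeal of the hybrid design.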

  9. Assay optimization: a statistical design of experiments approach.

    Science.gov (United States)

    Altekar, Maneesha; Homon, Carol A; Kashem, Mohammed A; Mason, Steven W; Nelson, Richard M; Patnaude, Lori A; Yingling, Jeffrey; Taylor, Paul B

    2007-03-01

    With the transition from manual to robotic HTS in the last several years, assay optimization has become a significant bottleneck. Recent advances in robotic liquid handling have made it feasible to reduce assay optimization timelines with the application of statistically designed experiments. When implemented, they can efficiently optimize assays by rapidly identifying significant factors, complex interactions, and nonlinear responses. This article focuses on the use of statistically designed experiments in assay optimization.
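
    The flavour of a statistically designed experiment can be shown with a 2-level full factorial design. The three factors, their names and the deterministic stand-in response below are invented, not the article's assay; the point is how main effects fall out of the design by contrasting high-level and low-level runs.

```python
# Hedged sketch of a 2^3 full factorial design; factors and response invented.
from itertools import product

FACTORS = ["enzyme", "substrate", "time"]

def response(levels):                 # invented ground truth (no noise)
    a, b, c = levels
    return 50 + 10 * a + 4 * b - 2 * c

def main_effects():
    runs = [dict(zip(FACTORS, lv), y=response(lv))
            for lv in product((-1, +1), repeat=len(FACTORS))]
    effects = {}
    for f in FACTORS:                 # effect = mean(high) - mean(low)
        hi = [r["y"] for r in runs if r[f] == +1]
        lo = [r["y"] for r in runs if r[f] == -1]
        effects[f] = sum(hi) / len(hi) - sum(lo) / len(lo)
    return effects

effects = main_effects()   # {'enzyme': 20.0, 'substrate': 8.0, 'time': -4.0}
```

Eight runs rank all three factors at once, which is the throughput argument the article makes for designed experiments over one-factor-at-a-time optimization.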

  10. An optimal adder-based hardware architecture for the DCT/SA-DCT

    Science.gov (United States)

    Kinane, Andrew; Muresan, Valentin; O'Connor, Noel

    2005-07-01

    The explosive growth of the mobile multimedia industry has accentuated the need for efficient VLSI implementations of the associated computationally demanding signal processing algorithms. This need becomes greater as end-users demand increasingly enhanced features and more advanced underpinning video analysis. One such feature is object-based video processing as supported by the MPEG-4 core profile, which allows content-based interactivity. MPEG-4 has many computationally demanding underlying algorithms, an example of which is the Shape Adaptive Discrete Cosine Transform (SA-DCT). The dynamic nature of the SA-DCT processing steps poses significant VLSI implementation challenges, and many of the previously proposed approaches use area- and power-consumptive multipliers. Most also ignore the subtleties of the packing steps and the manipulation of the shape information. We propose a new multiplier-less serial datapath based solely on adders and multiplexers to improve area and power. The adder cost is minimised by employing resource re-use methods. The number of (physical) adders used has been derived using a common sub-expression elimination algorithm. Additional energy efficiency is factored into the design by employing guarded evaluation and local clock gating. Our design implements the SA-DCT packing with minimal switching using efficient addressing logic with a transpose memory RAM. The entire design has been synthesized using TSMC 0.09 µm TCBN90LP technology, yielding a gate count of 12028 for the datapath and its control logic.

  11. Organic solar cells under the BHJ approach using conventional/inverted architectures

    Science.gov (United States)

    Salinas, J. F.; Salto, C.; Maldonado, J. L.; Ramos-Ortíz, G.; Rodríguez, M.; Meneses-Nava, M. A.; Barbosa-García, Oracio; Farfán, N.; Santillan, R.

    2011-08-01

    The search for clean and renewable energy sources is one of the most important challenges that mankind confronts. Recently there has been notable interest in developing organic photovoltaic (OPV) technology as a renewable energy source, since it combines low cost and easy fabrication. Most efforts have been directed at increasing efficiency, leaving aside the durability of the organic materials; however, a new architecture known as the inverted solar cell might bring unprecedented durability (years) that could make large-scale applications of this technology possible. Here we present the results achieved using both the conventional and inverted architectures, employing as organic donor (D) the well-known semiconducting polymer P3HT in mixtures with the acceptor (A) fullerene PC61BM. The morphology of thin polymer films prepared using the spin coating technique was analyzed by AFM. For the conventional architecture the cells were fabricated following the structure ITO/PEDOT:PSS/P3HT:PC61BM/Wood's metal, where the Wood's metal cathode is an alloy that melts at 75 °C. For the inverted architecture the structure ITO/ZnO/P3HT:PC61BM/PEDOT:PSS/(Ag, Cu or silver paint) was used, where ITO worked as cathode by switching its work function through the introduction of ZnO nanoparticles. Under tests using Xenon lamp irradiation at 100 mW/cm2, the conventional and inverted architectures produced efficiencies of 1.75% and 0.5%, respectively. For both architectures the chosen back-contact materials (Wood's metal and silver paint) allowed us to easily fabricate the OPV cells without the need for vacuum steps.

  12. A systematic approach: optimization of healthcare operations with knowledge management.

    Science.gov (United States)

    Wickramasinghe, Nilmini; Bali, Rajeev K; Gibbons, M Chris; Choi, J H James; Schaffer, Jonathan L

    2009-01-01

    Effective decision making is vital in all healthcare activities. While this decision making is typically complex and unstructured, it requires the decision maker to gather multispectral data and information in order to make an effective choice when faced with numerous options. Unstructured decision making in dynamic and complex environments is challenging, and in almost every situation the decision maker is undoubtedly faced with information inferiority. The need for germane knowledge, pertinent information and relevant data is critical, and hence the value of harnessing knowledge and embracing the tools, techniques, technologies and tactics of knowledge management is essential to ensuring efficiency and efficacy in the decision making process. The systematic approach and application of knowledge management (KM) principles and tools can provide the necessary foundation for improving the decision making processes in healthcare. A combination of Boyd's OODA Loop (Observe, Orient, Decide, Act) and the Intelligence Continuum provides an integrated, systematic and dynamic model for ensuring that the healthcare decision maker is always provided with the appropriate and necessary knowledge elements that will help to ensure that healthcare decision making process outcomes are optimized for maximal patient benefit. The example of orthopaedic operating room processes illustrates the application of the integrated model to support effective decision making in the clinical environment.

  13. New Approaches to HSCT Multidisciplinary Design and Optimization

    Science.gov (United States)

    Schrage, Daniel P.; Craig, James I.; Fulton, Robert E.; Mistree, Farrokh

    1999-01-01

    New approaches to MDO have been developed and demonstrated during this project on a particularly challenging aeronautics problem: HSCT aeroelastic wing design. Tackling this problem required the integration of resources and collaboration from three Georgia Tech laboratories, ASDL, SDL, and PPRL, along with close coordination and participation from industry. Its success can also be attributed to the close interaction and involvement of fellows from the NASA Multidisciplinary Analysis and Optimization (MAO) program, which was going on in parallel and provided additional resources to work the very complex, multidisciplinary problem along with the methods being developed. The development of the Integrated Design Engineering Simulator (IDES) and its initial demonstration is a necessary first step in transitioning the methods and tools developed to larger, industrial-sized problems of interest. It also provides a framework for the implementation and demonstration of the methodology. Attachment: Appendix A - List of publications. Appendix B - Year 1 report. Appendix C - Year 2 report. Appendix D - Year 3 report. Appendix E - accompanying CDROM.

  14. A multiscale optimization approach to detect exudates in the macula.

    Science.gov (United States)

    Agurto, Carla; Murray, Victor; Yu, Honggang; Wigdahl, Jeffrey; Pattichis, Marios; Nemeth, Sheila; Barriga, E Simon; Soliz, Peter

    2014-07-01

    Pathologies that occur on or near the fovea, such as clinically significant macular edema (CSME), represent high risk for vision loss. The presence of exudates, lipid residues of serous leakage from damaged capillaries, has been associated with CSME, in particular if they are located one optic disc-diameter away from the fovea. In this paper, we present an automatic system to detect exudates in the macula. Our approach uses optimal thresholding of instantaneous amplitude (IA) components that are extracted from multiple frequency scales to generate candidate exudate regions. For each candidate region, we extract color, shape, and texture features that are used for classification. Classification is performed using partial least squares (PLS). We tested the performance of the system on two different databases of 652 and 400 images. The system achieved an area under the receiver operator characteristic curve (AUC) of 0.96 for the combination of both databases and an AUC of 0.97 for each of them when they were evaluated independently.
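
    The paper thresholds multiscale instantaneous-amplitude components; as a stand-in for that optimal-thresholding step, here is Otsu's classic criterion, which picks the grayscale cut maximizing between-class variance. The toy pixel values are invented, with a few bright "exudate-like" outliers on a dark background.

```python
# Hedged sketch: Otsu's optimal threshold as a stand-in for the paper's
# IA-component thresholding. Pixel data invented.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(1, levels):            # candidate cuts
        w0 = sum(hist[:t])                # class sizes
        w1 = n - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = sum(i * hist[i] for i in range(t)) / w0
        m1 = sum(i * hist[i] for i in range(t, levels)) / w1
        var = w0 * w1 * (m0 - m1) ** 2    # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

t = otsu_threshold([10, 12, 11, 200, 210, 205])   # lands between the clusters
```

In the paper this kind of cut is applied per frequency scale to generate candidate regions, which are then classified by PLS on colour, shape and texture features.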

  15. Designing with Space Syntax: A configurative approach to architectural layout, proposing a computational methodology

    NARCIS (Netherlands)

    Nourian, P.; Rezvani, S.; Sariyildiz, I.S.

    2013-01-01

    This paper introduces a design methodology and a toolkit developed as a parametric CAD program for configurative design of architectural plan layouts. Using this toolkit, designers can start plan layout process with sketching the way functional spaces need to connect to each other. A tool draws an i

  16. A Contribution to the phono-kinetic approach: An architectural experimentation to design a public shelter

    Directory of Open Access Journals (Sweden)

    Gregoire Chelkoff

    2011-12-01

    Full Text Available Despite the current evolution of several tools that help in understanding, representing and constructing a better awareness of sound, and more generally of the sensory qualities of the built environment, the integration of the sonic dimension into ordinary architectural production remains difficult. The sonic dimension is still undervalued, considered an accessory or a secondary dimension of space. Until now, sonic quality has had no legitimacy in architectural and urban thinking except in specific situations where the designer must solve particular problems of noise and propagation. We should therefore find appropriate qualitative criteria for architectural elements that integrate not only perceptive aspects but also action. In this sense, we present an experiment elaborated to clarify the relationship between sound and movement, or sound and action, in order to design spatial architectural elements. How can we identify the role of sound during spatial experience and explore the different corporal and movement modalities that emerge through hearing? We postulate that the action potentials that emerge in a sonic context must be considered as an alternative that can modulate the environment in space and time [1]. [1] This paper is based on another article: Approche écologique de kinesthèses sonores : expérimentation d'un prototype d'abri public et ergonomie acoustique, Acoustique et techniques, n° 41, 2005, pp. 24-31.

  17. A Project-Based Learning Approach to Programmable Logic Design and Computer Architecture

    Science.gov (United States)

    Kellett, C. M.

    2012-01-01

    This paper describes a course in programmable logic design and computer architecture as it is taught at the University of Newcastle, Australia. The course is designed around a major design project and has two supplemental assessment tasks that are also described. The context of the Computer Engineering degree program within which the course is…

  18. Impact of Evolution of Concerns in the Model-Driven Architecture Design Approach

    NARCIS (Netherlands)

    Tekinerdogan, Bedir; Aksit, Mehmet; Henninger, Francis

    2007-01-01

    Separation of concerns is an important principle for designing high quality software systems and is both applied in the Model-Driven Architecture (MDA) and Aspect-Oriented Software Development (AOSD). The AOSD and MDA techniques seem to be complementary to each other; historically AOSD has focused o

  19. Information security architecture an integrated approach to security in the organization

    CERN Document Server

    Killmeyer, Jan

    2006-01-01

    Information Security Architecture, Second Edition incorporates the knowledge developed during the past decade that has pushed the information security life cycle from infancy to a more mature, understandable, and manageable state. It simplifies security by providing clear and organized methods and by guiding you to the most effective resources available.

  20. Understanding the Value of Enterprise Architecture for Organizations: A Grounded Theory Approach

    Science.gov (United States)

    Nassiff, Edwin

    2012-01-01

    There is a high rate of information system implementation failures attributed to the lack of alignment between business and information technology strategy. Although enterprise architecture (EA) is a means to correct alignment problems and executives highly rate the importance of EA, it is still not used in most organizations today. Current…

  1. Novel genomic approaches unravel genetic architecture of complex traits in apple.

    NARCIS (Netherlands)

    Kumar, S.; Garrick, D.J.; Bink, M.C.A.M.; Whitworth, C.; Chagné, D.

    2013-01-01

    BACKGROUND: Understanding the genetic architecture of quantitative traits is important for developing genome-based crop improvement methods. Genome-wide association study (GWAS) is a powerful technique for mining novel functional variants. Using a family-based design involving 1,200 apple (Malus × d

  3. An Overview of Software Engineering Approaches to Service Oriented Architectures in Various Fields

    NARCIS (Netherlands)

    Kontogogos, Artemios; Avgeriou, Paris

    2009-01-01

    For the last few years, a rise has been observed in research activity in Service Oriented Architectures, with applications in different sectors. Several new technologies have been introduced and even more are being currently researched and aimed to the future. In this paper we present and analyze so

  4. One Approach to Senior Level Design in Naval Architecture and Marine Engineering. Report 09-92.

    Science.gov (United States)

    Colella, Kurt J.

    The United States Coast Guard Academy has integrated a successful senior-level ship design course sequence into an undergraduate engineering curriculum in order to achieve specifically desired academic and professional outcomes. The Naval Architecture and Marine Engineering (NAME) curriculum discussed is designed to allow for efficient use of…

  5. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    Science.gov (United States)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk considerations presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).
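
    A minimal sketch of the simulation side of this idea: parametrize a trade schedule by a single front-loading coefficient, simulate execution costs under an invented random-walk price model with linear temporary impact, and estimate the expected cost and CVaR from the sample. The price model, impact parameters and schedule family are all assumptions for illustration, not the paper's formulation.

```python
# Hedged sketch: Monte Carlo evaluation of a parametric execution strategy.
# Price model, impact and schedule parametrization invented.
import random

def schedule(total, periods, alpha):
    # alpha = 0 -> uniform; alpha > 0 -> front-loaded geometric weights
    w = [(1 + alpha) ** (periods - t) for t in range(periods)]
    s = sum(w)
    return [total * wi / s for wi in w]

def simulate_cost(trades, sigma=0.5, eta=0.01, rng=None):
    price_drift, cost = 0.0, 0.0
    for n in trades:
        price_drift += rng.gauss(0.0, sigma)     # random-walk mid-price move
        cost += n * (price_drift + eta * n)      # linear temporary impact
    return cost

def evaluate(alpha, total=1000, periods=10, sims=5000, seed=7):
    rng = random.Random(seed)
    trades = schedule(total, periods, alpha)
    costs = sorted(simulate_cost(trades, rng=rng) for _ in range(sims))
    mean = sum(costs) / sims
    tail = costs[int(0.95 * sims):]              # worst 5% of outcomes
    cvar = sum(tail) / len(tail)                 # CVaR at the 95% level
    return mean, cvar

mean_u, cvar_u = evaluate(alpha=0.0)   # uniform schedule
mean_f, cvar_f = evaluate(alpha=0.5)   # front-loaded schedule
```

Optimizing over `alpha` by repeated simulation is the static-optimization-of-parameters step the abstract describes; the penalty-function treatment of constraints would simply be added to the simulated cost.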

  6. An Approach In Optimization Of Ad-Hoc Routing Algorithms

    Directory of Open Access Journals (Sweden)

    Sarvesh Kumar Sharma

    2012-06-01

    Full Text Available In this paper, different optimizations of ad-hoc routing algorithms are surveyed, and a new method using a training-based optimization algorithm for reducing the complexity of routing algorithms is suggested. A binary matrix is assigned to each node in the network and is updated after each data transfer using the protocols. The use of an optimization algorithm in routing can reduce the complexity of routing to the least amount possible.

  7. An optimized Leave One Out approach to efficiently identify outliers

    Science.gov (United States)

    Biagi, L.; Caldera, S.; Perego, D.

    2012-04-01

    Least squares (LS) is a well-established and very popular statistical toolbox in geomatics. In particular, LS is routinely applied to adjust geodetic networks, for both classical surveys and modern GNSS permanent networks, at both the local and the global spatial scale. The linearized functional model between the observables and a vector of unknown parameters is given. A vector of N observations and its a priori covariance is available. Typically, the observations vector can be decomposed into n subvectors, internally correlated but reciprocally uncorrelated. This happens, for example, when double differences are built from undifferenced observations and are processed to estimate the network coordinates of a GNSS session. Note that when all the observations are independent, n=N: this is, for example, the case of the adjustment of a levelling network. LS provides the estimates of the parameters, the observables, the residuals and the a posteriori variance. The testing of the initial hypotheses, the rejection of outliers and the estimation of accuracies and reliabilities can be performed at different levels of significance and power. However, LS is not robust. The a posteriori estimation of the variance can be biased by one unmodelled outlier in the observations. In some cases, the unmodelled bias is spread into all the residuals and its identification is difficult. A possible solution to this problem is given by the so-called Leave One Out (LOO) approach. A particular subvector can be excluded from the adjustment, whose results are used to check the residuals of the excluded subvector. Clearly, the check is more robust, because a bias in the subvector does not affect the adjustment results. The process can be iterated on all the subvectors. LOO is robust but can be very slow, when n adjustments are performed. An optimized LOO algorithm has been studied. The usual LS adjustment on all the observations is performed to obtain a 'batch' result. The
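The iterated exclusion described in the abstract can be sketched for the simple independent-observation case (n = N) with a plain numpy least-squares fit. The straight-line model, noise levels, and injected bias below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def loo_outlier_scores(A, y):
    """Leave-one-out residual check for the linear model y = A x + e.

    For each observation i, the adjustment is repeated without it, and the
    excluded observation is compared against its prediction. Returns the
    absolute LOO residuals (larger = more suspicious)."""
    n = len(y)
    scores = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        x_hat, *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
        scores[i] = abs(y[i] - A[i] @ x_hat)
    return scores

# Synthetic adjustment: straight-line model with one biased observation.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
A = np.column_stack([t, np.ones_like(t)])
y = 2.0 * t + 1.0 + rng.normal(0.0, 0.01, 20)
y[7] += 0.5  # unmodelled bias in observation 7

scores = loo_outlier_scores(A, y)
print(int(np.argmax(scores)))  # → 7
```

Because observation 7 is excluded when its own residual is checked, its bias cannot contaminate the fit used to test it, which is exactly the robustness argument made above.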

  8. Utility Theory for Evaluation of Optimal Process Condition of SAW: A Multi-Response Optimization Approach

    Science.gov (United States)

    Datta, Saurav; Biswas, Ajay; Bhaumik, Swapan; Majumdar, Gautam

    2011-01-01

    A multi-objective optimization problem has been solved in order to estimate an optimal process environment, consisting of an optimal parametric combination, to achieve the desired quality indicators (related to bead geometry) of submerged arc welds of mild steel. The quality indicators selected in the study were bead height, penetration depth, bead width and percentage dilution. The Taguchi method followed by the utility concept has been adopted to evaluate the optimal process condition achieving the multiple objective requirements of the desired weld quality.

  9. Security Model For Service-Oriented Architecture

    CERN Document Server

    Karimi, Oldooz

    2011-01-01

    In this article, we examine how security applies to Service Oriented Architecture (SOA). Before we discuss security for SOA, let's take a step back and examine what SOA is. SOA is an architectural approach which involves applications being exposed as "services". Originally, services in SOA were associated with a stack of technologies which included SOAP, WSDL, and UDDI. This article addresses the defects of traditional enterprise application integration by combining service-oriented architecture and web service technology. Application integration is then simplified to the development and integration of services, tackling the connectivity of heterogeneous enterprise applications, security, loose coupling between systems, and process refactoring and optimization.

  10. Optimal Feature Extraction Using Greedy Approach for Random Image Components and Subspace Approach in Face Recognition

    Institute of Scientific and Technical Information of China (English)

    Mathu Soothana S.Kumar Retna Swami; Muneeswaran Karuppiah

    2013-01-01

    An innovative and uniform framework based on a combination of Gabor wavelets with principal component analysis (PCA) and multiple discriminant analysis (MDA) is presented in this paper. In this framework, features are extracted from the optimal random image components using a greedy approach. These feature vectors are then projected to subspaces for dimensionality reduction, which is used for solving linear problems. The design of Gabor filters, PCA and MDA are crucial processes used for facial feature extraction. The FERET, ORL and YALE face databases are used to generate the results. Experiments show that optimal random image component selection (ORICS) plus MDA outperforms ORICS and subspace projection approaches such as ORICS plus PCA. Our method achieves 96.25%, 99.44% and 100% recognition accuracy on the FERET, ORL and YALE databases, respectively, with 30% of the data used for training. This is considerably improved performance compared with other standard methodologies described in the literature.
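The subspace-projection step mentioned above can be illustrated with a plain PCA projection via SVD. This is a sketch on synthetic vectors standing in for the extracted image features, not the authors' pipeline:

```python
import numpy as np

def pca_project(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)           # center the data
    # SVD of the centered data matrix; rows of Vt are the principal axes
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T              # coordinates in the k-dim subspace

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 64))        # 100 synthetic 64-dim feature vectors
Z = pca_project(X, 10)
print(Z.shape)  # → (100, 10)
```

The reduced vectors Z would then feed a discriminant stage (MDA in the paper) for classification.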

  11. A New Approach for Parameter Optimization in Land Surface Model

    Institute of Scientific and Technical Information of China (English)

    LI Hongqi; GUO Weidong; SUN Guodong; ZHANG Yaocun; FU Congbin

    2011-01-01

    In this study, a new parameter optimization method was used to investigate the expansion of conditional nonlinear optimal perturbation (CNOP) in a land surface model (LSM), using long-term enhanced field observations at Tongyu station in Jilin Province, China, combined with a sophisticated LSM (the Common Land Model, CoLM). Tongyu station is a reference site of the international Coordinated Energy and Water Cycle Observations Project (CEOP) that has studied semiarid regions that have undergone desertification, salination, and degradation since the late 1960s. In this study, three key land-surface parameters, namely, soil color, proportion of sand or clay in soil, and leaf-area index, were chosen as the parameters to be optimized. Our study comprised three experiments: the first performed a single-parameter optimization, while the second and third performed triple- and six-parameter optimizations, respectively. Notable improvements in simulating sensible heat flux (SH), latent heat flux (LH), soil temperature (TS), and moisture (MS) at shallow layers were achieved using the optimized parameters. The multiple-parameter optimization experiments performed better than the single-parameter experiment. All results demonstrate that the CNOP method can be used to optimize expanded parameters in an LSM. Moreover, clear mathematical meaning, simple design structure, and rapid computability give this method great potential for further application to parameter optimization in LSMs.

  12. A Polynomial Optimization Approach to Constant Rebalanced Portfolio Selection

    NARCIS (Netherlands)

    Takano, Y.; Sotirov, R.

    2010-01-01

    We address the multi-period portfolio optimization problem with the constant rebalancing strategy. This problem is formulated as a polynomial optimization problem (POP) by using a mean-variance criterion. In order to solve the POPs of high degree, we develop a cutting-plane algorithm based on semide

  13. Optimal angle reduction - a behavioral approach to linear system approximation

    NARCIS (Netherlands)

    Roorda, Berend; Fuhrmann, P.A.

    2001-01-01

    We investigate the problem of optimal state reduction under minimization of the angle between system behaviors. The angle is defined in a worst-case sense, as the largest angle that can occur between a system trajectory and its optimal approximation in the reduced-order model. This problem is analyz

  14. Secure Architecture for m-Health Communications Using Multi-agent Approach

    Directory of Open Access Journals (Sweden)

    Mohd Fadhli Abdul Jalil

    2014-03-01

    Full Text Available In this study we propose a security architecture for mobile health (mHealth) communications. mHealth is a term used for medical-related services or communication delivered using mobile devices such as mobile phones, tablet computers and PDAs. Communication in the health-related field often involves sensitive information (e.g., an email about a patient's illness sent between doctors), which is transmitted over the Internet. However, although the Internet greatly facilitates the communication, it is undeniable that threats to the Internet are becoming more prevalent. A multi-agent security architecture is presented in this study to provide a secure environment for mHealth communication. Agents are equipped to handle the communication processes at both the sender's and the recipient's side. This includes the security processes, which make use of cryptographic protocols to secure data at both sides.

  15. A General Architecture for Robotics Systems: A Perception-Based Approach to Artificial Life.

    Science.gov (United States)

    Young, Rupert

    2017-01-01

    Departing from the conventional view of the reasons for the behavior of living systems, this research presents a radical and unique view of that behavior, as the observed side effects of a hierarchical set of simple, continuous, and dynamic negative feedback control systems, by way of an experimental model implemented on a real-world autonomous robotic rover. Rather than generating specific output from input, the systems control their perceptual inputs by varying output. The variables controlled do not exist in the environment, but are entirely internal perceptions constructed as a result of the layout and connections of the neural architecture. As the underlying processes are independent of the domain, the architecture is universal and thus has significant implications not only for understanding natural living systems, but also for the development of robotics systems. The central process of perceptual control has the potential to unify the behavioral sciences and is proposed as the missing behavioral principle of Artificial Life.

  16. Avionics Architectures for Exploration: Building a Better Approach for (Human) Spaceflight Avionics

    Science.gov (United States)

    Goforth, Montgomery B.; Ratliff, James E.; Hames, Kevin L.; Vitalpur, Sharada V.

    2014-01-01

    The field of Avionics is advancing far more rapidly in terrestrial applications than in space flight applications. Spaceflight Avionics are not keeping pace with expectations set by terrestrial experience, nor are they keeping pace with the need for increasingly complex automation and crew interfaces as we move beyond Low Earth Orbit. NASA must take advantage of the strides being made by both space-related and terrestrial industries to drive our development and sustaining costs down. This paper describes ongoing efforts by the Avionics Architectures for Exploration (AAE) project chartered by NASA's Advanced Exploration Systems (AES) Program to evaluate new avionic architectures and technologies, provide objective comparisons of them, and mature selected technologies for flight and for use by other AES projects. Results from the AAE project's FY13 efforts are discussed, along with the status of FY14 efforts and future plans.

  17. Data-Centric Enterprise Architecture

    OpenAIRE

    Zeinab Rajabi; Maryam Nooraei Abade

    2012-01-01

    Enterprises choose an Enterprise Architecture (EA) solution in order to overcome dynamic business challenges and to coordinate various enterprise elements. In this article, a solution is suggested for Enterprise Architecture development. The solution focuses on architecture data in the Enterprise Architecture development process. The data-centric architecture approach is preferred over the product-centric architecture approach. We suggest using Enterprise Ontology (EO) as context for collecting architec...

  18. The impact of optimize solar radiation received on the levels and energy disposal of levels on architectural design result by using computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Rezaei, Davood; Farajzadeh Khosroshahi, Samaneh; Sadegh Falahat, Mohammad [Zanjan University (Iran, Islamic Republic of)], email: d_rezaei@znu.ac.ir, email: ronas_66@yahoo.com, email: Safalahat@yahoo.com

    2011-07-01

    In order to minimize the energy consumption of a building it is important to achieve optimum solar energy. The aim of this paper is to introduce the use of computer modeling in the early stages of design to optimize solar radiation received and energy disposal in an architectural design. Computer modeling was performed on 2 different projects located in Los Angeles, USA, using ECOTECT software. Changes were made to the designs following analysis of the modeling results and a subsequent analysis was carried out on the optimized designs. Results showed that the computer simulation allows the designer to set the analysis criteria and improve the energy performance of a building before it is constructed; moreover, it can be used for a wide range of optimization levels. This study pointed out that computer simulation should be performed in the design stage to optimize a building's energy performance.

  19. A Convex Optimization Approach to pMRI Reconstruction

    CERN Document Server

    Zhang, Cishen

    2013-01-01

    In parallel magnetic resonance imaging (pMRI) reconstruction without using estimation of coil sensitivity functions, one group of algorithms reconstructs sensitivity-encoded images of the coils first, followed by the magnitude-only image reconstruction, e.g. GRAPPA, and another group of algorithms jointly computes the image and sensitivity functions by regularized optimization, which is a non-convex problem with only local solutions. For the magnitude-only image reconstruction, this paper derives a reconstruction formulation, which is linear in the magnitude image, and an associated convex hull in the solution space of the formulated equation containing the magnitude of the image. As a result, the magnitude-only image reconstruction for pMRI is formulated into a two-step convex optimization problem, which has a globally optimal solution. An algorithm based on split-Bregman and nuclear norm regularized optimizations is proposed to implement the two-step convex optimization and its applications to phantom and in-vi...

  20. A New Approach for Optimal Sizing of Standalone Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Tamer Khatib

    2012-01-01

    Full Text Available This paper presents a new method for determining the optimal sizing of a standalone photovoltaic (PV) system in terms of optimal sizing of the PV array and battery storage. A standalone PV system energy flow is first analysed, and the MATLAB fitting tool is used to fit the resultant sizing curves in order to derive general formulas for optimal sizing of the PV array and battery. In deriving the formulas for optimal sizing of the PV array and battery, the data considered are based on five sites in Malaysia, which are Kuala Lumpur, Johor Bharu, Ipoh, Kuching, and Alor Setar. Based on the results of the designed example for a PV system installed in Kuala Lumpur, the proposed method gives satisfactory optimal sizing results.
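The curve-fitting step described above, deriving a closed-form sizing formula from a numerically computed sizing curve, can be sketched with numpy's polynomial fit. The capacity-factor numbers below are purely illustrative placeholders, not the Malaysian site data from the paper:

```python
import numpy as np

# Hypothetical sizing-curve data: required PV array capacity factor C_A
# versus battery capacity factor C_B at a fixed loss-of-load probability.
C_B = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
C_A = np.array([1.90, 1.45, 1.25, 1.15, 1.10, 1.08])

# Fit a quadratic sizing formula C_A = a*C_B**2 + b*C_B + c, mirroring the
# step of fitting the sizing curve to obtain a general design formula.
a, b, c = np.polyfit(C_B, C_A, 2)
predict = lambda cb: a * cb**2 + b * cb + c
print(round(predict(1.2), 3))  # array size needed for a 1.2 battery factor
```

Once such a formula is fitted per site, a designer can size the array directly from the chosen battery capacity without rerunning the energy-flow simulation.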

  1. Green Architecture for Dense Home Area Networks Based on Radio-over-Fiber with Data Aggregation Approach

    Institute of Scientific and Technical Information of China (English)

    Mohd Sharil Abdullah; Mohd Adib Sarijari; Abdul Hadi Fikri Abdul Hamid; Norsheila Fisal; Anthony Lo; Rozeha A. Rashid; Sharifah Kamilah Syed Yusof

    2016-01-01

    The high-density population leads to crowded cities. The future city is envisaged to encompass a large-scale network with diverse applications and a massive number of interconnected heterogeneous wireless-enabled devices. Hence, green technology elements are crucial for designing sustainable and future-proof network architectures. They address the spectrum scarcity, high latency, interference, energy efficiency, and scalability issues that occur in dense and heterogeneous wireless networks, especially in the home area network (HAN). Radio-over-fiber (ROF) is a candidate technology to provide a global view of HAN activities, which can be leveraged to allocate orthogonal channel communications for wireless-enabled HAN device transmission, considering a clustered-frequency-reuse approach. Our proposed network architecture design focuses mainly on enhancing the network throughput and reducing the average network communications latency by proposing a data aggregation unit (DAU). The performance shows that with the DAU, the average network communications latency is reduced significantly while the network throughput is enhanced, compared with the existing ROF architecture without the DAU.

  2. Optimism

    Science.gov (United States)

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  3. Three approaches to integrating building performance simulations tools in architecture and engineering undergraduate education

    Energy Technology Data Exchange (ETDEWEB)

    Charles, P.P. [Roger Williams Univ., Bristol, RI (United States). School of Architecture, Art and Historic Preservation; Thomas, C.R. [Roger Williams Univ., Bristol, RI (United States). School of Engineering, Computing and Construction Management

    2008-07-01

    This paper described past and ongoing teaching experiences at Roger Williams University in Bristol, Rhode Island. In particular, the university has offered several new architecture courses where building simulation tools have played a key role in explaining hard-to-grasp physical phenomena at play in a building. The university also offers a new course to both undergraduate architecture and engineering students to promote collaboration between these two disciplines. The course focuses on the elements of simulation tools that are adapted to sustainable building design. The paper concluded with the advantages and limitations of these teaching methods and provided perspectives on future improvement of some of the pedagogical models. It was concluded that, in general, the integration of building simulation tools in the architecture studios and courses has provided students with valuable insight into the dynamic nature of the building environment and comfort, particularly when the software has transient simulation capabilities. The simulation tools expand the realm of the design beyond the mere visual. Multiple simulation runs of design options help reinforce in the student the basic notion of the iterative nature of the design process. 17 refs.

  4. Development of a Subcell Based Modeling Approach for Modeling the Architecturally Dependent Impact Response of Triaxially Braided Polymer Matrix Composites

    Science.gov (United States)

    Sorini, Chris; Chattopadhyay, Aditi; Goldberg, Robert K.; Kohlman, Lee W.

    2016-01-01

    Understanding the high velocity impact response of polymer matrix composites with complex architectures is critical to many aerospace applications, including engine fan blade containment systems where the structure must be able to completely contain fan blades in the event of a blade-out. Despite the benefits offered by these materials, the complex nature of textile composites presents a significant challenge for the prediction of deformation and damage under both quasi-static and impact loading conditions. The relatively large mesoscale repeating unit cell (in comparison to the size of structural components) causes the material to behave like a structure rather than a homogeneous material. Impact experiments conducted at NASA Glenn Research Center have shown the damage patterns to be a function of the underlying material architecture. Traditional computational techniques that involve modeling these materials using smeared homogeneous, orthotropic material properties at the macroscale result in simulated damage patterns that are a function of the structural geometry, but not the material architecture. In order to preserve heterogeneity at the highest length scale in a robust yet computationally efficient manner, and capture the architecturally dependent damage patterns, a previously-developed subcell modeling approach where the braided composite unit cell is approximated as a series of four adjacent laminated composites is utilized. This work discusses the implementation of the subcell methodology into the commercial transient dynamic finite element code LS-DYNA (Livermore Software Technology Corp.). Verification and validation studies are also presented, including simulation of the tensile response of straight-sided and notched quasi-static coupons composed of a T700/PR520 triaxially braided [0deg/60deg/-60deg] composite. 
Based on the results of the verification and validation studies, advantages and limitations of the methodology, as well as plans for future work, are discussed.

  5. Architecturally Reconfigurable Development of Mobile Games

    DEFF Research Database (Denmark)

    Zhang, Weishan

    2005-01-01

    Mobile game development must face the problem of multiple hardware and software platforms, which bring a large number of variants. To cut development and maintenance efforts, in this paper we present an architecturally reconfigurable software product line approach to develop mobile games. Mobile game domain variants can be handled uniformly and traced across all kinds of software assets. The architecture and configuration mechanism in our approach make optimizations built into meta-components propagate to all product line members. We show this approach with an industrial Role-Playing-Game product line, which achieved not only development and maintenance gains, but also performance enhancements.

  6. A Fuzzy Simulation-Based Optimization Approach for Groundwater Remediation Design at Contaminated Aquifers

    Directory of Open Access Journals (Sweden)

    A. L. Yang

    2012-01-01

    Full Text Available A fuzzy simulation-based optimization approach (FSOA) is developed for identifying the optimal design of a benzene-contaminated groundwater remediation system under uncertainty. FSOA integrates remediation processes (i.e., biodegradation and pump-and-treat), fuzzy simulation, and a fuzzy-mean-value-based optimization technique into a general management framework. This approach offers the advantages of (1) considering an integrated remediation alternative, (2) handling simulation and optimization problems under uncertainty, and (3) providing a direct linkage between remediation strategies and remediation performance through proxy models. The results demonstrate that optimal remediation alternatives can be obtained to mitigate benzene concentration to satisfy environmental standards with a minimum system cost.

  7. Flower pollination algorithm: A novel approach for multiobjective optimization

    Science.gov (United States)

    Yang, Xin-She; Karamanoglu, Mehmet; He, Xingshi

    2014-09-01

    Multiobjective design optimization problems require multiobjective optimization techniques to solve, and it is often very challenging to obtain high-quality Pareto fronts accurately. In this article, the recently developed flower pollination algorithm (FPA) is extended to solve multiobjective optimization problems. The proposed method is used to solve a set of multiobjective test functions and two bi-objective design benchmarks, and a comparison of the proposed algorithm with other algorithms has been made, which shows that the FPA is efficient with a good convergence rate. Finally, the importance for further parametric studies and theoretical analysis is highlighted and discussed.
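A minimal single-objective sketch of the flower pollination algorithm can make the abstract's method concrete. The population size, iteration count, switch probability, and Mantegna-style Lévy step are common textbook choices and assumptions here, not parameters taken from the article (which extends FPA to the multiobjective case):

```python
import numpy as np
from math import gamma, sin, pi

def fpa_minimize(f, dim, n=20, iters=200, p=0.8, seed=0):
    """Minimal single-objective Flower Pollination Algorithm sketch.

    p is the switch probability between global (Levy-flight) and local
    pollination; the Levy step uses Mantegna's algorithm with beta = 1.5."""
    rng = np.random.default_rng(seed)
    beta = 1.5
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    pop = rng.uniform(-5, 5, (n, dim))
    fit = np.array([f(x) for x in pop])
    best = pop[fit.argmin()].copy()
    best_val = fit.min()
    for _ in range(iters):
        for i in range(n):
            if rng.random() < p:   # global pollination via Levy flight
                step = (rng.normal(0, sigma, dim) /
                        np.abs(rng.normal(0, 1, dim)) ** (1 / beta))
                cand = pop[i] + 0.01 * step * (best - pop[i])
            else:                  # local pollination mixes two random flowers
                j, k = rng.integers(0, n, 2)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            fc = f(cand)
            if fc < fit[i]:        # greedy acceptance
                pop[i], fit[i] = cand, fc
                if fc < best_val:
                    best, best_val = cand.copy(), fc
    return best, best_val

best, val = fpa_minimize(lambda x: np.sum(x ** 2), dim=5)
print(val)  # sphere function: the true minimum is 0 at the origin
```

A multiobjective variant, as in the article, would replace the scalar comparison with Pareto dominance or a scalarized objective.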

  8. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    Directory of Open Access Journals (Sweden)

    Akemi Gálvez

    2013-01-01

    Full Text Available Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor’s method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
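The key reduction in the abstract, that once the knots and data parameterization are fixed the fitting problem becomes linear and solvable by singular value decomposition, can be shown in a few lines. As a simplifying assumption, the sketch uses a global cubic polynomial basis instead of a true B-spline basis:

```python
import numpy as np

# Noisy 2-D data points sampled along a curve, with a fixed (precomputed)
# parameterization t, mimicking the state after the bioinspired steps have
# chosen the nonlinear variables.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 50)                       # fixed data parameterization
pts = np.column_stack([np.cos(2 * t), np.sin(3 * t)])
pts += rng.normal(0, 0.01, pts.shape)           # observation noise

# With t fixed, the coefficients enter linearly: build the basis matrix and
# solve the least-squares problem by SVD (numpy's lstsq uses SVD internally).
B = np.vander(t, 4)                             # cubic basis, one row per point
coef, res, rank, sv = np.linalg.lstsq(B, pts, rcond=None)
fitted = B @ coef
print(np.max(np.abs(fitted - pts)))             # worst-case fitting residual
```

Replacing the Vandermonde matrix with a B-spline basis evaluated at the fixed knots recovers the convex formulation used in the paper.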

  9. MVMO-based approach for optimal placement and tuning of supplementary damping controller

    NARCIS (Netherlands)

    Rueda Torres, J.L.; Gonzalez-Longatt, F.

    2015-01-01

    This paper introduces an approach based on the Swarm Variant of the Mean-Variance Mapping Optimization (MVMO-S) to solve the multi-scenario formulation of the optimal placement and coordinated tuning of power system supplementary damping controllers (POCDCs). The effectiveness of the approach is

  10. Synthesis of conjugated polymers with complex architecture for photovoltaic applications

    DEFF Research Database (Denmark)

    Kiriy, Anton; Krebs, Frederik C

    2017-01-01

    A common approach to bulk heterojunction solar cells involves a “trial-and-error” approach in finding optimal kinetically unstable morphologies. An alternative approach assumes the utilization of complex polymer architectures, such as donor–acceptor block copolymers. Because of a covalent...

  11. Longevity, genes and efforts: an optimal taxation approach to prevention.

    Science.gov (United States)

    Leroux, M-L; Pestieau, P; Ponthiere, G

    2011-01-01

    This paper applies the analytical tools of optimal taxation theory to the design of the optimal subsidy on preventive behaviours, in an economy where longevity varies across agents, and depends on preventive expenditures and on longevity genes. Public intervention can be here justified on three grounds: corrections for misperceptions of the survival process and for externalities related to individual preventive behaviour, and redistribution across both earnings and genetic dimensions. The optimal subsidy on preventive expenditures is shown to depend on the combined impacts of misperception, externalities and self-selection. It is generally optimal to subsidize preventive efforts to an extent depending on the degree of individual myopia, on how productivity and genes are correlated, and on the complementarity of genes and preventive efforts in the survival function.

  12. Potential and challenges in home care service process optimization : a route optimization approach

    OpenAIRE

    Nakari, Pentti J. E.

    2016-01-01

    Aging of the population is an increasing problem in many countries, including Finland, and it poses a challenge to public services such as home care. Vehicle routing problem (VRP) type optimization solutions are one possible way to decrease the time required for planning home visits and driving to customer addresses, as well as decreasing transportation costs. Although VRP optimization is widely and successfully applied to commercial and industrial logistics, the home care ...

  13. Structural Weight Optimization of Aircraft Wing Component Using FEM Approach.

    OpenAIRE

    Arockia Ruban M,; Kaveti Aruna

    2015-01-01

    One of the main challenges for the civil aviation industry is reducing its environmental impact through better fuel efficiency, achieved by structural optimization. Over the past years, improvements in performance and fuel efficiency have been achieved by simplifying the design of structural components and using composite materials to reduce the overall weight of the structure. This paper deals with the weight optimization of a transport aircraft with a low wing configuratio...

  14. FINANCIAL STRUCTURE OPTIMIZATION BY USING A GOAL PROGRAMMING APPROACH

    Directory of Open Access Journals (Sweden)

    Tunjo Perić

    2012-12-01

    Full Text Available This paper proposes a new methodology for solving multiple objective fractional linear programming problems using Taylor's formula and goal programming techniques. The proposed methodology is tested on the example of a company's financial structure optimization. The obtained results indicate the possibility of efficient application of the proposed methodology to a company's financial structure optimization, as well as to solving other multi-criteria fractional programming problems.
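Goal programming reduces a multi-objective problem to a single linear program by minimizing weighted deviations from target values. The two-goal example below is a hypothetical illustration with made-up targets and weights, not the paper's financial model (which first linearizes fractional objectives via Taylor's formula):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical weighted goal program:
#   goal 1: profit 2*x1 + 3*x2 should reach 40  (weight 1 on the shortfall)
#   goal 2: x1 should reach 4                   (weight 5 on the shortfall)
#   hard constraint: x1 + x2 <= 10
# Variables: x1, x2, d1m, d1p, d2m, d2p (m = under-, p = over-achievement).
c = [0, 0, 1, 0, 5, 0]                 # minimize the weighted shortfalls
A_eq = [[2, 3, 1, -1, 0, 0],           # profit + d1m - d1p = 40
        [1, 0, 0, 0, 1, -1]]           # x1 + d2m - d2p = 4
b_eq = [40, 4]
A_ub = [[1, 1, 0, 0, 0, 0]]            # x1 + x2 <= 10
b_ub = [10]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6)
print(res.x[:2], res.fun)  # → [4. 6.] 14.0
```

The weights encode the priority between goals: here missing goal 2 costs five times as much per unit as missing goal 1, so the solver meets goal 2 exactly and accepts a profit shortfall of 14.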

  15. PROPOSED INFORMATION SHARING SECURITY APPROACH FOR SECURITY PERSONNELS, VERTICAL INTEGRATION, SEMANTIC INTEROPERABILITY ARCHITECTURE AND FRAMEWORK FOR DIGITAL GOVERNMENT

    Directory of Open Access Journals (Sweden)

    Md.Headayetullah

    2011-06-01

    Full Text Available This paper presents a conceptual overview of vertical integration and semantic interoperability architectures, such as the Educational Sector Architectural Framework (ESAF) for the New Zealand government, along with different interoperability framework solutions for digital government. We develop a secure information sharing approach for digital government to improve homeland security. The approach is role- and cooperation-based, intended for security personnel of different government departments. To run a successful digital government in any country, it is necessary to interact with citizens and to share secure information over different networks among citizens or other governments. Consequently, to allow users to cooperate and share information seamlessly across different networks and databases, a safe and trusted information-sharing environment has been recognized as an essential requirement for advancing homeland security efforts. The key motivation behind this research is to build a secure and trusted information-sharing approach for government departments. This paper presents an efficient role- and cooperation-based information sharing approach for the secure exchange of confidential and privileged information among security personnel and government departments within national boundaries by means of public key cryptography. The approach makes use of a cryptographic hash function, a public key cryptosystem, and a unique and complex mapping function for securely exchanging secret information. Moreover, it facilitates privacy-preserving information sharing with possible restrictions based on the rank of the security personnel. The proposed role- and cooperation-based information sharing approach ensures protected and up-to-date information sharing between security personnel and government

  16. Research on program optimization based on compute unified device architecture

    Institute of Scientific and Technical Information of China (English)

    杨云生; 张朝晖

    2011-01-01

    Compute Unified Device Architecture (CUDA) is a vital new force in the domain of general-purpose computing and the engine of some of the most powerful computers in the world. Because of the particularities of the architecture, programs based on CUDA must be specially optimized. To help programmers understand the optimization steps for CUDA programs, the methods of CUDA program optimization are set forth covering programming techniques, memory usage, and instruction-level optimization, and an instance is tested to compare these methods. The test results show that the fully optimized program runs about 30 times faster than the unoptimized version. Finally, a reference ordering of the optimization methods is presented.

  17. Breathing architecture: Conceptual architectural design based on the investigation into the natural ventilation of buildings

    Directory of Open Access Journals (Sweden)

    Anastasia D. Stavridou

    2015-06-01

    Full Text Available This study explores architectural design by examining air, fluid mechanics, and the natural ventilation of buildings. In this context, this research introduces a new way of dealing with the process of architectural synthesis. The proposed way can be used either to create new architectural projects or to rethink existing ones. This study is supported by previous investigation into the natural ventilation of buildings via computational and laboratory simulation (Stavridou, 2011; Stavridou and Prinos, 2013). The investigation into the natural ventilation of buildings provides information and data that affect architectural design through various parameters. The parameters of architectural synthesis that are influenced and discussed in this paper are the following: (i) inspiration and analogical transfer, (ii) initial conception of the main idea using computational fluid dynamics (digital design), (iii) development of the main idea through an investigatory process toward building form optimization, and (iv) form configuration, shape investigation, and other morphogenetic prospects. This study illustrates the effect of natural ventilation research on architectural design and thus produces a new approach to the architectural design process. This approach leads to an innovative kind of architecture called “breathing architecture.”

  18. A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    Science.gov (United States)

    Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2016-05-01

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.

  19. Architecture on Architecture

    DEFF Research Database (Denmark)

    Olesen, Karen

    2016-01-01

    This paper will discuss the challenges faced by architectural education today. It takes as its starting point the double commitment of any school of architecture: on the one hand the task of preserving the particular knowledge that belongs to the discipline of architecture, and on the other hand...... that is not scientific or academic but is more like a latent body of data that we find embedded in existing works of architecture. This information, it is argued, is not limited by the historical context of the work. It can be thought of as a virtual capacity – a reservoir of spatial configurations that can...... the autonomy of architecture, not as an esoteric concept but as a valid source of information in a pragmatic design practice, may help us overcome the often-proclaimed dichotomy between formal autonomy and a societally committed architecture. It follows that in architectural education there can be a close...

  20. Multiple 3d Approaches for the Architectural Study of the Medieval Abbey of Cormery in the Loire Valley

    Science.gov (United States)

    Pouyet, T.

    2017-02-01

    This paper focuses on the technical approaches used for a PhD thesis on the architecture and spatial organization of Benedictine abbeys in Touraine in the Middle Ages, in particular the abbey of Cormery in the heart of the Loire Valley. Monastic space is approached in a diachronic way, from the early Middle Ages to modern times, using multi-source data: architectural study, written sources, ancient maps, various iconographic documents… Many scales are used in the analysis, from the establishment of the abbeys in a territory to the scale of a single building such as the tower-entrance of the church of Cormery. These methodological axes have been developed in the research unit CITERES for many years, and 3D technology is now used to go further in that field. The 3D recording of the buildings of the abbey of Cormery allows us to work at the scale of the monastery and to produce useful data such as sections or orthoimages of the ground and wall faces, which are afterwards drawn and analysed. The study of these documents, crossed with the other historical sources, allowed us to highlight the presence of walls older than previously thought and to discover construction elements that had not been recognized earlier, which inform the debate about the construction date of the St Paul tower and the associated monastic church.

  1. Integrating emerging earth science technologies into disaster risk management: an enterprise architecture approach

    Science.gov (United States)

    Evans, J. D.; Hao, W.; Chettri, S. R.

    2014-12-01

    Disaster risk management has grown to rely on earth observations, multi-source data analysis, numerical modeling, and interagency information sharing. The practice and outcomes of disaster risk management will likely undergo further change as several emerging earth science technologies come of age: mobile devices; location-based services; ubiquitous sensors; drones; small satellites; satellite direct readout; Big Data analytics; cloud computing; Web services for predictive modeling, semantic reconciliation, and collaboration; and many others. Integrating these new technologies well requires developing and adapting them to meet current needs; but also rethinking current practice to draw on new capabilities to reach additional objectives. This requires a holistic view of the disaster risk management enterprise and of the analytical or operational capabilities afforded by these technologies. One helpful tool for this assessment, the GEOSS Architecture for the Use of Remote Sensing Products in Disaster Management and Risk Assessment (Evans & Moe, 2013), considers all phases of the disaster risk management lifecycle for a comprehensive set of natural hazard types, and outlines common clusters of activities and their use of information and computation resources. We are using these architectural views, together with insights from current practice, to highlight effective, interrelated roles for emerging earth science technologies in disaster risk management. These roles may be helpful in creating roadmaps for research and development investment at national and international levels.

  2. A Comparative Study between Optimization and Market-Based Approaches to Multi-Robot Task Allocation

    Directory of Open Access Journals (Sweden)

    Mohamed Badreldin

    2013-01-01

    Full Text Available This paper presents a comparative study between optimization-based and market-based approaches used for solving the multi-robot task allocation (MRTA) problem that arises in the context of multi-robot systems (MRS). The two approaches are used to find the optimal allocation of a number of heterogeneous robots to a number of heterogeneous tasks. Both were extensively tested over a number of scenarios in order to test their capability of handling complex, heavily constrained MRS applications with large numbers of tasks and robots. Finally, a comparative study is carried out between the two approaches, and the results show that the optimization-based approach outperforms the market-based approach in terms of optimality of allocation and computational time.

  3. Combinatorial optimization of PEG architecture and hydrophobic content improves siRNA polyplex stability, pharmacokinetics, and potency in vivo.

    Science.gov (United States)

    Werfel, Thomas A; Jackson, Meredith A; Kavanaugh, Taylor E; Kirkbride, Kellye C; Miteva, Martina; Giorgio, Todd D; Duvall, Craig

    2017-03-30

    A rationally designed library of ternary siRNA polyplexes was developed and screened for gene silencing efficacy in vitro and in vivo with the goal of overcoming both cell-level and systemic delivery barriers. 2-(Dimethylamino)ethyl methacrylate (DMAEMA) was homopolymerized or copolymerized (50 mol% each) with butyl methacrylate (BMA) from a reversible addition-fragmentation chain transfer (RAFT) chain transfer agent, with and without pre-conjugation to polyethylene glycol (PEG). Both single-block polymers were tested as core-forming units, and both PEGylated diblock polymers were screened as corona-forming units. Ternary siRNA polyplexes were assembled with varied amounts and ratios of core-forming polymers to PEGylated corona-forming polymers. The impact of polymer composition/ratio, hydrophobe (BMA) placement, and surface PEGylation density was correlated to important outcomes such as polyplex size, stability, pH-dependent membrane disruptive activity, biocompatibility, and gene silencing efficiency. The lead formulation, DB4-PDB12, was optimally PEGylated not only to ensure colloidal stability (no change in size by DLS between 0 and 24 h) and neutral surface charge (0.139 mV) but also to maintain higher cell uptake (>90% positive cells) than the most densely PEGylated particles. The DB4-PDB12 polyplexes also incorporated BMA in both the polyplex core- and corona-forming polymers, resulting in robust endosomolysis and in vitro siRNA silencing (~85% protein-level knockdown) of the model gene luciferase across multiple cell types. Further, the DB4-PDB12 polyplexes exhibited greater stability, increased blood circulation time, reduced renal clearance, increased tumor biodistribution, and greater silencing of luciferase compared to our previously optimized binary parent formulation following intravenous (i.v.) delivery. This polyplex library approach enabled concomitant optimization of the composition and ratio of core- and corona-forming polymers (indirectly

  4. A correlation consistency based multivariate alarm thresholds optimization approach.

    Science.gov (United States)

    Gao, Huihui; Liu, Feifei; Zhu, Qunxiong

    2016-11-01

    Different alarm thresholds generate different alarm data and hence different correlations. A new multivariate alarm-threshold optimization methodology based on the correlation consistency between process data and alarm data is proposed in this paper. Interpretative structural modeling is adopted to select the key variables. For the key variables, the correlation coefficients of the process data are calculated by Pearson correlation analysis, while the correlation coefficients of the alarm data are calculated by kernel density estimation. To ensure correlation consistency, the objective function is established as the sum of the absolute differences between these two types of correlations. The optimal thresholds are obtained using a particle swarm optimization algorithm. A case study of the Tennessee Eastman process demonstrates the effectiveness of the proposed method.
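The correlation-consistency objective above can be sketched in a few lines. This is a minimal illustration on synthetic data: correlations of binary alarm sequences stand in for the paper's kernel-density estimates, and an exhaustive grid search replaces particle swarm optimization; all variable names and values are assumptions, not taken from the paper.

```python
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    if sx == 0 or sy == 0:        # constant series: no defined correlation
        return 0.0
    return sxy / (sx * sy)

def alarm_series(data, threshold):
    # binary alarm data: 1 when the process variable exceeds its threshold
    return [1.0 if v > threshold else 0.0 for v in data]

def threshold_objective(p1, p2, t1, t2):
    # absolute difference between process-data and alarm-data correlations
    return abs(pearson(p1, p2) - pearson(alarm_series(p1, t1), alarm_series(p2, t2)))

random.seed(0)
p1 = [random.gauss(0, 1) for _ in range(500)]
p2 = [0.8 * a + 0.2 * random.gauss(0, 1) for a in p1]   # correlated pair

# grid search over candidate threshold pairs (a PSO plays this role in the paper)
candidates = [i / 10 for i in range(-10, 11)]
best = min(((t1, t2) for t1 in candidates for t2 in candidates),
           key=lambda ts: threshold_objective(p1, p2, ts[0], ts[1]))
print(best, round(threshold_objective(p1, p2, best[0], best[1]), 4))
```

With more than two variables, the objective would sum this difference over all variable pairs, exactly as the abstract describes.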

  5. A Simulation Approach to Statistical Estimation of Multiperiod Optimal Portfolios

    Directory of Open Access Journals (Sweden)

    Hiroshi Shiraishi

    2012-01-01

    Full Text Available This paper discusses a simulation-based method for solving discrete-time multiperiod portfolio choice problems under an AR(1) process. The method is applicable even if the distributions of the return processes are unknown. We first generate simulated sample paths of the random returns by using an AR bootstrap. Then, for each sample path and each investment time, we obtain an optimal portfolio estimator, which optimizes a constant relative risk aversion (CRRA) utility function. When an investor considers an optimal investment strategy with portfolio rebalancing, it is convenient to introduce a value function. The most important difference between single-period portfolio choice problems and multiperiod ones is that the value function is time dependent. Our method accounts for this time dependency by using bootstrapped sample paths. Numerical studies are provided to examine the validity of our method. The results show the necessity of accounting for the time dependency of the value function.
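The AR bootstrap step can be sketched as follows. This simplified version fits an AR(1) model by least squares, resamples its residuals to simulate return paths, and then grid-searches a single constant rebalanced weight maximizing mean CRRA utility of terminal wealth; the paper's time-dependent value function and per-period optimization are not reproduced here, and all numbers are synthetic assumptions.

```python
import random

random.seed(1)
# synthetic historical returns from an AR(1) process (assumed data)
phi, c = 0.3, 0.004
returns = [0.01]
for _ in range(300):
    returns.append(c + phi * returns[-1] + random.gauss(0, 0.02))

# 1) fit AR(1) by least squares: r_t = c_hat + phi_hat * r_{t-1} + e_t
x, y = returns[:-1], returns[1:]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
phi_hat = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
c_hat = my - phi_hat * mx
resid = [b - c_hat - phi_hat * a for a, b in zip(x, y)]

# 2) AR bootstrap: simulate sample paths by resampling residuals
def bootstrap_path(horizon):
    r, path = returns[-1], []
    for _ in range(horizon):
        r = c_hat + phi_hat * r + random.choice(resid)
        path.append(r)
    return path

# 3) choose the constant rebalanced risky weight maximizing mean CRRA utility
def crra(wealth, gamma=3.0):
    return wealth ** (1 - gamma) / (1 - gamma)

paths = [bootstrap_path(12) for _ in range(2000)]
rf = 0.001   # assumed per-period risk-free rate

def expected_utility(w):
    total = 0.0
    for path in paths:
        wealth = 1.0
        for r in path:
            wealth *= 1 + w * r + (1 - w) * rf
        total += crra(wealth)
    return total / len(paths)

w_star = max((i / 20 for i in range(21)), key=expected_utility)
print("optimal risky weight:", w_star)
```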

  6. Deterministic global optimization an introduction to the diagonal approach

    CERN Document Server

    Sergeyev, Yaroslav D

    2017-01-01

    This book begins with a concentrated introduction into deterministic global optimization and moves forward to present new original results from the authors who are well known experts in the field. Multiextremal continuous problems that have an unknown structure with Lipschitz objective functions and functions having the first Lipschitz derivatives defined over hyperintervals are examined. A class of algorithms using several Lipschitz constants is introduced which has its origins in the DIRECT (DIviding RECTangles) method. This new class is based on an efficient strategy that is applied for the search domain partitioning. In addition a survey on derivative free methods and methods using the first derivatives is given for both one-dimensional and multi-dimensional cases. Non-smooth and smooth minorants and acceleration techniques that can speed up several classes of global optimization methods with examples of applications and problems arising in numerical testing of global optimization algorithms are discussed...

  7. Structured controllers for uncertain systems a stochastic optimization approach

    CERN Document Server

    Toscano, Rosario

    2013-01-01

    Structured Controllers for Uncertain Systems focuses on the development of easy-to-use design strategies for robust low-order or fixed-structure controllers (particularly the industrially ubiquitous PID controller). These strategies are based on a recently-developed stochastic optimization method termed the "Heuristic Kalman Algorithm" (HKA) the use of which results in a simplified methodology that enables the solution of the structured control problem without a profusion of user-defined parameters. An overview of the main stochastic methods employable in the context of continuous non-convex optimization problems is also provided and various optimization criteria for the design of a structured controller are considered; H∞, H2, and mixed H2/H∞ each merits a chapter to itself. Time-domain-performance specifications can be easily incorporated in the design. Advances in Industrial Control aims to report and encourage the transfer of technology in control engineering. The rapid development of control technolo...

  8. Hybrid Quantum-Classical Approach to Quantum Optimal Control.

    Science.gov (United States)

    Li, Jun; Yang, Xiaodong; Peng, Xinhua; Sun, Chang-Pu

    2017-04-14

    A central challenge in quantum computing is to identify more computational problems for which utilization of quantum resources can offer significant speedup. Here, we propose a hybrid quantum-classical scheme to tackle the quantum optimal control problem. We show that the most computationally demanding part of gradient-based algorithms, namely, computing the fitness function and its gradient for a control input, can be accomplished by the process of evolution and measurement on a quantum simulator. By posing queries to and receiving answers from the quantum simulator, classical computing devices update the control parameters until an optimal control solution is found. To demonstrate the quantum-classical scheme in experiment, we use a seven-qubit nuclear magnetic resonance system, on which we have succeeded in optimizing state preparation without involving classical computation of the large Hilbert space evolution.
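The query loop described above can be sketched classically. Here a plain function stands in for the quantum simulator, returning the fidelity of a one-qubit Ry(theta) state preparation (target state |1>, fidelity sin²(θ/2)), and a finite-difference gradient from simulator queries plays the role of the measured gradient; the loop and its parameters are illustrative, not from the paper.

```python
import math

# "quantum simulator" stand-in: returns the measured fitness (fidelity with
# the target state |1>) after applying the control pulse Ry(theta) to |0>
def simulator_fitness(theta):
    return math.sin(theta / 2) ** 2

# classical outer loop: finite-difference gradient estimated from queries
theta, lr, eps = 0.3, 1.0, 1e-4
for _ in range(200):
    grad = (simulator_fitness(theta + eps) - simulator_fitness(theta - eps)) / (2 * eps)
    theta += lr * grad        # gradient ascent on the fidelity

print(round(simulator_fitness(theta), 6))   # close to 1.0 at theta ≈ pi
```

In the experiment described, each `simulator_fitness` call corresponds to evolving and measuring a quantum system, so the classical side never touches the full Hilbert-space evolution.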

  9. A linear nonequilibrium thermodynamics approach to optimization of thermoelectric devices

    CERN Document Server

    Ouerdane, H; Apertet, Y; Michot, A; Abbout, A

    2013-01-01

    Improvement of thermoelectric systems in terms of performance and range of applications relies on progress in materials science and optimization of device operation. In this chapter, we focus on optimization by taking into account the interaction of the system with its environment. For this purpose, we consider the illustrative case of a thermoelectric generator coupled to two temperature baths via heat exchangers characterized by a thermal resistance, and we analyze its working conditions. Our main message is that both electrical and thermal impedance matching conditions must be met for optimal device performance. Our analysis is fundamentally based on linear nonequilibrium thermodynamics using the force-flux formalism. An outlook on mesoscopic systems is also given.
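The electrical half of the impedance-matching condition can be illustrated with the textbook maximum-power-transfer calculation: an open-circuit voltage V behind an internal resistance R_in delivers maximum power when the load matches R_in. The values below are assumed for illustration, and the thermal matching condition (heat-exchanger resistance) is not modeled.

```python
# Maximum power transfer for a simple generator model: open-circuit voltage V
# with internal resistance R_in driving a load R_L; P = (V/(R_in+R_L))^2 * R_L
V, R_in = 1.0, 2.0                      # assumed values
loads = [0.1 * k for k in range(1, 101)]
power = {rl: (V / (R_in + rl)) ** 2 * rl for rl in loads}
best = max(power, key=power.get)
print(best)   # electrical matching: optimum at R_L ≈ R_in
```

The chapter's point is that an analogous matching condition holds on the thermal side, and both must be satisfied simultaneously.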

  10. An approach to identify the optimal cloud in cloud federation

    Directory of Open Access Journals (Sweden)

    Saumitra Baleshwar Govil

    2012-01-01

    Full Text Available Enterprises are migrating towards cloud computing for its ability to provide agility, robustness and feasibility in operations. To increase the reliability and availability of services, clouds have grown into federated clouds, i.e., unions of clouds. Major issues remain in federated clouds which, when solved, could increase satisfaction for service providers and clients alike. One such issue is selecting the optimal foreign cloud in the federation that provides services according to the client's requirements. In this paper, we propose a model to select the optimal cloud service provider based on the capability and performance of the available clouds in the federation. We use two matrix models to obtain the capability and performance parametric values. These are matched with the client requirements, and the optimal foreign cloud service provider is selected.
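The two-matrix selection scheme can be sketched as a weighted scoring exercise. Every provider name, criterion, and number below is an illustrative assumption; the paper's actual matrix models and matching procedure may differ.

```python
# capability and performance matrices (rows: providers, columns: criteria),
# matched against client-requirement weights; all numbers are illustrative
providers = ["cloud_a", "cloud_b", "cloud_c"]
capability = {             # e.g. storage, compute, security (normalized 0-1)
    "cloud_a": [0.9, 0.6, 0.8],
    "cloud_b": [0.7, 0.9, 0.6],
    "cloud_c": [0.8, 0.8, 0.9],
}
performance = {            # e.g. uptime, latency score, throughput (0-1)
    "cloud_a": [0.95, 0.70, 0.80],
    "cloud_b": [0.90, 0.85, 0.75],
    "cloud_c": [0.99, 0.90, 0.85],
}
client_weights = [0.5, 0.2, 0.3]    # client's priority per criterion

def score(p):
    cap = sum(w * v for w, v in zip(client_weights, capability[p]))
    perf = sum(w * v for w, v in zip(client_weights, performance[p]))
    return cap + perf

optimal = max(providers, key=score)
print(optimal)
```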

  11. Architectural Anthropology

    DEFF Research Database (Denmark)

    Stender, Marie

    , while recent material and spatial turns in anthropology have also brought an increasing interest in design, architecture and the built environment. Understanding the relationship between the social and the physical is at the heart of both disciplines, and they can obviously benefit from further......Architecture and anthropology have always had a common focus on dwelling, housing, urban life and spatial organisation. Current developments in both disciplines make it even more relevant to explore their boundaries and overlaps. Architects are inspired by anthropological insights and methods...... collaboration: How can qualitative anthropological approaches contribute to contemporary architecture? And just as importantly: What can anthropologists learn from architects’ understanding of spatial and material surroundings? Recent theoretical developments in anthropology stress the role of materials...

  12. Architectural Narratives

    DEFF Research Database (Denmark)

    2010-01-01

    In this essay, I focus on the combination of programs and the architecture of cultural projects that have emerged within the last few years. These projects are characterized as “hybrid cultural projects,” because they intend to combine experience with entertainment, play, and learning. This essay...... identifies new rationales related to this development, and it argues that “cultural planning” has increasingly shifted its focus from a cultural institutional approach to a more market-oriented strategy that integrates art and business. The role of architecture has changed, too. It not only provides...... a functional framework for these concepts, but tries increasingly to endow the main idea of the cultural project with a spatially aesthetic expression - a shift towards “experience architecture.” A great number of these projects typically recycle and reinterpret narratives related to historical buildings...

  13. Optimization based tuning approach for offset free MPC

    DEFF Research Database (Denmark)

    Olesen, Daniel Haugård; Huusom, Jakob Kjøbsted; Jørgensen, John Bagterp

    2012-01-01

    We present an optimization based tuning procedure with certain robustness properties for an offset free Model Predictive Controller (MPC). The MPC is designed for multivariate processes that can be represented by an ARX model. The advantage of ARX model representations is that standard system...... identification techniques using convex optimization can be used for identification of such models from input-output data. The stochastic model of the ARX model identified from input-output data is modified with an ARMA model designed as part of the MPC-design procedure to ensure offset-free control. The ARMAX...
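The ARX identification step mentioned above reduces to convex (here, linear least-squares) estimation. The sketch below identifies a first-order ARX model y[k] = a·y[k-1] + b·u[k-1] + e[k] from synthetic input-output data by solving the 2×2 normal equations; the model order and all values are assumptions for illustration, not the paper's setup.

```python
import random

random.seed(2)
# simulate an ARX(1,1) system: y[k] = a*y[k-1] + b*u[k-1] + e[k]
a_true, b_true = 0.7, 1.5
u = [random.uniform(-1, 1) for _ in range(400)]
y = [0.0]
for k in range(1, 400):
    y.append(a_true * y[k - 1] + b_true * u[k - 1] + random.gauss(0, 0.05))

# least-squares identification: solve the normal equations for (a, b)
s_yy = sum(y[k - 1] ** 2 for k in range(1, 400))
s_uu = sum(u[k - 1] ** 2 for k in range(1, 400))
s_yu = sum(y[k - 1] * u[k - 1] for k in range(1, 400))
r_y = sum(y[k] * y[k - 1] for k in range(1, 400))
r_u = sum(y[k] * u[k - 1] for k in range(1, 400))
det = s_yy * s_uu - s_yu ** 2
a_hat = (r_y * s_uu - r_u * s_yu) / det
b_hat = (r_u * s_yy - r_y * s_yu) / det
print(round(a_hat, 2), round(b_hat, 2))
```

For higher-order multivariate ARX models the same idea applies with a larger regressor matrix, typically solved with a linear-algebra library rather than by hand.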

  14. An approach of optimal sensitivity applied in the tertiary loop of the automatic generation control

    Energy Technology Data Exchange (ETDEWEB)

    Belati, Edmarcio A. [CIMATEC - SENAI, Salvador, BA (Brazil); Alves, Dilson A. [Electrical Engineering Department, FEIS, UNESP - Sao Paulo State University (Brazil); da Costa, Geraldo R.M. [Electrical Engineering Department, EESC, USP - Sao Paulo University (Brazil)

    2008-09-15

    This paper proposes an approach of optimal sensitivity applied in the tertiary loop of automatic generation control. The approach is based on the theorem of non-linear perturbation: from an optimal operation point obtained by an optimal power flow, a new optimal operation point is determined directly after a perturbation, i.e., without the need for an iterative process. This new optimal operation point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the automatic voltage regulators (AVR) of the generators are determined by the optimal sensitivity technique, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of automatic generation control, referred to as the power sensitivity mode. Test results are presented to show the good performance of this approach. (author)

  15. A Two-stage Optimal Network Reconfiguration Approach for Minimizing Energy Loss of Distribution Networks Using Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Wei-Tzer Huang

    2015-12-01

    Full Text Available This study aimed to minimize energy losses in traditional distribution networks and microgrids through a network reconfiguration and phase balancing approach. To address this problem, an algorithm composed of a multi-objective function and operation constraints is proposed. Network connection matrices based on graph theory and the backward/forward sweep method are used to analyze power flow. A minimizing energy loss approach is developed for network reconfiguration and phase balancing, and the particle swarm optimization (PSO) algorithm is adopted to solve this optimal combination problem. The proposed approach is tested on the IEEE 37-bus test system and the first outdoor microgrid test bed established by the Institute of Nuclear Energy Research (INER) in Taiwan. Simulation results demonstrate that the proposed two-stage approach can be applied in network reconfiguration to minimize energy loss.
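The PSO core of such an approach can be sketched in a few lines. Here a smooth toy surface stands in for the paper's energy-loss objective (the actual problem is combinatorial, over switch states and phase assignments), and the swarm parameters are common textbook defaults, not the study's values.

```python
import random

random.seed(3)

# stand-in "energy loss" objective over two continuous decision variables;
# the paper's discrete switch/phase choices are replaced by this toy surface
def energy_loss(x, y):
    return (x - 3) ** 2 + (y + 1) ** 2 + 2

n, iters = 20, 100
w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration weights
pos = [[random.uniform(-10, 10) for _ in range(2)] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=lambda p: energy_loss(*p))[:]

for _ in range(iters):
    for i in range(n):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if energy_loss(*pos[i]) < energy_loss(*pbest[i]):
            pbest[i] = pos[i][:]
            if energy_loss(*pbest[i]) < energy_loss(*gbest):
                gbest = pbest[i][:]

print([round(v, 2) for v in gbest])   # near the minimizer (3, -1)
```

For the reconfiguration problem itself, each particle would encode a candidate switch/phase configuration and the objective would be evaluated by the backward/forward sweep power flow.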

  16. Novel Approach to Nonlinear PID Parameter Optimization Using Ant Colony Optimization Algorithm

    Institute of Scientific and Technical Information of China (English)

    Duan Hai-bin; Wang Dao-bo; Yu Xiu-fen

    2006-01-01

    This paper presents an application of an Ant Colony Optimization (ACO) algorithm to optimize the parameters in the design of a type of nonlinear PID controller. The ACO algorithm is a novel heuristic bionic algorithm based on the behaviour of real ants searching for food in nature. In order to optimize the parameters of the nonlinear PID controller using the ACO algorithm, an objective function based on position tracking error was constructed, and an elitist strategy was adopted in the improved ACO algorithm. Detailed simulation steps are presented. The resulting nonlinear PID controller achieves high control precision and quick response.
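A discretized ACO loop for PID tuning can be sketched as follows. An assumed first-order plant is simulated under a step reference, the objective is the integrated absolute tracking error, and pheromone trails over discrete candidate gain values are updated with evaporation plus an elitist deposit. The plant model, gain ranges (the derivative range is kept small for discrete-time stability), and ACO parameters are all illustrative assumptions.

```python
import random

random.seed(4)

# tracking-error objective: simulate an assumed first-order plant under PID
# control of a unit step reference, forward-Euler integration
def tracking_error(kp, ki, kd, dt=0.01, steps=500):
    y, integ, prev_e, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        y += dt * (-y + u)          # plant: dy/dt = -y + u
        cost += abs(e) * dt         # integral of absolute error
    return cost

# discrete candidate values per gain; each ant picks one value per gain
vals = {"p": [0.5 * k for k in range(1, 11)],
        "i": [0.5 * k for k in range(1, 11)],
        "d": [0.05 * k for k in range(1, 11)]}
tau = {g: [1.0] * 10 for g in "pid"}    # pheromone trails
rho, ants, elite = 0.1, 15, 2.0

best, best_cost = None, float("inf")
for _ in range(30):
    for _ in range(ants):
        idx = {g: random.choices(range(10), weights=tau[g])[0] for g in "pid"}
        cost = tracking_error(vals["p"][idx["p"]], vals["i"][idx["i"]],
                              vals["d"][idx["d"]])
        if cost < best_cost:
            best, best_cost = idx, cost
    for g in "pid":                     # evaporation + elitist deposit
        tau[g] = [(1 - rho) * t for t in tau[g]]
        tau[g][best[g]] += elite / best_cost
print(round(best_cost, 3))
```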

  17. Reverse convex problems: an approach based on optimality conditions

    Directory of Open Access Journals (Sweden)

    Ider Tseveendorj

    2006-01-01

    Full Text Available We present some results concerning reverse convex problems. Global optimality conditions for the problems with a nonsmooth reverse convex constraint are established and convergence of an algorithm in the case of linear program with an additional quadratic reverse convex constraint is studied.

  18. Particle Swarm Optimization approach to defect detection in armour ceramics.

    Science.gov (United States)

    Kesharaju, Manasa; Nagarajah, Romesh

    2017-03-01

    In this research, various extracted features were used in the development of an automated ultrasonic-sensor-based inspection system that enables defect classification of each ceramic component prior to despatch to the field. Classification is an important task, and the large number of irrelevant or redundant features commonly introduced into a dataset reduces classifier performance. Feature selection aims to reduce the dimensionality of the dataset while improving the performance of the classification system. In the context of a multi-criteria optimization problem (i.e., minimizing the classification error rate while reducing the number of features) such as the one discussed in this research, the literature suggests that evolutionary algorithms offer good results. Moreover, Particle Swarm Optimization (PSO) has been little explored in the field of classification of high-frequency ultrasonic signals. Hence, a binary-coded Particle Swarm Optimization (BPSO) technique is investigated for feature subset selection and to optimize the classification error rate. In the proposed method, the population data is used as input to an Artificial Neural Network (ANN) based classification system to obtain the error rate, as the ANN serves as the evaluator of the PSO fitness function.
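A BPSO feature-selection loop of this kind can be sketched as follows. For self-containment, a nearest-centroid classifier stands in for the ANN evaluator, the dataset is synthetic (two informative features plus four noise features), and the fitness combines error rate with a small feature-count penalty; all of these are assumptions, not the paper's setup.

```python
import math
import random

random.seed(5)

# toy dataset: 2 informative features + 4 noise features, two classes
def sample(label):
    base = [label + random.gauss(0, 0.5), -label + random.gauss(0, 0.5)]
    return base + [random.gauss(0, 1) for _ in range(4)], label

data = [sample(random.choice([0, 1])) for _ in range(200)]
train, test = data[:150], data[150:]

def error_rate(mask):
    if not any(mask):
        return 1.0
    cols = [j for j in range(6) if mask[j]]
    cent = {}
    for lab in (0, 1):                 # nearest-centroid "classifier"
        rows = [x for x, y in train if y == lab]
        cent[lab] = [sum(r[j] for r in rows) / len(rows) for j in cols]
    wrong = 0
    for x, y in test:
        d = {lab: sum((x[j] - c) ** 2 for j, c in zip(cols, cent[lab]))
             for lab in (0, 1)}
        wrong += (min(d, key=d.get) != y)
    return wrong / len(test)

def fitness(mask):      # minimize error plus a small feature-count penalty
    return error_rate(mask) + 0.01 * sum(mask)

n, dims = 12, 6
pos = [[1] * dims] + [[random.randint(0, 1) for _ in range(dims)]
                      for _ in range(n - 1)]
vel = [[0.0] * dims for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=fitness)[:]
for _ in range(40):
    for i in range(n):
        for d in range(dims):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * random.random() * (gbest[d] - pos[i][d]))
            prob = 1 / (1 + math.exp(-vel[i][d]))    # sigmoid transfer
            pos[i][d] = 1 if random.random() < prob else 0
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest + [gbest], key=fitness)[:]
print(gbest, round(error_rate(gbest), 3))
```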

  19. Taxing Strategies for Carbon Emissions: A Bilevel Optimization Approach

    Directory of Open Access Journals (Sweden)

    Wei Wei

    2014-04-01

    Full Text Available This paper presents a quantitative and computational method to determine the optimal tax rates among generating units. To strike a balance between the reduction of carbon emissions and the profit of energy sectors, the proposed bilevel optimization model can be regarded as a Stackelberg game between the government agency and the generation companies. The upper level, which represents the government agency, aims to limit total carbon emissions within a certain level by setting optimal tax rates among generators according to their emission performances. The lower level, which represents the decision behavior of the grid operator, tries to minimize the total production cost under the tax rates set by the government. The bilevel optimization model is finally reformulated into a mixed integer linear program (MILP), which can be solved by off-the-shelf MILP solvers. Case studies on a 10-unit system as well as a provincial power grid in China demonstrate the validity of the proposed method and its capability in practical applications.
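The Stackelberg structure can be illustrated with a drastically simplified brute-force analog: the lower level dispatches two stylized generators by merit order of tax-inclusive cost, and the upper level searches for the smallest uniform carbon price keeping emissions under a cap. The uniform price, the unit data, and the grid search all replace the paper's per-unit tax rates and MILP reformulation, and every number is an assumption.

```python
# two generators: (marginal cost $/MWh, emission rate tCO2/MWh, capacity MW)
units = [(20.0, 0.9, 60.0),    # cheap but dirty (coal-like)
         (35.0, 0.3, 60.0)]    # expensive but clean (gas-like)
demand, cap = 100.0, 60.0      # load (MWh) and emission cap (tCO2)

def lower_level(tax):
    """Grid operator: dispatch by merit order of cost + tax * emission rate."""
    order = sorted(range(len(units)),
                   key=lambda i: units[i][0] + tax * units[i][1])
    left, dispatch = demand, [0.0] * len(units)
    for i in order:
        dispatch[i] = min(units[i][2], left)
        left -= dispatch[i]
    return dispatch

def emissions(dispatch):
    return sum(d * u[1] for d, u in zip(dispatch, units))

# upper level: smallest uniform tax rate that keeps emissions under the cap,
# anticipating the operator's dispatch response
tax = next(t for t in (0.5 * k for k in range(100))
           if emissions(lower_level(t)) <= cap)
print(tax, lower_level(tax))
```

The tax flips the merit order once the clean unit becomes cheaper per effective MWh, which is exactly the leader-follower mechanism the bilevel model formalizes.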

  20. Reverse convex problems: an approach based on optimality conditions

    OpenAIRE

    Ider Tseveendorj

    2006-01-01

    We present some results concerning reverse convex problems. Global optimality conditions for the problems with a nonsmooth reverse convex constraint are established and convergence of an algorithm in the case of linear program with an additional quadratic reverse convex constraint is studied.